Sunday, December 18, 2011

It's Christmas!

December 18, 2011 10:37 am

It's Christmas, term is winding down at the University, the students are going home, and that means spare time to do something other than official work. Here is a small Windows app I wrote that simulates a simple enzyme mechanism. The point of the simulation is to illustrate when the quasi-steady-state assumption is reasonable and when it is not. In the simulation you can change the various kinetic binding and unbinding constants and also, by way of sliders, the initial total enzyme and substrate concentrations. The point to observe is that when the total substrate is low compared to the total enzyme, the steady-state assumption may no longer be reasonable and the derivation of the classic Briggs-Haldane equation (often incorrectly referred to as the Michaelis-Menten equation) is no longer valid - it also depends on the Km of the enzyme (see the Quasi-Steady-State Index below). The first plot below shows a case where the steady-state assumption does not hold. Over the given timescale the enzyme-substrate complex reaches a peak then quickly declines (steady-state index = 1.43; an index << 1.0 means we can reasonably assume the quasi-steady state, see Murray, 2002).



The second plot shows the case where the total enzyme is now much smaller than the total substrate and this time the concentration of enzyme-substrate complex (shown in blue) remains relatively steady. The time axis in both plots is the same but this time the steady-state index = 0.07 which is much less than 1.0.



The Windows application that made these plots can be downloaded here. This is a single exe, no need to install anything, just run the executable and play with the sliders and settings. I've zipped the exe to save space, so unzip the file to get at the executable.

Quasi-Steady-State Index, if $\epsilon \ll 1$ then the steady state assumption is reasonable:

$$ \epsilon = \frac{E_o}{S_o + K_m} $$

where $E_o$ is the total amount of enzyme and $S_o$ the total amount of substrate.

Murray, J.D. (2002). Mathematical Biology: I. An Introduction (3rd ed.). Springer. ISBN 978-0387952239. Equation 6.18.
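The mechanism behind the plots is easy to reproduce. Here is a minimal Python sketch (not the code behind the Windows app, and with purely illustrative rate constants): it integrates the mass-action equations for E + S ⇌ ES → E + P by simple Euler stepping and evaluates the index $\epsilon = E_o/(S_o + K_m)$ for an enzyme-rich and a substrate-rich case.

```python
# Minimal sketch of the simulated mechanism E + S <=> ES -> E + P.
# Not the code behind the Windows app; all rate constants are illustrative.

def qss_index_and_course(E0, S0, kf=1.0, kr=0.5, kcat=0.1,
                         t_end=50.0, dt=0.001):
    Km = (kr + kcat) / kf      # Briggs-Haldane Km
    eps = E0 / (S0 + Km)       # quasi-steady-state index (Murray, eq. 6.18)
    S, ES = S0, 0.0
    course = []
    steps = int(t_end / dt)
    for i in range(steps):
        E = E0 - ES            # free enzyme from the conservation law
        dS = -kf * E * S + kr * ES
        dES = kf * E * S - (kr + kcat) * ES
        S += dS * dt
        ES += dES * dt
        course.append((i * dt, ES))
    return eps, course

# Enzyme comparable to substrate: index near 1, so the QSSA is suspect.
eps_high, _ = qss_index_and_course(E0=1.0, S0=0.5)
# Enzyme much smaller than substrate: index << 1, the QSSA is reasonable.
eps_low, _ = qss_index_and_course(E0=0.01, S0=10.0)
print(eps_high, eps_low)
```

Plotting the ES column of `course` reproduces the qualitative behavior of the two plots: a transient peak in the first case, a long flat plateau in the second.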

Tuesday, September 20, 2011

Let's Build a Compiler

September 20, 2011 11:01 am

I decided to LaTeX-ify Jack Crenshaw's 'Let's Build a Compiler' series. Between 1988 and 1995 Jack Crenshaw wrote a sixteen-part series on how to build a compiler. It was a non-technical introduction (no Dragon Book necessary) and appears to have encouraged many people to try their hand at developing a compiler.

The entire series no longer seems to be available on the original web site, but I tracked down a Word doc that I used as the basis for this LaTeX version. You can download a pdf copy here. There may still be some tuning of the formatting to do, but the document is in decent shape. Delphi users will feel at home: the series uses Pascal.

Download pdf

The LaTeX source code is available on GitHub 

For those interested in writing interpreters, I would also highly recommend the book by Kernighan and Pike where the authors introduce yacc, lex and the programming language hoc. This is one of the easiest introductions to writing an interpreter I have come across (apart perhaps from the book by Brown). What is nice about hoc is how the authors introduce the interpreter step by step. Both books can be bought very cheaply on the second-hand market.

Kernighan, Brian W.; Pike, Rob (1984). The Unix Programming Environment. Prentice Hall. ISBN 013937681X.

Brown, P.J. (1992). Writing Interactive Compilers and Interpreters. John Wiley & Sons. ISBN 0471100722.

Wednesday, September 7, 2011

Scientists Patent the Scientific Method

Originally posted by hsauro


There has been a lot of talk in the news about the patent office awarding obvious or frivolous patents. I recently came across this one awarded to Palsson, Covert and Herrgard. I only read the abstract which is given here:

“The present invention provides a method of refining a biosystem reaction network. The method consists of: (a) providing a mathematical representation of a biosystem; (b) reconciling said mathematical representation of said biosystem; (c) determining differences between observed behavior of a biosystem and in silico behavior of said mathematical representation of said biosystem under similar conditions; (d) modifying a structure of said mathematical representation of said biosystem, and (e) determining differences between said observed behavior of said biosystem and in silico behavior of said modified mathematical representation of said biosystem under similar conditions.”

If I am not mistaken, this sounds very much like the scientific method. Does this mean we're going to have to pay these scientists every time we study cellular biology? There might be more to it in the main text, but from the abstract it sure sounds like they've just patented the way we study the world, albeit a specific part of it, in this case biochemical reaction networks.

Sunday, September 4, 2011

Useful tips for fireMonkey and Delphi XE2

Originally posted by hsauro

Here is a list of useful tips for using fireMonkey.

Update (7 Sept 2011): A report from Embarcadero indicates that Anchor support will eventually be provided in FireMonkey.

1. Fonts too fuzzy? You may find that fonts rendered in controls such as TMemo or TEdit are a bit fuzzy, particularly on a white background. To improve font quality, follow a suggestion from Jeremy North:

This could be due to the Direct 2D canvas. Try the GDI+ plus canvas and see if this makes the fonts clearer.

Before Application.Initialize in the project file, add this line:

FMX.Types.GlobalUseDirect2D := False;

You need to put FMX.Types and FMX.Canvas.GDIP in the uses clause as
well.

Works for me. Update 12 Sept: For an even better solution see tip #11.

2. Qualified Font Styles. From Jon Souter: can't get this to compile?

CheckBox1.Font.Style:=[fsBold];
Label1.Font.Style:=[fsBold];

Use this instead (from Jeremy North and Chris Rolliston):

CheckBox1.Font.Style:=[TFontStyle.fsBold];
Label1.Font.Style:=[TFontStyle.fsBold];

Styles must now be qualified.

3. Font Size Differences: In the VCL, font sizes are specified in points, e.g. 10 point, at 72 points per inch. In FireMonkey, font sizes are specified in device-independent pixels (DIPs), which are set at 96 per logical inch. A font set to size 10 in FireMonkey will therefore render slightly smaller than a 10-point font in a VCL application.
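A quick sanity check of this conversion (sketched in Python, since the arithmetic is language-agnostic):

```python
# VCL measures fonts in points (72 per inch); FireMonkey uses
# device-independent pixels (96 per logical inch).
def points_to_dip(points):
    return points * 96.0 / 72.0

def dip_to_points(dip):
    return dip * 72.0 / 96.0

# A VCL 10-point font is about 13.3 DIP; a FireMonkey font set to "10"
# is really 7.5 points, which is why it renders smaller.
print(points_to_dip(10), dip_to_points(10))
```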

4. Mouse Position: In the Windows world we were used to getting the mouse position using GetCursorPos(). For cross-platform code we have to use a different approach. Luckily there is a platform object from which we can obtain the mouse coordinates (from Ralph Wesseling):

uses FMX.Platform;

var
  p: TPointF;
begin
  p := Platform.GetMousePos;
end;

5. Unit Scope Helper: Dr.Bob has released a simple browser tool to help you convert VCL unit names to their equivalent FMX names. http://www.bobswart.nl/blog/

6. Multiple Select: If you select multiple controls on a form, the usual selection cues are not visible. The group of controls is nevertheless selected, and you can move the controls about as a group.

7. OnKeyDown/KeyUp in Forms: Unlike a VCL form, which has over 40 events one can tap into, an FM form has only 9 events exposed. In particular it has no OnKeyDown or OnKeyUp exposed. In addition, FM forms have no KeyPreview. As recently mentioned by Chris Rolliston: “Looking at the source, I’ve found the situation is in effect an enforced KeyPreview=True – the only window from an OS POV is the form itself, which then passes on key events to the controls. While there are no OnKeyDown and OnKeyUp events exposed by TForm, you can just override the form’s KeyDown and KeyUp methods:”

type
  TForm1 = class(TForm)
    Memo1: TMemo;
  private
  public
    procedure KeyDown(var Key: Word; var KeyChar: Char;
      Shift: TShiftState); override;
  end;

//...

procedure TForm1.KeyDown(var Key: Word; var KeyChar: Char;
  Shift: TShiftState);
begin
  if Key = vkEscape then
    Close
  else
    inherited;
end;

Rolliston goes on to suggest that adding KeyPreview to FM forms should be straightforward and supplies the necessary code to accomplish this.

From https://forums.embarcadero.com/thread.jspa?threadID=60054&tstart=0.

8. Where have my Anchors gone? Possibly one of the biggest surprises for many was the absence of the anchor property in FM controls. This has been discussed before, and I refer the reader to Update 2 on this page.

9. Glyphs on buttons. Question (Gaetan Maerten): I have made a simple app with an OK and Cancel button, and now I am trying to find out how I can add the standard OK and Cancel glyphs?

Answer (Chris Rolliston): With the TButton object selected, add a TImage (*not* a TImageControl) to it, assign the image’s Bitmap property, and set the image’s HitTest property to False so that any mouse events float through to the button. You also might want to use a TLabel control for the button’s text instead of using the button’s own Text property, so as to make sure it all aligns nicely.

10. Writing text on a TImage (GIAN 55)

Code to write text on a TImage canvas:


procedure TForm2.Button1Click(Sender: TObject);
var
  LRect: TRectF;
begin
  // Delphi needs a background to write on
  myimage.Bitmap.LoadFromFile(CurrDir + '\bg.bmp');

  LRect := TRectF.Create(0, 0, 300, 300);
  myimage.Bitmap.Canvas.Font.Family := 'Times New Roman';
  myimage.Bitmap.Canvas.Font.Size := 18;
  myimage.Bitmap.Canvas.Fill.Color := claBlack;
  myimage.Bitmap.Canvas.FillText(LRect, 'Delphi XE2 for iOS', True, 255, [],
    TTextAlign.taLeading, TTextAlign.taCenter);
  myimage.Bitmap.BitmapChanged;
end;

11. More on Fuzzy Fonts. Not happy with the fuzzy fonts rendered by fireMonkey? Want to use ClearType? If so, consider using a replacement canvas for fireMonkey by Mattias Andersson. Here are the contents of the readme file:

FMX.Canvas.VPR
==============

Description:

FMX.Canvas.VPR is a new FireMonkey TCanvas implementation that uses the
open source polygon rasterizer VPR for all of its rendering tasks.

How to use:

Add FMX.Canvas.VPR as the first unit in the project file of your
FireMonkey project.

Advantages:

- perfect antialiasing (superior to GDI+ and D2D);
- high performance;
- minimal API dependencies;
- support for gamma correction;
- support for cleartype font rendering;
- can be easily ported to other platforms.

Copyright (C) 2011 by Mattias Andersson

12. FastMM for Delphi XE2. For users of FastMM, Pierre Le Riche has released a version for Delphi XE2. FastMM is a memory manager and debugging tool. For more information consult this blog and this stackoverflow question.

13. Using Lua with Delphi XE2. Lua is a high-performance, lightweight and easy-to-use scripting language that games writers often use as an embedded language, for example to develop a game's AI. Lua4Delphi allows someone to use Lua from within a Delphi application. Delphi3Lua provides full Lua integration.

14. Creating Components at Runtime. Here is a simple example from Peter Soderman for creating a string grid at runtime:

procedure TForm1.Button1Click(Sender: TObject);
var
  p: TPanel;
  sg: TStringGrid;
begin
  p := TPanel.Create(Self);
  sg := TStringGrid.Create(Self);
  sg.Parent := p;
  p.Parent := Self;
  sg.Align := TAlignLayout.alClient;
  p.Align := TAlignLayout.alLeft;
end;

15. Adding a column to a StringGrid. Peter Soderman reports how to add a column to a string grid at runtime, as well as how to set a cell's text:

StringGrid1.AddObject(TStringColumn.Create(Self));

To set the text in a given cell:

StringGrid1.Cells[aCol, aRow] := 'Hello!';

16. How do I set the colour of the text of a TLabel? Set the color of the font in the style editor, but set other attributes in the property editor.

Thursday, June 30, 2011

MAPK: Feedback Amplifier, Part 2

May 30, 2011 2:27 pm

In part 1 of this series, a brief history of negative feedback was given. Here we will look at the advantages and disadvantages of negative feedback, particularly in relation to signal transmission.

Amplifiers

Amplifiers used by electrical engineers are designed to magnify the current or voltage in an electrical circuit. A simple example is the voltage amplifier (i.e. a voltage-controlled voltage source), which samples the voltage in one part of a circuit and produces a proportionally larger voltage in another. Of critical importance to an amplifier is not so much the amplification factor itself but how accurate the amplification is. There is, however, no such thing as a perfect amplifier; real amplifiers, electrical or biological, will introduce errors or distortions into the signal. These distortions can be classed into three types:

  1. Frequency distortion
  2. Phase distortion
  3. Harmonic distortion

Frequency distortion is due to the fact that amplifiers do not amplify signals at different frequencies to the same extent. The term bandwidth is used to indicate the range of frequencies over which a given amplifier can faithfully amplify a signal.

The second source of distortion, phase distortion, is due to the fact that as the amplifier operates it adds delays to the signal. The amount of delay is often a function of the signal frequency.

Finally, harmonic distortion is due to the fact that amplifiers do not amplify a signal by a fixed amount. That is, the amplifier will have some nonlinear behavior, which is often a function of the signal amplitude.

In the 1920s, such distortions were a huge problem to the new telecommunications industry and it was Harold Black's solution to use negative feedback that solved the problem.

Negative Feedback

In order to understand how negative feedback can improve the performance of a signal amplifier, we must consider a very simple example. The figure below comes from the paper: "MAPK Cascades as Feedback Amplifiers"




Let us consider only the steady-state behavior of the system. The input is given by $u$, the output by $y$, the error $e$, and the disturbance by $d$. The input is the signal we want to magnify and the magnified version of the input is the output, $y$. Some of the output we feed back via $F$ to the input, where we subtract it from the input, $u$. $A$ is the amplifier itself. We will ignore the disturbance $d$ for the moment. The way to look at this diagram mathematically is that any arrow coming out of a block is the product of the block and the arrow coming into the block. For example, consider the output, $y$. This output is a result of the amplifier, $A$, magnifying the error, $e$, that is $y = A e$. What about $e$? $e$ is the result of subtracting $F y$ from $u$. From these statements we can write the following two equations:

$$ y = A e $$

$$ e = u - F y $$

From these two equations we can eliminate $e$ to find:

$$ y = \frac{A u}{1 + A F} $$

Calling $G = A/(1 + A F)$ the system gain, we have simply $y = G u$. Comparing $G$ with $A$, it should be clear that the feedback reduces the gain of the amplifier. Further, if the loop gain $A F$ is large ($A F \gg 1$), then

$$ G \approx \frac{A}{A F} = \frac{1}{F} $$

That is, as the loop gain $A F$ increases, the system behavior becomes more dependent on the feedback loop and less dependent on the amplifier itself. But so what? Three things are apparent from this simple analysis. The first is that any variation in $A$ has almost no effect on the operation of the system, because $G$ is nearly independent of $A$. From a practical point of view, the manufacturing tolerance of $A$ doesn't have to be so high, which makes it possible to make cheap $A$s. Instead the designer need only provide a stable feedback mechanism; in electronics this takes the form of cheap but high-tolerance resistors.

The second advantage of feedback: if we introduce a disturbance, $d$, into the output, we find that in the presence of feedback the influence of the disturbance decreases. Finally, and this is the real magic, any nonlinearity present in the amplifier $A$ is eliminated (or at least greatly reduced). This means that our feedback amplifier is very good at faithfully magnifying the input signal, exactly what we want from an amplifier (proofs of these assertions can be found in the original papers, e.g. see [3]). In summary, a feedback amplifier provides the following desirable characteristics:

1. Increased robustness with respect to internal perturbations.
2. Insulation from external perturbations, resulting in functional modularization.
3. A linear graded response over an extended operating range.
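These claims are easy to check numerically. The following Python sketch (illustrative numbers, not taken from the papers) demonstrates two of them: the closed-loop gain $G = A/(1 + A F)$ barely moves when $A$ is doubled, and a saturating amplifier $y = A\tanh(e)$ wrapped in feedback gives an almost perfectly proportional response where the open-loop amplifier is badly distorted.

```python
import math

F = 0.1  # feedback fraction; illustrative value

def closed_loop_gain(A):
    # G = A / (1 + A*F), which approaches 1/F = 10 for large loop gain
    return A / (1.0 + A * F)

# Robustness: double the open-loop gain A and G hardly changes.
g1 = closed_loop_gain(1000.0)   # ~9.90
g2 = closed_loop_gain(2000.0)   # ~9.95

# Linearization: a saturating amplifier y = A*tanh(e) with e = u - F*y.
# Solve the feedback equation by damped fixed-point iteration.
def feedback_output(u, A=1000.0, iters=5000):
    y = 0.0
    for _ in range(iters):
        y = 0.99 * y + 0.01 * A * math.tanh(u - F * y)
    return y

# Doubling the input: open loop is distorted, closed loop is proportional.
open_ratio = math.tanh(2.0) / math.tanh(1.0)                # ~1.27
closed_ratio = feedback_output(2.0) / feedback_output(1.0)  # ~2.00
print(g1, g2, open_ratio, closed_ratio)
```

The linearization works because the high loop gain drives the error $e$ toward zero, keeping the nonlinear element operating in a tiny, nearly linear region around its operating point.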

A word about the sources of the internal variations in $A$. There are two primary sources. The first is manufacturing variability: not every amplifier is exactly the same as it comes off the assembly line. Secondly, when the circuit operates it heats up, and this introduces thermal noise directly into the circuit. Repeated heating and cooling can also cause the behavior of the amplifier components to slowly change. All these sources of variation can be greatly reduced by adding a negative feedback loop.

What has this got to do with MAPK? The MAPK cascade has many features that are similar to a negative feedback amplifier. The input is a small signal at the receptor. The amplifier part, that is the three phosphorylation cycles, acts as an amplifier with high gain. A negative feedback loop wraps the entire structure, and finally, in the last stage, the protein ERK2 must diffuse to the nucleus to make any difference; this represents a disturbance in the output signal. The noise in the amplifier part of MAPK can come from a number of sources. First, there may be natural allelic variation in the cascade proteins, but perhaps more significant is the stochastic noise that occurs in transcription and translation, which means that the mean concentrations of the cascade proteins vary over time. Negative feedback will greatly reduce the effect these variations have on the performance of MAPK.

There is more to a negative feedback amplifier than described here, but the most important points have been covered.

References

1. Quantitative analysis of signaling networks. Sauro HM, Kholodenko BN, Prog Biophys Mol Biol. 2004 Sep;86(1):5–43.

2. The Computational Versatility of Proteomic Signaling Networks, Sauro HM In: Current Proteomics, Vol. 1, Bentham Science Publishers Ltd. (2004) , p. 67-81.

3. MAPK Cascades as Feedback Amplifiers Sauro HM, Ingalls B

Sunday, June 26, 2011

MAPK: Feedback Amplifier, Part I

A recent paper by Sturm et al. reports results that support the hypothesis that the MAPK cascade acts as a negative feedback amplifier.

The systems biology literature is full of reviews and articles about oscillators and bistable systems and very little else other than Uri Alon et al's refreshingly unique work on feedforward systems. An alien race, upon reading the literature, would most likely believe that the only thing biochemical networks can do is oscillate, show bistability and perhaps a little ultrasensitivity. This is probably because many of the modelers and theoreticians in systems biology are unaware of the possible signal processing capabilities offered by the engineering field. For example, an engineer looking at the MAPK cascade would probably immediately think of a negative feedback amplifier. Mention the word negative feedback amplifier to a systems biologist however and you're likely to get a blank stare. So what is a negative feedback amplifier? Let's start with some recent history.

Industrial Revolution

Probably the most famous modern device that employed negative feedback was the governor. Thomas Mead in 1787 took out a patent on a device that could regulate the speed of windmill sails. His idea was to measure the speed of the mill by the centrifugal motion of a revolving pendulum and use this to regulate the position of the sail. Very shortly afterwards, in early 1788, James Watt was told of this device in a letter from his partner, Matthew Boulton. Watt recognized the utility of the governor as a device to regulate the new steam engines that were rapidly becoming an important source of power for the industrial revolution. The image below is an engraving of a governor from an early book entitled "An Elementary Treatise on Steam and the Steam-engine" by Clark and Sewell, published in 1892.

The operation of the governor is simple (see figure below). Its purpose is to maintain the speed of a rotating engine at a constant predetermined value in spite of changes in load and steam pressure. The vertical axle of the governor is connected to the rotation of the steam engine. As the steam engine, for one reason or another, speeds up, the rotation increases, causing the centrifugal pendulums to swing out. A linkage transmits this motion to the steam valve in such a manner that the flow of steam is reduced, thus slowing down the engine. If the engine slows down too much, as a result of a sudden load, the flyweights swing back and the steam valve is opened so that the steam engine can accelerate. The governor was a highly successful device and it is estimated that by 1868, 75,000 governors were in operation (A History of Control Engineering, 1800-1930, Stuart Bennett, 1979).



Figure: Illustration of a governor from "An Elementary Treatise on Steam and the Steam-engine" by Clark and Sewell, published in 1892.

This description of the governor illustrates one of the basic operational characteristics of negative feedback. The output of the device, in this case the steam engine speed, is "fed back" to control the rate of steam entering the steam engine and thus influence the engine speed.

The Greeks

Although the governor is an example of one of the earliest negative feedback systems in modern times, the concept actually goes back much further in history. There is documentary evidence to show that the ancient Greeks were aware of the concept and used it in a wide variety of ways to control different mechanisms. Probably the most famous of these was the use of floats in water clocks to maintain a steady flow of water which could be used to measure time. The earliest recorded water clock that used negative feedback was described by Ktesibios, who probably lived between 285 and 247 BC in Alexandria. Further work was done by Philon and particularly Heron (first century AD), who left us an extensive book (Pneumatica) detailing many amusing water devices that employed negative feedback.


Figure: Ktesibios' (c. 270 BC) negative feedback valve to regulate water flow. Modified from Stefano Penzier (http://www.dia.uniroma3.it/autom/FdAcomm/Lucidi)

Modern Times

In more recent times negative feedback has been used extensively in the electronics industry to confer, among other things, electrical stability to electronic devices and amplifiers. In fact, without negative feedback considerable swathes of modern technology would not be able to function. The application of negative feedback to modern devices is probably one of the most important innovations of the 20th century. Before continuing, let's make sure we understand what an amplifier is. The purpose of an amplifier is to faithfully scale up the power of a small time-varying electrical signal without adding distortion. This is an important task in both man-made and biological systems as we are often confronted with weak signals that need a boost to make them useful.

The story of how negative feedback came to be used in amplifiers begins in 1921, when Harold Black, who had recently graduated from Worcester Polytechnic Institute in electrical engineering, took a job at Western Electric, the forerunner of Bell Labs. One of the challenges facing engineers in the 1920s in the US was how to design amplifiers that didn't distort the signal over long distances. In the early days, engineers would install what were called repeaters. Such repeaters would boost the signal but would also add their own distortions. By the time the signal had traveled 4000 miles across the country with repeaters less than 1000 miles apart, the signal at the end was barely intelligible. These difficulties were ultimately overcome by the introduction of the feedback amplifier, designed in 1927 by Harold S. Black (Mindell, 2000). The basic idea was to introduce a negative feedback loop from the output of the amplifier to its input. At first sight, the addition of negative feedback to an amplifier might seem counterproductive. Indeed, Black had to contend with just such opinions when introducing the concept—his director at Western Electric dissuaded him from following up on the idea, and his patent applications were at first dismissed. In his own words, 'our patent application was treated in the same manner as one for a perpetual motion machine' (Black, 1977). While Black's detractors were correct in insisting that the negative feedback would reduce the gain of the amplifier, they failed to appreciate his key insight—that the reduction in gain is accompanied by increased robustness of the amplifier and improved fidelity of signal transfer. Since then, negative feedback has been widely used in the electrical industry (see opamps).

Biology

The reader may be wondering what on earth this story has got to do with MAPK cascades. Simple: we have a small hormonal signal coming in via receptors but need a larger signal inside the cell, without added distortion and unwanted noise. You may be asking, where do the distortion and noise come from? The most obvious source of noise is the natural variability in protein levels due to stochastic events at the transcription and translation layers, and the source of the distortion is the nonlinear behavior of protein cascades. This combination is a recipe for disaster, and to me at least it isn't a surprise that evolution hit on the idea of wrapping the MAPK cascade in a negative feedback loop.

To understand the role of negative feedback in a system such as MAPK, we need to examine more closely the advantages (and sometimes disadvantages) of negative feedback.

Go to Part II - under construction!

Black, H.S., 1977. Inventing the negative feedback amplifier. IEEE Spectrum 14, 55–60.

Mindell, D. (2000). Opening Black's box. Technology and Culture, 41, 405–434.

Sturm OE , Orton R , Grindlay J , Birtwistle M , Vyshemirsky V , Gilbert D , Calder M , Pitt A , Kholodenko B , Kolch W (2010) The mammalian MAPK/ERK pathway exhibits properties of a negative feedback amplifier. Sci Signal 3: ra90

 


Thursday, June 16, 2011

Biochemical Control Analysis 101: Part 2

Originally posted by hsauro

What is a control coefficient?

First we should indicate what we mean by control. The term control has a special meaning in biochemical control analysis: control refers to the ability of a system parameter to affect a system variable. For example, changes to the external glucose concentration in a microbial culture will most likely change the culture's growth rate. The concentration of glucose therefore has 'control' over the growth rate. Engineering an enzyme in a pathway so that its kcat is larger will result in changes to the pathway's flux and metabolite concentrations. Changes to the promoter consensus sequence of a particular gene will result in changes to the concentration of the expressed protein and any other variables that depend on that protein. It is possible to quantify this concept of control by either measuring or computing control coefficients.

Control coefficients come in two flavors, flux control and concentration control. First consider flux control:

A flux control coefficient measures the relative steady-state change in the pathway flux (J) in response to a relative change in enzyme activity, e_i, often through changes in enzyme concentration. This definition assumes that the enzyme concentration is under the direct control of the experimenter and as such can be classed as a parameter. It also assumes that a change in the level of one enzyme does not change the levels of the other enzymes. This assumption will not always hold, in which case the control coefficient can be generalized to be independent of any particular parameter. For now, however, assume that we can change the enzyme concentration without affecting other enzymes. We define the flux control coefficient as follows:

  \[C^J_{e_i} = \frac{dJ}{de_i} \frac{e_i}{J} = \frac{d\ln J}{d\ln e_i}\]

The more generalized definition in terms of changes to the local reaction rate v_i of step i is given by:

  \[C^J_{v_i} = \left( \frac{dJ}{dp} \frac{p}{J} \right) \bigg/ \left(  \frac{dv_i}{dp}\frac{p}{v_i} \right) = \frac{d\ln J}{d\ln v_i}\]

so that the definition is now independent of the particular parameter, p, used to perturb the reaction step. A very important property of all control coefficients is that they can only be measured in the intact system. It is not possible to isolate an enzyme and try to measure its control coefficient. The effect that a particular parameter, for example e_i, has on a flux (or concentration) is a system property and depends on all the enzymes in the pathway. This is why it is not possible, or at least very difficult, to judge the importance of an enzyme by just looking at the enzyme alone.

A concentration control coefficient measures the relative steady-state change in a species concentration (S) in response to a relative change in enzyme activity, e_i, often through changes in enzyme concentration. This definition again assumes that the enzyme concentration is under the direct control of the experimenter and as such can be classed as a parameter. It is important to note that concentration control coefficients are also properties of the intact system and cannot be measured from the isolated reaction step or enzyme; the same comments made with respect to the flux control coefficient apply here.

  \[C^S_{e_i} = \frac{dS}{de_i} \frac{e_i}{S} = \frac{d\ln S}{d\ln e_i}\]

Or in generalized form:

  \[C^S_{v_i} = \frac{dS}{dv_i} \frac{v_i}{S} = \frac{d\ln S}{d\ln v_i}\]

What do the control coefficients actually mean, and why are they defined the way they are? The first thing to note is that control coefficients are dimensionless; this is because we scale the derivative to eliminate units. The second important point is that, as a result of the scaling, a control coefficient is a ratio of relative changes. This means that, roughly speaking, a control coefficient measures the effect a certain percentage change in enzyme concentration has on the percentage change in flux or metabolite concentration. For example, a flux control coefficient of 0.4 means that a 1% increase in the enzyme concentration results in a 0.4% change in the steady-state flux.

  \[C^J_{e_i} = \left. \frac{\delta J}{J}  \right/ \frac{\delta e_i}{e_i}  \approx \frac{J\%}{ e_i\% }\]
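The definition can be tried out numerically. Here is a short Python sketch for a hypothetical two-step pathway Xo → S → X1 with simple linear kinetics (my own example, not one from the text): the flux control coefficients are estimated by applying a small relative perturbation to each enzyme activity, and the two coefficients add up to one, as the summation theorem of control analysis requires.

```python
# Hypothetical two-step pathway Xo -> S -> X1 with linear kinetics
# v1 = e1*(Xo - S), v2 = e2*S.  At steady state v1 = v2, which gives
# S = e1*Xo/(e1 + e2) and flux J = e1*e2*Xo/(e1 + e2).

Xo = 10.0  # fixed boundary (parameter) concentration

def steady_state_flux(e1, e2):
    S = e1 * Xo / (e1 + e2)
    return e2 * S

def flux_control_coefficient(e1, e2, step, h=1e-6):
    # C^J_e = d ln J / d ln e, estimated by a small relative perturbation
    J = steady_state_flux(e1, e2)
    if step == 1:
        J_perturbed = steady_state_flux(e1 * (1.0 + h), e2)
    else:
        J_perturbed = steady_state_flux(e1, e2 * (1.0 + h))
    return ((J_perturbed - J) / J) / h   # (dJ/J) / (de/e)

C1 = flux_control_coefficient(2.0, 1.0, step=1)  # analytically e2/(e1+e2) = 1/3
C2 = flux_control_coefficient(2.0, 1.0, step=2)  # analytically e1/(e1+e2) = 2/3
print(C1, C2, C1 + C2)
```

Note that each coefficient had to be computed from the full (two-step) system: perturbing e1 shifts the steady-state concentration S, which changes the rate of the second step as well, exactly the system-level behavior described above.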

Friday, June 10, 2011

Sun Dogs in Seattle

May 10, 2011 9:24 pm

Here is a photograph I took of a small sun dog that I spotted from my house in Seattle on the evening of May 11, 2011. In the image you will see a 'mock' sun to the left of the real sun on the same horizontal line. You'll also see a faint circular arc rising from the sun dog and, in theory, reaching a second sun dog to the right of the sun (not visible here). Not a terribly spectacular one, but interesting nevertheless. It is caused by high-altitude ice crystals drifting into lower levels and becoming vertically aligned.

Small sun dog seen in Seattle on the evening of May 11, 2011.

Tuesday, June 7, 2011

Metabolic Control Analysis 101: Part 1

Originally posted by hsauro

What is Metabolic Control Analysis?

Broadly speaking, Metabolic Control Analysis is a mathematical approach that allows us to understand and quantify how perturbations to a biochemical pathway propagate out from the disturbance to the rest of the system.

Metabolic Control Analysis (MCA) was born in an era when work on metabolism was in full swing. Since then, protein signaling and gene regulatory networks have largely taken the limelight, although with renewed interest in topics such as biofuels, metabolism may be making a comeback. Since the development of MCA we have learned that it is much more general and applies to any kind of biochemical network, be it metabolic, signaling or gene regulatory. Therefore the first thing I'm going to do is stop calling it Metabolic Control Analysis. Instead, to indicate its generality, I will call it Biochemical Control Analysis (BCA).

BCA quantifies how variables, such as fluxes and species concentrations, depend on the system’s parameters. In particular, it is able to describe how network-dependent properties, called control coefficients, depend on local properties called elasticities. This will be the first in a series of short articles on the basic ideas embodied in BCA.

In this first article I will clarify the meaning of a couple of words used in BCA:

Variable

A variable, also called a dependent variable or state variable, is a measurable characteristic of a system that can only be changed by an observer through changes to a suitable parameter. Variables are, by definition, determined by the system. Examples of possible variables include the pathway flux and species concentrations, such as metabolites or proteins.

Parameter

A parameter is a measurable characteristic of a system that can in principle be controlled by the observer. Parameters are also often called independent variables. By definition, a parameter cannot be changed by the system itself; if it can, then it is a variable. Examples of parameters include external concentrations, such as glucose fed to a culture or externally added drug compounds; internal parameters, such as kinetic constants; and, depending on the system under study, enzyme concentrations.

Control

The term control has a special meaning in control analysis. Control refers to the ability of a system parameter to affect a system variable. For example, changes to the external glucose concentration in a microbial culture will most likely change the culture’s growth rate; the concentration of glucose therefore has ‘control’ over the growth rate. Engineering an enzyme in a pathway so that its kcat is larger will result in changes to the pathway’s flux and metabolite concentrations. Changes to the promoter consensus sequence of a particular gene will result in changes to the concentration of the expressed protein and any other variables that depend on that protein. It is possible to quantify control by either measuring or computing control coefficients.

Regulation

Regulation may be defined as the capacity to achieve control. Such control may involve homeostasis or the ability to move from one state to another in a particular manner.

Flux

The flux is the steady state flow of mass through a pathway.

Wednesday, June 1, 2011

Solving ODEs using Mathematica

Originally posted by hsauro

I have found that the documentation that comes with Mathematica is not always very helpful, at least not when you want to know the most common operations. One of those is solving systems of first-order ordinary differential equations (ODEs) with initial conditions. I don’t know why, but I always spend a little time reading the docs to figure out how to do this when I need the functionality. To help me, and perhaps others, I give the Mathematica recipe here.

Here are two examples of how to solve a single ODE and a simple system of two ODEs. The function we will use is DSolve[]. This function takes three arguments:

DSolve[{odes, initial conditions}, {variable names}, independent variable]

Note the curly brackets; they must be present. For example, to solve the single ODE below with initial condition y(0) = 1,

  \[\frac{dy}{dt} = y (k_1 - k_2) - k_2\]

we need to input the following Mathematica command:

DSolve[{y'[t] == y[t] (k1 - k2) - k2, y[0] == 1}, y[t], t]

Note that the dependent variable y is denoted by y[t]. The derivative of the dependent variable is denoted by y'[t]. Also note how the initial condition is entered: y[0] == 1. Don’t forget to use == instead of = in the syntax. The solution yields:

  \[\left\{\left\{y[t]\to \frac{e^{(k_1-k_2) t} k_1+k_2-2 e^{(k_1-k_2) t} k_2}{k_1-k_2}\right\}\right\}\]
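
As a sanity check (a sketch in Python rather than Mathematica, with illustrative constants not taken from the post), the closed-form solution can be compared against a simple numerical integration of the same ODE:

```python
import math

# Illustrative rate constants (not from the post).
k1, k2 = 1.0, 0.3

def f(y):
    # The right-hand side: dy/dt = y (k1 - k2) - k2
    return y * (k1 - k2) - k2

def closed_form(t):
    # The DSolve result: (e^{(k1-k2)t} k1 + k2 - 2 e^{(k1-k2)t} k2)/(k1 - k2)
    e = math.exp((k1 - k2) * t)
    return (e * k1 + k2 - 2 * e * k2) / (k1 - k2)

# Classic fixed-step RK4 from t = 0 to t = 2, starting from y(0) = 1.
y, h = 1.0, 0.001
for _ in range(2000):
    a = f(y)
    b = f(y + 0.5 * h * a)
    c = f(y + 0.5 * h * b)
    d = f(y + h * c)
    y += h * (a + 2 * b + 2 * c + d) / 6.0

print(abs(y - closed_form(2.0)) < 1e-8)  # True
```

The numerical and analytical answers agree to well within the integrator’s error, which is reassuring when transcribing long DSolve output by hand.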

To solve a set of first-order differential equations with initial conditions, for example the two equations:

  \[\frac{dA}{dt} = v_o - k_1 A + k_2 B\]

  \[\frac{dB}{dt} = k_1 A - k_2 B - k_3 B\]

with initial conditions A(0) = 0 and B(0) = 0, we write the following Mathematica code, where we enter the multiple equations inside the curly brackets separated by commas. Also note that because there is more than one dependent variable, we must put the variables in curly brackets as well.

DSolve[{A'[t] == vo - k1 A[t] + k2 B[t], B'[t] == k1 A[t] - k2 B[t] - k3 B[t], A[0] == 0, B[0] == 0}, {A[t], B[t]}, t]

This will yield the following (note that the result shown contains no $k_2$ terms; it is in fact the solution for the special case $k_2 = 0$, an irreversible first step, and with $k_2 \neq 0$ the closed-form solution is considerably longer):

  \[\left\{\left\{A[t]\to \frac{v_o-e^{-k_1 t} v_o}{k_1},B[t]\to \frac{\left(k_1-e^{-k_3 t} k_1+\left(-1+e^{-k_1 t}\right) k_3\right) v_o}{(k_1-k_3) k_3}\right\}\right\}\]
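
As with the single equation, the closed forms can be cross-checked numerically. Here is a sketch in Python (illustrative constants, none from the post; since the printed solution contains no $k_2$ terms, the check below sets k2 = 0 so the closed forms apply):

```python
import math

# Illustrative constants (not from the post); k2 = 0 because the
# closed forms being checked contain no k2 terms.
vo, k1, k2, k3 = 1.0, 2.0, 0.0, 0.5

def closed_A(t):
    # A[t] -> (vo - e^{-k1 t} vo)/k1
    return (vo - math.exp(-k1 * t) * vo) / k1

def closed_B(t):
    # B[t] -> ((k1 - e^{-k3 t} k1 + (-1 + e^{-k1 t}) k3) vo)/((k1 - k3) k3)
    return ((k1 - math.exp(-k3 * t) * k1
             + (-1 + math.exp(-k1 * t)) * k3) * vo) / ((k1 - k3) * k3)

def deriv(a, b):
    # dA/dt and dB/dt for the two-variable system.
    return vo - k1 * a + k2 * b, k1 * a - k2 * b - k3 * b

# Fixed-step RK4 from t = 0 to t = 2 with A(0) = B(0) = 0.
a = b = 0.0
h = 0.001
for _ in range(2000):
    da1, db1 = deriv(a, b)
    da2, db2 = deriv(a + 0.5 * h * da1, b + 0.5 * h * db1)
    da3, db3 = deriv(a + 0.5 * h * da2, b + 0.5 * h * db2)
    da4, db4 = deriv(a + h * da3, b + h * db3)
    a += h * (da1 + 2 * da2 + 2 * da3 + da4) / 6.0
    b += h * (db1 + 2 * db2 + 2 * db3 + db4) / 6.0

print(abs(a - closed_A(2.0)) < 1e-8 and abs(b - closed_B(2.0)) < 1e-8)  # True
```

Both trajectories match the analytical expressions, confirming the transcription of the DSolve output for the $k_2 = 0$ case.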