Thursday, June 30, 2011

MAPK: Feedback Amplifier, Part 2


In part 1 of this series, a brief history of negative feedback was given. Here we will look at the advantages and disadvantages of negative feedback, particularly in relation to signal transmission.

Amplifiers

Amplifiers used by electrical engineers are designed to magnify the current or voltage in an electrical circuit. A simple example is the voltage amplifier (i.e. voltage-controlled voltage source), which samples the voltage in one part of a circuit and produces a proportionally larger voltage in another. Of critical importance to an amplifier is not so much the amplification factor itself but how accurate the amplification is. There is, however, no such thing as a perfect amplifier; real amplifiers, electrical or biological, will introduce errors or distortions into the signal. These distortions can be classed into three types:

  1. Frequency distortion
  2. Phase distortion
  3. Harmonic distortion

Frequency distortion is due to the fact that amplifiers do not amplify signals at different frequencies to the same extent. The term bandwidth is used to indicate the range of frequencies over which a given amplifier can faithfully amplify a signal.

The second source of distortion, phase distortion, is due to the fact that the amplifier adds delays to the signal as it operates. The amount of delay is often a function of the signal frequency.

Finally, harmonic distortion is due to the fact that amplifiers do not amplify a signal by a fixed amount. That is, the amplifier has some nonlinear behavior, which generates new frequency components (harmonics) that were not present in the original signal.

In the 1920s, such distortions were a huge problem for the new telecommunications industry, and it was Harold Black's idea of using negative feedback that solved it.

Negative Feedback

In order to understand how negative feedback can improve the performance of a signal amplifier, let us consider a very simple example. The figure below comes from the paper "MAPK Cascades as Feedback Amplifiers" [3].

[Figure: block diagram of a negative feedback amplifier; the input u and the fed-back signal F y combine to give the error e, which the amplifier A magnifies into the output y, subject to a disturbance d.]
Let us consider only the steady-state behavior of the system. The input is given by $u$, the output by $y$, the error by $e$, and the disturbance by $d$. The input is the signal we want to magnify, and the magnified version of the input is the output, $y$. Some of the output is fed back via $F$ to the input, where it is subtracted from the input, $u$. $A$ is the amplifier itself. We will ignore the disturbance $d$ for the moment. The way to read this diagram mathematically is that any arrow leaving a block is the product of the block's gain and the arrow entering the block. For example, consider the output, $y$. This output is the result of the amplifier, $A$, magnifying the error, $e$; that is, $y = A e$. What about $e$? $e$ is the result of subtracting $F y$ from $u$. From these statements we can write the following two equations:

$$ y = A e $$

$$ e = u - F y $$

From these two equations we can eliminate $e$ to find:

$$ y = \frac{A u}{1 + A F} $$

Calling $G = A/(1 + A F)$ the system gain, we have simply $y = G u$. Comparing $G$ with $A$, it should be clear that feedback reduces the gain of the amplifier. Further, if the loop gain $A F$ is large ($A F \gg 1$), then

$$ G \approx \frac{A}{A F} = \frac{1}{F} $$

That is, as the loop gain $A F$ increases, the system behavior becomes more dependent on the feedback loop and less dependent on the amplifier itself. But so what? Three things are apparent from this simple analysis. The first is that any variation in $A$ has almost no effect on the operation of the system, because $G \approx 1/F$ is essentially independent of $A$. From a practical point of view, the manufacturing tolerance of $A$ doesn't have to be so high, which makes it possible to make cheap $A$s. Instead, the designer need only provide a stable feedback mechanism; in electronics this comes in the form of cheap but high-tolerance resistors.

The second advantage of feedback appears when we introduce a disturbance, $d$, into the output: in the presence of feedback, the influence of the disturbance decreases. Finally, and this is the real magic, any nonlinearity present in the amplifier $A$ is eliminated (or at least greatly reduced). This means that our feedback amplifier is very good at faithfully magnifying the input signal, which is exactly what we want from an amplifier (proofs of these assertions can be found in the original papers, e.g. see [3]). In summary, a feedback amplifier provides the following desirable characteristics:

  1. Increased robustness with respect to internal perturbations.
  2. Insulation from external perturbations, resulting in functional modularization.
  3. A linear, graded response over an extended operating range.
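To see point 2 explicitly, suppose, as in the block diagram, that the disturbance $d$ adds directly to the output. Repeating the steady-state calculation gives:

$$ y = A e + d \qquad e = u - F y $$

$$ y = \frac{A u}{1 + A F} + \frac{d}{1 + A F} $$

so the disturbance is attenuated by the factor $1 + A F$: the larger the loop gain, the less the output is corrupted by $d$.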

A word about the sources of the internal variations in $A$. There are two primary sources. The first is manufacturing variability; that is, not every amplifier is exactly the same as it comes off the assembly line. Secondly, when the circuit operates it heats up, and this introduces thermal noise directly into the circuit. Repeated heating and cooling can also cause the amplifier components to slowly change in behavior. All these sources of variation can be greatly reduced by adding a negative feedback loop.
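As a quick sanity check on the algebra, the two closed-loop relations can be handed to Mathematica (the same tool used elsewhere on this blog); here a, f, u and d stand for $A$, $F$, $u$ and $d$:

Solve[{y == a e + d, e == u - f y}, {y, e}]  (* solve the two steady-state relations *)
(* gives y -> (a u + d)/(1 + a f), e -> (u - d f)/(1 + a f) *)

which reproduces both the closed-loop gain $A/(1 + A F)$ and the disturbance attenuation factor $1/(1 + A F)$ derived above.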

What has this got to do with MAPK? The MAPK cascade has many features that are similar to a negative feedback amplifier. The input is a small signal at the receptor. The amplifier part, that is, the three phosphorylation cycles, acts as an amplifier with high gain. A negative feedback loop wraps the entire structure, and finally, in the last stage, the protein ERK2 must diffuse to the nucleus to have any effect; this represents a disturbance in the output signal. The noise in the amplifier part of MAPK can come from a number of sources. First, there may be natural allelic variation in the cascade proteins, but perhaps more significant is the stochastic noise that occurs in transcription and translation, which means that the mean concentrations of the cascade proteins vary over time. Negative feedback will greatly reduce the effect these variations have on the performance of MAPK.

There is more to a negative feedback amplifier than described here, but the most important points have been covered.

References

1. Quantitative analysis of signaling networks. Sauro HM, Kholodenko BN, Prog Biophys Mol Biol. 2004 Sep;86(1):5–43.

2. The Computational Versatility of Proteomic Signaling Networks. Sauro HM. In: Current Proteomics, Vol. 1, Bentham Science Publishers Ltd. (2004), pp. 67–81.

3. MAPK Cascades as Feedback Amplifiers. Sauro HM, Ingalls B.

Sunday, June 26, 2011

MAPK: Feedback Amplifier, Part I

A recent paper by Sturm et al. (2010) reports results that support the hypothesis that the MAPK cascade acts as a negative feedback amplifier.

The systems biology literature is full of reviews and articles about oscillators and bistable systems, and very little else other than Uri Alon and colleagues' refreshingly unique work on feedforward systems. An alien race, upon reading the literature, would most likely believe that the only things biochemical networks can do are oscillate, show bistability, and perhaps display a little ultrasensitivity. This is probably because many of the modelers and theoreticians in systems biology are unaware of the signal processing capabilities offered by the engineering field. For example, an engineer looking at the MAPK cascade would probably immediately think of a negative feedback amplifier. Mention the term negative feedback amplifier to a systems biologist, however, and you're likely to get a blank stare. So what is a negative feedback amplifier? Let's start with some recent history.

Industrial Revolution

Probably the most famous modern device that employed negative feedback was the governor. Thomas Mead in 1787 took out a patent on a device that could regulate the speed of windmill sails. His idea was to measure the speed of the mill by the centrifugal motion of a revolving pendulum and use this to regulate the position of the sail. Very shortly afterwards, in early 1788, James Watt was told of this device in a letter from his partner, Matthew Boulton. Watt recognized the utility of the governor as a device to regulate the new steam engines that were rapidly becoming an important source of power for the industrial revolution. The image below shows an engraving of a governor from an early book entitled "An Elementary Treatise on Steam and the Steam-engine" by Clark and Sewell, published in 1892.

The operation of the governor is simple (see figure below): its purpose is to maintain the speed of a rotating engine at a constant predetermined value in spite of changes in load and steam pressure. The vertical axle of the governor is connected to the rotation of the steam engine. As the steam engine, for one reason or another, speeds up, the rotation increases, causing the centrifugal pendulums to swing out. A linkage transmits this motion to the steam valve in such a manner that the flow of steam is reduced, thus slowing down the engine. If the engine slows down too much, as a result of a sudden load, the flyweights swing back and the steam valve is opened so that the steam engine can accelerate. The governor was a highly successful device, and it is estimated that by 1868, 75,000 governors were in operation (A History of Control Engineering, 1800–1930 by Stuart Bennett, 1979).



Figure: Illustration of a governor from "An Elementary Treatise on Steam and the Steam-engine" by Clark and Sewell, published in 1892.

This description of the governor illustrates one of the basic operational characteristics of negative feedback. The output of the device, in this case the steam engine speed, is "fed back" to control the rate of steam entering the steam engine and thus influence the engine speed.

The Greeks

Although the governor is an example of one of the earliest negative feedback systems in modern times, the concept actually goes back much further in history. There is documentary evidence to show that the ancient Greeks were aware of the concept and used it in a wide variety of ways to control different mechanisms. Probably the most famous of these was the use of floats in water clocks to maintain a steady flow of water which could be used to measure time. The earliest recorded water clock that used negative feedback was described by Ktesibios who probably lived between 285 and 247 BC in Alexandria. Further work was done by Philon and particularly Heron (13 AD) who left us with an extensive book (Pneumatica) detailing many amusing water devices that employed negative feedback.


Figure: Ktesibios (270 BC) negative feedback valve to regulate water flow. Modified from Stefano Penzier (http://www.dia.uniroma3.it/autom/FdAcomm/Lucidi).

Modern Times

In more recent times negative feedback has been used extensively in the electronics industry to confer, among other things, electrical stability to electronic devices and amplifiers. In fact, without negative feedback considerable swathes of modern technology would not be able to function. The application of negative feedback to modern devices is probably one of the most important innovations of the 20th century. Before continuing, let's make sure we understand what an amplifier is. The purpose of an amplifier is to faithfully scale up the power of a small time-varying electrical signal without adding distortion. This is an important task in both man-made and biological systems as we are often confronted with weak signals that need a boost to make them useful.

The story of how negative feedback came to be used in amplifiers begins in 1921, when Harold Black, who had recently graduated from Worcester Polytechnic Institute in electrical engineering, took a job at Western Electric, the forerunner of Bell Labs. One of the challenges facing engineers in the US in the 1920s was how to design amplifiers that didn't distort the signal over long distances. In the early days, engineers would install what were called repeaters. Such repeaters would boost the signal but would also add their own distortions. By the time the signal had traveled 4000 miles across the country, with repeaters less than 1000 miles apart, the signal at the end was barely intelligible. These difficulties were ultimately overcome by the introduction of the feedback amplifier, designed in 1927 by Black (Mindell, 2000). The basic idea was to introduce a negative feedback loop from the output of the amplifier to its input. At first sight, the addition of negative feedback to an amplifier might seem counterproductive. Indeed, Black had to contend with just such opinions when introducing the concept: his director at Western Electric dissuaded him from following up on the idea, and his patent applications were at first dismissed. In his own words, 'our patent application was treated in the same manner as one for a perpetual motion machine' (Black, 1977). While Black's detractors were correct in insisting that the negative feedback would reduce the gain of the amplifier, they failed to appreciate his key insight: that the reduction in gain is accompanied by increased robustness of the amplifier and improved fidelity of signal transfer. Since then, negative feedback has been widely used in the electrical industry (see opamps).

Biology

The reader may be wondering what on earth this story has to do with MAPK cascades. Simple: we have a small hormonal signal coming in via receptors, but we need a larger signal inside the cell, without added distortion and unwanted noise. You may be asking, where do the distortion and noise come from? The most obvious source of noise is the natural variability in protein levels due to stochastic events at the transcription and translation layers, and the source of the distortion is the nonlinear behavior of protein cascades. This combination is a recipe for disaster, and to me at least it isn't a surprise that evolution hit on the idea of wrapping the MAPK cascade in a negative feedback loop.

To understand the role of negative feedback in a system such as MAPK, we need to examine more closely the advantages (and sometimes disadvantages) of negative feedback.

Go to Part II!

Black, H.S., 1977. Inventing the negative feedback amplifier. IEEE Spectrum 14, 55–60.

Mindell, D. (2000). Opening Black's box. Technology and Culture, 14, 405–434.

Sturm OE, Orton R, Grindlay J, Birtwistle M, Vyshemirsky V, Gilbert D, Calder M, Pitt A, Kholodenko B, Kolch W (2010) The mammalian MAPK/ERK pathway exhibits properties of a negative feedback amplifier. Sci Signal 3: ra90.

 


Thursday, June 16, 2011

Biochemical Control Analysis 101: Part 2


What is a control coefficient?

First we should indicate what we mean by control. The term control has a special meaning in biochemical control analysis. Control refers to the ability of a system parameter to affect a system variable. For example, changes to the external glucose concentration in a microbial culture will most likely change the culture's growth rate. The concentration of glucose therefore has 'control' over the growth rate. Engineering an enzyme in a pathway so that its kcat is larger will result in changes to the pathway's flux and metabolite concentrations. Changes to the promoter consensus sequence of a particular gene will result in changes to the concentration of the expressed protein and any other variables that depend on that protein. It is possible to quantify this concept of control by either measuring or computing control coefficients.

Control coefficients come in two flavors, flux control and concentration control. First consider flux control:

A flux control coefficient measures the relative steady-state change in the pathway flux (J) in response to a relative change in enzyme activity, e_i, often through changes in enzyme concentration. This definition assumes that the enzyme concentration is under the direct control of the experimenter and as such can be classed as a parameter. It also assumes that a change in the level of one enzyme does not change the levels of the other enzymes. This will not always be true, in which case the control coefficient can be generalized to be independent of any particular parameter. For now, however, with some loss of generality, we assume that we can change the enzyme concentration without affecting other enzymes. We define the flux control coefficient as follows:

  \[C^J_{e_i} = \frac{dJ}{de_i} \frac{e_i}{J} = \frac{d\ln J}{d\ln e_i}\]

The more generalized definition in terms of changes to the local reaction rate v_i of step i is given by:

  \[C^J_{v_i} = \left( \frac{dJ}{dp} \frac{p}{J} \right) \bigg/ \left(  \frac{dv_i}{dp}\frac{p}{v_i} \right) = \frac{d\ln J}{d\ln v_i}\]

so that the definition is now independent of the particular parameter, p, used to perturb the reaction step. A very important property of all control coefficients is that they can only be measured in the intact system. It is not possible to isolate an enzyme and try to measure its control coefficient. The effect that a particular parameter, for example e_i, has on a flux (or concentration) is a system property and depends on all the enzymes in the pathway. This is why it is not possible, or at least very difficult, to judge the importance of an enzyme by just looking at the enzyme alone.

A concentration control coefficient measures the relative steady-state change in a species concentration (S) in response to a relative change in enzyme activity, e_i, often through changes in enzyme concentration. This definition assumes that the enzyme concentration is under the direct control of the experimenter and as such can be classed as a parameter. It is important to note that concentration control coefficients are properties of the intact system and cannot be measured from the isolated reaction step or enzyme. The same comments that were made with respect to the flux control coefficient apply here.

  \[C^S_{e_i} = \frac{dS}{de_i} \frac{e_i}{S} = \frac{d\ln S}{d\ln e_i}\]

Or in generalized form:

  \[C^S_{v_i} = \frac{dS}{dv_i} \frac{v_i}{S} = \frac{d\ln S}{d\ln v_i}\]

What do the control coefficients actually mean, and why are they defined the way they are? The first thing to note is that control coefficients are dimensionless, because we scale the derivative to eliminate units. The second important point is that, as a result of the scaling, a control coefficient is a ratio of relative changes. This means that, roughly speaking, a control coefficient measures the effect a certain percentage change in enzyme concentration has on the percentage change in flux or metabolite concentration. For example, a flux control coefficient of 0.4 means that a 1% increase in the enzyme concentration results in a 0.4% change in the steady-state flux.

  \[C^J_{e_i} = \left. \frac{\delta J}{J}  \right/ \frac{\delta e_i}{e_i}  \approx \frac{J\%}{ e_i\% }\]
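To make these definitions concrete, here is a minimal sketch in Mathematica (the same tool used in the ODE post below) that computes the flux control coefficients of a hypothetical two-step pathway, Xo -> S -> (sink), with simple mass-action rate laws; the rate expressions and all names here are illustrative assumptions, not taken from any particular model:

v1 = e1 (k1 xo - k2 s);   (* rate of step 1, proportional to enzyme activity e1 *)
v2 = e2 k3 s;             (* rate of step 2, proportional to enzyme activity e2 *)
ss = Solve[v1 == v2, s][[1]];    (* steady-state concentration of S *)
j = v2 /. ss;                    (* steady-state flux J *)
cJe1 = Simplify[D[j, e1] e1/j]   (* gives e2 k3/(e1 k2 + e2 k3) *)
cJe2 = Simplify[D[j, e2] e2/j]   (* gives e1 k2/(e1 k2 + e2 k3) *)

Note that each coefficient depends on both enzymes, illustrating the point above that control coefficients are system properties, and that cJe1 + cJe2 = 1, an instance of the flux summation theorem.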

Friday, June 10, 2011

Sun Dogs in Seattle


Here is a photograph I took of a small sun dog that I spotted from my house in Seattle on the evening of May 11, 2011. In the image you will see a 'mock' sun to the left of the real sun, at the same height above the horizon. You'll also see a faint circular arc rising from the sun dog and, in theory, reaching a second sun dog to the right of the sun (not visible here, however). Not a terribly spectacular one, but interesting nevertheless. The effect is caused by high-altitude ice crystals that drift to lower levels and become vertically aligned.

Figure: Small sun dog seen in Seattle on the evening of May 11, 2011.

Tuesday, June 7, 2011

Metabolic Control Analysis 101: Part 1


What is Metabolic Control Analysis?

Broadly speaking, Metabolic Control Analysis is a mathematical approach that allows us to understand and quantify how perturbations to a biochemical pathway propagate out from the disturbance to the rest of the system.

Metabolic Control Analysis (MCA) was born in an era when work on metabolism was in full swing. Since then, protein signaling and gene regulatory networks have largely taken the limelight, although with renewed interest in topics such as biofuels, metabolism may be making a comeback. Since the development of MCA, we have learned that it is much more general and applies to any kind of biochemical network, be it metabolic, signaling, or gene regulatory. Therefore the first thing I'm going to do is stop calling it Metabolic Control Analysis. Instead, to indicate its generality, I will call it Biochemical Control Analysis (BCA).

BCA quantifies how variables, such as fluxes and species concentrations, depend on the system's parameters. In particular, it is able to describe how network-dependent properties, called control coefficients, depend on local properties called elasticities. This will be the first in a series of short articles on the basic ideas embodied in BCA.

In this first article I will clarify the meaning of a couple of words used in BCA:

Variable

A variable, also called a dependent variable or state variable, is a measurable characteristic of a system that can only be changed by an observer through changes to a suitable parameter. Variables are by definition determined by the system. Examples of possible variables include the pathway flux and species concentrations, such as metabolites or proteins.

Parameter

A parameter is a measurable characteristic of a system that can in principle be controlled by the observer. Parameters are also often called independent variables. By definition, a parameter cannot be changed by the system itself; if it can, then it is a variable. Examples of parameters include external concentrations, such as glucose fed to a culture or externally added drug compounds; internal parameters, such as kinetic constants; and, depending on the system under study, enzyme concentrations.

Control

The term control has a special meaning in control analysis. Control refers to the ability of a system parameter to affect a system variable. For example, changes to the external glucose concentration in a microbial culture will most likely change the culture's growth rate. The concentration of glucose therefore has 'control' over the growth rate. Engineering an enzyme in a pathway so that its kcat is larger will result in changes to the pathway's flux and metabolite concentrations. Changes to the promoter consensus sequence of a particular gene will result in changes to the concentration of the expressed protein and any other variables that depend on that protein. It is possible to quantify control by either measuring or computing control coefficients.

Regulation

Regulation may be defined as the capacity to achieve control. Such control may involve homeostasis or the ability to move from one state to another in a particular manner.

Flux

The flux is the steady state flow of mass through a pathway.

Wednesday, June 1, 2011

Solving ODEs using Mathematica



I have found that the documentation that comes with Mathematica is not always very helpful, at least when you want to know the most common operations. One of those is solving systems of first-order ordinary differential equations (ODEs) with initial conditions. I don't know why, but I always spend a little time reading the docs to figure out how to do this when I need it. To help me, and perhaps others, I give the Mathematica recipe here.

Here are two examples: how to solve a single ODE, and a simple system of two ODEs. The function we will use is DSolve[]. This function takes three arguments:

DSolve[{odes, initial conditions}, {variable names}, independent variable]

Note the curly brackets; they must be present. For example, to solve the single ODE below with initial condition y(0) = 1,

  \[\frac{dy}{dt} = y (k_1 - k_2) - k_2\]

we need to input the following Mathematica command:

DSolve[{y'[t] == y[t] (k1 - k2) - k2, y[0] == 1}, y[t], t]

Note that the dependent variable y is denoted by y[t]. The derivative of the dependent variable is denoted by y'[t]. Also note how the initial condition is entered: y[0] == 1. Don't forget to use == instead of = in the syntax. The solution yields:

  \[\left\{\left\{y[t]\to \frac{e^{(k_1-k_2) t} k_1+k_2-2 e^{(k_1-k_2) t} k_2}{k_1-k_2}\right\}\right\}\]

To solve a set of first-order differential equations with initial conditions, for example the two equations:

  \[\frac{dA}{dt} = v_o - k_1 A + k_2 B\]

  \[\frac{dB}{dt} = k_1 A - k_2 B - k_3 B\]

with initial conditions A(0) = 0 and B(0) = 0, we write the following Mathematica code, where we enter the multiple equations inside the curly brackets, separated by commas. Also note that because there is more than one dependent variable, we must put the dependent variables in curly brackets as well.

DSolve[{A'[t] == vo - k1 A[t] + k2 B[t], B'[t] == k1 A[t] - k2 B[t] - k3 B[t], A[0] == 0, B[0] == 0}, {A[t], B[t]}, t]

This will yield a closed-form, though rather lengthy, solution. For the special case k2 = 0 it reduces to:

  \[\left\{\left\{A[t]\to \frac{v_o-e^{-k_1 t} v_o}{k_1},B[t]\to \frac{\left(k_1-e^{-k_3 t} k_1+\left(-1+e^{-k_1 t}\right) k_3\right) v_o}{(k_1-k_3) k_3}\right\}\right\}\]
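DSolve returns symbolic solutions. When you only need numbers, or when no closed form exists, NDSolve uses the same bracket conventions but requires numerical parameter values and a range for the independent variable. Here is a minimal sketch of the same two-variable system, with illustrative parameter values of my own choosing:

sol = NDSolve[{A'[t] == 1 - 0.5 A[t] + 0.1 B[t], B'[t] == 0.5 A[t] - 0.1 B[t] - 0.2 B[t], A[0] == 0, B[0] == 0}, {A, B}, {t, 0, 20}];  (* vo = 1, k1 = 0.5, k2 = 0.1, k3 = 0.2 *)
Plot[Evaluate[{A[t], B[t]} /. sol], {t, 0, 20}]  (* plot both concentrations over time *)

The result is a list of interpolating functions, which is why the solution is substituted into A[t] and B[t] before plotting.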