# Mathematical Trends in Reaction Network Theory in Copenhagen /3

Alan Rendall, Dynamics of phosphorylation cycles. A protein is a chain of amino acids. Often other chemical groups are attached to the amino acids, for example a phosphate group attached to serine etc. Phosphorylation and dephosphorylation then happen, catalysed by enzymes (kinases and phosphatases). There emerge interacting networks of phosphoproteins, which allow storage and transfer of information. There is a cascade of phosphorylated proteins, each acting as an enzyme for the next step, as studied by Huang and Ferrell, 1996 (with mass-action kinetics for the explicit enzymes included). For Michaelis–Menten one needs a clear distinction between enzyme and substrate, but here that is not possible; moreover, it can happen that one enzyme is not specific, i.e. it catalyses two reactions (sequestration). Moreover, the cascade can sit inside feedback loops, which can lead to oscillations (Kholodenko 2000). Originally no interesting dynamics was expected without feedback, but in 2008 Wang & Sontag proved multistationarity in a model with a dual futile cycle, and similarly already with Michaelis–Menten kinetics one can obtain bistability for certain parameter values (Ortega et al. 2006). Baez asks the question everybody wanted to ask: what’s a futile cycle? Something like X -> X + phosphate -> X + 2 phosphates -> X, which uses energy but returns to the initial state, so it seems pointless. A quite detailed list of more and more mathematical results on the existence of periodic solutions in the cascade follows.
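As a toy picture of one stage of such a cascade, here is a minimal sketch (not the Huang–Ferrell model itself, and all parameter values are invented): a single futile cycle X ⇌ Xp, with Michaelis–Menten rates for the kinase (forward) and the phosphatase (backward), integrated by forward Euler until it relaxes to its steady state.

```python
# A single futile cycle X <-> Xp with Michaelis-Menten kinase/phosphatase
# rates; parameters are illustrative, not from any paper.
def futile_cycle(xp0=0.0, total=1.0, Vk=1.0, Kk=0.3, Vp=0.7, Kp=0.3,
                 dt=1e-3, steps=200_000):
    xp = xp0                                   # phosphorylated fraction
    for _ in range(steps):
        x = total - xp                         # unphosphorylated fraction
        # phosphorylation minus dephosphorylation
        dxp = Vk * x / (Kk + x) - Vp * xp / (Kp + xp)
        xp += dt * dxp
    return xp

xp_star = futile_cycle()   # steady-state phosphorylated fraction
```

At the steady state the two Michaelis–Menten fluxes balance; with these (made-up) parameters the cycle settles at a phosphorylated fraction of about 0.71. Bistability, as in Ortega et al., needs the dual (two-site) version of this cycle.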

Manoj Gopalkrishnan, Statistical inference with a chemical soup. In the same spirit as Doty: how would we design a network to do something interesting? Website: Molecular programming project. The idea is to find a model of computation native to reaction networks, via Boolean circuits (Winfree, Qian) etc. His proposal is to use statistical inference, in particular log-linear models, “arguably the most popular and important statistical models for the analysis of categorical data” (Fienberg et al., Maximum likelihood… log-linear models). There is a design matrix A that contains some “stoichiometric coefficients” relating the probability of outputs to certain unknown parameters: one asks for $Pr[X | \Theta] = \Theta^A$ (in compressed CN notation). Given several trials, one can build the maximum likelihood estimator $\Theta^\ast = \mathrm{argmax}_\Theta Pr[x|\Theta]$ (x being the frequency of X in a series of trials), and a maximum likelihood distribution for the probability of certain parameters. Now he has what looks like a very nice result: if you take a design matrix A and apply mass-action kinetics etc., the maximum likelihood estimator is the equilibrium point of the corresponding reaction network. He also claims that “MAK maximises entropy”. Refs: Napp & Adams, Message passing inference with chemical reaction networks; Pachter & Sturmfels, Algebraic Statistics for Computational Biology; Soloveichik, Seelig & Winfree, DNA as a universal substrate for chemical kinetics.
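To see the statistical side of this (a generic illustration, not Gopalkrishnan's reaction-network construction, and with made-up counts): the simplest log-linear model is the independence model for a 2×2 contingency table, $p_{ij} \propto e^{a_i + b_j}$, whose MLE can be found by gradient ascent on the log-likelihood. Gopalkrishnan's result says a suitably designed mass-action network relaxes to this same MLE.

```python
import math

# MLE for a 2x2 independence log-linear model p_ij ∝ exp(a_i + b_j),
# fitted by gradient ascent; counts and learning rate are invented.
counts = [[10, 20], [30, 40]]
n = sum(map(sum, counts))
a = [0.0, 0.0]                 # row parameters
b = [0.0, 0.0]                 # column parameters
p = []

for _ in range(10_000):
    Z = sum(math.exp(a[i] + b[j]) for i in range(2) for j in range(2))
    p = [[math.exp(a[i] + b[j]) / Z for j in range(2)] for i in range(2)]
    for i in range(2):         # gradient = observed minus expected counts
        a[i] += 0.005 * (sum(counts[i]) - n * sum(p[i]))
    for j in range(2):
        col = sum(counts[i][j] for i in range(2))
        b[j] += 0.005 * (col - n * sum(p[i][j] for i in range(2)))

# For the independence model the MLE is the product of empirical marginals,
# e.g. p[0][0] converges to (30/100) * (40/100) = 0.12
```

The closed-form check (product of the row and column marginals) is exactly what makes this model a convenient sanity test.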

David Anderson, Stochastic models of biochemical reaction systems and absolute concentration robustness. He talks about a subclass of deficiency-one models, called ACR (absolute concentration robustness) models (Shinar and Feinberg, Structural sources of robustness in biochemical reaction networks, Science 2010; remarkably for Science, the main result is a theorem!). Consider

$A+B \stackrel{\alpha}{\to} 2B, B \stackrel{\beta}{\to} A$

Fixed points are: $\bar{x}_A = \beta/\alpha$, $\bar{x}_B = M-\beta/\alpha$, where $M$ is a conserved quantity. Interestingly, the concentration $\bar{x}_A$ is robust in the sense that it does not depend on the conserved quantity (while $\bar{x}_B$ does). Then there is a Theorem (Shinar and Feinberg): consider a deterministic mass-action reaction system that has a deficiency of one, admits a positive steady state and has two nonterminal complexes that differ only in species S; then the system has absolute concentration robustness in S. “Differing only in species S” means the two complexes differ by a multiple of S: e.g. $A$ and $A+B$ differ only in $B$, and so do $A$ and $A+2B$. To prove it, let’s write

$\dot{x} = YA_k \Psi(x)$

where $Y$ is the complex-to-species matrix and $A_k$ the so-called Kirchhoff matrix (a Laplacian) containing the rates, and $\Psi(x)$ contains the monomials corresponding to complexes. We have

$\Psi(\bar{x})= \sum_i c_i v_i, \qquad v_i \in \ker{YA_k}$

Now there must be at least one vector $v_i$ with support in the non-terminal complexes, and if one can write the solution in terms of its entries, one is done. But here I didn’t really see how the deficiency steps in.
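The $\dot{x} = YA_k\Psi(x)$ bookkeeping can be made concrete on the example above (a sketch with invented rate constants; complexes ordered $A+B$, $2B$, $B$, $A$). Integrating from different conserved totals $M$ exhibits the ACR phenomenon: $x_A$ always relaxes to $\beta/\alpha$.

```python
# The network A+B -> 2B (rate alpha), B -> A (rate beta) written as
# dot{x} = Y A_k Psi(x); parameter values are made up for illustration.
alpha, beta = 2.0, 1.0
Y = [[1, 0, 0, 1],           # species A in complexes (A+B, 2B, B, A)
     [1, 2, 1, 0]]           # species B
Ak = [[-alpha, 0, 0, 0],     # Kirchhoff (Laplacian) matrix:
      [ alpha, 0, 0, 0],     # A+B -> 2B moves mass from complex 1 to 2,
      [0, 0, -beta, 0],      # B -> A from complex 3 to 4
      [0, 0,  beta, 0]]

def steady_A(M, dt=1e-3, steps=100_000):
    xA, xB = M / 2, M / 2                   # any split with xA + xB = M
    for _ in range(steps):
        psi = [xA * xB, xB ** 2, xB, xA]    # monomials of the complexes
        v = [sum(Ak[i][j] * psi[j] for j in range(4)) for i in range(4)]
        xA += dt * sum(Y[0][i] * v[i] for i in range(4))
        xB += dt * sum(Y[1][i] * v[i] for i in range(4))
    return xA

# For any total M > beta/alpha the steady value of xA is beta/alpha = 0.5
```

The point of the check is that `steady_A(2.0)` and `steady_A(4.0)` give the same answer: only $\bar{x}_B$ remembers the conserved quantity.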

Now, at the stochastic level they started with a conjecture: what is the marginal distribution of a robust species? The conjecture was that it should be Poissonian, but the simplest example proved it wrong. Instead they came up with a Theorem: consider a CN satisfying deficiency one, etc.; then with probability one the species will go extinct (Anderson, Enciso et al.). But they observed that such systems do have some quasi-stationary Poisson behavior, and they are turning this into a Theorem (Anderson, Cappelletti, Kurtz, being written): time averages of some functions look like Poisson averages. Here the story is somehow reminiscent of ageing and glassy behavior. A simple example of a system that has a unique stable fixed point for the ODE, but no stationary distribution, is (Ankit Gupta) $\to X_1, \to X_2, X_1+X_2 \to$.
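The extinction result is easy to see in a small simulation (a sketch with invented unit rates and a handful of molecules): a Gillespie simulation of the ACR example $A+B \to 2B$, $B \to A$ eventually hits the absorbing state with no $B$ left, after which nothing can fire.

```python
import random

# Gillespie simulation of A+B -> 2B (rate 1), B -> A (rate 1) with a
# small conserved total; the stochastic model sends B extinct a.s.
random.seed(0)
nA, nB = 2, 3                # total of 5 molecules, conserved
t = 0.0
for _ in range(100_000):
    r1 = nA * nB             # propensity of A+B -> 2B
    r2 = nB                  # propensity of B -> A
    total = r1 + r2
    if total == 0:           # nB == 0: absorbing state, B is extinct
        break
    t += random.expovariate(total)       # waiting time to next event
    if random.random() < r1 / total:
        nA, nB = nA - 1, nB + 1
    else:
        nA, nB = nA + 1, nB - 1
```

Before absorption the trajectory hovers around the quasi-stationary regime, which is where the Poisson-like time averages of the Anderson–Cappelletti–Kurtz result live; the extinction time grows rapidly with the total molecule count.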

Mustafa Khammash, Real-time control of gene expression. A control-theory perspective on gene expression, based on feedback control. He showed the simplest feedback control algorithm, with $e(t) = r - y(t)$ being the error, $y(t)$ the controlled variable and $r$ the desired value (I didn’t have time to copy the equation). Basically you want to correct $y(t)$ based on some measurement; there will be an error, and a control system adapts the response of the system. Similar processes happen in the body, for example maintaining the calcium concentration in blood, with input from bone resorption and intestinal absorption, and output via calcium clearance. Similar phenomena appear in balancing. Integral feedback allows the system to reach a steady state that is independent of the external conditions and has zero error, while proportional feedback leaves a steady state that depends on the external conditions. He then applies these ideas to the Chemical Master Equation, but since controlling the full distribution is hard, one would want to control the moments. The problem is that moment closure gives spectacular problems (as mentioned before). The control problem is: augment the stochastic reaction network so as to obtain a given average for the species in the long-time limit.
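The contrast between integral and proportional feedback can be sketched on a first-order toy plant (all gains, the disturbance, and the plant itself are invented for the example): with an unknown disturbance $d$, integral control drives $y$ to the reference exactly, while proportional control leaves an offset that depends on $d$.

```python
# Toy plant dy/dt = -y + u + d with unknown constant disturbance d.
def simulate(controller, r=1.0, d=0.5, dt=1e-3, steps=200_000):
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = r - y                  # error signal
        integ += dt * e            # running integral of the error
        u = controller(e, integ)   # control input
        y += dt * (-y + u + d)
    return y

y_int = simulate(lambda e, integ: 2.0 * integ)   # integral feedback
y_prop = simulate(lambda e, integ: 2.0 * e)      # proportional feedback
```

Here `y_int` ends at the reference 1.0 regardless of `d`, while `y_prop` settles at $(k_p r + d)/(1 + k_p)$, which moves whenever the disturbance does; this is the zero-error property of integral action mentioned in the talk.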
