I’ll try to live-blog as much as I can from the workshop we organized here in Luxembourg on Chemical Networks. I won’t be able to cover all the talks, since I need to follow most of them very closely and cannot get distracted.
Luca Peliti, On the value of information in gambling, thermodynamics and population dynamics. Interesting table by Rivoire (2015) of analogies between gambling, thermodynamics and an evolving population: e.g. an “option” corresponds to a “state” of a thermodynamic system, or to a “type” in evolutionary dynamics. Donaldson-Matasci et al. (2010) proposed a model of the history of an evolving population. The growth rate is given by a probabilistic measure involving the average of the fitness, the Shannon entropy, and the relative entropy. Kelly’s result is that the growth rate of a population is increased by the mutual information. The analogy with thermodynamic systems was proposed by Vinkler et al. (2014) and by Rivoire (2015). Very interesting collection of references.
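Kelly’s connection between growth rate and mutual information can be checked numerically. A minimal sketch (my own toy example, not from the talk, with invented numbers): a fair-odds binary race X observed through a noisy channel Y; the optimal log-capital growth rate gained from the side information should equal I(X;Y).

```python
import math

# Toy setup (invented): X uniform on {0,1} with fair odds o = 2,
# Y is X flipped with probability eps.
eps = 0.1
p = {(x, y): 0.5 * ((1 - eps) if x == y else eps)
     for x in (0, 1) for y in (0, 1)}

px = {x: sum(pr for (a, b), pr in p.items() if a == x) for x in (0, 1)}
py = {y: sum(pr for (a, b), pr in p.items() if b == y) for y in (0, 1)}

# Kelly betting on the side information: bet b(x|y) = p(x|y), so the
# growth rate is W = sum_{x,y} p(x,y) log2( p(x|y) * o )
W = sum(pr * math.log2((pr / py[y]) * 2) for (x, y), pr in p.items())

# Mutual information computed directly from the joint distribution
I = sum(pr * math.log2(pr / (px[x] * py[y])) for (x, y), pr in p.items())

print(W, I)  # the two coincide
```

With fair odds the baseline growth rate (no side information) is zero, so here the entire growth rate is the informational gain, matching the statement that the growth rate is increased by the mutual information.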
Hong Qian, The mathematical foundation of a landscape theory for living matter and life. We don’t understand what entropy is. Provocative question: do we understand what energy is? In biology there are all kinds of landscapes: protein folding, cell differentiation, etc. But there is a fundamental conceptual difference between an overdamped non-driven system like a protein and an animate object… (question: are we ready to teach thermodynamics starting from Markov processes, for real?). He gives a very nice basic introduction to thermodynamics based on Markov processes, which I think will be very useful for all. He proposes to replace entropy with free energy in every book on thermodynamics (I would agree with this). He also mentioned the “Langevin analog” of a chemical master equation, which is the Poisson-type time-change representation of Thomas Kurtz. He also has a nice argument that, by splitting a Fokker-Planck operator into its symmetric and antisymmetric parts, the dissipative part is basically a Newtonian dynamics that preserves the free energy (just like symplectic dynamics preserves entropy). So, in a way, irreversibility comes from the deterministic part, which is an idea I’m very sympathetic to.
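Qian’s suggestion that free energy, not entropy, is the natural Lyapunov function is easy to see numerically for master equations: the relative entropy F = sum_i p_i ln(p_i/pi_i) with respect to the stationary distribution decreases monotonically. A minimal sketch on a 3-state chain with made-up rates (my own illustration, not from the talk):

```python
import math

# Invented rate matrix: W[i][j] is the rate of jumping from state i to j.
W = [[0.0, 1.0, 0.5],
     [2.0, 0.0, 1.0],
     [0.3, 0.7, 0.0]]
n, dt = 3, 1e-3

def step(p):
    # one Euler step of the master equation
    q = list(p)
    for i in range(n):
        for j in range(n):
            if i != j:
                flow = dt * W[i][j] * p[i]
                q[i] -= flow
                q[j] += flow
    return q

# stationary distribution: integrate the master equation for a long time
pi = [1.0 / n] * n
for _ in range(100000):
    pi = step(pi)

def free_energy(p):
    # relative entropy to the stationary law
    return sum(p_ * math.log(p_ / pi_) for p_, pi_ in zip(p, pi))

p = [0.9, 0.05, 0.05]
F = [free_energy(p)]
for _ in range(5000):
    p = step(p)
    F.append(free_energy(p))

print(F[0], F[-1])  # F decreases monotonically toward 0
```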
Gheorghe Craciun. Boltzmann, apparently in response to Lorentz, relaxed the assumption of detailed balance to “semi-detailed balance”, which is what we would now call complex balance. Shear and Higgins (1967, 1968), “An analog of the Boltzmann H-theorem for systems of coupled chemical reactions” and “Remarks on Lyapunov function for systems of chemical reactions”; unfortunately they thought this would hold for any reaction network, which is not true. He then traces the history of the global attractor conjecture. Quite humbly, he hardly mentions that he himself proved this outstanding conjecture of the field.
Pierre Gaspard. Remarkable talk, couldn’t take notes. Interesting final considerations on the quasi-species theory of Eigen and Schuster: the maximum genome size should be inversely proportional to the mutation rate.
Mueller. Has a nice notion of flux topes, which are somehow the objects adapted to FBA, in place of flux modes. Furthermore, they consider the thermodynamic constraint. They also proved that, for arbitrary networks and arbitrary kinetics, optimal solutions of enzyme-allocation problems are elementary flux modes. In particular, this explains low-yield pathways (e.g. the Crabtree effect).
Ouldridge. Nice explanation of how the Szilard engine paradox is resolved, not in terms of erasing a bit, but in terms of copying the state of the particle and correlating it to the state of the weight being pulled. I think this way of thinking is much cleaner, and it might also solve the usual problem with Landauer’s interpretation that dissipation occurs only in erasing bits (should ask him a question about it). I think my theory of marginal statistics for Markov processes could be employed to derive results in these networks where, by the copying mechanism, you basically have a Cartesian product of Markov processes (by the way, there exists a notion of Cartesian product of Markov chains, see here).
Smith. Very nice explanation of why CRN dynamics and topology are significantly more complicated than dynamics on graphs. He introduces the concept of “concurrency”, which is basically the idea that while reactions happen between complexes, there is a concurrency of species in complexes. Goes through the Doi-Peliti formalism, moving to a basis of coherent states. A good idea is to apply the Liouvillian to the coherent state and derive conditions on the numbers, which actually give you complex balance. He has an argument that the nonequilibrium work relations are not about work (I’m sympathetic to this). Looks for stationary trajectories (using the Liouvillian as Markovian). Why do we have two fields in a system that initially only had one variable? He argues this is related to the adjoint generator acting on observables. Another consideration: introducing the likelihood ratio to tilt the distribution in importance sampling is a bit akin to introducing the two fields, the left field being like the likelihood ratio and the right one the importance distribution. [Baish 2015] is a transformation of the fields that is appropriate to obtain a new Liouvillian with an extra total time derivative. From that representation one finds the transposed evolution, for the mean number rather than for the probability (we need to talk about duality in graphs…). In path integrals, FDRs are symmetries of the two-field action [Kamenev 2001]. Ah: Jaynes said nutty things but also useful things. Like that.
Supriya. Defines a factorial moment, which is more suitable for CRNs. Basically you consider the descending powers of the species numbers according to their stoichiometry. Then there is an equation connecting moments, a moment hierarchy, and that equation is not that bad when it comes to factorial moments. If you take the time derivative of a factorial moment, apply the Liouvillian, and then commute all of the operators that need to be commuted, you get a lot of other moments. What she argues is that you can get an equation written only in terms of the incidence matrix and the adjacency (Kirchhoff) matrix, as happens for the order-1 moment. The key is that any factorial moment of a Poisson distribution is very simple (is it correct to say that factorial moments are to Poissonians what cumulants are to Gaussians?). Has a very nice equation for the factorial-moment hierarchy. You can basically say that there are no factorial moments of order larger than some number. She finds recursive relations for the ratio of consecutive steady-state factorial moments, which can be solved very nicely by running the recursion “up and down” according to asymptotic expansions. Question: can this recursion be used to observe power-law “critical distributions”, or phase transitions from a localized to an explosive state?
Barbara Bravi. Considers subnetwork dynamics: how to marginalize over the unobservable degrees of freedom. I like this because, while I also consider marginal thermodynamics, there is no dynamics in my thermodynamics! They have developed the right methods to project away the environmental degrees of freedom. I think what would really be interesting here is to redo this approach with the tilted dynamics for the SCGF, to obtain marginals of the currents in subnetworks.
Mast. The origin of life: it’s not clear where the DNA-to-RNA-to-proteins-back-to-DNA cycle emerged. It’s improbable that they all came along together; more probably there was an RNA world before, with RNA completely duplicating itself. It’s also unclear which of metabolism and genetics came first. They study nonequilibrium systems at large, because of chemical nonequilibrium and physical forces like temperature gradients. Thermophoresis: “movement of molecules along a thermal gradient”. In their case: charged molecules in water (DNA, proteins). One of the arguments is that the accumulation due to thermophoresis in certain regions (e.g. in convection cells) might enhance elongation of polymers [Escalation of polymerization in a thermal gradient] (I could consider rewriting my paper on equilibrium in temperature gradients in a truly nonequilibrium situation, where the underlying medium has a Bénard flow, and see theoretically whether I obtain accumulation of particles at a corner). Apparently there is a sort of phase transition to gelation.
Keil. In an equilibrium setting there is a “tyranny of the shortest” among polymers, because the shorter ones are much faster to form. Therefore one has to go far from equilibrium.
Schuster. Excess sugar is converted into fat. Is the reverse pathway possible? How can the Inuit live on a practically carbohydrate-free diet? This is not clear. So far it’s understood that fatty acids can only be respired. Some say “fats burn in the fire of carbohydrates”. But, for example, converting fats into sugar is needed to fuel the brain. Textbook pathways do not cover the complete combinatorial multitude of biochemical conversions. Theoretical methods for studying metabolic networks: dynamic simulation, stability and bifurcation analysis, metabolic control analysis, pathway analysis, flux analysis, optimization and game theory, etc. Pathway analysis: decomposition of the network into the smallest functional entities; it doesn’t need kinetic parameters. Steady-state condition Nv = 0, plus sign restrictions for irreversibility of the fluxes, whose solution set is a convex region (a cone). There might be “internal elementary modes”, by which I think he means what we call futile cycles. Different pathways can have different yields (thus allowing the system to switch from one to the other). He argues that EFM analysis is not scalable to genome-scale metabolic networks, unlike FBA. In several works they argued that there is no gluconeogenesis from fatty acids [Can sugars be produced from fatty acids?].
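The steady-state condition and the “internal elementary modes” can be illustrated on a toy network (my own, not from the talk): an open chain with one reversible internal step has both a through pathway and a futile cycle in the kernel of the stoichiometric matrix N.

```python
# Toy network: R1: -> A, R2: A -> B, R3: B -> A, R4: B ->
# Rows of N are species (A, B); columns are reactions R1..R4.
N = [[1, -1, 1, 0],
     [0, 1, -1, -1]]

def Nv(v):
    # steady-state test: N v should vanish
    return [sum(nij * vj for nij, vj in zip(row, v)) for row in N]

through_mode = [1, 1, 0, 1]   # substrate in -> product out
futile_cycle = [0, 1, 1, 0]   # internal mode: the A <-> B loop

print(Nv(through_mode), Nv(futile_cycle))  # both [0, 0]: valid steady-state fluxes
```

Any nonnegative combination of the two modes is again a steady-state flux distribution, which is the convex-cone picture of pathway analysis.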
Ewald. The idea is: killing pathogenic fungi through self-poisoning. Fungi can cause pretty bad illnesses. There are as many people dying from fungi as from tuberculosis every year.
Yordanov. Steady-state differential dose-response, e.g. by knocking down a metabolite. A natural perturbation is interferon signalling. They use “Laplacian models”: zero- and first-order mass-action kinetics (basically, master equations with only stoichiometric coefficients). Theory: Gunawardena 2012, Mirzaev & Gunawardena 2013, and a lot of references to other applications. He gave a method for compression of Kirchhoff polynomials that allows one to avoid the combinatorial explosion [Yordanov and Stelling 2016].
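For these Laplacian (first-order) models the steady state is given by the Matrix-Tree / King-Altman construction: each state’s weight is its Kirchhoff polynomial, a sum over spanning trees rooted there. A sketch on a 3-state graph with invented rates, checked against direct integration of the master equation (my own example; the compression method of the talk is about taming exactly these polynomials at scale):

```python
# k[i, j] is the invented rate of the transition i -> j on states {1,2,3}.
k = {(1, 2): 2.0, (2, 1): 1.0, (2, 3): 0.5,
     (3, 2): 1.5, (1, 3): 0.3, (3, 1): 0.7}

# Kirchhoff polynomials: for each root, sum over the three spanning
# trees of the triangle, with all edges directed toward the root.
rho = {
    1: k[3,2]*k[2,1] + k[2,1]*k[3,1] + k[2,3]*k[3,1],
    2: k[1,2]*k[3,2] + k[1,3]*k[3,2] + k[3,1]*k[1,2],
    3: k[1,3]*k[2,3] + k[1,2]*k[2,3] + k[2,1]*k[1,3],
}
Z = sum(rho.values())
pi_tree = [rho[i] / Z for i in (1, 2, 3)]

# Cross-check: integrate the master equation to stationarity.
p, dt = [1.0, 0.0, 0.0], 1e-3
for _ in range(100000):
    q = list(p)
    for (i, j), rate in k.items():
        flow = dt * rate * p[i - 1]
        q[i - 1] -= flow
        q[j - 1] += flow
    p = q

print(pi_tree)
print(p)  # agrees with the spanning-tree formula
```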
Poolman. Introduction to the history of the group. Now works on acetogens, a peculiar class of microbes whose metabolism is based on carbon monoxide. This might be interesting for industrial purposes, both because you might want to eliminate a poisonous gas and because it might give some useful product out. Their goal is to provide a hierarchical decomposition of large metabolic networks, by an improved “divide and conquer” algorithm. He considers kernel matrices that are orthogonal (but how often are they orthogonal? that doesn’t seem to be often the case…). He then defines “correlations between reactions” by the angle between vectors. If the coefficient is zero the reactions are disconnected; if it’s ±1 they are in the same reaction (enzyme) subset. But is metabolism really modular? Metabolism wasn’t constructed by human engineers, after all…
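A sketch of the angle-based “reaction correlation” reading, on a hypothetical 4-reaction network with a 2-dimensional null space (my own example; I’m assuming the kernel basis is first orthonormalized, which seems to be what makes the coefficients well defined):

```python
import math

# Null-space basis of a toy network R1: ->A, R2: A->B, R3: B->A, R4: B->
v1 = [1.0, 1.0, 0.0, 1.0]   # through pathway
v2 = [0.0, 1.0, 1.0, 0.0]   # internal A <-> B cycle

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Gram-Schmidt orthonormalization of the basis
u1 = [x / math.sqrt(dot(v1, v1)) for x in v1]
w = [x - dot(v2, u1) * y for x, y in zip(v2, u1)]
u2 = [x / math.sqrt(dot(w, w)) for x in w]

rows = [[u1[i], u2[i]] for i in range(4)]   # one row per reaction

def corr(i, j):
    # cosine of the angle between the rows of the kernel matrix
    return dot(rows[i], rows[j]) / math.sqrt(dot(rows[i], rows[i]) * dot(rows[j], rows[j]))

print(corr(0, 3))  # 1: R1 and R4 always carry equal flux -> same enzyme subset
print(corr(0, 2))  # intermediate value: correlated, but not locked together
```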
Roldán. The exponential of (minus) the entropy production is a martingale. Hence Doob’s theorems for martingales hold: about stopping times and about the probability of records. Neri, Roldán, Jülicher. They obtain probabilities of the stopping times analogous to the “Haldane” equalities of Qian, Xie and Ge (and the Qians), and then they can study the negative records of the entropy production. There’s a cumulative distribution of the infimum, and the infimum law says that <S_inf(t)> >= – k_B (remarkable!). They also argue that they have a universal infimum distribution, based on a recent paper on arXiv.
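The martingale statements can be poked at with a toy model (mine, not from the talk): a biased random walk whose step entropy production is ln(p/q) forward and ln(q/p) backward. Then <exp(-S_t)> = 1 at any fixed time (constant martingale mean), and the average infimum should stay above -1 in units of k_B.

```python
import math, random

random.seed(1)
p, q, n, samples = 0.6, 0.4, 10, 200000   # invented bias, horizon, sample size
up, down = math.log(p / q), math.log(q / p)

mean_exp = mean_inf = 0.0
for _ in range(samples):
    S, S_inf = 0.0, 0.0
    for _ in range(n):
        # entropy production of one forward or backward step
        S += up if random.random() < p else down
        S_inf = min(S_inf, S)   # running infimum
    mean_exp += math.exp(-S)
    mean_inf += S_inf

mean_exp /= samples
mean_inf /= samples
print(mean_exp)  # close to 1: the martingale keeps its initial mean
print(mean_inf)  # negative but above -1: consistent with the infimum law
```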
Bar-Even, Noor. Intro: proposed a new reference potential defined not at one molar but at one micromolar (more reasonable for biochemistry). First story: how to measure the delta G zero’s? They argued that applying the stoichiometric matrix to the delta G’s of formation gives the delta G’s of reaction, and that the latter are easier to calculate, but the problem is very underconstrained (here I could ask about the parallel with inorganic chemistry, where for the delta G’s of formation you need to compare to the most stable compound of that species). Group contributions [Mavrovouniotis 1991, Jankowski 2008, Noor 2012]: assume that most thermodynamic properties are somehow additive, by splitting molecules into big groups. However, group contributions cannot be combined with existing known delta G’s; [Noor, Haraldsdottir, Milo, Fleming 2012] addressed this, and they brought it to eQuilibrator [Flamholz, Noor, Bar-Even, Milo (2012)]. (In inorganic chemistry the delta G of formation of a compound is computed by taking the elements of the periodic table and setting to 0 the G of their most stable form. Is this group contribution method the same thing, but assigning 0 to groups? Then this will create two disjoint areas of chemical thermodynamics, with numbers that do not really combine with one another.) Second story: a definition of thermodynamic infeasibility. Unfortunately, they define it on a single reaction, while I believe it can only be defined on cycles.
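Their first story, delta G of reaction = S^T applied to the delta G’s of formation, and my cycle objection can both be made concrete in a few lines (invented numbers, my own sketch): the reaction delta G’s around a cycle sum to zero, and shifting all formation energies by a constant leaves every reaction delta G unchanged, which is exactly why the inverse problem is underconstrained.

```python
# Species: A, B, C; reactions (columns): A->B, B->C, C->A, forming a cycle.
# Formation energies are invented numbers, just to exercise dG_r = S^T dG_f.
S = [[-1,  0,  1],
     [ 1, -1,  0],
     [ 0,  1, -1]]
dGf = [-10.0, -12.5, -11.0]

def reaction_dG(g):
    # apply the transpose of the stoichiometric matrix to formation energies
    return [sum(S[i][j] * g[i] for i in range(3)) for j in range(3)]

dGr = reaction_dG(dGf)
print(dGr)        # [-2.5, 1.5, 1.0]
print(sum(dGr))   # 0.0: reaction dG's must sum to zero around a cycle
print(reaction_dG([g + 7.0 for g in dGf]))  # unchanged: formation energies
                                            # are only fixed up to such shifts
```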
Why is NAD(P)H the universal electron carrier? By the phenomenon that the “rich get richer” (didn’t understand that). Basically he seems to argue that, by some coincidence, NADPH can both accept electrons from and donate them to most things that have or need one; all such oxidoreduction reactions almost always run toward acceptance, but they are almost completely reversible. Smith has an argument that close to equilibrium one can operate these reactions very efficiently (a local equilibrium in the middle of strong forces might enhance efficiency? Is this related to tight coupling? Might be… we need to think about this). Not dissipating too much while being very efficient looks a lot like my picture of the quantum dot, where one level is basically at equilibrium with the reservoir and the others are very far. That’s because the primary need of the machine is not to process electrons! It’s to process food! Electrons only need to be there whenever they are needed.
Look at [Bar-Even, etc., Thermodynamic constraints shape the structure of carbon fixation pathways].
[The problem with group contributions: it is as if one claimed that the delta G of formation of O2 is equal to the sum of the delta G’s of formation of two single O atoms. It is not like the delta G of reactions (which is just due to the combinatorics of populations): it is an assumption on the energetics, and that sucks]
Fleming. Interesting consideration: it is not so clear that differential equations are useful in biochemistry, because it’s not clear what the initial conditions are; moreover, the rates are often not known, and it’s difficult to fit the rates to actual parameters. He defines elementary reactions as those that are specified by their stoichiometry alone (and not, for example, by Hill parameters). Constraint-based modelling is based on the idea of ruling out physicochemically and biochemically infeasible network states with constraints, and then choosing some objective function, typically a linear function of the reaction velocities. Argument for high-dimensional modelling: organisms are the units of selection, hence if you want to understand how nonequilibrium thermodynamics is related to biology you have to get to that scale. Mentions the problem of infeasible cycles: what biologists do is tweak the bounds l <= v <= u until the problem goes away, which works but is not systematic. He proposes an approach that is somewhere in between kinetic modelling and constraint-based modelling. He mentions duplomonotone functions, which I had never heard of: f(x)^T . nabla f(x) . f(x) > 0; they developed an algorithm (to find steady states?) based on this property.
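On the duplomonotone condition f(x)^T nabla f(x) f(x) > 0: it guarantees that a small step along -f decreases ||f||, which is presumably what a steady-state-finding algorithm can exploit. A toy sketch of that mechanism (my own, not their algorithm): a linear map f(x) = Ax with positive-definite symmetric part, which satisfies the condition.

```python
# Invented duplomonotone example: f(x) = A x with f^T A f > 0 for f != 0.
A = [[3.0, 1.0],
     [0.0, 2.0]]

def f(x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def norm2(v):
    return v[0]*v[0] + v[1]*v[1]

x, alpha = [1.0, -2.0], 0.1   # arbitrary start, small step size
res = [norm2(f(x))]
for _ in range(200):
    fx = f(x)
    x = [x[0] - alpha * fx[0], x[1] - alpha * fx[1]]  # step along -f
    res.append(norm2(f(x)))

print(res[0], res[-1])  # ||f||^2 decreases monotonically to 0: a zero of f
```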
Estevez-Torres. Can we synthesize a smiley face by reaction-diffusion equations? First observation of reaction-diffusion: 1906, with a proposed reaction-front velocity v ∝ sqrt(diffusion coefficient), later explained by Kolmogorov et al. and by Fisher (Fisher-KPP): d_t a = a(1-a) + D Lapl a. Then the Belousov-Zhabotinsky reaction (1950-70), with Belousov having a lot of trouble publishing it. Winfree AT, Science 1972: vortex concentrations. Turing: d_t a = f(a,b) + D_a Lapl a, d_t b = g(a,b) + D_b Lapl b; if the Jacobian has J_11 > 0 and J_22 < 0 (one species is a self-stimulator and the other a self-inhibitor) and D_b >> D_a, then one can have Turing patterns. With kinesin: Nedelec ’97. Loose et al., Science 2008. But the problem with the BZ reaction is that, while it’s sustained, it’s neither programmable nor biocompatible. Purified kinesin motors + microtubules, and purified Min proteins, are sustained and biocompatible, but not programmable. People in chemistry synthesize molecules but not networks. DNA is a good candidate for this problem because it has a sequence, you can get the reactivity, and you can build networks from the structure of your molecules [Isalan et al. PLOS Biol. 2005]. So for example “A activates B activates C inhibits C” can be done, which is otherwise difficult. Chemistry for physicists: you buy your things, you mix them up, and you wait some time. Rondelez invented a system using two types of ssDNA species to make dynamic networks [Montagne et al. 2011 Mol Sys Biol]. The idea is that it’s a sustained system in a closed reactor, but the concentration of something is so large that the system sits at a quasi-steady state for a long time. The nice thing about DNA is that you can buy it with the features you want. See [Zadorin et al., PRL 15]: two fronts propagating in the two directions that don’t see each other because they are independent (they only share the resources; the DNAs don’t interact).
However, you can also make them inhibit each other and stimulate themselves, and then the two fronts collide and stop. Can we then make a material capable of development (an artificial simple embryo)? This might be useful to make materials that follow a developmental program: first there is pattern formation, undifferentiated; then morphogenesis due to external forces; then cell differentiation, and finally growth. He shows an example that is totally synthetic. Wolpert, J. Theor. Biol. 1969: the French flag model, an archetypical model of pattern formation with three colors. They make a Polish flag (two colors) that is stable for more than 15 hours, but then a parasite emerges and consumes all of the resources very fast; they tried to push the parasite back, but that’s another story. I didn’t understand how making a French flag reproduces an embryo.
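The Fisher-KPP equation mentioned above has pulled fronts with asymptotic speed 2 sqrt(D) (for unit growth rate), which is the sqrt(diffusion) law from 1906. A quick finite-difference check (grid, timestep and measuring window are my own working choices):

```python
# 1D Fisher-KPP: d_t a = a(1-a) + D d_xx a, explicit finite differences.
D, dx, dt = 1.0, 0.5, 0.05
N = 400
a = [1.0 if i * dx < 20 else 0.0 for i in range(N)]  # step initial condition

def front(a):
    # rightmost position where a exceeds 1/2
    return max(i * dx for i, ai in enumerate(a) if ai > 0.5)

def step(a):
    b = a[:]
    for i in range(1, N - 1):
        lap = (a[i+1] - 2*a[i] + a[i-1]) / dx**2
        b[i] = a[i] + dt * (a[i] * (1 - a[i]) + D * lap)
    return b

t, x0 = 0.0, None
while t < 60.0:
    a = step(a)
    t += dt
    if x0 is None and t >= 20.0:
        x0 = front(a)   # front position once the front is established

v = (front(a) - x0) / 40.0
print(v)  # close to 2*sqrt(D) = 2
```

The measured speed sits slightly below 2 at finite times (the slow logarithmic approach of pulled fronts), but well within a few percent over this window.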
Bo. Interesting observation: having a time-scale separation is a problem for simulations, because you need to wait for a lot of fast events before doing a slow one. If you manage to coarse-grain the fast ones, you can simulate more effectively. The question is: given a network with randomly chosen fast and slow rates, we want a criterion for whether we can do a fast-slow separation (I should really consider the fate of my marginal FR when there is a fast-slow time-scale separation between the observable and the unobservable state spaces; discuss this with Gianmaria). (Another idea for coarse-graining is to consider the eigenvector associated with the dominant eigenvalue: if it is well localized, then one can coarse-grain on the localized subspaces.) Bo: if the fast rates are strongly connected, then there is no hope to coarse-grain. Otherwise, identify the blocks and coarse-grain them. Example: stochastic Michaelis-Menten with the quasi-equilibrium or the slow complex-formation hypothesis. (Would be nice to have a result for how the eigenvector behaves.)
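The recipe, as I noted it, can be sketched: keep only the fast transitions, compute the strongly connected components of that subgraph, and lump each component into one mesostate. A hypothetical 4-state example (rates invented), with Kosaraju’s algorithm for the components:

```python
# Fast transitions only: fast exchange inside {0,1} and inside {2,3};
# the slow links between the two pairs are deliberately left out.
fast_edges = {0: [1], 1: [0], 2: [3], 3: [2]}
nodes = [0, 1, 2, 3]

def sccs(adj, nodes):
    # Kosaraju: order states by DFS finish time, then DFS on the transpose
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in adj.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs(u)
    radj = {}
    for u in nodes:
        for v in adj.get(u, []):
            radj.setdefault(v, []).append(u)
    comps, seen = [], set()
    for u in reversed(order):
        if u not in seen:
            comp, stack = [], [u]
            seen.add(u)
            while stack:
                x = stack.pop()
                comp.append(x)
                for v in radj.get(x, []):
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            comps.append(sorted(comp))
    return comps

blocks = sccs(fast_edges, nodes)
print(sorted(blocks))  # [[0, 1], [2, 3]] -> lump each block into one mesostate
```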
Skupin. Microscopic dynamics of a mitochondrion [Pietrobon 1985].