Live blogging from ΣΦ

@ SigmaPhi in Corfu

Dominique Chu, Entropy production during computation. The question he is interested in is: what is the cost of deterministic, finite-time computation? The discussion started a long time ago, with the conclusion that deterministic computation has no cost – Feynman and Bennett (but then, there is no computation at all): e.g. the logic gates constructed with billiard balls. However, such zero-energy computation is either infinitely slow (quasi-static) or inaccurate (billiard balls) [this is very close to my thoughts on computation, accuracy and speed]. Has a nice plot of the scaling of cost vs. time vs. accuracy in biological cells, from a paper where he asked what the minimal energetic cost of computation is. Digital deterministic computation can be performed fast and efficiently. Take-home messages: 1) the cost and the time scale linearly with the system size; 2) the accuracy scales as a power of the system size (???). He considers a continuous-time Markov chain that relaxes to equilibrium; the initial state is the input, and the output is the equilibrium state (why equilibrium and not nonequilibrium steady states?). Computation is limited by several aspects: entropy production, sampling time, and cost of sampling. The accuracy depends on the number of samples.

Many people, when they want to consider computation, go for the Turing machine, but he prefers to avoid this because, apparently, its thermodynamics is complicated. He only considers a logical circuit. The minimal implementation of an AND gate is a CTMC consisting of the states (0,0), (0,1), (1,0) and (1,1), with transitions between the states [this reminds me of some comments by Horowitz et al. that other gates cannot be realised by a two-state model because you need a sort of wolf-goat-cabbage construction]. We can consider this as a minimal chemical system made of only two molecules. For a few kT you get a lot of accuracy.
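
A minimal sketch of what such a CTMC implementation might look like (the energy bias, rates and relaxation scheme here are my own assumptions, not Chu's actual model): the state encoding the correct output is given a lower energy, so the equilibrium distribution – the "result" of the computation – concentrates on it, and a few kT of bias already buy a lot of accuracy.

```python
import numpy as np

# Four states (0,0), (0,1), (1,0), (1,1); the state encoding the correct
# AND output for inputs (1,1) is biased down in energy (units of kT).
# Energies and rates are my assumptions, not the speaker's model.
E = np.array([0.0, 0.0, 0.0, -4.0])

# detailed-balanced rates: W[j, i] is the rate i -> j
W = np.exp(-(E[:, None] - E[None, :]) / 2)
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))      # column sums zero: a valid generator

# relax an arbitrary "input" distribution with Euler steps of the master equation
p = np.array([1.0, 0.0, 0.0, 0.0])
dt = 0.01
for _ in range(20000):                   # total time t = 200, long past relaxation
    p = p + dt * (W @ p)

boltz = np.exp(-E) / np.exp(-E).sum()
print(p.round(4), boltz.round(4))        # output read off the equilibrium state
```

At equilibrium the error probability is 3e⁻⁴/(3e⁻⁴ + 1) ≈ 5%, i.e. 4 kT of bias already give ~95% accuracy.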

Stefano Ruffo, Out-of-equilibrium physics in spontaneous synchronization. Dynamical systems theory; historically the field was born in communication. He will cover the Kuramoto and Sakaguchi models, the role of noise, then his own work related to inertia, and a lot of other topics, including fluctuation theorems (very recent work that he might not cover). Synchronization is the adjustment of the rhythm of active, dissipative oscillators caused by a weak interaction. Prerequisite: there must be an external source that keeps the oscillators moving. An active oscillator generates periodic oscillations in the absence of periodic forces. The first to observe antiphase synchronization was Huygens with pendulum clocks. In-phase synchronization is also possible; the experiment has recently been done, and the conditions to obtain it are quite involved. In more recent times synchro was found in radio communication, flashing fireflies, circadian rhythms, the brain (actually synchro is not good for the brain: you have to find ways to de-synchro). The Lyapunov exponent along the direction of time, which changes the phase, is zero, so it is very easy to synchronize the phase. Then, the idea of Kuramoto was to look at the dynamics of the phases only. The Sakaguchi model introduces a drift in the equation that drives the system out of equilibrium [by the way, I should keep in mind the result that you can recast the system in Hamiltonian action-angle variables, which is a very powerful result]. Sakaguchi, by some trick, managed to calculate the stationary distribution of the Fokker-Planck equation [it would be interesting to calculate the distribution of the currents; Ruffo mentions that he uses this distribution to make considerations about the Fluctuation Theorem, but we need to keep in mind that the steady state concerns the occupation, while the FT is about the currents' large deviation function].
Ermentrout (1991) introduced inertia in the model going underdamped, and used it to analyze electric network distributions. On a fully connected network, the model has a first order phase transition.
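
The basic Kuramoto phenomenology is easy to reproduce numerically. A minimal sketch (all parameters are mine): below the critical coupling the order parameter r stays of order 1/√N, above it the population locks and r approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 500, 0.01, 4000
omega = rng.normal(0.0, 0.5, N)            # natural frequencies

def order_parameter(K):
    # Euler integration of d theta_i/dt = omega_i + K r sin(psi - theta_i),
    # the mean-field form of the Kuramoto model; z = r e^{i psi}
    theta = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

r_low, r_high = order_parameter(0.2), order_parameter(4.0)
print(r_low, r_high)                        # incoherent vs. synchronized
```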

David Mukamel. In many 1D models and some 2D models one observes anomalous heat conduction: the conductivity diverges as a power of the system size, and there are strange temperature profiles that diverge with some meniscus exponent close to the two reservoirs at the boundaries.

Alberto Imparato, Autonomous thermal motors. The question is: given two reservoirs at different temperatures, what is the minimal design to extract work? He considers a Langevin equation with a potential, a time-dependent protocol, and a nonconservative force; the potential and the external force are either periodic or stochastic, e.g. tilting ratchets and pulsing ratchets. The model he considers is inspired by that of Gomez-Martin et al., with two degrees of freedom and a periodic tilting potential. He moves to center-of-mass/relative coordinates, and then goes to the strong-coupling limit where the spring constant is large. He can adiabatically eliminate the fast variable (the relative distance) and find an effective equation for the center of mass. He can then find a periodic steady-state solution of the FP equation with a current that is uniform. The current can only be nonvanishing if the effective potential is not periodic [this reminds me of something by Landauer and Büttiker, check it out…]. And in fact, he shows that this is equivalent to the Büttiker-Landauer model, where there is a periodic potential and a periodic temperature profile with incommensurable periods [any rational approximation of incommensurable periods (which must be the physical situation) gives commensurable periods, hence the equilibrium vs. nonequilibrium nature of the system depends on time scales, and the time scale depends on the representation of the real numbers; this might give rise to interesting and maybe even paradoxical results].
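
The "current only if the effective potential is not periodic" criterion can be checked directly: for the overdamped dynamics dx = −V′(x) dt + √(2T(x)) dW (Itô convention; my sketch, with V and T chosen by me), solving the periodic steady state of the Fokker-Planck equation shows the current vanishes iff φ(x) = ∫₀ˣ V′(y)/T(y) dy is itself periodic, i.e. φ(L) = 0.

```python
import numpy as np

L = 2 * np.pi
x = np.linspace(0.0, L, 20001)
Vp = np.cos(x)                               # V(x) = sin(x), so V'(x) = cos(x)

def integrate(f):                            # trapezoid rule on the grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def phi_L(shift):
    T = 1.0 + 0.5 * np.sin(x + shift)        # periodic temperature profile
    return integrate(Vp / T)                 # phi(L): loop integral of V'/T

print(phi_L(0.0))        # ~ 0: effective potential periodic, no current
print(phi_L(np.pi / 2))  # ~ -1.94: phase-shifted T, ratchet current flows
```

A temperature profile in phase with the potential gives φ(L) = 0 (no motor); shifting it breaks the periodicity of φ and a current appears, which is the essence of the Büttiker-Landauer setup.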

Sung J., Chemical fluctuation theorem for vibrant reaction networks in living cells. He shows that experiments indicate that intracellular reactions regulated by enzymes are not Poissonian at all, and proposes the new concept of a “vibrant reaction process”. Basically, he proposes a stochastic reaction rate, due to the uncertainty and variation in enzyme expression. He finds a new relative variance [it seems to me that the calculations are possible because the fluctuation in the “vibrant” rate is independent of the intrinsic noise of the chemical evolution].

Afshin Montakhab. There are many pieces of evidence that the brain works at criticality. The question he is interested in is: how does the brain approach such criticality? Excitation and inhibition tendencies balance each other. There is not just a critical line or point, but a whole critical phase; this is the case for the Kuramoto model on hierarchical networks that mimic the brain, where the critical behavior occupies an extended region. But is there a dynamical origin of such extended criticality? His model is a random directed network, where every pair of nodes is connected with some probability q, hence the average degree of the network is k = qN. The dynamics works through a transfer function: at a given time, the probability of a node being activated is determined by the activation of its neighbours passed through a transfer function: if the neighbours all fire, the node fires; if there is no activity, no fire. Very simple. The largest eigenvalue of the adjacency matrix provides a lot of information about the collective dynamics of the network. The important parameter is the number of active sites. The activity-dependent branching ratio is the expected activation at the next time step given a certain degree of activation. The hypothesis is that a branching ratio of 1 is a good characteristic of a critical system. There is a mean-field analysis that allows one to calculate the activity-dependent branching ratio, which has a linear part and a nonlinear part, distinguishing the two types of behaviour. There is a parameter such that, at its critical value, there is a transition between stability and instability (all firing, all non-firing). Actually, he has a whole interval of criticality. He then studies the fluctuations around the critical point, beyond mean field. In the critical region, of course, it's power law with fatter and fatter tails.
Furthermore, one has avalanches: in the critical region one perturbs the system and then it goes on and on for a long while [divergent self-correlation?].
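
My own toy reconstruction of this kind of model (an Erdős–Rényi stand-in for the hierarchical networks; all parameters and the per-link activation rule are mine): the largest eigenvalue of the adjacency matrix tracks the mean degree k = qN, and tuning the per-link activation probability to 1/k puts the empirical branching ratio at 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N, q = 400, 0.02                       # directed ER graph, mean degree k = qN = 8
A = (rng.random((N, N)) < q).astype(float)   # A[i, j] = 1: edge j -> i
lam = np.max(np.abs(np.linalg.eigvals(A)))
print(lam)                             # ~ 8: largest eigenvalue ~ mean degree

def branching_ratio(p, trials=2000, seed_size=5):
    """Expected activity at the next step per currently active site."""
    tot = 0.0
    for _ in range(trials):
        active = np.zeros(N)
        active[rng.choice(N, seed_size, replace=False)] = 1.0
        n_in = A @ active                        # active in-neighbours per node
        tot += (1.0 - (1.0 - p) ** n_in).sum() / seed_size
    return tot / trials

print(branching_ratio(1 / 8), branching_ratio(1 / 16))  # ~1 (critical) vs ~0.5
```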

Vasilyev, Survival of a lazy evasive prey. Punchline: if the prey does not know where the predators are, it should stay where it is.

Carlos, Negative response to an effective bias by a mixed population of voters. A model of opinion formation: one often encounters situations in which trying harder, pushing stronger, making an excessive effort appears counterproductive, leading to a smaller effect than the outcome achieved with a more modest investment. In particular, in long-lasting human relationships. But there are also examples in physics: electron transfer in semiconductors at low temperature, hopping processes in disordered media, etc. He considers a large society made of many small communities, each comprising N individuals. They have to vote for one of two candidates, one blue and one red. Each community is exposed to an external bias prompting them to vote for a preferred candidate. Each member of the community will either align with the bias (ordinary voters) or against it (contrarians). They model this system by, guess what?, a spin model, with a temperature that allows for fluctuations of opinion within a given community, with an interaction term between ordinary voters and contrarians, and there is a “magnetization” of the contrarians and a “magnetization” of the ordinary voters (so these are two populations of spins). To try to solve for the partition function, he goes to the continuum and finds some expressions for the order parameter. There are several limits in which the expressions simplify, and one can find a negative slope in the coupling parameter, showing that the red candidate might win because of an excessive attempt of the pro voters to move the vote of the anti voters. I don't see the physical mechanism behind this, and I'm surprised that this negative response occurs also for equilibrium systems.

Ruppeiner. Very qualitative talk, with many words but not many formulas. He mentions that the Fokker-Planck equation allows one to interpret the thermodynamic scalar curvature. He shows a table with calculations of the thermodynamic curvature for several thermodynamic systems (all of them equilibrium systems). Interestingly, it's the first time I see a speaker read from a prepared speech. He makes the statement that the entropy of the universe is a nondecreasing quantity. Mentions some facts in black-hole thermodynamics, stating that it is an application of the theory.

Sahay. Considers anti-de Sitter black holes, which are especially good for thermodynamics. AdS spacetime solves the Einstein equation with a negative cosmological constant. The attractive gravitational force increases with distance, acts as a box, and thus allows one to formulate stable canonical ensembles [this reminds me of the project I had of defining the canonical ensemble in GR…]. One then defines the metric as the derivative of the entropy with respect to the parameters, and from it the invariant scalar curvature. The thermodynamic geometry is flat for the ideal gas, curved for the van der Waals fluid, and singular for something else. The curvature encodes first-order phase transitions. Then, the thermodynamic geometry for black holes. [Like the previous one, a very wordy talk with few calculations made explicit; the few formulas are standard, so it's difficult to understand exactly what this is all about.] Finally he mentions the Kerr-AdS black hole.

Smoluchowski’s demon: we take bets

At a recent workshop Luca Peliti mentioned Smoluchowski's proposed realization of an asymmetric “Maxwell demon”. Right afterwards, with some of the participants, we started discussing how the system would behave, and we decided to finally take a little bet (not much, a bottle of good-quality rum). The betting process turned out to be an interesting sociological experiment in itself, as the specification of the conditions, of the meaning of the concepts, and of the procedure by which the bet would be claimed to be won is far from settled. So here is this post, in the hope that it might draw more discussion and lead to an agreement.

So here is the situation. There is a gas in a box, with a wall in the middle and a small opening with a door. The door opens asymmetrically, only to one side, and there is a coiled spring exerting a restoring force to close it. The spring is embedded in the wall and the door, so that it does not interact with the gas. The system is overall isolated. We assume that before time t=0 the door is kept open by a string, and that the gas has been prepared at equilibrium with a well-defined notion of temperature, the same on both sides of the box (e.g. by putting it in contact with a reservoir that was then removed some time in the past). At time t=0 the string is cut with minimal effort. As is well known in these setups, the door can occasionally be opened by molecules hitting it from one side, hence there might or might not be a net displacement of particles. We assume there is no interaction between particles and no dissipation due to interaction with the walls. The question is whether the temperature of the gas will be the same in the two sectors after a sufficiently long time has passed.


The kind of argument I'm interested in is one where the details of the system do not matter so much: the number of particles, the mass of the door, the spring constant, etc. I have many personal reasons why I like this problem, and I have my own opinion on it.

@ LuxCNworkshop

I’ll try to live blog as much as I can from the workshop we organized here in Luxembourg on Chemical Networks. I won’t be able to cover all talks as most of them I need to follow very closely and cannot get distracted.

Luca Peliti, On the value of information in gambling, thermodynamics and population dynamics. Interesting table by Rivoire (2015) of analogies between gambling, thermodynamics and an evolving population: e.g. an “option” would be a “state” of a thermodynamic system, or a “type” in evolutionary dynamics. Donaldson-Matasci et al. (2010) proposed a model of the history of an evolving population. The growth rate is given by a probabilistic measure involving the average of the fitness, the Shannon entropy, and the relative entropy. Kelly's result is that the growth rate of a population is increased by the mutual information. The analogy with thermodynamic systems was proposed by Vinkler et al. (2014) and by Rivoire (2015). Very interesting collection of references.
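
Kelly's result is easy to verify by simulation. A sketch (the setup – a fair binary race with doubled stakes and a noisy tip – is my own illustration): betting the posterior raises the log-growth rate over blind betting by exactly the mutual information I(X;Y) = 1 − H₂(ε).

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n = 0.1, 200000
X = rng.integers(0, 2, n)                      # winner of a fair binary race
Y = np.where(rng.random(n) < eps, 1 - X, X)    # noisy tip, error rate eps

# Kelly: with fair odds 2, bet fractions equal to the posterior P(x | y);
# the capital grows by a factor 2 * (fraction bet on the actual winner)
post_on_winner = np.where(Y == X, 1 - eps, eps)
growth_info = np.mean(np.log2(2 * post_on_winner))
growth_blind = np.log2(2 * 0.5)                # = 0: bet 1/2 with no tip

H2 = lambda q: -q * np.log2(q) - (1 - q) * np.log2(1 - q)
print(growth_info - growth_blind, 1 - H2(eps))  # both ~ I(X;Y) = 0.531 bits
```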

Hong Qian, The mathematical foundation of a landscape theory for living matter and life. We don't understand what entropy is. Provocative question: do we understand what energy is? In biology, there are all kinds of landscapes: protein folding, cell differentiation, etc. But there is a fundamental conceptual difference between an overdamped nondriven system like a protein and an animate object… (question: are we ready to teach thermodynamics starting from Markov processes, for real?). He gives a very nice basic introduction to thermodynamics based on Markov processes, which I think will be very useful for everybody. He proposes to replace entropy with free energy in every book of thermodynamics (I would agree with this). He also mentioned the “Langevin analog” of a chemical master equation, which is the Poisson-type time-change representation of Thomas Kurtz. He also has a nice argument that, by the splitting of symmetric vs. antisymmetric parts of a Fokker-Planck operator, the dissipative part is basically a Newtonian dynamics that preserves the free energy (just like symplectic dynamics preserves entropy). So, in a way, irreversibility comes from the deterministic part, which is an idea I'm very sympathetic to.
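
The "replace entropy with free energy" viewpoint has a clean Markov-process counterpart that is easy to check numerically: for any master equation, the relative entropy D(p‖π) to the stationary distribution (the free energy, up to units and a constant) decreases monotonically. A sketch with invented random rates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = rng.random((n, n))                   # random positive rates j -> i
np.fill_diagonal(W, 0.0)
np.fill_diagonal(W, -W.sum(axis=0))      # column sums zero: a valid generator

# stationary distribution: the null eigenvector of W
w, v = np.linalg.eig(W)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

def free_energy(p):                      # D(p || pi), the Lyapunov function
    return float(np.sum(p * np.log(p / pi)))

p, dt, vals = np.ones(n) / n, 0.001, []
for _ in range(3000):
    vals.append(free_energy(p))
    p = p + dt * (W @ p)                 # Euler step of the master equation

print(vals[0], vals[-1])                 # decreases monotonically (H-theorem)
```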

Gheorghe Craciun. Boltzmann, apparently in response to Lorentz, relaxed the assumption of detailed balance to a “semi-detailed balance”, which is what we would now call complex balance. Shear and Higgins (1967, 1968), “An analog of the Boltzmann H-theorem for systems of coupled chemical reactions” and “Remarks on Lyapunov function for systems of chemical reactions”; unfortunately they thought this would hold for any reaction network, which is obviously not true. He goes on tracing the history of the global attractor conjecture. Quite humbly, he hardly mentions that he himself proved this outstanding conjecture of the field.

Pierre Gaspard. Remarkable talk, couldn’t take notes. Interesting final considerations on the quasi-species theory of Eigen and Schuster: the maximum genome size should be inversely proportional to the mutation rate.

Mueller. Has a nice notion of flux topes, which are somehow special to FBA, as opposed to flux modes. Furthermore, they consider the thermodynamic constraint. They also proved that for arbitrary networks and arbitrary kinetics, optimal solutions of enzyme allocation problems are Elementary Flux Modes. In particular, this explains the low-yield pathways (e.g. the Crabtree effect, etc.).

Ouldridge. Nice explanation of how the Szilard engine paradox is explained, not in terms of erasing a bit, but in terms of copying the state of the particle, and correlating it to the state of the weight being pulled. I think this way of thinking is much cleaner and it might also solve the usual problem with Landauer’s interpretation that dissipation is only in erasing bits (should ask him question about it). I think my theory of marginal statistics for Markov processes could be employed to derive results in these networks where, by the copying mechanism, you basically have a cartesian product of Markov processes (by the way, there exists a notion of cartesian product of Markov chains, see here).

Smith. Very nice explanation of why CRN dynamics and topology are significantly more complicated than dynamics on graphs. He introduces the concept of “concurrency”, which basically is the idea that, while reactions happen between complexes, there is a concurrency of species in complexes. Goes through the Doi-Peliti formalism, moving to a basis of coherent states. A good idea is to apply the Liouvillian to a coherent state and derive conditions on the numbers, which actually give you complex balance. He has an argument that the nonequilibrium work relations are not about work (I'm sympathetic to this). Looks for stationary trajectories (using the Liouvillian as Markovian). Why do we have two fields in a system that initially only had one variable? He argues this is related to the adjoint generator acting on observables. Another consideration: introducing the likelihood ratio to tilt the distribution in importance sampling is a bit like introducing the two fields, the left field being like the likelihood ratio and the right one the importance distribution. [Baish 2015] is a transformation of the fields that is appropriate to obtain a new Liouvillian with an extra total time derivative. From that representation one finds the transposed evolution for the mean number, rather than for the probability (we need to talk about duality in graphs…). In path integrals, FDRs are symmetries of the 2-field action [Kamenev 2001]. Ah: Jaynes said nutty things but also useful things. Like that.

Supriya. Defines a factorial moment that is more suitable for CRNs: basically you consider the falling powers of the species numbers according to their stoichiometry. Then there is an equation connecting moments, a moment hierarchy, and that equation is not that bad when it comes to factorial moments. If you take the time derivative of the factorial moment, apply the Liouvillian, and then commute all of the operators that need to be commuted, you get a lot of other moments. What she argues is that you can get an equation written only in terms of the incidence matrix and the adjacency (Kirchhoff) matrix, as happens for the order-1 moment. The key is that any factorial moment of a Poisson distribution is very simple (is it correct to say that factorial moments are to Poissonians what cumulants are to Gaussians?). Has a very nice closed equation for the factorial moments. You can basically say that there are no factorial moments of order larger than some number. She finds recursive relations for the ratio of consecutive steady-state factorial moments, which can be solved very nicely by running the recursion “up and down” according to asymptotic expansions. Question: can this recursion be used to observe power-law “critical distributions”, or phase transitions from a localized to an explosive state?
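
The Poisson fact she relies on is quick to check numerically: the k-th factorial moment E[N(N−1)⋯(N−k+1)] of a Poisson(λ) variable is just λᵏ, so factorial moments trivialize on the Poisson reference distribution much as cumulants do on the Gaussian. A sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, n = 3.0, 1_000_000
N = rng.poisson(lam, n)

def falling_moment(N, k):                 # mean of N (N-1) ... (N-k+1)
    out = np.ones_like(N, dtype=float)
    for j in range(k):
        out *= N - j
    return float(out.mean())

for k in (1, 2, 3):
    print(k, falling_moment(N, k), lam**k)   # k-th factorial moment = lam^k
```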

Barbara Bravi. Considers subnetwork dynamics: how to marginalize over the unobservable degrees of freedom. I like this because, while I also consider marginal thermodynamics, there is no dynamics in my thermodynamics! They have developed the right methods to project away the environmental degrees of freedom. I think what would really be interesting here is to redo this approach with the tilted dynamics for the SCGF, to obtain marginals of the currents in subnetworks.

Mast. The origin of life: it's not clear where the DNA-to-RNA-to-proteins-back-to-DNA cycle emerged. It is improbable that they all came along together; more probably there was an RNA world before, with RNA completely duplicating itself. It's also unclear which of metabolism and genetics came first. They study nonequilibrium systems at large, because of chemical nonequilibrium and physical forces, like temperature gradients. Thermophoresis: “movement of molecules along a thermal gradient”. In their case: charged molecules in water (DNA, proteins). One of the arguments is that the accumulation due to thermophoresis in certain regions (e.g. in convection cells) might enhance elongation of polymers [Escalation of polymerization in a thermal gradient] (I could consider rewriting my paper on equilibrium in temperature gradients in a truly nonequilibrium situation where the underlying medium has a Bénard flow, and see theoretically if I obtain accumulation of particles at a corner). Apparently there is a sort of phase transition to gelation.

Keil. In an equilibrium setting there is a “tyranny of the shortest”, because they are much faster to form. Therefore one has to go far from equilibrium.

Schuster. Excess sugar is converted into fat. Is the reverse pathway possible? How can the Inuit live on a practically carbohydrate-free diet? This is not clear. So far it's understood that fatty acids can only be respired. Some say “fats burn in the fire of carbohydrates”. But, for example, converting fats into sugar is needed to fuel the brain. Textbook pathways do not cover the complete combinatorial multitude of biochemical conversions. Theoretical methods for studying metabolic networks: dynamic simulation, stability and bifurcation analysis, metabolic control analysis, pathway analysis, flux analysis, optimization and game theory, etc. Pathway analysis: decomposition of the network into its smallest functional entities; it doesn't need kinetic parameters. The steady-state condition is Nv = 0, with sign restrictions for the irreversible fluxes, whose solution set is a convex region. There might be “internal elementary modes”, by which I think he means what we call futile cycles. Different pathways can have different yields (thus allowing the system to switch from one to the other). He argues that EFM analysis is not scalable to genome-scale metabolic networks, unlike FBA. In several works they argued that there is no gluconeogenesis from fatty acids [can sugars be produced from fatty acids?].
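
A toy version of the steady-state condition (the network is mine, not Schuster's): three species, five reactions. The solutions of Nv = 0 form a two-dimensional cone, spanned here by the two elementary flux modes.

```python
import numpy as np

# rows = species A, B, C; columns = reactions: A->B, B->C, A->C, ->A, C->
Nmat = np.array([[-1,  0, -1,  1,  0],
                 [ 1, -1,  0,  0,  0],
                 [ 0,  1,  1,  0, -1]], dtype=float)

rank = np.linalg.matrix_rank(Nmat)
print(rank, Nmat.shape[1] - rank)        # rank 3 -> a 2-dimensional null space

# the two elementary flux modes: through B, and the direct route A -> C
efms = [np.array([1, 1, 0, 1, 1], float),
        np.array([0, 0, 1, 1, 1], float)]
for e in efms:
    print(Nmat @ e)                      # both satisfy the steady state N v = 0
```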

Ewald. The idea is: killing pathogenic fungi through self-poisoning. Fungi can cause pretty bad illnesses: every year about as many people die from fungal infections as from tuberculosis.

Yordanov. Steady-state differential dose response, e.g. by knocking down a metabolite; a natural perturbation is interferon signalling. They use “Laplacian models”: zero- and first-order mass-action kinetics (basically, master equations with only stoichiometric coefficients). Theory: Gunawardena 2012, Mirzaev & Gunawardena 2013, and a lot of references to other applications. He gave a method for compression of Kirchhoff polynomials that allows one to avoid the combinatorial explosion [Yordanov and Stelling 2016].

Poolman. Introduction to the history of the group. Now works on acetogens, a peculiar class of microbes whose metabolism is based on carbon monoxide. This might be interesting for industrial purposes, both because you might want to eliminate a poisonous gas and because it might give some useful product out. Their goal is to provide a hierarchical decomposition of large metabolic networks, by an improved “divide and conquer” algorithm. He considers kernel matrices that are orthogonal (but how often are they orthogonal? That doesn't seem to be often the case…). He then defines “correlations between reactions” via the angle between vectors. If the coefficient is zero the reactions are disconnected; if it's ±1 they are in the same reaction (enzyme) subset. But is metabolism really modular? Metabolism wasn't constructed by human engineers, after all…

Roldán. The exponential of minus the entropy production is a martingale. Hence Doob's theorems for martingales hold: about stopping times and about the probability of records. Neri, Roldán, Jülicher: they obtain probabilities of stopping times analogous to the “Haldane” equalities of Qian, Xie and Ge (and the Qians), and then they can study the negative records of the entropy production. There is a cumulative distribution of the infimum, and the infimum law says that ⟨S_inf(t)⟩ ≥ –k_B (remarkable!). They also argue that there is a universal infimum distribution, based on a recent paper on arXiv.
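
Both statements are easy to see in the simplest NESS I can think of, a biased random walk (my illustration, with k_B = 1): the entropy production is S(t) = X(t) ln(p/q), exp(−S) averages to one at every fixed time, and the mean record-low of S stays above −1.

```python
import numpy as np

rng = np.random.default_rng(5)
p, T, M = 0.7, 50, 50000                   # bias, time steps, trajectories
steps = np.where(rng.random((M, T)) < p, 1, -1)
S = np.cumsum(steps, axis=1) * np.log(p / (1 - p))   # entropy production

# martingale: <exp(-S(t))> = 1 at any fixed t (checked at t = 5 and 10;
# longer times need far more samples, since exp(-S) is heavy-tailed)
m5, m10 = np.exp(-S[:, 4]).mean(), np.exp(-S[:, 9]).mean()
print(m5, m10)

# infimum law: the mean record-low of S (including S(0) = 0) is >= -k_B = -1
S_inf = np.minimum(np.minimum.accumulate(S, axis=1)[:, -1], 0.0)
print(S_inf.mean())
```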

Bar-Even, Noor. Intro: proposed a new reference potential defined not at one mole but at one micromole (more reasonable for biochemistry). First story: how to measure ΔG°'s? They argued that applying the stoichiometric matrix to the ΔG's of formation gives the ΔG's of reaction, that the latter are easier to calculate, but that the problem is much underconstrained (here I could ask about the parallel with inorganic chemistry, where for the ΔG's of formation you need to compare to the most stable compound of the element). Group contributions [Mavrovouniotis 1991, Jankowski 2008, Noor 2012]: assume that most thermodynamic properties are somehow additive, by splitting molecules into large groups. However, group contributions cannot be combined with existing known ΔG's [Noor, Haraldsdóttir, Milo, Fleming 2012], and they brought it into eQuilibrator [Flamholz, Noor, Bar-Even, Milo 2012] (in inorganic chemistry the ΔG of formation is computed taking the elements of the periodic table and setting to 0 the G of their most stable compound. Is this group-contribution method the same thing, but assigning 0 to groups? Then this will create two disjoint areas of chemical thermodynamics, with numbers that do not really combine with one another). Second story: a definition of thermodynamic infeasibility. Unfortunately, they define it on a single reaction, while I believe it can only be defined on cycles.
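
The underconstrained inverse problem can be seen in a toy example (all numbers invented): reaction ΔG's determine formation ΔG's only up to the null space of Sᵀ, here a uniform shift of all formation energies, which is exactly the gauge freedom that the reference-compound convention fixes.

```python
import numpy as np

# species A, B, C; reactions A -> B and B -> C; Delta_G_r = S^T g
S = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]], dtype=float)
g_true = np.array([0.0, -10.0, -25.0])     # invented formation energies
dGr = S.T @ g_true                         # "measured" reaction energies

g_fit, *_ = np.linalg.lstsq(S.T, dGr, rcond=None)
print(S.T @ g_fit - dGr)                   # reaction energies: recovered exactly
print(g_fit - g_true)                      # formation energies: only fixed up to
                                           # a uniform shift (null space of S^T)
```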

Why is NAD(P)H the universal electron carrier? By the phenomenon that “the rich get richer” (I didn't understand that). Basically he seems to argue that, by some coincidence, NADPH can both accept and donate electrons from most things that need or have one; all such oxidoreduction reactions are almost always in acceptance, but they are almost completely reversible. Smith has an argument that close to equilibrium one can operate these reactions very efficiently (a local equilibrium in the middle of strong forces might enhance efficiency? Is this related to tight coupling? Might be… we need to think about this). Not dissipating too much while being very efficient looks a lot like my picture of the quantum dot, where one level is basically at equilibrium with the reservoir and the others are very far. That's because the primary need of the machine is not to process electrons! It's to process food! Electrons only need to be there whenever they are needed.

Look at [Bar-Even, etc., Thermodynamic constraints shape the structure of carbon fixation pathways].

[The problem with group contributions: it is as if one claimed that the ΔG of formation of O2 equals the sum of the G's of formation of single O atoms. It is not like the ΔG of reactions (which is just due to the combinatorics of populations): it is an assumption on the energetics, and that sucks.]

Fleming. Interesting consideration: it is not so clear that differential equations are useful in biochemistry, because it's not clear what the initial conditions are; moreover, the rates are often unknown, and it's difficult to fit the rates to actual parameters. He defines elementary reactions as those that are specified only by their stoichiometry (and not, for example, by Hill parameters). Constraint-based modelling is based on the idea of excluding physicochemically and biochemically infeasible network states by constraints, and then choosing some objective function, typically a linear function of the reaction velocities. Argument for high-dimensional modelling: organisms are the units of selection, hence if you want to understand how nonequilibrium thermodynamics is related to biology, you have to get to that scale. Mentions the problem of infeasible cycles: what biologists do is tweak the bounds l < v < u until they get rid of the problem, which works but is bad because it's not systematic. He proposes an approach that is somewhere in between kinetic modelling and constraint-based modelling. He mentions duplomonotone functions, which I had never heard of: f(x)^T ∇f(x) f(x) > 0; they developed an algorithm (to find steady states?) based on this property.

Estevez-Torres. Can we synthesize a smiley face by reaction-diffusion equations? First observation of reaction-diffusion: 1906, with a proposed reaction-front velocity v ∝ √D, later explained by Kolmogorov and Fisher (Fisher-KPP): d_t a = a(1-a) + D Lapl a. Then the Belousov-Zhabotinsky reaction (1950-70), with Belousov having a lot of trouble publishing it. Winfree AT, Science 1972: vortex concentrations. Turing: d_t a = f(a,b) + D_a Lapl a, d_t b = g(a,b) + D_b Lapl b; if the Jacobian has J_11 > 0 and J_22 < 0 (one is a self-stimulator and one a self-inhibitor), and D_b >> D_a, then one can have these Turing patterns. With kinesin: Nedelec '97. Loose et al., Science 2008. But the problem with the BZ reaction is that, while it's sustained, it's neither programmable nor biocompatible. Purified kinesin motors + microtubules, and purified Min proteins, are sustained and biocompatible, but not programmable. People in chemistry synthesize molecules but not networks. DNA is a good candidate for this problem because it has a sequence, you can get the reactivity, and you can build networks from the structure of your molecules [Isalan et al., PLoS Biol. 2005]. So for example “A activates B activates C inhibits C” can be done, which is otherwise difficult. Chemistry for physicists: you buy your things, you mix them up, and you wait some time. Rondelez invented a system using two types of ssDNA species to make dynamic networks [Montagne et al. 2011, Mol Sys Biol]. The idea is that it's a sustained system in a closed reactor, but the concentration of something is so big that it stays steady-state-like for a long enough time. The nice thing is that you can buy DNA with the features you want. See [Zadorin et al., PRL 15]: two fronts in the two directions that don't see each other because they are independent (they only share the resources, but the DNAs don't interact).
However, you can also arrange for the two species to inhibit each other and stimulate themselves, and then the two fronts stop when they collide. Can we then make a material capable of development (a simple artificial embryo)? This might be useful for making materials that follow a developmental program: first there is pattern formation, still undifferentiated, then morphogenesis due to external forces, then cell differentiation and finally growth. He shows an example that is totally synthetic. Wolpert, J. Theor. Biol. 1969: the French flag model, an archetypal model of pattern formation with three colors. They make a Polish flag (two colors) that is stable for more than 15 hours, but then a parasite emerges and consumes all of the resources very fast; they tried to push the parasite back, but that’s another story. I didn’t understand how making a French flag reproduces an embryo.
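[Since Fisher-KPP came up: a minimal sketch of the travelling front in d_t a = a(1-a) + D Lapl a, in 1D with explicit Euler stepping. Parameters and discretisation are my own illustrative choices, not from the talk.]

```python
# 1D Fisher-KPP front: the stable state a = 1 invades the unstable state a = 0
# at asymptotic speed 2*sqrt(D) (in these nondimensional units).
import numpy as np

D, dx, dt = 1.0, 0.5, 0.05   # dt*D/dx^2 = 0.2 < 0.5, so Euler is stable
n, n_steps = 400, 1000
a = np.zeros(n)
a[:20] = 1.0  # seed the invading state on the left

for _ in range(n_steps):
    lap = np.zeros_like(a)
    lap[1:-1] = (a[2:] - 2.0 * a[1:-1] + a[:-2]) / dx**2
    a += dt * (a * (1.0 - a) + D * lap)
    a[0], a[-1] = 1.0, 0.0  # pin the boundaries

# Front position: rightmost grid point where a exceeds 1/2.
front = int(np.max(np.where(a > 0.5)))
print(front)
```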

Bo. Interesting observation: having a time-scale separation is a problem for simulations, because you need to simulate a lot of fast events before doing a slow one. If you manage to coarse-grain the fast ones then you can simulate much more effectively. The question is: given a network with randomly chosen fast and slow rates, we want a criterion for whether a fast-slow separation is possible (I should really consider the fate of my marginal FR when there is a fast-slow time separation between the observable and the unobservable state spaces; discuss this with Gianmaria). (Another idea for coarse-graining is to consider the eigenvector associated with the dominant eigenvalue: if it is well localized, then one can coarse-grain on the localized subspaces.) Bo: if the fast rates are strongly connected, then there is no hope of coarse-graining. Otherwise, identify the blocks and coarse-grain them. Example: stochastic Michaelis-Menten with the quasi-equilibrium or the slow-complex-formation hypothesis. (It would be nice to have a result for how the eigenvector behaves.)
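[A quick sanity check of the block idea: in a toy CTMC generator with two fast blocks coupled by slow rates, the spectrum splits into a near-zero mode, one slow relaxation mode, and O(fast) modes. Rates are invented.]

```python
# Toy CTMC generator: fast blocks {0,1} and {2,3}, coupled by slow 0 <-> 2
# transitions. Timescale separation shows up as a gap in the spectrum.
import numpy as np

fast, slow = 100.0, 1.0
Q = np.array([
    [0.0,  fast, slow, 0.0],
    [fast, 0.0,  0.0,  0.0],
    [slow, 0.0,  0.0,  fast],
    [0.0,  0.0,  fast, 0.0],
])
np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero

evals = np.sort(np.linalg.eigvals(Q).real)[::-1]
print(evals)  # ~ [0, O(slow), O(fast), O(fast)]: one slow mode survives
```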

Skupin. Microscopic dynamics of a mitochondrion [Pietrobon 1985].

MacKie @ Budapest

Here are the slides of my talk:



Vladana Vukojevic. Interested in the neurology of alcoholism, trying to understand the physiological onset of dependence. Quenching of chemical oscillations. The work originated from that of Graae Sorensen in Copenhagen [Quenching of chemical oscillations, JPC 1987]. The Briggs-Rauscher reaction oscillates (there are YouTube videos of this). A stirred reactor with input and output of reactants; the oscillations depend on the feeding rate, and you can have a supercritical Hopf bifurcation. A quench is a deliberate perturbation of the system whereby you drive it from the limit cycle towards the stable manifold. They studied all species that are intermediates in the reaction. There are several models of the BR reaction, so apparently it is not clear how many intermediates there are. Mentions some stoichiometric stability analysis. They consider a mathematical model for the HPA axis based on cortisol levels, where ethanol is one of the molecules, so by drinking a glass of wine one alters the cycle. The conclusion is that the HPA cycle is affected by ethanol; for example, the peak in cortisol is in the evening, which affects sleeping patterns.
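[A quench is easiest to picture in the supercritical Hopf normal form dz/dt = (μ + iω)z − |z|²z, which is only an illustrative stand-in here, not a model of the actual BR chemistry: kick the state close to the unstable focus and the oscillation amplitude collapses, then slowly regrows.]

```python
# Quenching a limit cycle in the Hopf normal form (explicit Euler).
# Parameters and the size/timing of the quench are arbitrary choices.
import numpy as np

mu, omega, dt = 1.0, 5.0, 1e-3
z = 1.0 + 0.0j  # start on the limit cycle, |z| = sqrt(mu) = 1
amps = []
for step in range(20000):
    if step == 5000:
        z *= 0.01  # the quench: push the state near the unstable focus
    z += dt * ((mu + 1j * omega) * z - abs(z)**2 * z)
    amps.append(abs(z))

amps = np.array(amps)
# amplitude on the cycle, just after the quench, and after recovery
print(amps[4999], amps[5100], amps[-1])
```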



The entropy production is not the production of entropy

Main message: the concepts “entropy” and “production” do not commute, if they make sense at all.

I said out loud at the JETC that entropy production is not the production of entropy, and I believe many were disoriented. So let’s try to clarify what I mean. The confusion arises because people freely carry lingo from equilibrium thermodynamics out of equilibrium. At equilibrium, state functions are great. But out of equilibrium, talking about state functions can create confusion. I believe the major culprit for this shameful situation is nonetheless Clausius (I’ve heard Boltzmann medallist Daan Frenkel say something similar, so it’s not completely my responsibility).

Entropy, like energy etc., is a state function. That is, for a system X with states x, entropy is some function S(x) of the state, a scalar. Full stop. At the moment I don’t care how this function is determined, whether it’s uniquely defined (it is not), or whether it is interesting at all (not so much). Let us further assume that there is an internal dynamics within the system. As a consequence, there is a certain increment of entropy dS as the system moves from one state to another. Increments of scalars are a very special type of differential, called exact 1-forms. Because dS is exact, the production of entropy ∫dS along a path is independent of the path and only depends on the initial and final states. Therefore, whenever the system goes back to the initial state the production of entropy vanishes. If we assume that the system comes close to the initial state often enough, then there will never be a net production of entropy.
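(A trivial numerical illustration of the telescoping: pick any state function on a discrete state space and sum its increments along a closed path; the result vanishes whatever the path.)

```python
# The increment of a state function summed along a closed path vanishes.
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=6)  # an arbitrary "entropy" S(x) on states 0..5

path = [0, 3, 1, 4, 2, 5, 0]  # an arbitrary closed path
total = sum(S[b] - S[a] for a, b in zip(path, path[1:]))
print(total)  # telescopes to S(0) - S(0) = 0, up to float rounding
```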

Notice that so far no mention of the equilibrium vs. nonequilibrium character of the dynamics was made. But, from the above description of the production of entropy, it would seem that the rate at which entropy is produced within the system is practically zero at long enough times, which would seem uncharacteristic of a nonequilibrium system, which is expected to produce entropy forever and ever. So what’s the problem?

The fact is that the production of internal entropy is not the whole story, because it is understood that “entropy flows” away from the system and towards the environment, adding something to the overall balance.

Let’s call this other flow across the system’s boundary the entropy production. I think this is a very unfortunate nomenclature, but let’s stick to it. Because the entropy production is something that accumulates along a path in the system’s state space, it is well described by a differential 1-form σ (by definition, a 1-form is something that can be integrated along a path). Unlike the production of entropy, this form is generally not exact, which means that there is no function Σ such that σ = dΣ. As a consequence, the entropy production does not vanish when integrated along a cyclic trajectory:

∮ σ ≠ 0 along a cyclic trajectory

And, as a matter of fact, this integral is “most often positive” with respect to some probability measure over the trajectories in state space, according to the Second Law of thermodynamics. But let’s throw this issue into the closet for the moment.

If σ happens to be exact, then the system is “not a nonequilibrium system” (I could at this point turn this double negation positive by mentioning “equilibrium”, “detailed balance”, or “time-reversibility”, but these three expressions have subtle meanings that I prefer to bury in the above-mentioned closet). The connection to thermodynamics is that, for non-non-equilibrium systems, Σ comes to coincide precisely with the internal entropy of the system: Σ = S.

– – –

So far so good. Unless we are considering a non-non-equilibrium system, the entropy production and the production of entropy seem to be very different concepts with a very similar name: one is an inexact form, the other an exact form, and the two only coincide if the system is not a nonequilibrium system. So where does the confusion originate?

The reason is that people like to think of the entropy production along a path in the system as a production of entropy within some environment Y. The idea is that this environment (which can be further split into several reservoirs, if there is good reason to distinguish among them; but this too goes in the closet) accumulates entropy in an ever-increasing way as transitions occur within the system. However, as I will argue below, this idea of “adding up to the entropy balance” is very rough and imprecise, and if taken too seriously it leads to confusion, in particular to that masterwork of confusion that is the following formulation of the Second Law of thermodynamics:

“the entropy of the universe cannot decrease”.

To me, this is completely nonsensical. Let’s see why.

When people think of the universe they don’t actually mean the Universe, the multitude of things that actually surround us, but a portion of that multitude that is reasonably isolated from external influence; that is, something that ideally includes the system and the environment and has no further environment beyond its boundaries. Let XY denote this universe, with states (x,y) resolving the system’s and the environment’s states. Now, we are free to define the universe’s entropy as some state function S_U(x,y). We’re obviously back to the initial considerations in this post: as a state function, its production vanishes along any closed path in XY, so the entropy of the universe will remain fairly constant in time (in fact, if the evolution of the universe is unitary, for quantum systems, or Hamiltonian, for classical systems, then the entropy, Von Neumann’s or Shannon’s, is a constant of motion). There is no “thermodynamics of the universe”.

So let’s focus on the system by disregarding the environmental degrees of freedom, that is, by projecting (x,y) → x. The idea is that, upon this projection, the exact differential dS_U describing the universe’s increase of entropy provides an inexact differential σ in the system’s state space describing the… well, how to call it? People like to call it entropy increase, or production. Like it or not.

Let’s give a simple example for simplicity. Suppose XY = {(a,A), (a,B), (b,A)}. Now let’s project down to X = {a,b}. A closed trajectory in X is, for example, a → b → a. To this trajectory, the following paths might correspond in XY:

(a,A)→ (b,A) → (a,A)

(a,A)→ (b,A) → (a,B)

Notice that the first path is closed in the universe, but the second is not (though it is still closed in the system!). Hence, at the end of the second path the “entropy of the environment” has increased by the amount S_U(a,B) − S_U(a,A). This defines our entropy production σ in the system, and shows that the entropy production is path-dependent. Notice that this entropy production is not the differential of a state function in a quite literal way, because S_U takes two possible values at x = a. As a consequence, its increase between two system states takes different values according to the “hidden path” in the environment.
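To make the bookkeeping concrete, here is the same example with numbers (the values of S_U are arbitrary):

```python
# The two lifted paths from the example: integrate the increment of the
# universe state function S_U along each lift of the closed system path
# a -> b -> a. The numerical values of S_U are arbitrary.
S_U = {('a', 'A'): 0.0, ('a', 'B'): 1.3, ('b', 'A'): 0.5}

def entropy_change(path):
    # Sum the increments of the state function S_U along a path in XY.
    return sum(S_U[q] - S_U[p] for p, q in zip(path, path[1:]))

closed = [('a', 'A'), ('b', 'A'), ('a', 'A')]   # closed in the universe XY
open_  = [('a', 'A'), ('b', 'A'), ('a', 'B')]   # closed only in the system X

print(entropy_change(closed))  # 0: dS_U is exact in the universe
print(entropy_change(open_))   # S_U(a,B) - S_U(a,A): path-dependent in X
```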

I didn’t give this example just “for simplicity”, but because defining σ in a rigorous way out of dS_U is quite a challenging task.

Conclusion: Is it legitimate to call the entropy production a production of entropy? Yes and no: we need to keep in mind that the production of entropy in the entropy production is “virtual”. It can be imagined to happen in the environment, but it does not correspond to an actual state function called “entropy” that increases. If one wants to include the environment in the description, take system + environment = universe, and talk about an entropy function there, one is back to the situation where this entropy function cannot increase, and thus one can forget about the Second Law. This is because thermodynamics is intrinsically about open systems.

Perspective: Defining σ out of S_U and a system/environment splitting is certainly doable in discrete state spaces, though it requires a lot of analysis of special cases. It is certainly much more involved for continuous state spaces. But what is really interesting is how to generalize this question to arbitrary differential forms, discrete or continuous. And to understand the role of dualities.