Insights in robustness and plasticity of metabolic phenotypes from large-scale metabolic modelling

Interesting talk by Zoran Nikoloski at the LCSB, in two parts: a first, very theoretical part discussing concentration robustness, and a second part which remained unclear to me.

Genotypes encode metabolic networks that yield particular metabolic phenotypes. The metabolic phenotype is determined by the fluxes of chemical reactions and by metabolite levels. What determines metabolic fluxes? The concentrations of substrates, of the active enzymes, and of other regulatory effectors (activators and repressors). Nikoloski considers enzyme kinetics with the mass-action law and thermodynamic constraints (he only mentions the reversibility of reactions). The structure of the network is summarized in a metabolic graph and/or in the stoichiometric matrix. The space of feasible flux distributions is shrunk by analysis of the genome, the metabolome, thermodynamics, etc.

The plasticity of fluxes and the robustness of concentrations are believed to be important characteristics of metabolic networks. A very mathematical theorem by Shinar and Feinberg,

Shinar, G., and Feinberg, M., Structural sources of robustness in biochemical reaction networks, Science 327 (2010).

relates robustness to the so-called deficiency of the network. An analogous criterion has been devised by Nikoloski and coworkers

Eloundou-Mbebi, J. M. O.; Küken, A.; Omranian, N.; Kleessen, S.; Neigenfind, J.; Basler, G.; Nikoloski, Z.: A network property necessary for concentration robustness, Nature Comm. 7 (2016).

where they write:

“Our main result is based on establishing whether or not the structural deficiency changes upon removing a single component from the network. To this end, we rely on the network obtained by eliminating a given component from each complex containing the component. Removal of a component may drastically alter the network, in terms of number of nodes, linkage classes and the rank of the stoichiometric matrix. […] The idea of removing a component from biochemical reaction network has been previously employed to make statements about the possibility of the network to exhibit multistationarity. Namely, for a given set of rate constants, it has been shown that if a reaction system obtained upon removal of a component admits multiple non-degenerated positive steady states, so does the original system. Therefore, this result may be used to identify subnetworks conferring multistationarity to the entire network. Here, we establish a connection between a structural deficiency, as a key network invariant, and ACR [Absolute Concentration Robustness] for a particular component. It is this connection that allows us to apply the results to large-scale networks, typically arising in the study of metabolism.”

It therefore appears that this criterion has wider applicability to actual metabolic networks than the one devised by Feinberg. Note, however, that the criterion they discuss is a necessary condition for ACR, not a sufficient one:

“Consider a mass action reaction system that for given rate constants admits a positive steady state with and without removal of a component S. If the system has ACR in species S, then the systems with and without removal of S have the same structural deficiencies.”

Therefore, if removing a species modifies the deficiency of a network, then that species is certainly not robust. In their paper, they also have a very nice plot showing how many metabolites are robust across several kingdoms of life.
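The structural deficiency these criteria hinge on is δ = n − ℓ − s, where n is the number of complexes, ℓ the number of linkage classes, and s the rank of the stoichiometric matrix. A minimal sketch of how one might compute it (the representation of complexes and reactions is my own, not taken from either paper):

```python
import numpy as np

def deficiency(complexes, reactions, species):
    """Structural deficiency delta = n - l - s of a chemical reaction network:
    n = number of complexes, l = number of linkage classes,
    s = rank of the stoichiometric matrix.
    complexes: list of {species: stoichiometric coefficient} dicts
    reactions: list of (i, j) pairs meaning complex i -> complex j
    """
    n = len(complexes)
    # linkage classes: connected components of the undirected complex graph
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in reactions:
        parent[find(i)] = find(j)
    l = len({find(i) for i in range(n)})
    # stoichiometric matrix: one column per reaction, product minus reactant
    idx = {sp: k for k, sp in enumerate(species)}
    S = np.zeros((len(species), len(reactions)))
    for r, (i, j) in enumerate(reactions):
        for sp, c in complexes[j].items():
            S[idx[sp], r] += c
        for sp, c in complexes[i].items():
            S[idx[sp], r] -= c
    s = np.linalg.matrix_rank(S)
    return n - l - s
```

For the textbook system A+B → 2B, B → A (Shinar and Feinberg's toy example of ACR in A) this gives n = 4, ℓ = 2, s = 1, hence δ = 1; the necessary condition above then amounts to recomputing δ on the network obtained by deleting a species from every complex containing it, and comparing.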

Science @ Festivaletteratura 2016

In a few days Festivaletteratura, a major literature festival that takes place in the beautiful Renaissance city of Mantova, in northern Italy, will celebrate its 20th birthday. In this context, for quite a long time now, I have had the chance to give advice on the organization of events of scientific dissemination and discussion.

As a scientist, I try to instill the same ethos that inspires (or should inspire) scientific inquiry: avoiding fads and stereotypes, inviting real experts, asking them to go in-depth where necessary, presenting all of the doubts and weaknesses behind scientific discovery*. With time, I noticed that the public was more interested in the content matter than in its narration, which prompted me to propose (among other things) a series of shorter events, at the blackboard, where scientists can attack one and only one concept or key technical aspect of their work that they deem important.


[Photos: Gian Francesco del Giudice, Francesca Vidotto, Giovanni Bietti]

It is impossible to measure whether these events are a success (they are always incredibly crowded, but then they are also ticket-free at a crowded festival…), but that’s not the point. People come and listen, sitting on the staircase of the beautiful cathedral of Mantova, and even if on average they only understand, say, 1% of what is being said (an average that includes a handful who understand 50% and have their perspective on something completely subverted…), I believe these events succeed in giving them the impression that knowledge is a very, very subtle issue. This is far more important than being fascinated by 100% of the usual misleading metaphors, which only serve to deceive people into supporting… science?!

In proposing this kind of event, I need to step out of the modes of the industry of mass culture, of “infotainment”, and of “science popularization”, which, in the words of one of our guests this year, most often lead to a conception of “science-as-Hollywood with its superstars, their vanity bolstered by the media industry, and a new style of popular science publishing.” [H. Collins, Gravity’s Ghost]. I hope, with this kind of operation, to help avoid what could be “the fate of science – to be a secular religion servicing the economy and the entertainment industry. But science […] has the potential to lead, not just follow.” [ibid.]

Along these lines, this year Festivaletteratura presents a multi-faceted project on gravitational waves, one event on Darwinism revisited, one about science vs. the humanities, and, somewhat off the main track, a recollection of Ivan Illich’s thought. Below, I attempt a very coarse translation into English of the texts I wrote in Italian.

THE SOUND OF GRAVITY – A new window on the universe (Alberto Vecchio & Amedeo Balbi)

At 11:50:45 (UTC+1) on September 14, 2015, three mirrors oscillated by less than a millionth of the size of a proton. It was the signal of the first gravitational wave ever observed by humankind. According to Einstein’s theory, every accelerated object emits a wave that propagates through the whole Universe. The direct observation of such waves long eluded scientists, and deluded more than one of them. Gravitational waves are extremely weak, so much so that detecting them took one of the most complex scientific enterprises of all time, the LIGO-Virgo collaboration. Where does a gravitational wave come from, and how do we observe it? What new prospects does it open for the observation of the universe?

THE SOUND OF GRAVITY – Information’s long pilgrimage: from black holes to scientific discovery (Eugenio Coccia & Harry Collins)

According to Stephen Hawking, not only does God play dice, but sometimes he throws them where they cannot be seen: inside a black hole, an infinitely deep well of information. Nevertheless, we recently “heard” the merging of two black holes. To finally reconstruct this piece of information we needed one of the biggest research projects of all time, the LIGO-Virgo collaboration. The physicist Eugenio Coccia (Virgo experiment) and the sociologist Harry Collins (embedded in the LIGO-Virgo collaboration) weave a dialogue on several meta-levels about what a gravitational wave is, and what the scientific fact “gravitational wave” is.

THE SOUND OF GRAVITY – Waves of knowledge (Harry Collins)

When a gravitational wave hits a detector, scientists have to decide whether that event is “real”. Its ripples then propagate further, to the scientific community and to society, before it is eventually accepted. Is scientific knowledge, then, a social construct?

THE SOUND OF GRAVITY – Black Holes: alpha & omega of the Universe (Eugenio Coccia)

“The black hole teaches us that space can be crumpled like a piece of paper into an infinitesimal dot, that time can be extinguished like a blown-out flame, and that the laws of physics that we regard as ‘sacred,’ as immutable, are anything but.” (J. A. Wheeler).

THE SOUND OF GRAVITY – GW150914 (Alberto Vecchio)

When two black holes collide, their signal can be picked up from a very long distance. With some simple graphics and basic notions of physics, it is possible to establish what GW150914, the first gravitational wave ever observed by humankind, actually is.

Is Darwinism going through a crisis? (Massimo Pigliucci & Guido Barbujani)

Can the variety of biological species be explained by the mechanisms of mutation and selection alone? Darwin himself posed this dilemma, dedicating a whole chapter of On the Origin of Species to the difficulties of his own theory. His doubts propagate to this day: the field of evolutionary biology is undergoing a phase of internal criticism, in which defenders of the so-called Modern Synthesis, proposed in the 1930s and ’40s, face the advocates of a new Extended Synthesis that would broaden the original Darwinism. Is this a scientific revolution? Is it just a small – but due – change of course? Or is it sheer polemics?

Human races? (Guido Barbujani)

While the concept of race loses ground among scientists, the rapid transformations our society is undergoing are digging it out again. The idea of race seems intuitive, yet now that we understand DNA well we realize that there is no such thing as distinct biological races.

Science without philosophy? (Massimo Pigliucci)

According to Albert Einstein, scientists are not intellectuals unless they are also philosophers, while for Stephen Hawking philosophy is useless because it does not advance science. Where does the growing divide between the technical and the human sciences come from, and where is it going?

Blogging as a modest exercise of intellectualism (Massimo Pigliucci)

The figure of the intellectual as a reference point on topics of social relevance is going through a crisis, even though blogs and social networks are formidable instruments for broadcasting one’s thought. The biologist and philosopher Massimo Pigliucci talks about his experience in the public arena, and about the attitude that both parties (academics and laypeople) should keep in order to make the conversation fruitful.

Ivan Illich: nemesis of modernity (Franco La Cecla and Piero Zanini)

“Societies in which most people depend for most of their goods and services on the personal whim, kindness, or skill of another are called underdeveloped, while those in which living has been transformed into a process of ordering from an all-encompassing store catalogue are called advanced.” Ivan Illich was one of the deepest, most radical and organic critics of modernity and of the corruption of its institutions, including school, health, and religion. The anthropologist Franco La Cecla and the geographer Piero Zanini will engage with the impressive relevance of his thought today, which is often abused, ignored, and misinterpreted.

Binge (Franco La Cecla)

A binge of words and images about food is devastating its sense. The mystification extends to the very presumption that food is a cultural fact.

Welcome to Padania: the geography of landscape disaster (Filippo Minelli, Emanuele Galesi, Paola Bonora and Wu Ming 1)

[As documented with irony in the graphic project Padania Classics, the “Padanian macro-region” exists: a sequence of empty warehouses, roads, junctions and bypasses, abandoned construction sites, quarries, landfills, parking lots, shopping malls, roundabouts, gold-buying shops, planted with fake palm groves and corporate signs, studded with advertising billboards, orange perforated tarps, white-and-red plastic Jersey barriers. Landscape decay is only one aspect of the complex phenomenon of land consumption, which devours territory without end (in every sense of the word) and without any planning, with repercussions in every area of human geography. A conversation between the artist behind the project, Filippo Minelli, the journalist Emanuele Galesi, and the geographer Paola Bonora (Fermiamo il consumo di suolo), introduced by Wu Ming 1 (Cent’anni a Nordest).]


* Actually, because I only invite active scientists who do not make their living out of science popularization and other “derivative values” of science [cit. Collins], I do not have to ask them to adhere to these principles. More often, I actually have to play the devil’s advocate and try to head off some obvious communication errors – like presenting 80 slides in a one-hour talk, etc.



Live-blogging from STATPHYS26 /5

Last day. Blogging will be feeble.

Force from non-equilibrium fluctuations, M. Kardar. Fluctuation force: the pressure of particles against the walls. As the temperature goes to zero, the relevant length scale becomes larger than the distance between the walls and one obtains Casimir forces [see “The theory of molecular attractive forces between solids”], a quantum effect. In classical systems you can get this kind of interaction when you have long-range correlations: from Goldstone modes, or in a nonequilibrium system. Rytov: fluctuational QED. Kardar considers an analogue of Rytov’s theory based on fluctuating hydrodynamics, which is hydrodynamics with noise.

Final session. This was actually quite an interesting session: Lecomte’s talk on finite-time corrections to large deviations and dynamical phase transitions, and an interesting talk on coalescent random walks (which are analytically solvable). Hartmann proposed a method to calculate work statistics using a computational trick based on generating the whole series of random numbers first, and then accepting events based on a modified “tilted” Metropolis rule, something that might provide an alternative to other algorithms like cloning. A nice large-deviation approach, which they applied to an Ising model with 128^2 spins (quite a lot already). He also launched…
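As I understood it, the trick can be sketched roughly as follows: treat the entire sequence of underlying random numbers as the state of a Markov chain, redraw one number at a time, and accept or reject with a Metropolis rule tilted by the observable. A toy reconstruction under my own assumptions (names, parameters and the generic observable(u) are mine, not Hartmann's actual algorithm):

```python
import math, random

def tilted_metropolis(observable, n_noise, theta, n_steps, seed=0):
    """Sample noise sequences u with weight ~ exp(-observable(u)/theta)
    by Metropolis moves that redraw one underlying random number at a time.
    observable: maps a list of uniforms in [0,1) to a scalar (e.g. work W).
    theta: tilt parameter; small theta biases toward small observable values.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n_noise)]
    w = observable(u)
    samples = []
    for _ in range(n_steps):
        k = rng.randrange(n_noise)
        old = u[k]
        u[k] = rng.random()              # propose: redraw one random number
        w_new = observable(u)
        if rng.random() < math.exp(min(0.0, -(w_new - w) / theta)):
            w = w_new                    # accept the tilted move
        else:
            u[k] = old                   # reject, restore
        samples.append(w)
    return samples
```

Reweighting the sampled values by exp(W/θ) then gives access to the far tails of P(W), much as cloning algorithms do by population dynamics.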


Live-blogging from STATPHYS26 /4

Active matter, S. Ramaswamy. From a title like that one might expect a general overview of what active matter is and of its properties. However, the talk soon steers towards a presentation of a simple system of Langevin equations, and the claim is made that “active matter is just a Langevin system”, which I find highly contrived [by the way, I already had this doubt at the Parma workshop, with respect to a talk by G. Gonnella on active particles modelled with Langevin equations: what I don’t understand is how an external field could account for the peculiar individual behavior of active particles, but indeed it might be the case and I just have to figure this out on my own. Surely, the system Gonnella was considering was more elaborate and complex than the one presented here]. In any case, I don’t see how this theoretical model relates to the rest of the results presented in the talk.

Network geometry, Ginestra Bianconi. Simplicial complexes are used in quantum gravity [where they emerge from an underlying theory]. She wants to apply nonequilibrium ideas to network geometry. She makes the simplicial complex (or its underlying skeletal network) grow by adding stuff. The boundary of small-world networks scales like the volume, and she says they use the master equation, though I don’t know in which respect. They define a generalized degree, which is the number of generalized multi-hedra incident to a given one [it appears that this has nothing to do with the Laplacian, which I’m in love with]. It’s all a story of defining things, e.g. introducing an arbitrary probability for attachment, and with this rule they obtain growing structures – so basically it seems to me that they are defining an algorithm to build growing high-dimensional structures that are scale-free [by construction?].

Models of antibiotic action on bacterial growth, M. R. Evans. Active matter: self-propelled constituents [by this very definition I doubt that they can be described just in terms of external fields and noise]. Sometimes I have the impression that “active matter” is a new word for “automata”. Generally, active matter lives in non-equilibrium states exhibiting collective behavior, and does not obey the fluctuation-dissipation theorem [is this consistent with modelling via Langevin equations, for which the FDR holds?]. Evans has a sort of Fokker-Planck equation where the motility itself depends on the distribution, so it is non-linear and cannot descend from a Langevin equation [see my notes on the first talk today as to whether this is or isn’t possible]. Another model he uses is random walkers with memory. So my question remains open: is it possible to model active matter with Langevin equations? From this talk it would seem not.

Thermodynamics of motility-induced phase separation, Solon et al. More active-matter talks later: run-and-tumble particles, etc., moved by “internal forces” (what that means is still obscure to me). For microscopic models, hard-core repulsion with a potential [Fily and Marchetti PRL 2012, Stenhammar PRL 2013, Takatori et al. PRL 2014, Solon et al. PRL 2015, Redner et al. PRL 2013, Mallory et al. PRE] or other choices. The Cahn-Hilliard equation can be used to understand liquid-gas phase separation in equilibrium. It is a very nonlinear equation for the time evolution of the density.
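For reference, the (equilibrium) Cahn-Hilliard equation for a conserved density field \phi(x,t) reads

\partial_t \phi = \nabla^2 \mu, \qquad \mu = f'(\phi) - \kappa \nabla^2 \phi

with f a double-well local free energy and \kappa a square-gradient coefficient: the density relaxes by mass-conserving diffusion of the chemical potential \mu. As I understand it, the active versions discussed here modify \mu with gradient terms that do not derive from any free energy, which is where the thermodynamics of the phase separation becomes subtle.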

Collective behavior in animal groups, I. Giardina. Flocks of birds: global order, scale-free correlations, collective turns (two individuals are able to influence each other even if they are far apart). Swarms of midges: no global order, no polarization; collective behavior is related not to anti-predatory behavior but to mating; still, correlations show quasi-critical behavior and collective response. Collective turns are interesting because they can be monitored to study information propagation in animal groups. The birds turn because there is a predator, but not necessarily: they can also respond to noise. I really like this talk because it analyzes processes with “dynamical” concepts and does not fill the analysis with pseudo-scientific concepts from equilibrium statistical mechanics. They track trajectories quite precisely (including oscillations due to flapping). There is first a linear propagation of the turning wave, like a normal wave, with a final damping; interestingly, its speed is four times larger than the birds’ velocity, in the birds’ reference frame. There are several different speeds according to the event, and no clear dependence on the size of the flock, etc. So the questions are: why linear propagation, why weak attenuation, what sets the propagation speed, and what triggers turning? As a model of flocking there is the Vicsek model, \partial_t \varphi = \nabla^2 \varphi + \xi (or a discrete version of this), where \varphi is the angle (and here there is already some approximation). Dispersion relation: \omega = i J k^2. This is not what they actually find, and indeed the Vicsek model is not very reasonable here because it requires an immediate response of the birds.
So they have to take into account a rotational inertia and conservation laws, and they obtain a second-order PDE; in the limit of zero noise the rotational invariance of the system survives, and one obtains \omega = c_s k and x = c_s t, which is what you expect (just because it’s a second-order wave equation). This also predicts that the speed can be expressed in terms of measurable quantities, so there is a clear-cut prediction of the theory. The fit is quite nice [Nature Phys. 10, 2014]. In simulations they see that, according to the Vicsek model, if one makes one bird turn the others don’t follow, whereas with inertia, if one turns, all the others do as well (that’s really intriguing: would every bird be able to decide for all? What if it is half-crazy? Doesn’t this make the flock very unstable?). So what triggers spontaneous turns of the flock? Simple statmech arguments on spontaneous fluctuations don’t give such frequent turns. Two things are different with respect to the Heisenberg model: the network is irregular, and interactions are not symmetric because of directionality, so the zero mode of the Laplacian (which sets the relaxation time) does not scale with N as in the Heisenberg model. [Personal note: after all, I am somewhat intrigued by this use of spin models to study dynamical information such as currents.]

Serious work, good talk. The room was packed on occasion of this invited talk. By far one of the best things I’ve heard so far.
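For concreteness, here is a minimal sketch of the standard (non-inertial) Vicsek model that the talk takes as a starting point; parameter values and implementation details are my own choices:

```python
import numpy as np

def vicsek_step(pos, theta, L, r, eta, v0, rng):
    """One update of the 2D Vicsek model: each particle aligns with the mean
    heading of neighbours within radius r (self included), plus angular noise."""
    n = len(theta)
    # pairwise displacements with periodic (minimum-image) boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) < r ** 2
    # circular average of neighbours' headings
    sx = np.where(neigh, np.cos(theta)[None, :], 0.0).sum(1)
    sy = np.where(neigh, np.sin(theta)[None, :], 0.0).sum(1)
    theta_new = np.arctan2(sy, sx) + eta * (rng.random(n) - 0.5)
    step = v0 * np.stack([np.cos(theta_new), np.sin(theta_new)], -1)
    return (pos + step) % L, theta_new

def polarization(theta):
    """Global order parameter: 1 for a fully aligned flock, ~0 for disorder."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
```

At low noise the polarization approaches 1 (an ordered flock); the talk's point is that this first-order-in-time alignment relaxes perturbations diffusively (\omega = iJk^2), whereas adding rotational inertia yields propagating waves (\omega = c_s k), as observed in real flocks.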

Searching for the Tracy-Widom distribution in non-equilibrium processes, H. Spohn. Vintage slides, great clarity, as usual. TASEP with all the particles initially prepared on one side; one can ask for the time-integrated current across the origin and look at the typical profile. The integrated current behaves as

\Phi(0,t) \sim - t/4 + (\Gamma t)^{1/3} \xi_{TW}

where \xi_{TW} is a universal random variable (and the dependence on time to the power one-third already tells you that there is no central-limit theorem). It is distributed as the largest eigenvalue of GUE random matrices, Tracy-Widom 1994. In what sense is it universal? (The 1/4 and \Gamma are model-dependent.) Does TW show up in 1-dimensional fluids? You have to look for it in a subtle way. He considers a model of a harmonic chain running with Hamiltonian dynamics and an initial domain wall, in search of a “rarefaction wave”. There is a hyperbolic conservation law in (x,t) space, and because of this the current is a curl (in x-t space). Sort of. But I must confess I’m lost already.
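A quick numerical sketch of the TASEP setup as I understood it (finite lattice, random-sequential dynamics; the sizes and times are my own choices):

```python
import random

def tasep_current(n_sites, t_max, seed=0):
    """TASEP with step initial data: left half of a finite lattice filled,
    right half empty. Random-sequential updates (each bond fires at rate 1);
    returns the time-integrated current across the middle bond."""
    rng = random.Random(seed)
    occ = [1] * (n_sites // 2) + [0] * (n_sites - n_sites // 2)
    mid = n_sites // 2 - 1            # bond between sites mid and mid+1
    q = 0                              # integrated current across that bond
    t = 0.0
    while t < t_max:
        t += rng.expovariate(n_sites - 1)   # total rate = number of bonds
        i = rng.randrange(n_sites - 1)
        if occ[i] == 1 and occ[i + 1] == 0:
            occ[i], occ[i + 1] = 0, 1
            if i == mid:
                q += 1
    return q
```

The mean of q grows like t/4 (in the convention where the current is counted positively), and it is the (\Gamma t)^{1/3} fluctuation around this growth that follows the Tracy-Widom distribution.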

Rigorous bound on energy absorption…, T. Mori. A formulation of the ETH: every energy eigenstate is locally indistinguishable from thermal equilibrium, \langle \phi_a | O | \phi_a \rangle = \langle O \rangle_{thermal}. Floquet ETH: the temperature in the steady state should be infinite (?!).

Second-order response theory, M. Kruger. An example of nonlinear response is nonlinear optics, where you have a powerful laser and the generation of a “second-harmonic wave”. The system evolves along a path with some probability, with an action that can be separated into a symmetric and an antisymmetric part; as usual, one considers the fluctuation-dissipation contribution linear in the perturbation plus a second-order term, in which both the activity and the symmetric part appear (a three-body correlation function). The question he asks is: is this quantity experimentally relevant? [Basu, Kruger, Lazarescu etc.] Apparently they did an experiment with a single colloidal particle performing Brownian motion, an anharmonic overdamped oscillator, etc. As usual, these experiments exactly fit the theoretical framework of Stochastic Thermodynamics, so we don’t expect anything strange to happen. They did a Fourier analysis of the response. It is not clear to me which data exactly they fit, but it seems to be quite a clean result.

Temperature response of nonequilibrium systems, M. Baiesi. Susceptibility to a change of one temperature in a system out of equilibrium. [Note: they use the overdamped equations with two baths, that we know don’t make too much sense]. The rest I already kind of know.

Thermodynamics of phase coexistence in nonequilibrium systems, R. Dickman. There are two systems that can exchange particles, but such that the current ends up being zero; if one of them is a reservoir, then you can use it to measure the chemical potential of the other (?). Driven lattice gas with nearest-neighbour exclusion interactions, with a critical density. Katz-Lebowitz-Spohn model. But it is all very vague.

Nonequilibrium thermodynamic potentials, G. Verley. I’m not too fond of the word “potential” applied to nonequilibrium systems, but if it allows people to make contact with what they know, then let it be… Nice talk, very pedagogical: he managed to give an intuitive picture of what changing the temperature of a system does to both its antisymmetric and its symmetric properties. Interesting Onsager relations that I should take a look at in view of my upcoming work.


Live-blogging from STATPHYS26 /3

After the first two days, it is remarkable to notice that the nonequilibrium parallel session is by far the most populated. That’s encouraging…

Fluid models as scaling limits of systems of particles, L. Saint-Raymond. From the microscopic Newton equations, to a mesoscopic description in terms of Boltzmann’s kinetic equation, to a macroscopic description as a continuous fluid obeying the equations of hydrodynamics. Boltzmann equation: f(t,x,v) is the fraction of particles at position x with velocity v at time t. Particles are only transported by their velocity in vacuum, and the velocity is only changed by collisions. Of course, at some point a dissipative step (in the eye of the beholder) will have to be taken. One obtains

\partial_t f + v \cdot \nabla_x f = \alpha \int \int [f(t,x,v') f(t,x,v'_2) - f(t,x,v) f(t,x,v_2)] \, |(v-v_2)\cdot \nu_2| \, dv_2 \, d\nu_2

Mass, momentum and kinetic energy are collision invariants [Q: is angular momentum as well?]. E.g. mass:

\partial_t \int f \, dv + \nabla_x \cdot \int v f \, dv = 0

If we have a well-prescribed profile for f(x,v,t), we obtain from this a hydrodynamic equation. The Lyapunov functional of the Boltzmann equation,

S(t) = - \int f \log f dx dv

is an increasing function of time. That this is the case for the Boltzmann equation is fine; but that it should be the case for any system is a crazy idea, and lots of people are obsessed with it. The maximum of the entropy, constrained on the conserved mass, momentum and energy, is attained at

\log f(v) = \gamma + u\cdot v + \beta v^2

This is true in the limit where the collision process is much faster than transport, at long times. By plugging this form into the conservation laws one obtains the Navier-Stokes equations, which are then the quasi-classical, mean-field sort of limit of the Boltzmann equation. Notice that from the Newton equations to the Boltzmann equation some irreversibility has entered (because the entropy is a constant of motion of Newtonian mechanics), somehow embedded in the collision integral: but it’s not clear where such dissipation is. Maybe in the angular momentum, which might not be conserved? [Gianmaria says that angular momentum is also conserved. So what is not conserved?]
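Concretely, multiplying the Boltzmann equation by the collision invariants 1, v, |v|^2/2, integrating over v, and closing with the local Maxwellian yields at leading order the compressible Euler equations,

\partial_t \rho + \nabla_x \cdot (\rho u) = 0

\partial_t (\rho u) + \nabla_x \cdot (\rho u \otimes u) + \nabla_x p = 0

\partial_t E + \nabla_x \cdot ((E + p) u) = 0

with the dissipative Navier-Stokes terms appearing only at the next order of the Chapman-Enskog expansion in the Knudsen number.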

Statistical point of view: the starting point is the Liouville equation for the N particles, and one is interested in the first marginal with respect to one particle (that’s where the dissipation will come about, in this N-to-1 process). At some point one resorts to the BBGKY hierarchy, where the molecular dynamics can be analyzed in terms of all possible histories, which give rise to collision trees, from the final time back to the initial one (you want to retrace the “parents” in the tree). The difference between the Boltzmann hierarchy and the BBGKY hierarchy is that in one of them you cannot have recollisions, i.e. the meeting of two particles that already met and hence are already correlated. There exists a theorem that tells you convergence after just a few collisions, but not at large times, where one would recover the fluid dynamics.

[An idea: the BBGKY-kind of methods truncate the equations for the moments of some variable; but what if one considers another variable, or a coordinate transformation of a variable, which in some way is “optimal”?]

Human rights session. Apparently it’s a tradition instituted by J. Lebowitz at the time of the Cold War, when it was difficult for Russian scientists to travel through the Iron Curtain. Out of over-concern with security measures, a resolution of the United Nations meant that five North Korean scientists had to be expelled from SISSA and other institutions, to avoid (I guess) providing them with some sort of strategic military information (they were working on string theory and DNA…). [By the way, I was somehow relieved that the first mention of a violation of human rights was a charge against the UN, since I have the eerie impression that human-rights talk often serves as a vacuous propaganda tool when it comes from this sort of international institution]. Stefano Ruffo presents the case of Omid Kokabee, who was arrested in Teheran possibly because he refused to work on military projects.

The disorder created by entropy is in the mind, Daan Frenkel. Promising foundational talk. Entropy: the pre-history, the computer age, puzzles and sand. Statistical mechanics pre-existed in thermodynamics; Sadi Carnot built the entire language (apparently he thought that heat had to be conserved, and missed that it was heat + work; otherwise he would have gotten all of thermodynamics). Thermodynamics: 1st law, energy is conserved; 2nd law, heat does not spontaneously flow from cold to hot. Thermodynamics would not be so hated by students if we stuck to these formulations (sic!). We must blame Clausius, who defined a quantity called entropy, found that in a closed system it never decreases, and left us that famous sentence that the entropy of the universe never decreases. But he didn’t say what entropy is, and we had to wait for Boltzmann. The person who actually first wrote the Boltzmann formula was Max Planck, who, inspired by Boltzmann, wrote S = k log W + const. (the constant was then lost). And it was Max Planck who proposed to put that formula on Boltzmann’s monument. Popular view of entropy (Wikipedia): entropy is a measure of disorder. This is very often true, but it is NOT the second law, and it is dangerous to interpret it in this way [I so much agree with this! But I also think the confusion is embedded in thinking of the second law as a statement about a thermodynamic potential (an exact form), which it is not: it is about the inexact “flow” form, which can only be interpreted as an entropy difference in reversible systems. So, in a way, the confusion about entropy arises because one keeps talking about a potential for a 1-form that is not exact: which is precisely what happens when one moves from Carnot’s statement to Clausius’s]. E.g. hard-sphere freezing (the “Kirkwood-Alder transition”, whose simulations were run by a woman who went unacknowledged) is an increase in order driven by entropy.
Lesson: entropic ordering can lead to complex structures [here I think we should point out that the word “entropic” is sometimes not related to an actual entropy, but just to the understanding that certain processes are not “energetic” (what this difference amounts to is also mysterious); but at least this becomes a discourse about processes and not about states]. “If I can’t simulate it, then I can’t understand it.” He then goes into his favourite free-energy calculations and starts computing entropies. In one of them, he observes that not all volumes are equally alike and uses the Gibbs entropy [I find that the interesting foundational premise is a bit betrayed by the development: the results are all about calculating “the entropy”]. In one of his systems, he finds that the log N! factor matters, and he tries to connect this with the Gibbs paradox but without quantum mechanics, in case you don’t have an operational procedure to distinguish particles (while stating, with Kant, that entropy is all about the paradigms in one’s mind). Entropy and society: BREXIT people campaigning against entropy (and I would be among them); another one with a placard saying something like “I don’t trust the experts and science but I vote” (and I agree with this again…).

Ultimate statistical physics: fluorescence of a single atom, Yves Pomeau. Very technical.

Leo Kadanoff memorial, M. Feigenbaum. After a saddening introduction by Procaccia, who observed how some of Kadanoff’s friends are either dead, like Kadanoff himself, or so old that they cannot easily travel to conferences, emphasizing that a generation is gone – whereby I had the eerie sensation that he was implying most of these guys will soon be dead as well – he left the stage to Feigenbaum. After a lyrical prologue about the celestial eyes of Leo, Feigenbaum took the chance to talk about his own results (one of which absolutely crucial, in his own words) in the light of his friendship with Leo, and lingered on a few episodes that were meant to give an impression of the humanity of Kadanoff, but which instead returned the image of a very competitive, overly proud person.

Live-blogging from STATPHYS26 /2

Statistical physics for real biological networks, W. Bialek. When you look back at history, the attempt to apply statphys to life was to (over?)simplify and disconnect from experiment. What changed recently is the amount and quality of data. After a quite generic introduction, he concentrates on the question “what is the probability distribution?” of whatever, and then attempts an answer in terms of usual equilibrium concepts such as energy and maximum entropy (consistent with some constraints). With flocks of birds, he will “try the idea” that the most important thing is the correlation of velocities between neighbours. He writes down a sort of Hamiltonian and then computes the maximum-entropy model consistent with these measurements. I’m not a fan of maximum-entropy methods because they seem to me to be attempts to objectivize what is really just a subjective judgement; and in doing this one might not realize that one is not making statements about nature, but about our way of describing nature (but is it possible to do otherwise? That’s a tricky question). (This also came up as an answer to a question, where he said that MaxEnt is a method to build the minimal model that contains no further structure than what is needed to reproduce the data; but what kind of analysis of the data, what kind of structure one wants to see in the data, he does not take into consideration.) This is a sort of Bayesian inverse reasoning where things eventually work because they work, and if they don’t work one moves to something else, so I hardly believe it has any predictive power, and overall I don’t see a unifying, inspiring principle in this talk. I’m surprised that searching for the word “nonequilibrium” in his draft Biophysics: Searching for Principles I only find one marginal instance; even the chapter on chemical reaction dynamics treats diffusion only in conservative fields.
I don’t believe that equilibrium statistical mechanics has anything whatsoever to say about biological systems; it is a usual tendency that people fall in love with their favourite model or scheme and try to apply it to just about anything. That happened with the Ising model and with “self-organized criticality”, and it will also happen to nonequilibrium stat mech at some point. He openly states that “What I’ve done is I built the simplest model that is consistent with a set of data”, and then he checks that it predicts things correctly; but for this process to be scientifically sound one should show, or at least argue, that the set of “predictions” is significantly different from the facts to which the model was fit. Moreover, I see the possibility of a problem with “overanalysis”, which is well known for example in large collaborations such as LIGO-Virgo: the tendency to run many different analyses, some of which eventually turn out right. It might be that there is more substance behind this talk (which is mostly a collection of well-known words and ideas from the field, vague enough that nobody could possibly disagree), but at the moment I don’t see this as a “search for principles” in biophysics.
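The MaxEnt recipe he describes can be made concrete with a toy example (entirely my own sketch, not Bialek’s flock model): among all distributions over a finite set of states that reproduce a single measured mean, the maximum-entropy one has the Boltzmann form p_i ∝ exp(−λ x_i), with the Lagrange multiplier λ fixed by the data.

```python
import numpy as np
from scipy.optimize import brentq

# States of a toy observable and a "measured" constraint: its mean
x = np.array([0.0, 1.0, 2.0, 3.0])
target_mean = 1.2

def mean_at(lam):
    # Mean of x under the Boltzmann-form distribution p_i ∝ exp(-lam * x_i)
    p = np.exp(-lam * x)
    p /= p.sum()
    return p @ x

# Solve for the Lagrange multiplier so the MaxEnt form matches the data;
# mean_at is monotonically decreasing in lam, so a bracketing solver works
lam = brentq(lambda l: mean_at(l) - target_mean, -10.0, 10.0)
p = np.exp(-lam * x)
p /= p.sum()
```

Maximizing −Σ p log p subject to normalization and the fixed mean gives the exponential form by Lagrange multipliers; the code merely solves for λ. This is exactly the “minimal model consistent with the data”, and my objection above is about what this procedure does and does not tell you about nature, not about the mechanics of it.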

Floquet quantum many-body physics, R. Moessner. Equilibrium thermodynamics: equilibration, and thermalization via “entropy maximization”: everything can be described by a few parameters. He wants to address thermodynamics for coherent quantum dynamics (no baths, just the wave function). “Thermodynamics holds as a matter of course”. Mentions “eigenstate thermalization” [Deutsch, Srednicki]. So apparently it’s precisely in that line of research where things don’t make sense to me; of course at some point approximations will come in that open up the system. Soon enough, in fact: periodic driving is one such thing; he considers tides on a beach and talks about the “Hamiltonian”. Anyway, let’s go to the math. He takes H(t+T) = H(t) [“Something that goes out of the window is equilibration”: and otherwise?] and then defines H_{Floq} = (i/T) \log U(T), with U(T) the one-period evolution operator. This Floquet Hamiltonian has nice properties. By the way, I think that the whole program of ETH is the obsession with “energy” that people have.
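A minimal numerical sketch of the Floquet construction (my own toy example, not from the talk: a single qubit under a two-step piecewise-constant drive, ħ = 1). One builds the one-period propagator U(T) and then takes H_{Floq} = (i/T) log U(T); note the matrix logarithm’s branch choice means the quasi-energies are only defined modulo 2π/T.

```python
import numpy as np
from scipy.linalg import expm, logm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-step periodic drive: H1 for the first half-period, H2 for the second
T = 1.0
H1, H2 = 1.3 * sz, 0.7 * sx

# One-period evolution operator (time ordering: later factor on the left)
U = expm(-1j * H2 * T / 2) @ expm(-1j * H1 * T / 2)

# Floquet Hamiltonian H_Floq = (i/T) log U(T), principal branch
HF = (1j / T) * logm(U)
```

By construction expm(-1j*T*HF) reproduces U exactly, and since U is unitary, HF comes out Hermitian (up to numerical error) — the stroboscopic dynamics looks like evolution under a static effective Hamiltonian.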

NonEq Stat Mech of systems with long-range interactions, Y. Levin. Core–halo structure of particles with long-range interactions. He says that this structure violates the second law of thermodynamics (I guess, in its pointless formulation as “entropy should be maximized”, which should mean that the system should go to a uniform distribution). [Question: in what sense would that violate the second law?] He then concludes that the system should be nonequilibrium, on the basis that it does not have a Maxwell-Boltzmann distribution. Fortunately he states that one should just go back to basics, which is kinetics. Fortunately, because that’s the only thing that makes sense: to consider processes and not names of things. [Question: by nonequilibrium one often means that there is a sustained current through the system: is there a sustained current in this system at its steady state?] Not surprisingly, under the Vlasov dynamics the entropy is a constant of motion, and this should not be a surprise! [Levin et al., Phys. Rep.] That’s because the volume of phase space is preserved. He has to introduce a dissipation mechanism, which lies in the coarse-graining of the phase space: once again, it’s in the eye of the beholder! That’s where dissipative dynamics ensues. Who decides in which coordinates these cells should form a lattice? There is no favourite canonical set of variables*.
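For the record, the one-line reason (my own sketch, in one sign convention) why the Gibbs entropy S[f] = -\int f \log f \, d\Gamma is conserved: f is advected by an incompressible Hamiltonian flow, \partial_t f = \{H, f\}, and since \partial_f (f\log f) = \log f + 1, the chain rule gives

\frac{dS}{dt} = -\int \{H, f\} \,(\log f + 1)\, d\Gamma = -\int \{H, f \log f\}\, d\Gamma = 0,

because the phase-space integral of a Poisson bracket with H vanishes (it is a total divergence). So any entropy growth must come from coarse-graining f over cells, which is exactly where the arbitrariness I complain about enters.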

Statistical mechanics of general equilibrium theory, M. Marsili. Assumptions: markets are competitive, price-taking behavior, single-period economy, welfare theorems, etc. General equilibrium theory states that you can obtain Pareto-optimal situations. [Question: beyond the specific assumptions that regulate markets, isn’t the very paradigm of “equilibrium” shaky, given that there are lots of external driving forces acting on these markets?] Sheer reductionism at its highest expression.

A general comment: unfortunately this is not a conference where it is possible to ask questions and start discussions easily; most people just stick to their communities. As in huge societies where in principle there is the possibility to interact with anybody, in practice people find enclosures and niches.

Phase separation out of equilibrium, T. Speck. Colloidal suspension in shear flow, with broken detailed balance. Phase separation of active particles, the single particle has symmetry-breaking.

Thermalization of a quantum system from first principles, Ithier and Benaych-Georges. At some point they deliberately introduce some randomness into the problem, with a high-dimensional perturbation of the reduced density matrix. They argue that this is the same motivation as Wigner’s and Dyson’s, and they obtain the phenomenon of concentration of measure. The math is quite straightforward, so I should be able to work it out from their paper. This talk is only based on an arXiv preprint and a paper “in preparation”.

Dissipation bound for thermodynamic control, B. Machta. He uses the Fisher-Rao metric and states that the total entropy production of a protocol is bounded from below in terms of the Fisher-Rao distance between its endpoints [also: Sivak & Crooks]. He claims that there is a finite cost to control, for example, the Carnot engine and make it into a cycle, and they estimate this quantity. He models the system in terms of a Hamiltonian depending on some parameters. This might be the most interesting talk today. [Q: Where does the stochasticity enter? Not clear.] [Q: The Fisher-Rao information metric is a measure of distance in a space of probability distributions: where does this space of distributions come from? Also, a classical result in estimation theory is the Cramér-Rao inequality: is this related to your bound?] He obtains:

\langle \Delta S_{tot}\rangle \geq 2 \mathcal{L}(\lambda_0,\lambda_f)

B. B. Machta, Dissipation bound for thermodynamic control, Phys. Rev. Lett. 115, 260603 (2015).
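For context, as I understand it (my own recollection of the standard definitions, in the spirit of Crooks’ thermodynamic length, not necessarily Machta’s exact notation), \mathcal{L}(\lambda_0,\lambda_f) is the geodesic Fisher-Rao distance between the endpoint distributions,

\mathcal{L}(\lambda_0,\lambda_f) = \min_{\lambda(s)} \int_0^1 \sqrt{ \dot{\lambda}^i \, g_{ij}(\lambda) \, \dot{\lambda}^j } \, ds, \qquad g_{ij}(\lambda) = \langle \partial_i \log \rho_\lambda \, \partial_j \log \rho_\lambda \rangle_{\rho_\lambda},

so the bound would say that no protocol connecting \lambda_0 to \lambda_f, however slow, dissipates less than twice this distance — which is what makes the claimed cost of control finite even in the quasistatic limit.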

Entropy production rate near nonequilibrium phase transitions, J. D. Noh. The paper [Sevilla et al., J. Stat. Phys. 2014] is interesting. Also this talk is interesting but I’m tired.

MAXENT out of equilibrium, I. Ford. He argues that MaxEnt is not only a principle of logical inference, but that it has a basis in stochastic dynamics: entropy is configurational uncertainty, and it increases under typical dynamics. But he wants to maximize some components of the entropy production. He considers a Fokker-Planck equation with a nonconstant temperature (that’s what he means by nonequilibrium). He considers the mean entropy production. He then interprets the heat transfer as the change of uncertainty in the environment. He has the strange formula:

\frac{ d \langle \Delta s_{tot} \rangle }{dt} \geq 0

which is not the second law, it is something stronger. What it means is not clear to me and I don’t believe it holds in general stochastic systems. But it is a conjecture that can be checked (possibly my paper on convexity might already contain the answer…).
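One cheap way to probe the conjecture (my own toy check, in the Markov-jump rather than Ford’s Fokker-Planck setting): for a master equation with fixed rates, d\langle \Delta s_{tot} \rangle/dt is the usual entropy production rate, which is nonnegative term by term, each flux contribution having the form (a − b) log(a/b) ≥ 0. So in this autonomous case the inequality holds identically; whether it survives Ford’s nonconstant-temperature setting is the real question, and this sketch does not address that.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random rate matrix: W[i, j] is the jump rate j -> i, zero on the diagonal
W = rng.uniform(0.1, 1.0, (n, n))
np.fill_diagonal(W, 0.0)
# Generator of the master equation dp/dt = G @ p (columns sum to zero)
G = W - np.diag(W.sum(axis=0))

def entropy_production_rate(p):
    # sigma = (1/2) sum_{i != j} (W_ij p_j - W_ji p_i) log(W_ij p_j / (W_ji p_i)),
    # a sum of terms (a - b) log(a/b) >= 0
    s = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                a, b = W[i, j] * p[j], W[j, i] * p[i]
                s += 0.5 * (a - b) * np.log(a / b)
    return s

# Relax an arbitrary initial distribution with explicit Euler steps,
# recording the entropy production rate along the way
p = np.array([0.7, 0.1, 0.1, 0.1])
dt = 1e-3
sigmas = []
for _ in range(5000):
    sigmas.append(entropy_production_rate(p))
    p = p + dt * (G @ p)
```

Every recorded sigma is nonnegative by construction, and the probability normalization is preserved since the columns of G sum to zero.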

Canonical theory of dissipative systems, M. Suzuki. Wonderful vintage presentation, hand-written. Also the concepts are vintage. So sweet.

* This would be an interesting research project: to study different “equilibration” mechanisms by coarse-graining systems that have different sets of canonical variables.