I am doing a little spring-cleaning and tidying of my files, folders and emails, not so much to free the computer’s memory (which is infinite for all practical purposes), but rather to retrieve what was actually important in them. Among the several sketched novels, unwritten tales, unfinished scientific papers, and unsent love letters, I am amazed and amused by the number of brief messages and notes that, day by day, I sent myself over a timespan of several years, and that infest almost every corner of my computer.

Scared of not being able to retain pieces of information that he deems important, the myself-of-today constantly harasses the myself-of-tomorrow with plentiful messages, burdening him with the duties he was not able to tackle himself. Like a stalker, the myself-of-today chases the myself-of-tomorrow into every new environment: while some years ago he would only leave notes that opened up when logging into the computer, and that could easily be dismissed with CTRL+Q, now the myself-of-today attacks from all angles, sending emails and SMS messages, saving pdf files with SCREAMING.pdf titles on the Desktop, and even leaving hand-written notes (I found some in my wallet!). This is obviously unsustainable.

The fact is that emails accumulate (over 200 unread, most from the myself-of-yesterday), the SMS inbox is always full, and the SCREAMING.pdf files are sent to the TOREAD folder (one of many TOREAD folders, one nested into another…).

On the other hand, the myself-of-today is a real asshole. That’s because he constantly makes promises to the myself-of-yesterday: yes, of course I will look at this, I will do that, let’s be friends, etc. The day after, he forgets, he procrastinates, and finally, when enough time has passed, he’s done with the myself-of-several-weeks-ago.

However, the myself-of-today, like a real bitch, knows perfectly well that if he wants to maintain any power over the myself-of-yesterday and all his ancestors, every once in a while he should give him some consideration, just the little amount it takes for the myself-of-yesterday to stick to the grand project that the myself-of-today has, keeping well in mind that this project will soon be superseded by that of the myself-of-tomorrow…

In all this, somehow the myself-of-several-years-ago managed to slip some messages into bottles that floated in the sea of bits and bytes of my computer. They often get lost, but somehow some of them wash ashore. Most of them contain silly or unreadable messages (“It is interesting to note that he starts from considerations about gravity. In the beginning I’m hoping for a connection of gravity to the whole story, but it’s only about Etere”), but some are full of surprises, suggestions, things that I should have thought about and that, indeed, I should have thought about.

The myself-of-right-now, right now, has come up with the crazy idea of keeping a permanent inventory of all the unfinished things, the aborted projects (most aborted for good reasons…), the good and bad ideas. Let’s see what the myself-of-tomorrow will say.

A random walk…

[This was material for a very speculative broad-audience talk I gave some years ago; artwork by Francesco Vedovato]




Brownian motion was observed by the British botanist Robert Brown (1827) and used by Einstein (1905) in thermodynamics. The mean position is zero, while the mean distance from the origin grows with the square root of time (position isn’t distance!). It is good for describing diffusion, as when you drip a few drops of black ink into a glass of water and watch them spread over time. Diffusion is an irreversible process.
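The square-root-of-time scaling is easy to check in a few lines of simulation (a minimal sketch of my own, not from the talk; all names are mine):

```python
# Sketch: many 1-D random walkers. The mean position stays near zero while
# the root-mean-square distance from the origin grows like sqrt(t).
import random

def walk_stats(n_walkers=5000, n_steps=100, seed=0):
    rng = random.Random(seed)
    positions = [0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            positions[i] += rng.choice((-1, 1))   # one unbiased step
    mean = sum(positions) / n_walkers
    rms = (sum(x * x for x in positions) / n_walkers) ** 0.5
    return mean, rms

m, r = walk_stats()
print(m, r)   # mean near 0; rms near sqrt(100) = 10 after 100 steps
```

The mean position stays near zero even though individual walkers wander far away: that is the sense in which “position isn’t distance”.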


Brownian motion was also used by Bachelier (1900) for stocks (a version called geometric Brownian motion that grows on average due to inflation – Leopardi’s “magnifiche sorti e progressive”). In both cases there is a lack of information about the details, modelled through noise. We don’t know the behavior of all the agents on a stock market, nor why they do what they do. For pollen grains: lack of knowledge of the detailed interactions with the underlying gas molecules.
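Bachelier’s variant can be sketched the same way (my own illustration, with made-up drift and volatility parameters): individual geometric-Brownian paths are noisy, but their ensemble average grows deterministically like exp(μt).

```python
# Sketch: geometric Brownian motion, S <- S * exp((mu - sigma^2/2) dt + sigma sqrt(dt) xi).
# Averaged over many paths, E[S(t)] = S0 * exp(mu * t).
import math, random

def gbm_mean(s0=1.0, mu=0.05, sigma=0.2, t=1.0, n_steps=50, n_paths=20000, seed=1):
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            # Exact lognormal update over one time step
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        total += s
    return total / n_paths

print(gbm_mean(), math.exp(0.05))   # sample average vs. the deterministic exp(mu*t)
```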


Forget about pollen grains and think about your room, which is intrinsically an open system: tiny draughts from the windows, cosmic rays penetrating the walls, cockroaches from the tub, people stepping inside… A detailed account of the interaction with the environment is impossible to attain. The room belongs to an environment (the house), which belongs to an environment (the city), which belongs to a country… to the Universe. What does the Universe belong to? We will come back to this later…


Back to the room: if entropy is a measure of disorder, what is the entropy of my room? My mother thinks my room is very messy. It looks messy relative to her mental order. I think it’s not, because I know exactly where things are! My mother lacks knowledge about the state of my room, so her measure of entropy is different from mine. Entropy is a measure of ignorance. But ignorance with respect to what? Is entropy subjective?


We could acquire information about every single object within the room, then every single atom, then describe all nuclear and subnuclear interactions, then all gravitational fields, then all quantum gravitational interactions with a theory we don’t know yet… Down to what? Strings? Loops? Can we define ignorance with respect to an ultimate “atom” of reality? Well, most people who don’t work with strings think it is a dead theory (while most people working on it think it is still alive). Let’s concentrate on loops.


At the heart of quantum gravity lies the Wheeler-DeWitt equation: the Hamiltonian constraint contains the laws of physics, and the wave function of the Universe describes the state of the system (one of many possible states). The former acting on the latter vanishes! That’s the difference with the Schrödinger equation, where time appears.
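Schematically (my notation, not from the slides), the contrast is:

```latex
\hat{H}\,\Psi[\text{geometry},\,\text{matter}] = 0
\qquad \text{vs.} \qquad
i\hbar\,\frac{\partial\psi}{\partial t} = \hat{H}\,\psi
```

On the left, the Wheeler-DeWitt equation contains no time parameter at all; on the right, the Schrödinger equation evolves the state in t.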


Each portion of the Universe is a clock for every other portion of the Universe (including us). The whole Universe is not a physical system, because we can’t do physics in there. But each portion of the Universe is necessarily open. Dissipation is necessary to define time, measurement and physics. Notice that clocks have always been defined either by referring to an outer environment, or by going into the innermost depths.


… of thermodynamics:

1 – You can’t win (you can’t obtain more energy than you put in)
2 – You can’t break even (dissipation always occurs, making energy less usable)
3 – You can’t quit the game (you cannot perfectly isolate a system)

(there’s a zeroth law, but let’s leave it aside).


How entropy as ignorance increases:

– if you are given the first snapshot, you can tell right away that the system has not yet relaxed. You can tell that it is some very early time in the evolution of the system.

– if you look at the second, you can still say that it must be from earlier than some time

– if you look at the last, you can’t say anything: it might be a snapshot from any moment.

Entropy growth and loss of information determine the direction of time. They lead towards thermal equilibrium.
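The snapshot argument can be mimicked numerically (my own toy model, not from the talk): a distribution spreading by discrete diffusion has a Shannon entropy that only grows, which is exactly what lets us order the snapshots in time.

```python
# Toy model: discrete diffusion on a ring. Since the diffusion map is doubly
# stochastic, the Shannon entropy of the distribution never decreases.
import math

def diffuse(p, rate=0.25):
    # One step: each site leaks a fraction `rate` to each of its two neighbours.
    n = len(p)
    return [(1 - 2 * rate) * p[i] + rate * (p[i - 1] + p[(i + 1) % n])
            for i in range(n)]

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [1.0] + [0.0] * 19          # the "ink drop": all probability on one site
entropies = []
for _ in range(5):              # five snapshots, ten diffusion steps apart
    entropies.append(shannon(p))
    for _ in range(10):
        p = diffuse(p)

assert entropies == sorted(entropies)   # entropy never decreases
```

An early snapshot (low entropy, sharply peaked) can be dated; the late, nearly uniform ones all look alike.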


OK for where time goes. But how is it measured? By cycles. Irreversible cycles cannot go to equilibrium. Constant production of entropy is necessary to have structures. Clocks need to constantly stay out of equilibrium.


Global circulation of the atmosphere due to the forcing of the sun.
Nonequilibrium steady state. Cyclic. Alive.

In the entropy balance equation in this slide:

> 0 nonequilibrium (life)
= 0 equilibrium (death)


But wasn’t entropy subjective?
Does “reality” depend on the observer?
What if the observer has no Ph.D. in physics?

And here comes my own result: if you change perspective on entropy, the second law stays the same. Physical laws are, in the end, invariant under this symmetry: the symmetry of changing prior beliefs. Yet breaking this symmetry, and “spending” some prejudices, is necessary to be able to do physics (akin to the choice of reference frames or measurement units). The important fact is that physics gives us a dictionary to translate between different perspectives.


What is this symmetry of physics? It is how we assign probabilities to things. What is the probability of The Die? It doesn’t make sense. It makes sense to talk of the probability of rolling a die: not the object, but the process. So what is the probability of rolling the die? Is it 1/6? Well, maybe we have some prejudices about the die; for example, we might think it is loaded. Now the entropy of the system will depend on this prejudice, and so will what we learn when we roll the die (we might learn something = flux of information = flux of heat). Prejudice and learning will change according to prior beliefs (entropy and entropy flux). But the die will roll independently of what we think of it!
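A toy computation (my own illustration, with a made-up loaded prior):

```python
# The Shannon entropy of a die roll depends on our prior beliefs,
# even though the die rolls the same way regardless of what we believe.
import math

def entropy_bits(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

fair = [1 / 6] * 6              # no prejudice: maximal ignorance
loaded = [0.5] + [0.1] * 5      # prejudice: we suspect "1" is favoured

print(entropy_bits(fair))       # log2(6), about 2.585 bits
print(entropy_bits(loaded))     # lower: this prior carries less ignorance
```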

That’s what science is about: having prejudices and confronting them with facts, and updating our prejudices to newer ones. It is based on the assumption that “reality” is independent of how we describe it. Physics is not about “reality”, it is about how we measure it.


It doesn’t make sense to talk about “the entropy of an object”, or the heat released. It makes sense to talk about “how much entropy is measured” in a particular context, with respect to a certain experimental apparatus. For example, I prefer “the Higgs mechanism” to “the Higgs boson”. Language is how we describe reality, and the language of physics today is still too oriented towards objects. Some Native American languages naturally adapted to processes: “the passage of a cloud”, rather than “a cloud”.

“All things physical are information-theoretic in origin and this is a participatory universe. Observer participancy gives rise to information; and information gives rise to physics.” J. A. Wheeler.

Ethnicity, race and the APS meeting

I just registered for the APS March meeting. Among the questions asked in the application form were my ethnicity (whether Hispanic or not) and my race, to be chosen from a rather un-comprehensive list. Already I don’t understand why Hispanic is an ethnicity (therefore defined on “cultural” traits) while the others are races, defined on other sorts of traits. While the several options presented did include a box “I prefer not to answer”, I didn’t feel compelled to tick that box, given that I do have an answer: defining “races” within humankind is a failed scientific enterprise, as well explained by Guido Barbujani in The invention of races: Understanding human genome diversity (which at the moment I can only find in the original Italian), the burden of two hundred years of pseudo-scientific “scholarly” practice. I wrote what a (very probably false) anecdote attributes to Einstein, where he specified “human”. But then I repented: I’m not quite sure that it makes any sense at all to define “humans” as a race in the biological sense…




Insights in robustness and plasticity of metabolic phenotypes from large-scale metabolic modelling

Interesting talk by Zoran Nikoloski at LCSB, in two parts: a first, very theoretical part discussing concentration robustness, and a second part which remained unclear to me.

Genotypes encode metabolic networks that yield a particular metabolic phenotype. The metabolic phenotype is determined by the fluxes of chemical reactions and by metabolite levels. What determines metabolic fluxes? The concentrations of the substrates, of the active enzyme, and of other regulatory effectors (activators and repressors). Nikoloski considers enzyme kinetics with the mass-action law and thermodynamic constraints (he only mentions reversibility of reactions). The structure of the network is summarized in a metabolic network and/or in the stoichiometric matrix. The space of feasible flux distributions is shrunk by analysis of the genome, the metabolome, thermodynamics, etc.

The plasticity of fluxes and robustness of concentrations is believed to be an important characteristic of metabolic networks. A very mathematical theorem by Shinar and Feinberg,

Shinar, G., and Feinberg, M., Structural sources of robustness in biochemical reaction networks, Science 327 (2010).

relates robustness to the so-called deficiency of the network. An analogous criterion has been devised by Nikolski and coworkers

Eloundou-Mbebi, J. M. O.; Küken, A.; Omranian, N.; Kleessen, S.; Neigenfind, J.; Basler, G.; Nikoloski, Z.: A network property necessary for concentration robustness, Nature Comm. 7 (2016).

where they write:

“Our main result is based on establishing whether or not the structural deficiency changes upon removing a single component from the network. To this end, we rely on the network obtained by eliminating a given component from each complex containing the component. Removal of a component may drastically alter the network, in terms of number of nodes, linkage classes and the rank of the stoichiometric matrix. […] The idea of removing a component from biochemical reaction network has been previously employed to make statements about the possibility of the network to exhibit multistationarity. Namely, for a given set of rate constants, it has been shown that if a reaction system obtained upon removal of a component admits multiple non-degenerated positive steady states, so does the original system. Therefore, this result may be used to identify subnetworks conferring multistationarity to the entire network. Here, we establish a connection between a structural deficiency, as a key network invariant, and ACR [Absolute Concentration Robustness] for a particular component. It is this connection that allows us to apply the results to large-scale networks, typically arising in the study of metabolism.”

Therefore it appears that this criterion has wider applicability to actual metabolic networks than the one devised by Feinberg. Note that the criterion they discuss is a necessary condition for ACR, and not a sufficient one:

“Consider a mass action reaction system that for given rate constants admits a positive steady state with and without removal of a component S. If the system has ACR in species S, then the systems with and without removal of S have the same structural deficiencies.”

Therefore, if removing a species modifies the deficiency of a network, then that species is certainly not robust. In their paper, they also have a very nice plot showing how many metabolites are robust across several kingdoms of life.
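For concreteness, the deficiency in question is δ = n − ℓ − s, where n is the number of complexes, ℓ the number of linkage classes, and s the rank of the stoichiometric matrix. Here is a sketch of how it can be computed (names and code are mine, not from the papers cited), using Shinar and Feinberg’s classic example A + B → 2B, B → A:

```python
# Compute the deficiency delta = n - l - s of a mass-action reaction network.

def matrix_rank(M, tol=1e-9):
    # Rank via Gaussian elimination (enough for small integer matrices).
    M = [list(map(float, row)) for row in M]
    rank = 0
    for c in range(len(M[0]) if M else 0):
        pivot = next((r for r in range(rank, len(M)) if abs(M[r][c]) > tol), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        pv = M[rank][c]
        for r in range(len(M)):
            if r != rank:
                f = M[r][c] / pv
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def deficiency(reactions, species):
    # Each reaction is a (reactant complex, product complex) pair of dicts.
    def vec(c):
        return tuple(c.get(sp, 0) for sp in species)

    complexes = sorted({vec(side) for rxn in reactions for side in rxn})
    index = {c: i for i, c in enumerate(complexes)}
    n = len(complexes)

    # Linkage classes = connected components of the complex graph (union-find).
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for reac, prod in reactions:
        parent[find(index[vec(reac)])] = find(index[vec(prod)])
    l = len({find(i) for i in range(n)})

    # Stoichiometric matrix: one column per reaction, products minus reactants.
    S = [[prod.get(sp, 0) - reac.get(sp, 0) for reac, prod in reactions]
         for sp in species]
    return n - l - matrix_rank(S)

rxns = [({'A': 1, 'B': 1}, {'B': 2}),   # A + B -> 2B
        ({'B': 1}, {'A': 1})]           # B -> A
print(deficiency(rxns, ['A', 'B']))     # 4 complexes - 2 linkage classes - rank 1
```

Their necessary condition then amounts to computing this number twice, with and without the candidate species, and comparing.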

Maximum entropy principle: a note

Some years ago I wrote a paper debunking several claims related to the so-called “maximum entropy production principle”:

Polettini M., Fact-checking Ziegler’s maximum entropy production principle beyond the linear regime and towards steady states, Entropy 15, 2570-2584 (2013) [pdf], arXiv:1307.2052

This is one of a few critical contributions in a field that is otherwise flourishing with papers very optimistic that the principle has anything to say about anything at all, all citing each other in a self-inconsistent way. The central reference in this business is the following review:

Martyushev, L. M., and V. D. Seleznev. Maximum entropy production principle in physics, chemistry and biology, Physics reports 426, 1-45 (2006).

I have been very critical of many claims made in this review (which is a patchwork, so it is not consistently wrong all over the place, only here and there…). In response to a criticism by Andresen et al.

Andresen, B.; Zimmermann, E. C.; Ross, J, Objections to a proposal on the rate of entropy production in systems far from equilibrium, J. Chem. Phys. 81, 4676−4677 (1984).

the authors of this review wrote a reply trying to show that such criticisms fall outside of the range of application of the principle:

Martyushev, L.M., Seleznev, V.D.: The restrictions of the maximum entropy production principle, Phys A: Stat. Mech. Appl. 410, 17–21 (2014).

In a footnote, they also address my paper by claiming that

“[The entropy production rate] is not negative for any values of [the] variables. The critics of MEPP often forget about this obvious corollary of the second law”.

This outrageous little sentence has been used by many authors to dismiss my work altogether, without actually reading it. Some examples (I have seen more…):

M. Amiri and M. Modarres, An Entropy-Based Damage Characterization, Entropy 16, 6434-6463 (2014)

Ž. Bonačić Lošić, T. Donđivić, and D. Juretić, Is the catalytic activity of triosephosphate isomerase fully optimized? An investigation based on maximization of entropy production, J. Biol. Phys. (2017).

So, apparently, I don’t understand the very foundations of thermodynamics! A similar observation had been made by one of the referees of my paper:

“Author’s forces (Eq. 15) contradict the foundation of non-equilibrium thermodynamics and common sense”

to whom I replied:

“Eq.(15) is just a generic Taylor expansion of the forces in terms of the fluxes, something one should take into account if one wants to go beyond the linear regime.”

So, do I know that the entropy production should be positive? Of course I do! It’s a positive function of several variables, some of which are the fluxes flowing through the system. The linear regime is the situation in which it makes sense to approximate the entropy production as a positive quadratic form of the fluxes. Among several other observations, in my paper I show that this supposed over-arching principle only makes sense in the linear regime (I also argue that it does not say what people attribute to it, that it is actually much weaker, but let’s leave this aside…). In fact, I show that by expanding the entropy production in a Taylor series to (say) third order, the predictions of the theory already fail.

However, the fact that I expand to third order does not mean that I assume that the entropy production is a cubic function! This would be like saying that the exponential function is not positive because its cubic truncation is not:

exp x = 1 + x + x^2/2 + x^3/6 + …
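The point can be checked in a couple of lines (my own illustration):

```python
# The third-order Taylor polynomial of exp(x) goes negative for sufficiently
# negative x, while exp(x) itself is positive everywhere. Truncating an
# expansion is not the same as assuming the function IS the truncation.
import math

def taylor3(x):
    return 1 + x + x ** 2 / 2 + x ** 3 / 6

x = -3.0
print(math.exp(x))   # positive (about 0.0498)
print(taylor3(x))    # negative: 1 - 3 + 4.5 - 4.5 = -2.0
```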

So, while I don’t know whether Andresen et al.’s objections fall within the realm of application of MaxEP (if there is any), I know exactly that my objections fall perfectly within the assumptions of the review, and I know very well that the entropy production rate is positive, thank you very much. Which makes the observation of Martyushev and Seleznev an impolite and stupid mockery.

I know that all of this is not really worth my time, but I’m sick and tired of seeing this work mistreated, just for trying to bring a little clarity into a field that is otherwise as foggy as 19th-century London.

And I have not even started explaining in how many ways the MaxEP is a complete bluff…

Why I voted NO

The Italian government in charge proposed several modifications to the Italian Constitution, and I voted NO, for many reasons, the main one being that I think the modifications went in the direction of diminishing democracy.

There were lots of good reasons to vote NO, some of which have to do with the contingent political situation in Italy. I don’t care about who’s in power right now and who will be in power tomorrow. I want good rules for the game.

What was really important to me was that, according to the Constitution currently in effect, at national elections I can cast two votes, while according to the proposed new Constitution I could only cast one. To me, that makes for a 50% loss of democracy at national elections. Add to this the fact that certain local institutions (the provincie) were recently “abolished”, in the sense that they still exist but we no longer vote for their representatives.

Overall, I see this attempt to diminish democracy in Italy as part of an overall trend in Europe. I might or might not come back to this.