Smoluchowski’s demon: we take bets

At a recent workshop Luca Peliti mentioned Smoluchowski’s proposed realization of an antisymmetric “Maxwell demon”. Right afterwards, with some of the participants, we started discussing how the system would behave and decided to finally take a little bet (not much, a bottle of good-quality rum). The betting process turned out to be an interesting sociological experiment in itself, as the specification of the conditions, of the meaning of the concepts, and of the procedure by which the bet would be claimed to be won are far from settled. So here is this post, in the hope that it might draw more discussion and lead to an agreement.

So here is the situation. There is a gas in a box, with a wall in between and a small opening with a door. The door opens and closes asymmetrically, only towards one side, and there is a coiled spring exerting a restoring force to close it. The spring is embedded in the wall and the door, so that it does not interact with the gas. The system is overall isolated. We assume that before time t=0 the door is kept open by some string, and that the gas has been prepared at equilibrium with a well-defined notion of temperature (e.g. by putting it in contact with a reservoir that was removed some time in the past), the same on both sides of the box. At time t=0 the string is cut with minimal effort. As is well known in such cases, the door can occasionally be opened by molecules hitting it from one side. Hence there might or might not be a displacement of particles. We assume there is no interaction between particles nor dissipation due to interaction with the walls. The question is whether the temperature of the gas will be the same on the two sides after a sufficiently long time has passed.

[Figure: sketch of the Smoluchowski trapdoor setup]

The kind of argument I’m interested in is one where the details of the system do not matter so much: the number of particles, the mass of the door, the spring constant, etc. I have many personal reasons why I like this problem, and I have my own opinion on it.

@ LuxCNworkshop

I’ll try to live-blog as much as I can from the workshop we organized here in Luxembourg on Chemical Networks. I won’t be able to cover all talks, as I need to follow most of them very closely and cannot get distracted.

Luca Peliti, On the value of information in gambling, thermodynamics and population dynamics. Interesting table by Rivoire (2015) of analogies between gambling, thermodynamics and an evolving population, e.g. an “option” would be a “state” of a thermodynamic system, or a “type” in evolutionary dynamics. Donaldson-Matasci et al. (2010) proposed a model of the history of an evolving population. The growth rate is given by a probabilistic measure involving the average of the fitness, the Shannon entropy, and the relative entropy. Kelly’s result is that the growth rate of a population is increased by the mutual information. The analogy with thermodynamic systems was proposed by Vinkler et al. (2014) and by Rivoire (2015). Very interesting collection of references.
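Since Kelly’s result is only stated abstractly above, here is a minimal numerical sketch (my own, not from the talk) of the classic version: for a “horse race” with fair odds, betting proportionally to the conditional distribution given side information raises the growth rate by exactly the mutual information I(X;Y). The joint distribution below is a made-up toy example.

```python
import numpy as np

# Toy joint distribution p(x, y): outcome x of the race, side information y.
# Numbers are arbitrary, for illustration only.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x = p_xy.sum(axis=1)          # marginal over outcomes
p_y = p_xy.sum(axis=0)          # marginal over side information
odds = 1.0 / p_x                # "fair" odds o(x) = 1/p(x)

# Growth (doubling) rate W = sum_{x,y} p(x,y) log2( b(x|y) * o(x) )
def growth_rate(bet_given_y):
    return sum(p_xy[x, y] * np.log2(bet_given_y[x, y] * odds[x])
               for x in range(3) for y in range(2))

b_blind = np.tile(p_x[:, None], (1, 2))    # ignore y: bet proportionally to p(x)
b_informed = p_xy / p_y[None, :]           # use y: bet proportionally to p(x|y)

mutual_info = sum(p_xy[x, y] * np.log2(p_xy[x, y] / (p_x[x] * p_y[y]))
                  for x in range(3) for y in range(2))

print(growth_rate(b_informed) - growth_rate(b_blind))  # equals I(X;Y)
print(mutual_info)
```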

Hong Qian, The mathematical foundation of a landscape theory for living matter and life. We don’t understand what entropy is. Provocative question: do we understand what energy is? In biology, there are all kinds of landscapes: protein folding, cell differentiation, etc. But there is a fundamental conceptual difference between an overdamped non-driven system like a protein and an animate object… (question: are we ready to teach thermodynamics starting from Markov processes for real?). He gives a very nice basic introduction to thermodynamics based on Markov processes, which I think will be very useful for all. He proposes to replace entropy with free energy in every book of thermodynamics (I would agree on this). He also mentioned the “Langevin analog” of a chemical master equation, which is the Poisson-type time-change representation of Thomas Kurtz. He also has a nice argument that, by splitting the Fokker-Planck operator into symmetric and antisymmetric parts, the dissipative part is basically a Newtonian dynamics that preserves the free energy (just like symplectic dynamics preserves entropy). So, in a way, irreversibility comes from the deterministic part, which is an idea I’m very sympathetic to.
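As a concrete illustration of the “free energy instead of entropy” point (my own sketch, not from the talk): for any master equation with stationary distribution π, the relative entropy D(p_t || π), which plays the role of a free energy, decreases monotonically in time, while the Shannon entropy of p_t need not be monotonic. The generator below is an arbitrary 3-state example.

```python
import numpy as np
from scipy.linalg import expm

# Generator of a 3-state Markov jump process (columns sum to zero), arbitrary rates.
W = np.array([[-3.0,  1.0,  0.5],
              [ 2.0, -1.5,  1.0],
              [ 1.0,  0.5, -1.5]])

# Stationary distribution: kernel of W, normalized.
evals, evecs = np.linalg.eig(W)
pi = np.real(evecs[:, np.argmin(np.abs(evals))])
pi = pi / pi.sum()

def rel_entropy(p, q):
    return np.sum(p * np.log(p / q))

p0 = np.array([0.9, 0.05, 0.05])
for t in np.linspace(0.0, 2.0, 9):
    pt = expm(W * t) @ p0                      # solve dp/dt = W p
    print(f"t={t:4.2f}  D(p_t||pi)={rel_entropy(pt, pi):.4f}  "
          f"S(p_t)={-np.sum(pt * np.log(pt)):.4f}")
```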

Gheorghe Craciun. Boltzmann, apparently in response to Lorentz, relaxed the assumption of detailed balance to a “semi-detailed balance”, which is what we would now call complex balance. Shear and Higgins (1967, 1968), “An analog of the Boltzmann H-theorem for systems of coupled chemical reactions” and “Remarks on Lyapunov functions for systems of chemical reactions”; unfortunately they thought this would hold for any reaction network, which is obviously not true. He then traces the history of the global attractor conjecture. Quite humbly, he hardly mentions that he himself proved this outstanding conjecture of the field.

Pierre Gaspard. Remarkable talk, couldn’t take notes. Interesting final considerations on the quasi-species theory of Eigen and Schuster: the maximum genome size should be inversely proportional to the mutation rate.

Mueller. Has a nice notion of flux topes, which are somehow special to FBA, as opposed to flux modes. Furthermore, they consider the thermodynamic constraint. They also proved that, for arbitrary networks and arbitrary kinetics, optimal solutions of enzyme allocation problems are elementary flux modes. In particular, this explains low-yield pathways (e.g. the Crabtree effect, etc.).

Ouldridge. Nice explanation of how the Szilard engine paradox is resolved, not in terms of erasing a bit, but in terms of copying the state of the particle and correlating it to the state of the weight being pulled. I think this way of thinking is much cleaner, and it might also solve the usual problem with Landauer’s interpretation that dissipation is only in erasing bits (I should ask him a question about it). I think my theory of marginal statistics for Markov processes could be employed to derive results in these networks where, by the copying mechanism, you basically have a cartesian product of Markov processes (by the way, there exists a notion of cartesian product of Markov chains, see here).
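On the side remark about products of Markov chains: for two independent continuous-time chains with generators L1 and L2, the joint process on the product state space has the Kronecker-sum generator L = L1 ⊗ I + I ⊗ L2. A minimal sketch (my own, just to fix notation):

```python
import numpy as np

def kronecker_sum(L1, L2):
    """Generator of two independent Markov jump processes run in parallel."""
    I1, I2 = np.eye(L1.shape[0]), np.eye(L2.shape[0])
    return np.kron(L1, I2) + np.kron(I1, L2)

# Two arbitrary 2-state generators (columns sum to zero).
L1 = np.array([[-1.0,  2.0],
               [ 1.0, -2.0]])
L2 = np.array([[-0.5,  3.0],
               [ 0.5, -3.0]])

L = kronecker_sum(L1, L2)
print(L)
print(L.sum(axis=0))   # columns still sum to zero: a valid generator on the product space
```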

Smith. Very nice explanation of why CRN dynamics and topology are significantly more complicated than dynamics on graphs. He introduces the concept of “concurrency”, which basically is the idea that, while reactions happen between complexes, there is a concurrency of species in complexes. Goes through the Doi-Peliti formalism, moving to a basis of coherent states. A good idea is to apply the Liouvillian to the coherent state and derive conditions on the numbers, which actually give you complex balance. He has an argument that the nonequilibrium work relations are not about work (I’m sympathetic to this). Looks for stationary trajectories (using the Liouvillian as Markovian). Why do we have two fields in a system that initially only had one variable? He argues this is related to the adjoint generator on observables. Another consideration: introducing the likelihood ratio to tilt the distribution in importance sampling is a bit like introducing the two fields, the left field being like the likelihood ratio and the right one like the importance distribution. [Baish 2015] is a transformation of the fields that is appropriate to obtain a new Liouvillian with an extra total time derivative. From that representation one finds the transpose evolution for the mean number, rather than for the probability (we need to talk about duality in graphs…). In path integrals, FDRs are symmetries of the two-field representation [Kamenev 2001]. Ah: Jaynes said nutty things but also useful things. Like that.

Supriya. Defines a factorial moment that is more suitable for CRNs. Basically you consider the descending powers of the species according to their stoichiometry. Then there is an equation connecting moments, a moment hierarchy, and that equation is not that bad when it comes to factorial moments. If you take the time derivative of the factorial moment, apply the Liouvillian, and then commute all of the operators that need to be commuted, then you get a lot of other moments. What she argues is that you can get an equation that can be written solely in terms of the incidence matrix and the adjacency (Kirchhoff) matrix, as happens for the first-order moment. The key is that any factorial moment of a Poisson distribution is very simple (is it correct to say that factorial moments are to Poissonians what cumulants are to Gaussians?). Has a very nice equation for the factorial moments. You can basically say that there are no factorial moments of order larger than some number. She finds recursive relations for the ratio of consecutive steady-state factorial moments, which can be solved very nicely by running the recursion “up and down” according to asymptotic expansions. Question: can this recursion be used to observe power-law “critical distributions”, or phase transitions from a localized to an explosive state?
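A quick numerical check of the fact that makes this work (my own sketch): the k-th factorial moment of a Poisson distribution with mean λ is simply λ^k, much as all cumulants beyond the second vanish for a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 3.0, 2_000_000
x = rng.poisson(lam, size=n_samples).astype(float)

for k in range(1, 5):
    # falling factorial x(x-1)...(x-k+1)
    falling = np.ones_like(x)
    for j in range(k):
        falling *= (x - j)
    print(f"k={k}: empirical {falling.mean():.2f}   exact lambda^k = {lam**k:.2f}")
```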

Barbara Bravi. Considers subnetwork dynamics, and how to marginalize over the unobservable degrees of freedom. I like this because, while I also consider marginal thermodynamics, there is no dynamics in my thermodynamics! They have developed the right methods to project away the environmental degrees of freedom. I think what would really be interesting here is to re-do this approach with the tilted dynamics for the SCGF, to obtain marginals of the currents in subnetworks.

Mast. The origin of life: it’s not clear where the DNA-to-RNA-to-proteins-back-to-DNA cycle emerged. It is improbable that they all came along together; more probably there was an RNA world before, with RNA completely duplicating itself. It’s also unclear which of metabolism and genetics came first. They study nonequilibrium systems at large, because of chemical nonequilibrium and physical forces, like temperature gradients. Thermophoresis: “movement of molecules along a thermal gradient”. In their case: charged molecules in water (DNA, proteins). One of the arguments is that the accumulation due to thermophoresis in certain regions (e.g. in convection cells) might enhance elongation of polymers [Escalation of polymerization in a thermal gradient] (I could consider rewriting my paper on equilibrium in temperature gradients in a truly nonequilibrium situation where the underlying medium has a Bénard flow, and see theoretically whether I obtain accumulation of particles at a corner). Apparently there is a sort of phase transition to gelation.

Keil. In an equilibrium setting there is a “tyranny of the shortest”, because the shortest polymers are much faster to form. Therefore one has to go far from equilibrium.

Schuster. Excess sugar is converted into fat. Is the reverse pathway possible? How can the Inuit live on a practically carbohydrate-free diet? This is not clear. So far it’s understood that fatty acids can only be respired. Some say “fats burn in the fire of carbohydrates”. But, for example, converting fats into sugar is needed to fuel the brain. Textbook pathways do not cover the complete combinatorial multitude of biochemical conversions. Theoretical methods for studying metabolic networks: dynamic simulation, stability and bifurcation analysis, metabolic control analysis, pathway analysis, flux analysis, optimization and game theory, etc. Pathway analysis: decomposition of the network into its smallest functional entities. It doesn’t need kinetic parameters. Steady-state condition Nv = 0, with sign restrictions for irreversibility of the fluxes, whose solution set is a convex region. There might be “internal elementary modes”, by which I think he means what we call futile cycles. Different pathways can have different yields (thus allowing the system to switch from one to the other). He argues that elementary flux mode analysis, unlike FBA, does not scale to genome-scale metabolic networks. In several works they argued that there is no gluconeogenesis from fatty acids [can sugars be produced from fatty acids?].
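To make the “Nv = 0 plus sign restrictions” statement concrete, here is a minimal sketch (my own toy network, not from the talk) that computes the right null space of a small stoichiometric matrix; elementary flux modes are then particular sign-constrained generators of this cone.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix N (rows: internal metabolites, columns: reactions).
# Hypothetical chain with a bypass: v1: A -> B, v2: B -> C, v3: A -> C,
# plus exchange reactions vA (import of A) and vC (export of C).
#            v1   v2   v3   vA   vC
N = np.array([[-1,  0, -1,  1,  0],   # A
              [ 1, -1,  0,  0,  0],   # B
              [ 0,  1,  1,  0, -1]])  # C

K = null_space(N)          # orthonormal basis of the steady-state space {v : N v = 0}
print("dimension of the steady-state flux space:", K.shape[1])

# Check one candidate flux distribution: route everything through the bypass A -> C.
v = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
print("steady state?", np.allclose(N @ v, 0), "  irreversible fluxes >= 0?", np.all(v >= 0))
```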

Ewald. The idea is: killing pathogenic fungi through self-poisoning. Fungi can cause pretty bad illnesses. About as many people die from fungal infections as from tuberculosis every year.

Yordanov. Steady-state differential dose response, e.g. by knocking down a metabolite. A natural perturbation is interferon signalling. They use “Laplacian models”: zero- and first-order mass-action kinetics (basically, master equations with only stoichiometric coefficients). Theory: Gunawardena 2012, Mirzaev & Gunawardena 2013, and a lot of references to other applications. He gave a method for compression of Kirchhoff polynomials, which allows one to avoid the combinatorial explosion [Yordanov and Stelling 2016].

Poolman. Introduction to the history of the group. Now works on acetogens, a peculiar class of microbes that have a metabolism based on carbon monoxide. This might be interesting for industrial purposes, both because you might want to eliminate a poisonous gas and because it might give some useful product out. Their goal is to provide a hierarchical decomposition of large metabolic networks, by an improved “divide and conquer” algorithm. He considers kernel matrices that are orthogonal (but how often are they orthogonal? that doesn’t seem to be often the case…). He then defines “correlations between reactions” via the angle between vectors. If the coefficient is zero the reactions are disconnected; if it’s ±1 they are in the same reaction (enzyme) subset. But is metabolism really modular? Metabolism wasn’t constructed by human engineers, after all…
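A sketch of how I understand the “correlation between reactions” construction (my own reading, using an orthonormal null-space basis, which sidesteps the orthogonality worry above): each reaction corresponds to a row of the kernel matrix, and the cosine of the angle between two rows is the correlation; ±1 puts the two reactions in the same enzyme subset, 0 means they are unconstrained by each other. The toy network is the same as in the sketch above.

```python
import numpy as np
from scipy.linalg import null_space

# Same toy network as above: A -> B, B -> C, A -> C, plus exchange of A and C.
N = np.array([[-1,  0, -1,  1,  0],
              [ 1, -1,  0,  0,  0],
              [ 0,  1,  1,  0, -1]])

K = null_space(N)                           # orthonormal basis, rows indexed by reactions
norms = np.linalg.norm(K, axis=1)
corr = (K @ K.T) / np.outer(norms, norms)   # cosine of the angle between reaction rows

np.set_printoptions(precision=2, suppress=True)
print(corr)
# corr[i, j] ~ +-1 : reactions i and j carry proportional flux in every steady state
#                    (here v1 and v2, which must balance the internal metabolite B)
# corr[i, j] ~  0  : the steady-state fluxes of i and j do not constrain each other
```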

Roldán. The exponential of (minus) the entropy production is a martingale. Hence Doob’s theorems for martingales hold: about stopping times and about the probability of records. Neri, Roldán, Jülicher. They obtain probabilities of the stopping times analogous to the “Haldane” equalities of Qian, Xie and Ge (and the Qians), and then they can study the negative records of the entropy production. There’s a cumulative distribution of the infimum, and the infimum law says that ⟨S_inf(t)⟩ ≥ –k_B (remarkable!). They also argue that they have a universal infimum distribution, based on a recent paper on arXiv.
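A quick Monte Carlo sanity check of the infimum law (my own toy example, not from the talk): a biased random walk whose entropy production per step is ±ln(p/q) in units of k_B. The average of exp(−S) stays at one (martingale / integral fluctuation theorem), and the mean running infimum of S stays above −k_B, here −1.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_steps, n_traj = 0.7, 200, 20_000
lnpq = np.log(p / (1 - p))

# Steps +1 with prob p, -1 with prob 1-p; entropy production per step = step * ln(p/q), k_B = 1.
steps = np.where(rng.random((n_traj, n_steps)) < p, 1.0, -1.0)
S = np.cumsum(steps * lnpq, axis=1)                        # entropy production along each trajectory
S = np.concatenate([np.zeros((n_traj, 1)), S], axis=1)     # S(0) = 0

print("mean exp(-S) after 5 steps :", np.exp(-S[:, 5]).mean())                      # ~ 1
print("mean infimum of S over time:", np.minimum.accumulate(S, axis=1)[:, -1].mean(), ">= -1")
```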

Bar-Even, Noor. Intro: proposed a new reference potential defined not at one molar but at one micromolar concentration (more reasonable for the biochemistry of the molecule). First story: how to measure ΔG°’s? They argued that applying the stoichiometric matrix to the ΔG’s of formation gives the ΔG’s of reaction, that the latter are easier to calculate, but that the problem is much underconstrained (here I could ask about the parallel with inorganic chemistry, where for the ΔG’s of formation you need to compare to the most stable compound of that species). Group contributions [Mavrovouniotis 1991, Jankowski 2008, Noor 2012]: assume that most thermodynamic properties are somehow additive, by splitting molecules into large groups. However, group contributions cannot be combined with existing known ΔG’s. [Noor, Haraldsdottir, Milo, Fleming 2012], and they brought it to eQuilibrator [Flamholz, Noor, Bar-Even, Milo (2012)] (in inorganic chemistry the ΔG of formation of something is computed by taking the elements of the periodic table and setting to 0 the G of their most stable compound. Is this group contribution method the same thing, but assigning 0 to groups? Then this will create two disjoint areas of chemical thermodynamics, with numbers that do not really combine with one another). Second story: definition of thermodynamic unfeasibility. Unfortunately, they also define it on a single reaction, while I believe it can only be defined on cycles.
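The “stoichiometric matrix applied to formation energies” statement in matrix form, as a sketch with made-up numbers (only to fix conventions, not real data): Δ_r G = Nᵀ Δ_f G, and the inverse problem of recovering formation energies from reaction energies is underconstrained whenever Nᵀ has a nontrivial null space.

```python
import numpy as np

# Toy system with hypothetical numbers (NOT real formation energies), kJ/mol.
# Compounds: A, B, C. Reactions: R1: A -> B, R2: B -> C.
N = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]])              # stoichiometric matrix, compounds x reactions

dGf = np.array([-10.0, -25.0, -5.0])  # hypothetical formation energies
dGr = N.T @ dGf                       # reaction energies: [-15.0, 20.0]
print("reaction Delta G's:", dGr)

# The inverse problem is underconstrained: adding any vector in the null space of N.T
# (here, e.g., a uniform offset to all formation energies) leaves every reaction Delta G unchanged.
print("shifted formation energies give:", N.T @ (dGf + 7.0))
```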

Why is NAD(P)H the universal electron carrier? By the phenomenon that the “rich get richer” (didn’t understand that). Basically he seems to argue that, by some coincidence, NADPH can both accept and donate electrons from most things that need or have one; all such (oxidoreduction) reactions are almost always in acceptance, but they are almost completely reversible. Smith has an argument that close to equilibrium one can operate these reactions very efficiently (a local equilibrium in the middle of strong forces might enhance efficiency? Is this related to tight coupling? Might be… we need to think about this). Not dissipating too much and being very, very efficient looks a lot like my picture of the quantum dot, where one level is basically at equilibrium with the reservoir and the others are very, very far. That’s because the primary need of the machine is not to process electrons! It’s to process food! Electrons only need to be there whenever they are needed.

Look at [Bar-Even, etc., Thermodynamic constraints shape the structure of carbon fixation pathways].

[The problem with group contributions: it is as if one claimed that the ΔG of formation of O2 equals the sum of the G’s of formation of single O atoms. It is not related to the ΔG of reactions (which is just due to the combinatorics of populations): it is an assumption on the energetics, and that sucks.]

Fleming. Interesting consideration: it is not so clear that differential equations are useful in biochemistry, because it’s not clear what the initial conditions are; moreover, the rates are often not known, and it’s difficult to fit the rates to actual parameters. He defines elementary reactions as those that are specified only by their stoichiometry (and not, for example, by Hill parameters). Constraint-based modelling is based on the idea of ruling out physicochemically and biochemically infeasible network states with constraints, and then choosing some objective function, which is typically a linear function of the reaction velocities. Argument for high-dimensional modelling: organisms are the units of selection, hence if you want to understand how nonequilibrium thermodynamics is related to biology you have to get to that scale. Mentions the problem of infeasible cycles: what biologists do is tweak the bounds l < v < u until the problem goes away, which works but is not systematic. He proposes an approach that is somewhere in between kinetic modelling and constraint-based modelling. He mentions duplomonotone functions, which I had never heard of: f(x)^T ∇f(x) f(x) > 0; they developed an algorithm (to find steady states?) based on this property.
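Since duplomonotonicity was new to me too, here is a minimal sketch (my own toy example, not Fleming’s algorithm) that numerically checks the condition f(x)^T J_f(x) f(x) > 0 for a made-up map and then uses the naive damped iteration x ← x − λ f(x) to drive f to zero; their actual line-search scheme is more refined.

```python
import numpy as np

def f(x):
    """Toy nonlinear map whose zero we seek (hypothetical, not a metabolic model)."""
    return np.array([2.0 * x[0] + 0.3 * np.sin(x[1]),
                     1.5 * x[1] - 0.2 * x[0]])

def jacobian(x, eps=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)   # central finite differences
    return J

rng = np.random.default_rng(2)
# Check the duplomonotone condition f^T J f > 0 at random points.
print(all(f(x) @ jacobian(x) @ f(x) > 0 for x in rng.normal(size=(100, 2))))

# Naive damped iteration; the decrease of ||f|| is what duplomonotonicity buys you.
x, lam = np.array([3.0, -2.0]), 0.2
for _ in range(50):
    x = x - lam * f(x)
print("approximate zero:", x, "  residual:", np.linalg.norm(f(x)))
```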

Estevez-Torres. Can we synthesize a smiley face by reaction-diffusion equations? First observation of reaction-diffusion: 1906, with a proposed reaction-front velocity v ∝ √(diffusion constant), later explained by Kolmogorov and by Fisher (Fisher-KPP): ∂_t a = a(1−a) + D ∇²a. Then the Belousov-Zhabotinsky reaction (1950-70), with Belousov having a lot of problems publishing it. Winfree AT, Science 1972: vortex concentrations. Turing: ∂_t a = f(a,b) + D_a ∇²a, ∂_t b = g(a,b) + D_b ∇²b; if the Jacobian has J_11 > 0 and J_22 < 0 (one species is a self-stimulator and the other a self-inhibitor), and D_b >> D_a, then one can have these Turing patterns. With kinesin: Nedelec ’97. Loose et al., Science 2008. But the problem with the BZ reaction is that, while it’s sustained, it is neither programmable nor biocompatible. Purified kinesin motors + microtubules, and purified Min proteins, are sustained and biocompatible, but not programmable. People in chemistry synthesize molecules but not networks. DNA is a good candidate for this problem because it has a sequence, you can get the reactivity, and you can build networks from the structure of your molecules [Isalan et al., PLoS Biol. 2005]. So for example “A activates B, B activates C, C inhibits C” can be done, which is otherwise difficult. Chemistry for physicists: you buy your things, you mix them up, and you wait some time. Rondelez invented a system using two types of ssDNA species to make dynamic networks [Montagne et al. 2011, Mol Sys Biol]. The idea is that it’s a sustained system in a closed reactor, but the concentration of something is so big that it stays at a steady-state-like value for a long enough time. The nice thing about DNA is that you can buy it with the features you want. See [Zadorin et al., PRL 2015]: two fronts travelling in the two directions that don’t see each other because they are independent (they only share the resources, but the DNAs don’t interact). However, you can also make them inhibit each other and stimulate themselves, and then the two fronts collide and stop when they collide. Can we then make a material capable of development (a simple artificial embryo)? This might be useful to make materials that follow a developmental program. So first there is pattern formation, still undifferentiated, then there is morphogenesis due to external forces, then cell differentiation and finally growth. He shows an example that is totally synthetic. Wolpert, J. Theor. Biol. 1969: the French flag model, an archetypal model of pattern formation with three colors. They make a Polish flag (two colors) that is stable for more than 15 hours, but then a parasite emerges and consumes all of the resources very fast; they tried to push the parasite back, but that’s another story. I didn’t understand how making a French flag reproduces an embryo.
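A minimal finite-difference sketch of the Fisher-KPP front mentioned above (my own, with arbitrary parameters; the asymptotic front speed in these dimensionless units is of order 2√D):

```python
import numpy as np

# Fisher-KPP: da/dt = a(1 - a) + D d2a/dx2, explicit Euler + centered differences.
# Parameters are arbitrary; dt is kept below the diffusive stability limit dx^2 / (2D).
L, nx, D, dt, n_steps = 100.0, 500, 1.0, 0.01, 4_000
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
a = np.where(x < 5.0, 1.0, 0.0)          # initial condition: invaded region on the left

def front_position(profile):
    return x[np.argmin(np.abs(profile - 0.5))]   # where the profile crosses 1/2

for step in range(n_steps + 1):
    if step % 1_000 == 0:
        print(f"t = {step*dt:5.1f}   front at x = {front_position(a):6.2f}")
    lap = np.zeros_like(a)
    lap[1:-1] = (a[2:] - 2.0 * a[1:-1] + a[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude no-flux boundaries
    a = a + dt * (a * (1.0 - a) + D * lap)
# The front settles to a roughly constant speed, close to 2*sqrt(D) in these units.
```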

Bo. Interesting observation: having a time-scale separation is a problem for simulations, because you need to simulate a lot of fast events before a slow one happens. If you manage to coarse-grain the fast ones, then you can simulate more effectively. The question is: given a network with randomly chosen fast and slow rates, we want a criterion for whether we can do a fast-slow separation (I should really consider the fate of my marginal FR when there is a fast-slow time-scale separation between the observable and the unobservable state spaces; discuss this with Gianmaria). (Another idea for coarse-graining is to consider the eigenvector associated with the dominant eigenvalue: if it is well localized, then one can coarse-grain on the localized subspaces.) Bo: if the fast rates are strongly connected, then there is no hope to coarse-grain. Otherwise, identify the blocks and coarse-grain them. Example: stochastic Michaelis-Menten with the quasi-equilibrium or the slow complex-formation hypothesis. (It would be nice to have a result for how the eigenvector behaves.)
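My reading of the criterion in code form (a sketch with a made-up rate matrix, not from the talk): build the graph of “fast” transitions only, find its strongly connected components, and treat each component as a candidate block to be lumped; if the whole state space is one fast strongly connected component, there is nothing to coarse-grain in this sense.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical rate matrix: W[i, j] = rate of the jump j -> i (off-diagonal entries),
# with a time-scale separation between "fast" rates O(100) and "slow" rates O(1).
W = np.array([[  0., 150.,   1.,   0.],
              [200.,   0.,   0.,   2.],
              [  0.,   1.,   0., 120.],
              [  1.,   0., 180.,   0.]])

threshold = 50.0                         # what counts as "fast"
fast = csr_matrix((W > threshold).astype(int))   # graph of fast transitions only
n_blocks, labels = connected_components(fast, directed=True, connection='strong')

print("number of candidate blocks:", n_blocks)
print("block label of each state :", labels)
# Here states {0, 1} and {2, 3} each form a fast, strongly connected block:
# they can be lumped, and only the slow rates between the two blocks survive.
```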

Skupin. Microscopic dynamics of a mitochondrion [Pietrobon 1985].

MacKie @ Budapest

Here are the slides of my talk:

seminar2

 

Vladana Vukojevic. Interested in the neurology of alcoholics, trying to understand the physiological emergence of dependence. Quenching of chemical oscillations. The work originated from that of Graae Sørensen in Copenhagen [Quenching of chemical oscillations, JPC 1987]. The Briggs-Rauscher reaction oscillates (there are YouTube videos on this). A stirred reactor with input and output of reactants; oscillations depend on the feeding rate, and you can have a supercritical Hopf bifurcation. A quench is a deliberate perturbation of the system, whereby you drive the system from the limit cycle towards the stable manifold. They studied all species that are intermediates in the reaction. There are several models of the BR reaction, so apparently it is not clear how many intermediates there are. Mentions some stoichiometric stability analysis. They consider a mathematical model for the HPA axis based on cortisol levels, where ethanol is one of the molecules, so by drinking a glass of wine one alters the cycle. The conclusion is that the HPA cycle is affected by ethanol; for example, the peak in cortisol occurs in the evening, which affects sleeping patterns.

 

 

The entropy production is not the production of entropy

Main message: the concepts “entropy” and “production” do not commute, if they make sense at all.

I said out loud at the JETC that entropy production is not the production of entropy, and I believe many were disoriented. So let’s try to clarify what I mean. The confusion arises because people freely carry lingo from equilibrium thermodynamics out of equilibrium. At equilibrium, state functions are great. But out of equilibrium, talking about state functions might create confusion. I believe the major culprit for this shameful situation is nonetheless Clausius (I’ve heard Boltzmann medalist Daan Frenkel say something similar, so it’s not completely my responsibility).

Entropy, like energy etc., is a state function. That is, for a system X with states x, entropy is some function of the state, S(x), a scalar. Full stop. At the moment I don’t care how this function is determined, whether it is uniquely defined (it is not), or whether it is interesting at all (not so much). Let us further assume that there is an internal dynamics within the system. As a consequence, there is a certain increment of entropy dS as the system moves from one state to another. Increments of scalar functions are very special types of differentials, called exact 1-forms. Because it is exact, the production of entropy ∫dS along a path is independent of the path and only depends on the initial and final states. Therefore, whenever the system goes back to the initial state the production of entropy vanishes. If we assume that the system comes close to the initial state often enough, then there will never be a net production of entropy.

Notice that so far no mention of the equilibrium vs. nonequilibrium character of the dynamics has been made. But, from the above description of the production of entropy, it would seem that the rate at which entropy is produced within the system is practically zero for long enough times, which would seem to be uncharacteristic of a nonequilibrium system, which is expected to produce entropy forever and ever. So what’s the problem?

The fact is that the production of internal entropy is not the whole story, because it is understood that “entropy flows” away from the system and towards the environment, adding something to the overall balance.

Let’s call this other flow across the system’s boundary the entropy production. I think this is a very unfortunate nomenclature, but let’s stick to it. Because entropy production is something that accumulates along a path in the system’s state space, it is well described by a differential 1-form σ (by definition, a 1-form is something that can be integrated along a path). Unlike the production of entropy, this form is generally not exact, which means that there is no function Σ such that σ = dΣ. As a consequence, the entropy production does not vanish when integrated along a cyclic trajectory:

∫σ ≠ 0 along a cyclic trajectory

And, as a matter of fact, this integral is “most often positive” with respect to some probability measure over the trajectories in state space, according to the 2nd Law of thermodynamics. But let’s throw this issue into the closet for the moment.

If σ happens to be exact, then the system is “not a nonequilibrium system” (I could at this point turn this double negation positive by mentioning “equilibrium”, “detailed balance”, or “time-reversibility”, but these three expressions have subtle meanings that I prefer to bury in the above-mentioned closet). The connection to thermodynamics is that, for non-non-equilibrium systems, Σ comes to coincide precisely with the internal entropy of the system, Σ = S.

– – –

So far so good. Unless we are considering a non-non-equilibrium system, entropy production and production of entropy seem to be very different concepts with a very similar name: the production of entropy is an exact form, the entropy production is an inexact form, and the two only coincide if the system is not a nonequilibrium system. So where does the confusion originate?

The reason is that people like to think of entropy production along a path occurring within the system, as production of entropy within some environment Y. The idea is that such environment (which can be further split into several reservoirs, if there is good reason to distinguish among them, but this is also in the closet) accumulates entropy in an ever-increasing way as transitions occur within the system. However, as I will try to argue below, this idea of “adding up to the entropy balance” is very rough and imprecise, and if taken too seriously leads to confusion, in particular to that masterwork of confusion that is the following formulation of the Second Law of thermodynamics:

“the entropy of the universe cannot decrease”.

To me, this is completely nonsensical. Let’s see why.

When people think of the universe they don’t actually mean the Universe, the multitude of things that actually surround us, but a portion of that multitude that is reasonably isolated from external influence, that is, something that ideally includes the system and the environment and that has no further environment beyond its boundaries. Let XY denote this universe, with states (x,y) resolving the system’s and the environment’s states. Now, we are free to define the universe entropy as some state function S_U(x,y). We are obviously back to the initial considerations in this post: as a state function, its production vanishes along any closed path in XY, so that the entropy of the universe will remain fairly constant in time (in fact, if the evolution of the universe is unitary, for quantum systems, or Hamiltonian, for classical systems, then the entropy, Von Neumann’s or Shannon’s, is a constant of motion). There is no “thermodynamics of the universe”.

So let’s focus on the system by disregarding the environmental degrees of freedom, that is, by projecting (x,y) → x. The idea is that, upon this projection, the exact differential dS_U describing the increase of the universe’s entropy provides an inexact differential σ in the system’s state space describing the… well, how to call it? People like to call it entropy increase, or production. Like it or not.

Let’s give a simple example, for simplicity. Suppose XY = {(a,A), (a,B), (b,A)}. Now let’s project down to X = {a,b}. A closed trajectory in X is, for example, a → b → a. To this trajectory, the following paths might correspond in XY:

(a,A)→ (b,A) → (a,A)

(a,A)→ (b,A) → (a,B)

Notice that the first path is closed in the universe, but the second is not (though it is still closed in the system!). Hence, at the end of that second path the “entropy of the environment” has increased by the amount S_U(a,B) – S_U(a,A). This defines our entropy production σ in the system, and shows that the entropy production is path-dependent. Notice that this entropy production is not the differential of a state function in a quite literal way, because S_U takes two possible values at x = a. As a consequence, its increase between two system states takes different values according to the “hidden path” in the environment.
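The same toy example in a few lines of bookkeeping (with an arbitrary, made-up choice of S_U): the closed system path a → b → a lifts to two different universe paths, and the “entropy produced” depends on which one is realized.

```python
# States of the universe XY and an arbitrary entropy function S_U on them (made-up values).
S_U = {('a', 'A'): 1.0, ('a', 'B'): 2.5, ('b', 'A'): 1.7}

def entropy_production(path):
    """Sum of S_U increments along a universe path, i.e. S_U(end) - S_U(start)."""
    return S_U[path[-1]] - S_U[path[0]]

path_1 = [('a', 'A'), ('b', 'A'), ('a', 'A')]   # closed in the universe
path_2 = [('a', 'A'), ('b', 'A'), ('a', 'B')]   # closed in the system only

for p in (path_1, path_2):
    system_path = [x for x, y in p]
    print(system_path, "-> integrated sigma =", entropy_production(p))
# Both universe paths project to the same closed system path a -> b -> a,
# yet the integrated sigma differs: 0.0 for the first, 1.5 for the second.
```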

I didn’t give this example just “for simplicity”, but because defining σ in a rigorous way out of dS_U is a quite challenging task.

Conclusion: is it legitimate to call the entropy production a production of entropy? Yes and no: we need to keep in mind that the production of entropy involved in the entropy production is “virtual”. It can be figured to happen in the environment, but it does not correspond to an actual state function called “entropy” that increases. If one wants to include the environment in the description, have system + environment = universe, and talk about an entropy function there, then one is back to the situation where this entropy function cannot increase, and thus one can forget about the second law. This is because thermodynamics is intrinsically about open systems.

Perspective: defining σ out of S_U and a system/environment splitting is certainly doable in discrete state spaces, though it requires a lot of analysis of special cases. It is certainly much more involved when it comes to continuous state spaces. But what is really interesting is how to generalize this question to arbitrary differential forms, discrete or continuous. And to understand the role of dualities.

Carnot efficiency @ phase transitions?

For some time I have been collecting pieces of evidence that efficient engines at nonvanishing power output might be feasible when the working substance is close to a phase transition, because of the physical fact that there is a lot of so-called “latent heat” by which a small perturbation of the thermodynamic forces (e.g. of the temperature) might give rise to a huge current.

In our PRL on efficiency fluctuations based on a Gaussian model, we noticed that the two peaks of the efficiency p.d.f. both converge to Carnot when the covariance matrix becomes singular, commenting that “it is tempting to parallel this behavior to the paradigm of criticality at phase transitions, where fluctuations become macroscopic, correlations diverge, and the covariance matrix degenerates“.

Another element in this direction comes from the analysis of Campisi and Fazio,

Campisi, Michele, and Rosario Fazio. “The power of a critical heat engine.” Nature communications 7 (2016).

who noticed that, when the working substance of a quantum Otto cycle consists of N coupled components, in the large-N limit close to a critical point a conspiracy of critical exponents might lead to a super-linear scaling of efficiency versus power. They conclude that “obstacles hindering the realization of the critical powerful Carnot engines appear to be of technological nature, rather than fundamental”.

Now a new piece of evidence comes from this recent manuscript:

Patrick Pietzonka, Udo Seifert, Universal trade-off between power, efficiency and constancy in steady-state heat engines, arXiv:1705.05817

They prove the following relation between the efficiency, the power, and the power’s scaled variance Δ_P:

η_C/η ≥ 1 + 2 T_c P / Δ_P

where the Carnot efficiency and the temperature of the cold reservoir appear. What is very interesting here is that, if one wants to maintain a finite efficiency bound, the fluctuations of the power output should be of the same order as the power output itself. Which is precisely one of the basic tenets of phase transitions! This can be seen to be the case in our Gaussian model of efficiency fluctuations, and it calls for the realization of a more explicit model working as an engine and displaying this sort of critical behavior.
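For reference, rearranging the bound above for the efficiency (straightforward algebra, same symbols):

η ≤ η_C / (1 + 2 T_c P / Δ_P),  equivalently  P ≤ (Δ_P / 2T_c) (η_C − η)/η,

so approaching η → η_C at fixed nonzero power requires Δ_P / (2T_c) to be large compared to the power output itself.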