This week the University of Luxembourg hosts Luxembourg Out-of-Equilibrium, a workshop dedicated to recent advances in nonequilibrium physics. The workshop brings together members of roughly two different communities, the “nonequilibrium statistical mechanics / stochastic thermodynamics” community (NESM) and the “soft condensed matter / molecular dynamics” community (MD). I’ll try to live-blog my notes and thoughts as the seminars proceed. For clarity I will try to specify what comes from one community and what comes from the other, though of course the boundary is not so sharp.
The first day is dedicated to short presentations by (relatively) younger researchers, introduced by two general lectures, one per community: Nonequilibrium Molecular Dynamics by Michael Allen, and Stochastic Thermodynamics by Udo Seifert.
– Nonequilibrium Molecular Dynamics (M. Allen, MD): Of course the two communities overlap in many respects, but it’s interesting to note that there are significant differences already at the level of terminology, in particular as regards the words “equilibrium” and “nonequilibrium”. In the context of Molecular Dynamics, “equilibrium” typically means a time-invariant state of the Liouville equation describing a gas of interacting particles, or in any case a large number of degrees of freedom. “Nonequilibrium” then means a time-dependent situation, or a time-dependent perturbation of such a system. Often such perturbations heat the system up indefinitely, so for practical purposes and for physical consistency one also wants to implement in MD simulations some effective dissipation mechanism that keeps the system “thermostatted”. Allen’s lecture covered the standard approach to the evolution of density functions in phase space and linear response to perturbations, as well as more recent techniques for implementing thermostats. One interesting thing I didn’t know: for non-Hamiltonian perturbations that preserve the phase-space volume (but then break symplecticity), the fluctuation-dissipation theorem still holds (Evans, Morriss).
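To fix the thermostatting idea, here is a minimal sketch of a (stochastic) Langevin thermostat acting on free particles — not one of the deterministic thermostats discussed in the lecture, and all parameter values are invented for illustration. Whatever the initial condition, the velocities relax to the equipartition value ⟨v²⟩ = kT/m:

```python
import math
import random

# Minimal sketch of a Langevin thermostat on N free 1D particles.
# Exact Ornstein-Uhlenbeck update for each velocity over a time step dt:
#   v(t+dt) = v * exp(-gamma*dt) + sqrt(kT/m * (1 - exp(-2*gamma*dt))) * xi
# with xi a standard Gaussian. Hypothetical parameters throughout.
random.seed(1)
N, kT, m, gamma, dt = 2000, 1.5, 1.0, 2.0, 0.05
decay = math.exp(-gamma * dt)
kick = math.sqrt(kT / m * (1.0 - decay**2))

v = [5.0] * N          # deliberately "hot" initial condition
for _ in range(500):   # many relaxation times (1/gamma = 0.5)
    v = [vi * decay + kick * random.gauss(0.0, 1.0) for vi in v]

mean_v2 = sum(vi * vi for vi in v) / N
print(mean_v2)  # close to kT/m = 1.5: the thermostat holds the system at kT
```

The deterministic thermostats of MD (Nosé-Hoover and friends) achieve the same stationary statistics without random numbers, but the stochastic version is the shortest way to see what “thermostatted” means.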
– Statistics of sums of correlated variables described by a matrix product ansatz (E. Bertin): The Central Limit Theorem states that sums of random variables converge to a Gaussian distribution when they are independent, identically distributed and have finite variance. One might then obtain non-Gaussian statistics when any of these conditions fails. In certain models of NESM (exclusion processes) one obtains correlated statistics described by certain matrix products, and the matrix representation can be mapped onto a so-called Hidden Markov Chain (Angeletti, Bertin, Abry, IEEE 2013, EPL 2013, J Stat Phys 2014). One thus moves from an algebraic problem to the properties of a Markov chain. One finds that if the Markov chain is ergodic, the distribution is Gaussian; if it has multiple components, it is a mixture of Gaussians; and if there are irreversible transitions, one obtains a non-Gaussian distribution which is a continuous mixture of Gaussians with different variances. They also considered large deviation properties, which do not require the mapping to hidden Markov chains. I wonder if detailed balance might play any role here.
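A crude numerical cartoon of the ergodic-vs-irreversible dichotomy (my own toy model, not the matrix-product construction of the talk; all parameters invented): a hidden two-state chain emits Gaussians whose variance depends on the state. If the chain mixes fast, the sum is Gaussian; if the transition between states is irreversible and happens on the time scale of the whole sum, the sum becomes a continuous mixture of Gaussians with different variances, detectable as a positive excess kurtosis:

```python
import random

# Sum of N variables emitted by a hidden two-state chain:
# state 0 emits N(0, v0), state 1 emits N(0, v1).
random.seed(2)
N, M = 400, 4000          # chain length, number of realizations
v0, v1 = 1.0, 9.0
sd = {0: v0 ** 0.5, 1: v1 ** 0.5}

def sample_sum(ergodic):
    s, state = 0.0, 0
    for _ in range(N):
        if ergodic:
            state = random.randint(0, 1)      # fast mixing
        elif state == 0 and random.random() < 1.0 / N:
            state = 1                         # irreversible switch, once per sum
        s += random.gauss(0.0, sd[state])
    return s

def excess_kurtosis(xs):
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

erg = excess_kurtosis([sample_sum(True) for _ in range(M)])
irr = excess_kurtosis([sample_sum(False) for _ in range(M)])
print(erg, irr)  # erg near 0 (Gaussian); irr clearly positive (mixture of variances)
```

In the irreversible case the random switching time makes the conditional variance of the sum itself random, which is exactly the “continuous mixture of Gaussians” mechanism.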
– Microscopic theory for negative differential mobility in crowded environments (A. Sarracino, NESM): A peculiarity of nonequilibrium (in the second sense I will give to it, see below) systems is that you can “get less from pushing more”. In fact, while near equilibrium an increased external force always increases its corresponding current, due to dynamical effects this is no longer the case in nonequilibrium systems. Sarracino considered a model of a tracer particle (modelled by an asymmetric exclusion process) driven by an external force in a host medium (modelled by a symmetric exclusion process). Quite remarkably, under a certain decoupling approximation they can obtain analytic solutions for the response of the velocity of the tracer with respect to the force (the differential mobility), depending upon the medium’s density and the relative time scales of the evolution of the particle and of the medium. They derive exact asymptotic expressions in the high and low density limits, and compare theory to Monte Carlo numerical simulations at intermediate densities, showing good agreement. Perspectives include studying velocity fluctuations (higher order cumulants) and nonequilibrium fluctuation-dissipation relations.
– Temperature concepts, critical dynamics and first order transitions in sheared amorphous systems (K. Martens, MD): If in the bathtub you shear foam between your hands, you will see rearrangements of the bubbles within it creating internal currents. Foam, whose bubbles are reasonably homogeneous but disordered, is an example of an amorphous system. The flowing regime sets in after a critical shearing force is reached, thus manifesting a phase transition. Martens and collaborators have worked on computational models of this sort of behavior. This is as far as I could really follow; the talk then took a rather technical direction that I am not acquainted with. I understand that one of the more interesting ideas is that, although the system is not subject to thermal fluctuations, mechanical noise due to friction might effectively act as thermal noise. One can then model it by a Langevin-type equation to obtain the Hébraud–Lequeux model, which is basically a time-dependent Fokker-Planck equation.
– Hydrodynamic coupling of two micro-particles trapped at different effective temperature (A. Bérut, NESM): S. Ciliberto is one of the (few) constant reference names for experimental work in Stochastic Thermodynamics. His experimental apparatuses make it possible to probe small systems where stochasticity is under perfect control. In this work Bérut, Petrosyan and Ciliberto studied the energy flux between two micro-systems at different temperatures. The microsystems are two silica beads floating in water, trapped by optical tweezers and interacting through hydrodynamic coupling. Since it is difficult to obtain a high temperature gradient at small scales, a higher temperature can be effectively emulated by subjecting one particle to noise, which I understand is achieved by illuminating it with white light. The system is theoretically modelled by coupled Langevin equations, leading to a quite straightforward analysis. Because of the interaction there is a cross-correlation between the particles: the variance of the hotter particle is smaller (than when alone) and that of the colder one is larger. These effects weaken as the distance between the beads increases. A fluctuation theorem also seems to be verified.
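The two-temperature setting can be caricatured with a pair of coupled overdamped Langevin equations. The sketch below is not the hydrodynamic coupling of the experiment: I replace it with a simple spring of stiffness c between the beads, and all numbers are invented. Even so, it reproduces the qualitative features: a positive cross-correlation, the hot bead’s variance pulled below its isolated value T1/k, and the cold bead’s pushed above T2/k:

```python
import math
import random

# Toy substitute for the experiment (NOT the actual hydrodynamic coupling):
# two overdamped beads in harmonic traps of stiffness k, joined by a spring c,
# each in contact with its own bath, at temperatures T1 (hot) and T2 (cold):
#   dx1 = [-k*x1 + c*(x2 - x1)] dt + sqrt(2*T1) dW1
#   dx2 = [-k*x2 + c*(x1 - x2)] dt + sqrt(2*T2) dW2
k, c, T1, T2 = 1.0, 0.5, 20.0, 1.0

# Stationary covariances from the Lyapunov equation A C + C A^T = 2 D,
# with A = [[k+c, -c], [-c, k+c]] and D = diag(T1, T2):
K = k + c
var_hot  = T1 / K + c**2 * (T1 + T2) / (2 * K * (K**2 - c**2))
var_cold = T2 / K + c**2 * (T1 + T2) / (2 * K * (K**2 - c**2))
cross    = c * (T1 + T2) / (2 * (K**2 - c**2))

# Euler-Maruyama check of the same numbers
random.seed(0)
dt, steps = 0.01, 400_000
x1 = x2 = 0.0
s11 = s22 = s12 = 0.0
for _ in range(steps):
    f1 = (-k * x1 + c * (x2 - x1)) * dt + math.sqrt(2 * T1 * dt) * random.gauss(0, 1)
    f2 = (-k * x2 + c * (x1 - x2)) * dt + math.sqrt(2 * T2 * dt) * random.gauss(0, 1)
    x1 += f1
    x2 += f2
    s11 += x1 * x1; s22 += x2 * x2; s12 += x1 * x2
s11 /= steps; s22 /= steps; s12 /= steps

print(var_hot, var_cold, cross)  # hot variance < T1/k, cold variance > T2/k, cross > 0
```

The mechanism is transparent here: the coupling carries heat from the hot bath to the cold bead, stiffening the hot bead’s effective confinement and agitating the cold one.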
– Stochastic thermodynamics (U. Seifert, NESM): The opening lecture of the afternoon session is about Stochastic Thermodynamics, a modern vision of thermodynamics that also applies to small fluctuating systems, such as biomolecules or colloidal particles. Often these systems are “nonequilibrium” in the second sense of the word I want to specify here: even if their state is invariant in time, that state supports a constant flow of entropy to the environment. It is neither straightforward nor settled how this sense of the word is connected to the first sense I gave above. As regards Seifert’s talk, while I tend to think that most of the ingredients of the thermodynamics of Markov processes at the fluctuating level were in place early in the ’90s and possibly before (dating back to the work of the Qians), it is crucial for the physical interpretation of Stochastic Thermodynamics that the condition of “local detailed balance” puts the theory of Markov processes in contact with the energetics of small systems, as it brings about a First Law. In this respect Seifert has written classic pieces of extreme clarity, employing the full range of techniques from the theory of stochastic processes (optimal protocols, network analysis, etc.) to make statements about fluctuations of thermodynamic observables. He gave a review of these works – not very pedagogical, but still extremely clear. One thing I don’t particularly like about the way Stochastic Thermodynamics is often told is the plethora of fluctuation relations that, to the satisfaction of all possible actors on the scene, carry all kinds of names (Jarzynski, Crooks, Gallavotti-Cohen, Lebowitz-Spohn and whatever) while being simple consequences of one and only one very general symmetry relation emerging naturally from the theory of Markov processes. This he clarified to some degree.
One thing I don’t know much about, and which sounds quite interesting, is the generalization of the fluctuation-dissipation relation to nonequilibrium steady states and the concept of an effective velocity (Chetrite and Gawedzki J Stat Mech 2008, 2009, Baiesi et al. PRL 2009, Seifert and Speck 2010).
Remark: When proving the Jarzynski relation, Seifert writes that it is an identity in Stochastic Thermodynamics. This goes back to an open discussion about whether the Fluctuation Theorem (or relation, if you like) is a theorem at all, or rather a completely obvious identity, or even a definition of the entropy production, etc. My position, which I will make more precise on this blog one day, is that the Fluctuation Theorem is in fact a rather trivial identity given the definition of the stochastic entropy production; what has to be proven, though, and is not so trivial, is that the average of the entropy production yields what one would expect from a macroscopic theory of thermodynamics.
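Schematically, the “trivial identity” point goes as follows. Define the stochastic entropy production of a trajectory ω as the log-ratio of the probability of ω to that of its time-reverse under the time-reversed protocol; then its exponential average is unity by sheer normalization:

```latex
\Sigma[\omega] \;=\; \ln\frac{\mathcal{P}[\omega]}{\widetilde{\mathcal{P}}[\widetilde{\omega}]},
\qquad
\big\langle e^{-\Sigma}\big\rangle
\;=\; \int\!\mathcal{D}\omega\,\mathcal{P}[\omega]\,
\frac{\widetilde{\mathcal{P}}[\widetilde{\omega}]}{\mathcal{P}[\omega]}
\;=\; \int\!\mathcal{D}\widetilde{\omega}\,\widetilde{\mathcal{P}}[\widetilde{\omega}]
\;=\; 1.
```

The Jarzynski relation follows upon identifying Σ = (W − ΔF)/k_BT via local detailed balance — and, as argued above, that physical identification is where the real work lies, not in the identity itself.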
– Fluctuation spectra of physical currents (B. Altaner, NESM): Altaner, Wachtel and Vollmer have a great result in their pocket: cumulants of the currents can be calculated without solving for the full generating function (which often requires calculating horrible eigenvalues and inverting nasty expressions). The technique is based on the implicit function theorem and gives explicit expressions, which soon become quite involved for higher cumulants (but usually one only wants the first two or three). He applied the technique to a model of a molecular motor (kinesin) that “walks” and carries a cargo under the influence of external chemical and mechanical forces, observing all kinds of phenomena, such as the persistence of the fluctuation-dissipation relation at the stalling force, far from equilibrium, and negative differential response.
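To fix ideas on what “cumulants of the currents” means, here is the standard eigenvalue route that Altaner et al. manage to bypass: compute the scaled cumulant generating function λ(s) as the dominant eigenvalue of the tilted generator and differentiate at s = 0. The toy model and all rates are my own (a uniform three-state ring, counting forward jumps +1 and backward jumps −1), chosen because λ(s) = p·e^s + q·e^(−s) − (p+q) is known exactly:

```python
import math

# Current statistics on a 3-state ring: forward rate p, backward rate q.
# lambda(s) is the dominant eigenvalue of the tilted generator L(s);
# its derivatives at s = 0 give the cumulants of the time-averaged current.
p, q = 2.0, 0.5  # hypothetical rates

def tilted_generator(s):
    L = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        j = (i + 1) % 3
        L[j][i] = p * math.exp(s)    # jump i -> j, counted +1
        L[i][j] = q * math.exp(-s)   # jump j -> i, counted -1
        L[i][i] = -(p + q)           # minus the total escape rate
    return L

def scgf(s, dt=1e-3, iters=4000):
    # dominant eigenvalue of L(s), via power iteration on I + dt*L(s)
    L = tilted_generator(s)
    v, mu = [1.0, 1.0, 1.0], 1.0
    for _ in range(iters):
        w = [v[i] + dt * sum(L[i][j] * v[j] for j in range(3)) for i in range(3)]
        mu = max(w)
        v = [x / mu for x in w]
    return (mu - 1.0) / dt

h = 0.05
first_cum  = (scgf(h) - scgf(-h)) / (2 * h)               # mean current
second_cum = (scgf(h) - 2 * scgf(0.0) + scgf(-h)) / h**2  # current fluctuations
print(first_cum, second_cum)  # close to p - q and p + q respectively
```

Their point is precisely that one can get these numbers from the characteristic polynomial by implicit differentiation, without ever diagonalizing — which matters once the state space is large.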
– Efficiency fluctuations in small machines (G. Verley, NESM): Gatien had the great intuition of studying the fluctuations of the efficiency, against the common belief that efficiency is only a macroscopic concept. Since his first studies of the large deviations of the efficiency, the field has been growing fast (see also following talk), and I’m personally involved in this. I will write more about this stuff in the near future on this blog.
– Single-particle stochastic heat engine (S. Rana, NESM): Most importantly for my interests, Rana is the author of one of the first works in which the statistics of the efficiency is studied computationally and a second peak has been observed, a result that we later derived theoretically. Strangely, though, much of the discussion involves a concept of average efficiency, while it is a general result (that we first observed and was later generalized by Van den Broeck and coworkers) that the efficiency statistics follows a power law with no finite moments. Questions in this direction did not clarify the issue.
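The no-finite-moments claim is easy to illustrate with a caricature (mine, not Rana’s model): over a long but finite time the input and output currents are approximately jointly Gaussian, so the fluctuating efficiency, being a ratio of Gaussians, inherits Cauchy-like 1/η² tails. With standard normal currents for simplicity:

```python
import random

# Efficiency as a ratio of (toy) Gaussian currents: eta = -J_out / J_in.
# For standard normal currents eta is Cauchy distributed:
# P(|eta| > x) ~ 2/(pi*x), so no moments exist, not even the mean.
random.seed(3)
M = 200_000
eta = [-random.gauss(0, 1) / random.gauss(0, 1) for _ in range(M)]

tail10  = sum(abs(e) > 10 for e in eta) / M
tail100 = sum(abs(e) > 100 for e in eta) / M
print(tail10, tail100, tail10 / tail100)  # the last ratio is ~10: a 1/eta^2 tail
```

The heavy tail comes from trajectories where the input current fluctuates close to zero while the output does not — which is why talking about an “average efficiency” is delicate in the fluctuating regime.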
– Partial entropy production and its properties (N. Shiraishi, NESM): The idea is to concentrate on the nonequilibrium thermodynamics of an individual process among several networked processes, rather than describing the full thermodynamics. One requirement is that this new entropy production satisfies a Fluctuation Theorem. That FTs could hold for individual observables was something I didn’t believe until yesterday, when I learned of the existence of FTs for marginal probabilities of the currents. Anyway, this is not what these guys are doing. They are rather defining a very ad hoc path probability measure and an ad hoc entropy production that will automatically (and, in my opinion, somewhat tautologically) satisfy a FT. Unfortunately, I am afraid this new measure and entropy production might have no relationship to Markov processes and physical entropy production anymore, so the result appears to me to be a formal manipulation with little relation to our favourite concepts.
– Generalized Landauer bound as universal thermodynamic entropy in continuous phase transitions (C. Diamantini): The quite compelling idea is that there is a connection between the Landauer erasure principle, stating that there is a certain thermodynamic cost in restoring a bit from state 1 to state 0 (erasing information), and the phenomenon by which, across a phase transition, an order parameter (e.g. the magnetization) moves from 0 (at the critical temperature) to 1 (at zero temperature). She applies these ideas to neurons modelled by the Hopfield model with probabilistic update rules, where “remembering” means being magnetized and “forgetting” means zero magnetization (disorder). She computes the entropy change between these states and (apparently) finds that it coincides with the Landauer bound (I’m not sure whether this isn’t a trivial consequence of the usual manipulations involved in the Landauer principle). The slides were at critical temperature, and didn’t help me gain further insight.
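For reference, the standard form of the Landauer bound: erasing one bit at temperature T, i.e. resetting it to 0 regardless of its prior state, requires dissipating at least

```latex
Q \;\ge\; k_B T \ln 2,
\qquad \text{equivalently} \qquad
\Delta S_{\mathrm{env}} \;\ge\; k_B \ln 2,
```

which, as I understood it, is the quantity the entropy change between the magnetized (“remembering”) and unmagnetized (“forgetting”) Hopfield states is being compared against.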
– Thermodynamics of symmetry breaking (E. Roldan): Some years ago I had wild fantasies of nonequilibrium symmetry breaking and of a nonequilibrium analogue of the Goldstone theorem. I was shaking before this talk, but fortunately I can still hold on to my dreams. In this talk, relating to experimental work, discrete symmetries are considered, asking what the energetics of spontaneous symmetry breaking is when, say, a Brownian particle has to choose among several possible routes. I’d be curious to know whether these results have any consequence for efficiency, but I have no idea how this problem could be attacked.