Survivors from the social dinner gathered for the last half-day of Luxembourg Out-of-Equilibrium.

– **Frenetic aspects of response** (C. Maes): The “miracle” of equilibrium is that a single concept of entropy underlies heat (Clausius), fluctuations (Boltzmann-Planck-Einstein), linear response (Kubo), statistical forces (Casimir), evolution (Boltzmann’s H theorem), etc. The problem (and opportunity) of nonequilibrium is that this is no longer valid. In particular, nonequilibrium dynamics does not boil down to nonequilibrium thermodynamics: there is more, as kinetic aspects enter in a crucial way. Maes and collaborators have been studying, for several years now, the kinetic contributions to response far from equilibrium. He started off with an argument that the so-called *dynamical activity*, a time-symmetric quantity, relates to “surface” corrections in a system’s exploration of state space, while entropic terms relate to “volume”: the latter are associated with the typicality of bulk states in some phase space (by the usual argument that systems maximize entropy), while the former are responsible for crossing the boundaries between such bulk regions. Already for a simple biased random walker, jumping to the right with rate $w_+$ or to the left with rate $w_-$, to make contact with thermodynamics one invokes the Arrhenius law / local detailed balance, yielding $w_+/w_- = e^{\beta F}$ for a driving force $F$. But the rates are also characterized by a symmetric property, such as the exit rate $w_+ + w_-$, and it’s not difficult to cook up a dependency of the rates on $F$ such that by “pushing more one obtains less”: the mobility of the walker (defined as the response of the average velocity with respect to $F$) turns negative. Importantly, dynamical activity only appears as a second-order effect, which is why it doesn’t show up close to equilibrium, and which somehow relates to the volume-to-surface argument. Maes, Baiesi and Wynants have obtained a full nonequilibrium response formula that improves on the “more conservative approach based on the effective temperature”.
He further explained the recent example of the response of myosin V, comparing it to Shere Khan the tiger, who moves by the effect of a burning tail… Finally he commented that nonequilibrium thermodynamics in the 21st century does not need to turn back to kinetic theory, but should rather look forward to new types of phase transitions far from equilibrium involving activity in all its aspects. To this, I’d like to add one idea of mine: to introduce a new concept of efficiency that also accounts for traffic (so far, efficiency is only related to dissipation), something that tells you how much activity one has to spend to produce a displacement. Maes has quite a histrionic style that made the audience laugh more than once. Another memorable sentence: “It’s very hard to live close to equilibrium”, which reminds me of the quote attributed to Einstein: “Life is like riding a bicycle. To keep your balance, you must keep moving”.
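The walker example can be sketched numerically. The parametrization below is my own toy choice, not from the talk: rates $w_\pm = a(F)\,e^{\pm\beta F/2}$ satisfy local detailed balance for *any* symmetric prefactor $a(F)$, and letting the activity decay with the driving, e.g. $a(F) = e^{-F^2}$, makes the mobility turn negative at large $F$:

```python
import numpy as np

def rates(F, beta=1.0):
    """Jump rates of a biased walker obeying local detailed balance,
    w+/w- = exp(beta*F).  The time-symmetric prefactor a(F) (the
    'dynamical activity' part) is a hypothetical choice that decays
    with the driving, producing negative differential mobility."""
    a = np.exp(-F**2)                       # frenetic, time-symmetric part
    w_plus = a * np.exp(beta * F / 2.0)
    w_minus = a * np.exp(-beta * F / 2.0)
    return w_plus, w_minus

def velocity(F):
    """Mean velocity <v> = w+ - w- of the walker."""
    wp, wm = rates(F)
    return wp - wm

# Mobility d<v>/dF estimated by finite differences on a grid of drivings.
F = np.linspace(0.0, 3.0, 301)
v = velocity(F)
mob = np.gradient(v, F)
print("velocity peaks at F ≈", F[np.argmax(v)])
print("mobility negative at strong driving:", mob[-1] < 0)
```

Note that $a(F) = e^{-F^2}$ has zero slope at $F = 0$, so the activity indeed drops out of the linear-response (near-equilibrium) mobility and only enters at second order, consistent with the point made in the talk.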

As a side remark, I really believe that the frenetic aspects of nonequilibrium are a point that should be emphasized more, even beyond our community. Unfortunately, in the applied thermodynamics community it is often believed that dissipation is a function of the currents only, that is, of the time-antisymmetric properties of the system. This turns out not to be the case; e.g., any tentative variational principle with respect to the currents, such as MinEP, is doomed.

– **Second-law-like inequalities for Langevin systems under time-delayed feedback control** (M. L. Rosinberg): To model feedback control, Rosinberg recently (arXiv paper) considered a Langevin equation with an extra force that depends on the state of the system at an earlier time, that is, a Stochastic Delay Differential Equation (known from the literature to have very rich dynamical behavior: multistability, bifurcations, stochastic resonance, etc.). The equation is obviously non-Markovian, the corresponding Fokker-Planck equation consisting of an infinite hierarchy of integro-differential equations with no closed solution, except for linear equations, which give Gaussian processes (and these might be interesting for knowing the efficiency of feedback processes). Also, time reversal is nontrivial.

In the FP equation, which formally looks like the usual one, there appears a force term that is an average at an earlier time. Nevertheless, by defining the Shannon entropy in the usual way, he can introduce an entropy balance and obtain some inequalities relating the maximum work that can be extracted to an entropy-pumping rate (I don’t understand whether this result differs in any respect from the usual one, i.e. whether it is peculiar to time-delayed systems). As regards the Fluctuation Theorem, there is a technical problem due to the fact that the formal inverse dynamics depends on the state of the system at a *later* time, thus making it acausal. At this point a question from the audience points out, correctly, that one could consider the forward probability of the inverse trajectory, rather than some conjugate acausal dynamics. I very much agree with this, because in experiments one does not reverse the direction of time, but rather looks at trajectories that look “as if” time were inverted, yet are sampled with the usual forward probability. Rosinberg then derives second-law-like inequalities that have nothing to do with the “physical” inequality he obtained from the Fokker-Planck equation, leading to the conclusion that stochastic thermodynamics is no longer consistent between different descriptions. This might well be an artifact of the very peculiar time-reversal procedure, because in this case the forward probability of the backward equation differs from the backward probability of the forward equation. In the linear case one has Gaussian propagators; interestingly, the steady state is Maxwell-Boltzmann-like but with two different temperatures for velocities and positions, hence in a way breaking the equipartition theorem. Apparently, there exists an experimental technique (active feedback cooling of nanomechanical resonators) that is described by exactly this kind of Langevin equation.
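For the linear case, a minimal Euler-Maruyama sketch of such a delayed Langevin equation (parameters, delay and the zero initial history are my own illustrative choices, not from Rosinberg’s paper) already shows how one would measure separate “temperatures” for velocities and positions:

```python
import numpy as np

def simulate_delayed_langevin(gamma=1.0, k=1.0, tau=0.5, T=1.0,
                              dt=1e-2, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of an underdamped Langevin equation
    with a linearly delayed feedback force:
        dx = v dt
        dv = (-gamma*v - k*x(t - tau)) dt + sqrt(2*gamma*T) dW
    The history x(t) = 0 for t <= 0 serves as initial condition."""
    rng = np.random.default_rng(seed)
    d = int(round(tau / dt))              # delay measured in time steps
    x = np.zeros(n_steps + d + 1)         # x[j] = position at time (j - d)*dt
    v = np.zeros(n_steps + 1)
    noise = np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal(n_steps)
    for i in range(n_steps):
        x[d + i + 1] = x[d + i] + v[i] * dt
        # x[i] is the position one delay time tau in the past
        v[i + 1] = v[i] + (-gamma * v[i] - k * x[i]) * dt + noise[i]
    return x[d + 1:], v[1:]

xs, vs = simulate_delayed_langevin()
burn = len(vs) // 4                       # discard the initial transient
Tv = np.var(vs[burn:])                    # kinetic "temperature"  m<v^2>  (m = 1)
Tx = np.var(xs[burn:])                    # positional "temperature" k<x^2> (k = 1)
print(f"T_v ≈ {Tv:.2f},  T_x ≈ {Tx:.2f}")
```

Without the delay ($\tau = 0$) both variances would equal $T$; with $\tau > 0$ the two estimated temperatures generally differ, which is the equipartition-breaking effect mentioned above.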

A final comment connects to my general perplexity (see my comment on Seifert’s talk the first day) about the Fluctuation Theorem being a theorem at all. One can always define an “entropy production” as the logarithm of the ratio of two probabilities. The point is proving that this quantity has any connection to physics, and apparently in this case it does not.
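To make the point concrete, here is a toy demonstration of mine: define $\sigma$ as the log-ratio of the forward path probability to the probability of the reversed path under an arbitrary, deliberately unphysical reference dynamics (the matrices below are made up). The integral fluctuation relation $\langle e^{-\sigma}\rangle = 1$ then holds identically, whatever the reference, so by itself it carries no physics:

```python
import itertools
import numpy as np

# Any stochastic matrix for the forward two-state dynamics...
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
p0 = np.array([0.5, 0.5])
# ...and any other stochastic matrix playing the role of the
# "reversed" dynamics (a deliberately unphysical choice).
Q = np.array([[0.1, 0.9],
              [0.8, 0.2]])
q0 = np.array([0.25, 0.75])

def path_prob(path, init, M):
    """Probability of a discrete path under initial law `init` and matrix M."""
    p = init[path[0]]
    for a, b in zip(path, path[1:]):
        p *= M[a, b]
    return p

n = 6                      # path length
ift = 0.0                  # exact sum of P(path) * exp(-sigma) over all paths
for path in itertools.product([0, 1], repeat=n):
    pf = path_prob(path, p0, P)
    pb = path_prob(path[::-1], q0, Q)     # reversed path under Q
    sigma = np.log(pf / pb)               # "entropy production"
    ift += pf * np.exp(-sigma)
print(ift)   # = 1 (up to roundoff), for ANY choice of Q
```

The sum telescopes to $\sum_{\text{paths}} Q(\text{reversed path}) = 1$ because reversal is a bijection on paths, so the “theorem” is automatic; the physics lies entirely in justifying a particular choice of reference dynamics.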

– **On the thermal Josephson effect** (A. Engel): Engel first proposed some ideas he would like to explore regarding the fluctuating thermodynamics of phase-coherent heat fluxes across a Josephson junction. Nice experiments are available, and it might be feasible to study Carnot cycles and all sorts of thermodynamic concepts, but with an inherently quantum, coherent nature added. He has part of the theory, with all the creation and annihilation operators, set up, and this sounded like a call for collaboration that I hope somebody will pick up, as it seems a very promising idea. I like this attitude of not caring about reservedness or about following other people’s trends, and of making up fancy ideas.

He then moved to a completely different topic: would it be possible to test the Fluctuation Theorem in non-thermal, macroscopic systems? In turbulence one has strong non-thermal fluctuations that are stationary, homogeneous and isotropic. Results from a turbulent free air jet by Renner and Peinke give strongly non-Gaussian distributions with fat tails at small scales (the so-called “intermittency of turbulence”). This hierarchy of behavior arises from a Kolmogorov cascade, which can be described by a Fokker-Planck equation with respect to a scale parameter that moves across the ranges (Friedrich and Peinke, PRL 1997). Some FT then holds, but this slide ends with the question “What is this good for?”, because in fact the “entropy production” appears to have no physical meaning (so, again, it’s a so-to-say “theorem”; see my comment above). Funniest title for a slide: “A tale of tails”.
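The formal character of such FTs is easy to illustrate with a small sketch of mine: any quantity whose distribution is Gaussian with variance equal to twice its mean automatically satisfies the detailed fluctuation symmetry $p(s)/p(-s) = e^{s}$, whether or not it means anything physically:

```python
import numpy as np

def gauss_pdf(s, mu):
    """Gaussian density with the FT-compatible relation variance = 2*mean."""
    return np.exp(-(s - mu) ** 2 / (4.0 * mu)) / np.sqrt(4.0 * np.pi * mu)

mu = 1.3                               # arbitrary positive mean
s = np.linspace(-4.0, 4.0, 9)
ratio = gauss_pdf(s, mu) / gauss_pdf(-s, mu)
print(np.allclose(ratio, np.exp(s)))   # True: p(s)/p(-s) = exp(s)
```

The symmetry follows from completing the square in the exponent, so any Gaussian fit of this shape “satisfies an FT”; whether the fitted quantity deserves the name entropy production is a separate, physical question.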