The celebrated fluctuation relations have dominated the last 20 years of nonequilibrium statistical mechanics. They are refined statements of the Second Law of thermodynamics, quantifying the irreversibility of processes in open systems. The observable that enters such fluctuation relations is typically the entropy production, which is antisymmetric under reversal of the direction of time. However, systems far from equilibrium are not solely characterized by time-antisymmetric quantities: time-symmetric properties (such as the “activity”, or the “frenesy”, etc.) play a crucial role, a point that has been raised over and over, among others, by C. Maes and coworkers. Hence the question arises whether fluctuation relations for time-symmetric observables hold. In general, the full-blown Fluctuation Theorem, comparing probabilities of different stochastic trajectories, depends strongly on time reversal, hence one does not expect time-symmetric quantities to satisfy a FT. However, one of the corollaries of the FT, the integral (Jarzynski) fluctuation relation, does not make explicit reference to time reversal, hence one might hope to derive integral fluctuation relations for time-symmetric quantities.
In this direction, an interesting contribution comes from this paper:
Marco Baiesi, Gianmaria Falasco, Inflow rate, a time-symmetric observable obeying the fluctuation relations, Phys. Rev. E 92, 042165 (2015).
Given a single realization $\omega = (x_s)_{0 \leq s \leq t}$ of a Markov jump process, the authors introduce a quantity $\mathcal{K}_t[\omega]$, called the inflow rate, and prove that it satisfies two integral fluctuation relations (at long times):

$$\langle e^{-\mathcal{K}_t} \rangle \simeq 1, \qquad \langle e^{-\mathcal{K}_t - \Sigma_t} \rangle \simeq 1,$$

where $\Sigma_t$ is the entropy production. These complement the Jarzynski identity:

$$\langle e^{-\Sigma_t} \rangle = 1.$$
The inflow rate is defined as follows. A Markov jump process jumps between states $x$, with rate $w_{xy}$ of jumping from $x$ to $y$. Then, at each state $x$ there is one total exit probability rate $w_x = \sum_y w_{xy}$, and one entrance probability rate $\tilde{w}_x = \sum_y w_{yx}$. Then the inflow rate is defined along a trajectory as

$$\mathcal{K}_t[\omega] = \int_0^t \mathrm{d}s \, \left( \tilde{w}_{x_s} - w_{x_s} \right).$$
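To make the definitions concrete, here is a minimal numerical sketch; the three-state rate matrix `W` and all its numbers are my own toy example, not taken from the paper:

```python
import numpy as np

# Toy 3-state Markov jump process: W[x, y] is the rate w_xy of
# jumping from state x to state y (diagonal is zero).
W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])

w_exit = W.sum(axis=1)   # exit rate w_x = sum_y w_xy
w_in = W.sum(axis=0)     # entrance rate ~w_x = sum_y w_yx
inflow = w_in - w_exit   # instantaneous inflow rate ~w_x - w_x

print(w_exit, w_in, inflow)
```

Notice that `inflow` sums to zero over the state space: whatever flows out of some state flows into another.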
Unfortunately, while $w_x$ has a clear probabilistic interpretation (it is the escape rate out of $x$) and it appears in the master equation regulating the evolution of a probability density,

$$\dot{p}_x(t) = \sum_y \left[ w_{yx} \, p_y(t) - w_{xy} \, p_x(t) \right] = (Lp)_x(t),$$

with generator $L_{xy} = w_{yx} - \delta_{xy} w_x$, there is no such clear probabilistic interpretation for $\tilde{w}_x$.
Or is there? Well, in fact notice that if we replace the (unnormalized) uniform state $p_x \equiv 1$ on the right-hand side, we precisely get the inflow rate of Baiesi and Falasco: $(L \mathbb{1})_x = \tilde{w}_x - w_x$. This suggests widening the scope of their treatment a little and looking for generalizations that might give better insight into the nature of the inflow rate.
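On the same toy rates as above (my own numbers), one can check that acting with the generator on the unnormalized uniform state indeed returns the inflow rate:

```python
import numpy as np

# Toy rates: W[x, y] = w_xy, the rate of jumping from x to y.
W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])

# Generator with entries L_xy = w_yx - delta_xy w_x, so that dp/dt = L p.
L = W.T - np.diag(W.sum(axis=1))

# Probability conservation: the uniform vector is a left null vector,
# i.e. every column of L sums to zero.
print(L.sum(axis=0))

# Acting on the (unnormalized) uniform state gives the inflow rate
# ~w_x - w_x at every state.
print(L @ np.ones(3))
```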
One crucial step in their derivation is the introduction of an auxiliary dynamics which generates trajectories whose probability is not dramatically different from that of the original dynamics. The one they choose has just the reversed rates along every transition, which then requires one to adjust the diagonal elements of the Markovian generator. But we can be slightly more general and define

$$\tilde{L} = P L^\dagger P^{-1} + D,$$

where $P$ is a diagonal positive-definite matrix, that is $P = \mathrm{diag}\,(p_x)$ with $p_x > 0$. Notice that without loss of generality we can choose $p$ to be a normalized probability density. The choice of Baiesi and Falasco corresponds to $P$ being the identity, but this is not strictly necessary. The diagonal matrix $D$ is there to ensure that $\tilde{L}$ is indeed a Markov generator, that is, that it has the uniform vector as its left null vector. After a few straightforward calculations we obtain, for the quantity that replaces $\tilde{w}_x - w_x$ in the auxiliary dynamics,

$$\tilde{\lambda}_x - w_x = \frac{1}{p_x} \sum_y \left[ w_{yx} \, p_y - w_{xy} \, p_x \right].$$
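A concrete sketch of this construction, under my reading of it (toy rates; the helper `auxiliary` is my own): conjugate the transposed generator by $P = \mathrm{diag}(p_x)$, keep the resulting off-diagonal rates, and fix the diagonal so that every column sums to zero.

```python
import numpy as np

# Toy rates W[x, y] = w_xy and generator L_xy = w_yx - delta_xy w_x.
W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
L = W.T - np.diag(W.sum(axis=1))

def auxiliary(L, p):
    """Auxiliary generator P L^T P^{-1} + D: keep the conjugated
    off-diagonal rates, then choose the diagonal D so that every
    column sums to zero (uniform left null vector)."""
    P = np.diag(p)
    Lt = P @ L.T @ np.linalg.inv(P)
    np.fill_diagonal(Lt, 0.0)        # D acts only on the diagonal
    Lt -= np.diag(Lt.sum(axis=0))    # enforce zero column sums
    return Lt

p = np.array([0.2, 0.3, 0.5])
Lt = auxiliary(L, p)
assert np.allclose(Lt.sum(axis=0), 0.0)   # a valid Markov generator

# With P the identity, the off-diagonal part reduces to the reversed
# rates w_yx: the choice of Baiesi and Falasco.
Lt_id = auxiliary(L, np.ones(3))
assert np.allclose(Lt_id - np.diag(np.diag(Lt_id)), W)
```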
But this can be made to look a lot more appealing: the escape rate $\tilde{\lambda}_x$ of the auxiliary dynamics satisfies

$$\tilde{\lambda}_x - w_x = \frac{(Lp)_x}{p_x}.$$
Ha-ha! Now that’s cute, because it gives a more physical understanding of what the inflow rate is. In particular, if I prepare a system in state $p$, and let it evolve for a small instant of time $\mathrm{d}t$ to $p + Lp \, \mathrm{d}t$, we obtain

$$\frac{(Lp)_x}{p_x} \, \mathrm{d}t = \mathrm{d} \log p_x = - \, \mathrm{d} \left( - \log p_x \right).$$

Hence $(Lp)_x / p_x$ is (minus) the variation of the self-information $-\log p_x$ that is generated by the dynamics as the system is prepared in a state that is not the steady state. The choice of Baiesi and Falasco corresponds to preparing the system in a uniform state.
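Numerically (same toy rates as above; the preparation state $p$ is an arbitrary choice of mine), $(Lp)_x/p_x$ indeed matches minus the rate of change of the self-information $-\log p_x$ under a short burst of master-equation evolution:

```python
import numpy as np

W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
L = W.T - np.diag(W.sum(axis=1))   # generator, dp/dt = L p

# Prepare the system in an arbitrary state p (not the steady state)...
p = np.array([0.2, 0.3, 0.5])

# ...and evolve it for a small instant dt with the master equation.
dt = 1e-6
p_dt = p + dt * (L @ p)

# Rate of change of the self-information -log p_x, state by state.
dI_dt = (-np.log(p_dt) + np.log(p)) / dt

# It equals minus (L p)_x / p_x, up to O(dt) discretization error.
print(np.max(np.abs(dI_dt + (L @ p) / p)))
```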
Now we can define, along a trajectory,

$$\mathcal{K}_t[\omega] = \int_0^t \mathrm{d}s \, \frac{(Lp)_{x_s}}{p_{x_s}}.$$

The meaning of this might be: forcibly sustain the system in state $p$, which is not the steady state, and as the trajectory jumps from state to state measure the total self-information change. This of course vanishes if $p$ is the steady state; hence, in a way, I’d like to think of this as the “information cost” of sustaining the system in a state that is not its natural steady state. However, this is far from being precise and would need to be backed up more seriously.
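As a quick sanity check of the corresponding integral fluctuation relation, here is a Gillespie simulation on the toy model used above (rates and preparation state $p$ are my own choices, and the finite-time claim is only what this numerical experiment suggests): when the initial state is drawn from $p$ itself, the average of $e^{-\mathcal{K}_t}$ stays close to 1 already at finite times.

```python
import numpy as np

rng = np.random.default_rng(1)

W = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
L = W.T - np.diag(W.sum(axis=1))
w_exit = W.sum(axis=1)

p = np.array([0.2, 0.3, 0.5])   # sustained state, not the steady state
k_inst = (L @ p) / p            # instantaneous inflow rate (L p)_x / p_x

def sample_K(T):
    """One Gillespie trajectory; accumulates K = int_0^T (Lp)_{x_s}/p_{x_s} ds."""
    x = rng.choice(3, p=p)      # initial state drawn from p
    t, K = 0.0, 0.0
    while True:
        dwell = rng.exponential(1.0 / w_exit[x])
        if t + dwell >= T:
            return K + (T - t) * k_inst[x]
        K += dwell * k_inst[x]
        t += dwell
        x = rng.choice(3, p=W[x] / w_exit[x])

samples = np.array([sample_K(1.0) for _ in range(20000)])
print(np.mean(np.exp(-samples)))   # close to 1
```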
With this in mind, it is not difficult to see that the rest of the derivation follows along the same lines as in the paper, with slight modifications if one wants to include finite-time effects. The technical trick is that the kind of transformation encoded by $P$ is a “gauge transformation” (see here for my own results on this topic) that only produces boundary terms in the jump part of the probability density. However, interestingly, such gauge transformations do affect time-symmetric quantities.
One obvious generalization of this is taking a time-dependent p.d.f. $p_t$. The above considerations can be generalized, and now one has [corrected version!]

$$\mathcal{K}_t[\omega] = \int_0^t \mathrm{d}s \, \frac{(L p_s)(x_s) - \dot{p}_s(x_s)}{p_s(x_s)}.$$
Notice that if $p_t$ is a solution of the master equation, then this functional vanishes. It looks very much like a Berry phase, so it is again tempting to make connections to gauge theory. Any such functional obeys the above fluctuation relations. It remains to understand what their physical meaning is.
PS. A paper where observables of this kind have been discussed and put in relation to the Doob transform of a generator is the following:
R. Chetrite and S. Gupta, Two refreshing views of fluctuation theorems through kinematics elements and exponential martingale, J. Stat. Phys. 143, 543 (2011).
Also, when in the last identity one inserts for $p_t$ the instantaneous steady state of a time-dependent dynamics, one obtains the integral fluctuation theorem of Hatano and Sasa:
T. Hatano and S. Sasa, Steady-state thermodynamics of Langevin systems, Phys. Rev. Lett. 86, 3463 (2001).
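Spelling this reduction out in my own notation: let the rates depend on a protocol $\lambda_t$, with instantaneous steady state $p^{\mathrm{st}}_\lambda$ obeying $L_\lambda p^{\mathrm{st}}_\lambda = 0$. Inserting $p_t = p^{\mathrm{st}}_{\lambda_t}$ into the time-dependent functional gives

```latex
\mathcal{K}_t[\omega]
  = \int_0^t \mathrm{d}s \,
    \frac{(L_{\lambda_s} p^{\mathrm{st}}_{\lambda_s})(x_s)
          - \dot{p}^{\mathrm{st}}_{\lambda_s}(x_s)}
         {p^{\mathrm{st}}_{\lambda_s}(x_s)}
  = -\int_0^t \mathrm{d}s \, \dot{\lambda}_s \,
     \partial_\lambda \log p^{\mathrm{st}}_{\lambda}(x_s)
     \Big|_{\lambda = \lambda_s},
```

so that $\langle e^{-\mathcal{K}_t} \rangle = 1$ is precisely the Hatano–Sasa identity.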