I stumbled upon this interesting paper, which is already quite old:

V. Lecomte, C. Appert-Rolland and F. van Wijland

*Thermodynamic formalism for systems with Markov dynamics*, J. Stat. Phys. 2007

which naturally follows

P. Gaspard, *Time-reversed dynamical entropy and irreversibility in Markovian random processes*, J. Stat. Phys. 2004

The paper proceeds along the line of bridging irreversibility in deterministic dynamical systems and in stochastic Markov processes, to which Pierre Gaspard made important contributions. A broad review is Altaner's Ph.D. thesis *Foundations of Stochastic Thermodynamics* [arXiv:1410.3983]. On the one hand, irreversibility in dynamical systems occurs because of the contraction of phase space, which is quantified in terms of Lyapunov exponents and the Kolmogorov-Sinai (KS) entropy. The two concepts are related by Pesin's theorem, which establishes that the KS entropy is the sum of the positive Lyapunov exponents. A notion of time-reversed KS entropy, which is the sum of (minus) the negative Lyapunov exponents, can also be introduced. On the other hand, an analogue of the KS entropy, and of its time-reversed counterpart, can be constructed for Markov chains, and their difference yields the thermodynamic entropy production that we know and love.

Until today I had always felt this analogy was incomplete, since I was not aware of a concept of Lyapunov exponent, of phase-space contraction, and hence of an analogous Pesin theorem for Markov processes, so I was a bit dissatisfied by this parallel. In fact the authors comment that "[…] therefore is indeed the phase-space contraction rate (the sum of all Lyapunov exponents). We have of course no such a microscopic interpretation within the Markovian framework". However, later on they write that "It can also be noted that, if one defines a Lyapunov exponent for the random walk through an equivalent one-dimensional map, as described in Refs. … , we recover Pesin's theorem", so now I have more literature to go through. This I leave for next time…

Let us now delve into more technical considerations.

**1 – KS entropy for continuous-time Markov jump processes**

The theory works well for Markov chains, but things are slightly less straightforward for continuous-time Markov jump processes, as already noticed by Gaspard. Let me try to formulate it in my own way. The reason can be traced to the very definition of the KS entropy:

$$ h_{\mathrm{KS}} = -\lim_{t\to\infty} \frac{1}{t} \left\langle \ln P(\omega_t) \right\rangle $$

where $P(\omega_t)$ is a probability measure over the space of histories $\omega_t$ up to time $t$. The problem with this formula is common to all entropies: taking the logarithm of a probability *measure* does not make much sense; it rather makes sense to take the logarithm of a probability *density*, that is, of the Radon-Nikodym derivative of a probability measure with respect to another.
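Here is a tiny numerical illustration of this point (my own, not from the paper): the differential entropy of an exponential waiting-time density $p(\tau) = a\,e^{-a\tau}$ shifts by $\ln c$ under a change of time units $\tau \to c\,\tau$, i.e. it depends on an arbitrary reference measure, while the relative entropy of two such densities is invariant.

```python
import math

# Differential entropy of the exponential density p(t) = a e^{-a t}
def diff_entropy(a):
    return 1.0 - math.log(a)

# Relative entropy (KL divergence) between two exponential densities,
# closed form: ln(a/b) + b/a - 1
def kl_exp(a, b):
    return math.log(a / b) + b / a - 1.0

a, b, c = 2.0, 3.0, 60.0   # c: change of time units, e.g. seconds -> minutes

# The differential entropy shifts by ln c under the change of units...
shift = diff_entropy(a / c) - diff_entropy(a)
# ...while the relative entropy is invariant
inv = kl_exp(a / c, b / c) - kl_exp(a, b)
```

Only the Radon-Nikodym derivative of one measure with respect to another gives a unit-free quantity whose logarithm one can sensibly average.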

I have considered this problem in [EPL 2012, arXiv] and used it to construct a gauge theory of nonequilibrium processes. It is no surprise that it emerges in continuous-time Markov jump processes, since the probability of a sequence of states $x_0, x_1, \ldots, x_N$ and of waiting times $t_1 < t_2 < \ldots < t_N$ in the interval between instant $0$ and $t$ is given by

$$ P(x_0, t_1, x_1, \ldots, t_N, x_N)\, dt_1 \cdots dt_N = e^{-(t - t_N)\, w_{x_N}} \prod_{i=1}^{N} w_{x_{i-1} x_i}\, e^{-(t_i - t_{i-1})\, w_{x_{i-1}}}\, dt_i \qquad (1) $$

with the convention $t_0 = 0$,

where $w_{xy}$ is the probability rate (with physical units of an inverse time) of a jump from $x$ to $y$, and $w_x = \sum_y w_{xy}$ is the total probability rate out of $x$. Now, from this probability measure we cannot just take away the Lebesgue measure $dt_1 \cdots dt_N$, because the probability density so obtained would be dimensional (it would carry units of an inverse time to the $N$-th power), and moreover this operation does not have any reasonable probabilistic meaning. In fact, the waiting times are stochastic variables themselves, since it is stipulated that the Markov jump process is a collection of Poissonian processes on top of a Markov chain.
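To make the "Poissonian processes on top of a Markov chain" picture concrete, here is a minimal sketch in Python (the 3-state rate matrix is made up for illustration) of the standard way such a process is sampled: in each state draw an exponential waiting time with the escape rate $w_x$, then pick the target state with probability $w_{xy}/w_x$.

```python
import random

# Hypothetical 3-state rate matrix (made up for illustration):
# w[x][y] is the jump rate from x to y; diagonal entries are unused.
w = [[0.0, 2.0, 1.0],
     [0.5, 0.0, 1.5],
     [1.0, 3.0, 0.0]]

def sample_path(x0, t_max, rng):
    """Sample the states and jump times of the Markov jump process up to t_max:
    a Poissonian clock with rate w_x on top of an embedded Markov chain."""
    t, x = 0.0, x0
    states, times = [x0], []
    while True:
        wx = sum(w[x])                    # total escape rate w_x out of x
        tau = rng.expovariate(wx)         # exponential waiting time in x
        if t + tau > t_max:               # no further jump before t_max:
            return states, times          # this is the survival factor
        t += tau
        u, acc = rng.random() * wx, 0.0
        for y in range(len(w)):           # target y chosen with prob w_xy / w_x
            if w[x][y] == 0.0:
                continue
            acc += w[x][y]
            if u < acc:
                break
        x = y                             # (falls back to last target on rounding)
        states.append(x)
        times.append(t)

rng = random.Random(42)
states, times = sample_path(0, 10.0, rng)
```

The returned pair is exactly a draw from the measure in Eq. (1): a symbolic sequence of states plus the random instants at which the jumps occurred.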

The authors then propose to take as the argument of the logarithm

$$ P(x_0, x_1, \ldots, x_N) = \prod_{i=1}^{N} \frac{w_{x_{i-1} x_i}}{w_{x_{i-1}}} $$

This of course solves the problem, because it is a good adimensional probability, positive and normalized. The interpretation of this as a probability, though, is a bit subtle. One might expect it to be the marginal of Eq. (1) with respect to the times, but it's not.

In fact, if it were, it would still depend on time, since the probability of a given sequence of jumps up to time $t$ or up to a different time $t'$ would have to differ; instead it doesn't. In fact, one can see by direct computation that

$$ \prod_{i=1}^{N} \frac{w_{x_{i-1} x_i}}{w_{x_{i-1}}} = \int_0^\infty d\tau_1 \cdots \int_0^\infty d\tau_N \; \prod_{i=1}^{N} w_{x_{i-1} x_i}\, e^{-\tau_i w_{x_{i-1}}} $$

where $\tau_i = t_i - t_{i-1}$ are the waiting times, and where also the final time $t$ is integrated away (the time of the next jump after $t_N$, integrated over the whole half-line, contributes a factor of one). Then, one can interpret $P(x_0, \ldots, x_N)$ as the probability that the symbolic sequence occurs at all, independently of how long it takes to produce it. Now let us stretch this a little further. Given that the symbolic sequence $x_0, \ldots, x_N$ occurs, what is the probability that the sequence of waiting times $\tau_1, \ldots, \tau_N$ is produced? Each time the system finds itself in a state $x$, jumping out of it is a Poissonian process with rate $w_x$. Hence this probability is

$$ dP(\tau_1, \ldots, \tau_N \mid x_0, \ldots, x_N) = \prod_{i=1}^{N} w_{x_{i-1}}\, e^{-\tau_i w_{x_{i-1}}}\, d\tau_i $$
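The time-independence of the symbolic-sequence probability can be checked by simulation. The sketch below (a toy model of my own; the rates are made up) exploits the "race of Poisson clocks" picture: out of state $x$, each target $y$ carries an independent exponential clock with rate $w_{xy}$, and the earliest clock wins, so the target is selected with probability $w_{xy}/w_x$ regardless of how long the race takes.

```python
import random

rng = random.Random(0)

# Hypothetical rates (assumed for illustration): w[x][y] = jump rate x -> y
w = {0: {1: 2.0, 2: 1.0}, 1: {0: 0.5, 2: 1.5}, 2: {0: 1.0, 1: 3.0}}

def next_state(x):
    """Race of independent Poisson clocks out of state x: the smallest
    exponential waiting time wins, so y is selected with probability
    w[x][y] / w_x, whatever the actual waiting times turn out to be."""
    clocks = {y: rng.expovariate(r) for y, r in w[x].items()}
    return min(clocks, key=clocks.get)

# Empirical probability of the symbolic sequence 0 -> 1 -> 2
n = 100_000
hits = 0
for _ in range(n):
    y = next_state(0)
    z = next_state(y)
    if (y, z) == (1, 2):
        hits += 1
empirical = hits / n

# Branching-ratio prediction: (w01/w0) * (w12/w1)
w0, w1 = sum(w[0].values()), sum(w[1].values())
predicted = (w[0][1] / w0) * (w[1][2] / w1)
```

With these rates the prediction is $(2/3)\times(1.5/2) = 0.5$, and the empirical frequency matches it to within Monte Carlo error, with the waiting times never entering the count.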

and of course

$$ \prod_{i=1}^{N} w_{x_{i-1} x_i}\, e^{-\tau_i w_{x_{i-1}}}\, d\tau_i = P(x_0, \ldots, x_N)\; dP(\tau_1, \ldots, \tau_N \mid x_0, \ldots, x_N) $$

Now this squares with my understanding that entropy is always a relative entropy. In this case the KS entropy of a continuous-time Markov jump process is the relative entropy of the probability of a path with respect to the probability of the waiting times, conditioned on the sequence of states. I also like this interpretation because interpreting $P(x_0, \ldots, x_N)$ as the probability of a mere sequence of symbols makes the KS entropy closer to information-theoretic concepts like the Kolmogorov complexity of a string.
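As a sanity check on this picture, here is a sketch (my own, on the same made-up toy rates as above) that estimates $-\frac{1}{t}\ln P(x_0,\ldots,x_N)$ along one long simulated trajectory and compares it with the stationary-average expression $\sum_x \pi_x \sum_y w_{xy} \ln(w_x/w_{xy})$, which is what this KS entropy should reduce to for an ergodic process.

```python
import math
import random

rng = random.Random(1)

# Hypothetical rate matrix (toy model): w[x][y], no diagonal entries
w = {0: {1: 2.0, 2: 1.0}, 1: {0: 0.5, 2: 1.5}, 2: {0: 1.0, 1: 3.0}}
wtot = {x: sum(w[x].values()) for x in w}

def stationary(dt=1e-3, steps=50_000):
    """Relax the master equation dp_y/dt = sum_x p_x w_xy - p_y w_y."""
    p = {x: 1.0 / len(w) for x in w}
    for _ in range(steps):
        p = {y: p[y] + dt * (sum(p[x] * w[x].get(y, 0.0) for x in w)
                             - p[y] * wtot[y]) for y in w}
    return p

pi = stationary()

# Stationary expression (assumed form): h = sum_x pi_x sum_y w_xy ln(w_x / w_xy)
h_formula = sum(pi[x] * r * math.log(wtot[x] / r)
                for x in w for r in w[x].values())

# Ergodic estimate of -(1/t) ln P(x_0, ..., x_N) along one long trajectory
x, t, logp = 0, 0.0, 0.0
for _ in range(200_000):
    t += rng.expovariate(wtot[x])            # waiting time in x
    u, acc, y = rng.random() * wtot[x], 0.0, None
    for cand, r in w[x].items():             # pick y with prob w_xy / w_x
        acc += r
        if u < acc:
            y = cand
            break
    y = cand if y is None else y             # guard against rounding
    logp += math.log(w[x][y] / wtot[x])      # ln of the branching ratio
    x = y
h_estimate = -logp / t
```

The two numbers agree to within statistical error, which is reassuring: only the branching ratios $w_{xy}/w_x$ enter the running logarithm, exactly as the relative-entropy reading suggests.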

*[follows?]*