Starting tomorrow I will be at the APS spring meeting, and I will keep updating this and forthcoming posts with sketches of my impressions and notes. I toured New Orleans today and was impressed by its beauty and liveliness; I was not expecting it. Last week I was at MIT to deliver a talk, hosted by my friend Jordan Horowitz, and most importantly I took the chance to visit MIT and Harvard, buy books at possibly one of the best bookstores in the whole world, and get a glimpse of first-class university life.

**Donald Salisbury**, *Leon Rosenfeld’s general theory of constrained Hamiltonian dynamics*. L. Rosenfeld studied with De Broglie and De Donder. In his paper “On the quantisation of wave fields” (and in other papers discussed by DS) he developed constrained Hamiltonian dynamics some twenty years before Dirac (’49). He wanted to work with Einstein on the quantum theory of gravity; Einstein agreed, but Rosenfeld was not granted the support. He later joined Dirac. He had a very ambitious program: to give a Hamiltonian form to all interactions known at the time, including the electron field and Einstein’s theory of relativity. Primary constraints follow from Lagrangian symmetries: when there are gauge symmetries, the Hessian of the Lagrangian is always singular. One can pass from the Lagrangian to the Hamiltonian and one obtains an additional arbitrary space-time function (hence no unique deterministic solution). Although Rosenfeld did not apply his procedure to General Relativity, had he done so he would have arrived at the Arnowitt-Deser-Misner Hamiltonian. Dirac and Rosenfeld were in contact, and in one letter Dirac asked “what is that lambda”; he uses the same notation, and it is a mystery why Dirac never acknowledged Rosenfeld’s work (unlike Bergmann).

**Tatiana Erukhimova**, *Physics Reality Show*. Now renamed Real Physics Live. A lot of APS is about “outreach”, such as physicsfestival.tamu.edu, educational videos and the like. Because the attention span of students is short, it is believed that one should feed them short videos; she shows one example with explosions of nitrogen gas, where hardly anything is explained to the students. I think this is precisely the kind of activity that makes physics at this conference so similar to any other industrial commodity. I contest even the idea that it is necessary and healthy to promote science, if the promotion is done like this. The second video, on the square-wheeled tricycle, was fun though.

**Elena Ferraro**, *CNOT sequences for heterogeneous spin qubit architectures in a noisy environment*. Electron spins are confined in structures, e.g. quantum dots, and double quantum dots give singlet-triplet qubits. They want to construct logical gates, and they use genetic algorithms to find CNOT sequences. First she describes the Hamiltonians, which preserve coherences. Randomness enters the picture in the genetic algorithm, but otherwise I don’t understand where the noisy environment enters.

**Erik Nimwegen**, *How do prokaryotic genomes evolve?* (Invited talk).

We start with Feynman: how does physics work? We first need to define measurable quantities and find relationships (laws) between these quantities, and the miracle of physics happens when you manage to find a theory that explains these relationships. However, if one looks at how mathematical population genetics works, the situation is quite different: no collection of laws, no clear measurable quantities (sometimes things are not measurable at all, like fitness etc.). So he thinks what the field most needs is to think in a more scientific way. He studies E. coli “in the wild”, meaning somewhere in Minnesota. They built a phylogenetic tree. Not all DNA is vertically inherited; a lot can be picked up from the environment (horizontal transfer).

**Louis Pecora**, *Cluster synchronisation in complex networks*. Considers networks of clusters and asks about synchronisation between them. The nodes are oscillators connected by edges with bidirectional weights. It seems that the dynamics he considers contains no noise, so this would be a special case of my treatment. (All of this might be relevant to my problem of oscillators: how does the entropy production rate relate to synchronisation of the oscillators? I could even look at very simple examples…). He uses computational group theory to understand which symmetries the network has, putting to work all of the machinery of group theory, including irreducible representations and Schur’s lemma. He then has an argument showing that not all regular behaviour corresponds to symmetries (there are equitable partitions that are not symmetric). For every equitable partition there is always a finer symmetry partition, which helps the analysis. I should ask him whether the problem he is considering is dissipative in some way or fully deterministic and conservative. Then, how does synchronisation happen w.r.t. Poincaré recurrences?

**T. Nashikawa**, *Prevalence of asymmetry-induced synchronisation in oscillator networks. Symmetry in complex networks* (see *Synchronisation: A universal concept in nonlinear sciences*, 2003; Pecora, Sorrentino, Hagerstrom, Roy, Nat. Commun.; Sci. Adv.). What he focuses on is symmetry breaking, in particular on symmetric states that are stable only when the system is asymmetric (asymmetry-induced symmetry), or vice versa symmetric systems in which stability is only possible when the state is symmetric (symmetry breaking).

**F. Petruccione**, *Steady states of open quantum Brownian motion*. Outline: open quantum walks; in a special limit they become open quantum Brownian motion; finally he analyses the Gaussian or non-Gaussian steady states. In an open quantum walk you have a lattice of nodes and a quantum walker sitting on them. The transitions of the walker jumping from node to node are given by quantum transition matrices (one would then ask what it means to have a nonequilibrium vs. an equilibrium steady state). An open quantum walk is a CPTP map on density matrices on the Hilbert space describing the nodes. For example: the “quantum Pascal triangle”. His overview of OQWs includes “dissipative quantum computing”, how you can recover classical walks, and so on. More recently people have studied mathematical problems (CLT, reducibility etc.). Open quantum Brownian motion arises as the continuous limit of OQWs (see Bauer, Bernard, Tilloy). They end up with a master equation that is different from the usual quantum Brownian motion equation, containing a Fokker-Planck term, a Hamiltonian, and a Lindblad term. They showed that you can give this formalism a physical meaning by introducing baths and considering decoherent interaction of degrees of freedom. By the usual Born-Markov approximation one arrives at the master equation (a process similar to the microscopic derivations of Breuer-Petruccione). Finally they study the steady states.
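To fix ideas for myself, here is a minimal numerical sketch of an open quantum walk on the integer line. The Kraus operators below are my own toy choice (a Hadamard-like pair), not anything from the talk: the walker carries an internal qubit, and each step applies B_L, B_R satisfying B_L†B_L + B_R†B_R = I, so the map is CPTP and total probability is conserved.

```python
import numpy as np

# Toy Kraus operators for hopping left/right; they satisfy
# BL†BL + BR†BR = I, so the walk is trace preserving.
BL = np.array([[1, 1], [0, 0]]) / np.sqrt(2)
BR = np.array([[0, 0], [1, -1]]) / np.sqrt(2)

def oqw_step(rho):
    """One step of the open quantum walk on Z.

    rho maps node index -> 2x2 internal density matrix."""
    new = {}
    for i, r in rho.items():
        for B, d in ((BL, -1), (BR, +1)):
            j = i + d
            new[j] = new.get(j, np.zeros((2, 2), dtype=complex)) + B @ r @ B.conj().T
    return new

# start the walker at the origin with internal state |0><0|
rho = {0: np.array([[1, 0], [0, 0]], dtype=complex)}
for _ in range(20):
    rho = oqw_step(rho)

probs = {i: float(np.real(np.trace(r))) for i, r in rho.items()}
print(sum(probs.values()))  # total probability stays 1
```

The position distribution one gets this way is the “quantum Pascal triangle” pattern mentioned above.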

**Anzi Hu**, *Steady-state properties of the transverse-field Ising model with dissipation along the field direction*. The transverse-field Ising model is like a bunch of quantum spins hanging from a rope like clean laundry, with fluctuations of the spin in the x-direction. The model has a Lindblad equation with a Hamiltonian and a dissipation, both pointing in the z direction. The system has a continuous phase transition at T = 0 (in 1D) from a phase where the order parameter vanishes to one where it is constant (question: is this a nonequilibrium or an equilibrium transition? She said nonequilibrium, but how does she establish that? By looking at quantum detailed balance?). She then compares with the classical model.

**A. Raju**, *Renormalisation group, Ising model and normal form theory*. He argues that normal form theory is a standard way to think about the renormalisation group. Power laws fail to describe interesting cases, there are logarithmic corrections, and their relationship with universality is unclear. All these things don’t fit together in a unique theory, apparently (I always had this impression about the RG…). In general one finds flow equations, e.g. for the Ising model in temperature and interaction. You identify a fixed point, linearise, and find the eigenvalues, which provide the critical exponents of the theory. When you have a transcritical bifurcation you can perform a change of coordinates into a normal form (linear). So the new equations in normal form are linear, with the eigenvalues. (Might this be related to Jordan normal forms when there are not enough eigenvectors per eigenvalue, or is the form always diagonalisable because it’s a Hessian? I think the latter situation holds…). So, for example, for the Ising model in 4D the true singularity is due to the fact that Lambert’s W function appears. I should look at the application of the RG to chemical models where you put together more and more reactions and redefine the transition rates. This might be a good application of the RG.
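The fixed-point-and-eigenvalues step he describes is easy to sketch on a toy flow. The coefficients below are a schematic Wilson-Fisher-like flow of my own choosing, not Raju’s equations: find the nontrivial fixed point, linearise numerically, and read off the eigenvalues whose role is to supply the critical exponents.

```python
import numpy as np

eps = 1.0  # d = 4 - eps; toy value

def flow(x):
    """Schematic one-loop RG flow (illustrative coefficients only)."""
    r, u = x
    return np.array([2.0 * r + u, eps * u - u ** 2])

# nontrivial fixed point of this toy flow: u* = eps, r* = -u*/2
fp = np.array([-eps / 2.0, eps])
assert np.allclose(flow(fp), 0.0)

# linearise by central finite differences: J[:, j] = d(flow)/d(x_j)
h = 1e-6
J = np.column_stack([
    (flow(fp + h * e) - flow(fp - h * e)) / (2 * h)
    for e in np.eye(2)
])
eigvals = np.linalg.eigvals(J)
print(np.sort(eigvals.real))  # relevant eigenvalue y_t gives nu = 1/y_t
```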

**Josey Stevens**, *Quantum information processing by a continuous Maxwell demon*. The problem: previous models of Maxwell demons were discrete, and discrete quantum systems don’t have any good semiclassical approximation. He wants to discuss both “open” and “closed” Maxwell demons (whatever that means: in a control-theory sense?). How they do it: Step 1: the demon is coupled to a qubit; Step 2: together they evolve with unitary dynamics; Step 3: they are measured at some point. To conclude: they have a model for a continuous Maxwell demon, they solve the time evolution of the model, and there are numerical errors preventing unitary dynamics. It seems that all of the interesting questions, though, are yet to be addressed.

**John Bechhoefer**, Momcilo Gavrilov, Raphael Chetrite, *Partial erasure of a bit: Direct measurement of Shannon’s entropy function using a feedback trap*. Maxwell’s demon turns 150 this year. In 1929 Szilard showed that if you have a single particle in a box, you can insert a partition and extract kT log 2 of work for free. Now one can do an experimental equivalent of that example with a colloidal particle. The modern picture of thermodynamics and information [Parrondo, Horowitz, Sagawa, Nat. Phys.] shows that, after Landauer, there is a cost to resetting a bit (I really believe that it makes no sense to distinguish between resetting, operating, preparing etc.; I just believe that some operation requires that amount of dissipation if one wants a computation to be performed). They compare two protocols, one with a tilt resulting in a high probability of moving the bit to one minimum, and one without tilt that gives no preferential occupation, and indeed they obtain dissipation in one case and reversibility in the other.
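For reference, the textbook bound they are probing is easy to evaluate: the minimal work for a (partial) erasure is kT times the drop in Shannon entropy of the bit. This is just the Landauer bound in units of kT, not their measured data.

```python
import numpy as np

kT = 1.0  # work measured in units of kT

def shannon(p):
    """Shannon entropy (in nats) of a binary distribution {p, 1-p}."""
    return -sum(q * np.log(q) for q in (p, 1.0 - p) if q > 0)

def min_erasure_work(p_final, p_initial=0.5):
    """Landauer bound for (partial) erasure of one bit:
    W >= kT * [H(p_initial) - H(p_final)]."""
    return kT * (shannon(p_initial) - shannon(p_final))

print(min_erasure_work(1.0))   # full erasure: kT ln 2 ≈ 0.693
print(min_erasure_work(0.75))  # partial erasure costs strictly less
```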

**Sivak** (in place of Crooks). Starts with the sentence I hate the most: “The entropy of the universe increases”. Life is fundamentally out of equilibrium, and this allows it to create order. Cites England. He wants to talk about conversion between energy stores. Molecular machines work at low Reynolds number, display large fluctuations, can pause, run backward, etc. His group works on nonequilibrium statistical biophysics. Questions: How can you make the future look different from the past at reduced energetic cost? How can you allocate dissipation to speed up the machine? [Brown and Sivak, PRE 2016] Inspired by “the length of time’s arrow”. He goes for a talk where he does not explain what the things he’s plotting are, making the point that “energy conversion steps of intermediate size can lead to high time asymmetry for a given free energy dissipation”. [Brown and Sivak, arXiv] Discrete-state models of molecular machines (ATP synthase, myosin, etc.). Kinetics of nonequilibrium driving in one simple three-state model with ATP, ADP, hydrolysis steps. Imagine that we have a fixed total dissipation (affinity) around the cycle (he argues that it is tightly regulated in real machines). How should we allocate the dissipation around the cycle? Lots of proposals in this direction. In a two-state cycle (even simpler!) the allocation of dissipation that maximises the flux can be found analytically; in a three-state cycle they already have to go numerical (!). The paper came out today.
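The two-state-cycle statement can be checked numerically with a toy parametrisation. The exponential splitting of the affinity below is my own assumption for illustration, not his model: fix the total affinity A around the cycle and scan the fraction f allocated to the first transition.

```python
import numpy as np

def flux(f, A=4.0):
    """Steady-state flux of a two-state cycle where a fraction f of the
    total affinity A is allocated to transition 1 (assumed exponential
    rate parametrisation, for illustration)."""
    k1p, k1m = np.exp(+f * A / 2), np.exp(-f * A / 2)
    k2p, k2m = np.exp(+(1 - f) * A / 2), np.exp(-(1 - f) * A / 2)
    return (k1p * k2p - k1m * k2m) / (k1p + k1m + k2p + k2m)

fs = np.linspace(0.01, 0.99, 99)
best = fs[np.argmax([flux(f) for f in fs])]
print(best)  # for this symmetric choice the optimum is the even split
```

With this (symmetric) choice of bare rates the flux is maximised by splitting the dissipation evenly, f = 1/2; the interesting physics in the talk is precisely when the optimum deviates from the even split.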

**Kater Murch**, *Exploring quantum thermodynamics in continuous measurement of superconducting qubits*. Quantum trajectories; he wants to identify unitary and non-unitary dynamics to distinguish between work and heat. It’s not clear to me what kind of non-unitary dynamics it is. It’s an experimental piece of research. A transmon qubit is resonantly coupled to a waveguide cavity. Qubit and cavity together give a polaron, some sort of “one-dimensional atom” that can be detected by fluorescence (not sure if what I’m writing makes actual sense…). He says that he would not observe the Jarzynski equality if he were not measuring the right amount of work.

**C. Jarzynski**, *Nano thermodynamics in the strong coupling regime*. He wants to give definitions of heat, work and state functions that reproduce thermodynamics even when the interaction energy with the environment is not negligible. I think this is very relevant to my own work, as one could see such “strong interaction” as a matter of coarse graining. He has in mind a rubber band as an example. He is of course marginalising (but at equilibrium), and he has an effective internal energy; so far so good [Seifert, PRL 116, 020601]. He introduces a lot of state functions that satisfy the right thermodynamic relations, but I don’t see how this goes beyond the definitions.

**Wolpert** (with England and Tegmark). Thermodynamics of computation; an important point: hardly anything in real computation has to do with bit erasure. 5% of annual US energy expenditure goes to computation. Major tech firms locate their data servers in or near major rivers to keep them from melting. 50% of the lifetime cost of current high-performance computing systems goes to energy. NYT: “the high energy requirements of the fastest computers have become the most daunting challenge…” Quantum computers are not going to solve this kind of problem: even if you can use a quantum computer, you still have to get the information out, store it, etc., and that’s classical. Approximate (“inexact”) computing: allowing noise in a computation reduces the energy required and the heat generated. Intuition: transfer noise from the hot environment to the computer, thereby cooling the environment, making the computer a refrigerator rather than a heater (this sounds like a violation of the 2nd law). In theory, one could use “thermal” random number generators to reduce battery consumption. Dynamic slowing of parallel computers (HPCs): slowing a computation reduces the energy used and the heat generated. Intuition: the system can stay close to equilibrium all the time if it changes slowly enough (but that, according to me, makes the computation too unreliable). A step backward: the brain and computation. Why is intelligence so rare in evolutionary history? (I very much disagree with this statement.) At 2% of our body mass, our brain uses 20% of the energy (about 12-20 watts: we are “dim bulbs”). A massive evolutionary pressure to perform computation as thermodynamically efficiently as possible. Logical irreversibility involves sequences of states; thermodynamic irreversibility involves sequences of marginals of state probabilities. Minimal work to erase a bit. Mandal, Jarzynski, Crutchfield, Boyd etc. studied the thermodynamics of systems that are richer than just bit erasure. He argues that erasure can be performed reversibly, but then there will be problems with the computation. Conclusion: no computer is thermodynamically reversible for arbitrary users (because each one has a different distribution of initialisations, according to what people do with their computer).

**Lu and Raz**, *Anomalous cooling and heating*. Oren explained this very nice work to me yesterday. The Mpemba effect is quite amazing: the possibility that a hot thing cools down faster than a cool one. It was apparently discovered by the Tanzanian amateur scientist Mpemba while making ice cream. Practical realisations and explanations involve several mechanisms (including evaporation etc.), but there is a simpler explanation. The argument of Lu and Raz, based on nonequilibrium statistical mechanics, is very simple: for a thermal state prepared at some temperature, the trajectory in ensemble space towards the final equilibrium state will not in general follow the line of equilibria, so it will not “pass by” the lower-temperature distributions. Now, one can always devise a situation where the hotter initial state couples to a faster relaxation mode of the dynamics. So the main point is: this is a nonequilibrium effect, due to the fact that the system does not go through thermal states in its evolution. Of course one can also have the inverse Mpemba effect by warming up instead of cooling down.
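The mode-decomposition argument can be sketched in a few lines. The toy energy levels and the symmetric-barrier choice of rates below are my own assumptions, not theirs: expand a thermal initial state on the eigenmodes of a detailed-balance rate matrix, and compare the overlap with the slowest mode for two preparation temperatures (the initial condition with the smaller overlap relaxes faster at long times).

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])   # toy energy levels
Tb = 1.0                         # bath temperature

def boltzmann(T):
    w = np.exp(-E / T)
    return w / w.sum()

# rate matrix with detailed balance at the bath temperature:
# W[i, j] = rate j -> i for i != j; columns sum to zero
pi = boltzmann(Tb)
W = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            W[i, j] = np.sqrt(pi[i] / pi[j])  # symmetric-barrier choice
np.fill_diagonal(W, -W.sum(axis=0))

# eigenvalue 0 is the stationary state; the next one is the slowest mode
evals, evecs = np.linalg.eig(W)
order = np.argsort(-evals.real)
u2 = np.linalg.inv(evecs)[order[1]].real  # left eigenvector of slow mode

def a2(T):
    """Overlap of a thermal initial state with the slowest mode."""
    return abs(u2 @ boltzmann(T))

print(a2(5.0), a2(2.0))  # Mpemba-like behaviour when the hotter overlap is smaller
```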

**Benjamin Machta**, *A subextensive bound on entropy production in the adiabatic limit*. The laws of physics are reversible in time, but the world we interact with is not: it is dissipative. He proposes a bound on dissipation based on the geodesic distance (thermodynamic length), the metric being the Fisher information. He thinks about systems that are very close to equilibrium, and asks questions about the cost of controlling a system.
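For a feel of the thermodynamic-length quantity, here is a sketch for the simplest case I can think of, a protocol through two-state (Bernoulli) distributions, where the Fisher-information line element integrates to a known closed form. This is my toy example, not Machta’s setting.

```python
import numpy as np

def fisher_length(ps):
    """Thermodynamic length of a protocol through two-state
    distributions {p, 1-p}, with the Fisher metric g(p) = 1/(p(1-p))."""
    ps = np.asarray(ps, dtype=float)
    mid = 0.5 * (ps[1:] + ps[:-1])           # midpoint rule for the metric
    return np.sum(np.abs(np.diff(ps)) / np.sqrt(mid * (1.0 - mid)))

# protocol: drag the occupation probability from 0.1 to 0.9
ps = np.linspace(0.1, 0.9, 2001)
L = fisher_length(ps)
exact = 2 * (np.arcsin(np.sqrt(0.9)) - np.arcsin(np.sqrt(0.1)))
print(L, exact)  # the discretisation converges to the closed form
```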

**Jonathan Pham**, Benjamin Vollmayr-Lee, *Field Theoretic Description of Non-Equilibrium Chemical Work Relations*. A very technical talk with a lot of Doi-Peliti path integrals, used to obtain a fluctuation theorem for work. They have an interesting gauge-like transformation of the fields.

**Danny Sweeney**, Donald Priour, Andrew Harter, Yogesh Joglekar, *Non-equilibrium steady states in “PT-symmetric” classical chains*. This should interest me. They analyse a chain of coupled oscillators, the usual Ornstein-Uhlenbeck situation. He discusses the problem of the existence of the steady state: no eigenvalue of the “A matrix” may have negative real part. One could also have purely imaginary eigenvalues, which might be interesting.
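A sketch of the steady-state question in the standard convention dX = A X dt + B dW (a toy damped chain of my own, not their PT-symmetric model; the sign condition flips with the opposite convention): a stationary state exists when every eigenvalue of A has negative real part, and its covariance solves a Lyapunov equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# toy chain of 3 damped, coupled oscillators, dX = A X dt + noise,
# with positions and momenta stacked (illustrative numbers only)
k, gamma, n = 1.0, 0.5, 3
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
A = np.block([
    [np.zeros((n, n)), np.eye(n)],
    [-K, -gamma * np.eye(n)],
])
D = np.block([
    [np.zeros((n, n)), np.zeros((n, n))],
    [np.zeros((n, n)), 2 * gamma * np.eye(n)],  # noise acts on momenta only
])

# a steady state exists iff every eigenvalue of A has negative real part
assert np.all(np.linalg.eigvals(A).real < 0)

# stationary covariance S solves the Lyapunov equation A S + S A^T + D = 0
S = solve_continuous_lyapunov(A, -D)
print(np.allclose(A @ S + S @ A.T + D, 0))  # True
```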

**D. Mandal**, *Stochastic thermodynamics and fluctuation theorems for active matter*. He develops stochastic energetics and fluctuation theorems for active Brownian particles. He considers a process with memory, contained in the self-correlation of the noise, which hence does not satisfy the Einstein relation (FDR): the environment is out of equilibrium. He goes to the underdamped regime and defines heat, work and energy consistently with the first law (I also have the impression that the first law is actually a definition, at times). Then he moves to the second law. His underdamped equation has an interesting second-derivative term, a correction to the viscous momentum term.

**Grzegorz Szamel**, *Evaluating linear response in active systems with no perturbing field: Application to the calculation of an effective temperature*. The point is that we should move beyond the morphology of active systems. Is a picture a flock, or does it just look like a flock? There are many mathematical models that produce the same “image”. One way is to look at linear response functions (see Nicholas Ouellette’s talk). In equilibrium you have fluctuation-dissipation relations; in active systems out of equilibrium, if one wants to calculate response functions one has to do it computationally. Like Dibyendu, he considers active Ornstein-Uhlenbeck particles.

**Nitzan Razin**, Raphael Voituriez, Jens Elgeti, Nir Gov, *Extracting work from gradients in active motion*. Di Leonardo et al. 2010: active systems that consume energy can perform work (ratchet). Razin: they consider Archimedes’ principle in an active fluid. In 1D it holds. Then they go to 2D.