Live blogging from ΣΦ

@ SigmaPhi in Corfu

Dominique Chu, Entropy production during computation. The question he is interested in is: what is the cost of deterministic, finite-time computation? The discussion started a long time ago, and the early conclusion, due to Feynman and Bennett, was that deterministic computation has no cost (but then, there is no computation at all): e.g. the logic gates constructed with billiard balls. However, such zero-energy computation is either infinitely slow (quasi-static) or inaccurate (billiard balls) [this is very close to my thoughts on computation, accuracy and speed]. He has a nice plot of the scaling of cost vs. time vs. accuracy in biological cells, from a paper where he asked what the minimal energetic cost of computation is. Digital deterministic computation can be performed fast and efficiently. Take-home messages: 1) the cost and the time scale linearly with the system size; 2) accuracy scales as a power of the system size (???). He considers a continuous-time Markov chain that relaxes to equilibrium; the initial state is the input, and the output is the equilibrium state (why equilibrium and not nonequilibrium steady states?). Computation is limited by several aspects: entropy production, sampling time, and the cost of sampling. The accuracy depends on the number of samples.
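Since the output is read by sampling the equilibrium state, the accuracy is tied to the number of samples by the usual statistics of an estimated probability (a textbook relation, not necessarily the scaling he derived):

```latex
% Estimating the output probability p from n independent samples of the
% equilibrium distribution: the statistical error shrinks as 1/sqrt(n).
\[
  \hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
  \sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}} \sim \frac{1}{\sqrt{n}} .
\]
```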

Many people, when they want to consider computation, go for the Turing machine, but he prefers to avoid it because, apparently, its thermodynamics is complicated. He only considers logical circuits. The minimal implementation of an AND gate is a CTMC with states (0,0), (0,1), (1,0) and (1,1), with transitions between them [this reminds me of some comments by Horowitz et al. that other gates cannot be realised by a two-state model because you need a sort of wolf-goat-cabbage problem]. We can think of this as a minimal chemical system made of only two molecules. For a few kT you get a lot of accuracy.
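A minimal sketch of the "a few kT buys a lot of accuracy" point, under my own assumptions: a four-state CTMC obeying detailed balance, with an energy landscape (gap dE, in units of kT) that favours the state encoding the correct output. The choice of energies and rates below is an illustration, not the speaker's actual construction.

```python
import numpy as np
from scipy.linalg import expm

# Four states of the two "molecules": (0,0), (0,1), (1,0), (1,1).
# The state encoding the correct gate output is dE (in kT) lower in energy;
# transition rates obey detailed balance, so the chain relaxes to equilibrium.
dE = 4.0                                   # assumed energy gap, in units of kT
E = np.array([0.0, 0.0, 0.0, -dE])         # favour (1,1) as the "correct" output

# Rate matrix: W[i, j] is the rate j -> i, with W[i, j] / W[j, i] = exp(E[j] - E[i]).
W = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        if i != j:
            W[i, j] = np.exp(-(E[i] - E[j]) / 2.0)   # symmetric Arrhenius choice
np.fill_diagonal(W, -W.sum(axis=0))

p0 = np.array([1.0, 0.0, 0.0, 0.0])        # some initial (input-dependent) state
p_inf = expm(W * 100.0) @ p0               # long-time, i.e. equilibrium, distribution
print("P(correct readout) =", p_inf[3])    # = 1/(1 + 3 exp(-dE)) ~ 0.95 for dE = 4
```

Even a gap of a few kT pushes the equilibrium error probability down to a few percent.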

Stefano Ruffo, Out-of-equilibrium physics in spontaneous synchronization. The subject belongs to dynamical systems' theory and was historically born in communication engineering. He will cover the Kuramoto and Sakaguchi models, the role of noise, then his own work related to inertia, and a lot of other topics, including fluctuation theorems (very recent work that he might not get to). Synchronization is the adjustment of the rhythm of active, dissipative oscillators caused by a weak interaction. Prerequisite: there must be an external source that keeps the oscillators moving. An active oscillator generates periodic oscillations in the absence of periodic forces. The first to observe antiphase synchronization was Huygens with pendulum clocks. In-phase synchronization is also possible; the experiment has been done recently, and the conditions to obtain it are quite involved. In more recent times synchronization was found in radio communication, flashing fireflies, circadian rhythms, and the brain (actually synchronization is not good for the brain; you have to find ways to de-synchronize). The Lyapunov exponent along the direction that shifts the phase is zero, so it is very easy to synchronize the phase. Then, the idea of Kuramoto was to look at the dynamics of the phases only. The Sakaguchi model introduces a drift in the equation that drives the system out of equilibrium [by the way, I should keep in mind this result that you can recast the system in Hamiltonian action-angle variables, which is a very powerful result]. Sakaguchi, by some trick, managed to calculate the stationary distribution of the Fokker-Planck equation [it would be interesting to calculate the distribution of the currents; Ruffo mentions that he uses this distribution to make considerations about the Fluctuation Theorem, but we need to keep in mind that the stationary distribution is over occupations, while the FT concerns the currents' large deviation function]. Ermentrout (1991) introduced inertia into the model, making it underdamped, and used it to analyze electric distribution networks. On a fully connected network, the model has a first-order phase transition.
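For reference, a minimal sketch of the Kuramoto dynamics, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i), written in its mean-field form; the coupling, frequency distribution and integration parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 500, 2.0, 0.01, 5000       # illustrative values
omega = rng.standard_normal(N)               # natural frequencies (Gaussian)
theta = rng.uniform(0.0, 2.0 * np.pi, N)     # random initial phases

for _ in range(steps):
    # Mean-field order parameter: r e^{i psi} = (1/N) sum_j e^{i theta_j}
    z = np.mean(np.exp(1j * theta))
    r, psi = np.abs(z), np.angle(z)
    # Kuramoto equation in mean-field form: dtheta_i/dt = omega_i + K r sin(psi - theta_i)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print("order parameter r =", np.abs(np.mean(np.exp(1j * theta))))
# r stays ~ 0 below the critical coupling and grows to O(1) above it
```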

David Mukamel. In many 1D models and some 2D models one observes anomalous heat conduction: the conductivity diverges as a power of the system size, and there are strange temperature profiles with a singular, meniscus-like shape, characterized by some exponent, close to the two reservoirs at the boundaries.
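In formulas (my paraphrase of the standard statement of anomalous conduction, not the speaker's precise exponents):

```latex
% Anomalous heat conduction: the finite-size conductivity grows with the
% system size L, and the temperature profile has a singular, meniscus-like
% shape near the boundaries (divergent slope at the contacts).
\[
  \kappa(L) \sim L^{\alpha}, \quad 0 < \alpha < 1, \qquad
  T(x) - T_{\text{boundary}} \sim x^{\mu}, \quad \mu < 1 \ \text{near the reservoirs.}
\]
```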

Alberto Imparato, Autonomous thermal motors. The question is: given two reservoirs at different temperatures, what is the minimal design to extract work? He considers a Langevin equation with a potential, a time-dependent protocol and a nonconservative force, where the potential and the external force are either periodic or stochastic, e.g. the tilting ratchet and the pulsating ratchet. The model he considers is inspired by that of Gomez-Martin et al., with two degrees of freedom and a periodic tilting potential. He moves to center-of-mass/relative coordinates, and then goes to the strong-coupling limit where the spring constant is large. He can adiabatically eliminate the fast variable (the relative distance) and find an effective equation for the center of mass. He can then find a periodic steady-state solution of the Fokker-Planck equation with a uniform current. The current can only be nonvanishing if the effective potential is not periodic [this reminds me of something by Landauer and Buttiker, check it out…] And in fact, he shows that this is equivalent to the Buttiker-Landauer model, with a periodic potential and a periodic temperature profile whose periods are incommensurable [any finite-precision approximation of incommensurable periods (which must be the physical situation) gives commensurable periods, hence the equilibrium vs. nonequilibrium nature of the system depends on time scales, and the time scale depends on the representation of the real numbers; this might give rise to interesting and maybe even paradoxical results].
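A rough sketch of the Buttiker-Landauer-type dynamics he maps onto: an overdamped particle in a periodic potential with a spatially modulated temperature. Here I use equal periods with a phase shift (the textbook ratchet), whereas the talk's mapping involved incommensurable periods; the potential, temperature profile and parameters are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Langevin dynamics dx = -V'(x) dt + sqrt(2 T(x)) dW, with a periodic
# potential and a phase-shifted periodic temperature; this generically produces
# a nonzero mean drift (a thermal motor). Ito discretization: the stochastic
# convention matters for multiplicative noise but is ignored here for brevity.
def V_prime(x):
    return np.cos(x)                        # V(x) = sin(x), period 2*pi

def T(x):
    return 1.0 + 0.5 * np.sin(x + 1.0)      # temperature modulation, same period, shifted

x = np.zeros(10_000)                        # an ensemble of independent trajectories
dt, steps = 1e-3, 20_000
for _ in range(steps):
    noise = rng.standard_normal(x.size)
    x += -V_prime(x) * dt + np.sqrt(2.0 * T(x) * dt) * noise

print("mean drift velocity ~", np.mean(x) / (steps * dt))
```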

Sung J., Chemical fluctuation theorem for vibrant reaction networks in living cells. He shows that experiments indicate that intracellular reactions regulated by enzymes are not Poissonian at all, and proposes a new concept of "vibrant reaction process". Basically, he proposes a stochastic reaction rate, due to the uncertainty and variation in enzyme expression. He finds a new relative variance [it seems to me that the calculations are possible because the fluctuation of the "vibrant" rate is independent of the intrinsic noise of the chemical evolution].
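To see why a fluctuating ("vibrant") rate immediately breaks the Poisson statistics, here is a sketch with a doubly stochastic counting process; this is just the textbook law of total variance, not the speaker's chemical fluctuation theorem, and the gamma-distributed rate is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Product counts over a time window T with a rate that fluctuates from cell to
# cell: draw a random rate, then a Poisson count given that rate. The rate
# fluctuations push the variance above the Poisson value Var(N) = <N>.
n_cells, T = 100_000, 10.0
rate = rng.gamma(shape=4.0, scale=0.5, size=n_cells)   # mean 2, variance 1
counts = rng.poisson(rate * T)

mean, var = counts.mean(), counts.var()
print("Fano factor Var(N)/<N> =", var / mean)          # > 1: super-Poissonian
# Law of total variance: Var(N) = <rate> T + T^2 Var(rate), hence
# Var(N)/<N> = 1 + T Var(rate)/<rate>  (= 1 + 10 * 1/2 = 6 here)
```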

Afshin Monthabhak. There are many pieces of evidence that the brain works at criticality. The question he is interested in is: how does the brain approach such criticality? Excitation and inhibition tendencies balance each other. There is not just a critical line or point, but a whole critical phase; this is the case for the Kuramoto model on hierarchical networks that mimic the brain, where the critical behaviour occupies an extended region. But is there a dynamical origin of such extended criticality? His model is a random directed network, where any two nodes are connected with some probability q, hence the average degree of the network is k = qN. The dynamics works through a transfer function: at a given time, the probability of a node being activated is determined by the activation of its neighbours through some transfer function: if all the neighbours fire, the node fires; if there is no input activity, it does not; the transfer function interpolates in between. Very simple. The largest eigenvalue of the adjacency matrix provides a lot of information about the collective dynamics of the network. The important parameter is the number of active sites. The activity-dependent branching ratio is the expected activation at the next time step given a certain degree of activation; the hypothesis is that a branching ratio of 1 is a good signature of a critical system. A mean-field analysis allows one to calculate the activity-dependent branching ratio, which has a linear part and a nonlinear part, distinguishing the two types of behaviour. There is a parameter such that, at its critical value, there is a transition between stability and instability (all firing vs. none firing). Actually, he has a whole interval of criticality. He then studies the fluctuations around the critical point, beyond mean-field. In the critical region, of course, the distributions are power laws with fatter and fatter tails. Furthermore, one has avalanches: in the critical region, one perturbs the system and the activity goes on and on for a long while [divergent self-correlation?].
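A rough sketch of this kind of probabilistic activation dynamics on a random directed graph, together with naive estimates of the branching ratio and the largest eigenvalue; the transfer function and all parameters are my own illustrative choices, not the speaker's model.

```python
import numpy as np

rng = np.random.default_rng(3)

N, q = 2000, 0.005                               # nodes, wiring probability (k = qN = 10)
A = (rng.random((N, N)) < q).astype(float)       # random directed adjacency matrix
np.fill_diagonal(A, 0.0)

w = 0.1                                          # weight tuning the largest eigenvalue to ~1
state = (rng.random(N) < 0.05).astype(float)     # sparse initial activity

branching = []
for _ in range(200):
    p_fire = np.clip(w * A @ state, 0.0, 1.0)    # simple piecewise-linear transfer function
    new_state = (rng.random(N) < p_fire).astype(float)
    if state.sum() > 0:
        # activity-dependent branching ratio: activity now vs. activity one step ago
        branching.append(new_state.sum() / state.sum())
    state = new_state

print("mean branching ratio ~", np.mean(branching))
print("largest eigenvalue of w*A ~", np.max(np.abs(np.linalg.eigvals(w * A))))
# a branching ratio (and largest eigenvalue) close to 1 signals the critical regime
```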

Vasilyev, Survival of a lazy evasive prey. Punchline: if the prey does not know where the predators are, it should stay where it is.

Carlos, Negative response to an effective bias by a mixed population of voters. A model of opinion formation: one often encounters situations in which trying harder, pushing stronger, or making any excessive effort appears counterproductive, leading to a smaller effect than the outcome achieved with a more modest investment; in particular, in long-lasting human relationships. But there are also examples in physics: electron transfer in semiconductors at low temperature, hopping processes in disordered media, etc. He considers a large society made of many small communities, each comprising N individuals. They have to vote for one of two candidates, one blue and one red. Each community is exposed to an external bias prompting them to vote for a preferred candidate. Each member of the community either aligns with the bias (ordinary voters) or against it (contrarians). They model this system by, guess what?, a spin model, with a temperature that allows for fluctuations of opinion within a given community, an interaction term between ordinary voters and contrarians, and a "magnetization" of the contrarians and a "magnetization" of the ordinary voters (so these are two populations of spins). To solve for the partition function, he goes to the continuum limit and finds expressions for the order parameter. There are several limits in which the expressions simplify, and one can find a negative slope in the coupling parameter, showing that the red candidate might win because of an excessive attempt by the pro voters to sway the anti voters. I don't see the physical mechanism behind this, and I'm surprised that this negative response occurs also for equilibrium systems.
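A sketch of the kind of two-population mean-field calculation one could set up here: self-consistent magnetizations for ordinary voters (who feel the bias +h) and contrarians (who feel -h). The couplings and the specific mean-field form are my own generic choice, so this only illustrates the setup; it does not by itself reproduce the negative response, which one would look for by scanning the couplings.

```python
import numpy as np

def magnetizations(h, J, beta=1.0, n_iter=2000):
    """Self-consistent mean-field magnetizations of ordinary voters (m_o)
    and contrarians (m_c) under an external bias h and cross-coupling J."""
    m_o, m_c = 0.1, -0.1
    for _ in range(n_iter):
        m_o = np.tanh(beta * (+h + J * m_c))     # ordinary voters align with the bias
        m_c = np.tanh(beta * (-h + J * m_o))     # contrarians align against it
    return m_o, m_c

f_contrarian = 0.3                               # assumed fraction of contrarians
for h in [0.0, 0.5, 1.0, 2.0]:
    m_o, m_c = magnetizations(h, J=0.8)
    total = (1.0 - f_contrarian) * m_o + f_contrarian * m_c
    print(f"bias h = {h:.1f}  ->  total magnetization = {total:+.3f}")
```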

Ruppeiner. A very qualitative talk, with many words but not many formulas. He mentions that the Fokker-Planck equation allows one to interpret the thermodynamic scalar curvature. He shows a table with calculations of the thermodynamic curvature for several thermodynamic systems (all of them equilibrium systems). Interestingly, it's the first time I see a speaker read from a prepared speech. He makes the statement that the entropy of the universe is a nondecreasing quantity, and mentions some facts about black-hole thermodynamics, presenting them as an application of the theory.

Sahay. He considers anti-de Sitter black holes, which are especially well suited to thermodynamics. AdS spacetime solves the Einstein equation with a negative cosmological constant. The attractive gravitational force increases with distance and acts as a box, which allows one to formulate stable canonical ensembles [this reminds me of the project I had of defining the canonical ensemble in GR…]. One then defines a metric from the second derivatives (the Hessian) of the entropy with respect to the fluctuating parameters, and from it the invariant scalar curvature. The thermodynamic geometry is flat for the ideal gas, curved for the van der Waals fluid and singular for something else. The curvature encodes first-order phase transitions. He then turns to the thermodynamic geometry of black holes [like the previous one, a very wordy talk with few calculations made explicit; the few formulas are standard, so it's difficult to understand exactly what this is all about]. Finally he mentions the Kerr-AdS black hole.
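For reference, the standard Ruppeiner construction (the textbook definition; the specific black-hole ensemble of the talk may differ):

```latex
% Ruppeiner metric: minus the Hessian of the entropy with respect to the
% fluctuating variables x^i (e.g. energy and volume, or mass and charge for
% a black hole); R is the invariant scalar curvature built from g_{ij}.
\[
  g_{ij} = -\,\frac{\partial^2 S(x)}{\partial x^i \, \partial x^j},
  \qquad
  R = 0 \ \text{(ideal gas)}, \quad
  R \neq 0 \ \text{(van der Waals)}, \quad
  |R| \to \infty \ \text{near criticality.}
\]
```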
