“Acculturated people are those people who find they are no longer treated as the sort of people they thought they were, because the outside world has changed.”
Kurt Vonnegut, Timequake
This quote, or some variant of it, is attributed to Lawrence Joseph Henderson. I would love to understand the line of reasoning that led to this thought, but I can’t find the origin of the quote. Someone online places it in his work The Order of Nature, which is freely available on the Internet Archive, but I could not find it there. The writers of the proudly vintage website Today in Science History attempted a similar quest, without success. They append their own interpretation:
“Then all the fundamental insights gained for science led to wider applications. Thus the steam engine served as a stimulus, but was only one beneficiary among many other developments in technology.”
I think this interpretation might miss the key point. Thermodynamics developed as a discourse about the power of machines during the industrial revolution. It was (and still is) a patchwork of general principles (the first law, the second law, etc.) and special cases (the ideal gas, Fourier’s law, etc.). But it is not at all clear to me if and how all this speculation actually fed back into the technical applications: the general principles are more or less intuitive to anyone doing practical things, and the special cases do not apply to, say, the Diesel engine.
I’ve had the honor of being invited to CERN to talk about the recent experience of Scienceground at Festivaletteratura. Studio Corraini helped me prepare a nice presentation (find some more considerations below).
John Beecham, Abolish Outreach
Let me add a few more personal considerations.
“Outreach” conveys the idea of us scientists lecturing “lay” people, a one-way style of communication. Of course lectures are definitely part of the toolkit (expertise exists!), but they can’t be the whole story. Fortunately other speakers before me pointed out that they prefer “people engagement”, which comes slightly closer to what I have in mind: a two-way conversation where we can also interact with and learn from the “audience”. Unfortunately, as with so many words, “people engagement” is already becoming a buzzword in the corporate university, and I guess it will soon be stripped of any meaning – like all those words whose pervasiveness is the best indicator of the absence of their referent: excellence, merit, synergy, interdisciplinarity, etc.
As for “communication”, there’s too much communication in the world already: too many people involved in it, entire sectors of our bullshit economy sustained by communication alone, and too many communication tools that only add to the noise. Fighting on that ground is prohibitive and counterproductive; it risks turning science into just another piece of cool “infotainment”, adding to the noise rather than to the signal – as if we didn’t trust the transcendental value of science as a method of critical inquiry into the world, so that it needs a little help with promotion. Yesterday I learnt that some Italian institution, maybe the INFN, will base 20% of its evaluation of scientists on their outreach activities. Frankly, I find this horrible news. External incentives often have the effect of deteriorating people’s work ethic (as with those surgeons who cherry-pick their patients in order to improve their survival statistics), and of pushing people who do not have any particular talent (can we talk about that?) into something they are not particularly interested in. I find the emphasis on such “soft skills” an alarming signal of decadence.
So maybe here lies the crucial node that wasn’t resolved yesterday (the final round table, like all such round tables, gave the false impression that we were all aligned, which we weren’t): do we intend this “outreach” to transmit the scientific method to the generations to come, or to be a tool for the shorter-term sensitization, engagement, and enrollment of people? At this point I have the impression that these two goals don’t go together. We may have reached the point described by Ivan Illich whereby any further effort in the industrialization of a human service leads to the opposite effect: more papers, more journals, more interviews, more metrics, more tweets, more pop books, etc. deteriorate the quality of science – and there’s no communication without good content. Communication is of one piece with all the other aspects of science, and unfortunately many of those aspects are becoming more and more disappointing.
My personal take is that we don’t need more scientists, we need better science. We don’t need to give everybody a little bit of Higgs boson to get them into our particular club – as work-force, “stake-holders”, tax-payers, or simply consumers of scientific “products” – but we should see to it that the good old-fashioned values of science seep into society, regardless of the particular discipline we practice and love (one of the speakers identified as an “enemy” a man who questioned the opportunity of a new accelerator and would prefer more focus on climate science…). To do this, I sometimes have the impression that physicists should go under the radar: High-Energy Physics has been the dominant narrative for so long, and now we have the symptoms of a late empire: books about it take on a weird mystical and motivational tone which is precisely the opposite of rational thinking. As with all empires that do not live up to their promises, people will rebel against it – starting with those scientists whose work we physicists have so long held in the utmost contempt, and maybe even the swarms of young Ph.D.’s and students who will live in uncertain times as regards the credibility of future projects, such as new accelerators. As Latour said at a meeting at the end of Scienceground, “it’s a great time for science, but not necessarily the same science”. So maybe it’s time to go clandestine and start rebuilding the community from the fundamentals, from the methods, using word of mouth instead of massive and uninformative noise.
The talk did create a lively conversation, animated in particular by one bold sentence I reported in my presentation, by Ben Goldacre – an active physician, great communicator, pundit of the “public engagement of science” community, possibly smug no doubt, but a crucial voice in the debate – and by my insistence that it’s pointless to give superficial narratives of science and proxies in an attempt to approach the “general public”, and that I’m more interested in raising the bar and providing both high-level content and the context of research – in an effort to abolish the separation between academia and the world outside of it, and to use the tools of science to empower people (for example, by analyzing how to fetch, read, and contextualize a scientific paper instead of resorting to newspapers). One or two of the other speakers may have felt addressed, so, in the improbable case they are reading this, I would like to point out that I personally appreciate all efforts at doing quality things, and that I was positively impressed by the work of most (but not all) of the speakers. We just shouldn’t believe too much that “that’s the system we live in”, or something like that – something I’ve been told over and over in life, and it’s simply not true. We live in many different “systems” at the same time. In particular, on one side there are communities of people who share a passion and have their own professional ethics and social practices, and then there’s an industrial superstructure of appropriation of the “products” of such communities that is pervasive but has no roots and produces no intrinsic value. To be effective it has to trick people into believing it is necessary for running the processes that communities would run anyway, and have run for decades before (and, given that I was at CERN, there’s no better example than “the Internet”; or think of predatory scientific publishers).
In so doing, it corrupts the spirit of the community, buying its members out of the social norms that give some truth-value to their practices (e.g. by giving out prizes). It’s no accident that this industrial system of production of goods and services frantically jumps from one thing to the next: whatever it touches, dies (think of any musical trend the music industry has come to exploit, like hip-hop recently). And I don’t want that to happen to science. The Scienceground project was meant precisely to show that you can work outside of that structure.
As every year for quite some time now, I have the opportunity of organizing some scientific activities at a major literary festival, Festivaletteratura. Over the years the format has changed a lot, moving from traditional “top-down” frontal meetings where science is narrated to the audience, to more horizontal, “bottom-up” approaches where the public interacts with the experts in order to understand both the contents and the context of science. For this reason I don’t like to call these activities “divulgation”, as I explained more thoroughly last year.
A further external constraint this year was a serious budget limitation. This was nice, because constraints are opportunities for creativity. So what better than involving fifteen young and enthusiastic students and researchers to actually generate the contents of the activities, instead of calling in external experts to impart their knowledge?
So there we have it: it’s called Scienceground, and you can find the whole program here. We will run laboratories (data mining for adolescents, cryptography for children, machine learning and probability), web interviews (Peter Woit, Carlo Rovelli, Sabine Hossenfelder, Harry Collins, Alex Reinhart), a podcast station, a library with selected books, and a wide spectrum of activities that make up the scientist’s daily life: drafting a paper (the report of the experience will be LaTeXed live), printing, reading and commenting on other people’s papers, scribbling formulas, discussing ideas, running small and unreliable social experiments.
The main themes of this workshop will be: the status of high-energy physics, with the books of Woit and Hossenfelder being discussed, and conversations with Tommaso Dorigo, who will be at the festival in person; the use and misuse of statistics outside and inside of science (I highly recommend Statistics Done Wrong by Alex Reinhart, a great introductory book on a big problem we scientists have right now); “Big Data” and machine learning (another great book I’m reading: A Theory of the Drone by Grégoire Chamayou); the socio-anthropological* aspects of science.
This latter aspect is the one I’m most interested in lately. As Harry Collins explains here, and Bruno Latour argues in Laboratory Life, the sociology of science has long been interpreted as the understanding of the social context in which the scientist works and knowledge is produced; and most people still understand it this way. But this has more to do with the sociology of the community of people around science (politicians, journalists, bureaucrats, etc.) who pressure it from all fronts without understanding its key values, than with that of the community of scientists themselves. This frame gives rise to a narrative of how the historical context affects the output of research (most of the history of science is still, to this day, an anecdotal account), on the tacit assumption that the research would have had its “natural course” had it been run outside of these conditions, if that were possible**. It is not. So, after the excesses of militant postmodernism, as I understand it this “third wave” of the sociology of science is more interested in understanding the internal social workings of the community (that’s why*** Collins chose the gravitational-wave community: because it is reasonably shielded from external interests and pressures compared to, say, genomics****). That is, it studies the community of scholars doing research as an anthropological tribe, using the tools that anthropology developed to study “primitive” cultures around the world in order to mirror ourselves. What are the myths, beliefs, shamans, outcasts, etc., and ultimately: is the community healthy, that is, consistent with its own values of “truth”?
I personally find this question extremely urgent in science, for reasons that should be completely obvious to any scientist who attends any conference, so we should look with curiosity at the socio-anthropological method, which allows one at the same time to be extremely respectful of, and accepted by, the community being studied, yet skeptical about its mechanisms*****. In a way I find this method orthogonal to the scientific method, as the first requires identification with the object of study, the second total detachment from it******.
I find this idea incredibly fascinating and powerful, and by now I’m sorry that some “of us” (as if this made sense) systematically cherry-picked quotes from Latour, Collins and others to “fight back” in a petty war between fields of knowledge, on the assumption that science was under attack. I think they rather missed the whole point*******.
* By the way: here I very ignorantly and deliberately use the words “sociology” and “anthropology” interchangeably, because it just makes a lot of sense. I do know these are very different things, but I wonder whether the distinction makes sense anymore.
** This is the “purification” process that Latour denounces in We have never been modern, another book that I highly suggest.
*** “Why”: this is a retroactive narrative I made up myself. We don’t actually know why he chose it back then.
**** This is also a process of “purification”; Latour instead studied a community of endocrinologists and could not make a clear separation between internal and external.
***** This is where a “clash of words” occurs: people in different fields attach different meanings to words, and they start fighting over them as if they owned them. I believe this might be at the origin of certain choices of jargon that produced so much controversy over Latour’s claims (if one reads the early Latour, the “production” of scientific facts is not an accusation but a matter of method: the question “is the fact true?” is outside the scope of the anthropologist – and, I would say, of the scientist too).
****** I’m not totally convinced by this; it’s a subtler issue at higher levels of subtlety, but let it be…
******* I’m obviously referring to Fashionable Nonsense by Bricmont and Sokal. I have conflicted feelings about this book and I should re-read it. The first time I read it I was totally caught up. It actually made me laugh several times. But maybe for the wrong reasons. Exactly as with great stand-up comedy: some is based on easy jokes; but some other elegantly demolishes your sense of reality, and after the laughter is gone you are left with a mixed sense of anxiety and shame.
Reduction of information will be a theme of the day.
Alexander N. Gorban, Mathematical frameworks for model reduction: invariant manifolds, singular perturbations, tropical asymptotics and beyond. He describes ours as the era of complexity [or of complicatedness?] and points out that we should look at the history of ideas to understand where we’re at. Points out that when using applied mathematics for modeling, the usual paradigm of old laws – new phenomenon – new laws no longer works, because we adapt our models – we do model engineering – to fit the phenomena [the problem with models without theory is obvious: does the model have any universal feature? And is that universality a property of a class of models or of “reality”, whatever that means?]. Comments that complexity is not in things, it is in our problems and in our approaches to them [but is any science “in” things?]. Mentions complex balance and the fact that Boltzmann had already proved a similar property. Then goes into a mix of papers, anecdotes, some specific considerations mixed with general ones. No mention of fluctuations to be found here (despite mentions of Boltzmann and Einstein). A person from the audience points out several missing acknowledgements in the presentation (as is his habit), of course including some of his own work. This is always the case in these presentations by elderly people approaching the final years of their career, re-writing the history of science in such a way that their own contribution stands out more prominently (that’s usually what they mean by “history”).
M. Cates. J. R. Howse et al., PRL 99 (2007); I. Buttinoni, PRL 110; J. Palacci et al., Science 2013: nice experiments on Janus particles in peroxide. They move in swarms. [My question: has the collective behavior been studied?] P. Galajda et al., J. Bacteriol. 189, 8704 (2007). Mentions that this creation of asymmetry should be impossible because the potential is the same on both sides [but here I might disagree – isn’t this the usual problem of conflating equilibrium with uniformity?]. More interesting is R. Di Leonardo, 2009, where rectification now takes the form of constant rotation. So it happens that there is a motility-induced phase separation [MEC + J. Tailleur, PRL 2008, EPL 2013; Fily et al., PRL 2012; Stenhammar, PRL 2013; Theurkauff, PRL 2012; Buttinoni, PRL 2013; etc.]. [Question: are stalling states interesting for active particles?]
Stochastic field theory of phase separation: he compares Model B for passive phase separation and coexistence with detailed balance (quartic free-energy functional). They propose an Active Model B by changing the free-energy structure. He adds a square-gradient term which is not a functional derivative and which of course looks similar to interface-growth KPZ models [Wittkowski et al., Nature Comm.]. But it still predicts neither circulating currents nor cluster phases. So something is missing, and he has to add yet another term [C. Nardini et al., PRX 7, 021007 (2017); E. Tjhung et al., arXiv:1801.07687].
P. Vagner, Electrochemistry in GENERIC. Electrochemical models: Nernst-Planck, Nernst-Planck-Poisson, Bikerman. Mixture theory: Bedeaux, Albano, Physica A 147 (1987); Dreyer et al., preprint 2018. In the ’80s, Marsden and Weinstein: electromagnetic Poisson brackets [The Hamiltonian structure of the Maxwell-Vlasov equations].
Suppose you receive an email from someone who claims: “here is the design of a machine that runs forever and ever and produces energy for free!”. Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll up your sleeves, and get your hands dirty, holding the Second Law as your lodestar.
Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”).
In other words, the observer might have only partial information about the setup, in quantity or in quality, because he has been marginalized by society (most crackpots believe they are misunderstood geniuses). Therefore we will call such an observer “marginal”, which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables… In fact, our modern understanding of thermodynamics, as embodied in statistical mechanics and stochastic processes, is founded (and funded) on ignorance: we never really have “complete” information.
If we actually did, all energy would look alike: it would not come in “more refined” and “less refined” forms, there would be no differentials of order/disorder (to use Paul Valéry’s beautiful words), and that would put an end to thermodynamic reasoning, the energy problem, and generous research grants altogether.
Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what guarantees that we are doing things right, and that he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but I will roughly follow the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.
The laws of thermodynamics that I address are:
The list above is all in the “area of the second law”. What about the other laws? Well, thermodynamics has long been a phenomenological science, a patchwork. So-called Stochastic Thermodynamics is trying to put some order into it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live on somewhat different conceptual planes, and it’s not even clear whether they are theorems, prescriptions, or habits (a bit as in jurisprudence…2). Within Stochastic Thermodynamics, the Zeroth Law is so easy that nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the outset to boil the description down to a minimum.
This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time, the environment is so large that its state does not evolve within the timescales of the system3. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process.
How exactly the system evolves and what the relationship between system and environment is will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by an index. Each of these reservoirs is “at equilibrium” on its own (whatever that means… 4). Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical-potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.).
The fundamental quantities of stochastic thermodynamic modeling thus are:
Dissipation is quantified by the entropy production:
We are finally in the position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that all cumulants scale linearly with time.
Comment: This is not trivial, it follows from the explicit expression of the path-integral, see below.
Homework: Derive this relation from the FR in one line.
Homework: Derive this relation using Jensen’s inequality.
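For the record, the Jensen route really is a one-liner. A sketch, writing Σ for the entropy production whose integral fluctuation relation reads ⟨e^{−Σ}⟩ = 1:

```latex
% Second Law from the IFR, by convexity of the exponential (Jensen's inequality)
1 \;=\; \langle e^{-\Sigma} \rangle \;\geq\; e^{-\langle \Sigma \rangle}
\quad\Longrightarrow\quad
\langle \Sigma \rangle \;\geq\; 0 .
```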
Homework: Derive this relation by taking the first derivative of the IFR w.r.t. the affinities. Notice that the average also depends on the affinities.
Homework: Derive this relation by taking the mixed second derivatives of the IFR w.r.t. the affinities.
Homework: Derive this relation by taking the mixed second derivatives of the FR w.r.t. the affinities.
Notice the implication scheme: FR => IFR => 2nd, IFR => S-FDR, FR => RR.
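The FR can also be checked numerically at the level of the scaled cumulant generating function, where it is equivalent to the Gallavotti-Cohen symmetry λ(q) = λ(−a − q). A minimal sketch for a toy model of my own (not from the text): a two-state system coupled to two reservoirs, i.e. two channels between the same pair of states forming one “multigraph” cycle, with hypothetical rates.

```python
# Gallavotti-Cohen symmetry lambda(q) = lambda(-a - q) for a two-state,
# two-channel system; a is the affinity of the single cycle.
# All rates are hypothetical, chosen only for illustration.
import math

w1p, w1m = 2.0, 1.0   # channel 1: A -> B, B -> A
w2p, w2m = 0.5, 3.0   # channel 2: A -> B, B -> A

# affinity of the cycle (log of the ratio of forward to backward rate products)
a = math.log(w1p * w2m / (w1m * w2p))

def scgf(q):
    """Largest eigenvalue of the generator tilted by q on channel 1's current."""
    l00 = -(w1p + w2p)                 # total loss at A
    l11 = -(w1m + w2m)                 # total loss at B
    l01 = w1m * math.exp(-q) + w2m     # B -> A; channel 1 counted as -1
    l10 = w1p * math.exp(q) + w2p      # A -> B; channel 1 counted as +1
    tr, det = l00 + l11, l00 * l11 - l01 * l10
    return 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))

# lambda(0) = 0 (no tilt: the generator itself), and the symmetry holds exactly
print(scgf(0.0), max(abs(scgf(q) - scgf(-a - q)) for q in (-1.0, -0.2, 0.6, 1.4)))
```

The symmetry holds for every choice of positive rates, which is exactly the content of the FR for the complete current.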
Now we assume that we can only measure a marginal subset of currents (index always has a smaller range than ), distributed with joint marginal probability
Notice that a state where these marginal currents vanish might not be an equilibrium, because other currents might still be whirling around. We call this a stalling state.
My central question is: can we associate to these currents some effective affinity in such a way that at least some of the results above still hold true? And are all the definitions involved just a fancy mathematical construct, or are they operational?
First the bad news: In general the FR is violated for all choices of effective affinities:
This is not surprising, and nobody would expect it to hold. How about the IFR?
Mmmhh. Yeah. Take a closer look at this expression: can you see why there actually exists an infinity of choices of “effective affinities” that would make that average cross 1? Which, on the other hand, is just a number, so who even cares? So this can’t be the point.
The fact is that the IFR per se is hardly of any practical interest, as are all “absolutes” in physics. What matters are “relatives”: in our case, response. But then we need to specify how the effective affinities depend on the “real” affinities. And here a crucial technicality steps in, whose precise argumentation is a pain. Based on reasonable assumptions7, we demonstrate that the IFR holds for the following choice of effective affinities:
where is the set of values of the affinities that make the marginal currents stall. Notice that this latter formula gives an operational definition of the effective affinities that could in principle be reproduced in the laboratory (just go out there and tune the tunable until everything stalls, and measure the difference). Obviously:
Now, according to the inference scheme illustrated above, we can also prove that:
Notice instead that the RR is gone at stalling. This is a clear-cut prediction of the theory that can be tested with basically the same apparatuses with which response theory has been tested so far (not that I actually know what those apparatuses are…): at stalling states, unlike at equilibrium states, the S-FDR still holds, but the RR does not.
You have definitely had enough of it at this point, and you can give up here. Please exit through the gift shop.
If you’re stubborn, let me tell you what’s inside the box. The system’s dynamics is modeled as a continuous-time, discrete-configuration-space Markov “jump” process. The state space can be described by a graph, where is the set of configurations, is the set of possible transitions or “edges”, and there is an incidence relation between edges and pairs of configurations. The process is determined by the rates of jumping from one configuration to another.
We choose these processes because they allow some nice network analysis and because the path integral is well defined! A single realization of such a process is a trajectory
A “Markovian jumper” waits at some configuration for some time with an exponentially decaying probability with exit rate , then instantaneously jumps to a new configuration with transition probability . The overall probability density of a single trajectory is given by
One can in principle obtain the p.d.f. of any observable defined along the trajectory by taking the marginal of this measure (though in most cases this is technically impossible). Where does this expression come from? For a formal derivation, see the very beautiful review paper by Weber and Frey; but note that it is what one would intuitively come up with if one had to simulate the process with the Gillespie algorithm.
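Indeed, a Gillespie simulation is little more than a transcription of the trajectory measure: draw an exponential waiting time with the exit rate, then choose the jump with probability rate over exit rate. A minimal sketch with a hypothetical two-state network (state names and rates are my own illustration):

```python
# Minimal Gillespie simulation of a continuous-time Markov jump process.
# The two ingredients -- exponential waiting times with the exit rate, and
# jumps chosen with probability w / exit_rate -- are exactly the two factors
# appearing in the trajectory probability density.
import random

rates = {("A", "B"): 2.0, ("B", "A"): 1.0}  # w(x -> y), hypothetical

def gillespie(x0, t_max, rng):
    """Simulate up to time t_max; return the list of (jump time, state)."""
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while True:
        moves = [(y, w) for (src, y), w in rates.items() if src == x]
        exit_rate = sum(w for _, w in moves)
        t += rng.expovariate(exit_rate)   # exponentially distributed waiting time
        if t > t_max:
            return traj
        r = rng.random() * exit_rate      # pick the jump with probability w / exit_rate
        for y, w in moves:
            r -= w
            if r <= 0.0:
                x = y
                break
        traj.append((t, x))

rng = random.Random(42)
t_max = 2000.0
traj = gillespie("A", t_max, rng)

# occupation-time fraction of A; should approach w(B->A)/(w(A->B)+w(B->A)) = 1/3
times = [t for t, _ in traj] + [t_max]
occ_A = sum(times[k + 1] - t for k, (t, x) in enumerate(traj) if x == "A")
print(occ_A / t_max)
```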
The dynamics of the Markov process can also be described by the probability of being at some configuration at time , which evolves with the master equation
We call such probability the system’s state, and we assume that the system relaxes to a uniquely defined steady state .
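As a sketch of what this assumption amounts to in practice, here is a hypothetical 3-state network whose steady state is obtained by simply Euler-propagating the master equation until relaxation (the rates are my own illustration, not taken from the text):

```python
# Steady state of the master equation d p / dt = L p on a small network,
# by repeated application of the Euler propagator p -> p + dt * L p.
# w[(i, j)] is the (hypothetical) rate of the jump i -> j.
w = {(0, 1): 1.0, (1, 0): 2.0, (1, 2): 0.5, (2, 1): 1.5,
     (2, 0): 0.3, (0, 2): 0.7}
n = 3

def L_apply(p):
    """Master-equation right-hand side: gain minus loss at each configuration."""
    out = [0.0] * n
    for (i, j), rate in w.items():
        out[j] += rate * p[i]   # gain at j from jumps i -> j
        out[i] -= rate * p[i]   # loss at i
    return out

p = [1.0 / n] * n               # any initial condition relaxes to the same state
dt = 0.01
for _ in range(200000):
    p = [pi + dt * dpi for pi, dpi in zip(p, L_apply(p))]

print(p)  # steady state: L_apply(p) vanishes and the components sum to 1
```

Because the network is finite and irreducible, the steady state is unique, which is the assumption made in the text.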
A time-integrated current along a single trajectory is a linear combination of the net number of jumps between configurations in the network:
The idea here is that one or several transitions within the system occur because of the “absorption” or “emission” of some environmental degrees of freedom, each with a different intensity. However, for the moment let us simplify the picture and require that only one transition contributes to a current, that is, that there exists such that
Now, what does it mean for such a set of currents to be “complete”? Here we get inspiration from Kirchhoff’s Current Law in electrical circuits: the continuity of the trajectory at each configuration of the network implies that after a sufficiently long time, cycle or loop or mesh currents completely describe the steady state. There is a standard procedure to identify a set of cycle currents: take a spanning tree of the network; then the currents flowing along the edges left out from the spanning tree form a complete set.
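The standard procedure above can be sketched in a few lines: grow a spanning tree by breadth-first search and collect the left-out edges (the chords), whose currents form the complete set. The small network below is a hypothetical example of my own:

```python
# Spanning tree of a small network via BFS; the chords (edges left out of the
# tree) carry the |E| - |V| + 1 independent cycle currents.
from collections import deque

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]  # undirected, hypothetical

def spanning_tree(nodes, edges, root=0):
    adj = {v: [] for v in nodes}
    for e in edges:
        u, v = e
        adj[u].append((v, e))
        adj[v].append((u, e))
    seen, tree, queue = {root}, [], deque([root])
    while queue:
        u = queue.popleft()
        for v, e in adj[u]:
            if v not in seen:       # take the edge only if it reaches a new node
                seen.add(v)
                tree.append(e)
                queue.append(v)
    return tree

tree = spanning_tree(nodes, edges)
chords = [e for e in edges if e not in tree]
print(chords)  # here |E| - |V| + 1 = 2 chords, hence 2 cycle currents
```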
The last ingredients you need to know are the affinities. They can be constructed as follows. Consider the Markov process on the network where the observable edges are removed. Calculate the steady state of its associated master equation, which is necessarily an equilibrium (since there cannot be cycle currents on a tree…). Then the affinities are given by
Now you have all that is needed to formulate the complete theory and prove the FR.
Homework: (Difficult!) With the above definitions, prove the FR.
How about the marginal theory? To define the effective affinities, take the set of edges along which the observable currents run. Notice that now its complement, obtained by removing the observable edges – call it the hidden edge set – is not in general a spanning tree: there might be cycles that are not accounted for by our observations. However, we can still consider the Markov process on the hidden space and calculate its stalling steady state, and ta-taaa: the effective affinities are given by
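A sketch of this construction in the simplest case: a 3-state cycle with a single observed edge, so that the hidden edge set happens to be a tree and its stalling state satisfies detailed balance. The log-ratio formula below, mirroring the complete-network definition of the affinities, is an assumption of this sketch (rates are hypothetical); in this unicyclic case the effective affinity reduces to the full cycle affinity:

```python
# Effective affinity from the stalling state of the hidden process, for a
# 3-state cycle 0 -> 1 -> 2 -> 0 where only the edge 2 <-> 0 is observed.
# The hidden chain 0 - 1 - 2 is a tree, so its steady state is an equilibrium
# fixed by detailed balance. The log-ratio formula for F_eff is an assumption
# of this sketch, mirroring the complete-network case. Rates are hypothetical.
import math

# rates w[(i, j)] for the jump i -> j
w = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 3.0, (2, 1): 0.5,
     (2, 0): 1.5, (0, 2): 0.8}

# stalling (equilibrium) state of the hidden chain, from detailed balance:
# pi_1 / pi_0 = w01 / w10 and pi_2 / pi_1 = w12 / w21
r1 = w[(0, 1)] / w[(1, 0)]
r2 = r1 * w[(1, 2)] / w[(2, 1)]
Z = 1.0 + r1 + r2
pi = [1.0 / Z, r1 / Z, r2 / Z]

# effective affinity of the observed edge 2 -> 0
F_eff = math.log(w[(2, 0)] * pi[2] / (w[(0, 2)] * pi[0]))

# for a unicyclic network this reduces to the full cycle affinity
a_cycle = math.log(w[(0, 1)] * w[(1, 2)] * w[(2, 0)]
                   / (w[(1, 0)] * w[(2, 1)] * w[(0, 2)]))
print(F_eff, a_cycle)
```

When the hidden set does contain cycles, the stalling state is no longer an equilibrium and must be computed as the steady state of the hidden master equation, as described in the text.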
Proving the marginal IFR is far more complicated than proving the complete FR. In fact, in my field we very often do not work with the currents’ probability density itself; we prefer to take its bidirectional Laplace transform and work with the currents’ cumulant generating function. There things take a quite different and more elegant look.
Many other questions and possibilities now open up. The most important one left open is: can we generalize the theory to the (physically relevant) case where the current is supported on several edges? For example, for a current defined like ? Well, it depends: the theory holds provided that the stalling state is not “internally alive”, meaning that if the observable current vanishes on average, then so should each of its contributions separately. This turns out to be a physically meaningful but quite strict condition.
Let me conclude with some more of those philosophical considerations that sadly I have to leave out of papers…
Stochastic thermodynamics strongly depends on the identification of physical and information-theoretic entropies — something that I did not openly talk about, but that lurks behind the whole construction. Throughout my short experience as a researcher I have been pursuing a program of “relativization” of thermodynamics, making the role of the observer more and more evident and movable. Inspired by Einstein’s Gedankenexperimente, I have also tried to make the theory operational. This program may raise eyebrows here and there: many thermodynamicians embrace a naïve materialistic world-view whereby the only things that matter are “real” physical quantities like temperature and pressure, and all the rest of the information-theoretic discourse is at best mathematical speculation, or a fascinating analogy with no fundamental bearing. According to some, information as a physical concept lingers alarmingly close to certain extreme postmodern claims in the social sciences that “reality” does not exist unless observed, a position deemed dangerous at a time when the authoritativeness of science is threatened by all sorts of anti-scientific waves.
I think, on the contrary, that making concepts relative and effective, and summoning the observer explicitly, is a secular and prudent position that serves as an antidote to radical subjectivity. The other way around, clinging to the objectivity of a preferred observer, which is implied in any materialistic interpretation of thermodynamics (e.g. by assuming that the most fundamental degrees of freedom are the positions and velocities of a gas’s molecules), is the dangerous position, especially when the role of such a preferred observer is passed around from the scientist to the technician and eventually to the technocrat, who would be induced to believe there are simple technological fixes to complex social problems…
How do we reconcile observer-dependency and the laws of physics? The object and the subject? On the one hand, much as the position of an object depends on the reference frame, so entropy and entropy production depend on the observer and on the particular apparatus he controls or experiment he is involved in. On the other hand, much as motion is ultimately independent of position and is agreed upon by all observers who share compatible measurement protocols, so the laws of thermodynamics are independent of any particular observer’s quantification of entropy and entropy production (e.g., the effective Second Law holds no matter how much the marginal observer knows of the system, provided he operates according to our phenomenological protocol…). This is the case even in everyday thermodynamics as practiced by energy engineers et al., where there are lots of choices to gauge upon, and there is no external warrant that the amount of dissipation being quantified is the “true” one (whatever that means…); there can only be trust in one’s own good practices and methodology.
So in this sense, I like to think that all observers are marginal, that this effective theory serves as a dictionary by which different observers practice and communicate thermodynamics, and that we should not revere the laws of thermodynamics as “true” idols, but rather regard them as tools of good scientific practice.
In this work we give the complete theory, with numerous references to the work of other people along the same lines. We employ a “spiral” approach to the presentation of the results, inspired by the pedagogical principle of Albert Baez.
This is a shorter version of the story.
An early version of the story, containing the FDR results but not the full-fledged FR.
A great reference if one wishes to learn about path integrals for master-equation systems.
1 There are as many so-called “Fluctuation Theorems” as there are authors working on them, so I decided not to call them by any name. Furthermore, notice I prefer to distinguish between a relation (a formula) and a theorem (a line of reasoning). I lingered more on this here.
“Just so you know, nobody knows what energy is.” (Richard Feynman)
I cannot help but mention here the beautiful book by Shapin and Schaffer, Leviathan and the Air-Pump, about the Boyle vs. Hobbes diatribe over what constitutes a “matter of fact,” and Bruno Latour’s interpretation of it in We Have Never Been Modern. Latour argues that “modernity” is a process of separation of the human and natural spheres, and, within each of these spheres, a process of purification of the unit facts of knowledge and the unit facts of politics, of the object and the subject. At the same time we live in a world where these two spheres are never truly separated, a world of “hybrids” that are at once necessary “for all practical purposes” and inconceivable according to the myths that sustain the narration of science, of the State, and even of religion. In fact, despite these myths, we cannot conceive a scientific fact outside the contextual “network” where this fact is produced and replicated, nor can we conceive society apart from the material needs that shape it: so in this sense “we have never been modern”, we are not so different from all those societies that we take pleasure in studying with the tools of anthropology. Within the scientific community Latour is widely despised; probably he is also misread. While it is really difficult to see how his analysis applies to, say, high-energy physics, I find that thermodynamics and its ties to the industrial revolution perfectly embody this tension between the natural and the artificial, the matter of fact and the matter of concern. Such great thinkers as Einstein and Ehrenfest thought of the Second Law as the only physical law that would never be replaced, and I believe this is revelatory.
A second thought on the Second Law, a systematic and precise definition of all its terms and circumstances, reveals that the only formulations that make sense are phenomenological statements such as Kelvin-Planck’s or similar, which require a lot of contingent definitions regarding the operation of the engine, while fetishized and universal statements are nonsensical (such as that masterwork of confusion, “the entropy of the Universe cannot decrease”). In this respect, it is neither a purely natural law, as the moderns argue, nor a purely social construct, as the postmoderns argue. One simply has to renounce operating this separation. While I do not have a definite answer to this problem, I like to think of the Second Law as a practice, a consistency check of the thermodynamic discourse.
3 This assumption really belongs to a time, the XIXth century, when resources seemed virtually infinite on planet Earth…
4 As we will see shortly, we define equilibrium as that state where there are no currents at the interface between the system and the environment, so what is the environment’s own definition of equilibrium?!
5 This is because we have already exploited the First Law.
6 This nomenclature comes from alchemy via chemistry (think of Goethe’s The Elective Affinities…); it propagated into the XXth century via De Donder and Prigogine, and it is still part of the language in Luxembourg because in some way we come from the “late Brussels school”.
7 Basically, we ask that the tunable parameters are environmental properties, such as temperatures, chemical potentials, etc., and not internal properties, such as the energy landscape or the activation barriers between configurations.
a site about toothpaste
Luxembourg, June 13-16, 2017
Physics and Mathematics of Disordered Systems