Q is for Quantum & “reality”

Terry Rudolph is a seemingly young professor at Imperial College, already world-renowned as the third author of the so-called PBR theorem (I’m so envious he’s got a theorem named after himself…). His result allegedly shakes the foundations of Quantum Mechanics by stating that either quantum states are “real” (whatever that means) or else the theory is “wrong” (whatever that means!). I look forward to reading more about all this stuff, with the mindset of a person convinced that physics is ultimately about measurement and information, and that Quantum Mechanics is the “symbolism of atomic measurements” – a much beloved sentence due to Schwinger.

But for the moment, I’ll focus on his book “Q is for Quantum”, which is a work of art, though not a perfect one (fortunately, or I’d already be hanging from a beam). Rudolph also runs a webpage with updates, and I’m not the first to review this book.

The first great merit of this book is that it is self-produced with very good taste for layout. As such its contents are free and direct (e.g. it does not include a biography of the author, and the self-comments on the back cover are quite witty*). I already bought two copies and you should as well. Its second greatest virtue is… brevity. The book is extremely concise; it goes straight to the point. Its purpose is to show how some of the most intriguing aspects of Quantum Mechanics can be understood (or, better, explored and manipulated) with very little machinery, and in particular without linear algebra (up to a mysterious minus sign that creates interference, whose mystery is never really resolved… but you can buy that – and interestingly, it turns out that Quantum Computation done that way is universal, so there’s nothing to lose in principle). It does so by presenting the material in a logically rigorous fashion, without getting lost in the usual blend of metaphors and anecdotes that “embellish”, so to say, the usual pop-science literature, and without wasting time crediting this or that scientist or explaining useless technical jargon (as is the case with the very deep, but quite pedantic Un’occhiata alle carte di Dio [A glimpse at God’s cards] by Gian Carlo Ghirardi). A dictionary making connections to the modern literature is served to the specialist at the very end. The style of writing is quite entertaining, the kind of nerdy fun for XKCD lovers. The exposition does not spare the reader some combinatorial games and a lot of thinking, both of which might make the book much more challenging for readers without proper training than the author seems to believe (let’s see, I’m running this experiment with a friend of mine…). In principle all of the calculations can be tackled with pencil, paper and patience, provided readers stick to the rules of the game – as long as the rules of the game are well explained before the game starts (I’m always disappointed when friends invite me to play a new board game they are experts at, and only at the end does a new rule come out that I didn’t know about, one that makes them win by loads – incidentally this is also what happens in most of Nolan’s movies, but now I’m going way astray…).
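To give a taste of how little machinery is involved, here is my own toy rendition of the book’s “misty states” game, reconstructed from memory of its conventions (so the details are mine, not the author’s): a PETE box sends a white ball W into the mist [W, B] and a black ball B into the mist [W, -B]. The minus sign is the whole of the algebra, and it is what makes two PETE boxes in a row undo each other.

from collections import Counter

def pete(ball):
    # One PETE box: W -> [W, B], B -> [W, -B] (signs only, no
    # normalization, in the spirit of the book's diagrams).
    return {"W": 1, "B": 1} if ball == "W" else {"W": 1, "B": -1}

def apply_pete(mist):
    # Send every ball of the mist through a PETE box, collecting signs;
    # the terms that cancel are precisely the "interference".
    out = Counter()
    for ball, sign in mist.items():
        for new_ball, new_sign in pete(ball).items():
            out[new_ball] += sign * new_sign
    return {b: s for b, s in out.items() if s != 0}

one = apply_pete({"W": 1})   # {'W': 1, 'B': 1}: a genuine mist
two = apply_pete(one)        # {'W': 2}: the black-ball paths cancelled
print(one, two)

(The leftover factor of 2 would be washed away by normalization, which the book rightly never bothers with: two PETE boxes give back a plain white ball.)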

The book includes three chapters: the first showing how and why quantum computation is more powerful than classical computation, the second explaining entanglement, and the third the problems of the interpretation of Quantum Mechanics and of the measurement process, including hidden variables and such amenities. It’s really amusing to see how the latest achievement – quantum computation – can actually be used to introduce quantum mechanics in a less contrived way, assuming that classical computation already makes some sort of sense to the reader (while the author does spend some time explaining classical gates, he takes for granted that people are ready to accept the very idea that logical operations can be processed mechanically and with binary symbols). The formalism also makes it possible to introduce a version of the Bell inequalities, and to present with incredible clarity the “no faster-than-light communication” argument. Also, the author manages to slip in some of his own insights on the matter, in particular some elements of the PBR theorem mentioned above and of his own take on the interpretation of Quantum Mechanics. Which means that a smart teenager reading and understanding this book will peer at the frontline of research today – an incredible achievement, although it will not spare them years and years of university etc.

– – – Good cop exits the scene. Bad cop enters the scene – – –

The one chapter that falls a bit short of its pretentious claims is the third, on “reality”. The subject matter is, of course, monstrous. In fact, we are begged at the start to drop all philosophical subtleties. If only that were possible!

The logical development of this chapter is still clear, but the narrative is not quite as to-the-point as in the previous two. Several phrases are redundant, and because I had the feeling of walking on eggshells and kept expecting a new load of concepts at every sentence, the repetition of previous concepts did not actually help: I tended to assume that the next sentence would always add something I had not thought about before, and therefore I read with an intensity that maybe was not even necessary. Or maybe it was necessary! Which would mean I didn’t get the point… In any case, while a second reading might help, and I did get a faint idea of the Pooh-Bear argument, for that I might prefer to go to the original article, which has already been piled up with tons of other papers-soon-to-be-read (ahem…).

Also, unfortunately, Rudolph falls into Nolan’s usual mistake: he introduces a new rule towards the end of the story. What is “real” is only defined on p. 122, and not in a very satisfactory way: “By hypothesis, what we mean by the real state is anything and everything that can affect the outcome of an experiment”. What does that mean?

This leads us to the more conceptual core of the problem, and here I’d like to weigh in with my own misconceptions about physics. So from now on I’m going to ramble; please stop reading.

The point is, well before the Pooh argument, I’m already a bit in disagreement. If I understood correctly, we are asked to assume that it makes sense to define “reality” as a set of variables whose detailed knowledge would underlie any probabilistic concept, and this independently of whether the inferential machinery governing the observer’s states of knowledge is classical or quantum. The Pooh-Bear argument is then laid down to show that the wave function can be thought of as such a “real” state, dispensing with an argument by Einstein as to why it couldn’t.

I’m one of those freaks who don’t believe that, even in classical statistical mechanics, probability as a state of knowledge is actually supported by a “truth of the matter” about what the real states of a system are according to their volume in phase space (as I argued at length in this paper). The volume measured by whom? E.g., what is the “real” entropy of a body? I don’t think this question even makes sense. What only makes sense is that the underlying degrees of freedom will also be subjected to a process of measurement, their probability analyzed, then refined by Bayesian updating, and so on and so forth. For example, the fact that today we take, for ideal gases, the positions and momenta of the particles of the gas to be “equiprobable a priori” (up to lots and lots of corrections) is not due to any “reality” or fundamental nature of the positions and momenta of the gas molecules; it’s just that we have been running a lot of science before getting to that conclusion, updating previous hypotheses until we found the one that works reasonably well. If, instead, say, the gravitational interaction had been much stronger, and one couldn’t neglect the effect of General Relativity in the determination of the equation of state of a perfect gas, we would have given the “microcanonical ensemble” a quite different meaning, with a complete distortion of the “real” state space. Indeed, the history of QM (and in particular of Quantum Field Theory) reveals that obsession with the “real” values of physical properties ends up in nothing. In fact, the more “fine-grained” states that Rudolph draws on his planes of reality would have exactly the same quantum nature as the more “coarse-grained” states the macroscopic observer measures; there would have to be a microscopic observer, but an observer nonetheless, that makes quantum measurements, and eventually there would be a proper way to compare the observations of the one and of the other observer. To me, what is really relevant is that measurements turn out to be consistent. This is actually (at present) my general philosophy of science: a reasonably self-consistent body of knowledge whose credibility does not come from the fact that it compares to “reality”, but from the fact that the scientific community has established practices which have allowed it to acquire authoritativeness in certain fields of knowledge. The demarcation of the fields where the scientific method works was determined in a somewhat evolutionary manner; therefore, when people say “science works”, for me this is more of a definition than of a property…

OK, as usual I became all too serious. To go back to the book: personally, I would not have created a separation between misty states and rocky states; I would have always kept things in the mist, even after measurement (measurement then becomes just another logical gate). This gives great unity, and operationally it makes no difference, as long as one sees QM as an inferential machine that manipulates symbols.

* The last authors I’ve seen writing their own back-cover notes were Luther Blissett, who funnily enough are also the authors of “Q”. But a completely different book.

Bookmark: The Endochronic Properties of Resublimated Thiotimoline

“The compound thiotimoline will dissolve in water – in the proportions of 1 gm./ml. – in minus 1.12 seconds. That is, it will dissolve before the water is added. The fact that the chemical dissolved prior to the addition of the water made the attempt natural to withdraw the water after solution and before addition. This, fortunately for the law of Conservation of Mass-Energy, never succeeded, since solution never took place unless the water was eventually added. The question is, of course, instantly raised as to how the thiotimoline can ‘know’ in advance whether the water will ultimately be added or not. Though this is not properly within our province as physical chemists, much recent material has been published within the last year upon the psychological and philosophical problems thereby posed. Nevertheless, the chemical difficulties involved rest in the fact that the time of solution varies enormously with the exact mental state of the experimenter. A period of even slight hesitation in adding the water reduces the negative time of solution, not infrequently wiping it out below the limits of detection.”

The University in ruins /1

The materialistic transparency of culture 
has not made it more honest, only more vulgar 
(Th. Adorno)

From Bill Readings, The University in ruins:

“I am attracted to Robert Young’s suggestion that the University, both inside and outside the market economy, should “function as a surplus that the economy cannot comprehend”. The binary opposition is there, and the University will deconstruct it by being neither simply useful nor simply useless. All very good, and very much what Humboldt wanted: indirect utility, direct uselessness for the state.”

This sentence makes for a perfect answer to all those people who are trying to sex up University courses by bringing more industry into education. Anyway, the book by Readings is so good and dense with concepts that it’s almost impossible to choose which excerpts I like most; I’d have to re-write it all on my blog… Here’s how academia threatens the arts by normalizing them:

“Rather than posing a threat, the analyses of Cultural Studies risk providing new marketing opportunities for the system. Practices such as punk music and dress styles are offered their self-consciousness in academic essays, but the dignity they acquire is not that of authenticity but of marketability, be it in the cinema, on MTV, or as a site of tourist interest for visitors to London. […] To put it bluntly, the shock value of punk is not lasting in a cultural sense, since it soon becomes possible to be “excellently punk”.”

Here academia works as an advanced tool to extract culture from the context where it was born as a social practice, and to make it into a product of “culture” for a community of rich and educated people who have never been punk and never wished to be.

Is the Fluctuation Theorem a theorem?

[This is intended to be the first episode of a series of posts on what the so-called Fluctuation Theorem is and what it’s not; while I promise follow-up posts (one has actually already appeared, on whether we should talk about a theorem or a relation), I must admit that all my previous plans for serial posts were aborted]

[I suggest clicking on the title to view the post on a white background]

The Fluctuation “Theorem” (FT) is a major result in (relatively) recent Non-Equilibrium Statistical Mechanics. It roughly states that, in a wide class of probabilistic theories of dynamical systems, there exists a “physically meaningful” real-valued observable \Omega such that positive values of \Omega are exponentially favoured over negative ones:

P(+\Omega) / P(-\Omega) = \exp{\Omega}.

I must say right away that I am not favourable to naming formulas after people, so I won’t append any name to the above relation. From the above detailed relation, the following Integral FT follows by a simple integration over increments d\Omega:

\left\langle e^{-\Omega} \right\rangle = 1.
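In case the passage is too quick, the detailed relation makes it a one-liner:

\left\langle e^{-\Omega} \right\rangle = \int d\Omega \, P(+\Omega) \, e^{-\Omega} = \int d\Omega \, P(-\Omega) = 1.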

As a further consequence, by a simple inequality for convex functions one obtains an incarnation of the holy Second Law of Thermodynamics:

 \left\langle\Omega \right\rangle \geq 0.
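(The inequality in question is Jensen’s: since e^{-x} is convex,

e^{-\left\langle \Omega \right\rangle} \leq \left\langle e^{-\Omega} \right\rangle = 1,

and taking logarithms gives the Second Law above.)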

A thousand+ papers are based on this trilogy of formulas, and I contributed my own:

M. P. and M. Esposito, J. Stat. Mech. P10033 (2014), arXiv:1408.5941.

What I want to discuss here is not the validity of the FT as an overall discourse about thermodynamics, but rather in what sense the FT is a theorem at all. By “theorem” I don’t mean that a very abstract and formal framework is needed; I just mean that at some point in the development of the argument there is a clear-cut passage between some hypothesis and some thesis, whereby the thesis is not obviously embedded in the hypothesis. I will call this the “step”. I will not embark on the analysis of a-thousand-plus papers either (most of which cite each other in a self-inconsistent way) to pinpoint if and where exactly the step lies. Nevertheless, my overall experience with this stuff is that, if you really dig down the rabbit hole, very often the observable of interest is defined as:

\Omega =  \log\frac{P(+\Omega)}{P(-\Omega) }.

Ta-ta-ta-taaaaa! See any problem here? This definition coincides with the first formula of the trilogy. Therefore, this cannot be the step we are looking for. Where lies the step?

*  *  *

Let me narrow down the discussion to Markov processes on a discrete state space (deterministic dynamical systems are special in their own way, but not that different; I will tackle them in a separate post, if at all). In this context, \Omega can be defined as a special linear combination of the net number of jumps between pairs of states,

\Omega = \sum_{i,j} \# (i \gets j) \log \frac{w_{ij}}{w_{ji}}

where \# (i \gets j) is the number of times the process jumped from state j to state i, and w_{ij} is the corresponding transition rate. Deriving the FT from this definition does require an important passage: we need to know the explicit expression for P(\Omega). In a physicist’s words, we need to know the exact path-integral representation of the process. It turns out that for Markov jump processes this explicit expression indeed exists, and that the FT follows trivially (I won’t give the expression, which can be found in many papers, including my own; but check out this remarkable review: Markus F. Weber, Erwin Frey, Master equations and the theory of stochastic path integrals, arXiv:1609.02849, Sec. 1.4).
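For readers who enjoy checking such claims numerically, here is a minimal sketch of mine (the three states and the rates are invented for illustration; none of this is taken from the papers above): a Gillespie simulation of a Markov jump process that verifies the Integral FT. One caveat I am sweeping under the rug in the text: at finite times the identity holds exactly for the total entropy production, i.e. the sum over jumps of \log w_{ij}/w_{ji} plus the boundary term \log p_0(x_0) - \log p_T(x_T).

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)

# Transition rates w[i, j]: rate of jumping from state j to state i.
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
n = w.shape[0]
escape = w.sum(axis=0)          # total escape rate out of each state
W = w - np.diag(escape)         # generator of the master equation

p0 = np.array([0.5, 0.3, 0.2])  # arbitrary initial distribution
T = 2.0                         # trajectory duration
pT = expm(W * T) @ p0           # solve the master equation for p_T

def trajectory_omega():
    # Gillespie simulation of one trajectory; returns the total Omega.
    x = rng.choice(n, p=p0)
    omega = np.log(p0[x])       # boundary term, initial state
    t = 0.0
    while True:
        t += rng.exponential(1.0 / escape[x])
        if t > T:
            break
        y = rng.choice(n, p=w[:, x] / escape[x])  # jump x -> y
        omega += np.log(w[y, x] / w[x, y])        # entropy to environment
        x = y
    return omega - np.log(pT[x])                  # boundary term, final state

samples = np.array([trajectory_omega() for _ in range(200_000)])
print("<exp(-Omega)> =", np.exp(-samples).mean())  # should be close to 1
print("<Omega>       =", samples.mean())           # should be >= 0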

Have we found the step? If this were the case, then the FT would boil down to a known result in the theory of Markov processes, one that has nothing to do with physics. That can’t be our step: after all, we are borrowing results from mathematics to obtain something more.

*  *  *

So far I have used \Omega as an abstract object without identity. In fact, the term \log w_{ij}/w_{ji} in the latest formula seems quite arbitrary and ad hoc. However, in the mind of physicists this object has a clear nature: it is called the “entropy production”, quantifying the amount of entropy that is delivered to the environment along a process. If we can provide a different and relevant definition of this quantity, in a way that is fairly independent of the Markov process itself, and if we can show that this definition stands on its own feet without invoking the FT, then we can say that we have made a step.

Arrhenius’s theory of transitions between states might come to our rescue, as it tells us that the rate of a jump between two states and the rate of the reversed jump are related by

\frac{w_{ij}}{w_{ji}} = \exp {\frac{u_j - u_i}{k_B T_{ij}} }

where we imagine that the states are energy wells with energies u_i, and that between wells there is a potential barrier that can be overcome by the effect of an external bath that heats the system up to temperature T_{ij}. If we disconnect states i,j from the rest of the state space, the system relaxes to a steady state given by

p_i \propto \exp {\frac{- u_i}{k_B T_{ij}} }
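To see why, notice that for an isolated pair of states stationarity of the master equation reduces to detailed balance, and Arrhenius’s relation then fixes the ratio of the two probabilities:

w_{ij} \, p_j = w_{ji} \, p_i \quad \Longrightarrow \quad \frac{p_i}{p_j} = \frac{w_{ij}}{w_{ji}} = \exp {\frac{u_j - u_i}{k_B T_{ij}} }.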

This is the equilibrium “Gibbs” state. Notice, however, that the temperature may be different for each pair of states; therefore, in general (that is, if we do not isolate a single transition) the system does not relax to an equilibrium Gibbs state. Thus we call the above condition local detailed balance. Local detailed balance provides the expression

\Omega = \frac{1}{k_B} \sum_{ij} \frac{\delta Q_{ij}}{T_{ij}}

where the heat flow is defined as the net amount of energy that has flowed between two states along the process:

\delta Q_{ij} = \# (i \gets j) (u_j - u_i).
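Explicitly, plugging local detailed balance into the earlier definition of \Omega:

\Omega = \sum_{i,j} \# (i \gets j) \log \frac{w_{ij}}{w_{ji}} = \sum_{i,j} \# (i \gets j) \, \frac{u_j - u_i}{k_B T_{ij}} = \frac{1}{k_B} \sum_{ij} \frac{\delta Q_{ij}}{T_{ij}}.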

The condition of local detailed balance thus gives the missing link between mathematics and physics, allowing us to connect to well-known lingo and concepts in thermodynamics.

*  *  *

However, our question still stands: have we made an inferential step yet? It is clear that the terms of the problem have shifted from the language of Markov processes to that of thermodynamics, but so far the condition of local detailed balance just looks like a renaming of quantities. To claim that we have actually made a step we need to look at the definitions of energy, temperature, etc. It is clear that this search requires a trip down to the very foundations of thermodynamics.

We will not engage in such a huge enterprise. Just a few observations.

Temperature is a very hostile concept in thermodynamics. The story of the mercury column, by which thermometers are introduced in the tale of basic physics, is just a myth, one of many that physicists tell themselves to cast their research on safe ground, much as any human community likes to tell a mythological story of its origins. There is no way one can use a mercury column to measure the temperature of the environment where a protein unfolds. Of course, better thermometers exist, but none of them overcomes the intrinsic problems that the mercury column has. What is important in physics is not so much to refer to an absolute value of temperature that some special apparatus measures at all scales, but rather to define at each scale a self-consistent notion of temperature (in the spirit of the “renormalization” approach to Quantum Field Theory, if you like…). I doubt that, when Brownian motion is at work, the operational definition of temperature can be disentangled from the observed stochasticity of the Brownian degrees of freedom. This is a very different perspective from that of a mathematician’s world where consequences follow from assumptions: here quantities need to be renegotiated, and there is no clear-cut separation between measurement apparatus and measured quantities.

The problem is not just with temperature. For definiteness, let me consider one particular derivation of the Master Equation governing Markov jump processes: the quantum case, where the condition of local detailed balance is replaced by an analogous condition called Kubo-Martin-Schwinger (see for example Breuer & Petruccione, The Theory of Open Quantum Systems, Cambridge University Press). In all these derivations, one starts from an isolated universe evolving by some Hamiltonian and separates it into an environment and an effective system. It is assumed that (each piece of) the environment is in some thermal state at some temperature, that the system’s Hamiltonian is such and such, and that the system undergoes a Markovian dynamics. Then the KMS condition (local detailed balance) does follow, and with it comes the definition of the entropy production.

But again, how much of this “derivation” lies in the assumptions, and how much goes into its consequences? As you can see, facts that are assumed and facts that are derived are intertwined in quite complicated ways. In other words, it is not clear how and why exactly the entropy production is defined, given a Markov process, since the two go hand in hand.

*  *  *

So, where does the inferential step lie? As you might have guessed already from Betteridge’s law of headlines, the question in the headline of this post has a negative answer. My point is that there is no such step. In fact, following a comment I heard from Giovanni Gallavotti (in reply to Denis Evans), I prefer to call these results Fluctuation “Relations” rather than “Theorems”.

Nevertheless, I don’t mean at any rate that these relations don’t make sense. That’s not my point, or I would have quit physics by now. What I mean is that in physics there is no neat demarcation line between assumptions and consequences. In many respects, much of physics is about developing a self-consistent discourse, and it is already a lot that this discourse is at least self-consistent.

To conclude:

  1. The so-called Fluctuation Theorem is not a theorem at all; it’s a relation between quantities that might occasionally take the form of a theorem, provided one spells out which are the assumptions and which is the output. This has nothing to do with being “mathematical”: it has to do with reasoning right.
  2. The fundamental status of the fluctuation relation (just like many other laws of physics) is unclear: I argue that it is an overarching concept that unites a self-consistent discourse about processes of a certain kind.

I might also conclude with a slightly more “political” observation. At present, the industry of scientific-paper production puts pressure on scientists to produce better narratives of their research. To some degree this is good, because papers are more pleasant to read that way. However, this may go to the detriment of the logical development of papers: it pushes people to smuggle discourses in place of deductions, and it does not help distinguish which are the premises, which are the consequences, and which are the unknown factors…

Science @ Festivaletteratura 2017

Festivaletteratura, one of the most important literary events in Europe, took place last week (6th-10th) in the beautiful Renaissance city of Mantova (where, incidentally, I was born). Ever since 2005 I’ve been organizing scientific events in this context (here are some impressions from the previous edition).

I hate to call these events “divulgation”, for the reason that, in a way, I find that scientific popularization most often serves the purpose of exposing people to science without involving them in a process of reasoning. As if science were a sort of very advanced commodity, a cultural product whose need is induced by social pressure, and that can be picked up in those supermarkets of ideas that festivals are. Or, in the worst cases, as if science were just another piece of infotainment, akin to sports news.

I hate the assumption that, because people do not understand science anyway, it’s better to serve them some very inaccurate metaphors that they will swallow and immediately forget. For this reason, at Festivaletteratura I try to raise the bar several inches higher: I want to provoke people by giving them factual elements of that huge complexity of things people call “science”. Over the years, I (and Festivaletteratura as a whole) have come to prefer more creative formats than the usual frontal event, where an author is interviewed by a journalist, plus time for questions from the audience. I find that this sort of event is not the best for transmitting ideas, especially in science. I still organize a few of them, but just as advertisement, or as the conclusion of something bigger and deeper that has been going on behind the scenes.

My main project this year was Hackspace Festivaletteratura, a place to learn about the fundamentals of informatics, for kids and adults alike. Here is the presentation from the Festival’s program [my rough translation from the Italian version]:

Today we’re born “digital”, but how many of us are conscious of what happens inside the integrated circuits under the screens of our smartphones? To understand the mechanisms of informatics and learn how to govern them, it’s necessary to embark on a historical journey back in time to the first steps of electronics and, at the same time, on a physical journey inside the computer: opening the box, disassembling the microprocessor, looking at the single transistor, dismembering the algorithms. In other words: hacking! No worries, nothing illegal: a hackspace is just a permanent laboratory, a community space where informatics and technology lovers meet to share experiences. At Hackspace Festivaletteratura an active community of experts will give rise to spontaneous trails into the heart of informatics. It will be possible to give new life to old rickety PCs, to put one’s own hands on a single bit, and to see and touch some of the computers that made the history of informatics, made available by the Museum of the History of Informatics of the University of Verona and by some passionate friends who have actually seen the computer being born and growing up (they, the true “digital natives”).

There we (myself and a great group of volunteers headed by Emanuele Penocchio) organized several laboratories and other mini-lectures, plus various activities revolving around Arduino, all of which were a great success. The logic of the laboratories was incremental: first, by building their own marble adding machine out of Lego, people had to reason about how binary mathematical operations can be made mechanical; then, with some relays and cables, they had to understand logic gates; the third step was to use a Commodore 64 to understand the interface between man and machine; and finally Scratch provided the perfect coding language to understand what programming is all about (a sketch of the gate-level idea follows below the pictures). Here are some pictures:

 
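For the curious, here is a minimal sketch of mine, in Python rather than in marbles and relays (the function names and the 8-bit width are illustrative choices, not the labs’ actual material), of the chain of ideas the laboratories walked through: binary addition reduced to elementary logic gates.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    # Add three bits; return (sum bit, carry bit). Each operation here
    # is something a relay, or a well-aimed marble, can implement.
    s = XOR(a, b)
    return XOR(s, carry_in), OR(AND(a, b), AND(s, carry_in))

def add_binary(x, y, n_bits=8):
    # Ripple-carry addition: chain full adders bit by bit,
    # exactly as the marble machine does, column by column.
    carry, result = 0, 0
    for i in range(n_bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(19, 23))  # 42: nothing but gates all the way down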

The local newspaper also featured a very nice video presenting the hackspace; it’s nearly the first time that some of the scientific content I propose makes it to the news:

http://video.gelocal.it/d1e6593d-a411-4939-a692-ee9d263f7735

Another format that is very successful at Festivaletteratura is the blackboard talk, where we invite experts to give challenging lectures that are intended to be exemplary of academic discussion and teaching. This year we had computer scientist Scott Aaronson, one of the major experts on quantum computation, delivering a fun blackboard lecture on very big numbers (here he blogs about it), and physicist Fabrizio Illuminati, expert in quantum information and quantum optics, giving a lecture on the quantum-to-classical transition. The two of them finally held a dialogue on the future of quantum computation and related themes.

 

Finally, every year Festivaletteratura gives ample space to environmental issues and the like. When talking about the interaction between society and environment, it’s impossible not to go into the subtleties of what innovation is, what technology is, whether technological fixes are possible and even desirable, and, more broadly, what scientism and antiscientism are. This year I interviewed sociologist of science Massimiano Bucchi on these themes. What came out was a deep and challenging discussion that made critical points about the rhetoric of innovation, avoiding easy discourse. To the point that, when at the end of the conversation a person from the audience asked a very vehement question (well, a sermon more than a question, as always happens) about the responsibilities of science in the disruption of the world, I believe we commented in such a placid way that there are no easy answers to complex questions that he himself was left somewhat disoriented: after all, we scientists are not the reductionist social engineers that so much anti-science fears and loathes. A friend who liked the discussion said it was a bit too challenging, but I think that’s fair: I only expect a one-hour-something talk to give impressions and stimuli, not to convey a very clear and precise message, because in that case it could only be a slogan.