Teaching /2: Open Quantum Systems for Ph. D. students

This is a year of teaching. To children, to high-school teachers, to master students, etc.

These days I’m teaching a four-hour course on Open Quantum Systems to Ph.D. candidates of the physics department. The idea is to start from the unitary dynamics of a “universe”, resolve it into “system + environment”, and find a dynamical map for the system by tracing out the environmental degrees of freedom in the limit where the interaction between system and environment is weak. The material covers the so-called “microscopic” derivation of the Lindblad equation, then focuses on the case where the environment is thermal radiation, and further on the cases where the system is either a spin or a harmonic oscillator. I briefly discuss entropy vs. entropy production, the second law of thermodynamics, and irreversibility.
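The endpoint of that derivation is easy to play with numerically. Here is a minimal sketch of mine (a toy illustration, not course material; the rates and jump operators are the standard textbook ones for this example) of a spin-1/2 relaxing to a Gibbs state in a thermal radiation field:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 in thermal radiation: jump operators sigma_- (emission, rate
# gamma*(nbar+1)) and sigma_+ (absorption, rate gamma*nbar).
omega0, gamma, nbar = 1.0, 0.1, 0.5              # splitting, coupling, occupation
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_-, with index 0 = excited
sp = sm.conj().T                                 # sigma_+
H = 0.5 * omega0 * np.diag([1.0, -1.0]).astype(complex)

def lindbladian(H, jumps):
    """Matrix of the Lindblad generator on column-stacked density matrices."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c in jumps:
        cdc = c.conj().T @ c
        L += np.kron(c.conj(), c) - 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))
    return L

L = lindbladian(H, [np.sqrt(gamma * (nbar + 1)) * sm, np.sqrt(gamma * nbar) * sp])

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the excited state
for t in [0.0, 20.0, 200.0]:
    rho = (expm(L * t) @ rho0.flatten('F')).reshape(2, 2, order='F')
    print(f"t = {t:5.0f}   populations = {np.real(np.diag(rho)).round(4)}")
# Populations relax to [nbar, nbar+1]/(2*nbar+1), i.e. a Gibbs state at the
# temperature of the radiation, as the microscopic derivation promises.
```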

I had real fun preparing this course. Since the subject was fairly new to me, I found the same vibe as back when I was a university student, always looking for further references, scrutinizing every possible detail, ending up reading four times as much material as was needed to prepare an exam. Which was one of the (many) reasons why I eventually graduated two years late (while at the time I was a bit discouraged by this delay, it eventually turned out to be one of the best “choices” of my life).

The starting reference for this basic course is the remarkable book by Breuer and Petruccione:

H.-P. Breuer, F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press)

This is perfect for a physicist’s overview, but recently I found this other short book

A. Rivas, S. F. Huelga, Open Quantum Systems (SpringerBriefs)

that does a great job of summarizing a lot of otherwise quite hostile math-phys literature on the subject, and sheds light on some obscure passages. In particular, in the physicist’s derivation several different approximations are made: besides the weak coupling, there’s Born’s assumption that the system and the environment are uncorrelated at all times (which makes no sense, given that what we want to study is precisely how the environment affects the system…), the Markov property, and the so-called “rotating wave” approximation. This is probably due to a stratification of arguments and techniques that have piled up over the history of the derivation of this and similar master equations. However, the treatment given by Rivas and Huelga in terms of Nakajima-Zwanzig operators shows that the Born-Markov approximation and, to a degree that I’m not yet able to appreciate, even the rotating-wave approximation all follow from the weak-interaction hypothesis alone! Thus the derivation is much neater.
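For reference, the target of all these derivations is the standard (GKSL) form of the generator, with nonnegative rates; in the microscopic route the jump operators L_k and rates γ_k come out expressed in terms of the system’s Bohr frequencies and the environment’s correlation functions:

```latex
\frac{d\rho}{dt} = -i[H,\rho]
  + \sum_k \gamma_k \Big( L_k \rho L_k^\dagger
  - \tfrac{1}{2} \big\{ L_k^\dagger L_k , \rho \big\} \Big),
\qquad \gamma_k \ge 0 .
```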

As I want to learn more about all these things, as a final exam I will propose to the students a half-day workshop where each of them presents a paper or a book chapter picked from the following list (or, if they don’t like these, they may just choose a relevant and interesting paper of their own).

Breuer & Petruccione Secs. 3.2.1 & 3.2.2

Rivas and Huelga Ch. 4.2 & Th. 3.1.2, Th. 3.2.1

These book chapters explain how to derive the Lindblad equation from the Markov property alone, without following a microscopic derivation. This is standard material, and should be covered in some detail.

V. V. Albert and L. Jiang, Symmetries and conserved quantities in Lindblad master equations, PRA 89, 022118 (2014).

The main difference between the Lindblad equation and its classical analogue, the master equation, is that the former may have multiple nontrivial steady states, or even oscillate between coherent steady states, while multiple steady states of the classical master equation are trivially due to disconnected subspaces (in the case of reversible rates) or to absorbing basins of attraction (in the case where transitions are not reversible). Allegedly this paper discusses multiple steady states and their relation to conservation laws and symmetries of the Lindbladian.
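One can already poke at this numerically: steady states span the kernel of the Lindbladian, so counting them is a null-space computation. A small sketch of mine (the qubit models are arbitrary toys, not taken from the paper) contrasting a dephasing channel, where a conserved quantity yields a degenerate kernel, with a decay channel, where the steady state is unique:

```python
import numpy as np

def lindbladian(H, jumps):
    """Matrix of the Lindblad generator on column-stacked density matrices."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c in jumps:
        cdc = c.conj().T @ c
        L += np.kron(c.conj(), c) - 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))
    return L

sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
H = 0.5 * sz

def n_steady(jumps):
    """Dimension of the kernel of the Lindbladian = number of steady states."""
    evals = np.linalg.eigvals(lindbladian(H, jumps))
    return int(np.sum(np.abs(evals) < 1e-10))

# Pure dephasing: the jump operator commutes with H, sigma_z is conserved,
# and every diagonal state is steady -> two-dimensional kernel.
print("dephasing:", n_steady([np.sqrt(0.2) * sz]))   # -> 2
# Decay: no nontrivial conserved quantity, unique (ground) steady state.
print("decay:    ", n_steady([np.sqrt(0.2) * sm]))   # -> 1
```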

D. Taj and F. Rossi, Completely positive Markovian quantum dynamics in the weak-coupling limit, PRA 78, 052113 (2008)

One of the limitations of the microscopic derivations outlined above is that the environment must have a continuous spectrum, and the system a discrete one. While the first condition seems to be unavoidable (all of thermodynamics is based on the assumption that we can throw stuff away and never see the consequences of it…), this paper rigorously addresses the case where the system has a continuous spectrum, by analyzing the separation of time scales between the system’s evolution and the time of Poincaré recurrences in the environment.

A. Rivas, A. D. K. Plato, S. F. Huelga, Markovian master equations: a critical study, New J. Phys. 12, 113032 (2010).

This is the material that underlies the book by Rivas and Huelga, but it also includes a couple of more in-depth examples, the two coupled harmonic oscillators and the driven harmonic oscillator, which are worth studying in detail.

F. Carollo, J. P. Garrahan, I. Lesanovsky and C. Pérez-Espigares, Making rare events typical in Markovian open quantum systems, arXiv:1711.10951.

J. P. Garrahan and I. Lesanovsky, Thermodynamics of Quantum Jump Trajectories, PRL 104, 160601 (2010)

P. Zoller, M. Marte, and D. F. Walls, Quantum jumps in atomic systems, PRA 35, 198 (1987).

This triptych can be run top-down or bottom-up. Those who know about the thermodynamic approach to open systems, and in particular a bit of large deviation theory or, even better, rare-events algorithms (that is, my own group members), had best start from the first paper and only refer to the other two to complete the information. All others might focus on the third paper, which contains a great nontrivial three-level example and some speculations about “quantum jump trajectories”, and just take a look at the first two.

Philip Pearle, Simple Derivation of the Lindblad Equation, Eur. J. Phys. 33, 805 (2012).

As per the title, this paper proposes a simple derivation of the Lindblad equation from general principles, in the spirit of the many derivations of quantum mechanics from reasonable axioms. I’m at the same time attracted to and skeptical of this paper, so this journal club (JC) talk should really become a critical analysis.

M. B. Plenio and P. L. Knight, The quantum-jump approach to dissipative dynamics in quantum optics, Rev. Mod. Phys. 70, 101 (1998).

This is a long review dealing with the subject of “quantum jump trajectories”. It connects here and there with the Lindblad equation, but not in a rigorous way. Possibly Carmichael’s work (cited therein) gives further insights. I still have a problem making sense of all this jumping; it looks way too classical (the point is, jumping from where to where? As the diagonal basis of the density matrix evolves with time, the preferred “locations” whereto jump change as well…).

R. Alicki, The Markov master equations and the Fermi golden rule, Int. J. Theor. Phys. 16, 351 (1977).

As per the title, this paper proposes a derivation of Fermi’s golden rule. The JC should explain in what sense this derivation is more rigorous than the usual derivations.

Breuer and Petruccione Sec. 3.5 (given 2.4.6 and other material)

Derivation of the Lindblad equation from an indirect measurement process.

Breuer & Petruccione Sec. 3.4.6

Damped harmonic oscillator (in some detail).


Furthermore, I have a couple of questions of my own that I would like to investigate:

– For the “microscopically derived” Lindblad equation there is one preferred basis in which the dynamics separates into populations and coherences. This implies that, in that basis, if one starts from a purely statistical state (a diagonal density matrix), such a state evolves by the Pauli master equation alone. The question is: for generic Lindblad generators not following from a microscopic derivation, does there exist a preferred basis such that, if the initial state is prepared as a purely statistical state in that basis, it evolves by a classical master equation, that is, it does not develop coherences at later times?
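For reference, by the Pauli master equation I mean the classical evolution of the populations alone, in the preferred basis, with nonnegative rates:

```latex
\frac{dp_a}{dt} = \sum_{b} \big( w_{ab}\, p_b - w_{ba}\, p_a \big),
\qquad p_a = \langle a | \rho | a \rangle, \quad w_{ab} \ge 0 .
```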

– An analysis by Kossakowski states that, for any family of complete orthogonal projections {P_i}, i = 1,…,n, where n is the dimension of the system’s Hilbert space, the quantities w_ij = tr(P_i L P_j) are the rates of a classical Markov jump process. However, such a process might not be representative of the quantum evolution, as the classical evolution depends on the choice of the family of projections. The idea is to study the statistics of the classical Markov jump processes obtained by randomizing over the family. How to randomize? One possibility is the following. Since P_i = U I_i U^-1 for some unitary matrix U, where I_i has only the i-th diagonal entry equal to 1 and all others vanishing, we could take the unitary U at random, e.g. as the eigenvector matrix of a GUE (Gaussian Unitary Ensemble) matrix, which makes it Haar-distributed.
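Here is a hypothetical numerical sketch of this randomization (the qubit Lindbladian is an arbitrary choice of mine, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """The eigenvector matrix of a GUE matrix is Haar-distributed on U(n)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    _, U = np.linalg.eigh((A + A.conj().T) / 2)
    return U

# An arbitrary qubit Lindbladian, acting on density matrices.
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
H, c, gamma = 0.5 * sz, sm, 0.3

def L(rho):
    cdc = c.conj().T @ c
    return (-1j * (H @ rho - rho @ H)
            + gamma * (c @ rho @ c.conj().T - 0.5 * (cdc @ rho + rho @ cdc)))

n = 2
for sample in range(3):
    U = haar_unitary(n)
    P = [np.outer(U[:, i], U[:, i].conj()) for i in range(n)]  # P_i = U I_i U^{-1}
    w = np.array([[np.trace(P[i] @ L(P[j])).real for j in range(n)]
                  for i in range(n)])
    print(f"sample {sample}:\n{np.round(w, 3)}")
# Off-diagonal w_ij are nonnegative jump rates of a classical Markov process;
# each column sums to zero because the Lindbladian is trace-preserving.
```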

A boring solution to the Tower of Hanoi

Last night I was trying to figure out how to get my son interested in some activity to be done with me, when I recalled buying online a puzzle called the Tower of Hanoi. We spent a quarter of an hour playing one-on-one with invented rules, which didn’t work out very well. Then we went to Wikipedia to take a look at what the whole point of it is (but without looking at the solution, it goes without saying). And the point is: you have three poles and a number of disks (in my case 9) of different sizes, initially piled up from larger to smaller on one pole (let’s take only 3 to visualize):

1 0 0
2 0 0
3 0 0

“0” means “empty space”. The goal is, moving only one disk at a time and in such a way that smaller disks always sit on top of larger disks, to rebuild the tower on another pole. In the 3-disk case it’s easy (vertical bars separate configurations):

1 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 1 0
2 0 0 | 2 0 0 | 0 0 0 | 0 0 1 | 0 0 1 | 0 0 0 | 0 2 0 | 0 2 0
3 0 0 | 3 1 0 | 3 1 2 | 3 0 2 | 0 3 2 | 1 3 2 | 1 3 0 | 0 3 0

We managed to spend another fifteen minutes together; then he was bored and I was captured, so I let him go to his room and was left alone with the puzzle. I had managed to pile up a few disks, and the solution was obviously going to be based on some iterative procedure. Somehow, as my hands were shuffling around, I noticed that I was almost always making the same move with one of them, and I came up with this simple solution that makes the whole puzzle completely boring once you know it [and which turns out to be identical to the Wikipedia solution, I just took a look at it…]. It’s made up of two moves, to be iterated until you reach the final configuration (spoiler alert!):

    • Move the smallest disk (that of radius 1) to the next pole (mod 3)
    • Do the only allowed move on the other two poles.

You can check for yourself that this is what’s going on in the scheme above. I tried several times and found that with 2 disks I get 4 configurations, with 3 I get 8, with 4 I get 16, and you get the point: it takes 2^n-1 transitions to get to the final state, which is exactly what the other methods described on the Wikipedia page attain.
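For the record, here is the two-move rule in code, a minimal sketch of mine (the helper name solve_hanoi is made up), which also reproduces the 2^n-1 count:

```python
def solve_hanoi(n):
    """The two-move rule: alternately (a) move disk 1 to the next pole
    (mod 3), (b) make the only legal move not involving disk 1."""
    poles = [list(range(n, 0, -1)), [], []]   # top of each pole = last element
    solved = lambda: any(len(p) == n for p in poles[1:])
    moves = 0
    while not solved():
        # (a) move the smallest disk one pole forward
        src = next(i for i, p in enumerate(poles) if p and p[-1] == 1)
        dst = (src + 1) % 3
        poles[dst].append(poles[src].pop())
        moves += 1
        if solved():
            break
        # (b) the only legal move between the two poles not holding disk 1
        a, b = (i for i in range(3) if i != dst)
        if not poles[a] or (poles[b] and poles[b][-1] < poles[a][-1]):
            a, b = b, a                       # make a the pole with smaller top
        poles[b].append(poles[a].pop())
        moves += 1
    return moves

for n in range(2, 10):
    print(n, solve_hanoi(n))   # prints 3, 7, 15, ... = 2**n - 1
```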

Now the question is why this solution works. I don’t have a mathematical proof yet, and I really can’t waste my time on these things. But let’s collect a few simple facts. First, notice that if I add another disk, I get:

1 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 |
2 0 0 | 2 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 1 0 |
3 0 0 | 3 0 0 | 3 0 0 | 3 0 1 | 0 0 1 | 1 0 0 | 1 2 0 | 0 2 0 |
4 0 0 | 4 1 0 | 4 1 2 | 4 0 2 | 4 3 2 | 4 3 2 | 4 3 0 | 4 3 0 |

0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 1
0 1 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 0 | 0 0 2 | 0 0 2
0 2 0 | 0 2 1 | 0 0 1 | 1 0 0 | 1 0 3 | 0 0 3 | 0 0 3 | 0 0 3
0 3 4 | 0 3 4 | 2 3 4 | 2 3 4 | 2 0 4 | 2 1 4 | 0 1 4 | 0 0 4

But this is just the 3-disk pattern above run twice, once on top of disk 4 on the first pole, and once on top of disk 4 on the third. Notice that after the 1st move disk 1 moves every 2 configurations clockwise, that after the 2nd move disk 2 moves every 4 configurations counterclockwise, that after the 4th move disk 3 moves every 8 configurations clockwise, and that after the 8th move disk 4 moves every 16 configurations counterclockwise. To build a proof, one should show that the configurations reached this way are always feasible (no larger disk on a smaller one) and “ergodic”, meaning that they span all of the possible configurations, until they eventually reach the desired one.
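A proof is still wanting, but the periodicity pattern can at least be checked mechanically: recording which disk moves at each step of the two-move rule, disk k should move exactly at the steps whose number of trailing binary zeros is k-1, i.e. at steps ≡ 2^(k-1) (mod 2^k). A quick standalone check (same caveats as the sketch above):

```python
def move_trace(n):
    """Run the two-move rule, recording which disk moves at each step."""
    poles = [list(range(n, 0, -1)), [], []]
    solved = lambda: any(len(p) == n for p in poles[1:])
    trace = []
    while not solved():
        src = next(i for i, p in enumerate(poles) if p and p[-1] == 1)
        dst = (src + 1) % 3
        poles[dst].append(poles[src].pop())
        trace.append(1)
        if solved():
            break
        a, b = (i for i in range(3) if i != dst)
        if not poles[a] or (poles[b] and poles[b][-1] < poles[a][-1]):
            a, b = b, a
        poles[b].append(poles[a].pop())
        trace.append(poles[b][-1])
    return trace

for n in range(1, 10):
    for step, disk in enumerate(move_trace(n), start=1):
        # disk k moves exactly when k - 1 equals the number of trailing
        # zeros in the binary expansion of the step number
        assert disk == (step & -step).bit_length()
print("periodicity confirmed")
```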

I leave this to you as homework.

Thermodynamics done right: (lack of) pedagogical material

[This post is a work in progress and will be updated with time, please feel free to add your favourite reference in the comments and I will discuss it]

From time to time I like to say in my talks, and write in my papers, that we now have a modern theory of nonequilibrium thermodynamics based on Markov processes, one that covers most of the “old-style” thermodynamic phenomena, greatly extends our old understanding of nonequilibrium phenomena, and is ripe to supplant the way we teach thermodynamics in physics courses, which is in a quite shameful state. While the physical insights that made this possible are already quite old (dating back e.g. to the Einstein relation), only recently has a vast community reasoning in these terms coalesced. Unfortunately, so far very little effort within this community has been devoted to creating pedagogical material. I personally fantasize about doing my bit, maybe writing some lecture notes, or even a monograph. However, this humongous task looks prohibitive, given that I’m at the same time a procrastinator and a perfectionist. For the moment I can only list what seem to me the best materials for getting acquainted with “modern” thermodynamics.

The only broad review on the topic is

U. Seifert, Stochastic thermodynamics, fluctuation theorems and molecular machines, Rep. Prog. Phys. 75, 126001 (2012) [arXiv:1205.4176]

However, I do not suggest it for learning the theory. While it does a good job of surveying trends, topics, and methods, it was not written with a pedagogical intent, so it ends up being a loose collection. I would suggest it only for later reading, when one already masters a few of the technical tools and wants to get to today’s frontier of research.

When engaging with thermodynamics based on stochastic processes, it is convenient to choose one particular formalism, either discrete (based on the master equation) or continuous (based on diffusion equations). Results are interchangeable, but it’s good to focus on one only at the start. On the discrete side, which is my favourite, this short review by Van den Broeck and Esposito, based on the lecture notes of a course, is simple and accessible and gives a bit of an overview:

C. Van den Broeck and M. Esposito, Ensemble and trajectory thermodynamics: A brief introduction, Physica A 418, 6 (2015) [arXiv:1403.1777].
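To fix ideas on what the discrete formalism computes: for a Markov jump process with rates w_ij (from state j to state i) the entropy production rate is σ = ½ Σ_ij (w_ij p_j − w_ji p_i) ln[(w_ij p_j)/(w_ji p_i)] ≥ 0, vanishing exactly at detailed balance. A toy sketch of mine (the three-state rates are arbitrary, chosen to violate Kolmogorov’s cycle criterion):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary 3-state rates, w[i, j] = jump rate from state j to state i.
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
W = w - np.diag(w.sum(axis=0))   # generator of dp/dt = W p, columns sum to 0

def entropy_production(p):
    """sigma = (1/2) sum_ij (w_ij p_j - w_ji p_i) log(w_ij p_j / w_ji p_i)."""
    return 0.5 * sum(
        (w[i, j] * p[j] - w[j, i] * p[i])
        * np.log(w[i, j] * p[j] / (w[j, i] * p[i]))
        for i in range(3) for j in range(3) if i != j)

p0 = np.array([0.98, 0.01, 0.01])   # start far from stationarity
for t in [0.0, 0.5, 2.0, 10.0]:
    p = expm(W * t) @ p0
    print(f"t = {t:4.1f}   p = {p.round(3)}   sigma = {entropy_production(p):.4f}")
# sigma stays strictly positive even at stationarity: these rates violate
# Kolmogorov's cycle criterion, so the steady state carries circulating currents.
```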

My own Ph.D. thesis might be a slightly more technical guide to the analysis, in particular Chapters 3 and 4. What is nice about them is that I try to separate the dynamics of Markov processes from the thermodynamics associated with them, and furthermore (in Chapters 1-2) to single out which aspects of nonequilibrium thermodynamics are due only to the geometry/topology of the state space, with no particular reference to the fact that there is an underlying Markov process. I also try to avoid the plethora of thermodynamic potentials that haunt thermodynamic talk. It can be found here (I hold no responsibility for what I was writing 6-7 years ago!):

M. Polettini, Geometric and Combinatorial Aspects of NonEquilibrium Statistical Mechanics [link]

In general, Ph.D. theses might be a good place to look for introductions to the topic. Chapter 8 of Takahiro Sagawa’s published thesis does a good job of reviewing the fluctuation theorems, and the previous chapters can be used to get an overview of so-called Maxwell demons revisited:

T. Sagawa, Thermodynamics of Information Processing in Small Systems (Springer, 2012) [link]

On the diffusion side, one could make a blend of Van Kampen’s and Ken Sekimoto’s books, the first describing the dynamics of diffusion equations, the second describing the thermodynamics.

N. G. Van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland)
K. Sekimoto, Stochastic Energetics (Berlin: Springer)

For more recent research trends, the New Journal of Physics hosted a special issue, presented by the editors here:

C. Van den Broeck, S.-i. Sasa, and U. Seifert, Focus on stochastic thermodynamics, New J. Phys. 18, 020401 (2016) [link]

However, all of these references somehow fall short of giving an explicit, systematic connection to classic topics in thermodynamics and Equilibrium Statistical Mechanics, such as the ideal gas law, the Carnot cycle, etc. This is mainly because these references look at the future of thermodynamics at the nanoscale, rather than covering its past; but it is clear that, if any serious claim is to be made about this being the way thermodynamics should be taught, we need to tackle the problem of incorporating the most important achievements of thermodynamics and Equilibrium Statistical Mechanics in an elegant and self-consistent fashion. This is possible in principle, but it has to be done explicitly. So, for the moment, the connection to classic thermodynamics has to be traced back piece by piece by the engaged student.

[To be added soon: refs. on Large Deviation Theory, interacting particle models]

Oblique projections on metric spaces

New paper out there:

M. Polettini, Oblique projections on metric spaces, arXiv:1711.04672

This one falls into a line of research dedicated to algebraic methods in graph theory, which eventually turn out to be useful for the analysis of thermodynamic systems on a network, though in this paper I decided to keep the physical motivation to a minimum and concentrate on the math, which is just plain linear algebra. One of the reasons for keeping the paper so abstract and simple is that, though I see several connections to other parts of physics and math, I am not yet in a position to make these observations precise enough. Nevertheless, I think the paper is deep and sound enough to be kicked out as it is.

However, here I can be more courageous and attempt a few connections that I did not dare make in the main body of the paper, and which might hopefully make the interpretation a bit more straightforward.

Projection operators are crucial in physics (consider e.g. the measurement process in Quantum Mechanics). Most often we assume orthogonal basis states: in this case everything is trivial and my paper does not matter, because I only have something to say about projections on oblique subspaces. Nonorthogonal states are a bit rarer in Quantum Mechanics, though they do have a role (one that I will try to explore in more detail soon…): for example, coherent states are nonorthogonal (and over-complete), molecular orbitals as well, and I would guess there are a number of similar situations in Quantum Computation, where one keeps moving between Bell states and pure states.

So what does my result say? Let’s take one step back: in my previous paper […], among some other novel considerations, I re-derived the known fact that, given a projection P and its complement Q = I − P, and letting P*, Q* denote their adjoints, the operators P*P and Q*Q have the same spectrum as regards eigenvalues larger than 1. They can also have eigenvalues 0 and 1 in their spectrum, coming with different multiplicities (possibly none).

What do these operators P*P and Q*Q do? It’s more easily explained with this simple example:

[Figure: projections.png — the two oblique projection directions (solid axes) and the directions orthogonal to them (dashed)]

The two solid nonorthogonal axes are the directions onto which we project, and the dashed ones are the two directions orthogonal to them. P*P first projects along one of the solid axes, and then along the direction orthogonal to the other axis; Q*Q does the same with the roles of the axes exchanged. This gives rise to two maps, and these two maps have the same eigenvalues > 1. In fact, I can prove something stronger, which physicists might call a “supersymmetry” (and indeed, as I briefly discuss at the end of the paper, it might have more than a faint connection to that supersymmetry): there exists a “supercharge” D such that

P*P − I = DD*   and   Q*Q − I = D*D

Now in this latest paper I considered the same kind of construction when the vector space is also endowed with a metric G that is not the usual scalar product. It becomes natural to define P′ = √G⁻¹ P √G and Q′ = √G⁻¹ Q √G, and again construct the operators P′*P′ and Q′*Q′. Then the same results as above hold.
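Both the flat and the metric statements are easy to test numerically. A sketch of mine (the construction P = A(BA)⁻¹B is just one convenient way to generate random oblique projections):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 6, 3

def spectrum_above_1(M):
    ev = np.linalg.eigvalsh(M)
    return ev[ev > 1 + 1e-9]

# A random oblique projection: P = A (B A)^{-1} B satisfies P^2 = P, P* != P.
A = rng.normal(size=(n, k))
B = rng.normal(size=(k, n))
P = A @ np.linalg.inv(B @ A) @ B
Q = np.eye(n) - P

print(spectrum_above_1(P.T @ P))   # flat case: these two outputs agree
print(spectrum_above_1(Q.T @ Q))   # up to numerical error

# Metric case: G positive definite, P' = sqrt(G)^{-1} P sqrt(G).
X = rng.normal(size=(n, n))
G = X @ X.T + n * np.eye(n)
ev, V = np.linalg.eigh(G)
sqrtG = V @ np.diag(np.sqrt(ev)) @ V.T
Pp = V @ np.diag(1 / np.sqrt(ev)) @ V.T @ P @ sqrtG
Qp = np.eye(n) - Pp

print(spectrum_above_1(Pp.T @ Pp))  # metric case: again the eigenvalues
print(spectrum_above_1(Qp.T @ Qp))  # above 1 coincide
```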

In the paper I discuss some applications to electrical circuit analysis, and in the future I plan to go more in depth into the analysis of nonequilibrium systems. At the mathematical level, it is very tempting to notice that the transformation P′ = √G⁻¹ P √G looks very much like a gauge transformation, something analogous to √g(x)⁻¹ ∂_x √g(x) in differential geometry, and the application to graph theory I give in the paper hints at a connection to exterior derivatives and Hodge theory, though at the moment I fail to see the bigger picture.

Teaching /1: High school teachers

In these months my (sparser and sparser) research activity will be accompanied by several different sorts of teaching: to master students, to Ph.D. students, to high-school students, to high-school teachers, and even to kids!

The latter two audiences are surely the more unusual ones for an academician, so I’ll collect some thoughts about how I plan these activities and how they turn out.

The opportunity to teach high-school teachers is part of a bigger science communication project on the future of energy that will end with a conference in 2018 in my hometown, Mantova. The idea is to get high-school teachers, and through them high-school students, to arrive at the conference already prepared on some of its themes, and actually to take an active part in the organization.

I gave a 6-hour training course introducing some of the most modern insights on thermodynamics done with Markov processes. In particular I wanted to get to the Fluctuation Theorem, and in so doing to give a very “system-theoretic” perspective on thermodynamic systems and scrape off a lot of the myths surrounding thermodynamics. The challenge was particularly interesting because these people all had a solid, though diverse, university background, and they do teach thermodynamics, but of course they might be a bit rusty. So I could at the same time be a bit more philosophical but also go into a reasonable degree of technical detail.

I decided to leave two take-home messages, one more philosophical and one more mathematical. The philosophical message is that all formulations of the second law that involve some sort of “entropy of the universe” do not make sense, and lead to the sort of paradoxes that inform so much lay talk (and can even be found in cosmology). Thermodynamics is about processes occurring to systems that are portions of the universe, not universes of their own; abstracting the tools for calculation that we use to characterize irreversibility in such systems into idealized absolute concepts is a dangerous operation. To show this, I had to treat to some extent what it means to be an exact or an inexact differential, and why these mysterious inexact differentials show up in thermodynamics. The second take-home message was the Fluctuation Theorem as a generalization of the Second Law of Thermodynamics, and its consequences. I surfed through these themes helping myself with historical references (with which high-school teachers usually connect better) to the Carnot cycle, to Clausius’s entropy, to the Boltzmann equation, to Einstein’s analysis of Brownian motion, and finally to the foundations of the thermodynamics of irreversible processes laid down by such people as Onsager and Prigogine.
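For the record, the statement I built up to is the detailed fluctuation theorem for the entropy production Σ_t (in units of k_B); the Second Law then follows on average by Jensen’s inequality:

```latex
\frac{P(\Sigma_t = \sigma)}{P(\Sigma_t = -\sigma)} = e^{\sigma}
\quad \Longrightarrow \quad
\big\langle e^{-\Sigma_t} \big\rangle = 1
\quad \Longrightarrow \quad
\langle \Sigma_t \rangle \ge 0 .
```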

Though the course was deemed by most quite challenging and “high”, I think most people could follow the basic lines of reasoning, at least judging from the very passionate questions I received.

Q is for Quantum & “reality”

Terry Rudolph is a seemingly young professor at Imperial College, already world-renowned as the third author of the so-called PBR theorem (I’m so envious he’s got a theorem named after himself…). This result allegedly shakes the foundations of Quantum Mechanics by stating that either quantum states are “real” (whatever that means) or else the theory is “wrong” (whatever that means!). I look forward to reading more about all this, with the mindset of a person convinced that physics is ultimately about measurement and information, and that Quantum Mechanics is the “symbolism of atomic measurements”, a much beloved sentence due to Schwinger.

But for the moment I’ll focus on his book “Q is for Quantum”, which is a piece of art, though not a perfect one (fortunately, or I’d already be hanging from a beam). Rudolph also runs a webpage with updates, and I’m not the first to review this book.

The first great merit of this book is that it is self-produced, with very good taste for layout. As such its contents are free and direct (e.g. it does not include a biography of the author, and the self-comments on the back cover are quite witty*). I already bought two copies and you should as well. Its second greatest virtue is… brevity. The book is extremely concise; it goes straight to the point. Its purpose is to show how some of the most intriguing aspects of Quantum Mechanics can be understood (or, better, explored and manipulated) with very little machinery, and in particular without linear algebra (up to a mysterious minus sign used to create interference, whose mystery is never really resolved… but you can buy that; interestingly, it turns out that Quantum Computation done that way is universal, so there’s nothing to lose in principle). It does so by presenting the material in a logically rigorous fashion, without getting lost in the usual blend of metaphors and anecdotes that “embellish”, so to say, the usual pop-science literature, and without losing time giving credit to this or that scientist or explaining useless technical jargon (as is the case with the very deep, but quite pedantic, Un’occhiata alle carte di Dio [A glimpse at God’s cards] by Gian Carlo Ghirardi). A dictionary making connections to the modern literature is served to the specialist at the very end. The style of writing is quite entertaining, the kind of nerdy fun for XKCD lovers. The exposition does not spare the reader some combinatorial games and a lot of thinking, both of which might make the book much more challenging for readers without proper training than the author seems to believe (let’s see, I’m running this experiment with a friend of mine…). In principle all of the calculations can be tackled with pencil, paper, and patience, provided readers stick to the rules of the game, insofar as the rules are well explained before the game starts (I’m always disappointed when friends invite me to play a new board game they are expert at, and only at the end does a new rule come out that I didn’t know about, one that makes them win by loads; incidentally this is also what happens in most of Nolan’s movies, but now I’m going way astray…).

The book comprises three chapters: the first showing how and why quantum computation is more powerful than classical computation, the second explaining entanglement, and the third the problems of interpretation of Quantum Mechanics and of the measurement process, including hidden variables and such amenities. It’s really amusing to see how the latest achievement, quantum computation, can actually be used to introduce quantum mechanics in a less contrived way, assuming that classical computation already makes some sort of sense to the reader (while the author does spend some time explaining classical gates, he takes for granted that people are ready to accept the very idea that logical operations can be processed mechanically and with binary symbols). The formalism also allows him to introduce a version of the Bell inequalities, and to present with incredible clarity the “no faster-than-light communication” argument. Also, the author manages to scoop in some of his own insights on the matter, in particular some elements of the PBR theorem mentioned above and of his own take on the interpretation of Quantum Mechanics. Which means that a smart teenager reading and understanding this book will peer at the front line of research today: an incredible achievement, although it will not spare them years and years of university etc.

– – – Good cop exits the scene. Bad cop enters the scene – – –

The one chapter that falls a bit short of its pretentious claims is the third, on “reality”. The subject matter is, of course, monstrous. In fact, we are begged to drop all philosophical subtleties at the start. As if one could!

The logical development of this chapter is still clear, but the narrative is not quite as to-the-point as in the previous two. Several phrases are redundant, and because I had the feeling of walking on eggshells, always expecting a new load of concepts at every sentence, the repetition of previous concepts did not actually help: I tended to assume the next sentence would add something I had not thought about before, and therefore read with an intensity that maybe was not even necessary. Or, maybe, it was necessary! Which means I didn’t get the point… In any case, while a second reading might help, and I did get a faint idea of the Pooh-bear argument, for that I might prefer to go to the original article, which has already been piled onto tons of other papers-soon-to-be-read (ahem…).

Also, unfortunately, Rudolph falls into Nolan’s usual mistake: he introduces a new rule towards the end of the story. What is “real” is only defined on p. 122, and not in a very satisfactory way: “By hypothesis, what we mean by the real state is anything and everything that can affect the outcome of an experiment”. What does that mean?

This leads us to the more conceptual core of the problem, and here I’d like to weigh in with my own misconceptions about physics. So from now on I’m going to ramble; please stop reading.

The point is, well before the Pooh argument, I’m already in a bit of disagreement. If I understood correctly, we are asked to assume that it makes sense to define “reality” as a set of variables whose detailed knowledge would underlie any probabilistic concept, and this independently of whether the inferential machinery governing the states of knowledge of the observer is classical or quantum. The Pooh-bear argument is then laid down to show that the wave function can be thought of as such a “real” state, dispensing with an argument by Einstein for why it couldn’t.

I’m one of those freaks who don’t believe that, even in classical statistical mechanics, probability as a state of knowledge is actually supported by a “truth of the matter” about what the real states of a system are, according to their volume in phase space (as I argued at length in this paper). The volume measured by whom? E.g., what is the “real” entropy of a body? I don’t think this question even makes sense. What does make sense is that the underlying degrees of freedom will also be subjected to a process of measurement, their probability analyzed, then refined by Bayesian updating, and so on and so forth. For example, the fact that today we take, for ideal gases, the positions and momenta of the particles of the gas to be “equiprobable a priori” (up to lots and lots of corrections) is not due to any “reality” or fundamental nature of the positions and momenta of the gas molecules; it’s just that we ran a lot of science before getting to that conclusion, updating previous hypotheses until we found one that works reasonably well. If, instead, say, gravitational interaction had been much stronger, and one couldn’t neglect the effect of General Relativity in determining the equation of state of a perfect gas, we would have given the “microcanonical ensemble” a quite different meaning, with a complete distortion of the “real” state space. The very history of QM (and in particular of Quantum Field Theory) reveals that obsession with the “real” values of physical properties ends up in nothing. In fact, the more “fine-grained” states that Rudolph draws on his planes of reality would have exactly the same quantum nature as the more “coarse-grained” states the macroscopic observer measures, and there would have to be a microscopic observer, but an observer nonetheless, making quantum measurements, and eventually there would be a proper way to compare the observations of the one observer and the other. To me, what is really relevant is that measurements turn out to be consistent. This is actually (at present) my general philosophy of science: a reasonably self-consistent body of knowledge whose credibility does not come from the fact that it compares to “reality”, but from the fact that the scientific community has established practices which have allowed it to acquire authoritativeness in certain fields of knowledge. The demarcation of the fields where the scientific method works was determined in a somewhat evolutionary manner, therefore when people say “science works”, to me this is more of a definition than of a property…

OK, as usual I became all too serious. To go back to the book: personally I would not have created a separation between misty states and rocky states; I would have always put things in the mist, even after measurement (measurement then becomes just another logical gate). This gives great unity, and operationally it makes no difference, as long as one sees QM as an inferential machine that manipulates symbols.

* The last authors I’ve seen writing their own back-cover notes were Luther Blissett, funnily enough also the authors of “Q”. But a completely different book.