Sparse items on vaccines, Illich, science

  • An interesting and convincing post about the (im)possibility of viruses and bacteria developing vaccine resistance (contrary to what happens with antibiotics). The answer is complex, with a lot of ifs and buts. May I add, evolution has shown itself to be much more creative than we had expected.
  • Also on the same topic: here, here (commentary here). Some propose that vaccines may be the solution to the antimicrobial resistance problem.
  • Italian virologist Roberto Burioni claims that science is not democratic. On this issue, I like the words of Harry Collins, from Gravity’s Ghost and Big Dog:

“The following are anonymized extracts from a heated discussion among about sixty people that lasted approximately half an hour and which, because it could not be resolved, ended in a vote (!) on what should be done. […] Not long after this the meeting concluded that it would be impossible to reach a consensus and that they must vote on it. The vote went eighteen for [A] and a much larger number (around thirty) for [B]. The [A] group was asked to concede, which they did, but not without at least some people feeling frustrated and certain that a poor decision had been made. While everyone felt uncomfortable about taking a vote, to the extent of people looking round at me and laughingly acknowledging that “I had got what I came for,” it was an embarrassingly long time later before I realized what it all meant. The procedures of science are meant to be universally compelling to all. To find that one needs to vote on an issue institutionalizes the idea that there can be legitimate disagreement between rival parties. In other words, taking a vote shows that there can be a sociology of science that is not rendered otiose by the universally compelling logic of science. Though we did not realize it at the time, a vote at a scientific meeting is a vote for sociology of science and we all felt it instinctively. It is not that there aren’t disputes in science all the time, but putting their resolution to a vote legitimates the idea that they are irresolvable by “scientific” means; neither the force of induction from evidence or deduction from principles can bring everyone to agree.”

  • Two interesting personal recollections, one by JP Bunker and one by R. Smith, editor of the BMJ, of a lecture by Ivan Illich, on the occasion of the republication of Medical Nemesis.

Health, argues Illich, is the capacity to cope with the human reality of death, pain, and sickness. Technology can help, but modern medicine has gone too far—launching into a godlike battle to eradicate death, pain, and sickness. In doing so, it turns people into consumers or objects, destroying their capacity for health.

His notion of health enhancement is remarkably in tune with current views of the impact of the social environment on health.

He makes his purpose crystal clear: “I used medicine as a paradigm for any mega-technique that promises to transform the conditio humana. I examined it as a model for any enterprise claiming, in effect, to abolish the need for the art of suffering by a technically engineered pursuit of happiness.” […] “emphatically, I do not care about health”

  • Illich on vaccinations in Medical Nemesis

Some modern techniques, often developed with the help of doctors, and optimally effective when they become part of the culture and environment or when they are applied independently of professional delivery, have also effected changes in general health, but to a lesser degree. Among these can be included contraception, smallpox vaccination of infants, and such nonmedical health measures as the treatment of water and sewage, the use of soap and scissors by midwives, and some antibacterial and insecticidal procedures.

The combined death rate from scarlet fever, diphtheria, whooping cough and measles among children up to fifteen shows that nearly 90 percent of the total decline in mortality between 1860 and 1965 had occurred before the introduction of antibiotics and widespread immunization. In part, this recession may be attributed to improved housing and to a decrease in the virulence of micro-organisms, but by far the most important factor was a higher host-resistance due to better nutrition.

  • Illich on vaccinations and schooling in Tools for Conviviality (a rather stronger opinion, though at the time he had not yet studied the medical system; the tie he draws between schooling and vaccination is quite interesting)

It is not always easy to determine what constitutes compulsory consumption. The monopoly held by schools is not established primarily by a law that threatens punishment to parent or child for truancy. Such laws exist, but school is established by other tactics: by discrimination against the unschooled, by centralizing learning tools under the control of teachers, by restricting public funds earmarked for baby-sitting to salaries for graduates from normal schools. Protection against laws that impose education, vaccination, or life prolongation is important, but it is not sufficient. Procedures must be used that permit any party who feels threatened by compulsory consumption to claim protection, whatever form the imposition takes. Like intolerable pollution, intolerable monopoly cannot be defined in advance.

  • Interesting book: Elena Conis, Vaccine Nation (The University of Chicago Press).


Three abstracts

Network methods for nonequilibrium thermodynamics

Ever since Einstein’s description of Brownian motion, the mathematics of Markovian stochastic processes has proved a crucial paradigm for understanding the behaviour of open systems interacting with a large (and thus memoryless) environment. When such processes occur on an intrinsically discrete configuration space, the topology of the network of possible transitions furnishes important insights into the nature of nonequilibrium processes. In this talk I will 1) trace the conceptual path that leads from the abstraction of open systems to the mathematics of Markov processes on networks, 2) mention some of the major findings that are novel with respect to textbook thermodynamics, and 3) outline some recent developments regarding the role of an ideal observer who only has partial information about the system.
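
Just to fix ideas, here is a minimal sketch of my own (with made-up rates, not taken from the talk) of the kind of object the abstract refers to: a Markov jump process on a small network of states, its stationary distribution, and the steady-state entropy production rate, which vanishes only if detailed balance holds.

```python
import numpy as np

# Transition rates w[i, j]: jump rate from state j to state i (made-up numbers).
# The network is a 3-state cycle with a bias, so detailed balance is broken.
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

# Generator (rate matrix) L: off-diagonal L[i, j] = w[i, j], columns sum to zero.
L = w - np.diag(w.sum(axis=0))

# Stationary distribution: the null vector of L, normalized to a probability.
eigvals, eigvecs = np.linalg.eig(L)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p /= p.sum()

# Steady-state entropy production rate (Schnakenberg's formula):
# sigma = (1/2) * sum_{i,j} (w_ij p_j - w_ji p_i) * ln[(w_ij p_j)/(w_ji p_i)]
sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j and w[i, j] > 0 and w[j, i] > 0:
            flux = w[i, j] * p[j] - w[j, i] * p[i]
            force = np.log((w[i, j] * p[j]) / (w[j, i] * p[i]))
            sigma += 0.5 * flux * force

print("stationary distribution:", p)
print("entropy production rate:", sigma)  # strictly positive away from detailed balance
```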

Effective thermodynamics for a marginal observer

Thermodynamic modeling often presumes that the observer has complete information about the system she or he deals with: no parasitic currents, exact evaluation of the forces needed to drive the system out of equilibrium. Most often, however, the observer only measures marginal information. How is she or he to make consistent thermodynamic claims? Disregarding sources of dissipation might lead to untenable claims, such as the possibility of a perpetuum mobile. Building on the tenets of Stochastic Thermodynamics, we show that such an observer can nevertheless produce an effective description that does not dispense with the fundamentals of thermodynamics: the Second Law, the fluctuation-dissipation paradigm, and the more recent and encompassing Fluctuation Theorem.

Towards a systemic and systematic approach to nonequilibrium thermodynamics

Even today, two hundred years after its birth, thermodynamics is a collage of models, formulas, fundamental principles and empirical laws. In this cauldron it is easy to use the same words for different concepts, or different words for the same concept, making the ground fertile for confusion and paradoxes. Not by chance, it is among the most feared subjects of study for physicists and engineers. There are enormous distances between the teaching of thermodynamics, still mostly anchored to nineteenth-century conventions, the applied research on energy efficiency, and the theoretical conceptualization of its fundamental principles. In this seminar I would like to revisit some basic concepts: what it means to be at equilibrium or out of equilibrium, what the substantial difference is between a “state function” and an “inexact differential” and why we should care, what the role of the observer is, and in what sense thermodynamics is a “metascience”, a gnoseology of ignorance. Along the way I will debunk some recurring myths, starting with the formulation of the second law as the “increase of the entropy of the universe”, and reread the history of thermodynamics with a critical eye. Finally, I will briefly present a theoretical approach that is mathematically rigorous and physically consistent with this observational scaffolding, sometimes called “stochastic thermodynamics”.

Ph.D. @ Uni Lux on thermodynamics of computation

There will soon be an official opening for a thematic Ph.D. position at the Complex Systems and Statistical Mechanics group of the University of Luxembourg, where Prof. Massimiliano Esposito is Principal Investigator and I am a research associate. I will be the student’s advisor. The project is entitled “Accuracy and energetic efficiency of computation in the post-Moore-law era: a Stochastic Thermodynamics approach” (see the short description below). We would like the candidate to start in June 2018. We are looking for a student with a strong background in mathematics and theoretical physics, and with some programming skills. Interested students should submit their application directly to me, including their curriculum vitae, complete with marks from their bachelor’s and master’s degrees, a motivation letter, and possibly a short letter of presentation from their master’s thesis advisor.

PROJECT DESCRIPTION

The process of miniaturization of the microprocessor, which has sustained the tremendous spread of digital technologies, is slowing down and might eventually come to a halt as it meets its fundamental thermodynamic limits, both in the process of computation and in the process of transporting information. At the macroscopic level, keeping within an energetic budget is crucial to avoid overheating of personal devices at room temperature; furthermore, the energy spent to keep major data centers and High Performance Computing facilities cool should remain a small share of the world’s energy consumption. At the microscopic level, as electronic components become smaller, the operating voltages become comparable to the random voltage generated by thermal noise (the environment), which produces false bit flips and makes computation inaccurate.

Keeping within a fixed energy budget is therefore at the same time a formidable constraint and an occasion to venture into new research directions that demand a better integration between all levels involved in the process, from the algorithmic, to the technological, to the network architecture. In this respect, many recent lines of research propose slowing down the computational task and trading energetic feasibility against accuracy (whenever the task need not be too precise). As infinitely fast computation is accurate but expensive, and infinitely slow computation is cheap but completely unreliable, there exists a (class of) optima in between. The main objective of this project is to find and characterize this class.

The physical theory that allows one to study the trade-off between speed, thermodynamic efficiency, and accuracy is Stochastic Thermodynamics, which was only recently formalized into a complete theory but counts among its milestones the work by Johnson and Nyquist on the analysis of electrical noise in circuits. The project’s goal is to develop a set of tools for making claims about optimal conditions for the “next switch” (whatever technology might replace the transistor, possibly a slower one), or for the next integration step. The project will be based on the mathematics of Langevin systems applied to electrical elements and circuits that are fairly representative of IT technologies.
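
To make the “thermal noise flips bits” point tangible, here is a hedged toy sketch of my own (not part of the official project text): an overdamped Langevin simulation of a bit stored in a double-well potential, where the barrier height plays the role of the energy budget; lowering it makes spontaneous, erroneous bit flips more frequent. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_bit_flips(barrier, kT=1.0, dt=1e-3, steps=500_000):
    """Overdamped Langevin dynamics in the double well U(x) = barrier * (x**2 - 1)**2.

    The two wells at x = -1 and x = +1 encode the two logical states of the bit;
    a flip is recorded whenever the trajectory crosses from one well to the other.
    """
    x, bit, flips = 1.0, 1, 0
    noise_amp = np.sqrt(2.0 * kT * dt)
    for _ in range(steps):
        force = -4.0 * barrier * x * (x**2 - 1.0)      # -dU/dx
        x += force * dt + noise_amp * rng.standard_normal()
        new_bit = 1 if x > 0 else -1
        if new_bit != bit:                              # thermally activated error
            flips += 1
            bit = new_bit
    return flips

for barrier in (2.0, 4.0, 6.0):   # barrier height in units of kT (the "energy budget")
    print(f"barrier = {barrier} kT: {count_bit_flips(barrier)} spontaneous flips")
```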

Workshop @ SISSA: Communicating science without reducing it


I had the great opportunity to hold a workshop with the students of the Master of Scientific Communication at SISSA, the International School for Advanced Studies in Trieste. This gave me the chance to put together some thoughts about science, research, popularization, and reductionism that had been in search of a more coherent formulation. Here are a few pictures, taken by a dear friend and collaborator:

I started the workshop by asking the students (Experiment 1) to measure the sum of the internal angles of these two figures:

[figure: the two figures handed to the students]

Results were peaked at 180° for the first and 179° for the second:

[charts: histograms of the measured angle sums for the two figures]

From a technical point of view, the statistical analysis of “what is the sum of the internal angles” can be carried out by comparing these curves to the “real” expected curves, which one might assume to be Gaussian, G(α), with an error given by the resolution of the measurement instrument, which was 1°.

Beware: the two figures are not triangles. I drew one with a slightly convex side and one with a slightly concave side, so that the sums of their internal angles were respectively 181° and 179°. Throughout the presentation I avoided talking about “triangles” and only referred to “figures”.

The above experience might then be a good one if one wants to show that people have confirmation biases that drive them to “push” for 180°. This is a social experiment (Experiment 2) on top of another experiment. However, such a conclusion could only be supported by carrying out Experiment 1 on a much larger statistical sample, and by comparing the histograms to those obtained in experiments involving real 180° triangles. In such a social experiment one might even want to divide the population into two segments: those who get an incentive for obtaining the “right” answer (whatever it is) and those who do not.

Notice that a proper statistical analysis of this other question would have to involve some meta-analytic statistical tool. In fact, we would have to consider the probability of a deviation of the histograms from the “ideal” histograms: if the measurements are i.i.d. with Gaussian distribution G(α) and an error of 1°, then after performing N measurements the probability of obtaining M measurements of a value α off the “real” one is given by the binomial {N choose M} G(α)^M (1 - G(α))^(N-M), and it is this quantity we want to look at to check for possible biases.
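
As a sketch of what such a check could look like in practice (all numbers below are invented for illustration, and the “true” distribution is simply assumed Gaussian with a 1° width):

```python
import numpy as np
from scipy.stats import norm, binom

true_value = 181.0          # the "real" sum of internal angles of the first figure
resolution = 1.0            # instrumental error (degrees)
N = 30                      # number of students measuring

# Probability that a single unbiased measurement, rounded to the nearest degree,
# lands on a given value alpha.
def G(alpha):
    return norm.cdf(alpha + 0.5, true_value, resolution) - \
           norm.cdf(alpha - 0.5, true_value, resolution)

# With i.i.d. measurements, the number M of students reporting alpha is binomial:
# P(M) = C(N, M) G(alpha)^M (1 - G(alpha))^(N - M).
alpha, M_observed = 180.0, 22     # e.g. 22 out of 30 students reported 180 degrees
p_at_least = binom.sf(M_observed - 1, N, G(alpha))
print(f"P(at least {M_observed}/{N} unbiased measurements give {alpha}): {p_at_least:.2e}")
# A tiny probability would hint at a systematic pull ("confirmation bias") towards 180.
```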

I might try a similar experiment one day, but that was not the point here either… Maybe these guys had biases, maybe not. But what did I actually mean when I told them that the sums of the internal angles of these figures were 179° and 181°? How did I determine that?

It turns out I drew those images with a vector graphics program that allows me to: 1) zoom arbitrarily close to the corners of the image, 2) make the line thickness arbitrarily thin, 3) measure the angle with a protractor tool with a resolution of hundredths of a degree. That’s quite a different procedure from theirs: they have a resolution of one degree, and the lines are as thick as drawn, which contributes to the statistical error. But there is also a source of systematic discrepancy, because the curved side is about twice as long as their protractor. So we are actually measuring two different things: I am measuring the angle formed by the tangent at the vertex (dashed lines), while they are measuring the angle associated with a chord (dotted lines):

[figure: tangent (dashed) versus chord (dotted) at the vertex]

Assuming that the curved line is an arc of a circle, and that the chord reaches the midpoint of the arc, their error with respect to the inscribed (straight-sided) triangle is half of mine (this can be proven by simple Euclidean geometry, e.g. by comparing congruent angles in the diagram below).

[figure: the Euclidean construction for the proof]
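
For the record, here is a sketch of the Euclidean argument, under the stated assumptions (the curved side is a circular arc from A to B, I measure with the tangent at A, and the students’ chord reaches roughly the midpoint M of the arc):

```latex
\begin{align*}
&\text{Let the arc } AB \text{ subtend a central angle } 2\theta .\\
&\text{Tangent-chord angle: the tangent at } A \text{ deviates from the chord } AB \text{ by } \tfrac{1}{2}(2\theta)=\theta .\\
&\text{Inscribed angle: the chord } AM \text{ subtends the arc } MB \text{ of central angle } \theta ,\\
&\text{so it deviates from } AB \text{ by } \tfrac{1}{2}\theta .\\
&\Rightarrow\quad \Delta_{\text{chord}} = \tfrac{1}{2}\,\Delta_{\text{tangent}} \quad\text{at each endpoint of the curved side.}
\end{align*}
```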

Therefore, if I had to evaluate whether they are biased, I would have to take into account that, for them, it was perfectly legitimate to claim 180.5° and 179.5°, which is well within the error bar!


So what was the whole point of this story? Not really to run Experiment 1 and quantitatively measure the internal angles of the figures, nor to run Experiment 2 and quantitatively measure people’s biases. Rather, it was to argue, in a completely heuristic and qualitative way, that science is complex and so should be any account of science (which is what these students are studying): for if, even for such a small thing, the story of what it means to measure and what it takes to be credible is such a mess, imagine what it will be with enormous themes such as, say, GMOs and the like. And a major part in this process is played by questioning: the students were ready to admit they might have had confirmation biases in running Experiment 1, but it takes one step further and a lot of insight to question my authoritativeness and point out that no, it was I who was (deliberately) pouring my own confirmation biases into running Experiment 2!

(Actually, we didn’t really have the time to give this experience the right pace, and I had to jump to the conclusions… So while this was an interesting first run of the workshop, from which I learned a lot, I hope I will one day be able to offer a more refined version of it to a larger audience. On that occasion I would hope to initiate a conversation in which, at some point, somebody contests authority by asking what it means that the measured values of 179° and 181° are “true”…)


The rest of the workshop was a commentary on the following slides, where I also put in some excerpts from books I’ve been reading recently, starting from how scientific popularization works today and how we do this job at Festivaletteratura, and ending with some more abstract and “political” considerations about the status of the University, on which I’ve been writing sporadically on this blog and to which I will definitely come back soon. I also ran, just for demonstration purposes, some of the workshops on computation I have proposed at Festivaletteratura and, more recently, in my son’s classroom.



Teaching /3: What is a bit? And what is a computer?

Recently I’ve been “teaching” many different sorts of human specimens. Ph.D. students. Master’s students. High-school teachers… And children! Which is so much fun. I put quotes around “teaching” because I’ve been learning so much from all these experiences.

Each and every kid presses their fingers onto the screens of digital technologies. But how many of them know what hides behind the screen of their phone, tablet or device? In this laboratory we will use legoes and dominoes to build the most rudimentary of mechanical calculators.

The inspiration comes from the adding machines explained here. Dominoes can be used to construct logical gates.

First I introduce the binary representation of numbers, which is a bit tricky for two reasons: because unfortunately we use the same two symbols, “0” and “1”, as for decimal numbers, which contributes to confusion; and because the “carrying over” happens very early, so one does not appreciate that it works exactly as in decimal accounting, only with fewer symbols. It’s therefore a good idea to pass through another base: I ask the children to think about what would happen if we only had seven fingers to start with, and counted by the names of the seven dwarfs: Doc, Dop, Bash, Slee, Grum, Snee, Hap. You soon find yourself with funny composite names like Bashgrumdop, etc. With this intermediate step it’s more intuitive to get to, e.g., a False/True representation of binary numbers.
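
A little script that plays the same game (the dwarf abbreviations are the ones above; the code itself is just an illustration):

```python
# Digit names for a seven-fingered counter: Doc=0, Dop=1, Bash=2, Slee=3, Grum=4, Snee=5, Hap=6.
DWARFS = ["Doc", "Dop", "Bash", "Slee", "Grum", "Snee", "Hap"]

def to_digit_names(n, digit_names):
    """Write a non-negative integer n in the base given by the list of digit names."""
    base = len(digit_names)
    digits = []
    while True:
        digits.append(digit_names[n % base])    # least significant digit first
        n //= base
        if n == 0:
            break
    word = "".join(d.lower() for d in reversed(digits))
    return word.capitalize()

print(to_digit_names(127, DWARFS))              # -> 'Bashgrumdop' (2*49 + 4*7 + 1)
print(to_digit_names(127, ["False", "True"]))   # -> binary, with symbols that are not 0 and 1
```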

The second step is to understand that False/True can be different states of stuff: “fallen/standing” for a domino, or “left/right” for a simple lego bit.

Third, I show the children the technologies at our disposal (dominoes, legoes and marbles) and I ask them to design a machine that can compute a 2-bit sum, or several logical gates (in the pictures above you can recognize a few OR and XOR gates, and an AND gate I built myself). Though I help the children through the process, giving them some hints, it is absolutely crucial that they figure out by themselves how to compose the pieces into a solution to the logical puzzle I posed. I don’t give the kids any special recipe to follow, and they eventually find solutions I would not have expected.

Fourth, I bring the pieces together and explain how, in principle, by composing the technologies they have been building, they could scale them up and obtain real computers that actually perform operations. To this end, the composability of both technologies allows for easy inferences. For example: once one has four 2-bit adders, it’s easy to compose them into an 8-bit adder. But while the resources scale linearly, the computational ability of this thing scales exponentially! Which opens up the whole question of what exponential growth is, and how it has sustained the industry…
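
For reference, here is what the same compositional idea looks like in code (a sketch, with the domino/lego gates replaced by Boolean functions): a half adder made of XOR and AND, a full adder made of two half adders and an OR, and an n-bit ripple-carry adder made of n full adders. Resources grow linearly with n, while the largest representable number grows as 2^n.

```python
# Elementary gates (the ones built out of dominoes and legoes in the workshop).
def AND(a, b): return a and b
def OR(a, b):  return a or b
def XOR(a, b): return (a or b) and not (a and b)

# A half adder: sum bit and carry bit of two input bits.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

# A full adder: two half adders plus an OR to merge the carries.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

# An n-bit ripple-carry adder: n full adders chained through the carry line.
def ripple_add(bits_a, bits_b):
    """bits_a, bits_b: lists of booleans, least significant bit first."""
    carry, out = False, []
    for a, b in zip(bits_a, bits_b):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 2-bit example: 3 + 2 = 5, i.e. [True, True] + [False, True] -> [True, False, True]
print(ripple_add([True, True], [False, True]))
```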

– – –

One thing I realized, and that could be an inspiration for a project I have in mind, is that legoes and dominoes differ crucially as computational media in several respects. The one that most interests me is that, while the lego adding machine uses a lego bit to encode information and a substantially different process to signal it (the marble), the dominoes are a more uniform technology: the transport and the processing of information (through the topology of their configuration) are the exact same physical process (and it would be interesting to extend this to memory as well…). Also, while the state of a lego bit is a static state, the “falling/standing” nature of the dominoes is not about whether a particular domino is up or down, but about what is happening along one line or another. On the other hand, dominoes are a tragically irreversible technology: they can only be used once (making for an extreme disposable computer), and while this might be fixed, the thermodynamic cost of doing so might be conspicuous.

Somehow, certain fundamental research in the foundations of computation is moving precisely in the direction of understanding how information transmission, processing, and storage can be made more uniform, and my personal take is that this can only happen if every variable is made dynamical. Then we will have to work out with diligence what the thermodynamic cost of all this is…

Teaching /2: Open Quantum Systems for Ph.D. students

This is a year of teaching. To children, to high-school teachers, to master’s students, etc.

These days I’m teaching a four-hour course on Open Quantum Systems to Ph.D. candidates of the physics department. The idea is to start from the unitary dynamics of a “universe”, resolve it into “system + environment”, and find a dynamical map for the system by tracing out the environmental degrees of freedom in the limit where the interaction between system and environment is weak. The material covers the so-called “microscopic” derivation of the Lindblad equation, then focuses on the case where the environment is thermal radiation, and then further on the cases where the system is either a spin or a harmonic oscillator. I briefly discuss entropy vs. entropy production, the second law of thermodynamics, and irreversibility.
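
As a minimal illustration of where the course lands (my own sketch, not the course notes): the Lindblad equation for a two-level system weakly coupled to thermal radiation, integrated naively with numpy, which relaxes to the expected thermal populations.

```python
import numpy as np

# Two-level system: H = (omega/2) * sigma_z, coupled to thermal radiation.
omega, gamma, nbar = 1.0, 0.1, 0.5        # frequency, coupling rate, mean photon number
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator: excited (index 0) -> ground (index 1)
sp = sm.conj().T

H = 0.5 * omega * sz

def dissipator(L, rho):
    """D[L] rho = L rho L^dag - (1/2) {L^dag L, rho}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def lindblad_rhs(rho):
    unitary = -1j * (H @ rho - rho @ H)
    decay = gamma * (nbar + 1) * dissipator(sm, rho)   # stimulated + spontaneous emission
    pump  = gamma * nbar * dissipator(sp, rho)         # absorption from the thermal field
    return unitary + decay + pump

# Euler integration starting from the excited state.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt = 0.01
for _ in range(20000):
    rho = rho + dt * lindblad_rhs(rho)

print("excited-state population:", rho[0, 0].real)
print("thermal prediction nbar/(2*nbar + 1):", nbar / (2 * nbar + 1))
```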

I had real fun preparing this course. Since I was dealing with a fairly new subject, I found the same vibe as back when I was a university student, always looking for further references, scrutinizing every possible detail, ending up reading four times as much material as was needed to prepare an exam. Which was one of the (many) reasons why I eventually graduated two years late (and while at the time I was a bit discouraged by the delay, it eventually turned out to be one of the best “choices” of my life).

The starting reference for this basic course is the remarkable book by Breuer and Petruccione:

H.-P. Breuer, F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press)

This is perfect for getting a physicist’s overview, but I recently found this other short book

A. Rivas, S. F. Huelga, Open Quantum Systems (SpringerBriefs)

that does a great job of summarizing a lot of otherwise quite hostile math-phys literature on the subject, and sheds light on some obscure passages. In particular, in the physicist’s derivation several different approximations are made: besides the weak coupling, there is Born’s assumption that the system and the environment are uncorrelated at all times (which makes no sense, given that what we want to study is precisely how the environment affects the system…), the Markov property, and the so-called “rotating wave” approximation. This is probably due to a stratification of arguments and techniques that have piled up over the history of the derivation of this and similar master equations. However, the treatment given by Rivas and Huelga in terms of Nakajima-Zwanzig operators shows that the Born-Markov approximation and, to a degree I’m not yet able to appreciate, even the rotating-wave approximation all follow from the weak-interaction hypothesis alone! The derivation is thus much neater.

As I want to learn more about all these things, as the final exam I will propose to the students a half-day workshop where each of them presents a paper or a book chapter picked from the following list (or, if they prefer, a relevant and interesting paper of their own choice).

Breuer & Petruccione Secs. 3.2.1 & 3.2.2

Rivas and Huelga Ch. 4.2 & Th. 3.1.2, Th. 3.2.1

These book chapters explain how to derive the Lindblad equation from the Markov property alone, without following a microscopic derivation. This is standard material and should be covered in some detail.

V. V. Albert and L. Jiang, Symmetries and conserved quantities in Lindblad master equations, PRA 89, 022118 (2014).

The main difference between the Lindblad equation and its classical analogue, the master equation, is that the former may have multiple nontrivial steady states, or even oscillate between coherent steady states, while multiple steady states of the classical master equation are trivially due to disconnected sub-spaces (in the case of reversible rates) or to absorbing basins of attraction (in the case where transitions are not reversible). Allegedly, this paper discusses multiple steady states and their relations to conservation laws and symmetries of the Lindbladian.
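
Just to make the classical side of this statement concrete (a trivial sketch of my own, with arbitrary numbers): a rate matrix made of two disconnected blocks has a two-dimensional space of steady states, one per block.

```python
import numpy as np
from scipy.linalg import null_space

# Generator of a classical master equation on states {0, 1, 2, 3}:
# the pairs {0, 1} and {2, 3} are mutually disconnected blocks.
W = np.array([[-1.0,  2.0,  0.0,  0.0],
              [ 1.0, -2.0,  0.0,  0.0],
              [ 0.0,  0.0, -3.0,  0.5],
              [ 0.0,  0.0,  3.0, -0.5]])

kernel = null_space(W)
print("dimension of the steady-state space:", kernel.shape[1])   # -> 2
```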

D. Taj and F. Rossi, Completely positive Markovian quantum dynamics in the weak-coupling limit, PRA 78, 052113 (2008)

One of the limitations of the microscopic derivations outlined above is that the environment must have a continuous spectrum and the system a discrete one. While the first condition seems to be unavoidable (all of thermodynamics is based on the assumption that we can throw stuff away and never see the consequences of it…), this paper rigorously addresses the case where the system has a continuous spectrum, by analyzing the separation of time scales between the system’s evolution and the time of Poincaré recurrences in the environment.

A. Rivas, A. D. K. Plato, S. F. Huelga, Markovian master equations: a critical study, New J. Phys. 12, 113032 (2010).

This is the material that underlies the book by Rivas and Huelga, but it also includes a couple of more in-depth examples, the two coupled harmonic oscillators and the driven harmonic oscillator, that are worth studying in detail.

F. Carollo, J. P. Garrahan, I. Lesanovsky and C. Pérez-Espigares, Making rare events typical in Markovian open quantum systems, arXiv:1711.10951.

J. P. Garrahan and I. Lesanovsky, Thermodynamics of Quantum Jump Trajectories, PRL 104, 160601 (2010)

P. Zoller, M. Marte, and D. F. Walls, Quantum jumps in atomic systems, PRA 35, 198 (1987).

This triptych can be read top to bottom or bottom to top. Those who know about the thermodynamic approach to open systems, and in particular a bit of large deviation theory (or, even better, about rare-event algorithms), that is, my own group members, should start from the first paper and refer to the other two only to complete the picture. All others will find in the third paper a great nontrivial three-level example and some speculations about “quantum jump trajectories”, and might just take a look at the first two.

Philip Pearle, Simple Derivation of the Lindblad Equation, Eur. J. Phys. 33, 805 (2012).

As per the title, this paper proposes a simple derivation of the Lindblad equation, of the sort based on a few general principles, like many derivations of quantum mechanics from reasonable axioms. I am at the same time compelled by and skeptical of this paper, so this JC presentation should really become a critical analysis.

M. B. Plenio and P. L. Knight, The quantum-jump approach to dissipative dynamics in quantum optics, Rev. Mod. Phys. 70, 101 (1998).

This is a long review dealing with the subject of “quantum jump trajectories”. It connects here and there with the Lindblad equation, but not in a rigorous way. Possibly Carmichael’s work (cited therein) gives further insights. I still have trouble making sense of all this jumping; it looks way too classical (the point is, jumping from where to where? As the diagonalizing basis of the density matrix evolves in time, the preferred “locations” to jump to change as well…).

R. Alicki, The Markov master equations and the Fermi golden rule, Int. J. Theor. Phys. 16, 351 (1977).

As per the title, this paper proposes a derivation of Fermi’s golden rule. The JC should explain in what sense this derivation is more rigorous than the usual derivations.

Breuer and Petruccione Sec. 3.5 (given 2.4.6 and other material)

Derivation of the Lindblad equation from an indirect measurement process.

Breuer & Petruccione 3.4.6

Damped harmonic oscillator (in some detail).


Furthermore, I have a couple of questions of my own I would like to inspect:

– For the “microscopically derived” Lindblad equation there is one preferred basis in which the dynamics separates into populations and coherences. This implies that, in that basis, if one starts from a purely statistical state (a diagonal density matrix), that state evolves by the Pauli master equation alone. The question is: for a generic Lindblad generator not following from a microscopic derivation, does there exist a preferred basis such that, if the initial state is prepared as a purely statistical state in that basis, it evolves by a classical master equation, that is, it does not develop coherences at later times?
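
A brute-force way to probe this question numerically (just a sketch; the Hamiltonian, jump operators and candidate basis below are random and purely illustrative): prepare a state that is diagonal in a candidate basis, apply the generator once, and check whether off-diagonal terms (coherences) appear at first order in time.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def rand_herm(d):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (A + A.conj().T) / 2

# A generic Lindblad generator: random Hamiltonian and two random jump operators.
H = rand_herm(d)
Ls = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(2)]

def lindbladian(rho):
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Diagonal (purely statistical) state in a random candidate basis U.
U = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))[0]
p = rng.random(d); p /= p.sum()
rho = U @ np.diag(p) @ U.conj().T

# Coherences generated at first order in dt, expressed back in the candidate basis.
drho = U.conj().T @ lindbladian(rho) @ U
off_diag = drho - np.diag(np.diag(drho))
print("size of generated coherences:", np.linalg.norm(off_diag))
# If a basis existed in which this vanished for every diagonal p,
# the dynamics would reduce to a classical (Pauli-type) master equation in that basis.
```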

– An analysis by Kossakowski states that, for any complete family of orthogonal projections {P_i}, i = 1,…,n, where n is the dimension of the system’s Hilbert space, the quantities w_ij = tr(P_i L(P_j)) are the rates of a classical Markov jump process. However, such a process might not be representative of the quantum evolution, as the classical evolution depends on the choice of the family of projections. The idea is to study the statistics of the resulting classical Markov jump processes, randomizing over the family. How to randomize? One possibility is the following: since P_i = U I_i U^-1 for some unitary matrix U, where I_i has only the i-th diagonal entry equal to 1 and all others vanishing, we could take U to be Haar-distributed, e.g. as the matrix of eigenvectors of a matrix drawn from the GUE (Gaussian Unitary Ensemble).
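
A sketch of that randomization in code, interpreting “taking the unitary from the GUE” as taking the eigenvector matrix of a GUE-like matrix (which is Haar-distributed); the Lindbladian itself is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3

def rand_herm(d):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (A + A.conj().T) / 2

# A fixed Lindblad generator to probe (random, for illustration only).
H = rand_herm(d)
Ls = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(2)]

def lindbladian(rho):
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

def random_projections(d):
    """Complete family P_i = U I_i U^-1, with U the eigenvector matrix of a random Hermitian matrix."""
    _, U = np.linalg.eigh(rand_herm(d))          # eigenvectors of a GUE-like matrix are Haar-distributed
    return [np.outer(U[:, i], U[:, i].conj()) for i in range(d)]

# Classical rates w_ij = tr(P_i L(P_j)) for one random family of projections.
P = random_projections(d)
w = np.real(np.array([[np.trace(P[i] @ lindbladian(P[j])) for j in range(d)]
                      for i in range(d)]))
print(w)   # off-diagonal entries should be >= 0, and columns should sum to ~0 (trace preservation)
# Repeating over many random families gives a statistics of classical Markov generators.
```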