
I had the great opportunity to run a workshop with the students of the Master in Scientific Communication at SISSA, the International School for Advanced Studies in Trieste. This gave me the chance to put together some thoughts about science, research, popularization, and reductionism that had been in search of a more coherent formulation. Here are a few pictures, taken by a dear friend and collaborator:

I started the workshop by asking the students (**Experiment 1**) to measure the sum of the internal angles of these two figures:

The results peaked at 180° for the first figure and 179° for the second:

From a technical point of view, the statistical analysis of “what is the sum of the internal angles” can be carried out by comparing these curves to the “real” expected ones, which one might assume to be Gaussian, *G(α)*, with an error given by the resolution of the measuring instrument, which was 1°.
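For concreteness, here is a minimal sketch (in Python, with made-up numbers: a hypothetical true sum of 181° and 1000 simulated readings) of what comparing a measured histogram to the expected Gaussian curve could look like. The model assumed here is that each reading is the true value plus Gaussian noise with σ = 1°, rounded to the instrument’s 1° resolution:

```python
import random
from collections import Counter
from math import erf, sqrt

random.seed(0)

true_sum = 181.0   # hypothetical "real" sum of the internal angles
sigma = 1.0        # measurement error, taken equal to the 1° resolution
N = 1000           # number of simulated measurements

# Each simulated reading: Gaussian noise around the true value,
# then rounded to the instrument's 1° resolution.
readings = [round(random.gauss(true_sum, sigma)) for _ in range(N)]
histogram = Counter(readings)

# Expected probability of reading the value a under the Gaussian model:
# integrate G(α) over the 1°-wide bin centred on a.
def bin_prob(a, mu=true_sum, s=sigma):
    cdf = lambda x: 0.5 * (1 + erf((x - mu) / (s * sqrt(2))))
    return cdf(a + 0.5) - cdf(a - 0.5)

for a in sorted(histogram):
    print(a, histogram[a] / N, round(bin_prob(a), 3))
```

With these assumptions the empirical frequencies should track the binned Gaussian probabilities, up to statistical fluctuations.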

Beware: the two figures are *not* triangles. I drew one with a slightly convex side and one with a slightly concave side, so that the sums of their internal angles were respectively 181° and 179°. Throughout the presentation I avoided talking about “triangles” and only referred to “figures”.

Now, the above experiment might be a good one for showing that people have confirmation biases that drive them to “push” for 180°. This is a social experiment (**Experiment 2**) on top of another experiment. However, this conclusion could only be supported by carrying out Experiment 1 on a much larger statistical sample, and by comparing the histograms to those obtained in experiments involving real triangles with 180°. In such a social experiment, one might even want to divide the population into two segments: those who get an incentive for obtaining the “right” answer (whatever it is) and those who do not.

Notice that a proper statistical analysis of this other question would have to involve some meta-analytic statistical tool. In fact, we would have to consider the probability of a deviation of the histograms from the “ideal” ones: if the measures are i.i.d. with normal distribution *G(α)* and standard deviation *σ = 1°*, then after performing *N* measures the probability of obtaining *M* measurements of an angle *α* off the “real” value is given by a binomial, *{N choose M} G(α)^M (1 − G(α))^(N−M)*, and it is *this* quantity we want to look at to check for possible biases.
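As a sketch of that check (Python, with entirely hypothetical counts: 30 students, of whom 20 read exactly 180° on the 181° figure), one can compute the binomial probability above, plus an upper-tail probability that tells us how surprising such an excess would be under the unbiased Gaussian model:

```python
from math import comb, erf, sqrt

def gauss_bin_prob(alpha, mu, sigma=1.0):
    """Probability that a single 1°-resolution reading equals alpha,
    under a Gaussian G centred on mu with standard deviation sigma."""
    cdf = lambda x: 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return cdf(alpha + 0.5) - cdf(alpha - 0.5)

def binomial_pmf(N, M, p):
    """{N choose M} p^M (1-p)^(N-M): exactly M 'hits' out of N readings."""
    return comb(N, M) * p**M * (1 - p)**(N - M)

def upper_tail(N, M, p):
    """P(at least M hits): a large excess over the expectation N*p
    hints at a bias toward that particular reading."""
    return sum(binomial_pmf(N, k, p) for k in range(M, N + 1))

# Hypothetical numbers: how surprising would 20 readings of exactly
# 180° be, out of 30 measurements of the 181° figure?
p180 = gauss_bin_prob(180, mu=181)   # ≈ 0.24 under the unbiased model
print(binomial_pmf(30, 20, p180))
print(upper_tail(30, 20, p180))
```

An upper tail this small would be hard to reconcile with the unbiased model, which is exactly the kind of evidence of “pushing” for 180° one would look for.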

I might try a similar experiment one day, but that was not the point here either… Maybe these students had biases, maybe not. But what did I actually mean when I told them that the internal angles of these figures summed to 179° and 181°? How did I determine *that*?

It turns out I used vector graphics software to draw those images, which allowed me to: 1) zoom arbitrarily close to the corners of the image, 2) make the line thickness arbitrarily thin, 3) measure the angle with a protractor tool at a resolution of hundredths of a degree. That’s quite a different procedure from theirs: they had a resolution of one degree, and the lines were thick, which contributes to the statistical error. But there’s also a source of systematic discrepancy, because the curvy side is about twice as long as their protractor. So we were actually measuring two different things: I was measuring the angle formed by the tangent at the corner (dashed lines), while they were measuring the angle subtended by a chord (dotted lines):

Assuming that the curvy line is an arc of a circle, and that the chord ends midway along the arc, their error with respect to the inscribed triangle is half mine (this can be proven by simple Euclidean geometry, e.g. by comparing congruent angles in the diagram below).
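The factor of two can also be checked numerically. A quick sketch (Python, on a unit circle, with an arbitrary 2° arc standing in for the curvy side) compares the tangent at the corner and the chord to the arc’s midpoint, both measured against the straight endpoint-to-endpoint chord:

```python
from math import cos, sin, atan2, pi, degrees

# Numeric check of the "half the error" claim: for a circular arc,
# the tangent at an endpoint deviates from the endpoint-to-endpoint
# chord by half the central angle, while the chord to the arc's
# midpoint deviates by only a quarter, i.e. half the tangent's error.
def deviations(theta):
    # Unit circle, arc running from angle 0 to angle theta.
    A = (1.0, 0.0)                        # arc endpoint (the "corner")
    B = (cos(theta), sin(theta))          # other endpoint
    M = (cos(theta / 2), sin(theta / 2))  # midpoint of the arc

    def direction(p, q):
        return atan2(q[1] - p[1], q[0] - p[0])

    chord = direction(A, B)   # reference: the straight side A-B
    tangent = pi / 2          # tangent at A = (1, 0) points along +y
    mid_chord = direction(A, M)
    return abs(degrees(tangent - chord)), abs(degrees(mid_chord - chord))

tang_dev, chord_dev = deviations(theta=2 * pi / 180)  # a 2° arc
print(tang_dev, chord_dev)  # the tangent deviation is twice the chord one
```

For a 2° arc this gives deviations of 1° (tangent) versus 0.5° (mid-arc chord), consistent with the Euclidean argument.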

Therefore, if I had to evaluate whether they were biased, I would have to consider that, for them, it was perfectly legitimate to claim 180.5° and 179.5°, which is well within the error bar!

So what was the whole point of this story? Not really to run Experiment 1 and quantitatively measure the internal angles of figures, nor to run Experiment 2 and quantitatively measure people’s biases. Rather, it was to argue in a completely heuristic and qualitative way that science is complex, and so should be any account of science (which is what these students are studying): for if, even for such a small thing, the story of what it means to measure and what it takes to be credible is such a mess, imagine what it will be like with such enormous themes as, say, GMOs and the like. And a major part in this process is played by *questioning*: the students were ready to admit they might have had confirmation biases in running Experiment 1, but it takes one step further and a lot of insight to question my authority and point out that no, it was me who was (deliberately) pouring my own confirmation biases into Experiment 2!

(Actually, we didn’t really have the time to give this experience the right pace, and I had to jump to the conclusions… So while this was an interesting first run of this workshop, where I learned a lot, I hope I will one day be able to offer a more refined version of it to a larger audience. On that occasion I would hope to initiate a conversation whereby at some point somebody contests authority by asking what it means for the measures of the figures, 179° and 181°, to be “true”…).

The rest of the workshop was a commentary on the following slides, where I also put in some excerpts from books I’ve been reading recently, starting from how scientific popularization works nowadays and how we do this job at Festivaletteratura, and ending with some more abstract and “political” considerations about the status of the University, on which I’ve been writing sporadically on this blog and to which I will definitely come back soon. I also ran, just for demonstrative purposes, some of the workshops on computation I proposed at Festivaletteratura and, more recently, in my son’s classroom.