Particle Physics Planet

September 21, 2018

Christian P. Robert - xi'an's og

Riddler collector

Once in a while a fairly standard problem makes it to the Riddler puzzle of the week. Today, it is the coupon collector problem, explained by W. Huber on X validated. (W. Huber happens to be the top contributor to this forum, with over 2000 answers and a reputation closing on 200,000!) With nothing (apparently) unusual: coupons [e.g., collecting cards] come in packs of k=10 with no duplicates, and there are n=100 different coupons. What is the expected number of packs one has to collect before getting all n coupons? W. Huber provides an R code to solve the recurrence on the expectation e(m,n,k), obtained by conditioning on the number m of different coupons already collected, and hence on the number still to collect, with a Hypergeometric distribution for the number of new coupons in the next pack. The code returns 25.23 packs on average. As is well known, the average number of packs needed to complete one’s collection with the final missing card is excessively large, with more than 5 packs necessary on average. The probability distribution of the required number of packs was actually computed by Laplace in 1774 (and then again by Euler in 1785).
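Huber’s R code is not reproduced in the post, but the recurrence is easy to transcribe. Here is a minimal Python sketch of the conditioning described above (the e(m,n,k) notation and the Hypergeometric step are from the post; the implementation details are my own):

```python
from math import comb

def expected_packs(n=100, k=10):
    """Expected number of k-card packs (no duplicates within a pack)
    needed to collect all n distinct coupons."""
    e = [0.0] * (n + 1)          # e[m]: expected extra packs given m coupons held
    for m in range(n - 1, -1, -1):
        total = comb(n, k)
        # number j of new coupons in the next pack ~ Hypergeometric(n, n - m, k)
        p = [comb(n - m, j) * comb(m, k - j) / total for j in range(k + 1)]
        # solve e[m] = 1 + sum_j p[j] e[m + j] for e[m] (the j = 0 term recurses)
        s = sum(p[j] * e[m + j] for j in range(1, k + 1) if m + j <= n)
        e[m] = (1 + s) / (1 - p[0])
    return e[0]

print(expected_packs(2, 1))   # classic coupon collector with n = 2: 2*(1 + 1/2) = 3.0
```

The recurrence is solved backwards from e(n) = 0; calling `expected_packs()` with the post’s n=100 and k=10 evaluates e(0,n,k) directly.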

The n-Category Cafe

Cartesian Double Categories

In general, there are two kinds of bicategories: those like $\mathrm{Cat}$ and those like $\mathrm{Span}$. In the $\mathrm{Cat}$-like ones, the morphisms are “categorified functions”, which generally means some kind of “functor” between some kind of “category”, consisting of functions mapping objects and arrows from domain to codomain. But in the $\mathrm{Span}$-like ones (which include $\mathrm{Mod}$ and $\mathrm{Prof}$), the morphisms are not “functors” but rather some kind of “generalized relations” (including spans, modules, profunctors, and so on) which do not map from domain to codomain but rather relate the domain and codomain in some way.

In $\mathrm{Span}Span$-like bicategories there is usually a subclass of the morphisms that do behave like categorified functions, and these play an important role. Usually the morphisms in this subclass all have right adjoints; sometimes they are exactly the morphisms with right adjoints; and often one can get away with talking about “morphisms with right adjoints” rather than making this subclass explicit. However, it’s also often conceptually and technically helpful to give the subclass as extra data, and arguably the most perspicuous way to do this is to work with a double category instead. This was the point of my first published paper, though others had certainly made the same point before, and I think more and more people are coming to recognize it.

Today a new installment in this story appeared on the arXiv: Cartesian Double Categories with an Emphasis on Characterizing Spans, by Evangelia Aleiferi. This is a project that I’ve wished for a while someone would do, so I’m excited that at last someone has!

We know now that various structure on a double category corresponds to similar structure on a bicategory. For instance, a monoidal structure on a (suitably well-behaved) double category induces a monoidal structure on its underlying bicategory. However, the monoidal double category is generally much stricter and easier to work with.

Aleiferi’s paper is about extending this to the cartesian monoidal case. A cartesian monoidal double category is easy to define: its diagonal $D\to D\times D$ and projection $D\to 1$ have right adjoints, just as for ordinary categories. It’s also easy to say what it means for a $\mathrm{Cat}$-like bicategory to be cartesian monoidal: we can say that its diagonal and projection have right adjoints too, although that’s more complicated because the adjoints are generally only pseudofunctors living in a tricategory.

But it’s not at all obvious what it means for a $\mathrm{Span}$-like bicategory to be “cartesian monoidal”. Intuitively, bicategories like $\mathrm{Span}$ itself, or more generally $\mathrm{Span}(E)$ for $E$ a category with finite limits, and $\mathrm{Prof}(V)$ when $V$ is cartesian monoidal, should be “cartesian” — but they are not cartesian monoidal in the $\mathrm{Cat}$-like way. The notion of cartesian bicategory was defined (by Carboni, Walters, Kelly, Verity, and Wood) to capture examples like these, but it is quite complicated. Moreover, to someone familiar with double categories, it is crying out to be reformulated in double-category language (e.g. it requires certain morphisms to have right adjoints, and induces $\mathrm{Cat}$-like cartesian structure on the sub-bicategory of morphisms with right adjoints). In fact, it blows my mind that anyone was able to define the notion of cartesian bicategory without secretly having double categories in their head!

Aleiferi has now made a more careful study of cartesian double categories, and shown that they can be used for at least some (which I suspect will eventually become “nearly all”) of the same purposes as cartesian bicategories. For instance, here is a theorem from Lack-Walters-Wood Bicategories of spans as cartesian bicategories:

Theorem: A bicategory is equivalent to $\mathrm{Span}(E)$, for some category $E$ with finite limits, if and only if it is cartesian, each comonad has an Eilenberg-Moore object, and every map is comonadic.

And here is a theorem from Aleiferi’s paper:

Theorem: A double category is equivalent to $\mathrm{Span}(E)$, for some category $E$ with finite limits, if and only if it is cartesian, fibrant, unit-pure, and has strong Eilenberg-Moore objects for copointed endomorphisms.

Even without understanding all the words, the family resemblance should be clear, though the technicalities differ. On a quick skim of Aleiferi’s paper it looks like there is no formal comparison yet between cartesian double categories and cartesian bicategories, but I’m sure that will come.

The n-Category Cafe

A Pattern That Eventually Fails

Sometimes you check just a few examples and decide something is always true. But sometimes even $1.5 \times 10^{43}$ examples is not enough.

You can show that

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$

and so on.

It’s a nice pattern. But it doesn’t go on forever! In fact, Greg Egan showed the identity

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

holds when

$n < 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889$

but fails for all

$n \ge 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889.$

It’s not as hard to understand as it might seem; it’s a special case of the infamous ‘Borwein integrals’. The key underlying facts are:

• The Fourier transform turns multiplication into convolution.

• The Fourier transform of $\sin(c x)/(c x)$ is a step function supported on the interval $[-c,c]$.

• The sum $\displaystyle{\sum_{k = 1}^n \frac{1}{100k + 1}}$ first exceeds $1$ when

$n = 15,341,178,777,673,149,429,167,740,440,969,249,338,310,889.$

For Greg’s more detailed explanation, based on that of Hanspeter Schmid, and for another famous example of a pattern that eventually fails, go here:

Peter Coles - In the Dark

Tonight is Culture Night!

Just time for a quick post to mention that tonight is Culture Night in Ireland, which means that over 1600 venues around the country are open this evening for free cultural events. Museums, art galleries and other public buildings and spaces will open later this evening to welcome the general public, and there are scores of free concerts going on all over the place. There’s a useful guide here. There are some events in Maynooth tonight, including one at Maynooth Castle.

I would have gone to tonight’s free concert at the National Concert Hall. Although it’s free, you have to book a ticket because capacity is limited, and unfortunately I was too late getting around to doing that, so I couldn’t get in. I’ll probably listen to it on the radio tonight instead.

I think Culture Night is a great idea, as it encourages people to sample cultural fare they might otherwise not get around to trying, and may boost the audiences for the rest of the year as a result. I wonder if anyone has ever thought of running a Culture Night in, say, Cardiff?

Christian P. Robert - xi'an's og

postdoctoral position on the Malaria Atlas Project, Oxford [advert]

The Malaria Atlas Project is opening a postdoctoral position in Oxford in geospatial modelling: the postholder will collaborate with other scientists to develop probabilistic maps of malaria risk at national and sub-national levels, to evaluate the efficacy of past intervention strategies, and to assist with the planning of future interventions. An understanding of spatiotemporal modelling and expertise in geostatistics, random-field models, or equivalent are essential. An understanding of the epidemiology of a vector-borne disease such as malaria is desirable but not essential. You must have a PhD or equivalent experience in mathematics, statistics, biostatistics, or a similar quantitative discipline.

You will contribute to and, as appropriate, lead in the preparation of scientific reports and journal articles for publication of research findings from this work in open access journals. Travel to collaborators in Europe, the United States, Africa, and Asia will be part of the role.

This full-time position is fixed-term until 31 December 2019 in the first instance. The closing date for this position will be 12.00 noon on Wednesday 17 October 2018.

Emily Lakdawalla - The Planetary Society Blog

The day I caught rocket fever
On February 6, 2018, I found myself shoulder to shoulder with two of my heroes: Bill Nye on the left, Buzz Aldrin on the right. Our eyes were fixed on the first vertical Falcon Heavy rocket. Figuring the world's most powerful rocket might send me flying backwards once the countdown hit zero, I gripped the railing so tightly I started to lose the feeling in my fingertips.

September 20, 2018

Christian P. Robert - xi'an's og

red sister [book review]

“It is important, when killing a nun, to ensure that you bring an army of sufficient size. For Sister Thorn of the Sweet Mercy convent Lano Tacsis brought two hundred men.”

If it were a film, this book would be something like Harry Potter meets A Clockwork Orange meets The Seven Samurai meets Fight Club! In the sense that it is set in a school (convent) for young girls with magical powers who are trained in exploiting these powers, that the central character has a streak of unbounded brutality at her core, and that the training is mostly towards gaining fighting abilities and assassin skills. And that most of the story sees fighting, whether at the training level, the competition level, or the ultimate killing level. As in the previous novels by Mark Lawrence, which I did not complete, the descriptions of fights and the deaths therein are quite graphic, detailed, and obviously gory. But I found myself completely captivated by the story and the universe Lawrence created [with some post-apocalyptic features common with his earlier books] and by the group of novices at the centre of the plot [even if some scenes were totally unrealistic within the harsh universe of Red Sister], despite the plot being sometimes very weak, or even incoherent.

“I’ve never deleted a page and rewritten it, some authors rewrite whole chapters or remove or add characters. That’s going to make it a lengthy process.”

As the author’s warning above makes clear, the style itself is not always great, with too-obvious infodumps and repetitions. And some unevenness in the characters, who suddenly switch from pre-teens in a boarding school to mature schemers to super-mature strategists, from one page to the next. And [weak spoiler!] the potential villain walks around with a flashing light on top of her, almost from the start! Still, this book, which I bought on my last day on Van Isle in the bookstore-dense town of Sidney (B.C.), kept me hooked for a bit more than a day, from airport waits to sleepless breaks in the plane and the night after at home. And had me ordering the next volume of the trilogy almost immediately! One reassuring point in Lawrence’s interview is that he wrote the entire trilogy before publishing the first volume, contrary to Robert Jordan, George Martin, or Patrick Rothfuss!, meaning that his readers do not need special time-accelerating powers to be certain of reaching the publication date of the next volume.

John Baez - Azimuth

Patterns That Eventually Fail

Sometimes patterns can lead you astray. For example, it’s known that

$\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }$

is a good approximation to $\pi(x),$ the number of primes less than or equal to $x.$ Numerical evidence suggests that $\mathrm{li}(x)$ is always greater than $\pi(x).$ For example,

$\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263$

and

$\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278$

But in 1914, Littlewood heroically showed that in fact, $\mathrm{li}(x) - \pi(x)$ changes sign infinitely many times!

This raised the question: when does $\pi(x)$ first exceed $\mathrm{li}(x)$? In 1933, Littlewood’s student Skewes showed, assuming the Riemann hypothesis, that it must do so for some $x$ less than or equal to

$\displaystyle{ 10^{10^{10^{34}}} }$

Later, in 1955, Skewes showed without the Riemann hypothesis that $\pi(x)$ must exceed $\mathrm{li}(x)$ for some $x$ smaller than

$\displaystyle{ 10^{10^{10^{964}}} }$

By now this bound has been improved enormously. We now know the two functions cross somewhere near $1.397 \times 10^{316},$ but we don’t know if this is the first crossing!

All this math is quite deep. Here is something less deep, but still fun.

You can show that

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$

and so on.

It’s a nice pattern. But this pattern doesn’t go on forever! It lasts a very, very long time… but not forever.

More precisely, the identity

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

holds when

$n < 9.8 \cdot 10^{42}$

but not for all $n.$ At some point it stops working and never works again. In fact, it definitely fails for all

$n > 7.4 \cdot 10^{43}$

The explanation

The integrals here are a variant of the Borwein integrals:

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }$

where the pattern continues until

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }$

but then fails:

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }$

I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms:

Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1.

Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on.

There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point.

The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched.

In the animation below, the plateau where F(x)=1/2 is marked in red.

The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value.

In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged.

The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2.

In the animation, the yellow arrows mark the extent of the symmetrical slopes.

OK, none of this is difficult to understand, but why should we care?

Because this is how Hanspeter Schmid explained the infamous Borwein integrals:

∫sin(t)/t dt = π/2
∫sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But then the pattern is broken:

∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

Here these integrals are from t=0 to t=∞. And Schmid came up with an even more persistent pattern of his own:

∫2 cos(t) sin(t)/t dt = π/2
∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But:

∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

The first set of integrals, due to Borwein, correspond to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function:

sinc(w t) = sin(w t)/(w t)

is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average.

Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2.

• Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17.
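The two break points in Egan’s account — the half-width sums 1/3 + 1/5 + ⋯ first exceeding 1 at 1/15, and first exceeding 2 at 1/113 — can be checked with exact rational arithmetic. A short sketch (my own, not from Schmid’s paper):

```python
from fractions import Fraction

def first_excess(threshold):
    """Smallest odd denominator d such that 1/3 + 1/5 + ... + 1/d > threshold,
    computed exactly with rationals to avoid floating-point doubt."""
    s, d = Fraction(0), 1
    while s <= threshold:
        d += 2
        s += Fraction(1, d)
    return d

print(first_excess(1))   # 15  -> the Borwein pattern breaks at sin(x/15)
print(first_excess(2))   # 113 -> Schmid's pattern breaks at sin(t/113)
```

Using `Fraction` matters here only as a safeguard: near the threshold the sums are within 10⁻² of the cutoff, and exact arithmetic removes any rounding worry.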

I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers

$1, a_1, a_2, a_3 \dots$

The integral

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(a_1 x)}{a_1 x} \, \frac{\sin(a_2 x)}{a_2 x} \cdots \frac{\sin(a_n x)}{a_n x} \, dx }$

will then equal $\pi/2$ as long as $a_1 + \cdots + a_n \le 1,$ but not when it exceeds 1. You can see a full explanation on Wikipedia:

• Wikipedia, Borwein integral: general formula.

As an example, I chose the integral

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt }$

which equals $\pi/2$ if and only if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1 }$

Thus, the identity holds if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1 }$

but

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }$

so the identity holds if

$\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }$

or

$\ln n \le 99$

or

$n \le e^{99} \approx 9.8 \cdot 10^{42}$

On the other hand, the identity fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1 }$

so it fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1 }$

but

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }$

so the identity fails if

$\displaystyle{ \frac{1}{101} \ln n > 1 }$

or

$\displaystyle{ \ln n > 101}$

or

$\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }$

With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of $n$ at which

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

first fails.
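For the curious, that crossover can be located quite precisely without summing $10^{43}$ terms. Since $\sum_{k=1}^n \frac{1}{100k+1} = \frac{1}{100}\left(\psi(n + 1.01) - \psi(1.01)\right)$ for the digamma function $\psi$, and $\psi(x) \approx \ln x$ for large $x$, the sum hits 1 near $n = e^{100 + \psi(1.01)}$. A rough stdlib-only sketch (my own estimate, using a standard asymptotic expansion):

```python
import math

def digamma(x):
    """Crude digamma: shift to a large argument with psi(x) = psi(x+1) - 1/x,
    then apply the asymptotic series psi(x) ~ ln x - 1/(2x) - 1/(12 x^2)."""
    s = 0.0
    while x < 20:
        s -= 1.0 / x
        x += 1.0
    return s + math.log(x) - 1.0 / (2.0 * x) - 1.0 / (12.0 * x * x)

# sum_{k=1}^n 1/(100k+1) = (psi(n + 1.01) - psi(1.01)) / 100 first reaches 1
# near n = exp(100 + psi(1.01)), since psi(x) ~ ln x for large x.
log_n = 100.0 + digamma(1.01)
print(f"crossover near exp({log_n:.3f}) ~ {math.exp(log_n):.3e}")  # ~ 1.534e+43
```

The estimate lands on $n \approx 1.534 \times 10^{43}$, in line with the exact 44-digit threshold quoted in the n-Category Café version of this story above.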

Peter Coles - In the Dark

Seven Years From Swindon

I took the above snap this morning walking back to the Science Building. It shows the view from the other side of St Joseph’s Square compared to the picture I posted on Tuesday, i.e. towards St Patrick’s House rather than away from it. The weather has taken a turn for the worse since Tuesday, and it’s decidedly autumnal today but it’s still not a bad view to be greeted with on the way to the office.

Contrast this with a photograph I took precisely seven years ago today, on September 20th 2011, when I had just arrived in Swindon for a stint on the STFC Astronomy Grants Panel:

I’m no longer part of the UK research system so I guess I’ll never have to visit Swindon again…

Jon Butterworth - Life and Physics

What is the universe really made of?
The paperback edition of A Map of the Invisible is out now, and to help promote it we made a few videos on some of the themes in the book. Here’s the first one:

September 19, 2018

Christian P. Robert - xi'an's og

peer reviews on-line or peer community?

Nature (or more precisely some researchers through Nature, associated with the UK Wellcome Trust, the US Howard Hughes Medical Institute (HHMI), and ASAPbio) has (have) launched a call for publishing reviews next to accepted papers, one way or another, which is something I (and many others) have supported for quite a while. Including for rejected papers, not only because making these reviews public in principle diminishes the time involved in re-reviewing re-submitted papers, but also because it should induce authors to revise papers with obvious flaws and missing references (?). Or abstain from re-submitting. Or publish a rejoinder addressing the criticisms. Anything that increases the communication between all parties, as well as the perspectives on a given paper. (This year, NIPS allows for the posting of reviews of rejected submissions, which I find a positive trend!)

In connection with this entry, I am still most sorry that I could not pursue the [superior, in my opinion] project of a Peer Community in computational statistics, as the time required by Biometrika editing is just too great [given my current stamina!] for me to handle another journal (or the better alternative to a journal!). I hope someone else can take over the project and create the editorial team needed to run it.

CERN Bulletin

GAC-EPA

The GAC organises drop-in sessions with individual interviews, held on the last Tuesday of each month, except in July and December.

The next session will take place on:

Tuesday 25 September, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesday 30 October and Tuesday 27 November 2018.

The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We strongly encourage the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

CERN Bulletin

Offer for our members

Our partner FNAC is offering all our members a 10% discount on all iMacs and MacBooks.

This offer is valid between 12 and 30 September 2018 upon presentation of your Staff Association membership card.

CERN Bulletin

Exhibition

Le Chronoscope
Images of Time, Fragments of Time

Thomas Desbrières

From 24 September to 5 October
CERN Meyrin, Main Building

Passionate about science and art, the artist Thomas Desbrières creates fractal artworks (digital art: images generated from mathematical formulas computed by computer).

The Chronoscope is an imaginary scientific instrument whose purpose is to capture images of Time. It asks the questions: What might Time look like? And how could we observe it?

The fractal pictures are like the images resulting from this experiment. They are original visions of Time obtained through the Chronoscope. They show complex mechanisms, cycles multiplied to infinity. Their appearance evokes strange clocks, whose golden hues also recall the brass of precision instruments.
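As a tiny illustration of what “images generated from mathematical formulas computed by computer” means in practice, here is a toy sketch that renders the Mandelbrot set in ASCII (my own example, unrelated to Desbrières’ technique):

```python
# Colour each point c of the plane by how quickly the iteration
# z -> z*z + c escapes to infinity; bounded points form the fractal.
def escape_time(c, max_iter=30):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

for y in range(12):
    row = ""
    for x in range(40):
        c = complex(-2.2 + 3.2 * x / 40, -1.2 + 2.4 * y / 12)
        row += " .:-=+*#%@"[min(escape_time(c) // 3, 9)]
    print(row)
```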

http://www.senarius.fr

For more information and access requests: staff.association@cern.ch  |  +41 22 767 28 19

September 16, 2018

ZapperZ - Physics and Physicists

Want To Locate The Accelerometer In Your Smartphone?
Rhett Allain has a simple, fun rotational physics experiment that you can perform with your smartphone to locate the position of the accelerometer in the device, all without opening it.

Your smart phone has a bunch of sensors in it. One of the most common is the accelerometer. It's basically a super tiny mass connected with springs (not actual springs). When the phone accelerates in a particular direction, some of these springs will get compressed in order to make the tiny test mass also accelerate. The accelerometer measures this spring compression and uses that to determine the acceleration of the phone. With that, it will know if it is facing up or down. It also can estimate how far you move and use this along with the camera to find out where real world objects are, using ARKit.

So, we know there is a sensor in the phone—but where is it located? I'm not going to take apart my phone; everyone knows I'll never get it back together after that. Instead, I will find out the location by moving the phone in a circular path. Yes, moving in a circle is a type of acceleration.
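The relation behind the trick is just the centripetal-acceleration formula: at angular speed omega, a sensor at distance r from the rotation axis reads a = omega^2 * r, so r = a / omega^2. A sketch with made-up numbers (my own illustration, not Allain's data):

```python
import math

# Spin the phone about a fixed axis at a known rate; the accelerometer,
# sitting a distance r from the axis, reads the centripetal acceleration
# a = omega**2 * r, so r = a / omega**2. The numbers below are invented
# purely for illustration.
omega = 2 * math.pi * 0.75      # spin rate: 0.75 revolutions/s, in rad/s
a_measured = 2.9                # hypothetical accelerometer reading, m/s^2
r = a_measured / omega**2
print(f"sensor sits about {100 * r:.1f} cm from the rotation axis")
```

In practice one would repeat this about two different axes and intersect the answers to pin down the sensor's position in the phone's plane.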

I'll let you read the article to know what he did, and what you can do yourself.

Now, the only thing left is to verify the result. Someone needs to open an iPhone 7 and confirm the location of the accelerometer (do we even know what it looks like in such a device?). Any volunteers? :)

Zz.

John Baez - Azimuth

The 5/8 Theorem

This is a well-known, easy group theory result that I just learned. I would like to explain it more slowly and gently, and I hope more memorably, than I’ve seen it done.

It’s called the 5/8 theorem. Randomly choose two elements of a finite group. What’s the probability that they commute? If it exceeds 62.5%, the group must be abelian!

This was probably known for a long time, but the first known proof appears in a paper by Erdős and Turán.

It’s fun to lead up to this proof by looking for groups that are “as commutative as possible without being abelian”. This phrase could mean different things. One interpretation is that we’re trying to maximize the probability that two randomly chosen elements commute. But there are two simpler interpretations, which will actually help us prove the 5/8 theorem.

How big can the center be?

How big can the center of a finite group be, compared to the whole group? If a group $G$ is abelian, its center, say $Z,$ is all of $G.$ But let’s assume $G$ is not abelian. How big can $|Z|/|G|$ be?

Since the center is a subgroup of $G,$ we know by Lagrange’s theorem that $|G|/|Z|$ is an integer. To make $|Z|/|G|$ big we need this integer to be small. How small can it be?

It can’t be 1, since then $|Z| = |G|$ and $G$ would be abelian. Can it be 2?

No! This would force $G$ to be abelian, leading to a contradiction! The reason is that the center is always a normal subgroup of $G$, so $G/Z$ is a group of size $|G/Z| = |G|/|Z|$. If this is 2 then $G/Z$ has to be $\mathbb{Z}/2.$ But this is generated by one element, so $G$ must be generated by its center together with one element. This one element commutes with everything in the center, obviously… but that means $G$ is abelian: a contradiction!

For the same reason, $|G|/|Z|$ can’t be 3. The only group with 3 elements is $\mathbb{Z}/3,$ which is generated by one element. So the same argument leads to a contradiction: $G$ is generated by its center and one element, which commutes with everything in the center, so $G$ is abelian.

So let’s try $|G|/|Z| = 4.$ There are two groups with 4 elements: $\mathbb{Z}/4$ and $\mathbb{Z}/2 \times \mathbb{Z}/2.$ The second, called the Klein four-group, is not generated by one element. It’s generated by two elements! So it offers some hope.

If you haven’t studied much group theory, you could be pessimistic. After all, $\mathbb{Z}/2 \times \mathbb{Z}/2$ is still abelian! So you might think this: “If $G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2,$ the group $G$ is generated by its center and two elements which commute with each other, so it’s abelian.”

But that’s false: even if two elements of $G/Z$ commute with each other, this does not imply that the elements of $G$ mapping to these elements commute.

This is a fun subject to study, but the best way for us to see this right now is to actually find a nonabelian group $G$ with $G/Z \cong \mathbb{Z}/2 \times \mathbb{Z}/2$. The smallest possible example would have $Z \cong \mathbb{Z}/2,$ and hence 8 elements, and indeed this works!

Namely, we’ll take $G$ to be the 8-element quaternion group

$Q = \{ \pm 1, \pm i, \pm j, \pm k \}$

where

$i^2 = j^2 = k^2 = -1$
$i j = k, \quad j k = i, \quad k i = j$
$j i = -k, \quad k j = -i, \quad i k = -j$

and multiplication by $-1$ works just as you’d expect, e.g.

$(-1)^2 = 1$

You can think of these 8 guys as the unit quaternions lying on the 4 coordinate axes. They’re the vertices of a 4-dimensional analogue of the octahedron. Here’s a picture by David A. Richter, where the 8 vertices are projected down from 4 dimensions to the vertices of a cube:

The center of $Q$ is $Z = \{ \pm 1 \},$ and the quotient $Q/Z$ is the Klein four-group, since if we mod out by $\pm 1$ we get the group

$\{1, i, j, k\}$

with

$i^2 = j^2 = k^2 = 1$
$i j = k, \quad j k = i, \quad k i = j$
$j i = k, \quad k j = i, \quad i k = j$

So, we’ve found a nonabelian finite group with 1/4 of its elements lying in the center, and this is the maximum possible fraction!
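We can verify this by brute force. Below is a minimal model of $Q$ I wrote for illustration (not anyone's library): each element is a pair (sign, unit), multiplied using the table above, and the center turns out to have 2 of the 8 elements:

```python
# The quaternion group Q, modeled as pairs (sign, unit) with the
# multiplication table from the post: i*j = k, j*i = -k, etc.
UNIT_MUL = {
    ('1','1'): (1,'1'), ('1','i'): (1,'i'), ('1','j'): (1,'j'), ('1','k'): (1,'k'),
    ('i','1'): (1,'i'), ('i','i'): (-1,'1'), ('i','j'): (1,'k'), ('i','k'): (-1,'j'),
    ('j','1'): (1,'j'), ('j','i'): (-1,'k'), ('j','j'): (-1,'1'), ('j','k'): (1,'i'),
    ('k','1'): (1,'k'), ('k','i'): (1,'j'), ('k','j'): (-1,'i'), ('k','k'): (-1,'1'),
}

def mul(a, b):
    sign, unit = UNIT_MUL[(a[1], b[1])]
    return (a[0] * b[0] * sign, unit)

Q = [(s, u) for s in (1, -1) for u in '1ijk']
center = [g for g in Q if all(mul(g, h) == mul(h, g) for h in Q)]
print(len(center), len(Q))   # 2 8, so |Z|/|Q| = 1/4
```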

How big can the centralizer be?

Here’s another way to ask how commutative a finite group $G$ can be, without being abelian. Any element $g \in G$ has a centralizer $C(g),$ consisting of all elements that commute with $g.$

How big can $C(g)$ be? If $g$ is in the center of $G,$ then $C(g)$ is all of $G.$ So let’s assume $g$ is not in the center, and ask how big the fraction $|C(g)|/|G|$ can be.

In other words: how large can the fraction of elements of $G$ that commute with $g$ be, without it being everything?

It’s easy to check that the centralizer $C(g)$ is a subgroup of $G.$ So, again using Lagrange’s theorem, we know $|G|/|C(g)|$ is an integer. To make the fraction $|C(g)|/|G|$ big, we want this integer to be small. If it’s 1, everything commutes with $g.$ So the first real option is 2.

Can we find an element of a finite group that commutes with exactly 1/2 the elements of that group?

Yes! One example is our friend the quaternion group $Q.$ Each element outside the center commutes with exactly half the elements. For example, $i$ commutes only with its own powers: $1, i, -1, -i.$

So we’ve found a finite group with a non-central element that commutes with 1/2 the elements in the group, and this is the maximum possible fraction!
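This too is easy to check by direct computation. Here is a sketch of mine using the standard $2 \times 2$ complex matrix representation of the quaternions, confirming that the centralizer of $i$ contains 4 of the 8 elements:

```python
# Q as 2x2 complex matrices: a + bi + cj + dk -> [[a+bi, c+di], [-c+di, a-bi]].
def mmul(A, B):
    return tuple(tuple(sum(A[r][t] * B[t][c] for t in range(2)) for c in range(2))
                 for r in range(2))

def neg(A):
    return tuple(tuple(-x for x in row) for row in A)

one = ((1, 0), (0, 1))
i   = ((1j, 0), (0, -1j))
j   = ((0, 1), (-1, 0))
k   = ((0, 1j), (1j, 0))

Q = [one, i, j, k] + [neg(g) for g in (one, i, j, k)]
centralizer_i = [g for g in Q if mmul(g, i) == mmul(i, g)]
print(len(centralizer_i), len(Q))   # 4 8: i commutes with exactly half of Q
```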

What’s the maximum probability for two elements to commute?

Now let’s tackle the original question. Suppose $G$ is a nonabelian group. How can we maximize the probability for two randomly chosen elements of $G$ to commute?

Say we randomly pick two elements $g,h \in G.$ Then there are two cases. If $g$ is in the center of $G$ it commutes with $h$ with probability 1. But if $g$ is not in the center, we’ve just seen it commutes with $h$ with probability at most 1/2.

So, to get an upper bound on the probability that our pair of elements commutes, we should make the center $Z \subset G$ as large as possible. We’ve seen that $|Z|/|G|$ is at most 1/4. So let’s use that.

Then with probability 1/4, $g$ commutes with all the elements of $G,$ while with probability 3/4 it commutes with 1/2 the elements of $G.$

So, the probability that $g$ commutes with $h$ is

$\frac{1}{4} \cdot 1 + \frac{3}{4} \cdot \frac{1}{2} = \frac{2}{8} + \frac{3}{8} = \frac{5}{8}$

Even better, all these bounds are attained by the quaternion group $Q.$ 1/4 of its elements are in the center, while every element not in the center commutes with 1/2 of the elements! So, the probability that two elements in this group commute is 5/8.

So we’ve proved the 5/8 theorem and shown we can’t improve this constant.
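As a sanity check, the 5/8 probability for the quaternion group can be confirmed by counting all commuting pairs directly. The sketch below (my own illustration) represents each element of $Q$ as a 4-tuple $(a,b,c,d) = a + bi + cj + dk$ and uses the quaternion multiplication formula:

```python
from itertools import product

# Quaternion multiplication on 4-tuples (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The 8 unit quaternions on the coordinate axes: +-1, +-i, +-j, +-k.
axes = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
Q = [tuple(s * e for e in unit) for s in (1, -1) for unit in axes]

commuting = sum(qmul(g, h) == qmul(h, g) for g, h in product(Q, Q))
print(commuting, len(Q)**2)   # 40 64, and 40/64 = 5/8
```

The 40 pairs break down exactly as in the argument above: the 2 central elements commute with all 8 elements (16 pairs), and the 6 non-central elements each commute with 4 (24 pairs).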

Further thoughts

I find it very pleasant that the quaternion group is “as commutative as possible without being abelian” in three different ways. But I shouldn’t overstate its importance!

I don’t know the proof, but the website groupprops says the following are equivalent for a finite group $G$:

• The probability that two elements commute is 5/8.

• The inner automorphism group of $G$ has 4 elements.

• The inner automorphism group of $G$ is $\mathbb{Z}/2 \times \mathbb{Z}/2.$

Examining the argument I gave, it seems the probability 5/8 can only be attained if

$|Z|/|G| = 1/4$

$|C(g)|/|G| = 1/2$ for every $g \notin Z.$

So apparently any finite group with inner automorphism group $\mathbb{Z}/2 \times \mathbb{Z}/2$ must have these other two properties as well!

There are lots of groups with inner automorphism group $\mathbb{Z}/2 \times \mathbb{Z}/2.$ Besides the quaternion group, there’s one other 8-element group with this property: the group of rotations and reflections of the square, also known as the dihedral group of order 8. And there are six 16-element groups with this property: they’re called the groups of Hall–Senior class two. And I expect that as we go to higher powers of two, there will be vast numbers of groups with this property.

You see, the number of nonisomorphic groups of order $2^n$ grows alarmingly fast. There’s 1 group of order 2, 2 of order 4, 5 of order 8, 14 of order 16, 51 of order 32, 267 of order 64… but 49,487,365,422 of order 1024. Indeed, it seems ‘almost all’ finite groups have order a power of two, in a certain asymptotic sense. For example, 99% of the roughly 50 billion groups of order ≤ 2000 have order 1024.

Thus, if people trying to classify groups are like taxonomists, groups of order a power of 2 are like insects.

In 1964, the amusingly named pair of authors Marshall Hall Jr. and James K. Senior classified all groups of order $2^n$ for $n \le 6.$ They developed some powerful general ideas in the process, like isoclinism. I won’t explain it here, but it involves the quotient $G/Z$ that I’ve been talking about. So, though I don’t understand much about this, I’m not completely surprised to read that any group of order $2^n$ has commuting probability 5/8 iff it has ‘Hall–Senior class two’.

There’s much more to say. For example, we can define the probability that two elements commute not just for finite groups but also compact topological groups, since these come with a god-given probability measure, called Haar measure. And here again, if the group is nonabelian, the maximum possible probability for two elements to commute is 5/8!

There are also many other generalizations. For example Guralnick and Wilson proved:

• If the probability that two randomly chosen elements of $G$ generate a solvable group is greater than 11/30 then $G$ itself is solvable.

• If the probability that two randomly chosen elements of $G$ generate a nilpotent group is greater than 1/2 then $G$ is nilpotent.

• If the probability that two randomly chosen elements of $G$ generate a group of odd order is greater than 11/30 then $G$ itself has odd order.

The constants are optimal in each case.

I’ll just finish with two questions I don’t know the answer to:

• For exactly what set of numbers $p \in (0,1]$ can we find a finite group where the probability that two randomly chosen elements commute is $p?$ If we call this set $S$ we’ve seen

$S \subseteq (0,5/8] \cup \{1\}$

But does $S$ contain every rational number in the interval (0,5/8], or just some? Just some, in fact—but which ones? It should be possible to make some progress on this by examining my proof of the 5/8 theorem, but I haven’t tried at all. I leave it to you!

• For what properties P of a finite group is there a theorem of this form: “if the probability of two randomly chosen elements generating a subgroup of $G$ with property P exceeds some value $p,$ then $G$ must itself have property P”? Is there some logical form a property can have, that will guarantee the existence of a result like this?

References

Here is a nice discussion, where I learned some of the facts I mentioned, including the proof I gave:

• MathOverflow, 5/8 bound in group theory.

Here is an elementary reference, free online if you jump through some hoops, which includes the proof for compact topological groups, and other bits of wisdom:

• W. H. Gustafson, What is the probability that two group elements commute?, American Mathematical Monthly 80 (1973), 1031–1034.

For example, if $G$ is finite simple and nonabelian, the probability that two elements commute is at most 1/12, a bound attained by $\mathrm{A}_5.$

Here’s another elementary article:

• Desmond MacHale, How commutative can a non-commutative group be?, The Mathematical Gazette 58 (1974), 199–202.

If you get completely stuck on Puzzle 1, you can look here for some hints on what values the probability that two elements commute can take… but not a complete solution!

The 5/8 theorem seems to have first appeared here:

• P. Erdős and P. Turán, On some problems of a statistical group-theory, IV, Acta Math. Acad. Sci. Hung. 19 (1968), 413–435.

September 15, 2018

Jon Butterworth - Life and Physics

Rising up to the challenge: My Brexit plan
Even a stopped clock gives the right time twice a day. And the brexit ultras and associated careerists are correct that the so-called “Chequers” proposal is indeed “worse than status quo”. Damning indeed if you take “Marguerita Time” into consideration. … Continue reading

September 14, 2018

ZapperZ - Physics and Physicists

Bismuthates Superconductors Appear To Be Conventional
A lot of people overlooked the fact that during the early days of the discovery of high-Tc superconductors, there was another "family" of superconductors beyond just the cuprates (i.e. those compounds having copper-oxide layers). These compounds are called bismuthates, where instead of having copper-oxide layers, they have bismuth-oxide layers. Otherwise, their crystal structures are similar to the cuprates.

They didn't make much of a noise at that time because Tc for this family of materials tends to be lower than in the cuprates. And, even back then, there was already evidence that the bismuthate superconductors might be "boring", i.e. the results they produced looked like those of a conventional superconductor. This is supported by several experiments, including a tunneling experiment[1] that showed that the phonon density of states obtained from tunneling data matches the density of states obtained from neutron scattering.

Now it seems that there is more evidence that the bismuthates are conventional BCS superconductors, and it comes from an ARPES experiment[2]. There had been no ARPES measurements done on bismuthates before this because it had been a serious challenge to grow single crystals of this compound large enough to perform such an experiment. But obviously, large-enough single crystals have now been synthesized.

In this latest experiment, they look at the band structure of this compound and extract, among other things, a strong electron-phonon coupling that matches the superconducting gap. This strongly indicates that phonons are the "glue" in the superconducting mechanism of this compound.

So this adds another piece to the puzzle of the origin of superconductivity in the cuprates. Certainly, having a similar layered crystal structure does not rule out being a conventional superconductor. Yet the cuprates behave very differently in tunneling and ARPES experiments, and they certainly have higher Tc's.

The mystery continues.

Zz.

[1] Q. Huang et al. Nature v347, p369 (1990).
[2] CHP. Wen et al. PRL  121, 117002 (2018). https://arxiv.org/abs/1802.10507

September 13, 2018

ZapperZ - Physics and Physicists

Human Eye Can Detect Cosmic Radiation
Well, not in the way you think.

I recently found this video of an appearance by astronaut Scott Kelly on The Late Show with Stephen Colbert. During this segment, he talked about the fact that when he went to sleep on the Space Station and closed his eyes, he occasionally detected flashes of light. He attributed it to cosmic radiation passing through his body, and his eyes in particular.

Check out the video at minute 3:30

My first inclination is to say that this is similar to how we detect neutrinos, i.e. the radiation particles interact with the medium in his eyes, either the vitreous or the medium that makes up the lens, and this interaction causes the ejection of a relativistic electron and, subsequently, Cerenkov radiation. The Cerenkov radiation is then detected by the eye.

Of course, there are other possibilities, such as the cosmic particle exciting an atom or molecule in a collision, which then emits light. But Scott Kelly mentioned that these flashes appeared like fireworks. So my guess here is that it is more of a very short cascade of events, and probably the Cerenkov light scenario.

This, BTW, is almost how we detect neutrinos, especially at Super-Kamiokande and the other neutrino detectors around the world. Neutrinos come into the detector, and those that interact with the medium inside it (water, for example) cause the emission of relativistic electrons that move faster than light does inside that medium. This creates the Cerenkov radiation, and typically the light is bluish white. It's the same glow you see if you look into a pool of fuel rods in a nuclear reactor.
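A back-of-envelope number makes the scenario plausible. Cerenkov light is emitted once a charged particle moves faster than c/n in a medium; taking n ≈ 1.34 for the eye's vitreous (a value I'm assuming, close to water's) gives a threshold kinetic energy of roughly a quarter MeV for an electron, which cosmic-ray secondaries easily exceed:

```python
import math

ELECTRON_REST_MEV = 0.511   # electron rest energy in MeV
n = 1.34                    # assumed refractive index of the vitreous humour

beta = 1.0 / n                              # threshold speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)      # Lorentz factor at that speed
ke_threshold = (gamma - 1.0) * ELECTRON_REST_MEV

print(f"Cerenkov threshold for an electron: ~{ke_threshold:.2f} MeV")
```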

So there! You can detect something with your eyes closed!

Zz.

September 12, 2018

John Baez - Azimuth

Noether’s Theorem

I’ve been spending the last month at the Centre of Quantum Technologies, getting lots of work done. This Friday I’m giving a talk, and you can see the slides now:

• John Baez, Getting to the bottom of Noether’s theorem.

Abstract. In her 1918 paper, Noether formulated her theorem relating symmetries and conserved quantities in terms of Lagrangian mechanics. But if we want to make the essence of this relation seem as self-evident as possible, we can turn to a formulation in terms of Poisson brackets, which generalizes easily to quantum mechanics using commutators. This approach also gives a version of Noether’s theorem for Markov processes. The key question then becomes: when, and why, do observables generate one-parameter groups of transformations? This question sheds light on why complex numbers show up in quantum mechanics.
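The Poisson-bracket formulation mentioned in the abstract is easy to illustrate numerically: an observable $Q$ is conserved exactly when $\{H, Q\} = 0.$ The sketch below (my own example, with an arbitrarily chosen central potential $V(r) = -1/r$) checks by finite differences that the angular momentum $L = x p_y - y p_x$ Poisson-commutes with $H$ while $p_x$ alone does not:

```python
import math

def H(x, y, px, py):
    # Particle in a central potential; V(r) = -1/r is an arbitrary choice.
    return 0.5 * (px**2 + py**2) - 1.0 / math.hypot(x, y)

def Lz(x, y, px, py):
    return x * py - y * px

def poisson(f, g, pt, h=1e-5):
    # {f, g} = f_x g_px - f_px g_x + f_y g_py - f_py g_y, with all partial
    # derivatives taken by central finite differences at the phase-space point pt.
    def d(fn, i):
        plus, minus = list(pt), list(pt)
        plus[i] += h
        minus[i] -= h
        return (fn(*plus) - fn(*minus)) / (2 * h)
    return d(f, 0)*d(g, 2) - d(f, 2)*d(g, 0) + d(f, 1)*d(g, 3) - d(f, 3)*d(g, 1)

pt = (0.7, -0.3, 0.2, 0.5)
print(abs(poisson(H, Lz, pt)))                        # ~0: L is conserved
print(abs(poisson(H, lambda x, y, px, py: px, pt)))   # nonzero: px is not
```

The vanishing bracket reflects the rotational symmetry of the central potential, which is exactly the symmetry–conservation pairing of Noether's first theorem.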

At 5:30 on Saturday October 6th I’ll talk about this stuff at this workshop in London:

• The Philosophy and Physics of Noether’s Theorems, 5–6 October 2018, Fischer Hall, 1–4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

This workshop celebrates the 100th anniversary of Noether’s famous paper connecting symmetries to conserved quantities. Her paper actually contains two big theorems. My talk is only about the more famous one, Noether’s first theorem, and I’ll change my talk title to make that clear when I go to London, to avoid getting flak from experts. Her second theorem explains why it’s hard to define energy in general relativity! This is one reason Einstein admired Noether so much.

I’ll also give this talk at DAMTP—the Department of Applied Mathematics and Theoretical Physics, in Cambridge—on Thursday October 4th at 1 pm.

The organizers of the London workshop on the philosophy and physics of Noether’s theorems have asked me to write a paper, so my talk can be seen as a first step toward that. My talk doesn’t contain any hard theorems, but the main point—that the complex numbers arise naturally from wanting a correspondence between observables and symmetry generators—can be expressed in some theorems, which I hope to explain in my paper.

September 10, 2018

Lubos Motl - string vacua and pheno

Why string theory is quantum mechanics on steroids
In many previous texts, most recently in the essay posted two blog posts ago, I expressed the idea that string theory may be interpreted as the wisdom of quantum mechanics that is taken really seriously – and that is applied to everything, including the most basic aspects of the spacetime, matter, and information.

People like me are impressed by the power of string theory because it really builds on quantum mechanics in a critical way to deduce things that would have been impossible before. On the contrary, morons typically dislike string theory because their mezzoscopic peabrains are already stretched to the limit when they think about quantum mechanics – while string theory requires the stretching to go beyond these limits. Peabrains unavoidably crack and morons, writing things that are not even wrong about their trouble with physics, end up lost in math.

Other physicists have also made the statement – usually in less colorful ways – that string theory is quantum mechanics on steroids. It may be a good idea to explain what all of us mean – why string theory depends on quantum mechanics so much and why the power of quantum mechanics is given the opportunity to achieve some new amazing things within string theory.

At the beginning, I must say that the non-experts (including many pompous fools who call themselves "experts") usually overlook the whole "beef" of string theory just like they overlook the "beef" of quantum mechanics.

They imagine that quantum mechanics "is" a new equation, Schrödinger's equation, that plays the same role as Newton's, Maxwell's, Einstein's, and other equations. But quantum mechanics is much more – and much more universal and revolutionary – than another addition to classical physics. The actual heart of quantum mechanics is that the objects in its equations are connected to the observations very differently than the classical counterparts have been.

In the same way, they imagine that string theory is a theory of a new random dynamical object, a rubber band, and they imagine either downright classical vibrating strings or quantum mechanical strings that just don't differ from other quantum mechanical objects. But this understanding doesn't go beyond the (unavoidably oversimplified) name of string theory. If you analyze the composition of the term "string theory" as a linguist, you may think it's just a "theory of some strings". But that's not really the lesson one should draw. The real lesson is that if certain operations are done well with particular things, one ends with some amazing set of equations that may explain lots of things about the Universe.

Strings are exceptionally powerful – and only exceptionally powerful – at the quantum level. And the point of string theory isn't that it's a theory of another object. The point is that string theory is special among theories that would initially look "analogous".

Why is it special? And why is the magic of string theory so intertwined with quantum mechanics?

Discrete types of Nature's building blocks

For centuries, people knew something about chemistry. Matter around us is made of compounds which are mixtures of elements – such as hydrogen, helium, lithium, and I am sure you have memorized the rest. The number of types of atoms around us is finite. If arbitrarily large nuclei were allowed or stable, it would be countably infinite. But the number would still be discrete – not continuous.

For about a century, people have realized that the elements are probably made of identical atoms. Each element has its own kind of atom. The concept of atoms was first promoted by Democritus in ancient Greece. But in chemistry, atoms became more specific.

Sometime in the late 19th and early 20th century, people began to understand that the atom isn't as indivisible as its Greek name suggested. It is composed of a dense nucleus and electrons that live somewhere around the nucleus. The nucleus was later found to be composed of protons and neutrons. The quantum mechanics of 1925 allowed physicists to study the quantized motion of electrons around the nuclei – and the motion of the electrons is the crucial thing that decides the energy levels of all atoms and, consequently, their chemical properties.

In the 1960s, protons and neutrons were found to be composite as well. First, matter was composed of atoms – different kinds of building blocks for every element. Later, matter was reduced to bound states of electrons, protons, and neutrons. Later still, protons and neutrons were replaced with quarks while electrons remained and became an important example of leptons, a group of fermions that is considered "on par" with quarks. The Standard Model deals with fermions, namely quarks and leptons, and bosons, namely the gauge bosons and the Higgs boson. The bosons are particularly capable of mediating forces between all the fermions (and bosons).

But even in this "nearly final" picture, there are still finitely many but relatively many species of elementary particles. Their number is slightly lower than the number of atoms that were considered indivisible a century earlier. But the difference isn't too big – neither qualitatively nor quantitatively. We have dozens of types of basic "atoms" or "elementary particles" and each of them must be equipped with some properties (yes, the properties of elementary particles in the Standard Model look more precise and fundamental than the properties of atoms of the elements used to). The different particle species amount to many independent assumptions about Nature that have to be added to the mix to build a viable theory.

Can we do better? Can we derive the species from a smaller number of assumptions – and from one kind of matter?

String theory – let's assume that Nature is described by a weakly-coupled heterotic string theory (closed strings only), to make it simpler – describes all elementary particles, bosons and fermions, as discrete energy eigenstates of a vibrating closed string. All interactions boil down to splitting and merging of these oscillating strings. Quantum mechanics is needed for the energy levels to be discrete – just like in the case of the energy levels of atoms. But for the first time, there is only one underlying building block in Nature, a vibrating closed string.

Like in atomic and molecular physics, quantum mechanics is needed for the discrete – finite or countable – number of species of small bound objects that exist.

Also, the number of spacetime dimensions was always arbitrary in classical physics. When constructing a theory, you had to assume a particular number – in other words, you had to add the coordinates $$t,x,y,z$$ to your theory manually, one by one – and because the choice of the spacetime dimension was one of the first steps in the construction of any theory, there was no way to treat the theories in different spacetime dimensions simultaneously, and there were consequently no conceptual ways how to derive the right spacetime dimension.

In string theory, it's different because even the spacetime dimensions – scalar fields on the world sheet – are "things" that contribute to various quantities (such as the conformal anomaly) and string theory is therefore capable of picking the preferred (critical) dimension of the spacetime. Even the individual spacetime dimensions are sort of made of the "same convertible stuff" within string theory. This would be unthinkable in classical physics.

Prediction of gravity and other special forces: state-operator correspondence

String theory is not only the world's only known theory that allows Einsteinian gravity in $$D\geq 4$$ to co-exist with quantum mechanics. String theory makes the Einsteinian gravity unavoidable. It predicts gravitons, spin-two particles that interact in agreement with the equivalence principle (all objects accelerate at the same acceleration in a gravitational field).

Why is it so? I gave an explanation e.g. in 2007. It is because a particular energy level of the vibrating closed string looks like a spin-two massless particle and it may be shown that the addition of a coherent state of such "graviton strings" into a spacetime is equivalent to the change of the classical geometry on which all other objects – all other vibrating strings – propagate. In this way, the dynamical curved geometry (or at least any finite change of it) may be literally built out of these gravitons.

(Similarly, the addition of strings in another mode, the photon mode, may have the effect that is indistinguishable from the modification of the background electromagnetic field and it is true for all other low-energy fields, too.)

Why is it so? What is the most important "miracle" or a property of string theory that allows this to work? I have picked the state-operator correspondence. And the state-operator correspondence is an entirely quantum mechanical relationship – something that wouldn't be possible in a classical world.

What is the state-operator correspondence? Consider a closed string. It has some Hilbert space. In terms of energy eigenstates, the Hilbert space has a zero mode described by the usual $$x_0,p_0$$ degrees of freedom that make the string behave as a quantum mechanical particle. And then the strings may be stretched and the amount of vibrations may be increased by adding oscillators – excitations by creation operators of many quantum harmonic oscillators. So a basis vector in this energy basis of the closed string's Hilbert space is e.g.$\alpha^\kappa_{-2}\alpha^\lambda_{-3} \tilde \alpha^\mu_{-4} \tilde\alpha_{-1}^\nu \ket{0; p^\rho}.$ What is this state? It looks like a momentum eigenstate of a particle whose spacetime momentum is $$p^\rho$$. However, for a string, the "lightest" state with this momentum is just a ground state of an infinite-dimensional harmonic oscillator. We may excite that ground state with the oscillators $$\alpha$$. These excitations are vaguely analogous to the kicking of the electrons in the atoms from the ground state to higher states, e.g. from $$1s$$ to $$2p$$. Those oscillators without a tilde are left-moving, those with a tilde are right-moving waves on the string. The (negative) subscript labels the number of periods along the closed string (which Fourier mode we pick). The superscript $$\kappa$$ etc. labels in which transverse spacetime direction the string's oscillation is increased.

The total squared mass is given by $$2+3=4+1$$ in some string units. The sum of the tilded and untilded subscripts must be equal (five, in this case) for the "beginning" of the closed string to be immaterial, technically because $$L_0-\tilde L_0 = 0$$. Great. This was a basis of the closed string's Hilbert space.
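The counting of these oscillator states is concrete enough to script. As an illustration (I use 24 transverse directions, the bosonic-string value, purely as an example rather than the heterotic setup above), the number of left-moving states at level $n$ is the coefficient of $q^n$ in $\prod_{k\geq 1}(1-q^k)^{-24}$, and level matching means a closed-string state pairs a left level with an equal right level:

```python
def oscillator_state_counts(max_level, dims=24):
    # Coefficients of prod_{k>=1} (1 - q^k)^(-dims) up to q^max_level:
    # the number of states built from alpha_{-k}^mu excitations, with
    # `dims` transverse directions (24 is the bosonic-string value,
    # chosen here purely for illustration).
    coeffs = [1] + [0] * max_level
    for k in range(1, max_level + 1):
        for _ in range(dims):               # multiply by 1/(1 - q^k), dims times
            for m in range(k, max_level + 1):
                coeffs[m] += coeffs[m - k]
    return coeffs

left = oscillator_state_counts(4)
print(left)   # [1, 24, 324, 3200, 25650]
# A level-matched closed-string level (N, N) then carries left[N]**2 states.
```

For instance, the 324 states at level 2 are the 300 symmetric pairs of level-1 oscillators plus the 24 single level-2 oscillators, matching a count you can do by hand.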

But we may also discuss the linear operators on that Hilbert space. They're constructed as functionals of $$X^\kappa(\sigma)$$ and $$P^\kappa(\sigma)$$ – I am omitting some extra fields (ghosts) that are needed in some descriptions, plus I am omitting a discussion about the difference between transverse and longitudinal directions of the excitations etc. – there are numerous technicalities you have to master when you study string theory at the expert level but they don't really affect the main message I want to convey.

OK, the Hilbert space is infinite-dimensional but its dimension $$d$$ must be squared, to get $$d^2$$, if you want to quantify the dimension of the space of matrices on that space, OK? A matrix is "larger" than a column vector. The number $$d^2$$ looks much higher than $$d$$ but nevertheless, for $$d=\infty$$, as long as it is the right "stringy infinity", there exists a very natural one-to-one map between the states and the local operators. Let me immediately tell you what is the operator corresponding to the state above:$(\partial_z)^2 X^\kappa (\partial_z)^3 X^\lambda (\partial_{\bar z})^4 X^\mu (\partial_{\bar z})^1 X^\nu \exp(ip\cdot X(\sigma))$ There should be some normal ordering here. All the four operators $$X^{\kappa,\lambda,\mu,\nu}$$ are evaluated at the point of the string $$\sigma$$, too. You see that the superscripts $$\kappa,\lambda,\mu,\nu$$ were copied to natural places, the subscripts $$2,3,4,1$$ were translated to powers of the world sheet derivative with respect to $$z$$ or $$\bar z$$, the holomorphic or antiholomorphic complex coordinates on the Euclideanized worldsheet. Tilded and untilded oscillators were translated to the holomorphic and antiholomorphic derivatives. An exponential of $$X^\rho$$ operator was inserted to encode the ordinary "zero mode", particle-like total momentum of the string. And the total operator looks like some very general product of a function of $$X^\rho$$ – the imaginary exponentials are a good basis, ask Mr Fourier why it is so – and its derivatives (of arbitrarily high orders). By the combination of the "Fourier basis wisdom" and a simple decomposition to monomials, every function of $$X^\rho$$ and its worldsheet derivatives may be expanded to a sum of such terms.

The map between operators and states isn't quite one-to-one. We only considered "local operators at point $$\sigma$$ of the string" where the value of $$\sigma$$ remains unspecified. But the "number of possible values of $$\sigma$$" looks like a smaller factor than the factor $$d$$ that distinguishes $$d,d^2$$, the dimension of the Hilbert space and the space of operators, so the state-operator correspondence is "almost" a one-to-one map.

Such a map would be unthinkable in classical physics. In classical physics, a pure state would be a point in the phase space. On the other hand, the observable of classical physics is any coordinate on the phase space – such as $$x$$ or $$p$$ or $$ax^2+bp^2$$. Is there a canonical way to assign a coordinate on the phase space – a scalar function on the phase space – to a particular point $$(x,p)$$ on that space? There's clearly none. These mathematical objects carry completely different information – and the choice of the coordinate depends on much more information. You would have a chance to map a probability distribution (another scalar function) on the phase space to a general coordinate on the phase space – except that the former is non-negative. But that map wouldn't be shocking in quantum mechanics, either, because the probability distribution is upgraded to a density matrix which is a similar matrix as the observables. The magic of string theory is that there is a dictionary between pure states and operators.

This state-operator correspondence is important – it is a part of the most conceptual proof of the string theory's prediction of the Einsteinian gravity. Why does the state-operator correspondence exist? What is the recipe underlying this magic?

Well, you can prove the state-operator correspondence by considering a path integral on an infinite cylinder. By conformal transformations – symmetries of the world sheet theory – the infinite cylinder may be mapped to the plane with the origin removed. The boundary conditions on the tiny removed circle at the origin (boundary conditions rephrased as a linear insertion in the path integral) correspond to a pure state; but the specification of these boundary conditions must also be equivalent to a linear action at the origin, i.e. a local operator.
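In one common convention (an illustration; conventions differ), the conformal map in question is the exponential map. Writing the coordinate on the Euclidean cylinder as $$w=\tau+i\sigma$$, with $$\tau$$ the Euclidean time and $$\sigma\sim\sigma+2\pi$$,

```latex
\[
  z = e^{w} = e^{\tau + i\sigma},
\]
```

so the infinite past $$\tau\to-\infty$$ of the cylinder is squeezed into the single point $$z=0$$: a state prepared in the far past becomes a local operator inserted at the origin.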

Another "magic player" that appeared in the previous paragraph – a chain of my explanations – is the conformal symmetry. A solution to the world sheet theory works even if you conformally transform it (a conformal transformation is a diffeomorphism that doesn't change the angles even if you keep the old metric tensor field). Conformal symmetries exist even in purely classical field theories. Lots of the self-similar or scale-invariant "critical" behavior exhibits the conformal symmetry in one way or another. But what's cool about the combination of conformal symmetry and quantum mechanics is that a particular, fully specified pure state (e.g. the ground state of a string or another object, or the spacetime vacuum) may be equivalent to a particular state of the self-similar fog.

The combination of quantum mechanics and conformal symmetry is therefore responsible for many nontrivial abilities of string theory such as the state-operator correspondence (see above) or holography in the AdS/CFT correspondence. At the classical level, the conformal symmetry of the boundary theory is already isomorphic to the isometry group of the AdS bulk. But that wouldn't be enough for the equivalence between "field theories" in spacetimes of different dimensions. Holography, i.e. the ability to remove the holographic dimension in quantum gravity, may only exist when the conformal symmetry exists within a quantum mechanical framework.

Dualities, unexpected enhanced symmetries, unexpected numerous descriptions

The first quantum mechanical X-factor of string theory is the state-operator correspondence and its consequences – either on the world sheet (including the prediction of forces mediated by string modes) or in the boundary CFT in the holographic AdS/CFT correspondence.

To make the basic skeleton of this blog post simple, I will only discuss the second class of stringy quantum muscles as one package – the unexpected symmetries, enhanced symmetries, and numerous descriptions. For some discussion of the enhanced symmetries, try e.g. this 2012 blog post.

In theoretical physicists' jargon, dualities are relationships between seemingly different descriptions that shouldn't represent the same physics but that, for some deep, nontrivial, and surprising reasons, turn out to be completely physically equivalent, including quantitative properties such as the mass spectrum of some bound states etc.

The enhanced symmetries such as the $$SU(2)$$ gauge group of the compactification on a self-dual circle (under T-duality) are a special example of dualities, too. The action of this $$SU(2)$$, except for the simple $$U(1)$$ subgroup, looks like some weird mixing of states with different winding numbers etc. Nothing like that could be a symmetry in classical physics. In particular, we need quantum mechanics to make the momenta quantized – just like the winding numbers (the integer saying how many times a string is wound around a non-contractible circle in the spacetime) are quantized – if we want to exchange momenta and windings as in T-duality. But within string theory, those symmetries become possible.
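Concretely (the standard textbook formulas, quoted from memory and with factors suppressed): for a closed string on a circle of radius $$R$$, the mass formula contains

```latex
\[
  m^2 \;=\; \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \ldots,
  \qquad n, w \in \mathbb{Z},
\]
```

which is manifestly invariant under $$R\to\alpha'/R$$ combined with $$n\leftrightarrow w$$. At the self-dual radius $$R=\sqrt{\alpha'}$$, extra winding/momentum states become massless and enhance the generic $$U(1)\times U(1)$$ to the $$SU(2)\times SU(2)$$ containing the $$SU(2)$$ mentioned above.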

Many stringy vacua have larger symmetry groups than expected classically. You may identify 16+16 fermions on the heterotic string's world sheet and figure out that the theory will have an $$SO(16)\times SO(16)$$ symmetry. But if you look carefully, the group is actually enhanced to $$E_8\times E_8$$. Similarly, a string theory on the Leech lattice could be expected to have the Conway group of symmetries – the isometry group of such a lattice – but instead, you get a much cooler, larger, and sexier monster group of symmetries, the largest sporadic finite simple group.

Two fermions on the world sheet may be bosonized – they are equivalent to one boson. This is also a simple example of a "stringy duality" between two seemingly very different theories. The conformal symmetry and/or the relative scarcity of the number of possible conformal field theories may be used in a proof of this equivalence. Wess-Zumino-Witten models involving strings propagating on group manifolds are equivalent to other "simple" theories, too.

I don't want to elaborate on all the examples – their number is really huge and I have discussed many of them in the past. They may often be found in different chapters of string theory textbooks. Here, I want to emphasize their general spirit and where this spirit comes from. Quantum mechanics is absolutely essential for this phenomenon.

Why is it so? Why don't we see almost any of these enhanced symmetries, dualities, and equivalences between descriptions in classical physics? An easy answer is unlikely to be a rigorous proof but it may be rather apt, anyway. My simplest explanation would be: You don't see dualities and other things in classical physics because classical physics gives you "infinite sharpness and resolution", which means that if two things look different, they almost certainly are different.

(Well, some symmetries do exist classically. For example, Maxwell's equations – with added magnetic monopoles or subtracted electric charges – have the symmetry of exchanging the electric fields with the magnetic fields, $$\vec E\to \vec B$$, $$\vec B\to -\vec E$$. This is a classical seed of the stringy S-dualities – and of stringy T-dualities if the electromagnetic duality is performed on a world sheet. But quantum mechanics is needed for the electromagnetic duality to work in the presence of particles with well-defined non-zero charges in the S-duality case; and in the presence of quantized stringy winding charges in the T-duality example because the T-dual momenta have to be quantized as well.)
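One may check this classical symmetry directly on the sourceless Maxwell equations (in units with $$c=1$$):

```latex
\[
  \nabla\cdot\vec E = 0,\qquad
  \nabla\cdot\vec B = 0,\qquad
  \nabla\times\vec E = -\partial_t \vec B,\qquad
  \nabla\times\vec B = +\partial_t \vec E .
\]
```

Substituting $$\vec E\to\vec B$$, $$\vec B\to-\vec E$$ just permutes these four equations among themselves – for example, the third one turns into $$\nabla\times\vec B = +\partial_t\vec E$$, which is the fourth one – so the duality is an exact symmetry of the sourceless classical theory.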

On the other hand, quantum mechanics brings you the uncertainty principle which introduces some fog and fuzziness. The objects don't have sharp boundaries and shapes given by ordinary classical functions. Instead, the boundaries are fuzzy and may be interpreted in various ways. It doesn't mean that the whole theory is ill-defined. Quantum mechanics is completely quantitative and allows an arbitrarily high precision.

Instead, the quantum mechanical description often leads to a discrete spectrum and allows you to describe all the "invariant" properties of an energy-like operator by its discrete spectrum – by several or countably many eigenvalues. And there are many classical models whose quantization may yield the same spectrum. The spectrum – perhaps with an extra information package that is still relatively small – may capture all the physically measurable, invariant properties of the physical theory.

We may see the seed of this multiplicity of descriptions in basic quantum mechanics. The multiplicity exists because there are many – and many clever – unitary transformations on the Hilbert space and many bases and clever bases we may pick. The Fourier-like transformation from one basis to another makes the theory look very different than before. Such integral transformations would be very unnatural in classical physics because they would map a local theory to a non-local one. But in quantum mechanics, both descriptions may often be equally local.
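As a toy illustration of this point (my own sketch, not from the text): a discretized "position basis" wavefunction and its image under a unitary discrete Fourier transform look very different, yet all probabilities – the physical content – are preserved.

```python
import numpy as np

# A displaced Gaussian wavepacket sampled on a grid: the "position basis" state.
N = 1024
x = np.linspace(-20, 20, N)
psi_x = np.exp(-(x - 2.0) ** 2 / 2) * np.exp(1j * 1.5 * x)
psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2))   # normalize to unit total probability

# The "momentum basis" version: a discrete Fourier transform, divided by sqrt(N)
# so that the change of basis is unitary (Parseval's theorem).
psi_p = np.fft.fft(psi_x) / np.sqrt(N)

norm_x = np.sum(np.abs(psi_x) ** 2)
norm_p = np.sum(np.abs(psi_p) ** 2)
print(norm_x, norm_p)  # both ≈ 1: the two very different-looking descriptions carry the same probabilities
```

The two arrays look nothing alike, but because the transform is unitary, every expectation value computed in one basis agrees with the other – the discrete seed of "many equally valid descriptions".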

OK, so string theory, due to its being a special theory that maximizes the number of clever ways in which the novel features of quantum mechanics are exploited, is the world champion in predicting things that were believed to be "irreducible assumptions whose 'why' questions could never be answered by science" and allowing new perspectives to look at the same physical phenomena. String theory allows you to derive the spacetime dimension, the spectrum of elementary particles (given some discrete information about the choice of the compactification, a vacuum solution of the stringy equations), and it allows you to describe the same physics by bosonized or fermionized descriptions, descriptions related by S-dualities, T-dualities (including mirror symmetries), U-dualities, string-string dualities which exhibit enhanced gauge symmetries, holography as in the AdS/CFT correspondence, the matrix model description representing any system as a state of bound D-branes with off-diagonal matrix entries for each coordinate, the ER-EPR correspondence for black holes, and many other things.

If you can feel why quantum mechanics smells like progress relative to classical physics, string theory should smell like progress relative to the previous quantum mechanical theories because the "quantum mechanical thinking" is applied even to things that were envisioned as independent classical assumptions. That's why string theory is quantum mechanics squared, quantum mechanics with an X-factor, or quantum mechanics on steroids. Deep thinkers who have loved the quantum revolution and who have looked into string theory carefully are likely to end up loving string theory, and those who have had psychological problems with quantum mechanics must have even worse problems with string theory.

Throughout the text above, I have repeatedly said that "quantum mechanics is applied to new properties and objects" within string theory. When I was proofreading my remarks, I felt uneasy about these formulations because the comment about the "application" indicates that we just wanted to use quantum mechanics more universally and seriously, and it was guaranteed that we could have done so. But this isn't the case. The existence of string theory (where the deeper derivations of seemingly irreducible classical assumptions about the world may arise) is a sort of a miracle, much like the existence of quantum mechanics itself. (Well, a miracle squared.) Before 1925, people didn't know quantum mechanics. They didn't know it was possible. But it was possible. Quantum mechanics was discovered as a highly constrained, qualitatively different replacement for classical physics that nevertheless agrees with the empirical data – and allows us to derive many more things correctly. In the same way, string theory is a replacement for local quantum field theories that works in almost the same way but not quite. Just like quantum mechanics allows us to derive the spectrum and states of atoms from a deeper point, string theory allows us to derive the properties of elementary particles and even the spacetime dimension and other things from a deeper, more fundamental starting point. Like quantum mechanics itself, string theory feels like something important that wasn't invented or constructed by humans. It pre-existed and it was discovered.

Jon Butterworth - Life and Physics

Ten years after the “Big Bang”
Ten years ago it was Wednesday, and at 10:28 in the morning Geneva time the first protons had just made the 27 km journey through the Large Hadron Collider at CERN. The media referred to it as “Big Bang Day”, and … Continue reading

September 04, 2018

Clifford V. Johnson - Asymptotia

Beach Scene…

The working title for this was “when you forget to bring your camera on holiday...” but I know you won’t believe that's why I drew it! (This was actually a quick sketch done at the beach on Sunday, with a few tweaks added over dinner and some shadows added using iPad.)

I'm working toward doing finish work on a commissioned illustration for a magazine (I'll tell you about it more when I can - check instagram, etc., for updates/peeks), and am finding my drawing skills very rusty – so opportunities to do sketches, whenever I can find them, are very welcome.

The post Beach Scene… appeared first on Asymptotia.

Jon Butterworth - Life and Physics

Anti-protons, Dark Matter and Helium
First post of “Postcards from the Energy Frontier” at the Cosmic Shambles Network. A new measurement at CERN tells us something about the way particles travel through interstellar space. Which in turn may help a satellite on the International Space … Continue reading

September 01, 2018

Jon Butterworth - Life and Physics

Geneva Monopoly
Just returned from a couple of weeks at CERN. Saw this in Geneva and had to buy it – you can probably tell why. So, CERN is Oxford Street (which, for those of you who don’t know London, is much … Continue reading

August 31, 2018

Lubos Motl - string vacua and pheno

Light Stückelberg bosons deported to the swampland
Conjecture would also imply that photons have to be strictly massless

I am rather happy about the following new hep-th preprint that adds 21 pages of somewhat nontrivial thoughts to some heuristic arguments that I always liked to spread. Just to be sure, Harvard's Matt Reece released his paper
Photon Masses in the Landscape and the Swampland
What's going on? Quantum field theory courses usually start with scalar fields and the Klein-Gordon Lagrangian. At some moment, people want to learn about some empirically vital quantum field, the electromagnetic field, whose Lagrangian is

${\mathcal L}_\gamma = -\frac 14 F_{\mu\nu} F^{\mu\nu}.$

The action is invariant under the $$U(1)$$ gauge invariance which is why the 3+1 polarizations of the $$A_\mu$$ field are reduced to the $$(D-2)$$ i.e. two transverse physical polarizations of the spin-1 photon. Are there also massive spin-one bosons?

Yes, there are, e.g. the W-bosons and Z-bosons that were discovered at CERN more than 30 years ago. The addition of masses naively corresponds to a simple mass term

${\mathcal L}_{\rm mass} = \frac {m^2}{2} A_\mu A^\mu.$

A problem is that this term isn't gauge-invariant. So the theory must be defined without the gauge invariance and we can't consistently reduce the 3+1 polarizations (including one time-like polarization that has the wrong sign of the norm, so it would lead to negative probabilities) to 3 (for a massless photon, 2) physical polarizations.

However, the Standard Model allows massive spin-1 bosons by the Higgs mechanism. The fundamental Lagrangian actually is gauge-invariant and the gauge-invariance-violating mass term above isn't included directly. Instead, it is generated from the Higgs field's vacuum expectation value $$\langle h\rangle = v$$ through the interactions of the gauge field $$W_\mu$$ or $$Z_\mu$$ with the Higgs field that is included in the Higgs boson's kinetic term $$\partial_\mu h \cdot\partial^\mu h$$ once the partial derivatives are replaced with the covariant derivatives. These covariant derivatives $$D_\mu=\partial_\mu - i g A_\mu$$ are not only allowed but needed to construct gauge-invariant kinetic terms.
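Schematically (up to factors of 2 and charge normalizations, which I suppress here), the mass emerges when the covariant kinetic term is expanded around the vacuum value:

```latex
\[
  (D_\mu h)^\dagger (D^\mu h) \;\supset\; g^2 A_\mu A^\mu\, \langle h\rangle^2
  \;=\; g^2 v^2 A_\mu A^\mu ,
\]
```

i.e. an effective mass term with $$m_A \sim g v$$, generated without ever writing a gauge-invariance-violating term in the fundamental Lagrangian.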

So the W-bosons and Z-bosons get their masses via the interaction with the Higgs boson (that's also true for the fermions – leptons and quarks). This is the pretty way to generate masses of spin-1 bosons. It is exploited by the Standard Model and the Higgs mechanism is the last big clear discovery of experimental particle physicists. So massive gauge bosons automatically point to the Higgs mechanism.

But then there's the "ugly" way – and I've always considered it an ugly way – to make spin-1 bosons massive, the Stückelberg mechanism. The mass term for the photon is rewritten as

${\mathcal L}_{\rm mass} = \frac 12 f_{\theta}^2 (\partial_\mu \theta - eA_\mu)^2.$

We added a new scalar field $$\theta$$ and preserved the gauge invariance $$A_\mu\to A_\mu +(1/e)\partial_\mu \alpha$$ but the new scalar field must also transform under it, $$\theta\to \theta+\alpha$$. Because we have the same "amount" of gauge invariance as we had for the massless photon, but one scalar field was added, we end up with 3 physical polarizations of the massive particle instead of the massless photon's two polarizations. They're the ordinary three spatial polarizations of a massive vector particle, $$x,y,z$$.
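A one-line check that the trick works (using the transformations quoted above): under the gauge transformation, the combination in the parentheses is inert,

```latex
\[
  \partial_\mu\theta - eA_\mu \;\longrightarrow\;
  \partial_\mu(\theta+\alpha) - e\left(A_\mu + \tfrac{1}{e}\partial_\mu\alpha\right)
  = \partial_\mu\theta - eA_\mu ,
\]
```

so its square, the Stückelberg mass term, is gauge-invariant even though the naive $$\frac{m^2}{2}A_\mu A^\mu$$ is not.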

One may gauge-fix the Stückelberg action by setting $$\theta=0$$ which reduces the system to the Proca action for the "regular" massive spin-1 boson. But the advantage of the Stückelberg form is that you know how to write down the field's interactions with others in a gauge-invariant way.

The mass of the (Swiss) Ernst Stückelberg's boson is $$m_A = ef_\theta$$. You may send it to zero either by sending the gauge coupling $$e\to 0$$ or sending $$f_\theta\to 0$$ or some combination of both. Note that $$e\to 0$$ is something that the weak gravity conjecture labels dangerous and, under certain assumptions, forbidden. OK, this kind of description of a massive spin-1 boson doesn't seem to be exploited by the Standard Model. It's ugly because the scalar field transforms in a suicidal way and the theory doesn't point to any non-Abelian gauge symmetry or other pretty things.

In principle, people would always say that the photon that we know and love (and especially see) can be massive, thanks to a Stückelberg mechanism. Well, I always protested when someone presented it as a real possibility. If the photon were massive, we know that the mass would have to be much smaller than the inverse radius of the Earth – because we know that the magnetic fields around the Earth behave as those in the proper massless electromagnetism, not in some Proca-Yukawa way. And if the photon were massive but this light, it would at least amount to a new, unsubstantiated fine-tuning. It's more likely, and we are encouraged to assume, that the photon is exactly massless.

Reece places this "negative sentiment" of mine into a potentially axiomatic if not provable framework. He argues that the limit of the very light photon is "very far in the configuration space" and that in consistent theories of quantum gravity, the swampland reasoning implies the existence of some light enough particles (well, a whole tower of them) and/or other reasons why the effective field theory has to break down at relatively low energy scales. Quantitatively, Reece claims that the effective field theory has to break down above

$\Lambda_{UV} = \sqrt{ \frac{m_\gamma M_{\rm Planck}}{e} }.$

Well, the theory would have to break down earlier, at the scale $$e^{1/3} M_{\rm Planck}$$, if the latter scale were even lower. At any rate, using the scale in the displayed equation above, we know that the photon mass is rather tiny (recall my comments about the geomagnetic field etc.), and the geometric average with the Planck mass sends us to an atomic physics scale where QED still seems OK, and that's how the massive photon hypothesis could be strictly refuted.
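A back-of-the-envelope check of that last claim (my own numbers, inserted for illustration only): taking a photon-mass bound of order $$10^{-18}\,{\rm eV}$$ as an assumed input, the cutoff formula lands in the sub-MeV range.

```python
import math

# Illustrative estimate of the cutoff Lambda_UV = sqrt(m_gamma * M_Planck / e).
# The input numbers below are my assumptions for this sketch, not from Reece's paper.
m_gamma = 1e-27    # illustrative photon-mass bound in GeV (about 1e-18 eV)
M_planck = 1.2e19  # (non-reduced) Planck mass in GeV
e = 0.3            # electromagnetic coupling, roughly sqrt(4*pi*alpha)

Lambda_UV = math.sqrt(m_gamma * M_planck / e)
print(f"Lambda_UV ~ {Lambda_UV:.1e} GeV")  # ~ 2e-4 GeV, i.e. a few hundred keV
```

An effective-field-theory breakdown at a few hundred keV would contradict the excellent tests of QED at and above atomic energy scales, which is the sense in which a massive photon could be "strictly refuted".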

We're not quite sure about any of these swampland-based principles but I tend to think that many of them, when properly formulated, are right and powerful. I find this picture intriguing. Lots of the constructions in effective field theory, like the Stückelberg masses, looked ugly and heuristically "less consistent" to the people who had as good a taste as your humble correspondent. Finally, we may be becoming able to clearly articulate the arguments showing that this "feeling of reduced consistency" is not just some emotion. When coupled to quantum gravity, these ugly scenarios could indeed be strictly forbidden.

Quantum gravity and/or string theory could only allow the solutions that seemed "more pretty" than their ugly competitors. And you could stop issuing politically correct disclaimers such as "we are assuming that the photon mass is exactly zero; if it had a nonzero mass, we would have to revise the whole analysis".

Reece's paper has no direct relationship to the de Sitter vacua and the cosmological controversies. But if it's right or at least accepted, it clearly strengthens the Vafa Team in that dispute. There are really two different sketches of the general spirit of future stringy research. In Team Stanford's plan, we're satisfied with some Rube Goldberg-style construction, we don't know which one (or which class) is the right one, we get used to it, and we train ourselves to be happy that we won't learn anything new.

On the other hand, in Team Vafa's plan for the future, string theory research continues to make actual progress, trying to answer well-defined questions about the world around us that weren't previously answered, such as "Can some massive bosons we will produce have Stückelberg masses? Is our photon allowed by string theory to be massive?" Truly curious physicists simply want new answers like that to be found. It may be impossible to answer some of these questions, especially if our vacuum is a relatively random one in a set of vacua that have different properties. But this possibility is not a proven fact and even if it is true for some properties, it is not true for all questions.

We can't ever accept the belief that all questions that haven't been answered so far will remain unanswered forever! That would be a clearly religious attitude that stops progress in science – and that could have stopped it at every moment in the past. Harvard's Reece sketched some arguments that may prohibit Stückelberg masses in quantum gravity and you – I am primarily talking about you, dear reader in Palo Alto – had better think about it and decide whether he's right or not.

In some technical questions within the de Sitter controversy, I am uncertain, and so are others. But I am certain about certain principles of the scientific method. The real pleasure of science is to find ways to answer questions – to discriminate between possible answers. Many people in Northern California (which includes Palo Alto) may have adopted a non-discrimination approach to society and science (all people and answers and vacua are equally good) – but without discrimination, there is no science.

August 30, 2018

ZapperZ - Physics and Physicists

Where Do Elementary Particle Names Come From?
In this video, Fermilab's Don Lincoln focuses less on the physics and more on the history and classification of our current Standard Model of elementary particles.

Zz.

August 29, 2018

Lubos Motl - string vacua and pheno

Team Stanford launches Operation Barbarossa against quintessence
The disagreement between Team Stanford – which defends its paradigm with a large landscape of de Sitter solutions of string theory – and Team Vafa – which suggests that de Sitter spaces could be banned due to general stringy "swampland" principles (and which proposes quintessence as an alternative) – has been seemingly confined to short enough exchanges in the questions-and-answers periods of various talks.

The arguments couldn't have been properly analyzed and compared in such a limited context. In science, it is better to write them down. You may look at these arguments and equations for hours – and so can your antagonists – which usually increases the quality of the analyses. Team Stanford clearly believes that the de Sitter vacua are here to stay, the criticisms are wrong, and quintessence has fatal problems. But can they back these opinions by convincing arguments?

Today, in the list of new hep-th preprints, we received an avalanche of papers that say something about the de Sitter-vs-quintessence controversy in string theory. Using the [numbers] from the daily ordering of papers, we talk about the following papers:
[3] De Sitter vs Quintessence in String Theory (by Cicoli+4, 49 pages)

[4] A comment on effective field theories of flux vacua (by Kachru+Trivedi, 22 pages)

[15] dS Supergravity from 10d (by Kallosh+Wrase, 18 pages)

[16] de Sitter Vacua with a Nilpotent Superfield (by Kallosh+3, 6 pages)

[18] The landscape, the swampland and the era of precision cosmology (by Akrami+3, 43 pages)
I have omitted Tadashi Takayanagi's paper(s) although one of them also talks about de Sitter spaces.

First, concerning the affiliations: I include all of the collaborations into "Team Stanford" because they defend de Sitter solutions in string theory. But the first paper is really international (Bologna-Boulder-India-Cambridge), the second paper is Stanford-Bombay, the third paper is Stanford-Vienna, the fourth paper is Stanford-Brown-Leuven (Belgium), and the last paper is Stanford-Leiden (the Netherlands).

Well, you may hopefully see that Stanford is overrepresented in these papers. Moreover, it seems to play the role of the "headquarters" of this campaign. And the first paper among the five, which is the only Stanford-free one, is arguably the least combative one, too. ;-) I think it's fair to say that the stringy landscape picture of cosmology has been the greatest source of pride for Stanford's theoretical physicists in the recent 15 years. At some human level, we could understand why they could be anxious if someone were basically saying that those 15 years revolved around a mistake or some sloppiness. But the pride doesn't imply that those papers were right and safe, of course.

Now, the number of papers – five – is rather large and the salvos had to be at least partially coordinated. Can the colleagues be expected to swallow a reasonably high percentage of the content? Wasn't the number of papers chosen to be high to simply intimidate the opposition? To replace the quality of the arguments with the quantity of papers? I am not saying that. I am just asking. The high number of papers leads me to similar feelings as the proposed large number of de Sitter vacua. Less is sometimes more.

Let's talk about the separate papers. The middle paper, the one by Kallosh and Wrase, claims that the anti-D3-branes in the KKLT "uplifting" procedure may be replaced by anti-D5-, D6-, D7-, or D9-branes, too. That seems like a bold statement to me. If this were the case, why wouldn't KKLT have noticed these four new possible dimensionalities right away? Fifteen years ago, I was surely asking the question why anti-D3-branes were used and not some branes of other dimensions and I was surely given a – not so convincing – answer implying that it had to be anti-D3-branes. If one says that 4 possible dimensionalities of the antibranes are just as OK, and one does so 15 years after the game-changing paper was released, it doesn't exactly help either of these papers look trustworthy.

I would probably choose to disbelieve the new Kallosh-Wrase paper. One general problem with this paper (but, to some extent, with many other papers and perhaps with Team Stanford's papers in general) is that it seems to be a supergravity paper, not a full-blown stringy paper. And I think it's fair to describe both Kallosh and Wrase as supergravity experts, not string theory experts. Shouldn't a full-blown string theory expert validate claims that D-branes may be used in a certain new way? My answer is that he or she should.

At their supergravity level of analysis, many things are possible and they may change the dimensionality of the uplifting antibrane. Great. But have they actually demonstrated that string theory allows such solutions, especially the new ones? I don't think that they have made the full-blown string analysis. Whatever is intrinsically stringy is treated in a sloppy way. For example, search for an "open string" in the Kallosh-Wrase paper. You will get three hits – and all of them just say that they have ignored the open string moduli.

The more stringy a given concept or structure is, the more it is ignored in this paper. Again, I think that this criticism applies to most of the Team Stanford papers in general. But the whole point of the Vafa Team is to carefully study the fine, characteristically stringy features, phenomena, and constraints that are completely invisible at the level of supergravity – i.e. at the level of effective field theory. I have doubts about every particular, precise enough "swampland statement" made by Vafa or any disciple (including our "weak gravity conjecture" group). On the other hand, I have no doubts that it is extremely important to appreciate that string theory is not just supergravity and most of the particular low-energy supergravity-based effective field theories have no consistent quantum gravity or stringy completion.

Kallosh and Wrase – and, as I said, much of the Team Stanford – seem to use string theory as the "ultimate justification of the 'anything goes' paradigm in supergravity". You may do anything you want in supergravity, add any string-inspired object, fluxes, branes, whatever you like, and then you use the term "string theory" as if it were the ultimate and universal justification of the validity of all such constructions. For them, string theory is just a knife that always unties your hands. Like with Elon Musk's promises, anything goes with string theory.

OK, I am sure that this is just a wrong usage or interpretation of "string theory". String theory offers some new tools, new objects, new transitions, phenomena, and relationships between the objects. But string theory also – and maybe primarily – brings us new constraints, new bans, and new universal as well as particular predictions. For me, string theory may have produced new ingredients and possibilities but it's still primarily a theory that has a greater predictive power than the effective quantum field theory. It's clearly a sloppy, skewed way to use string theory if someone only uses string theory as the "source of many new objects and possibilities" – and not as a "book full of new constraints, universal laws and principles, and previously impossible predictions for particular situations".

(There has been a community of "extremely applied" string theorists – whom I would surely call non-string theorists – who have used the term "string theory" as an excuse for really non-standard pieces of physics including the Lorentz symmetry violation and the violation of the equivalence principle. I believe that string theory is, on the contrary, a solid framework that bans or at least greatly discourages such experiments.)

Because we are discussing the question whether the carefully and accurately studied string/M-theory allows de Sitter vacua, the KKLT construction, and similar things, another supergravity-level sloppy analysis just cannot possibly be relevant for the big question defining the Team Stanford vs Team Vafa controversy. To resolve this controversy, one simply needs a higher stringy precision of the arguments. The paper by Kallosh and Wrase doesn't have it and it's questionable whether they could make such an analysis in any other paper.

OK, let's now look at the fourth paper among the five, the one about a "nilpotent superfield". The new paper is a response to the 2017 paper Towards de Sitter from 10D by Moritz, Retolaza, and Westphal. OK, those authors have claimed that the KKLT construction didn't work because during the uplift, there was a stronger backreaction than previously thought and the compactification remains AdS and doesn't become dS. In the new paper, the authors claim that the nilpotent superfield as a SUSY breaking tool isn't compatible with the nonlinearly realized SUSY. But that doesn't really matter because even if one allows it, they do get a de Sitter, not an anti de Sitter, space.

I would believe that one of these groups must admit defeat soon enough because the claims and arguments look rather straightforward.

Now, let's turn our attention to the new Kachru-Trivedi paper. It's written as a "positive paper" on effective field theories of the KKLT-style flux vacua. I haven't read the paper in its entirety but the abstract and the general organization of the paper do suggest that they're reviewing the thoughts that have been around since the KKLT paper. It seems to me that concerning the validity and existence of the effective field theories for the stringy situations, they always rely on field-theory-based, e.g. Wilsonian, arguments. I am not persuaded that this is good enough. String theory may invalidate the effective field theories by making sure that an energy-$$E$$ effective theory isn't a local quantum field theory at all.

What really bothers me is the superficial approach of Kachru and Trivedi to the arguments given by the Vafa Team:
A recent paper [46 Obied Ooguri Spodyneiko Vafa], motivated largely by no-go theorems with limited applicability to a partial set of classical ingredients, made a provocative conjecture implying that quantum gravity does not support de Sitter solutions. [Footnote about two previous papers saying similar things.] Our analysis – and more importantly, effective field theory applied to the full set of ingredients available in string theory – is in stark conflict with this conjecture. This leads us to believe that the conjecture is false.
Do Kachru and Trivedi consider this non-technical, judgmental paragraph to be enough to deal with the proposed alternative picture? OK, let's rephrase what they are saying:
We may repeat what we said 15 years ago. We may pay no attention whatsoever to the detailed arguments given by the Vafa Team. We don't need to be impartially interested in the validity of the proposed new principles, inequalities, and no-go theorems. We just don't want to learn any and we prefer to believe that no such new insights exist. Instead, it's enough to dismiss all these papers with a simple slogan, with slurs such as "provocative" that make the Vafa Team look limited while we look unlimited, repeat that everything we have ever claimed to be true must be true, and that's enough to "prove" that we are right and they are wrong.
I am sorry but it doesn't seem enough to me. The claim that the Vafa Team's statements are limited to a "partial set of classical ingredients" while Team Stanford is better because effective field theory is "applied to all ingredients available in string theory" seems utterly demagogic to me. KKLT and followers have used lots of ingredients from string theory but there's no proof that they are "all" ingredients of string theory. New ingredients kept on emerging and we still can't prove that we know "all of them" because we don't have a universal definition of string theory. Moreover, the high number of such ingredients makes it more likely, and not less likely, that one of them breaks down and invalidates the KKLT construction as a whole. So they have many, not "all", ingredients of string theory and this fact makes their construction more vulnerable, not less so!

And the very conclusion that "Vafa seems to disagree with something we wrote so he must be wrong" simply looks childish. This is not a rational way to argue. Vafa might say exactly the same thing – he doesn't – but neither of these two stubborn propositions would amount to a convincing argument one way or the other.

Finally, we have the first paper by Cicoli et al.; and the last, fifth paper by Akrami et al. These two papers explicitly claim to discuss the "de Sitter versus quintessence" controversy in string theory. The Akrami paper seems to have one proposed counterargument against quintessence that Akrami et al. are proud of and that they want the reader to read carefully. What is it? They pick the constant $$c$$ from the Vafa Team inequality, claim that it should be equal to one (or at least of order one, they're not sure), and then they claim that the cosmological observations rule out $$c\gt 1$$ at the 3-sigma, i.e. 99.7%, level.

I am sorry but the right value of $$c$$ isn't really known, at least not too reliably, so they can't determine the statistical significance well, either. The right $$c$$ could be $$1/3$$ and there would be no exclusion at all. It seems to me that this overemphasis on the $$c\sim 1$$ "prediction" and its weak exclusion by the observational data is their strongest argument. If that's so, I find it extremely weak. Even if the calculation of the 3-sigma confidence level were solid, which it doesn't seem to be at all, it is still just a 3-sigma confidence. A few years ago, the LHC diphoton bump was "discovered" at four sigma and it was fake. A potential universal new principle of string theory is a different caliber. In my list of priorities, if I become sufficiently certain about a new universal principle of physics, it may beat even 5-sigma deviations from the predictions.
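The sigma-to-percent translations in the paragraph above (3 sigma as 99.7%, and the 4- and 5-sigma levels mentioned alongside) assume ordinary Gaussian statistics. A minimal sketch of the standard two-sided conversion:

```python
import math

def two_sided_confidence(n_sigma: float) -> float:
    """Two-sided Gaussian confidence level for an n-sigma deviation."""
    return math.erf(n_sigma / math.sqrt(2.0))

# 3 sigma corresponds to the roughly 99.7% level quoted in the text,
# 4 sigma (the diphoton bump) to about 99.994%, 5 sigma to about 99.99994%.
for n in (3, 4, 5):
    print(f"{n} sigma -> {two_sided_confidence(n):.7f}")
```

Note that this conversion is the least fragile part of the argument; the fragile part, as discussed above, is the assumed value of $$c$$ entering the exclusion in the first place.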

Finally, the first, Stanford-free paper is less arrogant than the Stanfordful papers. They prefer the de Sitter, KKLT-style models because they look concrete, there seems to be a calculational control, and it's apparently getting better with time. Quintessence is more "challenging" and requires more fine-tuning, we read. Well, Vafa et al. disagree with the second point, probably both points. At any rate, they're potentially subjective. You can't use your feelings that something is "challenging" – without any particular argument or quantification of the "challenge" – as a persuasive argument against an alternative theory.

So I am afraid that this Cicoli et al. paper is going to be too vague when arguing against the alternative picture based on the new general principles proposed by the Vafa Team. One problem, as I have mentioned, is that these two paradigms are very different from each other. They have completely different advantages, very different numbers of required solutions or corners of the stringy configuration space, different importance of the precision needed to analyze things, and so on. Depending on one's philosophy, the prior probabilities assigned to these paradigms may be very different. The probability ratio may very well be more extreme than 300-to-1 in either direction – which makes some 3-sigma empirical arguments weaker than a weak tea.

At the end, it should be possible to resolve the controversy. But one simply needs to study the purely stringy effects in these compactifications (or would-be vacua) more accurately or more reliably than ever before. This increased control over the stringy effects or the increased reliability of the stringy arguments is probably necessary for any progress in resolving this open question. I haven't read the papers in their entirety but I am afraid it is obvious that they haven't really made any progress in resolving the actual disagreement. They are basically repeating the things that were done before and that's not a good path to progress.

And that's the memo.

August 24, 2018

Clifford V. Johnson - Asymptotia

Science Friday Book Club Wrap!

Don't forget, today live on Science Friday we (that's SciFri presenter Ira Flatow, producer Christie Taylor, Astrophysicist Priyamvada Natarajan, and myself) will be talking about Hawking's "A Brief History of Time" once more, and also discussing some of the physics discoveries that have happened since he wrote that book. We'll be taking (I think) callers' questions too! We've also made recommendations for further reading to learn more about the topics discussed in Hawking's book.

-cvj

(P.S. The picture above was one I took when we recorded for the launch of the book club, back in July. I used the studios at Aspen Public Radio.)

The post Science Friday Book Club Wrap! appeared first on Asymptotia.

John Baez - Azimuth

Compositionality – Now Open For Submissions

Our new journal Compositionality is now open for submissions!

It’s an open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Compositionality is free of cost for both readers and authors.

CALL FOR PAPERS

We invite you to submit a manuscript for publication in the first issue of Compositionality (ISSN: 2631-4444), a new open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline.

To submit a manuscript, please visit http://www.compositionality-journal.org/for-authors/.

SCOPE

Compositionality refers to complex things that can be built by sticking together simpler parts. We welcome papers using compositional ideas, most notably of a category-theoretic origin, in any discipline. This may concern foundational structures, an organising principle, a powerful tool, or an important application. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.

Related conferences and workshops that fall within the scope of Compositionality include the Symposium on Compositional Structures (SYCO), Categories, Logic and Physics (CLP), String Diagrams in Computation, Logic and Physics (STRING), Applied Category Theory (ACT), Algebra and Coalgebra in Computer Science (CALCO), and the Simons Workshop on Compositionality.

SUBMISSION AND PUBLICATION

Submissions should be original contributions of previously unpublished work, and may be of any length. Work previously published in conferences and workshops must be significantly expanded or contain significant new results to be accepted. There is no deadline for submission. There is no processing charge for accepted publications; Compositionality is free to read and free to publish in. More details can be found in our editorial policies at http://www.compositionality-journal.org/editorial-policies/.

STEERING BOARD

John Baez, University of California, Riverside, USA
Bob Coecke, University of Oxford, UK
Kathryn Hess, EPFL, Switzerland
Steve Lack, Macquarie University, Australia
Valeria de Paiva, Nuance Communications, USA

EDITORIAL BOARD

Corina Cirstea, University of Southampton, UK
Ross Duncan, University of Strathclyde, UK
Andree Ehresmann, University of Picardie Jules Verne, France
Tobias Fritz, Max Planck Institute, Germany
Neil Ghani, University of Strathclyde, UK
Dan Ghica, University of Birmingham, UK
Jeremy Gibbons, University of Oxford, UK
Nick Gurski, Case Western Reserve University, USA
Helle Hvid Hansen, Delft University of Technology, Netherlands
Chris Heunen, University of Edinburgh, UK
Aleks Kissinger, Radboud University, Netherlands
Joachim Kock, Universitat Autonoma de Barcelona, Spain
Martha Lewis, University of Amsterdam, Netherlands
Samuel Mimram, Ecole Polytechnique, France
Simona Paoli, University of Leicester, UK
Dusko Pavlovic, University of Hawaii, USA
Christian Retore, Universite de Montpellier, France
Peter Selinger, Dalhousie University, Canada
Pawel Sobocinski, University of Southampton, UK
David Spivak, MIT, USA
Jamie Vicary, University of Birmingham and University of Oxford, UK
Simon Willerton, University of Sheffield, UK

Sincerely,

The Editorial Board of Compositionality

August 20, 2018

Clifford V. Johnson - Asymptotia

And So it Begins…

It’s that time of year again! The new academic year’s classes begin here at USC today. I’m already snowed under with tasks I must get done, several with hard deadlines, and so am feeling a bit bogged down already, I must admit. Usually I wander around the campus a bit and soak up the buzz of the new year that you can pick up in all the campus activity swarming around. But instead I sit at my desk, prepping my syllabus, planning important dates, adjusting my calendar, exchanging emails, (updating my blog), and so forth. I hope that after class I can do the wander.

What will I be teaching this semester? The second part of graduate electromagnetism, as I often do. Yes, in a couple of hours, I’ll be again (following Maxwell) pointing out a flaw in one of the equations of electromagnetism (Ampere’s), introducing the displacement current term, and then presenting the full completed set of the equations - Maxwell’s equations, one of the most beautiful sets of equations ever to have been written down. (And if you wonder about the use of the word beautiful here, I can happily refer you to look at The Dialogues, starting at page 15, for a conversation about that very issue…!)

Speaking of books, if you’ve been part of the Science Friday Summer reading adventure, reading Hawking’s A Brief History of Time, you should know that I’ll be back on the show on Friday talking with Priyamvada Natarajan, producer Christie Taylor, and presenter Ira Flatow about the book one more time. There may also be an opportunity to phone in with questions! And do look at their website for some of the extra material they’ve been posting about the book, including extracts from last week’s live tweet Q&A.

Anyway, I’d better get back to prepping my class. I’ll be posting more about the semester (and many other matters) soon, so do come back.

The post And So it Begins… appeared first on Asymptotia.

August 18, 2018

Lubos Motl - string vacua and pheno

Quintessence is a form of dark energy
Tristan asked me what I thought about Natalie Wolchover's new Quanta Magazine article,
Dark Energy May Be Incompatible With String Theory,
exactly when I wanted to write something. Well, first, I must say that I already wrote a text about this dispute, Vafa, quintessence vs Gross, Silverstein, in late June 2018. You may want to reread the text because the comments below may be considered "just an appendix" to that older text. Since that time, I exchanged some friendly e-mails with Cumrun Vafa. I am obviously more skeptical towards their ideas than they are but I think that I have encountered some excessive certainty of some of their main critics.

Wolchover's article sketches some basic points about this rather important disagreement about cosmology among string theorists. But there are some very unfortunate details. The first unfortunate detail appears in the title. Wolchover actually says that "dark energy might be incompatible with string theory". That's the statement she seems to attribute to Cumrun Vafa and co-authors.

But that misleading formulation is really invalid – it's not what Cumrun is saying. Here, the misunderstanding may be blamed on some sloppy "translation" of the technical terms that has become standard in the pop science press – and the excessively generalized usage of some jargon.

OK, what's going on? First of all, the Universe is expanding, isn't it? We're talking about cosmology, the big bang theory (which I don't capitalize – to make sure that I am not talking about the sitcom), and the expansion of the Universe was already seen in the 1920s although people only became confident about it some 50 years ago.

In the late 1990s, it was observed that the expansion wasn't slowing down, as widely expected, but speeding up. The accelerated expansion may be explained by dark energy. Dark energy is anything that is present everywhere in the vacuum and that tends to accelerate the expansion of the Universe. Dark energy, like dark matter, is invisible by optical telescopes (that's why both of them are called dark). But unlike dark matter which has (like all matter or dust) the pressure $$p=0$$, the dark energy has nonzero pressure, namely $$p\lt 0$$ or $$p\approx -\rho$$ where $$\rho$$ is the energy density. That's how dark energy and dark matter differ; dark energy's negative pressure is needed for its ability to accelerate the expansion of the Universe.

Dark energy is supposed to be a rather general, umbrella term that may be represented by several known, slightly different theoretical concepts described by equations of physics. So far, the by far most widespread and "canonical" or "minimalist" kind of dark energy was the cosmological constant. That's really a number that is independent of space and especially time (it's why it's called a constant) which Einstein added to the original Einstein's equations of the general theory of relativity. Einstein's original goal was to allow the size of the Universe to be stable in time – because his equations seemed to imply that the Universe's size should evolve, much like the height of a freely falling apple. It just can't sit at a constant value – just like the apple usually doesn't sit in the air in the middle of the room.

But the expansion of the Universe was discovered. Einstein could have predicted it because it follows from the simplest form of Einstein's equations, as I said. That could have earned him another Nobel prize when the expansion was seen by Hubble. (Well, Einstein's stabilization by the cosmological constant term wouldn't really work even theoretically, anyway. The balance would be unstable, tending to turn into an expansion or an implosion, like a pencil standing on its tip. Any tiny perturbation would be enough for this instability to grow exponentially.)
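The "pencil on its tip" instability can be seen in a toy numerical model. In units chosen (as an illustrative assumption) so that the dust attraction and the cosmological term both equal 1 at the equilibrium radius $$a=1$$, the scale factor obeys $$\ddot a = -1/a^2 + a$$; linearizing gives $$\delta\ddot a = 3\,\delta a$$, so any perturbation grows like $$e^{\sqrt{3}\,t}$$:

```python
# Toy model of Einstein's static universe: a'' = -1/a**2 + a.
# The dust attraction balances the cosmological term exactly at a = 1,
# but the balance is unstable; we integrate with semi-implicit Euler.
def evolve(a0: float, steps: int = 2000, dt: float = 0.001) -> float:
    a, v = a0, 0.0
    for _ in range(steps):
        v += (-1.0 / a**2 + a) * dt   # acceleration of the scale factor
        a += v * dt
    return a

print(evolve(1.0))      # exactly at equilibrium: stays put
print(evolve(1.001))    # tiny overshoot: runaway expansion
print(evolve(0.999))    # tiny undershoot: implosion
```

After only two time units, a one-part-in-a-thousand perturbation has grown by more than an order of magnitude in either direction, which is the instability described above.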

That's probably the main reason why Einstein labeled the introduction of the cosmological constant term "the greatest blunder of his life". Well, it wasn't the greatest blunder of his life: the denial of quantum mechanics and state-of-the-art physics in general in the last 30 years of his life was almost certainly a greater blunder.

In the late 1990s, the Universe's expansion was seen to accelerate which is why it seemed obvious that Einstein's blunder wasn't a blunder at all, let alone the worst one: the cosmological constant term seems to be there and it's responsible for the acceleration of the Universe. Suddenly, Einstein's cosmological term (with a different numerical value than Einstein needed – but one that is of the same order) seemed like a perfect, minimalistic explanation of the accelerated expansion. Recall that Einstein's equations say $G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$ Note that even in the complicated SI units, there is no $$\hbar$$ here – Einstein's general relativity is a classical theory that doesn't depend on quantum mechanics at all. Here, $G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}$ is the Einstein curvature tensor, constructed from the Ricci tensor $$R_{\mu\nu}$$ and the Ricci scalar $$R$$. It's some function of the metric and its first and especially second partial derivatives in the spacetime. On the right hand side of Einstein's equations, $$T_{\mu\nu}$$ is the stress-energy tensor that knows about the sources, the density of mass/energy and momentum and their flow.

The $$\Lambda g_{\mu\nu}$$, a simple term that adds an additional mixture of the metric tensor to Einstein's equations, is the cosmological constant term. It naturally reappeared in the late 1990s. It's a rather efficient theory. The term doesn't have to be there but in some sense, it's even "simpler" than Einstein's tensor, so why should it be absent? And it seems to explain the accelerated expansion, so we need it.

The theory is really natural which is why the standard cosmological model was the $$\Lambda\text{CDM}$$ model, i.e. a big bang theory with the cold dark matter (CDM) and the cosmological constant term $$\Lambda$$.

What about string theory?

String theory really predicts gravity. You may derive Einstein's equations, including the equivalence principle, from the vibrating strings. Einstein's theory of gravity is a prediction of string theory, which is still one of the main reasons to be confident that string theory is on the right track to find a deeper or final theory in physics, to say the least. Aside from gravitons and gravity (and Einstein's equations that may be derived from string theory for this force), string theory also predicts gauge fields and matter fields such as leptons and quarks. They have their (Dirac, Maxwell...) equations and their stress-energy tensors also enter as terms in $$T_{\mu\nu}$$ on the right hand side of Einstein's equations.

String theory demonstrably predicts Einstein's equations as the low-energy limit for the massless, spin-two field (the graviton field) that unavoidably arises as a low-lying excitation of a vibrating string. To some extent, this appearance of Einstein's equations is guaranteed by consistency of the theory (or by the relevant gauge invariance, namely the diffeomorphisms) – and string theory is consistent (which is a highly unusual, and probably unprecedented, virtue of string theory among quantum mechanical theories dealing with massless spin-two fields).

Does string theory also predict the cosmological constant term, one that Einstein originally included in the equations? At this level, the answer is unquestionably Yes and Cumrun Vafa and pals surely agree. To say the least, string theory predicts lots of vacua with a negative value of the cosmological constant, the anti de Sitter (AdS) vacua. In fact, those are the vacua where the holographic principle of quantum gravity may be shown rather rigorously – holography takes the form of Maldacena's AdS/CFT correspondence.

There are lots of Minkowski, $$\Lambda=0$$, vacua in string theory. And there are also lots of AdS, $$\Lambda\lt 0$$, vacua in string theory. I think that the evidence is clear and no one who is considered a real string theorist by most string theorists disputes the statement that both groups of vacua, flat Minkowski vacua and AdS vacua, are predicted by string theory.

The real open question is whether string theory allows the existence of $$\Lambda \gt 0$$ (de Sitter or dS) vacua. Those seem to be needed to describe the accelerated expansion of the Universe in terms of the cosmological constant. After 2000, the widespread view – if counted by the number of heads or number of papers – was that string theory allowed the positive cosmological constant. Even though I still find de Sitter vacua in string theory plausible, I believe that it's fair to say that the frantic efforts to spread this de Sitter view – and write papers about de Sitter in string theory – may be described as a sign of groupthink in the community.

There have always been reasons to doubt whether string theory allows de Sitter vacua at all. At the end of the last millennium, Maldacena and Nunez wrote a paper with a no-go theorem. It was mostly based on supergravity, a supersymmetric extension of Einstein's general relativity and a low-energy limit of superstring theories, but people generally believed that this approximation of string theory was valid in the context of the proof.

Sociologically, you may also want to know that in the 1990s, Edward Witten was "predicting" that the cosmological constant had to be exactly zero (and a symmetry-like principle would be found that implies the vanishing value). He was motivated by the experience with string theory. Even before Maldacena and Nunez and lots of similar work, it looked very hard to establish de Sitter, $$\Lambda \gt 0$$ vacua in string theory. However, some of these problems could have been – and were – considered just technical difficulties. Why? Because if the cosmological constant is positive, you don't have any time-like Killing vectors and there can be no unbroken spacetime supersymmetry. Controlled stringy calculations only work when the spacetime supersymmetry is present (and guarantees lots of cancellations etc.) which is why people were willing to think that the difficulties in finding de Sitter vacua in string theory were only technical difficulties – caused by the hard calculations in the case of a broken supersymmetry.

However, aside from Maldacena-Nunez, we got additional reasons to think that string theory might prohibit de Sitter vacua in general. Cumrun Vafa's Swampland – the term for an extension of the (nice stringy) landscape that also includes effective field theories that string theory wouldn't touch, not even with a long stick – implies various general (sometimes qualitative, sometimes quantitative) predictions of string theory that hold in all the stringy vacua, despite their high number. Along with his friend Donald Trump, Cumrun Vafa has always wanted to drain the swamp. ;-)

The Swampland program has produced several, more or less established, general laws of string theory – that may also be considered consequences of a consistent theory of quantum gravity. Wolchover mentions that the most well-established example of a Swampland law is our "weak gravity conjecture". Gravity (among elementary particles) is much weaker than other forces in our Universe – and in fact, it probably has to be the case in all Universes that are consistent at all.
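The claim that gravity is much weaker than the other forces is easy to quantify for two electrons: since both the gravitational and the Coulomb force fall off as $$1/r^2$$, their ratio is independent of the separation. A quick check with the standard CODATA-level values of the constants:

```python
# Gravity vs electromagnetism between two electrons; the 1/r**2 factors
# cancel, so the ratio holds at any separation.
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # elementary charge, C

ratio = (G * m_e**2) / (k_e * q_e**2)
print(f"F_grav / F_EM = {ratio:.2e}")   # about 2.4e-43
```

The roughly 43 orders of magnitude here are the kind of enormous hierarchy that the weak gravity conjecture elevates to a general principle of consistent quantum gravity.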

The Swampland business contains many other laws like that, some of them are more often challenged than the weak gravity conjecture. Cumrun Vafa and his co-authors have presented an incomplete sketch of a proof that de Sitter vacua could be banned in string theory for Swampland reasons – for similar general reasons that guarantee that gravity is the weakest force.

This assertion is unsurprisingly disputed by lots of people, especially people around Stanford, because Stanford University (with Linde, Kallosh, Susskind, Kachru, Silverstein, and many others) has been the hotbed of the "standard stringy cosmology" after 2000. They wrote lots of papers about cosmology, starting from the KKLT paper, and the most famous ones have thousands of citations. At some level, authors of such papers may be tempted to think that their papers just can't be wrong.

But even the main claims of papers with thousands of citations may ultimately be wrong, of course. Sadly, I must say that some of this Stanford environment likes to use groupthink – and arguments about authorities and the number of papers – in a way that resembles the "consensus science" about global warming. Sorry, ladies and gentlemen, but that's not how science works.

Doubts about the KKLT construction are reasonable because the KKLT and similar papers still build on certain assumptions and approximations. I am confident it is correct to say that the authors of some of the critical papers questioning the KKLT (especially the final, de Sitter "uplift" of some intermediate AdS vacua, an uplift that is achieved by the addition of some anti-D3-branes) are competent physicists – at least "basically indistinguishable" in competence from the Stanford folks. See e.g. Thomas Van Riet's TRF guest blog from November 2014 (time is fast, 1 year per year).

Cumrun Vafa et al. don't want to say that string theory has been ruled out. Instead, they say that in string theory, the observed dark energy is represented by quintessence which is just a form of dark energy (read the first sentence of the Wikipedia article I just linked to) – and that's why Wolchover's title that "dark energy is incompatible with string theory" is so misleading. I think that the previous sentence is enough for everyone to understand the main unfortunate terminological blunder in Wolchover's article. Cumrun and pals say that dark energy is described by quintessence, a form of dark energy, in string theory. They don't say that dark energy is impossible in string theory.

Wolchover's blunder may be blamed on the habit of treating the phrase "dark energy" as the pop science equivalent of the "cosmological constant". Well, they are not quite equivalent and to understand the proposals by Cumrun Vafa et al., the difference between the terms "dark energy" and "cosmological constant" is absolutely paramount.

Quintessence sounds philosophical, if not spiritual, but in cosmology it's just a fancy word for an ordinary time-dependent generalization of the cosmological constant – one that results from the potential energy of a new, inflaton-like scalar field. String theory often predicts many scalar fields; some of them may play the role of the inflaton, while others – similar ones – may be the quintessence that fills our Universe with the dark energy responsible for the accelerated expansion.
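To see how such a field mimics a cosmological constant, here is a deliberately minimal sketch (not any specific model from the papers discussed): the field rolls in an assumed exponential potential $$V = V_0 e^{-\lambda\phi}$$ with a fixed Hubble friction $$H$$, and its equation of state $$w = (\dot\phi^2/2 - V)/(\dot\phi^2/2 + V)$$ sits near, but not exactly at, $$-1$$:

```python
# Illustrative quintessence toy model. All parameter values (V0, lam, H)
# are made up for the demo; the physics is the scalar equation of motion
#   phi'' + 3*H*phi' + dV/dphi = 0
# with an exponential potential V = V0 * exp(-lam * phi).
import math

V0, lam, H, dt = 1.0, 1.0, 3.0, 1e-3
phi, phi_dot = 0.0, 0.0

def V(x: float) -> float:
    return V0 * math.exp(-lam * x)

for _ in range(5000):
    phi_ddot = -3.0 * H * phi_dot + lam * V(phi)   # -dV/dphi = +lam*V
    phi_dot += phi_ddot * dt
    phi     += phi_dot * dt

kinetic = 0.5 * phi_dot**2
w = (kinetic - V(phi)) / (kinetic + V(phi))
print(f"w = {w:.3f}")   # close to, but not exactly, -1
```

The slow roll keeps the kinetic term tiny compared to the potential, so $$w\approx -1$$ – which is why quintessence can pass for a cosmological constant observationally, while still being a genuinely time-dependent form of dark energy.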

Now, the disagreement between "Team Vafa" and "Team Stanford" may be described as follows:
Team Stanford uses the seemingly simplest description, one using Einstein's old cosmological constant. It's really constant, string theory allows it, and elaborate – but not quite exact – constructions with antibranes exist in the literature. They use lots of sophisticated equations, do many details very accurately and technically, but the question whether these de Sitter vacua exist remains uncertain because approximations are still used. Team Stanford ignores the uncertainty and sometimes intimidates other people by sociology – by a large number of authors who have joined this direction. The cosmological constant may be positive, they believe, and there are very many, like the notorious number $$10^{500}$$, ways to obtain de Sitter vacua in string theory. We may live in one of them. Because of the high number, the predictive power of string theory may be reduced and some form of the multiverse or even the anthropic principle may be relevant.

Team Vafa uses a next-to-simplest description of dark energy, quintessence, which is a scalar field. This scalar field evolves and the potential normally needs to be fine-tuned even more so than the cosmological constant. But Team Vafa says that due to some characteristically stringy relationships, the new, added fine-tuning is actually not independent from the old one, the tuning of the apparently tiny cosmological constant, so from this viewpoint, their picture might actually be as bad (or as good) as the normal cosmological constant. The very large hypothetical landscape may be an illusion – all these constructions may be inconsistent and therefore non-existent, due to subtle technical bugs overlooked by the approximations or, equivalently, due to very general Swampland-like principles that may be used to kill all these hypothetical vacua simultaneously. Team Vafa doesn't have too many fancy mathematical calculations of the potential energy and it doesn't have a very large landscape. So in this sense, Team Vafa looks less technical and more speculative than Team Stanford. But one may argue that Team Stanford's fancy equations are just a way to intimidate the readers and they don't really increase the probability that the stringy de Sitter vacua exist.
These are just two very different sketches of how dark energy is actually incorporated in string theory. They differ by some basic statements, by the expectation "how very technical certain adequate papers answering a question should be", and in many other respects. I think we can't be certain which of them, if any, is right – even though Team Stanford would be tempted to disagree. But their constructions simply aren't waterproof and they look arbitrary or contrived from many points of view. And yes, as you could have figured out, I do have some feeling that the way of argumentation by Team Stanford has always been similar to the "consensus science" behind the global warming hysteria. Occasional references to the "consensus" and a large number of papers and authors – and equations that seem complicated but if you think about their implications, they don't really settle the basic question (whether the de Sitter vacua – or the dangerous global warming – exist at all).

Team Vafa proposes a new possibility and I surely believe it deserves to be considered. It's "controversial" in the sense that Team Stanford is upset, especially some of the members such as E.S. But I dislike Wolchover's subtitle:
A controversial new paper argues that universes with dark energy profiles like ours do not exist in the “landscape” of universes allowed by string theory.
What's the point of labeling it "controversial"? It may still be right. Strictly speaking, the KKLT paper and the KKLT-based constructions by Team Stanford are controversial as well. These a priori labels just don't belong in science reporting, I think – they belong in reporting about pseudosciences such as the global warming hysteria. Reasonable people just don't give a damn about these labels. They care about the evidence. Cumrun Vafa is a top physicist, he and pals have proposed some ideas and presented some evidence, and this evidence hasn't really been killed by solid counter-evidence as of now.

Incidentally, after less than two months, Team Vafa already has 23+19 citations. So it doesn't look like some self-evidently wrong crackpot papers, like papers claiming that the Standard Model is all about octonions.

I was also surprised by another adjective used by Wolchover:
In the meantime, string theorists, who normally form a united front, will disagree about the conjecture.
Do they form a united front? What is it supposed to mean and what's the evidence that the statement is correct whatever it means? Are all string theorists members of Marine Le Pen's National Front? Boris Pioline could be one but I think that even he is not. ;-) String theorists are theoretical physicists at the current cutting-edge of fundamental physics and they do the work as well as they can. So when something looks clearly proven by some papers, they agree about it. When something looks uncertain, they are individually uncertain – and/or they disagree about the open questions. When a possible new loophole is presented that challenges some older lore or no-go not-yet-theorems, people start to think about the new possibilities and usually have different views about it, at least for a while.

What is Wolchover's "front" supposed to be "united" for or against? String theorists are united in the sense that they take string theory seriously. Well, that's a tautology. They wouldn't be called string theorists otherwise. String theory also implies something so they of course take these implications – as far as they're clearly there – seriously. But is there any valid, non-tautological content in Wolchover's statement about the "united front"?

It's complete nonsense to say that string theorists are "more united as a front" than folks in any other typical scientific discipline that does things properly. String theorists have disagreed about numerous things that didn't seem settled to some of them. I could list many technical examples but one recent example is very conceptual – the firewall argument by the late Joe Polchinski and his team. There were sophisticated constructions and equations in the papers by Polchinski et al. but the existence of the firewalls obviously remained disputed, and I think that almost all string theorists think that firewalls don't exist in any useful operational sense. But they followed the papers by Polchinski et al. to some extent. Polchinski and others weren't excommunicated for heresy in any sense – despite the fact that the statement "black holes don't have any interior at all" would unquestionably be a radical change of the lore.

This disagreement about the representation of dark energy within string theory is comparably deep and far-reaching as the firewall wars.

Again, I still assign a probability above 50% to the basic picture of Team Stanford, which leads to a cosmological constant from string theory. But I don't think it has been proven (I have issued a similar warning about $$P\neq NP$$ and other things). I have communicated with many apparently smart and technically powerful folks who had sensible arguments against the validity of the basic conclusions of KKLT. I am extremely nervous about the apparent efforts of some Stanford folks to "ban any disagreement" about the KKLT-based constructions, a ban that would be "justified" by the existence of many papers and their mutual citations.

That's not how actual science may progress for a very long time. If folks like Vafa have doubts about de Sitter vacua in string theory and all the related constructions, and they propose quintessence models that could be more natural than once believed (there were simple reasons why quintessence would have been dismissed by string theorists, including myself, just a few years ago), they must have the freedom – not just formally, but also in practice – to pursue these alternative scenarios, regardless of the number of papers in the literature that take KKLT for granted! Only when the plausibility and attractiveness of these ideas really disappears, according to the body of experts, could it make sense to suggest that Vafa seems to be losing.

These two pictures offer very different sketches of how the real world is realized within string theory. Indeed, the string phenomenological communities working on these two possibilities could easily evolve into "two separated species" that can't talk to each other usefully (although both of them would still be trained with the help of the same textbooks, up to a basic textbook of string theory). But as long as we're uncertain, this splitting of the research into several different possibilities is simply the right thing that should happen. Putting all of one's eggs in one basket when we're not quite sure about the right basket would simply be wrong.

Wolchover also mentions the work of Dr Wrase. I haven't read that so I won't comment.

But I will comment on some remarks by Matt Kleban (trained at Team Stanford, now NYU) such as
Maybe string theory doesn’t describe the world. [Maybe] dark energy has falsified it.
Well, that's nice. String theory is surely falsifiable and such things might happen, which would be a big event. But I think it's obvious that Kleban isn't really taking the side of the string theory critics. Instead, this statement – that dark energy may have falsified string theory – is a subtle demagogic attack on Team Vafa, who are the people he actually cares about (he doesn't care about Šm*its). Effectively, Matt is trying to compare Vafa et al. to Šmoits. If the dark energy in string theory doesn't work in the Stanford way, Matt says, I will scream and cry, and you will give it up. Matt knows that the real people whom he cares about wouldn't consider string theory ruled out for similar reasons, so he's effectively saying that they shouldn't buy Team Vafa's claims, either.

Sorry, Matt, but that's demagogy. Team Vafa doesn't really claim to have falsified string theory. There is a genuinely new possibility, whether you like to admit it or not. Also, Matt expressed his attacks on Team Vafa using a different verbal construction:
He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning,"...
Cute, Matt. I always love it when people complain about lamppost reasoning. I've had funny discussions with both Brian Greene and Lisa Randall about this phrase before they published their popular books. Lisa felt very entertained when I said it was actually rational to spend more time looking under the lamppost. But it is rational.

I must explain the proverb here. There exists some mathematical set of possibilities in theoretical physics or string theory but only some of them have been discovered or understood, OK? So we call the things that have been understood or studied (intensely enough) "the insights under the lamppost". Now, the "lamppost reasoning" is a criticism used by some people who accuse others of a specific kind of bias. What is this sin or bias supposed to be? Well, the sin is that these people only search for their lost keys under the lamppost.

Now, this is supposed to be funny and to immediately mock the perpetrators of the "sin" and kill their arguments. If you lose your keys somewhere, it's a matter of luck whether the keys are located under a lamppost, where you could see them, or elsewhere, where you couldn't. So obviously, you should look for the keys everywhere, including places that aren't illuminated by the lamp, Kleban and Randall say, among others.

But there's a problem with this recommendation. You can't find the keys in the dark too easily – because you don't see anything there. Perhaps you could sweep the whole surface with your fingers. But it's harder, and the dark area may be very large. If you want to increase the probability that you find something, you should appreciate the superiority of vision and primarily look at the places where you can see something! You aren't guaranteed to find the keys, but your probability of finding them per unit time may be higher because you can see there.
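The "per unit time" claim can be checked with a toy Monte Carlo simulation. All the numbers below – the lit fraction of the area, the detection probabilities, the time budget – are invented for illustration and are not from any actual analysis; the point is only that a much higher detection probability in the light can beat a larger dark area:

```python
import random

# Toy Monte Carlo for the "lamppost" argument: with a fixed time budget,
# searching where you can SEE (high detection probability per unit time)
# finds the keys more often than groping in the dark, even though the
# keys are uniformly likely to be anywhere.  All numbers are made up.

random.seed(0)

P_KEY_IN_LIGHT = 0.3   # fraction of the area that is lit (keys uniform)
P_DETECT_LIGHT = 0.5   # per-step chance of spotting the key, in the light
P_DETECT_DARK = 0.05   # per-step chance of feeling out the key, in the dark
TIME_BUDGET = 5        # number of search steps available

def search(trials, search_lit_area):
    """Return the fraction of trials in which the key is found."""
    found = 0
    for _ in range(trials):
        key_in_light = random.random() < P_KEY_IN_LIGHT
        # You can only find the key if you search the region it is in.
        if key_in_light != search_lit_area:
            continue
        p = P_DETECT_LIGHT if search_lit_area else P_DETECT_DARK
        if any(random.random() < p for _ in range(TIME_BUDGET)):
            found += 1
    return found / trials

lit_rate = search(100_000, search_lit_area=True)
dark_rate = search(100_000, search_lit_area=False)
print(f"found (searching the lit 30%):  {lit_rate:.3f}")
print(f"found (searching the dark 70%): {dark_rate:.3f}")
```

With these assumed numbers the lit-area strategy finds the keys roughly twice as often, even though the dark region contains the keys more than twice as often.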

And there might even exist reasons why the keys are even more likely to be under the lamppost. When you were losing them, you probably preferred to walk in places where you could see, too. You may have lost them while checking the contents of your wallet, and you were more likely to do that under the lamppost. So you were more likely to be under the lamppost at that moment, too! Similarly, when God was creating the world – assuming Her mathematical skills resemble ours – She was likely to start with the structures that are relatively easy for us to discover and clarify as well. So She was more likely to drop our Universe under the lamppost, and that's why it's right to focus our attention there.

For a researcher, it's damn reasonable to focus on things that are easier to be understood properly.

The two situations (keys, physics) aren't quite analogous but they're close enough. My claim is even clearer in the metaphorical "lamppost" of physics. If you want to settle a question, such as the existence of de Sitter vacua, you simply have to build primarily on the concepts – both general principles and the particular constructions – that have been understood well enough. You can't build on the things that are completely unknown. And if you build on things that are only known vaguely or with a lot of uncertainty, you can be misled easily!

So in some sense, I am saying that you should look for your keys under the lamppost, and then increase the sensitivity of your retinas and extend the range that you have control over. That's how knowledge normally grows – but there always exist regions in the space of ideas and facts that aren't understood yet. The suggestion that claims in physics may be supported by constructions that are either completely unknown or badly understood is just ludicrous. It may sound convincing to its advocates because the keys may be anywhere, even in the dark. But in the dark of ignorance, science can't be applied, and we must appreciate that all our scientific conclusions may only be based on the things that have been illuminated – all of our legitimate science is built out of insights about the vicinity of the lamppost.

Whoever claims to have knowledge derived from the dark is a charlatan – sorry but it's true, Lisa and Matt! In this particular case, it's totally sensible for Team Vafa to evaluate the experience with the known constructions of the vacua and conclude that it seems rather convincing that no de Sitter vacua exist in string theory and the existing counterexamples are fishy and likely to be inconsistent. This evidence is circumstantial because it builds on the "set of constructions" that have been studied or illuminated – constructions under the lamppost – but that's still vastly better than if you make up your facts and make far-reaching claims about the "world in the dark" that we have no real evidence of!

You surely expect comparisons to politics as well. I can't avoid the feeling that the Team Stanford claim that de Sitter vacua simply have to exist is just another example of some egalitarianism or non-discrimination. Like men and women, anti de Sitter and de Sitter vacua must be treated as equal. But sorry to say, like men and women, de Sitter and anti de Sitter vacua are simply not equal. The constructions of these two classes within string theory look very different and unlike the anti de Sitter vacua, it's plausible and at least marginally compatible with the evidence that the de Sitter vacua don't exist at all. A Palo Alto leftist could prefer a non-discrimination policy but the known facts, evidence, and constructions surely do discriminate between de Sitter and anti de Sitter spaces – and Team Vafa, like any honest scientist who actually cares about the evidence, assigns some importance to this highly asymmetric observation!

Lubos Motl - string vacua and pheno

Search for ETs is more speculative than modern theoretical physics
Edwin has pointed out a new tirade against theoretical physics,
Theoretical Physics Is Pointless without Experimental Tests,
that Abraham Loeb published in the pages of Scientific American, which used to be an OK journal some 20 years ago. The title itself seems plagiarized from Deutsche or Aryan Physics – which may be considered ironic for Loeb, who was born in Israel. And in fact, like his German role models, Loeb indeed tries to mock Einstein as well – and blame his mistakes on the usage of thought experiments:
Einstein made great discoveries based on pure thought, but he also made mistakes. Only experiment and observation could determine which was which.

Albert Einstein is admired for pioneering the use of thought experiments as a tool for unraveling the truth about the physical reality. But we should keep in mind that he was wrong about the fundamental nature of quantum mechanics as well as the existence of gravitational waves and black holes...
Loeb has a small, unimportant plus for acknowledging that Einstein was wrong on quantum mechanics. However, as an argument against theoretical physics based on thought experiments and on the emphasis on the patient and careful mental work in general, the sentences above are at most demagogic.

The fact that Einstein was wrong about quantum mechanics, gravitational waves, or black holes doesn't imply anything wrong about the usage of thought experiments and other parts of modern physics. There's just no way to credibly show such an implication. Other theorists have used better thought experiments, have thought about them more carefully, and some of them correctly figured out that quantum mechanics had to be right and gravitational waves and black holes had to exist.

The true fathers of quantum mechanics, especially Werner Heisenberg, were really using Einstein's new approach based on thought experiments, principles, and just like Einstein, they carefully tried to remove the assumptions about physics that couldn't have been operationally established (such as the absolute simultaneity killed by special relativity; and the objective existence of values of observables before an observation, killed by quantum mechanics).

Note that gravitational waves as well as black holes were detected many decades after their theoretical discovery. The theoretical discoveries almost directly followed from Einstein's equations. So Einstein's mistakes meant that he didn't trust (his) theory enough. It surely doesn't mean and cannot mean that Einstein trusted theories and theoretical methods too much. Because Loeb has made this wrong conclusion, it's quite some strong evidence in favor of a defect in Loeb's central processing unit.

The title may be interpreted in a way that makes sense. Experiments surely matter in science. But everything else that Loeb is saying is just wrong and illogical. In particular, Loeb wrote this bizarre paragraph about Galileo and timing:
Similar to the way physicians are obliged to take the Hippocratic Oath, physicists should take a “Galilean Oath,” in which they agree to gauge the value of theoretical conjectures in physics based on how well they are tested by experiments within their lifetime.
Well, I don't know how I could judge theories according to experiments that will be done after I die, after my lifetime. That's clearly impossible so this restriction is vacuous. On the other hand, is it OK to judge theories according to experiments that were done before our lifetimes or before physicists' careers?

You bet. Experimental or empirical facts that have been known for a long time are still experimental or empirical facts. In most cases, they may be repeated today, too. People often don't bother to repeat experiments that re-establish well-established truths. But these old empirical facts are still crucial for the work of every theorist. They are sufficient to determine lots of theoretical principles.

You know, it's correct to say that science is a dialogue between the scientist and Nature. But this is only true in the long run. It doesn't mean that every day or every year, both of them have to speak. If Nature doesn't want to speak, She has the right to stay silent. And She often stays silent even if you complained that She doesn't have the right. She ignores your restrictions on Her rights! So at the LHC after the Higgs boson discovery, Nature chose to remain silent so far – or She kept on saying "the Standard Model will look fine to you, human germ".

You can't change this fact by some wishful thinking about "dialogues". Theorists just didn't get new post-Higgs data from the LHC because so far, there are no new data at the LHC. They need to keep on working, which makes it obvious that they have to use older facts and new theoretical relationships between them, new hypotheses etc. In the absence of new experimental data, it is obvious that theorists' work has to be overwhelmingly theoretical or, in Loeb's jargon, it has to be a monologue! When Nature has something new and interesting to say (through experiments), Nature will say it. But theorists can't be silent or "doing nothing" just because Nature is silent these years! Only a complete idiot may fail to realize these points and agree with Loeb.

What Loeb actually wants to say is that a theorist should be obliged to plan the experiments that will settle all his theoretical ideas within his lifetime. But that's not possible. The whole point of scientific research in physics is to study questions about the laws of Nature that haven't been answered yet. And because they haven't been answered yet, people don't know and can't know what the answer will be – and even when it will be found.

An experimenter (or a boss or a manager of an experimental team) may try to plan what the experiment will do, when it will do these things, and what are the answers that it could provide us with. Even this planning sometimes goes wrong, there are delays etc. But this is not the main problem here. The real problem is that the result of a particular experiment is almost never the real question that people want to be answered. An experiment is often just a step towards adjusting our opinions about a question – and whether this step is a big or small one depends on what the experimental outcome actually is, and this is not known in advance.

Loeb has mentioned examples of such questions himself. People actually wanted to know whether there were black holes and gravitational waves. But a fixed experiment with a fixed budget, predetermined sensitivity etc. simply cannot be guaranteed to produce the answer. That's the crucial point that kills Loeb's Aryan Physics as a proposed (not so) new method to do science.

For example, both gravitational waves and black holes are rather hard to see. Similarly, the numerical value of the cosmological constant (or vacuum energy density) is very small. It's this smallness that has implied that one needed a long – and impossible to plan – period of time to discover these things experimentally.

Because black holes, gravitational waves, and a positive cosmological constant needed fine gadgets – and it was not known in advance how fine they had to be – does it mean that the theorists should be banned from studying these questions and concepts? The correct answer is obviously No – while Loeb's answer is Yes. Almost all of theoretical physics is composed of such questions. We just can't know in advance how much time will be needed to settle the questions we care about (and, as Edwin emphasized, there is nothing special about the timescale given by "our lifespan"). We can't know what the answers will be. We can't know whether the evidence that settles these questions will be theoretical in character, dependent on somewhat new experimental tools, or dependent on completely new experimental tools, discoveries, and inventions.

None of these things about the future flow of evidence can be known now (otherwise we could settle all these things now!) which is why it's impossible for these unknown answers to influence what theorists study now! The influences that Loeb demands would violate causality. If the theorists knew in advance when the answer is obtained, they would really have to know what the answer is – as I mentioned above, the confirmation of a null hypothesis always means that the answer to the interesting qualitative question was postponed. But then the whole research would be pointless.

So if science followed Loeb's Aryan Physics principles, it would be pointless! The real science follows the scientific method. Scientists must make decisions and conclusions, often conclusions blurred by some uncertainty, right now, based on the facts that are already known right now – not according to some 4-year plans, 5-year plans, or 50-year plans. And if their research depends on some assumptions, they have to articulate them and go through the possibilities (ideally all of them).

It's also utterly demagogic for him to talk about the "Galilean Oath" because Galileo Galilei disagreed with ideas that were very similar to Loeb's. In particular, Galileo has never avoided the formulation of hypotheses that could have needed a long time to be settled. One example where he was wrong was Galileo's belief that comets were atmospheric phenomena. That belief looks rather silly to me (didn't they already observe the periodicity of some comets, by the way?) but the knowledge was very different then. Science needed a long time to really settle the question.

But more generally, Galileo did invent lots of conjectures and hypotheses because those were the real new concepts that became widespread once he started the new method, the scientific method. Google search for "Galileo conjectured" or "Galileo hypothesized". Of course you get lots of hits.

As e.g. Feynman said in his simple description of the scientific method, the search for new laws works as follows: First, we guess the laws. Then we compute the consequences. And then we compare the consequences to the empirical data.

Note the order of the steps: the guess must be at the very beginning, scientists must be free to present all such possible hypotheses and guesses, and the computation of the consequences must still be close to the beginning. Loeb proposes something entirely different. He wants some planning of future experiments to be placed at the beginning, and this planning should restrict what the physicists are allowed to think about in the first place.

Sorry, that wouldn't be science and it couldn't have produced interesting results, at least not systematically. And these restrictions are indeed completely analogous to the bogus restrictions that the church officials – and later various philosophers etc. – tried to place on scientific research. Like Loeb, the church hierarchy also wanted the evidence to be direct in all cases. But one of the ingenious insights of Galileo was that he realized that the evidence may often be indirect or very indirect, but one may still learn a great deal of insights from it.

The simplest example of this "direct vs indirect" controversy is the telescope. Galileo improved the telescope technology and made numerous new observations – such as those of the Jovian moons. The church hierarchy actually disputed that those satellites existed because the observation by telescopes wasn't direct enough for them. It took many years before people realized how incredibly idiotic such an argument was. It would be a straight denial of the evidence. The telescopes really see the same thing as the eyes when both see something. Sometimes, telescopes see more details than the eyes – so they must be considered nothing else than improved eyes. The observations from eyes and telescopes are equally trustworthy. But telescopes have a better resolution.

The laymen trust telescopes today even though the telescope observations are "indirect" ways to see something. But the tools to observe and deduce things in physics have become vastly more indirect than they were in Galileo's lifetime. And most laymen – including folks like Loeb – simply get lost in the long chains of reasoning. That's one reason why many people distrust science. Because they haven't verified them individually (and most laymen wouldn't be smart or patient enough to do so), they believe that the long chains of reasoning and evidence just cannot work. But they do work and they are getting longer.

The importance of reasoning and theory-based generalizations was increasing much more quickly during Newton's lifetime – and it kept on increasing at an accelerating rate. Newton united the celestial and terrestrial gravity, among other things. The falling apple and the orbiting Moon move because of the very same force that he described by a single formula. Did he have a "direct proof" that the apple is doing the same thing in the Earth's gravitational field as the Moon? Well, you can't really have a direct proof of such a statement – which could be described as a metaphor by some. His theory was natural enough and compatible with the available tests. Some of these tests were quantitative yet not guaranteed at the beginning. So of course they increased the probability that the unification of celestial and terrestrial gravity was right. But whether such confirmations would arise, how strong and numerous they would be, and when they would materialize just isn't known at the beginning.
The risk for physics stems primarily from mathematically beautiful “truths,” such as string theory, accepted prematurely for decades as a description of reality just because of their elegance.
OK, this criticism of "elegance" is mostly a misinterpretation of pop science. Scientists sometimes describe their feelings – how their brains feel nicely when things fit together. Sometimes they only talk about these emotional things in order to find some common ground with a journalist or another layman. But at the end, this type of beauty or elegance is very different from the beauty or elegance experienced by the laymen or artists. The theoretical physicists' version of beauty or elegance reflects some rather technical properties of the theories and the statement that these traits increase the probability that the theory is right may be pretty much proven.

But even if you disagree with these proofs, it doesn't matter because the scientific papers simply don't use the beauty or elegance arguments prominently. When you read a new paper about some string dualities, string vacua, or anything of the sort, you don't really read "this would be beautiful, and therefore the value of some quantity is XY". Only when there are some calculations of XY, the authors claim that there is some evidence. Otherwise they call their propositions conjectures or hypotheses. And sometimes they use these words that remind us of the uncertainty even when there is a rather substantial amount of evidence available, too.

But uncertainty is unavoidable in science. A person who feels sick whenever there is some uncertainty just cannot be a scientist. Despite the uncertainty, a scientist has to determine what seems more likely and less likely right now. When some things look very likely, they may be accepted as facts on a preliminary basis. Some other people's belief in these propositions may be weaker – and they may claim that the proposition was accepted prematurely. But at the end, some preliminary conclusions are being made about many things. Science just couldn't possibly work without them.

By the way, I forgot to discuss the subtitle of Loeb's article:
Our discipline is a dialogue with nature, not a monologue, as some theorists would prefer to believe
Note that he emphasizes that theoretical physics is "his discipline". It sounds similar to Smolin's fraudulent claims that he was a "string theorist". Smolin isn't a string theorist and doesn't have the intellectual abilities to ever become a string theorist. Whether Loeb is a theoretical physicist is at least debatable. He's the boss of Harvard's astronomy department. The word "astrophysicist" would surely be defensible. But the phrase "theoretical physicist" isn't quite the same thing. I hope that you remember Sheldon Cooper's explanation of the difference between a rocket scientist and a theoretical physicist.

Why doesn't Missy just tell them that Sheldon is a toll taker at the Golden Gate Bridge? ;-)

Given Loeb's fundamental problems with the totally basic methodology of theoretical physics – including thought experiments and long periods of careful and patient thinking uninterrupted by experimental distractions – I think it is much more reasonable to say that Loeb clearly isn't a theoretical physicist so his subtitle is a fraudulent effort to claim some authority that he doesn't possess.

OK, Loeb tried to hijack Galileo's name for some delusions about (or against) modern physics that Galileo would almost certainly disagree with. Galileo wouldn't join these Aryan-Physics-style attacks on theoretical physics. At some level, we may consider him a founder of theoretical physics, too.

SETI vs string theory

But my title refers to a particular bizarre coincidence in Loeb's criticism of theorists' thinking that could be experimentally inaccessible for the rest of our (or some living person's?) lifetimes. He wants the experimental results right now, doesn't he? A funny thing is that Loeb is also a key official at the Breakthrough Starshot Project, Yuri Milner's \$100 million kite to be sent to greet the oppressed extraterrestrial minorities who live near Alpha Centauri, the star system nearest to ours other than the Sun.

String theory is too speculative for him but the discussions with the ETs are just fine, aren't they? Loeb seems aware of the ludicrous situation in which he has maneuvered himself:
At the same time, many of the same scientists that consider the study of extra dimensions as mainstream regard the search for extraterrestrial intelligence (SETI) as speculative. This mindset fails to recognize that SETI merely involves searching elsewhere for something we already know exists on Earth, and by the knowledge that a quarter of all stars host a potentially habitable Earth-size planet around them.
From his perspective, the efforts to chat with the extraterrestrial aliens are less speculative than modern theoretical physics. Wow. Why is it so? His argument is cute as well. SETI is just searching for something that is known to exist – intelligent life. However, the thing that just searches for something that is known to exist – intelligent life – would have the acronym SI only and it would be completely pointless because the answer is known. SETI also has ET in the middle, you know, which stands for "extraterrestrial". And Loeb must have overlooked these two letters altogether.

It is not known at all whether there are other planets where intelligent life exists, and if they exist, what is their density, age, longevity, appearance, and degree of similarity to the life on Earth. It's even more unknown or speculative how these hypothetical ETs, if they exist near Alpha Centauri, would react to Milner's kite. We couldn't even reliably predict how our civilization would react to a similar kite that would arrive to Earth. How could we make realistic plans about the reactions of a hypothetical extraterrestrial civilization?

On the other hand, string theory is just a technical upgrade of quantum field theory – one that looks unique even 50 years after the birth of string theory. Quantum field theory and string theory yield basically the same predictions for the doable experiments, quantum field theory is demonstrably the relevant approximation of stringy physics, and this approximation has been successfully compared to the empirical data. Everything seems to work.

The extra dimensions are just scalar fields on the stringy world sheet, analogous to fields that are known to exist (and in this sense, the addition of an extra dimension is as mundane as the addition of an extra flavor of leptons or quarks). We have theoretical reasons to think that the total number of spacetime dimensions should be 10 or 11. Unlike the expectations about the ETs, this is not mere prejudice. There are actual calculations of the critical dimension. Joe Polchinski's "String Theory" textbook contains 7 different calculations of $$D=26$$ for the bosonic string in the first volume; the realistic superstring analogously has $$D=10$$. This is not like saying "there should be cow-like aliens near Alpha Centauri because the stars look alike and I like this assertion".
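To give a flavor of the simplest such calculation (a standard light-cone-gauge argument sketched from memory, not any particular one of the seven in Polchinski's book): the normal-ordering constant of the bosonic string receives a zero-point contribution from each of the $$D-2$$ transverse oscillator directions,

$$a \,=\, \frac{D-2}{2}\sum_{n=1}^{\infty} n \,=\, \frac{D-2}{2}\,\zeta(-1)\cdot(-1) \,=\, \frac{D-2}{24},$$

where the divergent sum is assigned the zeta-regularized value $$\sum_{n\geq 1} n = \zeta(-1) = -\tfrac{1}{12}$$ (the overall sign conventions differ between treatments). Lorentz invariance forces the first excited level – a transverse vector – to be massless, which requires $$a=1$$ and hence $$D=26$$. The number is an output of consistency conditions, not an input.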

How can someone say that this research into extensions of successful quantum field theories is as speculative as Skyping with extraterrestrial aliens, let alone more speculative than those big plans with the ETs? At some moments, you can see that some people have simply lost it. And Loeb has lost it. It makes no sense to talk to him about these matters. He seems to hate theoretical physics so fanatically that he's willing to team up not only with the Šmoit-like crackpots but also with extraterrestrial aliens in his efforts to fight against modern theoretical physics.

Too bad, Mr Loeb, but even if extraterrestrial intelligent civilizations exist, it won't help your case because these civilizations – because of the adjective "intelligent" – know that string theory is right and you are full of šit.

And that's the memo.

P.S.: I forgot to discuss the "intellectual power" paragraph:
Given our academic reward system of grades, promotions and prizes, we sometimes forget that physics is a learning experience about nature rather than an arena for demonstrating our intellectual power. As students of experience, we should be allowed to make mistakes and correct our prejudices.
Now, this is a bizarre combination of statements. Loeb says "physics is about" learning, not demonstrating our intellectual power. "Physics is about", however, is a vague sequence of words. We should distinguish two questions: What drives people to do physics? And what determines their success?

What primarily drives the essential people to do physics is curiosity. Physicists want to know how Nature works. String theorists want lots of more detailed questions about Nature to be answered. Their curiosity is real and they don't give a damn whether an ideologue wants to prevent them from studying some questions: the curiosity is real, they know that they want to know, and some obnoxious Loeb-style babbling can't change anything about it.

Some people are secondary researchers. They do it because it's a good source of income or prestige or whatever. They study it because others have made it possible, they created the jobs, chairs, and so on. But the primary motivation is curiosity.

But then we have the question whether one succeeds. The intellectual power isn't everything but it's obviously important. Loeb clearly wants to deny this importance – but he doesn't want to do it directly because the statement would sound idiotic, indeed. But why does he feel so uncomfortable about the need for intellectual power in theoretical physics?

He presents the intellectual power as the opposite of the validity of physical theories. This contrast is the whole point of the paragraph above. But this contrast is complete nonsense. There is no negative correlation between "intellectual power" and "validity of the theories that are found". On the contrary, the correlation is pretty much obviously positive.

In the end, his attack against the intellectual power is fully analogous to the statement that ice-hockey isn't about the demonstration of one's physical strength and skills, it's about scoring goals. With certain parts emphasized, the sentence is correct – but only up to a point. The demonstration of the physical skills and strength is also "what ice-hockey is about". It's what drives some people. And the skills and strength are needed to do it well, too. The rhetorical exercise "either strength, or goals" – which is so completely analogous to Loeb's "either intellectual power, or proper learning of things about Nature" – is just a road to hell. The only possible implication of such a proposition would be to say that "people without the intellectual power should be made theoretical physicists". Does he really believe this makes any sense? Or why does he mix the validity of theories with the intellectual power in this negative way?

Well, let me tell you why. Because he is jealous of some people's superior intellectual powers compared to his own. And he is making the bet – probably correctly – that the readers of Scientific American's pages are dumb enough not to notice that his rant is completely illogical, from the beginning to the end.

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite. (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012.)

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate when we play around with models. An observation is always an outcome: we set something up initially, and then look at it some time later. The theory should describe how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. On top of this comes the additional modern idea in physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe – not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist, not because they are bad, but because they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis, and sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration, because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory). This I have not specified yet; just guessing would indeed lead to a lot of frustration.

The thing which helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if our estimates regularly failed, that should prompt us to reevaluate our ideas. And it is our experience which helps us to get from insights to estimates.

This defines our process to test our ideas. And this process can actually be traced out well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as possible with the amount of computing we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we already came up with some new first ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

August 08, 2018

Clifford V. Johnson - Asymptotia

Science Friday Book Club Q&A

Between 3 and 4 pm Eastern time today (very shortly, as I type!) I’ll be answering questions about Hawking’s “A Brief History of Time” as part of a Live twitter event for Science Friday’s Book Club. See below. Come join in! Hey SciFri Book Clubbers! Do you have any … Click to continue reading this post

The post Science Friday Book Club Q&A appeared first on Asymptotia.

August 01, 2018

Clifford V. Johnson - Asymptotia

DC Moments…

I'm in Washington DC for a very short time. 16 hours or so. I'd have come for longer, but I've got some parenting to get back to. It feels a bit rude to come to the American Association of Physics Teachers annual meeting for such a short time, especially because the whole mission of teaching physics in all its myriad ways is very dear to my heart, and here is a massive group of people gathered in devotion to it.

It also feels a bit rude because I'm here to pick up an award. (Here's the announcement that I forgot to post some months back.)

I meant what I said in the press release: It certainly is an honour to be recognised with the Klopsteg Memorial Lecture Award (for my work in science outreach/engagement), and it'll be a delight to speak to the assembled audience tomorrow and accept the award.

Speaking in an unvarnished way for a moment, I and many others who do a lot of work to engage the public with science have, over the years, had to deal with not being taken seriously by many of our colleagues. Indeed, suffering being dismissed as not being "serious enough" about our other [...] Click to continue reading this post

The post DC Moments… appeared first on Asymptotia.

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests – check them out over at the podcast site.

And there are more exciting episodes on the way. Enjoy, and spread the word!

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular.  Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process, it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious that our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose).  So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resisters of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck give the strongest constraint to date.)
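The "zooming into a curved surface" intuition can be made quantitative. A brief sketch, using the standard Friedmann-model result (my summary, not taken from the Planck papers themselves):

```latex
% In a Friedmann model the deviation from flatness evolves as
\left|\Omega(t) - 1\right| \;=\; \frac{|k|}{a^2 H^2}.
% During inflation H is nearly constant while the scale factor a grows
% enormously, so an expansion by a factor of at least 10^{20} dilutes
% any initial curvature by
\frac{\left|\Omega - 1\right|_{\text{after}}}{\left|\Omega - 1\right|_{\text{before}}}
  \;\sim\; \left(10^{20}\right)^{-2} \;=\; 10^{-40}.
```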

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
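To give a flavour of the small-scale Gaussian approximation alluded to above, here is a toy sketch. This is my own illustrative code, not Planck's actual likelihood: the theory spectrum is made up, and only cosmic variance is included (no noise, foregrounds, or sky cuts).

```python
import numpy as np

# Toy Gaussian approximation to a CMB power-spectrum likelihood.
# For a full-sky, noiseless experiment the cosmic-variance error on
# each multipole l is sigma_l = sqrt(2/(2l+1)) * C_l(theory).

def log_likelihood(cl_theory, cl_observed, ell):
    """Gaussian log-likelihood of observed band powers given a theory spectrum."""
    sigma = np.sqrt(2.0 / (2.0 * ell + 1.0)) * cl_theory
    return -0.5 * np.sum(((cl_observed - cl_theory) / sigma) ** 2)

ell = np.arange(30, 2000)                  # small scales, where this form is used
cl_theory = 1000.0 / (ell * (ell + 1.0))   # made-up spectrum, for illustration only

# Simulate "observed" band powers scattered by cosmic variance
rng = np.random.default_rng(0)
cl_obs = cl_theory * (1.0 + rng.normal(0.0, np.sqrt(2.0 / (2.0 * ell + 1.0))))

print(log_likelihood(cl_theory, cl_obs, ell))
```

On large scales (low ℓ) this Gaussian form breaks down, which is one reason a more complicated likelihood is needed there.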

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us in km/s as it gets further away, measured in megaparsecs (Mpc)); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
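The size of that discrepancy can be estimated directly from the two numbers quoted above, naively assuming independent Gaussian uncertainties (a back-of-the-envelope combination, not either collaboration's own analysis):

```python
import math

# Hubble-constant values quoted above, in km/s/Mpc
h0_local, err_local = 73.52, 1.62    # local distance-ladder measurement
h0_planck, err_planck = 67.27, 0.60  # Planck, inferred within a constrained model

# Tension in units of the combined standard error
tension = (h0_local - h0_planck) / math.hypot(err_local, err_planck)
print(f"{tension:.1f} sigma")  # about 3.6 sigma
```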

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist.

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC).

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.  IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
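As a rough sanity check (my own toy estimate, assuming a simple Poisson background of about 6.5 expected events, not the collaboration’s full likelihood analysis), one can ask how unlikely roughly 20 events would be by pure chance:

```python
import math

# Toy estimate of the 2014-2015 flare significance: probability of seeing
# 20 or more neutrinos when only ~6.5 are expected by random chance,
# assuming a simple Poisson background.

def poisson_tail(n_obs, mean):
    """P(N >= n_obs) for a Poisson distribution with the given mean."""
    p_below = sum(math.exp(-mean) * mean**k / math.factorial(k)
                  for k in range(n_obs))
    return 1.0 - p_below

p = poisson_tail(20, 6.5)
print(f"P(N >= 20 | mean 6.5) ~ {p:.1e}")
```

A tail probability at the 10^-5 level corresponds to roughly a 4-sigma fluctuation, consistent with calling this strong evidence rather than a coincidence.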

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend a few words about it. The big CERN collaborations presented their latest results. I think the most relevant of these is the evidence ($3\sigma$) that the Standard Model is at odds with the measurement of the spin correlation in top-antitop quark pairs. More detail is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are pushing their precision further. For the signal strength they give the following results. For ATLAS (see here)

$\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})$

and CMS (see here)

$\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).$

The news is that the errors have shrunk and the two results agree. Both show a small upward tension, 13% and 17% respectively, but the overall result is consistent with the Standard Model.
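As a back-of-envelope check (my own naive treatment: error components added in quadrature, asymmetric errors symmetrized, correlations ignored), the quoted deviations from $\mu=1$ come out well below $2\sigma$:

```python
import math

# Naive cross-check of the quoted "small tension": combine the quoted
# error components in quadrature (asymmetric errors symmetrized,
# correlations ignored) and express the deviation of mu from 1 in sigma.

def deviation_sigma(mu, errors):
    """(mu - 1) in units of the quadrature sum of the error components."""
    total_err = math.sqrt(sum(e * e for e in errors))
    return (mu - 1.0) / total_err

atlas = deviation_sigma(1.13, [0.05, 0.05, 0.05, 0.03])  # stat, exp, sig.th, bkg.th
cms   = deviation_sigma(1.17, [0.06, 0.06, 0.06])        # stat, sig.th, other syst
print(f"ATLAS: {atlas:.1f} sigma   CMS: {cms:.1f} sigma")
```

Both deviations land between 1 and 2 sigma, consistent with calling them a small tension rather than a discrepancy.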

When the overall signal strength is unpacked into the contributions from the different processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented results from $35.9{\rm fb}^{-1}$ of data, so there is no significant improvement, for the moment, with respect to this year's Moriond conference. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are not too different for ATLAS: in this case they observe some tensions, but these are all below $2\sigma$ (see here). For the WW decay, ATLAS does not see anything above $1\sigma$ (see here).

So, although there is something to keep an eye on as the data grow, reaching $100 {\rm fb}^{-1}$ this year, the Standard Model is in good health in the Higgs sector, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the tt pair is absolutely promising, and we should hope it will be confirmed as a discovery.

July 04, 2018

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world,  particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as ‘Boyle’s Theory of Knowledge’ by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; ‘The How, Who & What of Scientific Discovery’ by Paul Strathern, author of a great many books on scientists and philosophers, such as the well-known Philosophers in 90 Minutes series; ‘Scientific Enquiry and Brain State: Understanding the Nature of Knowledge’ by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and ‘The Promise and Peril of Big Data’ by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk ‘Science Denial: will any knowledge do?’ The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

Images from the garden party in the grounds of Lismore Castle

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
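In symbols, the relation being exploited is the standard Rydberg formula, with the electron mass rewritten via the cesium mass (a schematic sketch; factors of c and unit conventions vary between references):

```latex
R_\infty = \frac{\alpha^2 m_e c}{2h}
\quad\Longrightarrow\quad
\alpha^2 = \frac{2 R_\infty}{c} \cdot \frac{h}{m_{\mathrm{Cs}}} \cdot \frac{m_{\mathrm{Cs}}}{m_e}
```

so an interferometric measurement of h/m_Cs, combined with the precisely known Rydberg constant and mass ratios, pins down α.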
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes  α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
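Schematically (a textbook sketch; the α/2π term is Schwinger's one-loop correction, and the dots stand for the higher-loop contributions discussed below):

```latex
g_e = 2\,(1 + a_e),
\qquad
a_e = \frac{\alpha}{2\pi} - 0.3285\ldots\left(\frac{\alpha}{\pi}\right)^{2} + \dots
```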
Experimentally, g_e is one of the most precisely determined quantities in physics, with the most recent measurement quoting a_e = 0.00115965218073(28), that is 0.0001 ppb accuracy on g_e, or 0.2 ppb accuracy on a_e. In the Standard Model, g_e is calculable as a function of α and other parameters. In the classical approximation g_e = 2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty in the Standard Model prediction of g_e is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on a_e down to 0.2 ppb:  a_e = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 on the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which g_e is calculated, and could shift the observed value of a_e away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of a_e beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:
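The quoted 2.5 sigma can be reproduced naively from the two a_e values given above (a rough check: errors combined in quadrature, correlations and asymmetries ignored):

```python
# Naive reproduction of the quoted electron g-2 discrepancy: difference
# of the measured and predicted a_e, divided by the quadrature sum of
# their errors (correlations ignored).

a_exp, err_exp = 0.00115965218073, 28e-14  # measured value of a_e
a_th,  err_th  = 0.00115965218161, 23e-14  # SM prediction using Berkeley alpha

sigma = abs(a_exp - a_th) / (err_exp ** 2 + err_th ** 2) ** 0.5
print(f"electron g-2 discrepancy: {sigma:.1f} sigma")
```

The result comes out close to the 2.5 sigma quoted in the text.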

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model that fits both of them at the same time.  Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the a_e measurement well, which favors a new negative contribution. In fact, the a_e measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data either, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor non-universal interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still-lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect ? Of course - and it's not that difficult.
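For flavor, here is a back-of-envelope version of that calculation (my own sketch, with rough radii, albedos, and typical moon-to-moon distances; the actual geometry on any given night will differ):

```python
import math

# Back-of-envelope: how bright is an eclipsed Io, lit only by the other
# Galilean moons?  Each moon reflects sunlight with its albedo; treating
# the reflected light as re-emitted isotropically, the illumination it
# provides at Io, relative to direct sunlight, is albedo * (R/d)^2 / 4.

# (radius in km, rough albedo, typical distance to Io in km)
moons = {
    "Europa":   (1561, 0.67, 4.0e5),
    "Ganymede": (2634, 0.43, 6.5e5),
    "Callisto": (2410, 0.22, 1.5e6),
}

def illumination_ratio(radius_km, albedo, dist_km):
    """Illumination at Io from one moon, as a fraction of direct sunlight."""
    return albedo * (radius_km / dist_km) ** 2 / 4.0

total = sum(illumination_ratio(*v) for v in moons.values())
dmag = -2.5 * math.log10(total)  # magnitudes fainter than the sunlit moon
print(f"relative illumination: {total:.2e}  (~{dmag:.1f} mag fainter)")
```

With these illustrative numbers the eclipsed moon ends up more than a dozen magnitudes fainter than its sunlit self - not completely dark, but a challenging target even for a large dobsonian.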

June 12, 2018

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: that there is a very intriguing structure to the Higgs and the W/Z bosons, and that what we observe in the experiments is actually more complicated than what we usually assume. That these are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics, and the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying, more formal theoretical foundation seriously. The reason that there is no huge clash is that the standard model is very special: because of this, both pictures give almost the same prediction for experiments. This can also be understood quantitatively; that is what I have written a review about. It can be imagined in this way:

Thus, the actual particle which we observe and call the Higgs is a complicated object made from two Higgs particles. However, one of them is so much eclipsed by the other that it looks like just a single one, plus a very tiny correction.

So far, this does not seem to be something one needs to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain these reasons much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model: that there the composite nature makes huge differences for experiments. This is what came out of our numerical simulations. Of course, these are not perfect. And, after all, we have unfortunately not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems a bit far-fetched, even if our numerical simulations, imperfect as they are, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So the ideas also make a statement that even within the standard model there should be a difference. The only question is, how large is this 'little bit' really? So far, experiments have not shown any deviations from the usual picture, so the 'little bit' needs indeed to be rather small. But we have a calculation prescription for this 'little bit' in the standard model. So, at the very least, we can calculate this 'little bit' within the standard model. We should then see whether its value may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, it would raise a lot of questions about the basic theory, but, well, experiment rules. We would then need to go back to the drawing board and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it will not be possible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. They will also not yield perfect results, but hopefully good enough ones. Moreover, how simple the calculations are depends strongly on the type of experiment. We did a first few steps, though for a type of experiment not (yet) available - hopefully it will be in about twenty years. There we saw that not only the type of experiment but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. For that, we still need a much better understanding of the underlying mathematics, which we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students, for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: this is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have got an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: great! You have just rewritten our understanding of nature. If not: well, go back to fix it, or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But of those we are, at this stage, a little short. That may change again. And if your theory has no predictions which can be tested experimentally in any foreseeable future - well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations.

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations.

June 09, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
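The kinematic reason for this blind spot can be sketched in a few lines: the maximum recoil energy a DM particle can deposit on a nucleus of mass m_N is E_max = 2 mu^2 v^2 / m_N, with mu the reduced mass and v ~ 10^-3 c the typical halo velocity (illustrative numbers of my own; a keV-scale detector threshold is assumed):

```python
# Rough kinematics of elastic DM-nucleus scattering: the maximum nuclear
# recoil energy is E_max = 2 mu^2 v^2 / m_N (mu = reduced mass), for a
# halo velocity v ~ 10^-3 c.  Xenon nucleus mass taken as ~122 GeV (A~131).

def max_recoil_keV(m_dm_GeV, m_N_GeV=122.0, v=1e-3):
    """Maximum recoil energy (keV) deposited on a nucleus of mass m_N."""
    mu = m_dm_GeV * m_N_GeV / (m_dm_GeV + m_N_GeV)  # reduced mass, GeV
    return 2 * mu**2 * v**2 / m_N_GeV * 1e6         # GeV -> keV

for m in (100.0, 1.0, 0.1):  # DM masses in GeV
    print(f"m_dm = {m:6.1f} GeV -> E_max = {max_recoil_keV(m):8.3f} keV")
```

A 100 GeV WIMP can deposit tens of keV, comfortably above a keV-scale threshold, while a 100 MeV particle deposits well under an eV on a xenon nucleus - hence the blindness of standard nuclear-recoil searches to sub-GeV dark matter.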

Sometimes progress consists in realizing that you know nothing Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problems of the ΛCDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles.

It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 collaboration recast its own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV energy depositions, which lets them extend the search to lower dark matter masses and set novel limits in the virgin territory between 0.5 MeV and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.

Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, the motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about a de Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0, ±1, ±2), unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with similar strength as the usual ±2 polarization modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of the deflection of starlight by the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...
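For the record, the origin of that 25% can be sketched in a couple of lines (a schematic counting with standard conventions; overall factors depend on normalization):

```latex
% One-graviton exchange between two conserved sources T and T' (schematic):
\[
\mathcal{M}_{m=0}\propto\frac{1}{k^2}\Big(T_{\mu\nu}T'^{\mu\nu}-\tfrac{1}{2}\,T\,T'\Big),
\qquad
\mathcal{M}_{m\to 0}\propto\frac{1}{k^2}\Big(T_{\mu\nu}T'^{\mu\nu}-\tfrac{1}{3}\,T\,T'\Big).
\]
% For static sources only T_{00} matters, so the brackets give 1/2 vs 2/3;
% matching Newton's law in the massive theory therefore rescales G by 3/4.
% Light has a traceless stress tensor (T'=0), so its coupling keeps the
% uncompensated factor 3/4: light bending comes out 25% smaller than in GR.
```

The 1/2 vs 1/3 trace structure is exactly the vDVZ discontinuity: it survives the m → 0 limit because the polarization-0 mode never decouples.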

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl～10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, the cutoff is 𝞚max ～ (m^2 MPl)^(1/3), which for the maximal allowed graviton mass corresponds to roughly (300 km)^-1.

So massive gravity in its usual form cannot be used at distance scales shorter than ～300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table-top experiments, it is relevant for the movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
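The ～300 km figure is easy to reproduce with a back-of-the-envelope script (a sketch: I take the reduced Planck mass, a graviton mass saturating the experimental bound, and ignore order-one factors):

```python
# Order-of-magnitude check of the massive-gravity cutoff scale,
# Lambda_max ~ (m^2 * M_Pl)^(1/3), and the distance scale it corresponds to.
# All O(1) factors are dropped; numbers are illustrative assumptions.

M_PL_EV = 2.4e27        # reduced Planck mass, ~2.4*10^18 GeV, in eV
M_GRAVITON_EV = 1e-32   # graviton mass near the experimental bound, in eV
HBARC_M_EV = 1.97e-7    # hbar*c in meters*eV, converts 1/eV to meters

lambda_max_ev = (M_GRAVITON_EV**2 * M_PL_EV) ** (1.0 / 3.0)
length_scale_m = HBARC_M_EV / lambda_max_ev  # scale below which the EFT fails

print(f"cutoff ~ {lambda_max_ev:.1e} eV -> length scale ~ {length_scale_m:.1e} m")
```

The result lands in the few-hundred-km ballpark, in line with the quote above.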

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are well suited to constraining, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^(1/3) 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically, and where the graviton mass satisfies the experimental upper limit m～10^−32 eV. This implies g* ≲ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ～1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competitor to, say, Newton. Dead for the second time.

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture in which the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations, which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.
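As a quick sanity check, the quoted splittings do imply sub-0.1 eV masses. Assuming normal ordering and a (nearly) massless lightest state - an assumption made only for this estimate - the two heavier masses follow from square roots of the splittings:

```python
import math

# Minimal neutrino masses implied by the measured mass-squared splittings,
# assuming normal ordering and a massless lightest state (illustrative only).
DM12_SQ = 7.5e-5   # eV^2
DM13_SQ = 2.5e-3   # eV^2

m1 = 0.0
m2 = math.sqrt(DM12_SQ)   # ~0.009 eV
m3 = math.sqrt(DM13_SQ)   # ~0.05 eV
minimal_sum = m1 + m2 + m3

print(f"m2 ~ {m2:.3f} eV, m3 ~ {m3:.3f} eV, minimal sum ~ {minimal_sum:.3f} eV")
```

The minimal sum, about 0.06 eV, sits comfortably below the cosmological bound of 0.2 eV quoted above.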

This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ → νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ → νe oscillations should be unobservable in short-baseline (L ≲ 1 km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E ~ 1 GeV is sent to a detector at a distance L ~ 500 meters. In general, neutrinos can change their flavor with a probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
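The L/E argument can be made concrete with a few lines of code. MiniBooNE's L ~ 0.5 km and E ~ 1 GeV are quoted above; the LSND baseline and energy below are typical values I assume for illustration:

```python
import math

# Two-flavor short-baseline appearance probability, in experimentalists' units:
# P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
def appearance_prob(sin2_2theta, dm2_ev2, l_km, e_gev):
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

le_miniboone = 0.5 / 1.0    # km/GeV, from the numbers in the post
le_lsnd = 0.03 / 0.05       # km/GeV, assuming L ~ 30 m, E ~ 50 MeV

# Similar L/E means the same dm2 gives a similar oscillation phase,
# which is why MiniBooNE could test the LSND claim at all.
print(le_miniboone, le_lsnd)
```

For any fixed Δm^2, the oscillation phase 1.27 Δm^2 L/E is then nearly the same in both experiments.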

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with the mass in the eV ballpark, in which case MiniBooNE would be observing the νμ → νs → νe oscillation chain. With the recent MiniBooNE update the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

P(νμ → νe) ≈ 4 |Uμ4|^2 |Ue4|^2 sin^2(Δm41^2 L/4E) ≡ sin^2(2θμe) sin^2(Δm41^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,

P(νμ → νμ) ≈ 1 − 4 |Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm41^2 L/4E).

The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while |Ue4| ≲ 0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θμe) controlling the νμ → νs → νe oscillation must be much smaller than the ~0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
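The arithmetic behind this tension is simple enough to spell out. In the 3+1 scheme the effective appearance amplitude is sin^2(2θμe) = 4 |Ue4|^2 |Uμ4|^2, so the disappearance limits directly cap the possible MiniBooNE signal:

```python
# Cap on the 3+1 appearance amplitude implied by disappearance limits:
# sin^2(2*theta_mue) = 4 * |Ue4|^2 * |Umu4|^2
U_MU4_MAX = 0.1    # from nu_mu disappearance (MINOS, IceCube)
U_E4_MAX = 0.25    # from solar neutrino observations

sin2_2theta_max = 4 * U_E4_MAX**2 * U_MU4_MAX**2
print(sin2_2theta_max)   # ~0.0025, well below the ~0.01 MiniBooNE prefers
```

A factor of four below the value the appearance fit wants, and that is taking both matrix elements at their upper limits simultaneously.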

So the hypothesis of a 4th, sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. That does not mean there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program at Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.

To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.
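The unit zoo here is easy to get wrong, so a tiny conversion check (using only the standard definitions: 1 barn = 10^-24 cm^2, plus the yocto and pico prefixes):

```python
# Cross-section unit bookkeeping for the XENON1T limit.
BARN_IN_CM2 = 1e-24       # 1 barn = 1e-24 cm^2 (standard definition)
YOCTO, PICO = 1e-24, 1e-12

yb_in_cm2 = YOCTO * BARN_IN_CM2     # 1 yb in cm^2 (should be 1e-48)
yb_in_pb = YOCTO / PICO             # 1 yb in picobarn (should be 1e-12)
limit_cm2 = 40 * yb_in_cm2          # the quoted 40 yb limit, in cm^2

print(yb_in_cm2, yb_in_pb, limit_cm2)
```

So the 40 yb limit quoted above is about 4*10^-47 cm^2.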

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as neutrinos do, that is by exchanging a Z boson, they would have been found in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is a part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next-generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that there is no prefix smaller than yocto: it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick above the neutrino sea?

Tommaso Dorigo - Scientificblogging

MiniBooNE Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out on what really is the matter with them. And a new result by the MiniBooNE collaboration is stirring the waters once more.

May 26, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator.

Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.

University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here). To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes – not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interaction with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

May 21, 2018

Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install correcting lenses in the early 1990s.

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

May 14, 2018

Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

May 10, 2018

Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!