# Particle Physics Planet

## August 16, 2018

### Christian P. Robert - xi'an's og

a free press needs you [reposted]

“Criticizing the news media — for underplaying or overplaying stories, for getting something wrong — is entirely right. News reporters and editors are human, and make mistakes. Correcting them is core to our job. But insisting that truths you don’t like are “fake news” is dangerous to the lifeblood of democracy. And calling journalists the “enemy of the people” is dangerous, period.”

### Peter Coles - In the Dark

The Problem of the Moving Triangle

I found this nice geometric puzzle a few days ago on Twitter. It’s not too hard, but I thought I’d put it in the ‘Cute Problems’ folder.

In the above diagram, the small equilateral triangle moves about inside the larger one in such a way that it keeps the orientation shown. What can you say about the sum a+b+c?

## August 15, 2018

### Lubos Motl - string vacua and pheno

Deep thinkers build conjectures upon conjectures upon 5+ more floors
Among the world's string theorists, Sheldon Cooper has given the most accurate evaluation (as far as I can say) of the critics of string theory:
While I have no respect for Leslie [Winkle, a subpar scientist designed to resemble a hybrid of Sabine Hossenfelder and Lee Smolin] as a scientist or a human being for that matter we have to concede her undeniable expertise in the interrelated fields of promiscuity and general sluttiness.
Not even Edward Witten has ever put it this crisply. Winkle has rightfully thanked Sheldon for that praise. Well, I also don't have any respect for the string theory haters as scientists or human beings, for that matter. But I am regularly reminded that the disagreement is much deeper than different opinions about some technical questions. It's a disagreement about the basic ethical and value system.

Many stupid things have been written by journalists and by the string theory haters – the difference between these two groups is often tiny – as reactions to the controversies among string theorists concerning the cosmological constant or quintessence. Most of these stupid proclamations have been discussed dozens of times on this blog, and it's boring to discuss the same stupidities all the time.

But there's one relatively new slogan that has apparently become popular among these individuals. Not Even Wrong, a leading website of the crackpots, has released its 10460th rant:
Theorists with a Swamp, not a Theory
Here you have the slogan in four variations:
Will Kinney: The landscape is a conjecture. The “swampland” is a conjecture built on a conjecture.

Sabine Hossenfelder: The landscape itself is already a conjecture build [sic] on a conjecture, the latter being strings to begin with. So: conjecture strings, then conjecture the landscape (so you don’t have to admit the theory isn’t unique), then conjecture the swampland because it’s still not working.

Lars: “It’s conjectures all the way down.” Conjecture built on guess / In turn that’s built on hunch / The latter really rests / On inference a bunch

Peter Woit: The problem is that you don’t know what the relevant string theory equations are. So, this is a conjecture about a conjecture: / First conjecture: There is a well-defined theory satisfying a certain list of properties. / Second conjecture: The equations of this unknown theory do or don’t have certain specific properties.
The sudden explosion of this meme shows many things. The first thing is that they are just talking heads who mindlessly parrot slogans they just heard from other members of that echo chamber. But the content of the slogan is more damning.

You know: All of these individuals clearly have a severe psychological problem with uncertainty and with mental constructions built on uncertain assumptions. But this building upon uncertain starting points is what science is all about! And the more advanced and the deeper the science – and especially modern theoretical physics – is, the more floors built on top of each other the skyscraper of knowledge has.

Scientists try to connect the bricks as tightly as they can – and they wrestle with the uncertainty as much as possible. But the fact is that it's almost never possible to eliminate all uncertainty about important physical questions. Does it mean that physicists should give up?

This qualitative lesson isn't something I only converged to around the time I was given a PhD or anything like that. The excitement about the ability to build very tall skyscrapers from the metaphorical (intellectual) bricks was something that I already experienced when I was 3 years old – and even more so when I was an older kid.

For creative children. Czech product. Designed with the help of kids. "Finally we can build a proper castle, what do you say, bro?"

The people who aren't thrilled with this construction of tall buildings made out of uncertain ideas are simply not curious people. They're closer to pigs than to theoretical physicists. But the likes of Peter Woit not only fail to be thrilled. They are openly – and, as you can check, hysterically – hostile towards these key drivers of all of modern science: the curiosity and the desire to have a chance to see the grand structures of ideas underlying the Universe in their full glory, with their actual complex relationships. So they're on the opposite side of the pigs from the theoretical physicists. What is the proper name of the creatures whose coordinates may be written as $\vec x_{\rm Šmoits} = 2 \vec x_{\rm Pig} - \vec x_{\text{theoretical physicist}}$? Oh, I see. The name of the anti-theoretical-physicists, relative to the pigs at the origin, is Šmoits!

But seriously, this is something so essential that I could never accept it, never tolerate the environment of immoral, intellectually dead, uncurious, arrogant morons such as Woit and Hossenfelder. A nation where these kinds of creatures are allowed to influence the public discourse vis-à-vis science is fudged up and deserves to go extinct as soon as possible.

You know, I only "discovered" Richard Feynman when I was 17 or so. But such basic things as the ability to "live with the uncertainty" and to "not give up the desire to figure things out because of the uncertainty" were, as I mentioned, something much older. Feynman described a true scientist's relationship to doubt and uncertainty in the BBC program:
I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things. But I am not absolutely sure of anything. And there are many things I know nothing about. But I don't have to know any answers. I don't feel frightened by not knowing things, by being lost in the mysterious Universe without having any purpose – which is the way it really is, as far as I can tell, possibly. It doesn't frighten me.
Amen to that. Feynman was really comparing his – a scientist's – view of uncertainty with the attitude of non-scientists such as religious people. You can't really be a scientist if you have a serious psychological problem with uncertainty, doubt, and not knowing things. You can't really do theoretical physics if you find it absurd or impossible to build additional floors on top of assumptions that aren't quite certain – because, as Feynman mentioned, nothing is really quite certain in natural science.

And the non-scientists who believe that they may be certain about something are almost universally wrong. They just "totally" believe in wrong things about Nature. Science is the only reliable method to converge towards the most accurate answers, but it still can't eliminate the uncertainty altogether. In fact, science's admission that it doesn't eliminate the uncertainty about all big questions – a humility of the scientific method, you might say – is one of the key reasons why science works so much better than the alternative, "pompous" methods of finding the truth (such as the organized religions).

So the existence of the huge number of de Sitter vacua wasn't ever quite certain and it is still not certain. But people had to build on it, elaborate on the assumptions, try to get as far as they can, find the complicated chains of implications that are nevertheless almost certain. The same holds for other scenarios that are reasonably possible such as the Vafa Team's quintessence picture. Science simply is about the constant formulation of conjectures and hypotheses – and conjectures based upon conjectures; conjectures based upon conjectures based upon conjectures (the Šmoits couldn't even envision such a complicated thing but the actual theoretical physicists have to deal with constructions of this sort and even analogous constructions with 5+ floors – and not only talk about these chains).

If you have a problem with the absence of perfect certainty, and you just can't build new ideas in such a situation, if you can't focus on thinking about what some (uncertain) assumptions, hypotheses, conjectures, or axioms imply, it simply means that you can't be a theoretical physicist. Every person who has this psychological problem with building on uncertain axioms but who pretends to be a scientist is a 100% fraud, and every person who believes this fraudster's claim that she is a scientist is a complete moroness. (I introduced some affirmative action so that the [never] fudged up feminists can't complain.) They totally suck as thinkers. And if you start to brag about this fatal intellectual defect of yours, others in your environment should appreciate that you're as far from a pig as theoretical physicists are – but on the opposite side of the pig from the theoretical physicists. Given these assumptions, you're a Šmoit, a pile of junk.

You need to be treated as junk, otherwise there is something profoundly wrong about the whole society that harbors this junk.

### Peter Coles - In the Dark

Say A Little Prayer

Aretha Franklin, the Queen of Soul, is gravely ill. You could do worse than say a little prayer for her…

Update: I’ve just heard the sad news that Aretha Franklin died today (Thursday 16th August 2018) at her home.

R.I.P. the Queen of Soul.

### John Baez - Azimuth

Open Petri Nets

Jade Master and I have nearly finished a paper on open Petri nets, and it should appear on the arXiv soon. I’m excited about this, especially because our friends at Statebox are planning to use open Petri nets in their software. They’ve recently come out with a paper too:

• Fabrizio Romano Genovese and Jelle Herold, Executions in (semi-)integer Petri nets are compact closed categories.

Petri nets are widely used to model open systems in subjects ranging from computer science to chemistry. There are various kinds of Petri net, and various ways to make them ‘open’, and my paper with Jade only handles the simplest. But our techniques are flexible, so they can be generalized.

What’s an open Petri net? For us, it’s a thing like this:

The yellow circles are called ‘places’ (or in chemistry, ‘species’). The aqua rectangles are called ‘transitions’ (or in chemistry, ‘reactions’). There can in general be lots of places and lots of transitions. The bold arrows from places to transitions and from transitions to places complete the structure of a Petri net. There are also arbitrary functions from sets $X$ and $Y$ into the set of places. This makes our Petri net into an ‘open’ Petri net.
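Operationally, a Petri net runs by ‘firing’ transitions: a transition consumes one token from each of its input places and produces one at each of its output places. Here is a minimal sketch of that firing rule in Python (my own toy encoding, not code from the paper), using the chemistry reading of places as species and transitions as reactions:

```python
from collections import Counter

def fire(marking, consumed, produced):
    """Fire one Petri-net transition: consume a token from each input place,
    produce a token at each output place. A marking is a multiset of tokens."""
    need = Counter(consumed)
    assert all(marking[p] >= n for p, n in need.items()), "transition not enabled"
    return marking - need + Counter(produced)

# The chemistry reading: the reaction 2 H2 + O2 -> 2 H2O as a transition.
m = Counter({'H2': 2, 'O2': 1})
m = fire(m, ['H2', 'H2', 'O2'], ['H2O', 'H2O'])
print(m)   # Counter({'H2O': 2})
```

Representing markings as multisets (`Counter`) makes both the enabledness check and the update a single line.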

We can think of open Petri nets as morphisms between finite sets. There’s a way to compose them! Suppose we have an open Petri net $P$ from $X$ to $Y,$ where now I’ve given names to the points in these sets:

We write this as $P \colon X \nrightarrow Y$ for short, where the funky arrow reminds us this isn’t a function between sets. Given another open Petri net $Q \colon Y \nrightarrow Z,$ for example this:

the first step in composing $P$ and $Q$ is to put the pictures together:

At this point, if we ignore the sets $X,Y,Z,$ we have a new Petri net whose set of places is the disjoint union of those for $P$ and $Q.$

The second step is to identify a place of $P$ with a place of $Q$ whenever both are images of the same point in $Y$. We can then stop drawing everything involving $Y,$ and get an open Petri net $QP \colon X \nrightarrow Z,$ which looks like this:

Formalizing this simple construction leads us into a bit of higher category theory. The process of taking the disjoint union of two sets of places and then quotienting by an equivalence relation is a pushout. Pushouts are defined only up to canonical isomorphism: for example, the place labeled $C$ in the last diagram above could equally well have been labeled $D$ or $E.$ This is why to get a category, with composition strictly associative, we need to use isomorphism classes of open Petri nets as morphisms. But there are advantages to avoiding this and working with open Petri nets themselves. Basically, it’s better to work with things than mere isomorphism classes of things! If we do this, we obtain not a category but a bicategory with open Petri nets as morphisms.
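To make the pushout concrete, here is a minimal Python sketch of composition (my own toy encoding, not the formalization in the paper): take the disjoint union of the two sets of places and then identify the two images of each point of $Y$, here via a simple union-find.

```python
from dataclasses import dataclass

@dataclass
class OpenPetriNet:
    """A toy open Petri net P: X -/-> Y.

    `arcs` are (source, target) pairs between places and transitions;
    `inputs` and `outputs` are the boundary maps X -> places and
    Y -> places, written as dicts."""
    places: frozenset
    transitions: frozenset
    arcs: frozenset
    inputs: dict
    outputs: dict

def compose(p, q):
    """Compose P: X -/-> Y with Q: Y -/-> Z via the pushout over Y:
    take the disjoint union of the places (tagging each with its owner)
    and identify the two images of every boundary point y in Y."""
    assert set(p.outputs) == set(q.inputs), "shared boundary Y must match"

    def tag(owner, x):
        return (owner, x)

    # Union-find over the tagged (hence disjoint) places.
    parent = {tag('P', pl): tag('P', pl) for pl in p.places}
    parent.update({tag('Q', pl): tag('Q', pl) for pl in q.places})

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for y in p.outputs:  # glue P's image of y to Q's image of y
        parent[find(tag('P', p.outputs[y]))] = find(tag('Q', q.inputs[y]))

    def relabel(owner, net):
        # Places go to their equivalence class; transitions are just tagged.
        return lambda x: find(tag(owner, x)) if x in net.places else tag(owner, x)

    fp, fq = relabel('P', p), relabel('Q', q)
    return OpenPetriNet(
        places=frozenset(find(x) for x in parent),
        transitions=frozenset(tag('P', t) for t in p.transitions)
                  | frozenset(tag('Q', t) for t in q.transitions),
        arcs=frozenset((fp(a), fp(b)) for a, b in p.arcs)
           | frozenset((fq(a), fq(b)) for a, b in q.arcs),
        inputs={x: fp(pl) for x, pl in p.inputs.items()},
        outputs={z: fq(pl) for z, pl in q.outputs.items()},
    )

# P: a transition τ from place A to place B, with boundary maps 1 ↦ A and 1 ↦ B.
P = OpenPetriNet(frozenset({'A', 'B'}), frozenset({'τ'}),
                 frozenset({('A', 'τ'), ('τ', 'B')}), {1: 'A'}, {1: 'B'})
# Q: a transition σ from place C to place D.
Q = OpenPetriNet(frozenset({'C', 'D'}), frozenset({'σ'}),
                 frozenset({('C', 'σ'), ('σ', 'D')}), {1: 'C'}, {1: 'D'})

QP = compose(P, Q)        # B and C become a single place
print(len(QP.places))     # 3
```

Note that the glued place ends up with an arbitrary representative – exactly the up-to-canonical-isomorphism ambiguity of pushouts described above.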

However, this bicategory is equipped with more structure. Besides composing open Petri nets, we can also ‘tensor’ them via disjoint union: this describes Petri nets being run in parallel rather than in series. The result is a symmetric monoidal bicategory. Unfortunately, the axioms for a symmetric monoidal bicategory are cumbersome to check directly. Double categories turn out to be more convenient.

Double categories were introduced in the 1960s by Charles Ehresmann. More recently they have found their way into applied mathematics. They have been used to study various things, including open dynamical systems:

• Eugene Lerman and David Spivak, An algebra of open continuous time dynamical systems and networks.

open electrical circuits and chemical reaction networks:

• Kenny Courser, A bicategory of decorated cospans, Theory and Applications of Categories 32 (2017), 995–1027.

open discrete-time Markov chains:

• Florence Clerc, Harrison Humphrey and P. Panangaden, Bicategories of Markov processes, in Models, Algorithms, Logics and Tools, Lecture Notes in Computer Science 10460, Springer, Berlin, 2017, pp. 112–124.

and coarse-graining for open continuous-time Markov chains:

• John Baez and Kenny Courser, Coarse-graining open Markov processes. (Blog article here.)

As noted by Shulman, the easiest way to get a symmetric monoidal bicategory is often to first construct a symmetric monoidal double category:

• Mike Shulman, Constructing symmetric monoidal bicategories.

The theory of ‘structured cospans’ gives a systematic way to build symmetric monoidal double categories—Kenny Courser and I are writing a paper on this—and Jade and I use this to construct the symmetric monoidal double category of open Petri nets.

A 2-morphism in a double category can be drawn as a square like this:

We call $X_1,X_2,Y_1$ and $Y_2$ ‘objects’, $f$ and $g$ ‘vertical 1-morphisms’, $M$ and $N$ ‘horizontal 1-cells’, and $\alpha$ a ‘2-morphism’. We can compose vertical 1-morphisms to get new vertical 1-morphisms and compose horizontal 1-cells to get new horizontal 1-cells. We can compose the 2-morphisms in two ways: horizontally and vertically. (This is just a quick sketch of the ideas, not the full definition.)

In our paper, Jade and I start by constructing a symmetric monoidal double category $\mathbb{O}\mathbf{pen}(\textrm{Petri})$ with:

• sets $X, Y, Z, \dots$ as objects,

• functions $f \colon X \to Y$ as vertical 1-morphisms,

• open Petri nets $P \colon X \nrightarrow Y$ as horizontal 1-cells,

• morphisms between open Petri nets as 2-morphisms.

(Since composition of horizontal 1-cells is associative only up to an invertible 2-morphism, this is technically a pseudo double category.)

What are the morphisms between open Petri nets like? A simple example may help give a feel for this. There is a morphism from this open Petri net:

to this one:

mapping both primed and unprimed symbols to unprimed ones. This describes a process of ‘simplifying’ an open Petri net. There are also morphisms that include simple open Petri nets in more complicated ones, etc.

This is just the start. Our real goal is to study the semantics of open Petri nets: that is, how they actually describe processes! More on that later.

### Emily Lakdawalla - The Planetary Society Blog

Here are some recent postcards from Jupiter
Let's check in on NASA's Juno spacecraft, which completed its 14th close flyby of Jupiter last month.

### Peter Coles - In the Dark

Today’s the day in Ireland that students get the results of their school Leaving Certificate examinations and, on the other side of the Irish Sea, tomorrow is when A-level results come out. For many there will be joy at their success, and I particularly look forward to soon meeting those who made the grades to get into Maynooth University.

Others will no doubt receive some disappointing news.

For those of you who didn’t get the grades you needed or expected, I have one piece of very clear advice:

In particular, if you didn’t get the Leaving Certificate points you needed for entry to your first-choice university in Ireland, or the A-levels needed to do likewise in the United Kingdom, do not despair. There are always options.

For example, in Ireland you could look at the Available Courses list, where any places remaining unfilled in particular courses – after all offers have been made and the waiting lists of applicants meeting the minimum entry requirements have been exhausted – are advertised.

In the United Kingdom the Clearing system will kick into operation this week. It’s very well organized and student-friendly, so give it a go if you didn’t make your offer.

## August 14, 2018

### Christian P. Robert - xi'an's og

Handbook of Mixture Analysis [cover]

On the occasion of my talk at JSM2018, CRC Press sent me the cover of our forthcoming handbook on mixture analysis, courtesy of Rob Calver, who managed to get it to me on very short notice! We are about ready to send the manuscript to CRC Press and hopefully the volume will get published pretty soon. It would have been better to have it ready for JSM2018, but we editors got delayed by a few months for the usual reasons.

### Jon Butterworth - Life and Physics

Cartographic Errors

Despite my best efforts and those of several others, there are, inevitably, some errors in A Map of the Invisible/Atom Land. Apologies.

When they get spotted and reported, they get fixed in future editions. Where they might cause confusion to readers who have older editions, I am collecting them on this page. I’ll also add any notes or queries which come up and seem like they might be interesting, just as I did with Smashing Physics.

### ZapperZ - Physics and Physicists

MinutePhysics Special Relativity Chapter 8
If you missed Chapter 7 of this series, check it out here.

This time, the topic is on the ever-popular Twin Paradox (which really isn't a paradox since there is a logical explanation for it).

You can compare this explanation with the one given by Don Lincoln a while back. I find Don's video clearer, since I can comprehend the math.

Zz.

### CERN Bulletin

CERN Rocked at the Hardronic

As it does every summer, for one weekend the Prevessin site becomes a mecca for rock music, welcoming not only CERN staff members but also visitors.

The 2018 edition of the Hardronic Festival took place on 4 August in a relaxed atmosphere, despite the overwhelming heat – which was greatly mitigated by the welcome shade of the trees, an abundance of beverages and the food trucks on hand.

For this 27th edition, 13 rock and pop groups performed in succession on two stages, providing more than eight solid hours of music and attracting festival-goers young and old in large numbers, who danced and sang the night away.

The quality of the programming and organization allowed everyone to share a beautiful evening of relaxation and music.

This CERN-made festival has become a summer event not to be missed.

Sponsored by the Staff Association, the Hardronic Festival is organised by the Music Club and allows talented musicians to perform. It also highlights the talent to be found within the CERN MusiClub and in the neighbouring French and Swiss region.

The festival was professionally organised and held in a pleasant, easily accessible venue; it was nevertheless mandatory for external visitors to register prior to the event in order to gain access.

A big thank you to Arek and Django for the festival, and to all the volunteers and the technical team – they all did an excellent job.

Put it in your diaries for next year!

Hardronic Festival: http://hardronic.web.cern.ch/Hardronic/2018/

### Peter Coles - In the Dark

It was on this day 70 years ago (i.e. on 14th August 1948) that the great Australian batsman Sir Donald Bradman played his last Test innings, against England at the Oval. He didn’t know it would be his last knock, but Australia won the match by an innings so he never got to bat again. It was the last match of the five-match Ashes series that Australia won 4-0.

Bradman needed only to score four runs to finish with a Test batting average of 100, but he was out second ball to the legspinner Eric Hollies, for a duck, and his average was stuck on 99.94.
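The arithmetic behind the famous figure is worth a quick check, using Bradman's career totals of 6,996 Test runs and 70 dismissals (a batting average divides runs by dismissals, not by innings):

```python
# Bradman's career Test figures: 6,996 runs, dismissed 70 times.
runs, dismissals = 6996, 70

print(round(runs / dismissals, 2))   # 99.94
# Four more runs that day would have made it a round hundred:
print((runs + 4) / dismissals)       # 100.0
```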

Here’s a short video of Bradman’s last Test innings, featuring commentary by John Arlott:

Two things struck me when I watched this just now. One is that Norman Yardley’s decision to give Bradman three cheers at the start of his innings may have seemed very sporting at the time, but I’m sure it put the batsman off and I wonder if that was Yardley’s calculated intent?

The second striking thing is the poor state of the pitch, with huge footmarks clearly visible. Although Hollies was bowling round the wicket, presumably to exploit them, it’s not clear these played a role in Bradman’s dismissal. It looks to me as if he played a loose shot at a full delivery, probably a googly that turned a little. Nevertheless it is worth remembering that batsmen of Bradman’s era had to play on uncovered wickets. I won’t dwell on this point for fear of starting to sound like Geoffrey Boycott, but it does reinforce just how remarkable Bradman’s average really was. Add to that the fact that England had been bowled out on that strip in their first innings for just 52!

Eric Hollies may have been a good bowler, but his record with the bat was at the opposite extreme to Bradman, scoring a total of 37 runs in 13 Test matches, at an average of 5.28. His total of 1,673 runs in first-class matches was 650 fewer than his haul of wickets, and only once (in 1954) did he reach 30 in an innings. In fact, he did not reach 20 in any innings between 1946 and 1953, and equalled an all-time first-class record, between July 1948 and August 1950, of seventy-one consecutive innings without reaching double figures.

Although Australia won the Ashes convincingly in 1948, the Australian camp was not entirely harmonious. The tension therein largely originated in the fact that Bradman was a Protestant and there was a Catholic faction in the touring party that didn’t like him for essentially tribal reasons. Indeed, I’m told that some former Australian players in the Press Box burst out laughing when ‘The Don’ was out for a duck that day.

### Emily Lakdawalla - The Planetary Society Blog

The Venus controversy
A lack of new missions keeps scientists guessing on what shaped the planet’s surface.

### CERN Bulletin

CERN Staff Association and CERN Photo Club Summer Photography Competition

“Summer’s lease hath all too short a date” William Shakespeare

Photography is a wonderful medium to preserve our summer memories. If there is a summer photo you have taken and are particularly proud of, why not enter the CERN Staff Association and CERN Photo Club Summer Photography Competition? Take advantage of your holidays to send us your photos!

The competition is open to all Members of Personnel at CERN and Members of CERN Clubs and closes on 30 September 2018.

There are two categories:

• Photo(s) ‘Summer’ in general
• Photo(s) ‘Summer at CERN’

Entries should be submitted in JPEG digital format, at a resolution suitable for printing the image in A4 format, to photo.contest@cern.ch with the subject ‘CERN STAFF ASSOCIATION PHOTOGRAPHY COMPETITION 2018’, along with a title and a short description explaining the image and what it represents for you.

All participants in the competition agree to comply with the terms and conditions.

Numerous prizes kindly offered by CERN Clubs and the Staff Association are up for grabs!

Wishing you all a great holiday and a happy summer!

## August 13, 2018

### Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work; those of us who receive a financial portion of the award will be encouraged to contribute to it as well (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

### Emily Lakdawalla - The Planetary Society Blog

Space Policy & Advocacy Program Quarterly Report - July 2018
The Planetary Society's Space Policy and Advocacy team publishes quarterly reports on their activities, actions, priorities, and goals in service of their efforts to promote space science and exploration in Washington, D.C.

### Emily Lakdawalla - The Planetary Society Blog

Chandrayaan-2 launch delayed to 3 January 2019
Chandrayaan-2, expected to launch in October, will now be launching no earlier than 3 January 2019, with its lander and rover touching down in February.

### CERN Bulletin

Exhibition

E=mc² and Stars

Marizeth Baumgarten

From 3 to 14 September
CERN Meyrin, Main Building

A creator of emotions carried away by inspiration, the artist uses surrealism to express her creativity and to provoke feelings, emotions and reactions – pleasure, fantasy, dreams – and invites you to use your imagination.

Think, reflect and be part of the emotions that her work conveys.

Her works are full of movement, dynamism and lyrical symbolism – romantic, real or unreal, sometimes provocative and ambiguous.

In surrealism, as in art generally, there are no universal concepts.

Every age, every culture has its own.

Each artist expresses themselves from their own point of view.

In surrealism there are no limitations, barriers or rules; everything is permitted.

Her meticulous technique combines acrylic and oil.

For more information and access requests: staff.association@cern.ch | +41 22 767 28 19

### Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate when we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to give it an explanation. Added to this is the modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we do a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And this is where the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory, they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now the third problem enters: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis, and sometimes much more. Only after the calculation is complete do we know whether the observation and theory we chose were a good choice, because only then do we know whether the anticipated disagreement is really there. It may turn out that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration: such experiences are indeed inevitable. But fortunately it is not as bad as it sounds, and that has something to do with how we choose the observation (and the theory). This I have not yet specified, and just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. That is, our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers, and it will not always be right. But if our ideas are correct, it will usually work out. In fact, if our estimates were regularly wrong, this should prompt us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing ideas, and the process can actually be traced in our research. E.g., in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as possible with the amount of computing we had, which we reported in another paper. This gives us hope that we are on the right track.

So the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. They will be even more challenging to test, but it is possible. And so we continue the cycle.

### Peter Coles - In the Dark

The Mother of Civilisation Library Project

When I was packing books at my Cardiff residence last week I set aside a few I no longer needed. This morning I put them in a parcel which I took to the post office and sent to the Mother of Civilisation Library Project in Sindh (Pakistan).

In case you weren’t aware, the Mother of Civilization Library is a volunteer organisation in the Indus Valley around Sindh, in the southern part of Pakistan. Their project is to help and facilitate a libraries program in Sindh by collecting books. They contacted me a while ago about making a donation, and I’ve finally done it!

If you have any spare new or used books that you would like to send to the Library program, I’m sure they’d be thrilled to receive them! Your donation could do much to stimulate and encourage the growth of learning, especially among the young generation of students.

Rashid Anees Magsi, Project Manager, Mother of Civilization Library

Street: Sobho Khan Magsi,

Province: Sindh,

Postal Code: 76310,

Country: Pakistan

P. S. If you send a donation from the UK be sure to say that you are sending books – the cost is much lower if your parcel contains only books than if it contains other items of the same weight.

### CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

### CERN Bulletin

GAC-EPA

The GAC holds monthly one-to-one advice sessions on the last Tuesday of each month, except in July and December.

The next session will take place on:

Tuesday 28 August, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesdays 25 September, 30 October and 27 November 2018.

The Pensioners' Group sessions are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

### Jon Butterworth - Life and Physics

Life, Physics and Everything

When the Guardian’s science blog network closes, Life & Physics will have been here for eight years. Physics has come a long way in that time, but there is (as always) more to be done…

### Lubos Motl - string vacua and pheno

Search for ETs is more speculative than modern theoretical physics
Edwin has pointed out a new tirade against theoretical physics,
Theoretical Physics Is Pointless without Experimental Tests,
that Abraham Loeb published in the pages of Scientific American, which used to be an OK journal some 20 years ago. The title itself seems plagiarized from Deutsche Physik, or Aryan Physics – which may be considered ironic for Loeb, who was born in Israel. And in fact, like his German role models, Loeb indeed tries to mock Einstein as well – and blame his mistakes on the usage of thought experiments:
Einstein made great discoveries based on pure thought, but he also made mistakes. Only experiment and observation could determine which was which.

Albert Einstein is admired for pioneering the use of thought experiments as a tool for unraveling the truth about the physical reality. But we should keep in mind that he was wrong about the fundamental nature of quantum mechanics as well as the existence of gravitational waves and black holes...
Loeb earns a small, unimportant plus for acknowledging that Einstein was wrong on quantum mechanics. However, as an argument against theoretical physics based on thought experiments, and against the emphasis on patient and careful mental work in general, the sentences above are at most demagogic.

The fact that Einstein was wrong about quantum mechanics, gravitational waves, or black holes doesn't imply anything wrong about the usage of thought experiments or other parts of modern physics. There's just no way to credibly show such an implication. Other theorists used better thought experiments, thought about them more carefully, and some of them correctly figured out that quantum mechanics had to be right and that gravitational waves and black holes had to exist.

The true fathers of quantum mechanics, especially Werner Heisenberg, were really using Einstein's new approach based on thought experiments and principles, and just like Einstein, they carefully tried to remove the assumptions about physics that couldn't be operationally established (such as absolute simultaneity, killed by special relativity; and the objective existence of values of observables before an observation, killed by quantum mechanics).

Note that gravitational waves as well as black holes were detected many decades after their theoretical discovery. The theoretical discoveries followed almost directly from Einstein's equations. So Einstein's mistakes meant that he didn't trust (his) theory enough. It surely doesn't mean, and cannot mean, that Einstein trusted theories and theoretical methods too much. Because Loeb has drawn this wrong conclusion, it's quite strong evidence of a defect in Loeb's central processing unit.

The title may be interpreted in a way that makes sense. Experiments surely matter in science. But everything else that Loeb is saying is just wrong and illogical. In particular, Loeb wrote this bizarre paragraph about Galileo and timing:
Similar to the way physicians are obliged to take the Hippocratic Oath, physicists should take a “Galilean Oath,” in which they agree to gauge the value of theoretical conjectures in physics based on how well they are tested by experiments within their lifetime.
Well, I don't know how I could judge theories according to experiments that will be done after I die, after my lifetime. That's clearly impossible so this restriction is vacuous. On the other hand, is it OK to judge theories according to experiments that were done before our lifetimes or before physicists' careers?

You bet. Experimental or empirical facts that have been known for a long time are still experimental or empirical facts. In most cases, they may be repeated today, too. People often don't bother to repeat experiments that re-establish well-established truths. But these old empirical facts are still crucial for the work of every theorist. They are sufficient to determine lots of theoretical principles.

You know, it's correct to say that science is a dialogue between the scientist and Nature. But this is only true in the long run. It doesn't mean that every day or every year, both of them have to speak. If Nature doesn't want to speak, She has the right to stay silent. And She often stays silent even if you complained that She doesn't have the right. She ignores your restrictions on Her rights! So at the LHC after the Higgs boson discovery, Nature chose to remain silent so far – or She kept on saying "the Standard Model will look fine to you, human germ".

You can't change this fact by some wishful thinking about "dialogues". Theorists just didn't get new post-Higgs data from the LHC because so far, there are no new data at the LHC. They need to keep on working, which makes it obvious that they have to use older facts and new theoretical relationships between them, new hypotheses etc. In the absence of new experimental data, it is obvious that theorists' work has to be overwhelmingly theoretical or, in Loeb's jargon, it has to be a monologue! When Nature has something new and interesting to say (through experiments), Nature will say it. But theorists can't be silent or "doing nothing" just because Nature is silent these years! Only a complete idiot may fail to realize these points or agree with Loeb.

What Loeb actually wants to say is that a theorist should be obliged to plan the experiments that will settle all his theoretical ideas within his lifetime. But that's not possible. The whole point of scientific research in physics is to study questions about the laws of Nature that haven't been answered yet. And because they haven't been answered yet, people don't know and can't know what the answer will be – and even when it will be found.

An experimenter (or a boss or a manager of an experimental team) may try to plan what the experiment will do, when it will do these things, and what answers it could provide us with. Even this planning sometimes goes wrong, there are delays etc. But this is not the main problem here. The real problem is that the result of a particular experiment is almost never the real question that people want answered. An experiment is often just a step towards adjusting our opinions about a question – and whether this step is big or small depends on what the experimental outcome actually is, which is not known in advance.

Loeb has mentioned examples of such questions himself. People actually wanted to know whether there were black holes and gravitational waves. But a fixed experiment with a fixed budget, predetermined sensitivity etc. simply cannot be guaranteed to produce the answer. That's the crucial point that kills Loeb's Aryan Physics as a proposed (not so) new method to do science.

For example, both gravitational waves and black holes are rather hard to see. Similarly, the numerical value of the cosmological constant (or vacuum energy density) is very small. It's this smallness that has implied that one needed a long – and impossible to plan – period of time to discover these things experimentally.

Because black holes, gravitational waves, and a positive cosmological constant needed fine gadgets – and it was not known in advance how fine they had to be – does it mean that the theorists should be banned from studying these questions and concepts? The correct answer is obviously No – while Loeb's answer is Yes. Almost all of theoretical physics is composed of such questions. We just can't know in advance how much time will be needed to settle the questions we care about (and, as Edwin emphasized, there is nothing special about the timescale given by "our lifespan"). We can't know what the answers will be. We can't know whether the evidence that settles these questions will be theoretical in character, dependent on somewhat new experimental tools, or dependent on completely new experimental tools, discoveries, and inventions.

None of these things about the future flow of evidence can be known now (otherwise we could settle all these things now!), which is why it's impossible for these unknown answers to influence what theorists study now! The influence that Loeb demands would violate causality. If theorists knew in advance when the answer will be obtained, they would really have to know what the answer is – as I mentioned above, the confirmation of a null hypothesis always means that the answer to the interesting qualitative question was postponed. But then the whole research would be pointless.

So if science followed Loeb's Aryan Physics principles, it would be pointless! The real science follows the scientific method. Scientists must make decisions and conclusions, often conclusions blurred by some uncertainty, right now, based on the facts that are already known right now – not according to some 4-year plans, 5-year plans, or 50-year plans. And if their research depends on some assumptions, they have to articulate them and go through the possibilities (ideally all of them).

It's also utterly demagogic for him to talk about the "Galilean Oath" because Galileo Galilei disagreed with ideas very similar to Loeb's. In particular, Galileo never avoided formulating hypotheses that could need a long time to be settled. One example where he was wrong was his belief that comets were atmospheric phenomena. That belief looks rather silly to me (hadn't they already observed the periodicity of some comets, by the way?) but the knowledge was very different then. Science needed a long time to really settle the question.

But more generally, Galileo did invent lots of conjectures and hypotheses because those were the real new concepts that became widespread once he started the new method, the scientific method. Google search for "Galileo conjectured" or "Galileo hypothesized". Of course you get lots of hits.

As e.g. Feynman said in his simple description of the scientific method, the scientific method to search for new laws works as follows: First, we guess the laws. Then we compute consequences. And then we compare the consequences to the empirical data.

Note the order of the steps: the guess must be at the very beginning, scientists must be free to present all such possible hypotheses and guesses, and the computation of the consequences must still be close to the beginning. Loeb proposes something entirely different. He wants some planning of future experiments to be placed at the beginning, and this planning should restrict what the physicists are allowed to think about in the first place.

Sorry, that wouldn't be science and it couldn't produce interesting results, at least not systematically. And these restrictions are indeed completely analogous to the bogus restrictions that church officials – and later various philosophers etc. – tried to place on scientific research. Like Loeb, the church hierarchy also wanted the evidence to be direct in all cases. But one of Galileo's ingenious insights was that evidence may often be indirect or very indirect, and yet one may still learn a great deal from it.

The simplest example of this "direct vs indirect" controversy is the telescope. Galileo improved telescope technology and made numerous new observations – such as those of the Jovian moons. The church hierarchy actually disputed that those satellites existed because observation by telescope wasn't direct enough for them. It took many years before people realized how incredibly idiotic such an argument was: it would be a straight denial of the evidence. Telescopes really see the same thing as the eyes when both see something. Sometimes, telescopes see more details than the eyes – so they must be considered nothing other than improved eyes. Observations from eyes and telescopes are equally trustworthy, but telescopes have better resolution.

Laymen trust telescopes today even though telescope observations are "indirect" ways to see something. But the tools to observe and deduce things in physics have become vastly more indirect than they were in Galileo's lifetime. And most laymen – including folks like Loeb – simply get lost in the long chains of reasoning. That's one reason why many people distrust science: because they haven't verified the chains individually (and most laymen wouldn't be smart or patient enough to do so), they believe that the long chains of reasoning and evidence just cannot work. But they do work, and they are getting longer.

The importance of reasoning and theory-based generalizations was already increasing much more quickly during Newton's lifetime – and it kept on increasing at an accelerating rate. Newton united celestial and terrestrial gravity, among other things. The falling apple and the orbiting Moon move because of the very same force, which he described by a single formula. Did he have a "direct proof" that the apple is doing the same thing in the Earth's gravitational field as the Moon? Well, you can't really have a direct proof of such a statement – which some could describe as a metaphor. His theory was natural enough and compatible with the available tests. Some of these tests were quantitative yet not guaranteed at the beginning, so of course they increased the probability that the unification of celestial and terrestrial gravity was right. But whether such confirmations would arise, how strong and numerous they would be, and when they would materialize just isn't known at the beginning.
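Newton's unification of the falling apple and the orbiting Moon can be sketched numerically – a minimal version of the classic "Moon test", using modern round values (the numbers below are illustrative assumptions, not Newton's own):

```python
import math

# Newton's "Moon test": does the Moon fall toward the Earth with the same
# inverse-square gravity that pulls the apple?
g_surface = 9.81           # m/s^2, gravitational acceleration at Earth's surface
r_earth   = 6.371e6        # m, Earth's radius
r_moon    = 3.844e8        # m, mean Earth-Moon distance (~60 Earth radii)
T_moon    = 27.32 * 86400  # s, sidereal month

# Centripetal acceleration of the Moon on its (nearly circular) orbit
a_orbit = 4 * math.pi**2 * r_moon / T_moon**2

# Inverse-square prediction: surface gravity scaled down by (r_earth/r_moon)^2
a_predicted = g_surface * (r_earth / r_moon)**2

print(f"Moon's centripetal acceleration: {a_orbit:.3e} m/s^2")
print(f"Inverse-square prediction:       {a_predicted:.3e} m/s^2")
```

Both numbers come out near 2.7e-3 m/s², agreeing to about one percent – exactly the kind of quantitative-but-indirect confirmation the paragraph above describes.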
The risk for physics stems primarily from mathematically beautiful “truths,” such as string theory, accepted prematurely for decades as a description of reality just because of their elegance.
OK, this criticism of "elegance" is mostly a misinterpretation of pop science. Scientists sometimes describe their feelings – how their brains feel nicely when things fit together. Sometimes they only talk about these emotional things in order to find some common ground with a journalist or another layman. But at the end, this type of beauty or elegance is very different from the beauty or elegance experienced by the laymen or artists. The theoretical physicists' version of beauty or elegance reflects some rather technical properties of the theories and the statement that these traits increase the probability that the theory is right may be pretty much proven.

But even if you disagree with these proofs, it doesn't matter, because scientific papers simply don't use the beauty or elegance arguments prominently. When you read a new paper about some string dualities, string vacua, or anything of the sort, you don't really read "this would be beautiful, and therefore the value of some quantity is XY". Only when there are some calculations of XY do the authors claim that there is some evidence. Otherwise they call their propositions conjectures or hypotheses. And sometimes they use these words that remind us of the uncertainty even when a rather substantial amount of evidence is available, too.

But the uncertainty is unavoidable in science. A person who feels sick whenever there is some uncertainty just cannot be a scientist. Despite the uncertainty, a scientist has to determine what seems more likely and less likely right now. When some things look very likely, they may be accepted as facts on a preliminary basis. Some other people's belief in these propositions may be weaker – and they may claim that the proposition was accepted prematurely. But in the end, some preliminary conclusions are made about many things. Science just couldn't possibly work without them.

By the way, I forgot to discuss the subtitle of Loeb's article:
Our discipline is a dialogue with nature, not a monologue, as some theorists would prefer to believe
Note that he emphasizes that theoretical physics is "his discipline". It sounds similar to Smolin's fraudulent claims that he was a "string theorist". Smolin isn't a string theorist and doesn't have the intellectual abilities to ever become a string theorist. Whether Loeb is a theoretical physicist is at least debatable. He's the boss of Harvard's astronomy department. The word "astrophysicist" would surely be defensible. But the phrase "theoretical physicist" isn't quite the same thing. I hope that you remember Sheldon Cooper's explanation of the difference between a rocket scientist and a theoretical physicist.

Why doesn't Missy just tell them that Sheldon is a toll taker at the Golden Gate Bridge? ;-)

Given Loeb's fundamental problems with the totally basic methodology of theoretical physics – including thought experiments and long periods of careful and patient thinking uninterrupted by experimental distractions – I think it is much more reasonable to say that Loeb clearly isn't a theoretical physicist so his subtitle is a fraudulent effort to claim some authority that he doesn't possess.

OK, Loeb tried to hijack Galileo's name for some delusions about (or against) modern physics that Galileo would almost certainly disagree with. Galileo wouldn't join these Aryan-Physics-style attacks on theoretical physics. At some level, we may consider him a founder of theoretical physics, too.

SETI vs string theory

But my title refers to a particular bizarre coincidence in Loeb's criticism of theorists' thinking that could remain experimentally inaccessible for the rest of our (or some living person's?) lifetimes. He wants the experimental results right now, doesn't he? A funny thing is that Loeb is also a key official at the Breakthrough Starshot Project, Yuri Milner's $100 million kite to be sent to greet the oppressed extraterrestrial minorities who live near Alpha Centauri, the nearest star to ours except for the Sun. String theory is too speculative for him but discussions with the ETs are just fine, aren't they? Loeb seems aware of the ludicrous situation into which he has maneuvered himself:

At the same time, many of the same scientists that consider the study of extra dimensions as mainstream regard the search for extraterrestrial intelligence (SETI) as speculative. This mindset fails to recognize that SETI merely involves searching elsewhere for something we already know exists on Earth, and by the knowledge that a quarter of all stars host a potentially habitable Earth-size planet around them.

From his perspective, the efforts to chat with extraterrestrial aliens are less speculative than modern theoretical physics. Wow. Why is it so? His argument is cute as well. SETI is just searching for something that is known to exist – intelligent life. However, a search for something that is known to exist – intelligent life – would have the acronym SI only, and it would be completely pointless because the answer is known. SETI also has ET in the middle, you know, which stands for "extraterrestrial". Loeb must have overlooked those two letters altogether. It is not known at all whether there are other planets where intelligent life exists and, if such planets exist, what their density, age, longevity, appearance, and degree of similarity to life on Earth are.
It's even more unknown or speculative how these hypothetical ETs, if they exist near Alpha Centauri, would react to Milner's kite. We couldn't even reliably predict how our civilization would react to a similar kite arriving at Earth. How could we make realistic plans about the reactions of a hypothetical extraterrestrial civilization?

On the other hand, string theory is just a technical upgrade of quantum field theory – one that looks unique even 50 years after the birth of string theory. Quantum field theory and string theory yield basically the same predictions for the doable experiments, quantum field theory is demonstrably the relevant approximation of stringy physics, and this approximation has been successfully compared to the empirical data. Everything seems to work. The extra dimensions are just scalar fields, analogous to those that are known to exist, added on the stringy world sheet (and in this sense, the addition of an extra dimension is as mundane as the addition of an extra flavor of leptons or quarks).

We have theoretical reasons to think that the total number of spacetime dimensions should be 10 or 11. Unlike the expectations about the ETs, this is not mere prejudice. There are actual calculations of the critical dimension. Joe Polchinski's "String Theory" textbook contains 7 different calculations of $$D=26$$ for the bosonic string in the first volume; the realistic superstring analogously has $$D=10$$. This is not like saying "there should be cow-like aliens near Alpha Centauri because the stars look alike and I like this assertion".

How can someone say that this research of extensions of successful quantum field theories is as speculative as Skyping with extraterrestrial aliens, let alone more speculative than those big plans with the ETs? At some moments, you can see that some people have simply lost it. And Loeb has lost it. It makes no sense to talk to him about these matters.
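One of those calculations of the critical dimension can be sketched in a few lines (a rough light-cone-gauge version; the careful treatments are in the textbooks). The zero-point energies of the $$D-2$$ transverse oscillators shift the open-string mass formula:

```latex
\alpha' M^2 \;=\; N \;+\; \frac{D-2}{2}\sum_{n=1}^{\infty} n
\;\;\xrightarrow{\;\zeta(-1)=-\frac{1}{12}\;}\;\;
N \;-\; \frac{D-2}{24}.
```

The first excited level ($$N=1$$) is a spacetime vector with only $$D-2$$ transverse polarizations, so Lorentz invariance forces it to be massless: $$1-(D-2)/24=0$$, i.e. $$D=26$$. Adding world-sheet fermions changes the counting and analogously yields $$D=10$$ for the superstring.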
He seems to hate theoretical physics so fanatically that he's willing to team up not only with the Šmoit-like crackpots but also with extraterrestrial aliens in his efforts to fight against modern theoretical physics. Too bad, Mr Loeb, but even if extraterrestrial intelligent civilizations exist, it won't help your case because these civilizations – because of the adjective "intelligent" – know that string theory is right and you are full of šit.

And that's the memo.

P.S.: I forgot to discuss the "intellectual power" paragraph:

Given our academic reward system of grades, promotions and prizes, we sometimes forget that physics is a learning experience about nature rather than an arena for demonstrating our intellectual power. As students of experience, we should be allowed to make mistakes and correct our prejudices.

Now, this is a bizarre combination of statements. Loeb says "physics is about" learning, not about demonstrating our intellectual power. "Physics is about" is a vague sequence of words, however. We should distinguish two questions: What drives people to do physics? And what decides their success?

What primarily drives the essential people to do physics is curiosity. Physicists want to know how Nature works. String theorists want lots of more detailed questions about Nature to be answered. Their curiosity is real and they don't give a damn whether an ideologue wants to prevent them from studying some questions: the curiosity is real, they know that they want to know, and some obnoxious Loeb-style babbling can't change anything about it. Some people are secondary researchers. They do it because it's a good source of income or prestige or whatever. They study it because others have made it possible – created the jobs, chairs, and so on. But the primary motivation is curiosity.

But then we have the question whether one succeeds. The intellectual power isn't everything but it's obviously important.
Loeb clearly wants to deny this importance – but he doesn't want to do it directly because the statement would sound idiotic, indeed. But why does he feel so uncomfortable about the need for intellectual power in theoretical physics? He presents the intellectual power as the opposite of the validity of physical theories. This contrast is the whole point of the paragraph above. But this contrast is complete nonsense. There is no negative correlation between "intellectual power" and "validity of the theories that are found". On the contrary, the correlation is pretty much obviously positive.

In the end, his attack against the intellectual power is fully analogous to the statement that ice-hockey isn't about the demonstration of one's physical strength and skills, it's about scoring goals. When some parts are emphasized, the sentence is correct. But not too correct. The demonstration of the physical skills and strength is also "what ice-hockey is about". It's what drives some people. And the skills and strength are needed to do it well, too. The rhetorical exercise "either strength, or goals" – which is so completely analogous to Loeb's "either intellectual power, or proper learning of things about Nature" – is just a road to hell. The only possible implication of such a proposition would be to say that "people without the intellectual power should be made theoretical physicists". Does he really believe this makes any sense?

Or why does he mix the validity of theories with the intellectual power in this negative way? Well, let me tell you why. Because he is jealous of some people's intellectual powers, superior to his. And he is making the bet – probably correctly – that the readers of Scientific American's pages are dumb enough not to notice that his rant is completely illogical, from the beginning to the end.

## August 12, 2018

### Jon Butterworth - Life and Physics

USA Temperature: can I sucker you?
I’m just back from a bit of a busman’s holiday in California, so US weather is on my mind. No anecdotes though – instead, here is an instructive example of the bad kind of data mining.

Suppose I wanted to convince people that temperature in the USA wasn’t going up, it was going down. What would I show? Let’s try yearly average temperature in the conterminous U.S., also known as the “lower 48 states” (I’ll just call it “USA”):

View original post 627 more words

## August 11, 2018

### John Baez - Azimuth

The Philosophy and Physics of Noether’s Theorems

I’ll be speaking at a conference celebrating the centenary of Emmy Noether’s work connecting symmetries and conservation laws:

The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

They write:

2018 brings with it the centenary of a major milestone in mathematical physics: the publication of Amalie (“Emmy”) Noether’s theorems relating symmetry and physical quantities, which continue to be a font of inspiration for “symmetry arguments” in physics, and for the interpretation of symmetry within philosophy. In order to celebrate Noether’s legacy, the University of Notre Dame and the LSE Centre for Philosophy of Natural and Social Sciences are co-organizing a conference that will bring together leading mathematicians, physicists, and philosophers of physics in order to discuss the enduring impact of Noether’s work.

There’s a registration fee, which you can see on the conference website, along with a map showing the conference location, a schedule of the talks, and other useful stuff.
Here are the speakers:

John Baez (UC Riverside)
Jeremy Butterfield (Cambridge)
Anne-Christine Davis (Cambridge)
Sebastian De Haro (Amsterdam and Cambridge)
Ruth Gregory (Durham)
Yvette Kosmann-Schwarzbach (Paris)
Peter Olver (UMN)
Sabrina Pasterski (Harvard)
Oliver Pooley (Oxford)
Tudor Ratiu (Shanghai Jiao Tong and Geneva)
Kasia Rejzner (York)
Robert Spekkens (Perimeter)

I’m looking forward to analyzing the basic assumptions behind various generalizations of Noether’s first theorem, the one that shows symmetries of a Lagrangian give conserved quantities. Having generalized it to Markov processes, I know there’s a lot more to what’s going on here than just the wonders of Lagrangian mechanics:

• John Baez and Brendan Fong, A Noether theorem for Markov processes, J. Math. Phys. 54 (2013), 013301. (Blog article here.)

I’ve been trying to get to the bottom of it ever since.

### The n-Category Cafe

The Philosophy and Physics of Noether's Theorems

Nicholas Teh tells me that there is to be a conference held in London, UK, on October 5-6, 2018, celebrating the centenary of Emmy Noether’s work in mathematical physics.

2018 brings with it the centenary of a major milestone in mathematical physics: the publication of Amalie (“Emmy”) Noether’s theorems relating symmetry and physical quantities, which continue to be a font of inspiration for “symmetry arguments” in physics, and for the interpretation of symmetry within philosophy. In order to celebrate Noether’s legacy, the University of Notre Dame and the LSE Centre for Philosophy of Natural and Social Sciences are co-organizing a conference that will bring together leading mathematicians, physicists, and philosophers of physics in order to discuss the enduring impact of Noether’s work.

Speakers include our very own John Baez. We have the entry nLab: Noether’s theorem.
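For reference, the first theorem in its simplest mechanical form (the standard textbook statement): if the Lagrangian is invariant under an infinitesimal transformation of the coordinates, the corresponding charge is conserved on-shell,

```latex
\delta q_i = \epsilon\,K_i(q), \qquad \delta L = 0
\quad\Longrightarrow\quad
\frac{dQ}{dt} = 0, \qquad Q = \sum_i \frac{\partial L}{\partial \dot q_i}\,K_i(q).
```

Spatial translations give conservation of momentum, rotations give angular momentum, and time translations give energy.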
Since this (the first theorem) concerns group symmetries and conserved quantities, and since we are at the $n$-Category Café, naturally we’re interested in higher Noetherian constructions, involving actions by higher groups. For an example of this you can turn to Urs Schreiber’s Higher prequantum geometry and its talk of ‘higher Noether currents’ as an $L_\infty$-algebra extension (p. 21).

## August 10, 2018

### Emily Lakdawalla - The Planetary Society Blog

Hayabusa2 descends again, this time to lower than 1000 meters above Ryugu

This week Hayabusa2 completed its closest approach yet to asteroid Ryugu. In a successful gravity measurement experiment on August 6, the spacecraft dipped to within 1 kilometer of the asteroid.

### Lubos Motl - string vacua and pheno

Quintessence is a form of dark energy

Tristan asked me what I thought about Natalie Wolchover's new Quanta Magazine article, Dark Energy May Be Incompatible With String Theory, exactly when I wanted to write something. Well, first, I must say that I already wrote a text about this dispute, Vafa, quintessence vs Gross, Silverstein, in late June 2018. You may want to reread the text because the comments below may be considered "just an appendix" to that older text. Since that time, I exchanged some friendly e-mails with Cumrun Vafa. I am obviously more skeptical towards their ideas than they are, but I think that I have encountered some excessive certainty on the part of some of their main critics.

Wolchover's article sketches some basic points about this rather important disagreement about cosmology among string theorists. But there are some very unfortunate details. The first unfortunate detail appears in the title. Wolchover actually says that "dark energy might be incompatible with string theory". That's the statement she seems to attribute to Cumrun Vafa and co-authors. But that misleading formulation is really invalid – it's not what Cumrun is saying.
Here, the misunderstanding may be blamed on some sloppy "translation" of the technical terms that has become standard in the pop science press – and the excessively generalized usage of some jargon.

OK, what's going on? First of all, the Universe is expanding, isn't it? We're talking about cosmology, the big bang theory (which I don't capitalize – to make sure that I am not talking about the sitcom), and the expansion of the Universe was already seen in the 1920s although people only became confident about it some 50 years ago. In the late 1990s, it was observed that the expansion wasn't slowing down, as widely expected, but speeding up. The accelerated expansion may be explained by dark energy. Dark energy is anything that is present everywhere in the vacuum and that tends to accelerate the expansion of the Universe. Dark energy, like dark matter, is invisible to optical telescopes (that's why both of them are called dark). But unlike dark matter, which has (like all matter or dust) the pressure $$p=0$$, the dark energy has nonzero pressure, namely $$p\lt 0$$ or $$p\approx -\rho$$ where $$\rho$$ is the energy density. That's how dark energy and dark matter differ; dark energy's negative pressure is needed for its ability to accelerate the expansion of the Universe.

Dark energy is supposed to be a rather general, umbrella term that may be represented by several known, slightly different theoretical concepts described by equations of physics. So far, by far the most widespread and "canonical" or "minimalist" kind of dark energy was the cosmological constant. That's really a number that is independent of space and especially time (that's why it's called a constant) which Einstein added to his original equations of the general theory of relativity. Einstein's original goal was to allow the size of the Universe to be stable in time – because his equations seemed to imply that the Universe's size should evolve, much like the height of a freely falling apple.
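The claim above that a sufficiently negative pressure accelerates the expansion can be sketched in one line from standard FRW cosmology (textbook material, units with $$c=1$$; nothing here is specific to the string-theory dispute):

```latex
% Second Friedmann (acceleration) equation for the scale factor a(t):
\[
\frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\,(\rho + 3p),
\]
% so any component with p < -\rho/3 contributes positively to \ddot a.
% A cosmological constant is the extreme case p = -\rho, for which
\[
\frac{\ddot a}{a} \;=\; +\,\frac{8\pi G}{3}\,\rho_\Lambda \;>\; 0,
\]
% i.e. accelerated (asymptotically exponential) expansion.
```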
It just can't sit at a constant value – just like the apple usually doesn't sit in the air in the middle of the room. But the expansion of the Universe was discovered. Einstein could have predicted it because it follows from the simplest form of Einstein's equations, as I said. That could have earned him another Nobel prize when the expansion was seen by Hubble. (Well, Einstein's stabilization by the cosmological constant term wouldn't really work even theoretically, anyway. The balance would be unstable, tending to turn into an expansion or an implosion, like a pencil standing on its tip. Any tiny perturbation would be enough for this instability to grow exponentially.) That's probably the main reason why Einstein labeled the introduction of the cosmological constant term "the greatest blunder of his life". Well, it wasn't the greatest blunder of his life: the denial of quantum mechanics and state-of-the-art physics in general in the last 30 years of his life was almost certainly a greater blunder.

In the late 1990s, the Universe's expansion was seen to accelerate, which is why it seemed obvious that Einstein's blunder wasn't a blunder at all, let alone the worst one: the cosmological constant term seems to be there and it's responsible for the acceleration of the Universe. Suddenly, Einstein's cosmological term (with a different numerical value than Einstein needed – but one that is of the same order) seemed like a perfect, minimalistic explanation of the accelerated expansion. Recall that Einstein's equations say

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$$

Note that even in the complicated SI units, there is no $$\hbar$$ here – Einstein's general relativity is a classical theory that doesn't depend on quantum mechanics at all. Here, $$G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2} R g_{\mu\nu}$$ is the Einstein curvature tensor, constructed from the Ricci tensor and the Ricci scalar $$R$$.
It's some function of the metric and its first and especially second partial derivatives in the spacetime. On the right hand side of Einstein's equations, $$T_{\mu\nu}$$ is the stress-energy tensor that knows about the sources, the density of mass/energy and momentum and their flow. The $$\Lambda g_{\mu\nu}$$, a simple term that adds an additional mixture of the metric tensor to Einstein's equations, is the cosmological constant term. It naturally reappeared in the late 1990s. It's a rather efficient theory. The term doesn't have to be there but in some sense, it's even "simpler" than Einstein's tensor, so why should it be absent? And it seems to explain the accelerated expansion, so we need it. The theory is really natural which is why the standard cosmological model was the $$\Lambda$$CDM model, i.e. a big bang theory with the cold dark matter (CDM) and the cosmological constant term $$\Lambda$$.

What about string theory? String theory really predicts gravity. You may derive Einstein's equations, including the equivalence principle, from the vibrating strings. Einstein's theory of gravity is a prediction of string theory, which is still one of the main reasons to be confident that string theory is on the right track to find a deeper or final theory in physics, to say the least. Aside from gravitons and gravity (and Einstein's equations that may be derived from string theory for this force), string theory also predicts gauge fields and matter fields such as leptons and quarks. They have their (Dirac, Maxwell...) equations and their stress-energy tensors also enter as terms in $$T_{\mu\nu}$$ on the right hand side of Einstein's equations. String theory demonstrably predicts Einstein's equations as the low-energy limit for the massless, spin-two field (the graviton field) that unavoidably arises as a low-lying excitation of a vibrating string.
To some extent, this appearance of Einstein's equations is guaranteed by consistency of the theory (or by the relevant gauge invariance, namely the diffeomorphisms) – and string theory is consistent (which is a highly unusual, and probably unprecedented, virtue of string theory among quantum mechanical theories dealing with massless spin-two fields).

Does string theory also predict the cosmological constant term, one that Einstein originally included in the equations? At this level, the answer is unquestionably Yes, and Cumrun Vafa and pals surely agree. To say the least, string theory predicts lots of vacua with a negative value of the cosmological constant, the anti de Sitter (AdS) vacua. In fact, those are the vacua where the holographic principle of quantum gravity may be shown rather rigorously – holography takes the form of Maldacena's AdS/CFT correspondence. There are lots of Minkowski, $$\Lambda=0$$, vacua in string theory. And there are also lots of AdS, $$\Lambda\lt 0$$, vacua in string theory. I think that the evidence is clear and no one who is considered a real string theorist by most string theorists disputes the statement that both groups of vacua, flat Minkowski vacua and AdS vacua, are predicted by string theory.

The real open question is whether string theory allows the existence of $$\Lambda \gt 0$$ (de Sitter or dS) vacua. Those seem to be needed to describe the accelerated expansion of the Universe in terms of the cosmological constant. After 2000, the widespread view – if counted by the number of heads or number of papers – was that string theory allowed a positive cosmological constant. Even though I still find de Sitter vacua in string theory plausible, I believe that it's fair to say that the frantic efforts to spread this de Sitter view – and write papers about de Sitter in string theory – may be described as a sign of group think in the community. There have always been reasons to doubt whether string theory allows de Sitter vacua at all.
At the end of the last millennium, Maldacena and Nunez wrote a paper with a no-go theorem. It was mostly based on supergravity, a supersymmetric extension of Einstein's general relativity and a low-energy limit of superstring theories, but people generally believed that this approximation of string theory was valid in the context of the proof. Sociologically, you may also want to know that in the 1990s, Edward Witten was "predicting" that the cosmological constant had to be exactly zero (and a symmetry-like principle would be found that implies the vanishing value). He was motivated by the experience with string theory. Even before Maldacena and Nunez and lots of similar work, it looked very hard to establish de Sitter, $$\Lambda \gt 0$$ vacua in string theory. However, some of these problems could have been – and were – considered just technical difficulties. Why? Because if the cosmological constant is positive, you don't have any time-like Killing vectors and there can be no unbroken spacetime supersymmetry. Controlled stringy calculations only work when the spacetime supersymmetry is present (and guarantees lots of cancellations etc.) which is why people were willing to think that the difficulties in finding de Sitter vacua in string theory were only technical difficulties – caused by the hard calculations in the case of a broken supersymmetry. However, aside from Maldacena-Nunez, we got additional reasons to think that string theory might prohibit de Sitter vacua in general. Cumrun Vafa's Swampland – the term for an extension of the (nice stringy) landscape that also includes effective field theories that string theory wouldn't touch, not even with a long stick – implies various general (sometimes qualitative, sometimes quantitative) predictions of string theory that hold in all the stringy vacua, despite their high number. Along with his friend Donald Trump, Cumrun Vafa has always wanted to drain the swamp. 
;-) The Swampland program has produced several, more or less established, general laws of string theory – that may also be considered consequences of a consistent theory of quantum gravity. Wolchover mentions that the most well-established example of a Swampland law is our "weak gravity conjecture". Gravity (among elementary particles) is much weaker than other forces in our Universe – and in fact, it probably has to be the case in all Universes that are consistent at all. The Swampland business contains many other laws like that, some of which are more often challenged than the weak gravity conjecture.

Cumrun Vafa and his co-authors have presented an incomplete sketch of a proof that de Sitter vacua could be banned in string theory for Swampland reasons – for similar general reasons that guarantee that gravity is the weakest force. This assertion is unsurprisingly disputed by lots of people, especially people around Stanford, because Stanford University (with Linde, Kallosh, Susskind, Kachru, Silverstein, and many others) has been the hotbed of the "standard stringy cosmology" after 2000. They wrote lots of papers about cosmology, starting from the KKLT paper, and the most famous ones have thousands of citations. At some level, authors of such papers may be tempted to think that their papers just can't be wrong. But even the main claims of papers with thousands of citations ultimately may be wrong, of course. Sadly, I must say that some of this Stanford environment likes to use group think – and arguments from authority and paper counts – that resemble the "consensus science" about global warming. Sorry, ladies and gentlemen, but that's not how science works. Doubts about the KKLT construction are reasonable because the KKLT and similar papers still build on certain assumptions and approximations.
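As a numerical aside on "gravity is the weakest force": the ratio of the electrostatic to the gravitational force between two electrons is distance-independent and enormous. A minimal sketch using standard CODATA constants – just an illustration of the hierarchy that the weak gravity conjecture generalizes, not part of the conjecture's actual content:

```python
import math

# Compare Coulomb repulsion with gravitational attraction between two
# electrons.  Both forces scale as 1/r^2, so their ratio is independent
# of the separation r.
e = 1.602176634e-19        # elementary charge [C]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
G = 6.67430e-11            # Newton's constant [m^3 kg^-1 s^-2]
m_e = 9.1093837015e-31     # electron mass [kg]

coulomb = e**2 / (4 * math.pi * eps0)   # F_Coulomb * r^2
gravity = G * m_e**2                    # F_gravity * r^2
ratio = coulomb / gravity

print(f"F_Coulomb / F_gravity for two electrons ≈ {ratio:.2e}")  # ~4e42
```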
I am confident it is correct to say that the authors of some of the critical papers questioning the KKLT (especially the final, de Sitter "uplift" of some intermediate AdS vacua, an uplift that is achieved by the addition of some anti-D3-branes) are competent physicists – at least "basically indistinguishable" in competence from the Stanford folks. See e.g. Thomas Van Riet's TRF guest blog from November 2014 (time is fast, 1 year per year).

Cumrun Vafa et al. don't want to say that string theory has been ruled out. Instead, they say that in string theory, the observed dark energy is represented by quintessence which is just a form of dark energy (read the first sentence of the Wikipedia article I just linked to) – and that's why Wolchover's title that "dark energy is incompatible with string theory" is so misleading. I think that the previous sentence is enough for everyone to understand the main unfortunate terminological blunder in Wolchover's article. Cumrun and pals say that dark energy is described by quintessence, a form of dark energy, in string theory. They don't say that dark energy is impossible in string theory. Wolchover's blunder may be blamed upon the habit of considering the phrase "dark energy" to be the pop science equivalent of the "cosmological constant". Well, they are not quite equivalent and to understand the proposals by Cumrun Vafa et al., the difference between the terms "dark energy" and "cosmological constant" is absolutely paramount.

Quintessence is a word that sounds philosophical if not spiritual but in cosmology, it's just a fancy word for an ordinary time-dependent generalization of the cosmological constant – that results from the potential energy of a new, inflaton-like scalar field. String theory often predicts many scalar fields, some of them may play the role of the inflaton, others – similar ones – may be the quintessence that fills our Universe with the dark energy which is responsible for the accelerated expansion.
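To make the "time-dependent generalization" concrete, here is a toy sketch (my own illustrative potential and numbers, not any specific stringy model): a scalar field with an exponential potential rolling in a flat FRW universe, integrated with a crude fixed-step Euler scheme. For a slowly rolling field the equation of state $$w = (\dot\phi^2/2 - V)/(\dot\phi^2/2 + V)$$ sits near $$-1$$, mimicking a cosmological constant:

```python
import math

def evolve(l=0.5, V0=0.7, rho_m0=0.3, steps=200000, dt=1e-4):
    """Toy quintessence: phi with V(phi) = V0*exp(-l*phi) plus
    pressureless matter, in units with 8*pi*G = 1.  Integrates
      H^2 = (rho_m + rho_phi)/3,
      phi'' + 3*H*phi' + dV/dphi = 0,
      rho_m' = -3*H*rho_m,
    and returns the final equation of state w of the field."""
    phi, phidot, rho_m = 0.0, 0.0, rho_m0
    for _ in range(steps):
        V = V0 * math.exp(-l * phi)
        rho_phi = 0.5 * phidot**2 + V
        H = math.sqrt((rho_m + rho_phi) / 3.0)
        phiddot = -3.0 * H * phidot + l * V   # -dV/dphi = +l*V
        phi += phidot * dt
        phidot += phiddot * dt
        rho_m += -3.0 * H * rho_m * dt
    V = V0 * math.exp(-l * phi)
    kin = 0.5 * phidot**2
    return (kin - V) / (kin + V)

w = evolve()
# w should land between -1 and the late-time attractor value -1 + l**2/3,
# i.e. close to a cosmological constant and well inside the accelerating
# regime w < -1/3.
print("quintessence equation of state w =", round(w, 3))
```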
Now, the disagreement between "Team Vafa" and "Team Stanford" may be described as follows:

Team Stanford uses the seemingly simplest description, one using Einstein's old cosmological constant. It's really constant, string theory allows it, and elaborate – but not quite exact – constructions with antibranes exist in the literature. They use lots of sophisticated equations, do many details very accurately and technically, but the question of whether these de Sitter vacua exist remains uncertain because approximations are still used. Team Stanford ignores the uncertainty and sometimes intimidates other people by sociology – by a large number of authors who have joined this direction. The cosmological constant may be positive, they believe, and there are very many, like the notorious number $$10^{500}$$, ways to obtain de Sitter vacua in string theory. We may live in one of them. Because of the high number, the predictive power of string theory may be reduced and some form of the multiverse or even the anthropic principle may be relevant.

Team Vafa uses a next-to-simplest description of dark energy, quintessence, which is a scalar field. This scalar field evolves and the potential normally needs to be fine-tuned even more than the cosmological constant. But Team Vafa says that due to some characteristically stringy relationships, the new, added fine-tuning is actually not independent of the old one, the tuning of the apparently tiny cosmological constant, so from this viewpoint, their picture might actually be as bad (or as good) as the normal cosmological constant. The very large hypothetical landscape may be an illusion – all these constructions may be inconsistent and therefore non-existent, due to subtle technical bugs overlooked by the approximations or, equivalently, due to very general Swampland-like principles that may be used to kill all these hypothetical vacua simultaneously.
Team Vafa doesn't have too many fancy mathematical calculations of the potential energy and it doesn't have a very large landscape. So in this sense, Team Vafa looks less technical and more speculative than Team Stanford. But one may argue that Team Stanford's fancy equations are just a way to intimidate the readers and they don't really increase the probability that the stringy de Sitter vacua exist. These are just two very different sketches of how dark energy is actually incorporated in string theory. They differ in some basic statements, in the expectation of "how technical certain adequate papers answering a question should be", and in many other respects. I think we can't be certain which of them, if any, is right – even though Team Stanford would be tempted to disagree. But their constructions simply aren't waterproof and they look arbitrary or contrived from many points of view.

And yes, as you could have figured out, I do have some feeling that the way of argumentation by Team Stanford has always been similar to the "consensus science" behind the global warming hysteria. Occasional references to the "consensus" and a large number of papers and authors – and equations that seem complicated but, if you think about their implications, don't really settle the basic question (whether the de Sitter vacua – or the dangerous global warming – exist at all). Team Vafa proposes a new possibility and I surely believe it deserves to be considered. It's "controversial" in the sense that Team Stanford is upset, especially some of the members such as E.S. But I dislike Wolchover's subtitle:

A controversial new paper argues that universes with dark energy profiles like ours do not exist in the “landscape” of universes allowed by string theory.

What's the point of labeling it "controversial"? It may still be right. Strictly speaking, the KKLT paper and the KKLT-based constructions by Team Stanford are controversial as well.
These a priori labels just don't belong in science reporting, I think – they belong in the reporting about pseudosciences such as the global warming hysteria. Reasonable people just don't give a damn about these labels. They care about the evidence. Cumrun Vafa is a top physicist, he and pals have proposed some ideas and presented some evidence, and this evidence hasn't really been killed by solid counter-evidence as of now. Incidentally, after less than two months, Team Vafa already has 23+19 citations. So it doesn't look like some self-evidently wrong crackpot papers, like papers claiming that the Standard Model is all about octonions.

I was also surprised by another adjective used by Wolchover:

In the meantime, string theorists, who normally form a united front, will disagree about the conjecture.

Do they form a united front? What is that supposed to mean and what's the evidence that the statement is correct whatever it means? Are all string theorists members of Marine Le Pen's National Front? Boris Pioline could be one but I think that even he is not. ;-) String theorists are theoretical physicists at the current cutting-edge of fundamental physics and they do the work as well as they can. So when something looks clearly proven by some papers, they agree about it. When something looks uncertain, they are individually uncertain – and/or they disagree about the open questions. When a possible new loophole is presented that challenges some older lore or no-go not-yet-theorems, people start to think about the new possibilities and usually have different views about it, at least for a while. What is Wolchover's "front" supposed to be "united" for or against? String theorists are united in the sense that they take string theory seriously. Well, that's a tautology. They wouldn't be called string theorists otherwise. String theory also implies something so they of course take these implications – as far as they're clearly there – seriously.
But is there any valid, non-tautological content in Wolchover's statement about the "united front"? It's complete nonsense to say that string theorists are "more united as a front" than folks in any other typical scientific discipline that does things properly. String theorists have disagreed about numerous things that didn't seem settled to some of them. I could list many technical examples but one recent example is very conceptual – the firewall by the late Joe Polchinski and his team. There were sophisticated constructions and equations in the papers by Polchinski et al. but the existence of the firewalls obviously remained disputed, and I think that almost all string theorists think that firewalls don't exist in any useful operational sense. But they followed the papers by Polchinski et al. to some extent. Polchinski and others weren't excommunicated for a heresy in any sense – despite the fact that the statement "the black holes don't have any interior at all" would unquestionably be a radical change of the lore.

This disagreement about the representation of dark energy within string theory is as deep and far-reaching as the firewall wars. Again, I still assign a probability above 50% to the basic picture of Team Stanford which leads to a cosmological constant from string theory. But I don't think it has been proven (I have issued a similar warning about $$P\neq NP$$ and other things). I have communicated with many apparently smart and technically powerful folks who had sensible arguments against the validity of the basic conclusions of the KKLT. I am extremely nervous about the apparent efforts of some Stanford folks to "ban any disagreement" about the KKLT-based constructions, a ban that would be "justified" by the existence of many papers and their mutual citations. That's not how actual science may progress for a very long time.
If folks like Vafa have doubts about de Sitter vacua in string theory and all related constructions, and they propose quintessence models that could be more natural than once believed (there were simple reasons why quintessence would have been dismissed by string theorists, including myself, just a few years ago), they must have the freedom – not just formally, but also in practice – to pursue these alternative scenarios, regardless of the number of papers in the literature that take KKLT for granted! Only when the plausibility and attractiveness of these ideas really disappears according to the body of experts could it make sense to suggest that Vafa is losing.

These two pictures offer very different sketches of how the real world is realized within string theory. Indeed, the string phenomenological communities that would work on these two possibilities could easily evolve into "two separated species" that can't talk to each other usefully (although both of them would still be trained with the help of the same textbooks up to a basic textbook of string theory). But as long as we're uncertain, this splitting of the research into several different possibilities is simply the right thing that should happen. Putting all the eggs in one basket when we're not quite sure which basket is right would simply be wrong.

Wolchover also mentions the work of Dr Wrase. I haven't read that so I won't comment. But I will comment on some remarks by Matt Kleban (trained at Team Stanford, now NYU) such as

Maybe string theory doesn’t describe the world. [Maybe] dark energy has falsified it.

Well, that's nice. String theory is surely falsifiable and such things might happen, which would be a big event. But I think it's obvious that Kleban isn't really taking the side of the string theory critics. Instead, this statement – that dark energy may have falsified string theory – is a subtle demagogic attack against Team Vafa, which is whom he actually cares about (he doesn't care about Šm*its).
Effectively, Matt is trying to compare Vafa et al. to Šmoits. If the dark energy in string theory doesn't work in the Stanford way, I will scream and cry, Matt says, and you will give it up. Matt knows that the real people whom he cares about wouldn't consider string theory ruled out for similar reasons, so he's effectively saying that they shouldn't buy Team Vafa's claims, either. Sorry, Matt, but that's demagogy. Team Vafa doesn't really claim that they have falsified string theory. There is a genuine new possibility, whether you like to admit it or not.

Also, Matt expressed his attacks against Team Vafa using a different verbal construction:

He stresses that the new swampland conjecture is highly speculative and an example of “lamppost reasoning”...

Cute, Matt. I always love when people complain about lamppost reasoning. I've had funny discussions with both Brian Greene and Lisa Randall about this phrase before they published their popular books. Lisa felt very entertained when I said it was actually rational to spend more time looking under the lamppost. But it is rational. I must explain the proverb here. There exists some mathematical set of possibilities in theoretical physics or string theory but only some of them have been discovered or understood, OK? So we call those things that have been understood or studied (intensely enough) "the insights under the lamppost". Now, the "lamppost reasoning" is a criticism used by some people who accuse others of a specific kind of bias. What is this sin or bias supposed to be? Well, the sin is that these people only search for their lost keys under the lamppost. Now, this is supposed to be funny and immediately mock the perpetrators of the "sin" and kill their arguments. If you lose your keys somewhere, it's a matter of luck whether the keys are located under a lamppost, where you could see them, or elsewhere, where you couldn't.
So obviously, you should look for the keys everywhere, including places that aren't illumined by the lamp, Kleban and Randall say, among others. But there's a problem with this recommendation. You can't find the keys in the dark too easily – because you don't see anything there. Perhaps if you sweep the whole surface with your fingers. But it's harder and the dark area may be very large. If you want to increase the probability that you find something, you should appreciate the superiority of vision and primarily look at the places where you can see something! You aren't guaranteed to find the keys but your probability of finding them per unit time may be higher because you can see there.

And there might even exist reasons why the keys are even more likely to be under the lamppost. When you were losing them, you probably preferred to walk in places where you could see, too. You may have lost them while checking the content of your wallet, and you were more likely to do it under the lamppost. So that's why you were more likely to be under the lamppost at that time, too! Similarly, when God was creating the world, assuming Her mathematical skills are similar to ours, She was likely to start by discovering things that were relatively easy for us to discover and clarify, too. So She was more likely to drop our Universe under the lamppost, too, and that's why it's right to focus our attention there, too. For a researcher, it's damn reasonable to focus on things that are easier to understand properly.

The two situations (keys, physics) aren't quite analogous but they're close enough. My claim is even clearer in the metaphorical "lamppost" of physics. If you want to settle a question, such as the existence of de Sitter vacua, you simply have to build primarily on the concepts – both general principles and the particular constructions – that have been understood well enough. You can't build on the things that are completely unknown.
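The probabilistic version of this argument can be put into a toy search model (entirely made-up numbers, just to illustrate the expected-rate logic): even with a prior mildly favoring the larger dark region, the far higher detection rate in the light makes the lit region the better place to search per unit time.

```python
def find_rate(p_region, detect_per_hour):
    """Expected probability of finding the keys per hour spent searching
    this region: the prior that the keys are there, times the per-hour
    detection probability given that they are."""
    return p_region * detect_per_hour

# Made-up illustrative numbers:
p_lit, p_dark = 0.4, 0.6          # prior slightly favors the (larger) dark region
rate_lit, rate_dark = 0.9, 0.05   # vision makes detection far more likely

print("lit region :", find_rate(p_lit, rate_lit))    # ≈ 0.36 per hour
print("dark region:", find_rate(p_dark, rate_dark))  # ≈ 0.03 per hour
```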
And if you build on things that are only known vaguely or with a lot of uncertainty, you can be misled easily! So in some sense, I am saying that you should look for your keys under the lamppost, and then increase the sensitivity of your retinas and extend the range that you have control over. That's how knowledge normally grows – but there always exist regions in the space of ideas and facts that aren't understood yet. The suggestion that claims in physics may be supported by constructions that are either completely unknown or badly understood is just ludicrous. Such suggestions may sound convincing to their advocates because the keys may be anywhere – the keys may be in the dark. But in the dark of ignorance, science can't be applied and we must appreciate that all our scientific conclusions may only be based on the things that have been illuminated – all of our legitimate science is built out of the insights about the vicinity of the lamppost. Whoever claims to have knowledge derived from the dark is a charlatan – sorry but it's true, Lisa and Matt!

In this particular case, it's totally sensible for Team Vafa to evaluate the experience with the known constructions of the vacua and conclude that it seems rather convincing that no de Sitter vacua exist in string theory and the existing counterexamples are fishy and likely to be inconsistent. This evidence is circumstantial because it builds on the "set of constructions" that have been studied or illuminated – constructions under the lamppost – but that's still vastly better than if you make up your facts and make far-reaching claims about the "world in the dark" that we have no real evidence of!

You surely expect comparisons to politics as well. I can't avoid the feeling that the Team Stanford claim that de Sitter vacua simply have to exist is just another example of some egalitarianism or non-discrimination. Like men and women, anti de Sitter and de Sitter vacua must be treated as equal.
But sorry to say, like men and women, de Sitter and anti de Sitter vacua are simply not equal. The constructions of these two classes within string theory look very different and, unlike for the anti de Sitter vacua, it's plausible and at least marginally compatible with the evidence that the de Sitter vacua don't exist at all. A Palo Alto leftist could prefer a non-discrimination policy but the known facts, evidence, and constructions surely do discriminate between de Sitter and anti de Sitter spaces – and Team Vafa, like any honest scientist who actually cares about the evidence, assigns some importance to this highly asymmetric observation!

## August 09, 2018

### ZapperZ - Physics and Physicists

Is Online Education Just As Good And Effective?

Rhett Allain is tackling a topic that I've been dealing with for a while. It isn't about learning things online, but rather whether an online education and degree are just as good and effective as a brick-and-mortar education. Here, he approached this from the point of view that an "education" involves more than just the subject matter. It involves human and social interaction, and learning about things that are not related to your area. He used the analogy of chocolate chips and chocolate chip cookies:

The cookie is the on-campus experience. College is not just about the chocolate chips. It's about all of that stuff that holds the chips together. College is more than a collection of classes. It's the experience of living away from home. It's the cookie dough of relationships with other humans and even faculty. College can be about clubs and other student groups. It's about studying with your peers. College is the whole cookie.

. . .

But wait! While we are talking about learning stuff, I have one more point to make. Don't think that you should acquire all of the skills and knowledge you need for your whole career during your time at school.
You will always be learning new things, and there will always be new stuff to learn (no one learned about smartphones in the '80s). In fact, a college degree is not about job training. It's not. Really, it's not about that.

Then what is the whole chocolate chip cookie about? It's about exploring who you are and learning things that might not directly relate to a particular field. College is about taking classes that might not have anything to do with work. Art history is a great class—even if you aren't going to work in a museum. Algebra should be taken by all students—even though you probably won't need it (most humans get by just fine without a solid math background).

So really, the whole cookie is about becoming more mature as a human. It's about leveling up in the human race—and that is something that is difficult to do online (but surely not impossible).

I have no issue with these points. However, we can even go straight for the jugular with this one instead of invoking some esoteric plea for a well-rounded education and social skills. There is compelling evidence that online-only lessons are not as effective and efficient as in-person, in-class lessons, if the latter are done properly.

I will use the example of the effectiveness of the peer-instruction method introduced by Harvard's Eric Mazur. Here, he showed how active learning, instead of passive learning, can be significantly more effective for the students. In such cases, student-to-student interactions are a vital part of learning, with the instructor serving as a "guidance counselor". This is not the only example where active learning is more favorable than passive learning. There have been other studies that have shown significant improvement in students' understanding and grasp of the material when they are actively engaged in the learning process.
Active learning is something that hasn't been done and maybe can't easily be done with online lessons, and certainly not by simply watching or reading the material online. So forget about honing your social skills or learning about art history. Even the subject matter that you wish to understand may be more difficult to comprehend when you do this by yourself in an online course. There is enough evidence to support this, and it is why you shouldn't be surprised if you struggle to understand the material that you are trying to learn by yourself.

Zz.

## August 08, 2018

### ZapperZ - Physics and Physicists

Loop Quantum Gravity

This is one of those still-unverified theories that try to reconcile quantum mechanics with General Relativity. I'm not in this field, so I have no expertise in it. But I know that many people who have read about it are aware of String theory and its competitor, Loop Quantum Gravity. In this video, Fermilab's Don Lincoln tries to explain LQG to the masses.

Keep in mind that this idea is still lacking in experimental support. The gamma-ray burst observation that he mentioned in the video has been highlighted here quite a while back. Without experimental verification, both String theory and LQG continue to have issues with their credibility as a science.

Zz.

### Clifford V. Johnson - Asymptotia

Science Friday Book Club Q&A

Between 3 and 4 pm Eastern time today (very shortly, as I type!) I’ll be answering questions about Hawking’s “A Brief History of Time” as part of a live Twitter event for Science Friday’s Book Club. See below. Come join in!

Hey SciFri Book Clubbers! Do you have any … Click to continue reading this post

The post Science Friday Book Club Q&A appeared first on Asymptotia.

## August 07, 2018

### ZapperZ - Physics and Physicists

Ban Cellphone Use In Classrooms?

First of all, let me state my policy on the use of electronic devices (mobile phones, tablets, laptop computers, etc.) in my classrooms.
I do not have an outright ban (other than during exams and quizzes) during class, but the devices can't be used in an intrusive manner that disrupts the running of the class. So no making phone calls, etc. So far, I haven't had any issues that would change that policy. Many of my colleagues do have an outright ban on the use of these devices during class.

Now, a few weeks ago, I came across this paper. The authors studied students who used these devices for non-class-related purposes during class. They found that the distraction of these devices, in the end, affects the average grade that the student received at the end of the course (they were psychology courses). The distracted students, on average, scored half a grade lower than those in classes that banned the use of these devices for non-class-related purposes.

But what is also surprising is that there was collateral damage done to students who were in the same class as these distracted students but did not themselves use these devices during class.

Furthermore, when the use of electronic devices was allowed in class, performance on the unit exams and final exams was poorer for students who did not use electronic devices during the class as well as for the students who did use an electronic device. This is the first-ever finding in an actual classroom of the social effect of classroom distraction on subsequent exam performance. The effect of classroom distraction on exam performance confirms the laboratory finding of the social effect of distraction (Sana et al., 2013).

So this is like second-hand smoking.

The good thing about this is that I can now tell my students that, while I allow the use of these devices in class during lessons, there is evidence that if they choose to use them, their grades may suffer. I may even upload this paper to the Learning Management System.
However, because of the collateral damage that might be done to other students who do not use these devices during class, I am seriously rethinking my policy, and am considering imposing an outright ban on the non-class-related use of these devices during my lessons.

If you teach, what is your experience with this?

Zz.

## August 05, 2018

### ZapperZ - Physics and Physicists

APS's Don't Drink And Derive T-Shirt

I was cleaning my closet (I do that now and then) and came across this old shirt from way back when. It was bought during the 1999 APS March Meeting in Atlanta, GA, which celebrated the 100th anniversary of the APS. When I first saw it, I said to the person at the counter that all the formulae were wrong. And then, duh, it suddenly hit me why, and I got it. So of course, I had to buy it.

I haven't worn it in ages, because of a small tear on the front. But I'll probably start wearing it around the house, especially if I'm working in the yard. This t-shirt is the opposite of the one I bought while I was at the Kennedy Space Center in Cape Canaveral, FL. That t-shirt has all the correct formulae and shows my nerdy self whenever I wear it. 😁

Zz.

## August 01, 2018

### Clifford V. Johnson - Asymptotia

DC Moments…

I'm in Washington DC for a very short time. 16 hours or so. I'd have come for longer, but I've got some parenting to get back to. It feels a bit rude to come to the American Association of Physics Teachers annual meeting for such a short time, especially because the whole mission of teaching physics in all the myriad ways is very dear to my heart, and here is a massive group of people devoted to gathering about it. It also feels a bit rude because I'm here to pick up an award. (Here's the announcement that I forgot to post some months back.)
I meant what I said in the press release: It certainly is an honour to be recognised with the Klopsteg Memorial Lecture Award (for my work in science outreach/engagement), and it'll be a delight to speak to the assembled audience tomorrow and accept the award. Speaking in an unvarnished way for a moment, I and many others who do a lot of work to engage the public with science have, over the years, had to deal with not being taken seriously by many of our colleagues. Indeed, suffering being dismissed as not being "serious enough" about our other [...] Click to continue reading this post

The post DC Moments… appeared first on Asymptotia.

## July 30, 2018

### Lubos Motl - string vacua and pheno

An 11-dimensional brain: a bit too exciting jargon

A month ago, lots of media wrote about a truly exciting topic, the eleven-dimensional brain. Some links to the article may be found in The “Eleven Dimensional” Brain? Topology of Neural Networks by Neuroskeptic, a blogger at the Discover Magazine. I recommend that article if you want to demystify the whole thing. It's likely that most of the "regular media" prefer to keep you mystified.

This "higher-dimensional brain" reminds me of some papers that caught my attention in the mid 1990s – papers by the (otherwise) string theorist Dimitri Nanopoulos and his collaborators such as Mavromatos. To give you a great example, look at this 1995 hep-ph (!) paper

Theory of Brain Function, Quantum Mechanics and Superstrings

Micropoulos wrote a lot about the NanoTubules – OK, it was the other way around, Nanopoulos wrote about MicroTubules. I was always rather skeptical and that skepticism was sufficient to prevent me from trying to read such papers carefully. But in the subsequent two decades, I have read a lot of this ambitious, quirky science and my skepticism deepened. These days, I would probably dismiss Nanopoulos' paper right away. In the abstract, Nanopoulos referred to the Penrose-Hameroff "quantum theories of the brain".
I think that those claims – partly driven by Penrose's misunderstanding of quantum mechanics and Hameroff's misunderstanding of any physics – were so stupid that the stupidity is enough to reasonably dismiss any paper that just positively mentions Penrose's and Hameroff's ideas.

It was always attractive to imagine some higher-dimensional structures that secretly exist inside the brain. There was something fascinatingly possible about it – and these speculations gave me goosebumps despite the skepticism. Fortunately, Neuroskeptic has beautifully demystified the newest stuff.

Biologists say that the brain is $$N$$-dimensional as soon as you find a group of $$(N+1)$$ neurons in which every neuron is connected with every other neuron. It's like the connections between the $$(N+1)$$ vertices of a simplex in $$N$$ dimensions (such as the triangle and the tetrahedron for $$N=2,3$$, respectively).

OK, you may see that the neuroscientists are rather modest. As soon as they see sufficiently many connections between several neurons, they talk about a higher-dimensional space. If you keep on reading, it starts to sound like Radio Yerevan (from the Soviet jokes). OK, instead of truly higher-dimensional structures, you just have many connections between neurons that may be arranged in the usual 3-dimensional space. On top of that, the maximum dimension they found was not 11, like the spacetime of M-theory, but only 7. And to make the story even less persuasive than the hype sounds, this high dimension isn't a feature of a real brain but just a simulation of a brain. And it's not a simulated human brain, it's just a simulated rat brain.

Well, the writers of the simulation may surely decide how much the "cliques" are connected, can't they? When you realize such a thing, it becomes totally puzzling what their claim actually is. The statement that "one may write down a simulation with many connections" surely doesn't sound like an exciting scientific discovery to me.
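The definition is easy to state computationally: the "dimension" assigned to a network is just one less than the size of its largest all-to-all connected clique. A minimal sketch of that definition (the toy graph and the helper names here are my own illustration, not anything from the paper):

```python
from itertools import combinations

def is_clique(adj, nodes):
    """True if every pair of the given neurons is connected."""
    return all(b in adj[a] for a, b in combinations(nodes, 2))

def clique_dimension(adj):
    """Largest N such that some (N+1) neurons are pairwise connected.

    Brute force over subsets -- fine for tiny illustrative graphs only.
    """
    best = 0
    vertices = list(adj)
    for size in range(2, len(vertices) + 1):
        if any(is_clique(adj, c) for c in combinations(vertices, size)):
            best = size - 1  # a clique of (size) vertices is a (size-1)-simplex
    return best

# A toy "network": neurons 0,1,2,3 are all mutually connected (a tetrahedron),
# while neuron 4 only hangs off the side.
adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2, 4},
    4: {3},
}
print(clique_dimension(adj))  # the 4-clique {0,1,2,3} gives "dimension" 3
```

Nothing higher-dimensional is going on here, of course: the counting is purely combinatorial and the "dimension" is just a label for a densely connected set of nodes.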
Neuroskeptic says it is very interesting work, anyway, so I may be overlooking something very precious. But I just don't see it, and it's not clear to me how this may be a flagship result of a brain center whose funding is $1 billion. The suggestions that they have found a link to M-theory are probably vacuous and nothing more than pure hype.

If you tell me something exciting that I misunderstand, it may be appreciated.

## July 28, 2018

### Jon Butterworth - Life and Physics

Doomsday Love affair

Some podcasts about the end of the world. I’m in Episode 3. Not sure of the exact date (of recording, or of the end of the world).

## July 26, 2018

### Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

And there are more exciting episodes on the way. Enjoy, and spread the word!

## July 24, 2018

### Lubos Motl - string vacua and pheno

"Standard Model has no octonions in it" is politically incorrect, too
Left-wing activists stifle the discussion about anything so much that the Inquisition looks like a breath of fresh air in comparison

My essay about Cohl Furey and octonions has been read by some 6,000 people, more than average, but the relative increase of insufferable trolls – and bans per article – was much higher than the increase in the number of readers.

Many people often present the medieval Inquisition as a textbook example of an institution that was preventing the people from researching and even talking about scientific matters. Many Christian readers love to defend or lionize the Inquisition or the Catholic Church and its officially sponsored thinkers – and they occasionally attack the likes of Galileo Galilei.

Make no mistake about it, I am squarely on Galileo's side and I would be on the analogous side even in disputes where science was represented by a less shining man than the founder of science.

However, I find it increasingly obvious that the Inquisition represented freedom of thought and an open-minded approach to arguments relative to the left-wing activists who have literally contaminated the whole Planet Earth by 2018.

The Inquisition was trying to preserve some Catholic theses, dogmas, and "ways of thinking" that it considered vital for the preservation of the "system of the civilized world". They were just wrong: this suppression of the freedom of thought wasn't really good – or vital – for the preservation of the civilization. But these protected dogmas were rather special – in some sense, there were just several risky statements that researchers should have been careful about.

For example, in March, I was rather persuaded that Giordano Bruno was burned for his belief in exoplanets. The idea that other planets are analogous to Earth and they're just floating somewhere, independently of ours, was (and is) very clearly dangerous for the perceived centrality of Earth and therefore the centrality of the Biblical God, too.

In the Bible, God said that He was focused on the Earth in one way or another. If the Earth is just one of many planets that have nothing to do with each other and move in rather random directions, it was rather stupid for God to focus on Earth, right? The whole thing is rather stupid... and people may keep on thinking, it's dangerous, and Giordano Bruno had to be burned.

There weren't really too many scholars analogous to Giordano Bruno who were burned for heresies related to statements about physical sciences. On top of that, Bruno could be said to be an ideologue of a sort, not a pure scientist. But I think that Bruno was a good, scientifically literate scholar and many of his statements about the structure of the Solar System and the Universe were actually (even) more modern than those of Copernicus, Kepler, and others. Some of the missions searching for exoplanets etc. should be named after Bruno – that would be more appropriate e.g. than Kepler.

OK, the Inquisition was really protecting a few theses that were written at a few prominent places of the Bible. Most other things could have been thought about. If you discussed the atomic theory or its details, people wouldn't even have a clue which of the competing views should be considered "the view sponsored by the Catholic Church" and which of them should be viewed as a potential heresy.

Sadly, it seems to me that the contemporary post-truth, mostly left-wing, octopus is significantly more far-reaching and classifies a huge percentage of the possible statements about science as heresies – or, in modern terms, as politically incorrect propositions. In some corners, including those claiming to be scholarly ones, people can't say that there is obviously no threat posed by climate change; that all predictions of an apocalypse driven by population growth or climate change or other things have spectacularly failed; or that blacks statistically differ from whites, women from men, and that pretty much any two groups defined by similar criteria differ from each other in most respects, sometimes dramatically.

People are harassed for saying anything that could be potentially interpreted as the statement $$X\neq Y$$ for any $$X,Y$$. And there are lots of other forbidden ideas. I think that the percentage of the forbidden statements is higher than it was during the Inquisition because the present replacement of the Inquisition, the set of obnoxious left-wing trolls who occasionally turn a university or a Soros into their key ally, has defined the only politically correct statements about virtually all topics you may imagine.

I could see that their fanaticism is really extreme because even the following innocent statement has been treated as a heresy:
The Standard Model has no octonionic structures in it and existing papers claiming to prove otherwise are wrong.
This statement has apparently nothing to do with the politically correct dogmas that the left-wing trolls defend by spamming the "mainstream" newspapers and comment sections on the Internet. What is the relationship between octonions and egalitarianism? Well, the problem is that:

Everything has something to do with the left-wingers' sensitive points.

The snowflakes are troubled by basically everything you may say. In this particular case, the reason why octonions became a politically sensitive issue is simple. A deceitful article by Natalie Wolchover connected the success of women in science with the presence of octonions in the Standard Model.

The twisted logic of the PC attack dogs therefore is: if you dare to say that octonions in the Standard Model are pseudoscientific rubbish, you're also against women in science, and therefore you're a sexist chauvinist pig! It sounds incredible that some people are so fudged up as to politicize all things in this way but this is where much of mankind seems to be evolving.

(Before I banned every troll in that thread, I verified that every single one of them was motivated by identity politics. Cohl Furey is female but a fair person doesn't care, and I have spent much more time deconstructing pseudoscience written by male crackpots than by female crackpots, so please give me a break with these ludicrous off-topic accusations and this insane politicization of algebra.)

Even if there were octonions in the Standard Model, it wouldn't do much for (the rational appraisal of the role of) women in science because Ms Cohl Furey just copied all these ideas from the likes of Mr M. Günaydin, Mr F. Gürsey, Mr Geoffrey Dixon, and perhaps a few others and everything that she has added is just irrelevant would-be technical gibberish that changes nothing about the story whatsoever. So not only legitimate physical sciences but even this particular corner of pseudoscience is overwhelmingly dominated by men.

But the more important point is that the octonions in the Standard Model are just an erroneous idea. There is nothing octonionic about or inside the Standard Model of particle physics. And there's no $$G_2$$, the exceptional Lie algebra that is the automorphism group of the octonion algebra, inside the Standard Model, either.

95% of the commenters at the Quanta Magazine – who were happily persuaded that there were octonions in the Standard Model – are not only scientifically illiterate. They don't really know how to use a search engine on the Internet. Or they just didn't have the idea that they could try. (Well, maybe most of them just don't want to learn the truth – they prefer the lies they are being served because they decided Ms Furey is great as a person – she's impressive, indeed – or as a political cause and the truth about Nature is secondary.) Just look for octonions and the "Standard Model" on Google Scholar. You will get papers that are decades old.

The most famous paper in the search turns out to be one by our (Frank) Tony Smith, a paper with 40 citations. Tony Smith is great, nothing against him, and I have surely described him as a crackpot – and even if I haven't, I will do it right now: Tony Smith is a textbook example of a crackpot. Still, his papers have some standards and you would find reasons to think of him as a counterpart of academic researchers who works outside academia.

If you replace the octonions e.g. by $$SO(10)$$, you get a vastly more impressive list of papers. This list is 500 times longer and numerous papers on the first page have over 1,000 citations. That's what it looks like when science has found some actual evidence that there could be a relationship between the Standard Model and something else – in this case, the $$SO(10)$$ grand unified gauge group.

You wouldn't even need Google Scholar. You could read Cohl Furey's CV or anything. You would find out that she's been writing basically identical papers since 2010. No one has ever done follow-up research on them because the papers clearly don't make sense according to physicists. Recently, one of the copies of these papers made it into a journal – because of a referee's mistake or his sabotage – which was enough for her to get a PhD and land a job. The thirst for female researchers and the corresponding affirmative action have become totally extreme, indeed.

But every sane person could easily figure out that the physicists think it's no good, that they have no way to elaborate on these writings, and that nothing can realistically change about the status of these papers, just like nothing could change about the crackpot status of Lisi's papers despite the amazing hype he received from the media.

Alternatively, you may know something about group theory and particle physics. When you have a Lagrangian or an equivalent expression defining the laws of physics, it is really a straightforward exercise to determine the unbroken symmetry group. The Standard Model has the $$SU(3)\times SU(2) \times U(1) / \ZZ_6$$ gauge group. Sometimes, there may be hidden symmetries that are broken – such as the grand unified group – or hidden symmetries whose action is nonlocal or otherwise advanced – like the Yangian or enhanced gauge groups in string theory. The discovery of such hidden groups requires some ingenious steps. It's manifest that nothing like that is contained in Furey's and similar papers. One needs minutes to check it for a given paper.

Furey's and similar papers are algebrology, to use Mitchell Porter's term – a counterpart of numerology in which symbols for algebraic structures, instead of numbers, are religiously worshiped. But just like in the case of numerology, no actual physical role of the worshiped objects is ever found. What she is actually doing is simply counting the number of components and imagining that the fields of the Standard Model are labeled by labels that smell like directions in the octonions or an octonion-like algebra. But if you rename some fields or components and give them the names of puppies, it doesn't mean that you have found a relationship between the Standard Model and dogs. The case of octonions is absolutely identical to the case of dogs.

The actual characteristic properties of dogs and/or relationships between dogs (with each other or the rest of the world) aren't reflected in any properties or relationships inside the Standard Model. And the same is true for octonions in the Standard Model. So there are really no dogs and octonions in the Standard Model.

There are many concise arguments that instantly prove that the efforts to combine the Standard Model with the octonions are nonsensical. First, the Standard Model is a quantum mechanical theory where all the observables are (and must be) linear operators, and the multiplication of linear operators is associative. A key, pretty much defining, property of the octonions is that they are non-associative. So the observables cannot be octonions. They cannot be functions of octonions, either.
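To see how dramatically the associativity fails, one can multiply a few imaginary units explicitly. A minimal sketch of octonion multiplication via the Cayley-Dickson construction (the nested-pair representation and the sign convention (a,b)(c,d) = (ac − conj(d)b, da + b conj(c)) are one standard choice of mine for illustration, not anything taken from the papers under discussion):

```python
# Octonions as nested pairs: real -> complex -> quaternion -> octonion,
# multiplied with the Cayley-Dickson rule (a,b)(c,d) = (ac - conj(d)b, da + b conj(c)).

def conj(x):
    if isinstance(x, (int, float)):
        return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, (int, float)):
        return -x
    a, b = x
    return (neg(a), neg(b))

def add(x, y):
    if isinstance(x, (int, float)):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, (int, float)):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def octonion(*coeffs):
    """Pack 8 real coefficients of e0..e7 into the nested-pair representation."""
    c = coeffs
    return (((c[0], c[1]), (c[2], c[3])), ((c[4], c[5]), (c[6], c[7])))

e1 = octonion(0, 1, 0, 0, 0, 0, 0, 0)
e2 = octonion(0, 0, 1, 0, 0, 0, 0, 0)
e4 = octonion(0, 0, 0, 0, 1, 0, 0, 0)

lhs = mul(mul(e1, e2), e4)   # (e1 e2) e4
rhs = mul(e1, mul(e2, e4))   # e1 (e2 e4)
print(lhs == rhs)            # False: the product is not associative
print(lhs == neg(rhs))       # True: the two orderings differ by a sign
```

Because a representation of an algebra on linear operators would force associativity, no such table of octonionic products can be realized by quantum mechanical observables.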

There could hypothetically be octonions outside observables, the octonions could play a different role. But there's really no known reason why physical objects should make non-associative division algebras useful. There's no known possible physical application of the main nontrivial operation inside octonions, the non-associative product. Something could hypothetically change about this negative statement in the future but there's a more down-to-Earth statement we may be certain about: Furey hasn't changed anything about the physical irrelevance of the octonionic product (yet).

So when she uses tensor products like $$\HHH\otimes \OO$$, the product has no physical implications. The tensor product $$\HHH\otimes \OO$$ looks spicy but in the end, it's only used as a generic space $$\RR^{32}$$. It's just some 32 real components. The tensor product exists but it inherits no interesting properties from the factors. In particular, the tensor product isn't a division algebra. The octonionic product – which is what makes octonions octonionic – is completely forgotten throughout her papers. So the claim that she has used octonions or found octonions somewhere in the Standard Model is just a sleight-of-hand, an illusion designed to impress those who don't look carefully at all.
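That a tensor product of division algebras generically fails to be a division algebra can be checked in one line – this worked example is my own, not one from the papers. Already in the simpler case $$\HHH\otimes\HHH$$ there are zero divisors: the cross terms cancel in

$$(1\otimes 1 + i\otimes i)(1\otimes 1 - i\otimes i) = 1\otimes 1 - i^2\otimes i^2 = 1\otimes 1 - 1\otimes 1 = 0,$$

so two nonzero elements multiply to zero. The same trick with a quaternionic unit inside $$\OO$$ produces zero divisors in $$\HHH\otimes\OO$$ as well, so neither tensor product can be a division algebra.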

The octonions have the $$G_2$$ automorphism group, a subgroup of $$SO(7)$$. Well, $$G_2$$ is surely not a symmetry of the Standard Model. In particular, fermionic fields of the Standard Model don't form full representations of $$G_2$$. There are many reasons why they don't. The proposal that "$$G_2$$ is a symmetry of the Standard Model" is so ludicrously wrong that you may prove it wrong immediately, in many different ways.

For example, $$G_2$$, like $$E_8$$, only has real representations. So when you decompose it to representations of $$SU(3)$$ etc., you will always find $${\bf 3}$$ and $$\bar{\bf 3}$$ in pairs. For every color triplet, for example, there will be the antitriplet that has the same handedness under the Lorentz group and the same hypercharge. But the Standard Model is chiral. The left-handed and right-handed quarks and antiquarks (and leptons and antileptons) carry different hypercharges. The hypercharge of the fields has a sign correlated with the field's being in $${\bf 3}$$ or $$\bar{\bf 3}$$, respectively.
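The reality of the $$G_2$$ representations is visible directly in the branching rules under its $$SU(3)$$ subgroup – a standard group-theory fact, recalled here for illustration rather than taken from the papers under discussion. The two smallest representations decompose as

$$\mathbf{7}\to\mathbf{3}\oplus\bar{\mathbf{3}}\oplus\mathbf{1},\qquad \mathbf{14}\to\mathbf{8}\oplus\mathbf{3}\oplus\bar{\mathbf{3}},$$

so the $$\mathbf{3}$$ and $$\bar{\mathbf{3}}$$ always appear in pairs, which is exactly what a chiral spectrum like the Standard Model's forbids.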

The chirality of the weak nuclear force was discovered in a sequence of deep insights in particle physics half a century ago or so (the violation of C, P, CP etc.). Mathematically speaking, these discoveries have also shown that we need complex – i.e. non-real, non-quaternionic – representations to describe the fermionic fields. (There are no octonionic representations at all because representations act by associative linear maps while octonions violate associativity.) This was a (moderate) revolution and there can't be any full-blown counterrevolution because the previous, innocent, left-right-symmetric image of the Universe was falsified and the falsification is irreversible in physics.

So obviously, $$G_2$$ was never used as a grand unified group. It's too small, too. It's ludicrously wrong at many levels. The octonions are a rather sophisticated special algebraic structure but that doesn't mean that they're relevant for the Standard Model – or anything else in Nature that someone finds important. They're not.

I have banned roughly five obnoxious trolls who were attacking me personally for saying that "there aren't octonions in the Standard Model and papers claiming otherwise are wrong or pseudoscience". In fact, I noticed something remarkable (the octonion wars weren't the first context in which I noticed it). They find the absence of octonions in the Standard Model so incredibly heretical that they are not even able to repeat my simple statement!

In practice, almost all these trolls have distorted – and incredibly softened – my statements because they would probably die immediately if they dared to repeat my simple statements. So they claimed that I wrote that the octonions weren't "fruitful" in the Standard Model and the future may show that the papers weren't important, and so on.

That's not what I wrote and what I say. I say that the papers are complete garbage, they are demonstrably wrong now, no relationship between the Standard Model and octonions has been found in these papers as they exist, and because these are mathematical facts, nothing can possibly change about these facts in the future. You know, that's one of the glorious features of mathematics (and mathematical physics) that one may unambiguously and permanently say that some statements are right and some statements are wrong – instead of the omnipresent would-be diplomatic fog that the folks from the "humanities" prefer at all times. There's simply zero evidence of a relationship in her papers and a competent physicist who is asked to find octonions inside the Standard Model will end up with the answer "there aren't any", whether or not he is allowed to use the existing literature on similar questions.

Well, most actual physics researchers will leave some room for a possible new discovery in the future and they may choose a welcoming language but when they review the existing papers and understand what is being said, they will agree that the discovery hasn't been made yet.

What's going on is that these fanatical, idiotic trolls are – not so subtly – intimidating us all the time. "You can't possibly say that this paper is wrong," we effectively hear. At most, you may say "you are not completely certain whether this brilliant paper will be fruitful in the year 2100", and even that is an unforgivable heresy.

I am sorry, comrades, but your rules don't apply to me. Furey's and Dixon's papers on octonions are pure crackpottery, they are completely wrong, the chance that something will change about this fact in the future is zero, people who suggest that these papers have the same status as big papers on string theory are just psychopaths, and your fanatical and unfriendly defense of the indefensible and your efforts to silence scientists mean that you are a threat for the civilization and the civilization will either die in a slow death or it will have to look for ways to prevent the material like you from spreading on the surface of Planet Earth.

And that's the memo.

## July 23, 2018

### Jon Butterworth - Life and Physics

Two quarks for Muster Higgs

Since the big discovery of 2012, the Large Hadron Collider at CERN has been accumulating data and making steady progress. Two recent results establish the origins of the mass of the two heaviest quarks

At the Guardian.

## July 20, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular.  Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process; it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resistors of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

### The n-Category Cafe

Compositionality: the Editorial Board

An editorial board has now been chosen for the journal Compositionality, and they’re waiting for people to submit papers.

We are happy to announce the founding editorial board of Compositionality, featuring established researchers working across logic, computer science, physics, linguistics, coalgebra, and pure category theory (see the full list below). Our steering board considered many strong applications to our initial open call for editors, and it was not easy narrowing down to the final list, but we think that the quality of this editorial board and the general response bodes well for our growing research community.

In the meantime, we hope you will consider submitting something to our first issue. Look out in the coming weeks for the journal’s official open-for-submissions announcement.

The editorial board of Compositionality:

• Corina Cristea, University of Southampton, UK

• Ross Duncan, University of Strathclyde, UK

• Andrée Ehresmann, University of Picardie Jules Verne, France

• Tobias Fritz, Max Planck Institute, Germany

• Neil Ghani, University of Strathclyde, UK

• Dan Ghica, University of Birmingham, UK

• Jeremy Gibbons, University of Oxford, UK

• Nick Gurski, Case Western Reserve University, USA

• Helle Hvid Hansen, Delft University of Technology, Netherlands

• Chris Heunen, University of Edinburgh, UK

• Aleks Kissinger, Radboud University, Netherlands

• Joachim Kock, Universitat Autònoma de Barcelona, Spain

• Martha Lewis, University of Amsterdam, Netherlands

• Samuel Mimram, École Polytechnique, France

• Simona Paoli, University of Leicester, UK

• Dusko Pavlovic, University of Hawaii, USA

• Christian Retoré, Université de Montpellier, France

• Peter Selinger, Dalhousie University, Canada

• Pawel Sobocinski, University of Southampton, UK

• David Spivak, MIT, USA

• Jamie Vicary, University of Birmingham, UK

• Simon Willerton, University of Sheffield, UK

Best,
Joshua Tan, Brendan Fong, and Nina Otter
Executive editors, Compositionality

## July 19, 2018

### Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

### Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10²⁰. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, for each megaparsec (Mpc) of distance); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

### Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

### Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

### The n-Category Cafe

The Duties of a Mathematician

What are the ethical responsibilities of a mathematician? I can think of many, some of which I even try to fulfill, but this document raises one that I have mixed feelings about:

Namely:

The ethical responsibility of mathematicians includes a certain duty, never precisely stated in any formal way, but of course felt by and known to serious researchers: to dedicate an appropriate amount of time to study each new groundbreaking theory or proof in one’s general area. Truly groundbreaking theories are rare, and this duty is not too cumbersome. This duty is especially applicable to researchers who are in the most active research period of their mathematical life and have already senior academic positions. In real life this informal duty can be taken to mean that a reasonable number of mathematicians in each major mathematical country studies such groundbreaking theories.

My first reaction to this claimed duty was quite personal: namely, that I couldn’t possibly meet it. My research is too thinly spread over too many fields to “study each new groundbreaking theory or proof” in my general area. While Fesenko says that “truly groundbreaking theories are rare, and this duty is not too cumbersome”, I feel the opposite. I’d really love to learn more about the Langlands program, and the amplituhedron, and Connes’ work on the Riemann Hypothesis, and Lurie’s work on $(\infty,1)$-topoi, and homotopy type theory, and Monstrous Moonshine, and new developments in machine learning, and … many other things. But there’s not enough time!

More importantly, while it’s undeniably good to know what’s going on, that doesn’t make it a “duty”. I believe mathematicians should be free to study what they’re interested in.

But perhaps Fesenko has a specific kind of mathematician in mind, without mentioning it: not the larks who fly free, but the solid, established “gatekeepers” and “empire-builders”. These are the people who master a specific field, gain academic power, and strongly influence the field’s development, often by making pronouncements about what’s important and what’s not.

For such people to ignore promising developments in their self-proclaimed realm of expertise can indeed be damaging. Perhaps these people have a duty to spend a certain amount of time studying each new ground-breaking theory in their ambit. But I’m fundamentally suspicious of these people in the first place! So, I’m not eager to figure out their duties.

What do you think about “the duties of a mathematician”?

Of course I would be remiss not to mention the obvious, namely that Fesenko is complaining about the reception of Mochizuki’s work on inter-universal Teichmüller theory. If you read his whole article, that will be completely clear. But this is a controversial subject, and “hard cases make bad law”—so while it makes a fascinating read, I’d rather talk about the duties of a mathematician more generally. If you want to discuss what Fesenko has to say about inter-universal Teichmüller theory, Peter Woit’s blog might be a better place, since he’s jumped right into the middle of that conversation:

As for me, my joy is to learn new mathematics, figure things out, explain things, and talk to people about math. My duties include helping students who are having trouble, trying to make mathematics open-access, and coaxing mathematicians to turn their skills toward saving the planet. The difference is that joy makes me do things spontaneously, while duty taps me on the shoulder and says “don’t forget….”

## July 18, 2018

### Clifford V. Johnson - Asymptotia

Muskovites Vs Anti-Muskovites…

Saw this split over Elon Musk coming over a year ago. This is a panel from my graphic short story “Resolution”, which appears in the 2018 SF anthology Twelve Tomorrows, edited by Wade Roush. (There’s even an e-version now if you want fast access!) -cvj

The post Muskovites Vs Anti-Muskovites… appeared first on Asymptotia.

## July 17, 2018

### John Baez - Azimuth

Compositionality: the Editorial Board

The editors of this journal have an announcement:

We are happy to announce the founding editorial board of Compositionality, featuring established researchers working across logic, computer science, physics, linguistics, coalgebra, and pure category theory (see the full list below). Our steering board considered many strong applications to our initial open call for editors, and it was not easy narrowing down to the final list, but we think that the quality of this editorial board and the general response bodes well for our growing research community.

In the meantime, we hope you will consider submitting something to our first issue. Look out in the coming weeks for the journal’s official open-for-submissions announcement.

The editorial board of Compositionality:

• Corina Cristea, University of Southampton, UK
• Ross Duncan, University of Strathclyde, UK
• Andrée Ehresmann, University of Picardie Jules Verne, France
• Tobias Fritz, Max Planck Institute, Germany
• Neil Ghani, University of Strathclyde, UK
• Dan Ghica, University of Birmingham, UK
• Jeremy Gibbons, University of Oxford, UK
• Nick Gurski, Case Western Reserve University, USA
• Helle Hvid Hansen, Delft University of Technology, Netherlands
• Chris Heunen, University of Edinburgh, UK
• Aleks Kissinger, Radboud University, Netherlands
• Joachim Kock, Universitat Autònoma de Barcelona, Spain
• Martha Lewis, University of Amsterdam, Netherlands
• Samuel Mimram, École Polytechnique, France
• Simona Paoli, University of Leicester, UK
• Dusko Pavlovic, University of Hawaii, USA
• Christian Retoré, Université de Montpellier, France
• Peter Selinger, Dalhousie University, Canada
• Pawel Sobocinski, University of Southampton, UK
• David Spivak, MIT, USA
• Jamie Vicary, University of Birmingham, UK
• Simon Willerton, University of Sheffield, UK

Best,
Josh, Brendan, and Nina
Executive editors, Compositionality

## July 16, 2018

### Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist.

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC).

## July 13, 2018

### John Baez - Azimuth

Applied Category Theory Course: Collaborative Design

In my online course we’re reading the fourth chapter of Fong and Spivak’s book Seven Sketches. Chapter 4 is about collaborative design: building big projects from smaller parts. This is based on work by Andrea Censi:

• Andrea Censi, A mathematical theory of co-design.

The main mathematical content of this chapter is the theory of enriched profunctors. We’ll mainly talk about enriched profunctors between categories enriched in monoidal preorders. The picture above shows what one of these looks like!

Here are my lectures so far:

### John Baez - Azimuth

Random Points on a Group

In Random Points on a Sphere (Part 1), we learned an interesting fact. You can take the unit sphere in $\mathbb{R}^n$, randomly choose two points on it, and compute their distance. This gives a random variable, whose moments you can calculate.

And now the interesting part: when n = 1, 2 or 4, and seemingly in no other cases, all the even moments are integers.

These are the dimensions in which the spheres are groups. We can prove that the even moments are integers because they are differences of dimensions of certain representations of these groups. Rogier Brussee and Allen Knutson pointed out that if we want to broaden our line of investigation, we can look at other groups. So that’s what I’ll do today.

If we take a representation of a compact Lie group $G,$ we get a map from the group into a space of square matrices. Since there is a standard metric on any space of square matrices, this lets us define the distance between two points on the group. This is different from the distance defined using the shortest geodesic in the group: instead, we’re taking a straight-line path in the larger space of matrices.

If we randomly choose two points on the group, we get a random variable, namely the distance between them. We can compute the moments of this random variable, and today I’ll prove that the even moments are all integers.

So, we get a sequence of integers from any representation $\rho$ of any compact Lie group $G.$ So far we’ve only studied groups that are spheres:

• The defining representation of $\mathrm{O}(1) \cong S^0$ on the real numbers $\mathbb{R}$ gives the powers of 2.

• The defining representation of $\mathrm{U}(1) \cong S^1$ on the complex numbers $\mathbb{C}$ gives the central binomial coefficients $\binom{2n}{n}.$

• The defining representation of $\mathrm{Sp}(1) \cong S^3$ on the quaternions $\mathbb{H}$ gives the Catalan numbers.
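
As a sanity check on the $\mathrm{U}(1)$ entry, here is a small numerical sketch of my own (not from the post). For the defining representation on $\mathbb{C}$ the distance squared is $|e^{i\theta} - 1|^2 = 2 - 2\cos\theta,$ and its moments over the circle should be the central binomial coefficients:

```python
import math

def u1_moment(k, N=256):
    """k-th moment of d^2 = |e^{i theta} - 1|^2 = 2 - 2 cos(theta)
    over U(1) with normalized Haar (uniform) measure.  The trapezoid
    rule on a uniform periodic grid is exact for trig polynomials of
    degree < N, so this is an exact integral up to rounding."""
    return sum((2 - 2 * math.cos(2 * math.pi * j / N)) ** k
               for j in range(N)) / N

# Compare with the central binomial coefficients C(2k, k): 2, 6, 20, 70, 252
for k in range(1, 6):
    print(k, round(u1_moment(k)), math.comb(2 * k, k))
```

The trapezoid rule is the natural choice here since the integrand is a trigonometric polynomial, for which equally spaced sampling integrates exactly.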

It could be fun to work out these sequences for other examples. Our proof that the even moments are integers will give a way to calculate these sequences, not by doing integrals over the group, but by counting certain ‘random walks in the Weyl chamber’ of the group. Unfortunately, we need to count walks in a certain weighted way that makes things a bit tricky for me.

But let’s see why the even moments are integers!

If our group representation is real or quaternionic, we can either turn it into a complex representation or adapt my argument below. So, let’s do the complex case.

Let $G$ be a compact Lie group with a unitary representation $\rho$ on $\mathbb{C}^n.$ This means we have a smooth map

$\rho \colon G \to \mathrm{End}(\mathbb{C}^n)$

where $\mathrm{End}(\mathbb{C}^n)$ is the algebra of $n \times n$ complex matrices, such that

$\rho(1) = 1$

$\rho(gh) = \rho(g) \rho(h)$

and

$\rho(g) \rho(g)^\dagger = 1$

where $A^\dagger$ is the conjugate transpose of the matrix $A.$

To define a distance between points on $G$ we’ll give $\mathrm{End}(\mathbb{C}^n)$ its metric

$\displaystyle{ d(A,B) = \sqrt{ \sum_{i,j} \left|A_{ij} - B_{ij}\right|^2} }$

This clearly makes $\mathrm{End}(\mathbb{C}^n)$ into a $2n^2$-dimensional Euclidean space. But a better way to think about this metric is that it comes from the norm

$\displaystyle{ \|A\|^2 = \mathrm{tr}(AA^\dagger) = \sum_{i,j} |A_{ij}|^2 }$

where $\mathrm{tr}$ is the trace, or sum of the diagonal entries. We have

$d(A,B) = \|A - B\|$

I want to think about the distance between two randomly chosen points in the group, where ‘randomly chosen’ means with respect to normalized Haar measure: the unique translation-invariant probability Borel measure on the group. But because this measure and also the distance function are translation-invariant, we can equally well think about the distance between the identity 1 and one randomly chosen point $g$ in the group. So let’s work out this distance!

I really mean the distance between $\rho(g)$ and $\rho(1),$ so let’s compute that. Actually its square will be nicer, which is why we only consider even moments. We have

$\begin{array}{ccl} d(\rho(g),\rho(1))^2 &=& \|\rho(g) - \rho(1)\|^2 \\ \\ &=& \|\rho(g) - 1\|^2 \\ \\ &=& \mathrm{tr}\left((\rho(g) - 1)(\rho(g) - 1)^\dagger\right) \\ \\ &=& \mathrm{tr}\left(\rho(g)\rho(g)^\dagger - \rho(g) - \rho(g)^\dagger + 1\right) \\ \\ &=& \mathrm{tr}\left(2 - \rho(g) - \rho(g)^\dagger \right) \end{array}$

Now, any representation $\sigma$ of $G$ has a character

$\chi_\sigma \colon G \to \mathbb{C}$

defined by

$\chi_\sigma(g) = \mathrm{tr}(\sigma(g))$

and characters have many nice properties. So, we should rewrite the distance between $g$ and the identity using characters. We have our representation $\rho,$ whose character can be seen lurking in the formula we saw:

$d(\rho(g),\rho(1))^2 = \mathrm{tr}\left(2 - \rho(g) - \rho(g)^\dagger \right)$

But there’s another representation lurking here, the dual

$\rho^\ast \colon G \to \mathrm{End}(\mathbb{C}^n)$

given by

$\rho^\ast(g)_{ij} = \overline{\rho(g)_{ij}}$

This is a fairly lowbrow way of defining the dual representation, good only for unitary representations on $\mathbb{C}^n,$ but it works well for us here, because it lets us instantly see

$\mathrm{tr}(\rho(g)^\dagger) = \mathrm{tr}(\rho^\ast(g)) = \chi_{\rho^\ast}(g)$

This is useful because it lets us write our distance squared

$d(\rho(g),\rho(1))^2 = \mathrm{tr}\left(2 - \rho(g) - \rho(g)^\dagger \right)$

in terms of characters:

$d(\rho(g),\rho(1))^2 = 2n - \chi_\rho(g) - \chi_{\rho^\ast}(g)$

So, the distance squared is an integral linear combination of characters. (The constant function 1 is the character of the 1-dimensional trivial representation.)

And this does the job: it shows that all the even moments of our distance squared function are integers!

Why? Because of these two facts:

1) If you take an integral linear combination of characters, and raise it to a power, you get another integral linear combination of characters.

2) If you take an integral linear combination of characters, and integrate it over $G,$ you get an integer.

I feel like explaining these facts a bit further, because they’re part of a very beautiful branch of math, called character theory, which every mathematician should know. So here’s a quick intro to character theory for beginners. It’s not as elegant as I could make it; it’s not as simple as I could make it: I’ll try to strike a balance here.

There’s an abelian group $R(G)$ consisting of formal differences of isomorphism classes of representations of $G$, mod the relation

$[\rho] + [\sigma] = [\rho \oplus \sigma]$

Elements of $R(G)$ are called virtual representations of $G.$ Unlike actual representations we can subtract them. We can also add them, and the above formula relates addition in $R(G)$ to direct sums of representations.

We can also multiply them, by saying

$[\rho] [\sigma] = [\rho \otimes \sigma]$

and decreeing that multiplication distributes over addition and subtraction. This makes $R(G)$ into a ring, called the representation ring of $G.$

There’s a map

$\chi \colon R(G) \to C(G)$

where $C(G)$ is the ring of continuous complex-valued functions on $G.$ This map sends each finite-dimensional representation $\rho$ to its character $\chi_\rho.$ This map is one-to-one because we know a representation up to isomorphism if we know its character. This map is also a ring homomorphism, since

$\chi_{\rho \oplus \sigma} = \chi_\rho + \chi_\sigma$

and

$\chi_{\rho \otimes \sigma} = \chi_\rho \chi_\sigma$

These facts are easy to check directly.

We can integrate continuous complex-valued functions on $G,$ so we get a map

$\displaystyle{\int} \colon C(G) \to \mathbb{C}$

The first non-obvious fact in character theory is that we can compute inner products of characters as follows:

$\displaystyle{\int} \overline{\chi_\sigma} \chi_\rho = \dim(\mathrm{hom}(\sigma,\rho))$

where the expression at right is the dimension of the space of ‘intertwining operators’, or morphisms of representations, between the representation $\sigma$ and the representation $\rho.$

What matters most for us now is that this inner product is an integer. In particular, if $\chi_\rho$ is the character of any representation,

$\displaystyle{\int} \chi_\rho$

is an integer because we can take $\sigma$ to be the trivial representation in the previous formula, giving $\chi_\sigma = 1.$

Thus, the map

$R(G) \stackrel{\chi}{\longrightarrow} C(G) \stackrel{\int}{\longrightarrow} \mathbb{C}$

actually takes values in $\mathbb{Z}.$

Now, our distance squared function

$2n - \chi_\rho - \chi_{\rho^\ast} \in C(G)$

is actually the image under $\chi$ of an element of the representation ring, namely

$2n - [\rho] - [\rho^\ast]$

So the same is true for any of its powers—and when we integrate any of these powers we get an integer!

This stuff may seem abstract, but if you’re good at tensoring representations of some group, like $\mathrm{SU}(3),$ you should be able to use it to compute the even moments of the distance function on this group more efficiently than using the brute-force direct approach. Instead of complicated integrals we wind up doing combinatorics.

I would like to know what sequence of integers we get for $\mathrm{SU}(3).$ A much easier, less thrilling but still interesting example is $\mathrm{SO}(3).$ This is the 3-dimensional real projective space $\mathbb{R}\mathrm{P}^3,$ which we can think of as embedded in the 9-dimensional space of $3\times 3$ real matrices. It’s sort of cool that I could now work out the even moments of the distance function on this space by hand! But I haven’t done it yet.
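The simplest compact case makes a nice sanity check (my example, not a computation from the post): for $\mathrm{U}(1)$ with its defining representation on $\mathbb{C}$, the distance-squared function is $|e^{i\theta} - 1|^2 = 2 - 2\cos\theta = 2 - \chi - \overline{\chi}$, and its $k$-th moment works out to the central binomial coefficient $\binom{2k}{k}$, an integer as promised:

```python
import math

def moment(k, n=1024):
    # midpoint rule on [0, 2*pi], normalised; this is exact here, since
    # the integrand is a trigonometric polynomial of degree k < n
    total = 0.0
    for i in range(n):
        theta = 2 * math.pi * (i + 0.5) / n
        total += (2 - 2 * math.cos(theta)) ** k
    return total / n

for k in range(1, 6):
    print(k, round(moment(k)), math.comb(2 * k, k))
# the moments come out as the integers 2, 6, 20, 70, 252
```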

### Clifford V. Johnson - Asymptotia

Friday will see me busy in the Radio world! Two things: (1) On the WNPR Connecticut morning show “Where We Live” they’ll be doing Summer reading recommendations. I’ll be on there live talking about my graphic non-fiction book The Dialogues: Conversations about the Nature of the Universe. Tune in either … Click to continue reading this post

## July 12, 2018

### Clifford V. Johnson - Asymptotia

Splashes

In case you’re wondering, after yesterday’s post… Yes I did find some time to do a bit of sketching. Here’s one that did not get finished but was fun for working the rust off… The caption from instagram says: Quick Sunday watercolour pencil dabbling … been a long time. This … Click to continue reading this post

The post Splashes appeared first on Asymptotia.

### Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

## July 09, 2018

### The n-Category Cafe

Beyond Classical Bayesian Networks

guest post by Pablo Andres-Martinez and Sophie Raynor

In the final installment of the Applied Category Theory seminar, we discussed the 2014 paper “Theory-independent limits on correlations from generalized Bayesian networks” by Henson, Lal and Pusey.

In this post, we’ll give a short introduction to Bayesian networks, explain why quantum mechanics means that one may want to generalise them, and present the main results of the paper. That’s a lot to cover, and there won’t be a huge amount of category theory, but we hope to give the reader some intuition about the issues involved, and another example of monoidal categories used in causal theory.

## Introduction

Bayesian networks are a graphical modelling tool used to show how random variables interact. A Bayesian network consists of a pair $(G, P)$ of a directed acyclic graph (DAG) $G$ together with a joint probability distribution $P$ on its nodes, satisfying the Markov condition. Intuitively the graph describes a flow of information.

The Markov condition says that the system doesn’t have memory. That is, the distribution on a given node $Y$ depends only on the distributions on the nodes $X$ for which there is an edge $X \to Y$. Consider the following chain of binary events. In spring, the pollen in the air may cause someone to have an allergic reaction that may make them sneeze.

In this case the Markov condition says that given that you know that someone is having an allergic reaction, whether or not it is spring is not going to influence your belief about the likelihood of them sneezing. Which seems sensible.
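This chain can be checked numerically; the probabilities below are invented values, for illustration only. Building the joint distribution from the Markov factorisation forces $P(\mathrm{sneeze} \mid \mathrm{allergy}, \mathrm{spring}) = P(\mathrm{sneeze} \mid \mathrm{allergy})$:

```python
from itertools import product

# Invented probabilities, for illustration only.
P_spring = {True: 0.25, False: 0.75}
P_allergy_given_spring = {True: {True: 0.40, False: 0.60},
                          False: {True: 0.05, False: 0.95}}
P_sneeze_given_allergy = {True: {True: 0.80, False: 0.20},
                          False: {True: 0.10, False: 0.90}}

# Joint from the Markov factorisation P(s, a, n) = P(s) P(a|s) P(n|a)
joint = {(s, a, n): P_spring[s] * P_allergy_given_spring[s][a]
                    * P_sneeze_given_allergy[a][n]
         for s, a, n in product([True, False], repeat=3)}

def P(pred):
    # probability of the event described by the predicate
    return sum(p for key, p in joint.items() if pred(*key))

# Given the allergic reaction, knowing the season adds nothing:
p_n_given_a_s = P(lambda s, a, n: s and a and n) / P(lambda s, a, n: s and a)
p_n_given_a   = P(lambda s, a, n: a and n) / P(lambda s, a, n: a)
print(p_n_given_a_s, p_n_given_a)   # both 0.8
```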

Bayesian networks are useful

• as an inference tool, thanks to belief propagation algorithms,

• and because, given a Bayesian network $(G, P)$, we can describe d-separation properties on $G$ which enable us to discover conditional independences in $P$.

It is this second point that we’ll be interested in here.

Before getting into the details of the paper, let’s try to motivate this discussion by explaining its title, “Theory-independent limits on correlations from generalized Bayesian networks”, and giving a little more background to the problem it aims to solve.

Crudely put, the paper aims to generalise a method that assumes classical mechanics to one that holds in quantum and more general theories.

Classical mechanics rests on two intuitively reasonable and desirable assumptions, together called local causality,

• Causality:

Causality is usually treated as a physical primitive. Simply put, it is the principle that there is a (partial) ordering of events in spacetime. In order to have information flow from event $A$ to event $B$, $A$ must be in the past of $B$.

Physicists often define causality in terms of a discarding principle: If we ignore the outcome of a physical process, it doesn’t matter what process has occurred. Or, put another way, the outcome of a physical process doesn’t change the initial conditions.

• Locality:

Locality is the assumption that, at any given instant, the values of any particle’s properties are independent of any other particle. Intuitively, it says that particles are individual entities that can be understood in isolation from any other particle.

Physicists usually picture particles as having a private list of numbers determining their properties. The principle of locality would be violated if any of the entries of such a list were a function whose domain is another particle’s property values.

In 1935 Einstein, Podolsky and Rosen showed that quantum mechanics (then a newly born theory) predicted that a pair of particles could be prepared so that applying an action on one of them would instantaneously affect the other, no matter how distant in space they were, thus contradicting local causality. This seemed so unreasonable that the authors presented it as evidence that quantum mechanics was wrong.

But Einstein was wrong. In 1964, John S. Bell laid the groundwork for an experimental test that would demonstrate that Einstein’s “spooky action at a distance” (Einstein’s own words), now known as entanglement, was indeed real. Bell’s experiment has been replicated countless times and has plenty of variations. This video gives a detailed explanation of one of these experiments, for a non-physicist audience.

But then, if acting on a particle has an instantaneous effect on a distant point in space, one of the two principles above is violated: on one hand, if we acted on both particles at the same time, each action being a distinct event, both would be affecting each other’s result, so it would not be possible to decide on an ordering; causality would be broken. The other option would be to reject locality: a property’s value may be given by a function, so the resulting value may instantaneously change when the distant ‘domain’ particle is altered. In that case, the particles’ information was never separated in space, as they were never truly isolated, so causality is preserved.

Since causality is integral to our understanding of the world and forms the basis of scientific reasoning, the standard interpretation of quantum mechanics is to accept non-locality.

The definition of Bayesian networks implies a discarding principle and hence there is a formal sense in which they are causal (even if, as we shall see, the correlations they model do not always reflect the temporal order). Under this interpretation, the causal theory Bayesian networks describe is classical. Precisely, they can only model probability distributions that satisfy local causality. Hence, in particular, they are not sufficient to model all physical correlations.

The goal of the paper is to develop a framework that generalises Bayesian networks and d-separation results, so that we can still use graph properties to reason about conditional dependence under any given causal theory, be it classical, quantum, or even more general. In particular, this theory will be able to handle all physically observed correlations, and all theoretically postulated correlations.

Though category theory is not mentioned explicitly, the authors achieve their goal by using the categorical framework of operational probabilistic theories (OPTs).

## Bayesian networks and d-separation

Consider the situation in which we have three Boolean random variables. Alice is either sneezing or she is not; she either has a fever or she does not; and she may or may not have flu.

Now, flu can cause both sneezing and fever, that is

$P(\mathrm{sneezing} \mid \mathrm{flu}) \neq P(\mathrm{sneezing}) \quad \text{and likewise} \quad P(\mathrm{fever} \mid \mathrm{flu}) \neq P(\mathrm{fever})$

so we could represent this graphically as

Moreover, intuitively we wouldn’t expect there to be any other edges in the above graph. Sneezing and fever, though correlated (each is more likely if Alice has flu), are not direct causes of each other. That is,

$P(\mathrm{sneezing} \mid \mathrm{fever}) \neq P(\mathrm{sneezing}) \quad \text{but} \quad P(\mathrm{sneezing} \mid \mathrm{fever}, \mathrm{flu}) = P(\mathrm{sneezing} \mid \mathrm{flu}).$

### Bayesian networks

Let $G$ be a directed acyclic graph, or DAG. (Here a directed graph is a presheaf on $(\bullet \rightrightarrows \bullet)$.)

The set $\mathrm{Pa}(Y)$ of parents of a node $Y$ of $G$ contains those nodes $X$ of $G$ such that there is a directed edge $X \to Y$.

So, in the example above, $\mathrm{Pa}(\mathrm{flu}) = \emptyset$ while $\mathrm{Pa}(\mathrm{fever}) = \mathrm{Pa}(\mathrm{sneezing}) = \{\mathrm{flu}\}$.

To each node $X$ of a directed graph $G$, we may associate a random variable, also denoted $X$. If $V$ is the set of nodes of $G$ and $(x_X)_{X \in V}$ is a choice of value $x_X$ for each node $X$, such that $y$ is the chosen value for $Y$, then $\mathrm{pa}(y)$ will denote the $\mathrm{Pa}(Y)$-tuple of values $(x_X)_{X \in \mathrm{Pa}(Y)}$.

To define Bayesian networks, and establish the notation, let’s revise some probability basics.

Let $P(x, y \mid z)$ mean $P(X = x \text{ and } Y = y \mid Z = z)$, the probability that $X$ has the value $x$ and $Y$ has the value $y$, given that $Z$ has the value $z$. Recall that this is given by

$P(x, y \mid z) = \frac{P(x, y, z)}{P(z)}.$

The chain rule says that, given a value $x$ of $X$ and sets of values $\Omega, \Lambda$ of other random variables,

$P(x, \Omega \mid \Lambda) = P(x \mid \Lambda)\, P(\Omega \mid x, \Lambda).$

Random variables $X$ and $Y$ are said to be conditionally independent given $Z$, written $X \perp\!\!\!\perp Y \mid Z$, if for all values $x$ of $X$, $y$ of $Y$ and $z$ of $Z$

$P(x, y \mid z) = P(x \mid z)\, P(y \mid z).$

By the chain rule this is equivalent to

$P(x \mid y, z) = P(x \mid z), \quad \forall x, y, z.$

More generally, we may replace $X$, $Y$ and $Z$ with sets of random variables. So, in the special case that $Z$ is empty, $X$ and $Y$ are independent if and only if $P(x, y) = P(x)\, P(y)$ for all $x, y$.

#### Markov condition

A joint probability distribution $P$ on the nodes of a DAG $G$ is said to satisfy the Markov condition if for any set of random variables $\{X_i\}_{i=1}^n$ on the nodes of $G$, with choice of values $\{x_i\}_{i=1}^n$,

$P(x_1, \dots, x_n) = \prod_{i=1}^n P(x_i \mid \mathrm{pa}(x_i)).$

So, for the flu, fever and sneezing example above, a distribution $P$ satisfies the Markov condition if

$P(\mathrm{flu}, \mathrm{fever}, \mathrm{sneezing}) = P(\mathrm{fever} \mid \mathrm{flu})\, P(\mathrm{sneezing} \mid \mathrm{flu})\, P(\mathrm{flu}).$

A Bayesian network is defined as a pair $(G, P)$ of a DAG $G$ and a joint probability distribution $P$ on the nodes of $G$ that satisfies the Markov condition with respect to $G$. This means that each node in a Bayesian network is conditionally independent of its non-descendants, given its parents.

In particular, given a Bayesian network $(G, P)$ such that there is a directed edge $X \to Y$, the Markov condition implies that

$\sum_{y} P(x, y) = \sum_{y} P(x)\, P(y \mid x) = P(x) \sum_{y} P(y \mid x) = P(x),$

which may be interpreted as a discard condition. (The ordering is reflected by the fact that we can’t derive $P(y)$ from $\sum_{x} P(x, y) = \sum_{x} P(x)\, P(y \mid x)$.)

Let’s consider some simple examples.

Fork

In the example of flu, sneezing and fever above, the graph has a fork shape. For a probability distribution $P$ to satisfy the Markov condition for this graph we must have

$P(x, y, z) = P(x \mid z)\, P(y \mid z)\, P(z), \quad \forall x, y, z.$

However, in general $P(x, y) \neq P(x)\, P(y)$.

In other words, $X \perp\!\!\!\perp Y \mid Z$, though $X$ and $Y$ are not independent. This makes sense: we wouldn’t expect sneezing and fever to be uncorrelated, but given that we know whether or not Alice has flu, telling us that she has a fever isn’t going to tell us anything about her sneezing.
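A quick numeric check of the fork (with invented probabilities, for illustration only): sneezing and fever are conditionally independent given flu but correlated marginally:

```python
from itertools import product

# Invented probabilities, for illustration only.
P_flu = {True: 0.10, False: 0.90}
P_sneeze_given_flu = {True: {True: 0.90, False: 0.10},
                      False: {True: 0.20, False: 0.80}}
P_fever_given_flu  = {True: {True: 0.70, False: 0.30},
                      False: {True: 0.05, False: 0.95}}

# Joint from the Markov factorisation P(z, x, y) = P(x|z) P(y|z) P(z)
joint = {(f, s, v): P_flu[f] * P_sneeze_given_flu[f][s]
                    * P_fever_given_flu[f][v]
         for f, s, v in product([True, False], repeat=3)}

def P(pred):
    return sum(p for key, p in joint.items() if pred(*key))

# Conditionally independent given flu: P(s, v | f) = P(s | f) P(v | f)
lhs = P(lambda f, s, v: f and s and v) / P(lambda f, s, v: f)
rhs = (P(lambda f, s, v: f and s) / P(lambda f, s, v: f)) \
    * (P(lambda f, s, v: f and v) / P(lambda f, s, v: f))
print(lhs, rhs)         # equal (both 0.63)

# ...but sneezing and fever are not marginally independent
p_sv = P(lambda f, s, v: s and v)
p_s, p_v = P(lambda f, s, v: s), P(lambda f, s, v: v)
print(p_sv, p_s * p_v)  # 0.072 vs roughly 0.031
```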

Collider

Reversing the arrows in the fork graph above gives a collider as in the following example.

Clearly, whether or not Alice has allergies other than hayfever is independent of what season it is. So we’d expect a distribution on this graph to satisfy $X \perp\!\!\!\perp Y \mid \emptyset$. However, if we know that Alice is having an allergic reaction, and it happens to be spring, we will likely attribute the reaction to hayfever rather than to another allergy; i.e. $X$ and $Y$ are not conditionally independent given $Z$.

Indeed, the Markov condition and chain rule for this graph give us $X \perp\!\!\!\perp Y \mid \emptyset$:

$P(x, y, z) = P(x)\, P(y)\, P(z \mid x, y) = P(z \mid x, y)\, P(x \mid y)\, P(y), \quad \forall x, y, z,$

from which we cannot derive $P(x \mid z)\, P(y \mid z) = P(x, y \mid z)$. (However, it could still be true for some particular choice of probability distribution.)
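The collider behaves the opposite way to the fork; here is the same kind of numeric check (again with invented probabilities): the two causes are marginally independent, but conditioning on the common effect makes them dependent:

```python
from itertools import product

# Invented probabilities, for illustration only.
P_spring = {True: 0.25, False: 0.75}
P_other  = {True: 0.10, False: 0.90}
# P(reaction = True | spring, other allergy)
P_react = {(True, True): 0.95, (True, False): 0.50,
           (False, True): 0.40, (False, False): 0.01}

joint = {(s, o, r): P_spring[s] * P_other[o]
                    * (P_react[s, o] if r else 1 - P_react[s, o])
         for s, o, r in product([True, False], repeat=3)}

def P(pred):
    return sum(p for key, p in joint.items() if pred(*key))

# Marginally independent: P(spring, other) = P(spring) P(other)
marg_gap = abs(P(lambda s, o, r: s and o)
               - P(lambda s, o, r: s) * P(lambda s, o, r: o))

# ...but dependent once the reaction is observed
p_r = P(lambda s, o, r: r)
cond_gap = abs(P(lambda s, o, r: s and o and r) / p_r
               - (P(lambda s, o, r: s and r) / p_r)
               * (P(lambda s, o, r: o and r) / p_r))
print(marg_gap, cond_gap)  # the first is 0 (up to rounding); the second is clearly nonzero
```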

Chain

Finally, let us return to the chain of correlations presented in the introduction.

Clearly the probabilities that it is spring and that Alice is sneezing are not independent, and indeed we cannot derive $P(x, y) = P(x)\, P(y)$. However, observe that, by the chain rule, a Markov distribution on the chain graph must satisfy $X \perp\!\!\!\perp Y \mid Z$. If we know Alice is having an allergic reaction that is not hayfever, whether or not she is sneezing is not going to affect our guess as to what season it is.

Crucially, in this case, knowing the season is also not going to affect whether we think Alice is sneezing. By definition, conditional independence of $X$ and $Y$ given $Z$ is symmetric in $X$ and $Y$. In other words, a joint distribution $P$ on the variables $X, Y, Z$ satisfies the Markov condition with respect to the chain graph

$X \longrightarrow Z \longrightarrow Y$

if and only if $P$ satisfies the Markov condition on

$Y \longrightarrow Z \longrightarrow X.$

### d-separation

The above observations can be generalised to statements about conditional independences in any Bayesian network. That is, if $(G, P)$ is a Bayesian network then the structure of $G$ is enough to derive all the conditional independences in $P$ that are implied by the graph $G$ (in reality there may be more that have not been included in the network!).

Given a DAG $G$ and a set of vertices $U$ of $G$, let $m(U)$ denote the union of $U$ with all the vertices $v$ of $G$ such that there is a directed edge from $U$ to $v$. The set $W(U)$ will denote the non-inclusive future of $U$, that is, the set of vertices $v$ of $G$ for which there is no directed (possibly trivial) path from $v$ to $U$.

For a graph $G$, let $X, Y, Z$ now denote disjoint subsets of the vertices of $G$ (and their corresponding random variables). Set $W := W(X \cup Y \cup Z)$.

Then $X$ and $Y$ are said to be d-separated by $Z$, written $X \perp Y \mid Z$, if there is a partition $\{U, V, W, Z\}$ of the nodes of $G$ such that

• $X \subseteq U$ and $Y \subseteq V$, and

• $m(U) \cap m(V) \subseteq W$; in other words, $U$ and $V$ have no direct influence on each other.

(This is lemma 19 in the paper.)
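For readers who want to experiment, d-separation is straightforward to implement. The sketch below is my own and uses the standard moralisation criterion (equivalent to the partition formulation above) rather than the paper’s lemma verbatim: restrict to the ancestors of $X \cup Y \cup Z$, marry co-parents, drop edge directions, delete $Z$, and test whether $X$ and $Y$ are disconnected:

```python
from collections import defaultdict

def d_separated(edges, X, Y, Z):
    """Check X d-separated from Y given Z in the DAG given as
    (parent, child) pairs, via the moralisation criterion."""
    parents = defaultdict(set)
    for u, v in edges:
        parents[v].add(u)
    # ancestors of X u Y u Z, including those sets themselves
    anc, stack = set(), list(X | Y | Z)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents[v])
    # moralise: undirected parent-child edges, plus edges marrying co-parents
    adj = defaultdict(set)
    for v in anc:
        ps = parents[v] & anc
        for p in ps:
            adj[p].add(v); adj[v].add(p)
        for p in ps:
            for q in ps:
                if p != q:
                    adj[p].add(q); adj[q].add(p)
    # delete Z, then search for a path from X to Y
    seen, stack = set(Z), list(X)
    while stack:
        v = stack.pop()
        if v in Y:
            return False
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return True

fork     = [('flu', 'sneezing'), ('flu', 'fever')]
collider = [('season', 'reaction'), ('allergy', 'reaction')]
print(d_separated(fork, {'sneezing'}, {'fever'}, {'flu'}))           # True
print(d_separated(fork, {'sneezing'}, {'fever'}, set()))             # False
print(d_separated(collider, {'season'}, {'allergy'}, set()))         # True
print(d_separated(collider, {'season'}, {'allergy'}, {'reaction'}))  # False
```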

Now d-separation is really useful since it tells us everything there is to know about the conditional dependences on Bayesian networks with underlying graph $GG$. Indeed,

#### Theorem 5

• Soundness of d-separation (Verma and Pearl, 1988) If $PP$ is a Markov distribution with respect to a graph $GG$ then for all disjoint subsets $X,Y,ZX,Y,Z$ of nodes of $GG$ $X\perp Y\phantom{\rule{thickmathspace}{0ex}}|\phantom{\rule{thickmathspace}{0ex}}ZX \perp Y \ | \ Z$ implies that $X\perp \phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\phantom{\rule{negativethinmathspace}{0ex}}\perp Y\phantom{\rule{thickmathspace}{0ex}}|\phantom{\rule{thickmathspace}{0ex}}ZX \perp\!\!\!\!\!\!\!\perp Y \ | \ Z$.

• Completeness of d-separation (Meek, 1995). If $X \perp\!\!\!\perp Y \mid Z$ for all $P$ Markov with respect to $G$, then $X \perp Y \mid Z$.

We can combine the previous examples of fork, collider and chain graphs to get the following

A priori, Allergic reaction is conditionally independent of Fever. Indeed, we have the partition

which clearly satisfies d-separation. However, if Sneezing is known then $W = \emptyset$, so Allergic reaction and Fever are not independent. Indeed, if we use the same sets $U$ and $V$ as before, then $m(U) \cap m(V) = \{\mathrm{Sneezing}\}$, so the condition for d-separation fails; and it does so for any possible choice of $U$ and $V$. Interestingly, if Flu is also known, we again obtain conditional independence between Allergic reaction and Fever, as shown below.
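The same collider behaviour can be checked numerically by enumerating a joint distribution that factorises over this graph; all conditional probabilities below are invented for illustration:

```python
from itertools import product

# Illustrative conditional probability tables (binary variables):
# flu and allergy are independent roots, sneezing depends on both,
# fever depends on flu only.
P_flu = {1: 0.1, 0: 0.9}
P_allergy = {1: 0.2, 0: 0.8}
P_sneeze = {(0, 0): 0.05, (0, 1): 0.8, (1, 0): 0.7, (1, 1): 0.95}
P_fever = {1: 0.6, 0: 0.02}

def joint(f, a, s, v):
    ps = P_sneeze[(f, a)] if s else 1 - P_sneeze[(f, a)]
    pv = P_fever[f] if v else 1 - P_fever[f]
    return P_flu[f] * P_allergy[a] * ps * pv

def prob(pred):
    return sum(joint(f, a, s, v)
               for f, a, s, v in product((0, 1), repeat=4)
               if pred(f, a, s, v))

def cond(num, den):
    return prob(lambda *x: num(*x) and den(*x)) / prob(den)

# Marginally: P(fever | allergy) == P(fever), as d-separation predicts
p1 = cond(lambda f, a, s, v: v == 1, lambda f, a, s, v: a == 1)
p2 = prob(lambda f, a, s, v: v == 1)
print(abs(p1 - p2) < 1e-12)   # True

# Given sneezing: allergy now tells us about fever (collider opened)
q1 = cond(lambda f, a, s, v: v == 1, lambda f, a, s, v: s == 1 and a == 1)
q2 = cond(lambda f, a, s, v: v == 1, lambda f, a, s, v: s == 1 and a == 0)
print(abs(q1 - q2) > 1e-3)    # True

# Given sneezing and flu: independence is restored
r1 = cond(lambda f, a, s, v: v == 1,
          lambda f, a, s, v: s == 1 and f == 1 and a == 1)
r2 = cond(lambda f, a, s, v: v == 1,
          lambda f, a, s, v: s == 1 and f == 1 and a == 0)
print(abs(r1 - r2) < 1e-12)   # True
```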

Before describing the limitations of this setup and why we may want to generalise it, it is worth observing that Theorem 5 is genuinely useful computationally. Theorem 5 says that, given a Bayesian network $(G, P)$, the structure of $G$ gives us a recipe to factor $P$, thereby greatly increasing the efficiency of Bayesian inference.
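Concretely, if $\mathrm{pa}(x_i)$ denotes the set of parents of the node $x_i$ in $G$, the Markov condition lets us write the joint distribution as a product of local conditionals:

$P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid \mathrm{pa}(x_i)).$

For the flu example this reads $P(\mathrm{flu}, \mathrm{allergy}, \mathrm{sneezing}, \mathrm{fever}) = P(\mathrm{flu})\, P(\mathrm{allergy})\, P(\mathrm{sneezing} \mid \mathrm{flu}, \mathrm{allergy})\, P(\mathrm{fever} \mid \mathrm{flu})$: four small conditional tables instead of one table with $2^4$ entries.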

### Latent variables, hidden variables, and unobservables

In the context of Bayesian networks, there are two reasons that we may wish to add variables to a probabilistic model, even if we are not entirely sure what the variables signify or how they are distributed. The first reason is statistical and the second is physical.

Consider the example of flu, fever and sneezing discussed earlier. Although our analysis told us $\mathrm{Fever} \perp\!\!\!\perp \mathrm{Sneezing} \mid \mathrm{Flu}$, if we conduct an experiment we are likely to find:

$P(\mathrm{fever} \mid \mathrm{sneezing}, \mathrm{flu}) \neq P(\mathrm{fever} \mid \mathrm{flu}).$

The problem is that the graph does not model reality exactly, but only a simplification of it. After all, there are a whole bunch of things that can cause sneezing and fever. We just don’t know what they all are or how to measure them. So, to make the network work, we may add a hypothetical latent variable that bunches together all the unknown joint causes, and equip it with a distribution that makes the whole network Bayesian, so that we are still able to perform inference methods like belief propagation.

On the other hand, we may want to add variables to a Bayesian network if we have evidence that doing so will provide a better model of reality.

For example, consider the network with just two connected nodes

Every distribution on this graph is Markov, and we would expect there to be a correlation between a road being wet and the grass next to it being wet as well, but most people would claim that there’s something missing from the picture. After all, rain could be a ‘common cause’ of the road and the grass being wet. So, it makes sense to add a third variable.

But maybe we can’t observe whether it has rained or not, only whether the grass and/or road are wet. Nonetheless, the correlation we observe suggests that they have a common cause. To deal with such cases, we could make the third variable hidden. We may not know what information is included in a hidden variable, nor its probability distribution.

All that matters is that the hidden variable helps to explain the observed correlations.

So, latent variables are a statistical tool for ensuring that the Markov condition holds. Hence they are inherently classical and can, in theory, be known. But the universe is not classical, so even if we lump whatever we want into as many classical hidden variables as we want and put them wherever we need, in some cases there will still be empirically observed correlations that do not satisfy the Markov condition.

Most famously, Bell’s experiment shows that it is possible to have distinct variables $A$ and $B$ that exhibit correlations that cannot be explained by any classical hidden variable, since classical variables are restricted by the principle of locality.

In other words, though $A \perp B \mid \Lambda$,

$P(a \mid b, \lambda) \neq P(a \mid \lambda).$

Implicitly, this means that a classical $\Lambda$ is not enough. If we want $P(a \mid b, \lambda) \neq P(a \mid \lambda)$ to hold, $\Lambda$ must be a non-local (non-classical) variable. Quantum mechanics implies that we can’t possibly empirically find the value of a non-local variable (for reasons similar to Heisenberg’s uncertainty principle), so non-classical variables are often called unobservables. In particular, it is meaningless to ask whether $A \perp\!\!\!\perp B \mid \Lambda$, as we would need to know the value of $\Lambda$ in order to condition on it.
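For concreteness, the CHSH version of Bell's argument can be computed directly: any classical local $\Lambda$ forces $|S| \le 2$, while the singlet state with the standard measurement angles reaches $2\sqrt{2}$. A sketch (the state, observables, and angles are the textbook choices, not anything specific to this post):

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def obs(theta):
    """Spin observable (+1/-1 outcomes) in the x-z plane at angle theta."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    """Correlator <psi| A(a) (x) B(b) |psi>; for the singlet, -cos(a - b)."""
    M = np.kron(obs(a), obs(b))
    return np.real(singlet.conj() @ M @ singlet)

# Standard CHSH angles
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), beyond the classical bound of 2
```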

Indeed, this is the key idea behind what follows. We declare certain variables to be unobservable and then insist that conditional (in)dependence only makes sense between observable variables conditioned over observable variables.

## Generalising classical causality

The correlations observed in the Bell experiment can be explained by quantum mechanics. But thought experiments such as the one described here suggest that theoretically, correlations may exist that violate even quantum causality.

So, given that graphical models and d-separation provide such a powerful tool for causal reasoning in the classical context, how can we generalise the Markov condition and Theorem 5 to quantum, and even more general causal theories? And, if we have a theory-independent Markov condition, are there d-separation results that don’t correspond to any given causal theory?

Clearly the first step in answering these questions is to fix a definition of a causal theory.

### Operational probabilistic theories

An operational theory is a symmetric monoidal category $(\mathsf{C}, \otimes, I)$ whose objects are known as systems or resources. Morphisms are finite sets $f = \{\mathcal{C}_i\}_{i \in I}$ called tests, whose elements are called outcomes. Tests with a single element are called deterministic, and for each system $A \in \mathrm{ob}(\mathsf{C})$, the identity $\mathrm{id}_A \in \mathsf{C}(A, A)$ is a deterministic test.

In this discussion, we’ll identify tests $\{\mathcal{C}_i\}_i, \{\mathcal{D}_j\}_j$ in $\mathsf{C}$ if we may always replace one with the other without affecting the distributions in $\mathsf{C}(I, I)$.

Given $\{\mathcal{C}_i\}_i \in \mathsf{C}(B, C)$ and $\{\mathcal{D}_j\}_j \in \mathsf{C}(A, B)$, their composition $\{\mathcal{C}_i\}_i \circ \{\mathcal{D}_j\}_j$ is given by

$\{\mathcal{C}_i \circ \mathcal{D}_j\}_{i,j} \in \mathsf{C}(A, C).$

First apply $\mathcal{D}$ with output $B$, then apply $\mathcal{C}$ with output $C$.

The monoidal composition $\{\mathcal{C}_i \otimes \mathcal{D}_j\}_{i,j} \in \mathsf{C}(A \otimes C, B \otimes D)$ corresponds to applying $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, B)$ and $\{\mathcal{D}_j\}_j \in \mathsf{C}(C, D)$ separately on $A$ and $C$.

An operational probabilistic theory or OPT is an operational theory such that every test $I \to I$ is a probability distribution.

A morphism $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, I)$ is called an effect on $A$. An OPT $\mathsf{C}$ is called causal or a causal theory if, for each system $A \in \mathrm{ob}(\mathsf{C})$, there is a unique deterministic effect $\top_A \in \mathsf{C}(A, I)$, which we call the discard of $A$.

In particular, for a causal OPT $\mathsf{C}$, uniqueness of the discard implies that, for all systems $A, B \in \mathrm{ob}(\mathsf{C})$,

$\top_A \otimes \top_B = \top_{A \otimes B},$ and, given any deterministic test $\mathcal{C} \in \mathsf{C}(A, B)$,

$\top_B \circ \mathcal{C} = \top_A.$

The existence of a discard map allows a definition of causal morphisms in a causal theory. For example, as we saw in January when we discussed Kissinger and Uijlen’s paper, a test $\{\mathcal{C}_i\}_i \in \mathsf{C}(A, B)$ is causal if

$\top_B \circ \{\mathcal{C}_i\}_i = \top_A \in \mathsf{C}(A, I).$

In other words, for a causal test, discarding the outcome is the same as not performing the test. Intuitively, it is not obvious why such morphisms should be called causal. But this definition enables the formulation of a non-signalling condition describing when cause-effect correlations are excluded; in particular, it implies the impossibility of time travel.

#### Examples

The category $\mathrm{Mat}(\mathbb{R}_+)$, whose objects are the natural numbers and whose hom-set $\mathrm{Mat}(\mathbb{R}_+)(m, n)$ is the set of $n \times m$ matrices with entries in $\mathbb{R}_+$, has the structure of a causal OPT. The causal morphisms in $\mathrm{Mat}(\mathbb{R}_+)$ are the stochastic maps (the matrices whose columns sum to 1). This category describes classical probability theory.
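In $\mathrm{Mat}(\mathbb{R}_+)$ the discard $\top_n$ on the system $n$ is the $1 \times n$ row of ones, so the causality equation $\top_B \circ \mathcal{C} = \top_A$ is literally the statement that the columns of $\mathcal{C}$ sum to 1. A quick check (the particular matrix is made up):

```python
import numpy as np

def discard(n):
    """The unique deterministic effect on the system n: a row of ones."""
    return np.ones((1, n))

# A stochastic map 2 -> 2: each column sums to 1
C = np.array([[0.2, 0.5],
              [0.8, 0.5]])

# Causality: discarding after C equals discarding the input directly
assert np.allclose(discard(2) @ C, discard(2))

# Discards are compatible with the monoidal (Kronecker) product
assert np.allclose(np.kron(discard(2), discard(3)), discard(6))
```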

The category $\mathsf{CPM}$ of sets of linear operators on Hilbert spaces and completely positive maps between them is an OPT and describes quantum theory. The causal morphisms are the trace-preserving completely positive maps.

Finally, Boxworld is the theory that allows any correlation between two variables to be attributed to some resource of the theory in their past.

### Generalised Bayesian networks

So, we’re finally ready to give the main construction and results of the paper. As mentioned before, to get a generalised d-separation result, the idea is that we will distinguish observable and unobservable variables, and simply insist that conditional independence is only defined relative to observable variables.

To this end, a generalised DAG or GDAG is a DAG $G$ together with a partition of the nodes of $G$ into two subsets called observed and unobserved. We’ll represent observed nodes by triangles and unobserved nodes by circles. An edge out of an (un)observed node will be called (un)observed and represented by a (dashed) solid arrow.

In order to get a generalisation of Theorem 5, we still need to come up with a sensible generalisation of the Markov property. It will essentially say that at an observed node with only observed parents, the distribution must be Markov; however, if an observed node has an unobserved parent, the latter’s whole history is needed to describe the distribution.

To state this precisely, we will associate a causal theory $(\mathsf{C}, \otimes, I)$ to a GDAG $G$ via an assignment of systems to edges of $G$ and tests to nodes of $G$, such that the observed edges of $G$ will ‘carry’ only the outcomes of classical tests (so will say something about conditional probability) whereas unobserved edges will carry only the output system.

Precisely, such an assignment $P$ satisfies the generalised Markov condition (GMC) and is called a generalised Markov distribution if

• Each unobserved edge corresponds to a distinct system in the theory.

• If we can’t observe what is happening at a node, we can’t condition over it: To each unobserved node and each value of its observed parents, we assign a deterministic test from the system defined by the product of its incoming (unobserved) edges to the system defined by the product of its outgoing (unobserved) edges.

• Each observed node $X$ is an observation test, i.e. a morphism in $\mathsf{C}(A, I)$ for the system $A \in \mathrm{ob}(\mathsf{C})$ corresponding to the product of the systems assigned to the unobserved input edges of $X$. Since $\mathsf{C}$ is a causal theory, this says that $X$ is assigned a classical random variable, also denoted $X$, and that if $Y$ is an observed node with observed parent $X$, the distribution at $Y$ is conditionally dependent on the distribution at $X$ (see here for details).

• It therefore follows that each observed edge is assigned the trivial system $I$.

• The joint probability distribution on the observed nodes of $G$ is given by the morphism in $\mathsf{C}(I, I)$ that results from these assignments.

A generalised Bayesian network consists of a GDAG $G$ together with a generalised Markov distribution $P$ on $G$.

#### Example

Consider the following GDAG

Let’s build its OPT morphism as indicated by the generalised Markov condition.

The observed node $X$ has no incoming edges, so it corresponds to a morphism in $\mathsf{C}(I, I)$, and thus we assign a probability distribution to it.

The unobserved node $A$ depends on $X$ and has no unobserved inputs, so we assign a deterministic test $A(x): I \to A$ for each value $x$ of $X$.

The observed node $Y$ has one incoming unobserved edge and no incoming observed edges, so we assign to it a test $Y: A \to I$ such that, for each value $x$ of $X$, $Y \circ A(x)$ is a probability distribution.

Building up the rest of the picture gives an OPT diagram of the form

which is a morphism in $\mathsf{C}(I, I)$ that defines the joint probability distribution $P(x, y, z, w)$. We now have all the ingredients to state Theorem 22, the generalised d-separation theorem. This is the analogue of Theorem 5 for generalised Markov distributions.
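As a sanity check, the $X \to A \to Y$ fragment of this recipe can be instantiated in $\mathrm{Mat}(\mathbb{R}_+)$: each value $x$ picks out a deterministic test $A(x): I \to A$, i.e. a normalised vector, and $Y$ is an observation test, i.e. a matrix whose columns sum to 1. All numbers below are invented:

```python
import numpy as np

p_x = np.array([0.3, 0.7])            # distribution at the observed node X

# For each value x, a deterministic test I -> A: a normalised state of A
A = {0: np.array([0.9, 0.1, 0.0]),
     1: np.array([0.2, 0.3, 0.5])}

# Observation test at Y: Y[y, a] = P(y | a), columns sum to 1
Y = np.array([[0.8, 0.4, 0.1],
              [0.2, 0.6, 0.9]])

# Composing the assignments gives the C(I, I) morphism, i.e. the joint P(x, y)
P = np.array([[p_x[x] * (Y @ A[x])[y] for y in range(2)] for x in range(2)])
assert np.isclose(P.sum(), 1.0)       # a genuine joint distribution
```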

#### Theorem 22

Given a GDAG $G$ and subsets $X, Y, Z$ of observed nodes,

• if a probability distribution $P$ is generalised Markov relative to $G$, then $X \perp Y \mid Z \Rightarrow X \perp\!\!\!\perp Y \mid Z$;

• if $X \perp\!\!\!\perp Y \mid Z$ holds for all generalised Markov probability distributions on $G$, then $X \perp Y \mid Z$.

Note in particular that there is no change in the definition of d-separation: d-separation of a GDAG $G$ is simply d-separation with respect to its underlying DAG. There is also no change in the definition of conditional independence. Now, however, we restrict to statements of conditional independence with respect to observed nodes only. This enables the generalised soundness and completeness statements of the theorem.

The proof of soundness uses uniqueness of discarding, and completeness follows since being generalised Markov is a stronger condition on a distribution than being classically Markov.

### Classical distributions on GDAGs

Theorem 22 is all well and good. But does it really generalise the classical case? That is, can we recover Theorem 5 for all classical Bayesian networks from Theorem 22?

As a first step, Proposition 17 states that if all the nodes of a generalised Bayesian network are observed, then it is a classical Bayesian network. In fact, this follows pretty immediately from the definitions.

Moreover, it is easily checked that a classical Bayesian network, even one with hidden or latent variables, can be expressed directly as a generalised Bayesian network with no unobserved nodes.

In fact, Theorem 22 generalises Theorem 5 in a stricter sense: the generalised Bayesian network setup together with classical causality adds nothing extra to the theory of classical Bayesian networks. If a generalised Markov distribution is classical (in which case hidden and latent variables may be represented by unobserved nodes), it can be viewed as a classical Bayesian network. More precisely, Lemma 18 says that, given any generalised Bayesian network $(G, P)$ with underlying DAG $G'$ and distribution $P \in \mathcal{C}$, we can construct a classical Bayesian network $(G', P')$ such that $P'$ agrees with $P$ on the observed nodes.

It is worth voicing a note of caution. The authors themselves mention in the conclusion that the construction based on GDAGs with two types of nodes is not entirely satisfactory. The problem is that, although the setups and results presented here do give a generalisation of Theorem 5, they do not, as such, provide a way of generalising Bayesian networks as they are used for probabilistic inference to non-classical settings. For example, belief propagation works through observed nodes, but there is no apparent way of generalising it for unobserved nodes.

## Theory independence

More generally, given a GDAG $G$, we can look at the set of distributions on $G$ that are generalised Markov with respect to a given causal theory. Of particular importance are the following.

• The set $\mathcal{C}$ of generalised Markov distributions in $\mathrm{Mat}(\mathbb{R}_+)$ on $G$.

• The set $\mathcal{Q}$ of generalised Markov distributions in $\mathsf{CPM}$ on $G$.

• The set $\mathcal{G}$ of all generalised Markov distributions on $G$. (This is the set of generalised Markov distributions in Boxworld.)

Moreover, we can distinguish another class of distributions on $G$: the set $\mathcal{I}$ of distributions that merely satisfy the observable conditional independences implied by the d-separation properties of the graph, whether or not they arise from any causal theory. Theorem 22 implies, in particular, that $\mathcal{G} \subseteq \mathcal{I}$.

And so, since $\mathrm{Mat}(\mathbb{R}_+)$ embeds into $\mathsf{CPM}$, we have $\mathcal{C} \subseteq \mathcal{Q} \subseteq \mathcal{G} \subseteq \mathcal{I}$.

This means that one can ask for which graphs (some or all of) these inclusions are strict, and the last part of the paper explores these questions. In the original paper, a sufficient condition is given for graphs to satisfy $\mathcal{C} \neq \mathcal{I}$; i.e. for these graphs it is guaranteed that the causal structure admits correlations that are non-local. Moreover, the authors show that their condition is necessary for small enough graphs.

Another interesting result is that there exist graphs for which $\mathcal{G} \neq \mathcal{I}$. This means that using a theory of resources, whatever theory it may be, to explain correlations imposes constraints that are stronger than those imposed by the conditional independences themselves.

## What next?

This setup represents one direction for using category theory to generalise Bayesian networks. In our group work at the ACT workshop, we considered another generalisation of Bayesian networks, this time staying within the classical realm. Namely, building on the work of Bonchi, Gadducci, Kissinger, Sobocinski, and Zanasi, we gave a functorial Markov condition on directed graphs admitting cycles. Hopefully we’ll present this work here soon.

## July 08, 2018

### Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words about it. The big collaborations at CERN presented their latest results. I think the most relevant of these is the evidence ($3\sigma$) that the Standard Model is at odds with the measurement of spin correlation in top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement about the existence of the ttH process, both ATLAS and CMS are pursuing further their improvement of precision. About the signal strength they give the following results. For ATLAS (see here)

$\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})$

and CMS (see here)

$\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).$

The news is that the errors have diminished and the two results agree. They show a small tension, 13% and 17% above the Standard Model expectation respectively, but the overall result is consistent with the Standard Model.
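To make the quoted tensions concrete: the excesses are $\mu - 1$, and dividing by the total error (adding the quoted components in quadrature and symmetrising the asymmetric ones, a back-of-the-envelope estimate rather than an official combination) gives the significance:

```python
import math

def pull(mu, errs):
    """Excess over mu = 1 in units of the total (quadrature) error."""
    total = math.sqrt(sum(e * e for e in errs))
    return (mu - 1.0) / total, total

atlas = pull(1.13, [0.05, 0.05, 0.05, 0.03])   # stat, exp, sig. th., bkg. th.
cms   = pull(1.17, [0.06, 0.06, 0.06])         # stat, sig. th., other syst.
print(atlas)   # roughly a 1.4 sigma excess with a total error near 0.09
print(cms)     # roughly a 1.6 sigma excess with a total error near 0.10
```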

When the result is unpacked into the contributions due to different processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented the results from $35.9\,{\rm fb}^{-1}$ of data and so there is no significant improvement, for the moment, with respect to the Moriond conference this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS: in this case they observe some tensions, but these are all below $2\sigma$ (see here). For the WW decay, ATLAS does not see anything above $1\sigma$ (see here).

So, although there is something to keep under attention as the data increase (the dataset will reach $100\,{\rm fb}^{-1}$ this year), the Standard Model is in good health with respect to the Higgs sector, even if there is a lot still to be answered and precision measurements are the main tool. The correlation in the $t\bar{t}$ pair is absolutely promising, and we should hope it will be confirmed as a discovery.

## July 04, 2018

### The n-Category Cafe

Symposium on Compositional Structures

There’s a new conference series, whose acronym is pronounced “psycho”. It’s part of the new trend toward the study of “compositionality” in many branches of thought, often but not always using category theory:

• First Symposium on Compositional Structures (SYCO1), School of Computer Science, University of Birmingham, 20-21 September, 2018. Organized by Ross Duncan, Chris Heunen, Aleks Kissinger, Samuel Mimram, Simona Paoli, Mehrnoosh Sadrzadeh, Pawel Sobocinski and Jamie Vicary.

The Symposium on Compositional Structures is a new interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language. We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

More details below! Our very own David Corfield is one of the invited speakers.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;
• graphical calculi, including string diagrams, Petri nets and reaction networks;
• languages and frameworks, including process algebras, proof nets, type theory and game semantics;
• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;
• quantum algebra, including quantum computation and representation theory;
• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;
• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

The steering committee hopes that SYCO will become a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, in the event that more good-quality submissions are received than can be accommodated in the timetable, we may choose to defer some submissions to a future meeting, rather than reject them. This would be done based on submission order, giving an incentive for early submission, and avoiding any need to make difficult choices between strong submissions. Deferred submissions would be accepted for presentation at any future SYCO meeting without the need for peer review. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive atmosphere. Meetings would be held sufficiently frequently to avoid a backlog of deferred papers.

# Invited Speakers

• David Corfield, Department of Philosophy, University of Kent: “The ubiquity of modal type theory”.

• Jules Hedges, Department of Computer Science, University of Oxford: “Compositional game theory”

# Important Dates

All times are anywhere-on-earth.

• Submission deadline: Sunday 5 August 2018
• Author notification: Monday 13 August 2018
• Travel support application deadline: Monday 20 August 2018
• Symposium dates: Thursday 20 September and Friday 21 September 2018

# Submissions

Submission is by EasyChair, via the following link:

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

# Funding

Some funding is available to cover travel and subsistence costs, with a priority for PhD students and junior researchers. To apply for this funding, please contact the local organizer Jamie Vicary at j.o.vicary@bham.ac.uk by the deadline given above, with a short statement of your travel costs and funding required.

# Programme Committee

The symposium is managed by the following people, who also serve as the programme committee.

• Ross Duncan, University of Strathclyde
• Chris Heunen, University of Edinburgh
• Aleks Kissinger, Radboud University Nijmegen
• Samuel Mimram, École Polytechnique
• Simona Paoli, University of Leicester
• Pawel Sobocinski, University of Southampton
• Jamie Vicary, University of Birmingham and University of Oxford (local organizer)

### Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday, Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone the publication of this post to July 5...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

## June 25, 2018

### Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

## June 24, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers, such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

Images from the garden party in the grounds of Lismore Castle

## June 22, 2018

### Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
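The rewritten formula, which appeared as an image in the original post, is presumably the standard relation

```latex
R_\infty = \frac{\alpha^2 m_e c}{2h}
\quad\Longrightarrow\quad
\alpha^2 = \frac{2 R_\infty}{c}\,\frac{m_{\rm Cs}}{m_e}\,\frac{h}{m_{\rm Cs}},
```

so that the precisely known Rydberg constant and electron-to-cesium mass ratio, combined with the measured h/m_Cs (equivalently, the cesium mass), determine α.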
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes  α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
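The formula referred to here appeared as an image in the original post; it is presumably the standard relation between the g-factor and the anomalous magnetic moment a_e, with its QED expansion (the Schwinger term α/2π is exact; the higher coefficients are written schematically):

```latex
g_e = 2\,(1 + a_e), \qquad
a_e = \frac{\alpha}{2\pi} + C_2\left(\frac{\alpha}{\pi}\right)^2 + \dots + C_5\left(\frac{\alpha}{\pi}\right)^5 + \dots
```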
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable  theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:
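The quoted tension between these numbers can be sanity-checked with a few lines of Python (values copied from above; errors combined in quadrature, ignoring any correlations):

```python
# Electron anomalous magnetic moment: experiment vs Standard Model prediction,
# values as quoted in the text. The parenthesized uncertainties are on the
# last two digits, i.e. in units of 10^-14.
a_exp, err_exp = 0.00115965218073, 28e-14  # measured
a_th,  err_th  = 0.00115965218161, 23e-14  # predicted (using the Berkeley alpha)

diff = a_exp - a_th                         # negative: experiment sits below theory
combined = (err_exp**2 + err_th**2) ** 0.5  # naive quadrature of the two errors
print(f"tension: {abs(diff) / combined:.1f} sigma")
```

This naive estimate gives roughly 2.4σ, consistent with the ~2.5σ quoted above once rounding is taken into account.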

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the ae measurement, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.
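For concreteness, the sign argument above can be checked numerically. The one-loop contribution of a kinetically mixed Z' to a lepton's anomalous magnetic moment is indeed positive for a vector coupling; the loop integral and benchmark parameters below are standard assumptions from the dark-photon literature, not taken from this post:

```python
import math

ALPHA = 1 / 137.036  # fine structure constant

def delta_a(m_lepton_gev, m_v_gev, eps):
    """One-loop shift of a lepton's anomalous magnetic moment from a
    dark photon of mass m_v, kinetically mixed with strength eps.
    Standard result: delta_a = (alpha eps^2 / 2 pi) * F(m_v / m_lepton),
    with F given by the Feynman-parameter integral below (F -> 1 as m_v -> 0,
    recovering a Schwinger-like term). Always positive for a vector coupling."""
    r2 = (m_v_gev / m_lepton_gev) ** 2
    n = 200_000  # midpoint-rule integration steps
    integral = sum(
        2 * z * (1 - z) ** 2 / ((1 - z) ** 2 + r2 * z)
        for z in ((i + 0.5) / n for i in range(n))
    ) / n
    return ALPHA / (2 * math.pi) * eps**2 * integral

# Illustrative (assumed) benchmark: m_Z' = 20 MeV, kinetic mixing eps = 1e-3
print(f"muon shift:     {delta_a(0.105658, 0.020, 1e-3):.2e}")  # positive
print(f"electron shift: {delta_a(0.000511, 0.020, 1e-3):.2e}")  # positive
```

Both shifts come out positive, which is exactly why the vanilla dark photon cannot accommodate the negative shift favored by the a_e measurement.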

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

## June 16, 2018

### Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still reflect back some light picked up from the other, still-lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult.
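As a teaser, here is a crude version of such a calculation (the numbers, the full-phase geometry, and the geometric-albedo approximation are all my rough assumptions, not the author's): the flux an eclipsed Io receives from a fully lit Europa, relative to direct sunlight, is of order the albedo times the squared angular size.

```python
import math

# Rough inputs (assumed): Europa's radius, its geometric albedo, and the
# minimum Io-Europa separation (difference of their orbital radii).
R_EUROPA_KM = 1560.0
D_IO_EUROPA_KM = 249_000.0
ALBEDO_EUROPA = 0.67

# Geometric-albedo convention: at zero phase angle, the reflected flux seen
# at distance d is roughly p * (R/d)^2 times the incident solar flux.
flux_ratio = ALBEDO_EUROPA * (R_EUROPA_KM / D_IO_EUROPA_KM) ** 2
magnitudes_fainter = -2.5 * math.log10(flux_ratio)
print(f"~{magnitudes_fainter:.0f} magnitudes fainter than direct sunlight")
```

So Europa-shine on an eclipsed Io is of order ten magnitudes fainter than sunshine, before even accounting for phase effects and Io's own albedo.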

## June 12, 2018

### Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle, which we observe, and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one. And a very tiny correction to it.

So far, this does not seem to be something one needs to worry about.

However, there are many and good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there, which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model, may work much less so in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we did not yet discover anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of a 'little bit'? So far, experiments did not show any deviations from the usual picture. So 'little bit' needs indeed to be really rather small. But we have a calculation prescription for this 'little bit' for the standard model. So, at the very least what we can do is to make a calculation for this 'little bit' in the standard model. We should then see if the value of 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but well, experiment rules. And thus, we would need to go back to the drawing board, and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it will not be possible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for it is rather well under control. But it will also not yield perfect results, but hopefully good enough. Also, it depends strongly on the type of experiment how simple the calculations are. We have taken a first few steps, though for a type of experiment that is not (yet) available, but hopefully will be in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There, we need still much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have got an idea which works with everything we know, use it to make a prediction where you get a difference to our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You have just rewritten our understanding of nature. If not: Well, go back to fix it or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But of those we are at this stage a little short. That may change again. If your theory has no predictions which can be tested experimentally in any foreseeable future, well, that is a good question how to deal with it, and there is not yet a consensus on how to proceed.

## June 10, 2018

### Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations.

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations.

## June 09, 2018

### Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
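The kinematic blindness to light dark matter is easy to quantify. In elastic scattering the maximum nuclear recoil energy is E_max = 2μ²v²/m_N, with μ the reduced mass (standard non-relativistic kinematics; the xenon mass and the v ~ 10⁻³c velocity below are the usual assumptions, not taken from the post):

```python
def max_recoil_ev(m_dm_gev, m_nucleus_gev=122.0, v_over_c=1e-3):
    """Maximum elastic recoil energy, in eV, deposited by dark matter of mass
    m_dm hitting a nucleus (xenon, ~122 GeV, by default) at velocity v.
    Standard kinematics: E_max = 2 mu^2 v^2 / m_N with mu the reduced mass."""
    mu = m_dm_gev * m_nucleus_gev / (m_dm_gev + m_nucleus_gev)
    return 2 * mu**2 * v_over_c**2 / m_nucleus_gev * 1e9  # convert GeV -> eV

print(f"{max_recoil_ev(100.0):.0f} eV")  # ~100 GeV WIMP: tens of keV, detectable
print(f"{max_recoil_ev(0.5):.2f} eV")    # sub-GeV DM: a few eV, far below keV thresholds
```

A 100 GeV WIMP can deposit tens of keV in a xenon nucleus, comfortably above threshold, while a 0.5 GeV particle deposits only a few eV, which is why the WIMP detectors mentioned above are blind to it.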

Sometimes progress consists in realizing that you know nothing, Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we had better search in many places. If anything, the small-scale problems of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPs and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles.

It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV energy depositions, thanks to which they can extend the search to lower dark matter masses, and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.

Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing are immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

## June 08, 2018

### Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation - general relativity - has been a victim of constant harassment. That is to say, it is trivial to modify gravity at high energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long-distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about the De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of the deflection of starlight by the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl～10^19 GeV.  But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best,
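The cutoff formula, which appeared as an image in the original post, is presumably the standard Λ₃ scale of dRGT massive gravity:

```latex
\Lambda_{\max} = \left( m^2 M_{\rm Pl} \right)^{1/3}
\sim \left[ (10^{-32}\,{\rm eV})^2 \cdot 10^{19}\,{\rm GeV} \right]^{1/3}
\sim 10^{-12}\,{\rm eV},
```

which corresponds to a length scale ℏc/Λ_max of roughly 300 km.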
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ～300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments,  it is relevant for the  movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area excluded theoretically and where the graviton mass satisfies the experimental upper limit m～10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 orders of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ～1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

## June 07, 2018

### Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.

This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe ν̄μ→ν̄e antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≲ 1 km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E ~ 1 GeV is sent to a detector located a distance L ~ 500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape as the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
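The oscillation formula above is simple enough to evaluate directly. A minimal sketch in Python, using the conventional two-flavor form with Δm^2 in eV^2, L in km and E in GeV; the numbers plugged in below are illustrative MiniBooNE-like values, not a fit:

```python
import math

def p_appear(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor short-baseline appearance probability,
    P = sin^2(2θ) * sin^2(1.27 * Δm² * L / E),
    the usual form with Δm² in eV², L in km, E in GeV."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative MiniBooNE-like inputs (L ~ 0.5 km, E ~ 1 GeV) with the
# sterile-neutrino parameters quoted later in the post
# (Δm² ~ 0.5 eV², sin²(2θ) ~ 0.01):
print(p_appear(0.01, 0.5, 0.5, 1.0))   # a sub-percent appearance probability
```

The oscillatory sin^2(1.27 Δm² L/E) factor is what produces the hoped-for peak-and-wiggle shape as a function of L/E that the text says MiniBooNE did not observe.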

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with the mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

P(νμ→νe) ≈ 4|Uμ4|^2 |Ue4|^2 sin^2(Δm41^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability

P(νμ→νμ) ≈ 1 − 4|Uμ4|^2 (1 − |Uμ4|^2) sin^2(Δm41^2 L/4E).

The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≲ 0.1, while solar neutrino observations give |Ue4| ≲ 0.25. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) = 4|Uμ4|^2 |Ue4|^2 controlling the νμ→νs→νe oscillation must be much smaller than the ~0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data already existed before, and was actually made worse by the MiniBooNE update.
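The size of the tension can be estimated on the back of an envelope. A minimal sketch, assuming the quoted bounds |Uμ4| ≲ 0.1 and |Ue4| ≲ 0.25 and the 3+1 relation sin^2(2θμe) = 4|Uμ4|^2|Ue4|^2 (illustrative numbers from the text, not a real global fit):

```python
# Disappearance bounds quoted in the post:
U_mu4_max = 0.1    # |U_mu4| from MINOS/IceCube νμ disappearance
U_e4_max = 0.25    # |U_e4| from solar neutrino observations

# Maximum effective appearance amplitude allowed in the 3+1 scheme:
sin2_2theta_mue = 4 * U_mu4_max**2 * U_e4_max**2

print(sin2_2theta_mue)   # ≈ 2.5e-3
```

Even saturating both disappearance bounds simultaneously, the appearance amplitude falls a factor of a few short of the sin^2(2θ) ~ 0.01 preferred by the MiniBooNE fit, which is the appearance/disappearance tension described above.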
So the hypothesis of a 4th, sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

## June 01, 2018

### Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows:

A WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.

To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.
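As a quick sanity check of the unit conversion in the parenthesis, the SI prefixes can be multiplied out; nothing beyond the prefix definitions and the definition of the barn is needed:

```python
# 1 yb = 10^-12 pb = 10^-48 cm^2, spelled out via SI prefixes.
yocto = 1e-24          # SI prefix "yocto"
pico = 1e-12           # SI prefix "pico"
barn_in_cm2 = 1e-24    # definition of the barn

yb_in_cm2 = yocto * barn_in_cm2   # 1 yoctobarn in cm^2, ~1e-48
yb_in_pb = yocto / pico           # 1 yoctobarn in picobarns, ~1e-12

print(yb_in_cm2, yb_in_pb)
```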

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs had interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is a part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that there is no prefix smaller than yocto: it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter mass closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick above the neutrino sea?

### Tommaso Dorigo - Scientificblogging

MiniBooNE Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out to decide what really is the matter with them. And a new result by the MiniBooNE collaboration is stirring the waters once more.

## May 26, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator.

Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.

University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here). To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes – not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interaction with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

## May 21, 2018

### Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install correcting lenses in the early 1990s.

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

## May 14, 2018

### Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

## May 10, 2018

### Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!