# Particle Physics Planet

## April 29, 2017

### Christian P. Robert - xi'an's og

Filed under: Kids, Mountains, pictures, Running, Travel, Wines Tagged: birthday, Drôme, France, jatp, morning light, morning run, Rhone-Alpes, trail running, Vercors

### Peter Coles - In the Dark

Well, the plot thickens.

The penultimate round of matches this weekend has seen another twist in the story of this year’s Championship.

Last night Newcastle United played Cardiff City here in Cardiff, beating the home side 2-0. I didn’t go to the match, but there seem to have been plenty of Newcastle fans in town last night.

That result meant that Newcastle United were still 2nd, but only one point behind leaders Brighton and Hove Albion.

A win for them this afternoon at home against lowly Bristol City would have given them the Championship. Surprisingly, however, they lost 1-0.

The title race, somewhat unexpectedly, therefore goes to the last round of matches next Sunday. If Brighton win, they are Champions. If they don’t, and Newcastle win or draw, then Newcastle United are champions (the latter courtesy of goal difference). If Newcastle lose then Brighton are champions whatever their result.

Given the way this season has gone it seems rather fitting that it will be decided in the final round of matches. May the best team finish top (as long as it’s Newcastle)!

And in other news, to crown an excellent weekend for Newcastle supporters, Sunderland got relegated from the Premiership.

Follow @telescoper

## April 28, 2017

### Christian P. Robert - xi'an's og

### Symmetrybreaking - Fermilab/SLAC

See Boston University physicist Tulika Bose's answers to readers’ questions about research at the Large Hadron Collider.

### Christian P. Robert - xi'an's og

Google France is celebrating the 256th anniversary of the birth of Marie Harel, who, according to local legend, invented Camembert cheese. Enjoy (if you can!).

Filed under: Kids, pictures, Wines Tagged: camembert, Google, Marie Harel, Normandie fort et vert, Normandy

### Peter Coles - In the Dark

Well, look what the postman brought me today!

Hot off the press, here is a textbook by my friend and erstwhile collaborator Bernard Jones. As you will see, it even has an endorsement by me on the back cover. I think it's a very fine book indeed and it will be immensely useful for cosmologists young and old alike!

### Emily Lakdawalla - The Planetary Society Blog

### Christian P. Robert - xi'an's og

**A**fter a rather extended wait, I learned today of the dates of the next MCMski conference, now called Bayes Comp, in Barcelona, Spain, March 26-29, next year (2018). With a cool webpage! (While the 'ski' ending has been removed from the conference name, there are ski resorts located not too far from Barcelona, in the Pyrenees.) Just unfortunate that it happens on the same dates as the ENAR 2018 meeting. (And as the Gregynog Statistical Conference!)

Filed under: Mountains, pictures, Statistics, Travel, University life, Wines Tagged: Barcelona, BayesComp, Bayesian Computing Section, ENAR 2018, Gregynog Statistical Conference, MCMSki, Monte Carlo Statistical Methods, ski resorts, Spain, University of Warwick

### Peter Coles - In the Dark

After a busy morning, I reckon it’s time for a pause and a quick blog post. I stumbled across this clip of a great drum solo a while ago and immediately bookmarked it for future posting. As happens most times I do that I then forgot about it, only finding it again right now so I thought I’d post it before I forget again.

This is the great Joe Morello at the very peak of his prowess in 1964 with the Dave Brubeck Quartet, with whom he recorded over 60 albums. That band pioneered the use of unusual time signatures in jazz, such as 3/4, 7/4, 13/4 and 9/8, most famously in their big hit *Take Five*, which is in 5/4 time throughout; they recorded a number of other tracks in which the time signature shifts backwards and forwards between, e.g., 7/4 and the standard 4/4.

A few points struck me watching this clip. The first is that it’s a great example of the use of the ‘trad’ grip which is with the left hand *under* the stick, passing between the thumb and index finger and between the second and third fingers, thusly:

The right stick is usually held with an overhand grip. Most jazz drummers (whether they play ‘trad’ jazz or not) use this grip. Most rock drummers on the other hand use a ‘balanced’ grip in which both sticks are held with an overhand grip. You might think holding the left-hand and right-hand sticks the same way is the obvious thing to do, but do bear in mind that people aren’t left-right symmetric and neither are drum kits so it’s really not obvious at all!

The trad grip looks a bit unnatural when you first see it, but it does have an advantage for many of the patterns often used in jazz. Once you’ve mastered the skill, a slight rotation of the wrist and subtle use of the fingers makes some difficult techniques (e.g. rolls) much easier to do rapidly with this grip than with the balanced grip. I’m not claiming to be a drummer when I say all this, but my Dad was and he did teach me the rudiments. In fact, he thought that drummers who used the balanced grip weren’t proper drummers at all!

(I’ll no doubt get a bunch of angry comments from rock drummers now, but what the hell…)

Anyway you can see Joe Morello using the trad grip to great effect in this clip, in which he displays astonishing speed, accuracy and control. The way he builds that single-stroke roll from about 2:28 is absolutely astonishing. In fact he’s so much in command throughout his solo, that he even has time to adjust his spectacles and move his bass drum a bit closer! Jazz musicians used to joke that atomic clocks could be set to Joe Morello, as he kept time so accurately, but as you can see in this clip he did so much more than beat out a rhythm. It’s only about 3 minutes long but this solo really is a master class.

Joe Morello was never a ‘showy’ musician. He never adopted the popular image of the drummer as the madman who sat at the back of the band that was cultivated by the likes of Gene Krupa in the jazz world and later spread into rock’n’roll. Bespectacled and wearing a suit and tie he looks a bit like a bank clerk, but boy could he play! The expression on Dave Brubeck’s face tells you that he knew he was very lucky to have Joe Morello in his band.

Follow @telescoper

### Emily Lakdawalla - The Planetary Society Blog

## April 27, 2017

### John Baez - Azimuth

Here’s a video of the talk I gave at the Stanford Complexity Group:

You can see slides here:

• Biology as information dynamics.

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.
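The connection between the replicator equation and relative information can be sketched numerically. The toy script below (an illustrative sketch, not code from the talk; the three fitness values, step size and iteration count are invented) evolves a small population under the replicator equation and checks that the Kullback–Leibler divergence of the eventual dominant distribution from the current one keeps decreasing — the "evolution as learning" picture:

```python
import numpy as np

def replicator_step(p, f, dt=0.01):
    """One Euler step of the replicator equation dp_i/dt = p_i (f_i - <f>)."""
    mean_f = np.dot(p, f)
    return p + dt * p * (f - mean_f)

def kl(q, p):
    """Relative information (Kullback-Leibler divergence) D(q || p)."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Toy population with three replicator types and fixed (invented) fitnesses.
f = np.array([1.0, 2.0, 3.0])   # type 3 is fittest
p = np.array([0.5, 0.3, 0.2])   # current distribution
q = np.array([0.0, 0.0, 1.0])   # distribution the dynamics converges to

divergences = []
for _ in range(500):
    divergences.append(kl(q, p))
    p = replicator_step(p, f)

# D(q || p) decreases monotonically as the fittest type takes over.
assert all(a >= b for a, b in zip(divergences, divergences[1:]))
```

Note that the Euler step preserves normalisation exactly, since the increments sum to zero.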

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.

### Peter Coles - In the Dark

I’ve had today off to work on the launch of my new project, called WikiLeeks.

I’m thrilled now to be able to publish our first findings.

Follow @telescoper

### Symmetrybreaking - Fermilab/SLAC

Boston University physicist Tulika Bose explains why there's more than one large, general-purpose particle detector at the Large Hadron Collider.

Physicist Tulika Bose of the CMS experiment at CERN explains how the CMS and ATLAS experiments complement one another at the Large Hadron Collider.

### Clifford V. Johnson - Asymptotia

I just noticed! The book is now in MIT Press' Fall 2017 catalog, and so you can see the cover and read the blurb they wrote about it! See the full thing here (a pdf; on page 9). Alternatively, here is the online page for it. (I can also reveal what I could not say before: Frank Wilczek kindly agreed to write a foreword for it.)

This. is. so. exciting.

I don't know about how you pre-order yet, but when I do I'll let you know.

-cvj

The post Almost Within Grasp! appeared first on Asymptotia.

### Tommaso Dorigo - Scientificblogging

### Axel Maas - Looking Inside the Standard Model

In a standard treatment, this identity is just an integral part of the particle. However, results from the late 1970s and early 1980s, as well as our own research, point in a somewhat different direction. I described the basic idea some time back. The idea was that what we perceive as an electron is not really just an electron: it itself consists of two particles, a Higgs and something I would call a constituent electron. Back then, we were just thinking about how to test this idea.

This took some time.

We thought this was an outrageous question, calling almost certain things into doubt.

Now we see: oh, this was just the beginning. And things got crazier at every step.

But as theoreticians, when we determine the consequences of a theory, we should not stop just because something sounds crazy. Almost everything we take for granted today, like quantum physics, sounded crazy in the beginning. But if you have reason to believe that a theory is right, then you have to take it seriously, and then its consequences are what they are. Of course, we may just have made an error somewhere. But that remains to be checked, preferably by independent research groups. After all, at some point, it is hard to see the forest for the trees. So far, though, we are convinced that we have made at most quantitative errors, not qualitative ones. The concept therefore appears sound to us, and so I keep on writing about it here.

The older works were just the beginning, and we followed their suggestion to take the standard model of particle physics not only seriously, but also literally.

I will start out with the leptons, i.e. electrons, muons, and tauons as well as the three neutrinos. I come back to the quarks later.

The first thing we established was that it is indeed possible to think of particles like the electron as a kind of bound state of other particles, without upsetting anything we have measured in experiment. We also gave an estimate of what would be necessary to test this statement experimentally. Though exact numbers are, as always, complicated, we believe that the next generation of experiments colliding electrons and positrons could detect the difference between the conventional picture and our results. In fact, the way they are currently designed makes them ideally suited to do so. However, they will not provide a measurement before roughly 2035 or so. We also understand quite well why we would need these machines to see the effect. So right now, we will have to sit and wait. Keep your fingers crossed that they will be built, if you are interested in the answer.

Naturally, we therefore asked ourselves whether there is an alternative. The unfortunate thing is that you need at least enough energy to copiously produce the Higgs to test this. The only existing machine able to do so is the LHC at CERN. However, the LHC collides protons, so we had to ask whether the same effect also occurs for protons. Now, a proton is much more complicated than any lepton, because it is already built from quarks and gluons. Still, what we found is the following: if we take the standard model seriously as a theory, then a proton cannot be a theoretically well-defined entity if it is made only of three quarks. Rather, it needs to have some kind of Higgs component, and this should be felt somehow. However, for the same reason as with the leptons, only the LHC could test it. And here comes the problem. Because the proton is made up of three quarks, it already has a very complicated structure. Furthermore, even at the LHC, the effect of the additional Higgs component will likely be tiny. In fact, probably the best chance to probe it is if this Higgs component can be linked to the production of the heaviest known quark, the top quark. The reason is that the top quark is very sensitive to the Higgs. While the LHC indeed produces a lot of top quarks, producing a top quark linked to a Higgs is much harder. Even the strongest such effect has not yet been seen beyond doubt, and what we find will only be a (likely small) correction to it. There is still a chance, but this will need much more data. But the LHC will keep running for a long time, so maybe it will be enough. We will see.

So, this is what we did. In fact, it will all be part of the review I am writing, so more will be said about it there.

If you are still reading, I want to give you some more of the really weird stuff that came out.

The first is that life is actually even more complicated. Even without all of what I have written above, there are actually two types of electrons in the standard model: one which is affected by the weak interaction, and one which is not. Other than that, they are the same: they have the same mass, and they are electromagnetically identical. The same is true for all leptons and quarks. The matter all around us is actually a mixture of both types. However, the subtle effects I have been talking about so far only affect those which feel the weak interaction. There is a technical reason for this (the weak interaction is a so-called gauge symmetry). However, it makes detecting everything harder, because it only works if we get the 'right' type of electron.

The second is that the leptons and quarks come in three sets of four particles each, the so-called generations or families. The only difference between these copies is the mass; other than that, there is no difference we know of. We cannot exclude one, but no experiment says otherwise with sufficient confidence. This is one of the central mysteries, and it occupies, and keeps occupying, many physicists. Now, we had the following idea: if we give internal structure to the members of a family, could it be that the different generations are just different arrangements of that internal structure? That such things are possible in principle is known already from atoms. Here, the problem is even more involved, because of the two types of each of the quarks and leptons. This was just a speculation. However, we found that it is, at least logically, possible. Unfortunately, it is still too complicated to provide definite quantitative predictions for how this can be tested. But, at least, it seems not to be at odds with what we already know. If this turned out to be true, it would be a major step in understanding particle physics. But we are still far, far away from that. Still, we are motivated to continue down this road.

by Axel Maas (noreply@blogger.com) at April 27, 2017 08:29 AM

## April 26, 2017

### Emily Lakdawalla - The Planetary Society Blog

### Peter Coles - In the Dark

I suddenly realized this morning that there was a bit of community service I meant to do when I got back from vacation, namely to pass on to astronomers and particle physicists a link to the results of the latest Programmatic Review (actually 'Breadth of Programme' Exercise) produced by the Science and Technology Facilities Council.

It’s a lengthy document, running to 89 pages, but it’s a must-read if you’re in the UK and work in an area of science under the remit of STFC. There was considerable uncertainty about the science funding situation anyway because of BrExit, and that has increased dramatically because of the impending General Election, which will probably kick quite a few things into the long grass, quite possibly delaying the planned reorganization of the research councils. Nevertheless, this document is well worth reading, as it will almost certainly inform key decisions that will have to be made whatever happens in the broader landscape. With 'flat cash' being the most optimistic scenario, increasing inflation means that some savings will have to be found, so belts will inevitably have to be tightened. Moreover, there are strong strategic arguments that some areas should grow, rather than remain static, which means that others will have to shrink to compensate.

There are 29 detailed recommendations and I can’t discuss them all here, but here are a couple of tasters:

The E-ELT is the European Extremely Large Telescope, in case you didn’t know.

Another one that caught my eye is this:

I’ve never really understood why gravitational-wave research came under ‘Particle Astrophysics’ anyway, but given their recent discovery by Advanced LIGO there is a clear case for further investment in future developments, especially because the UK community is currently rather small.

Anyway, do read the document and, should you be minded to do so, please feel free to comment on it below through the comments box.

Follow @telescoper

## April 25, 2017

### Symmetrybreaking - Fermilab/SLAC

Undergraduates search for hidden tombs in Turkey using cosmic-ray muons.

While the human eye is an amazing feat of evolution, it has its limitations. What we can see tells only a sliver of the whole story. Often, it is what is on the inside that counts.

To see a broken femur, we pass X-rays through a leg and create an image on a metal film. Archaeologists can use a similar technique to look for ancient cities buried in hillsides. Instead of using X-rays, they use muons, particles that are constantly raining down on us from the upper atmosphere.

Muons are heavy cousins of the electron and are produced when single-atom meteorites called cosmic rays collide with the Earth’s atmosphere. Hold your hand up and a few muons will pass through it every second.

Physics undergraduates at Texas Tech University, led by Professors Nural Akchurin and Shuichi Kunori, are currently developing detectors that will act like an X-ray film and record the patterns left behind by muons as they pass through hillsides in Turkey. Archaeologists will use these detectors to map the internal structure of hills and look for promising places to dig for buried archaeological sites.

Like X-rays, muons are readily absorbed by thick, dense materials but can traverse through lighter materials. So they can be stopped by rock but move easily through the air in a buried cavern.

The detector under development at Texas Tech will measure the amount of cosmic-ray muons that make it through the hill. An unexpected excess could mean that there’s a hollow subterranean structure facilitating the muon’s passage.
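The logic of the measurement can be sketched with a deliberately crude toy model. The script below treats muon absorption in rock as a simple exponential with an effective attenuation length; the numbers (attenuation length, hill thickness, void size) are all invented for illustration and are not the real muon energy-loss physics used in actual tomography analyses:

```python
import math

# Hypothetical effective attenuation length of muons in rock, in metres.
# (Illustrative value only; real analyses model muon energy loss in detail.)
LAMBDA_ROCK = 40.0

def transmitted_fraction(rock_thickness_m):
    """Fraction of incident muons surviving a given thickness of rock."""
    return math.exp(-rock_thickness_m / LAMBDA_ROCK)

hill = 100.0   # total path length through the hill, in metres
tomb = 5.0     # an air-filled void along the path absorbs almost nothing

solid = transmitted_fraction(hill)            # no tomb on this line of sight
with_void = transmitted_fraction(hill - tomb) # tomb replaces 5 m of rock

# The void shows up as a relative excess of transmitted muons, here ~13%.
excess = with_void / solid - 1.0
print(f"relative excess through the void: {excess:.1%}")
```

The key point the toy model captures is that the excess depends only on the rock the void *replaces*, which is why deeper or larger cavities are easier to spot.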

“We’re looking for a void, or a tomb, that the archaeologists can investigate to learn more about the history of the people that were buried there,” says Hunter Cymes, one of the students working on the project.

The technique of using cosmic muons to probe for subterranean structures was developed almost half a century ago. Luis Alvarez, a Nobel Laureate in Physics, first used this technique to look inside the Second Pyramid of Chephren, one of the three great pyramids of Egypt. Since then, it has been used for many different applications, including searching for hidden cavities in other pyramids and estimating the lava content of volcanoes.

According to Jason Peirce, another undergraduate student working on this project, those previous applications had resolutions of about 10 meters. “We’re trying to make that smaller, somewhere in the range of 2 to 5 meters, to find a smaller room than what’s previously been done.”

They hope to accomplish this by using an array of scintillators, a type of plastic that can be used to detect particles. “When a muon passes through it, it absorbs some of that energy and creates light,” says student Hunter Cymes. That light can then be detected and measured and the data stored for later analysis.

Unfortunately, muons with enough energy to travel through a hill and reach the detector are relatively rare, meaning that the students will need to develop robust detectors which can collect data over a long period of time. Just like it’s hard to see in dim light, it’s difficult to reconstruct the internal structure of a hill with only a handful of muons.
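How long "a long period of time" needs to be follows from Poisson counting statistics: an excess fraction ε on N counts stands out against the sqrt(N) fluctuations only once N exceeds (nσ/ε)². The sketch below is an illustrative back-of-the-envelope estimate with invented rate and excess numbers, not the project's actual analysis:

```python
def exposure_for_significance(rate_per_day, excess_fraction, n_sigma=5.0):
    """
    Days of running needed so that an `excess_fraction` surplus of muon
    counts stands out at `n_sigma` above Poisson (sqrt(N)) fluctuations.
    """
    needed_counts = (n_sigma / excess_fraction) ** 2
    return needed_counts / rate_per_day

# Hypothetical numbers: 50 through-going muons per day behind the hill,
# and a void that boosts the transmitted rate by 10%.
days = exposure_for_significance(rate_per_day=50.0, excess_fraction=0.10)
print(f"roughly {days:.0f} days of running")  # (5/0.1)^2 = 2500 counts -> 50 days
```

Halving the target resolution roughly quarters the rate into each detector cell, which is why finer-grained maps demand much longer exposures.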

Aashish Gupta, another undergraduate working on this project, is currently developing a simulation of cosmic-ray muons, the hill, and the detector prototype. The group hopes to use the simulation to guide their design process by predicting how well different designs will work and how much data they will need to take.

As Peirce describes it, they are “getting some real, hands-on experience putting this together while also keeping in mind that we need to have some more of these results from the simulation to put together the final design.”

They hope to finish building the prototype detector within the next few months and are optimistic about having a final design by next fall.

### Emily Lakdawalla - The Planetary Society Blog

## April 24, 2017

### Symmetrybreaking - Fermilab/SLAC

Particles seen by the ALICE experiment hint at the formation of quark-gluon plasma during proton-proton collisions.

About 13.8 billion years ago, the universe was a hot, thick soup of quarks and gluons—the fundamental components that eventually combined into protons, neutrons and other hadrons.

Scientists can produce this primitive particle soup, called the quark-gluon plasma, in collisions between heavy ions. But for the first time physicists on an experiment at the Large Hadron Collider have observed particle evidence of its creation in collisions between protons as well.

The LHC collides protons during the majority of its run time. This new result, published in *Nature Physics* by the ALICE collaboration, challenges long-held notions about the nature of those proton-proton collisions and about possible phenomena that were previously missed.

“Many people think that protons are too light to produce this extremely hot and dense plasma,” says Livio Bianchi, a postdoc at the University of Houston who worked on this analysis. “But these new results are making us question this assumption.”

Scientists at the LHC and at the US Department of Energy’s Brookhaven National Laboratory’s Relativistic Heavy Ion Collider, or RHIC, have previously created quark-gluon plasma in gold-gold and lead-lead collisions.

In the quark-gluon plasma, mid-sized quarks—such as strange quarks—freely roam and eventually bond into bigger, composite particles (similar to the way quartz crystals grow within molten granite rocks as they slowly cool). These hadrons are ejected as the plasma fizzles out and serve as a telltale signature of their soupy origin. ALICE researchers noticed numerous proton-proton collisions emitting strange hadrons at an elevated rate.

“In proton collisions that produced many particles, we saw more hadrons containing strange quarks than predicted,” says Rene Bellwied, a professor at the University of Houston. “And interestingly, we saw an even bigger gap between the predicted number and our experimental results when we examined particles containing two or three strange quarks.”

From a theoretical perspective, a proliferation of strange hadrons is not enough to definitively confirm the existence of quark-gluon plasma. Rather, it could be the result of some other unknown processes occurring at the subatomic scale.

“This measurement is of great interest to quark-gluon-plasma researchers who wonder how a possible QGP signature can arise in proton-proton collisions,” says Urs Wiedemann, a theorist at CERN. “But it is also of great interest for high energy physicists who have never encountered such a phenomenon in proton-proton collisions.”

Earlier research at the LHC found that the spatial orientation of particles produced during some proton-proton collisions mirrored the patterns created during heavy-ion collisions, suggesting that maybe these two types of collisions have more in common than originally predicted. Scientists working on the ALICE experiment will need to explore multiple characteristics of these strange proton-proton collisions before they can confirm if they are really seeing a miniscule droplet of the early universe.

“Quark-gluon plasma is a liquid, so we also need to look at the hydrodynamic features,” Bianchi says. “The composition of the escaping particles is not enough on its own.”

This finding comes from data collected during the first run of the LHC, between 2009 and 2013. More research over the next few years will help scientists determine whether the LHC can really make quark-gluon plasma in proton-proton collisions.

“We are very excited about this discovery,” says Federico Antinori, spokesperson of the ALICE collaboration. “We are again learning a lot about this extreme state of matter. Being able to isolate the quark-gluon-plasma-like phenomena in a smaller and simpler system, such as the collision between two protons, opens up an entirely new dimension for the study of the properties of the primordial state that our universe emerged from.”

Other experiments, such as those using RHIC, will provide more information about the observable traits and experimental characteristics of quark-gluon plasmas at lower energies, enabling researchers to gain a more complete picture of the characteristics of this primordial particle soup.

“The field makes far more progress by sharing techniques and comparing results than we would be able to with one facility alone,” says James Dunlop, a researcher at RHIC. “We look forward to seeing further discoveries from our colleagues in ALICE.”

### CERN Bulletin

On 3 April, the Vice-President and the President of the Staff Association presented the Staff Association's plan of activities for 2017 at a meeting of the Enlarged Directorate (Directors and Heads of Departments and Units), and shared the Association's concerns.

Five topics were addressed, starting with the implementation of the decisions taken in the framework of the 2015 five-yearly review.

# Five-yearly review – follow-up (*see Echo No. 257*)

## 2016 – Main implementations

Many changes were already put in place in 2016:

- Revision of the Staff Rules and Regulations in January 2016, for the diversity aspects, and in September 2016, for the new career structure: a salary grid with the introduction of grades;
- Revision of Administrative Circular No. 26 (Rev. 11) on the "Recognition of merit";
- Placement of staff members in grades and provisional placement in benchmark jobs;
- Definition of the guidelines for the 2017 MERIT exercise.

The Staff Association was closely involved in these revisions and their implementation. The concertation process generally worked well in this framework: agreements preserving the interests of both the staff and the Organization were found.

## 2017 – First year of the MERIT exercise (*see Echo No. 259*)

The Staff Association emphasised the following points:

### Correction of placement in a benchmark job (*see Echo No. 261*)

By the end of February 2017, many requests for corrections had already been submitted to the HR Department. These requests came from:

- staff members (144): mostly requests for a change of benchmark job to one in a higher grade range (e.g. from technician in grades 3-4-5 to technical engineer in grades 4-5-6) and, to a lesser extent, requests for a change of grade;
- the hierarchy (242): mostly changes of benchmark job title within the same grade range.

For the Staff Association, the agreement remains that requests for a change of grade (promotion) must be examined within the framework of the promotion procedure.

On the other hand, we insisted that corrections following placement in the wrong benchmark job, with or without a change of grade range, be examined and processed as soon as possible. These corrections must take effect before 1 July 2017, the date of the official confirmation of placement in a benchmark job.

### Personal positions of staff members

The application of the new salary grid resulted in the placement of many staff members in "personal positions", i.e. salary positions outside the salary grid, either below the minimum of their grade or, more frequently, above the maximum of their grade.

The Staff Association told the Enlarged Directorate that it is aware that our colleagues in a personal position, with a salary above the maximum of their grade, will not all be able to benefit from a promotion this year; the Association is even aware that, for some of them, there will be no promotion at all.

Nevertheless, we insisted that the case of each colleague in a personal position be considered, with an individual answer given.

### 2017 MERIT guidelines

The Staff Association recalled:

- that a **promotion** is a **change of grade**;
- that a **change of benchmark job** reflects a **change of functions**;
- that these two concepts differ in their use and therefore follow different procedures;
- that these procedures apply to **the whole of CERN** in the same way (**CERN-wide**);
- that no numerical guideline is applicable, as decided by the Management and accepted by the Staff Association.

Consequently, the Staff Association expects a maximum of promotions in 2017, while taking into account the need to control the long-term budget increase.

### Benchmark jobs over three grades, not two plus one

On the basis of the promotions guide (*see Echo No. 263*), advancement to the third grade of a benchmark job is analysed and evaluated in the same way as advancement from the first to the second grade, on the basis of criteria taking into account the level of the functions carried out, the experience and expertise acquired, etc.

Moreover, recruitment normally takes place in the first or second grade of a benchmark job, depending on the candidate's experience and expertise; however, hiring into a third grade, although exceptional, remains possible. The recruitment grade(s) must always be specified in the vacancy notice.

In conclusion, any display of grades showing parentheses, "1-2-(3)", or a greyed-out third grade is entirely unnecessary given the HR processes and can only be demotivating. We therefore urged that grades be displayed as three grades, "1-2-3", with no greyed-out part.

### Warnings

The Staff Association reported information it had received concerning non-compliance with the agreed rules, in particular on the following two points:

- the non-eligibility for promotion of staff members whose salary position is below 110% of the median salary of their grade, which amounts to limiting promotion proposals to staff members with a salary position at or above 110% of their grade. This is unacceptable and contrary to the rules set by the Management, in agreement with the Staff Association, and valid for the whole of CERN;
- the refusal of a change of benchmark job for reasons of personal convenience. It must be recalled that the benchmark job assigned to a person must reflect that person's actual functions, not the diplomas obtained or an academic title. Indeed, benchmark jobs must give a precise view of the functions carried out at CERN (type and number of posts) and thus help establish resource planning ("capacity planning"). Finally, a person whose functions do not correspond to the assigned benchmark job will be evaluated, in promotion exercises, on the functions associated with the benchmark job and not on those actually carried out, which will undoubtedly have an impact on that person's career.

The Staff Association strongly recommended that everyone at CERN have the correct benchmark job, even if it no longer corresponds to the person's initial diploma.

### Three more topics to address

To conclude the implementation of the five-yearly review, three topics remain to be addressed in 2017:

- internal mobility,
- recognition of acquired experience (Validation des Acquis de l'Expérience, VAE),
- career development interviews.

Three working groups have been launched by the HR Department, with the participation of Staff Association representatives. For the Staff Association, these elements will energise careers and partly compensate for the losses in advancement decided in the five-yearly review.

## Concertation

The Staff Association recalled that concertation is a process whereby the Director-General and the Staff Association consult each other in order to reach, as far as possible, a common position. Concertation requires a positive attitude, free of mistrust, and mutual confidence. The Staff Association is firmly committed in this direction, but it notes that concertation is not going as well as we would like. In reply to a question from the Director-General, the example was given of the delayed communication of the minutes and documents of the Standing Concertation Committee, which keeps the Association at arm's length without any objective reason.

## Internal investigations and justice

Work on the internal investigation and justice processes is necessary and urgent. This finding is shared by various services and at various levels.

The Staff Association recalled that CERN, as an international organisation, has the duties of a State towards its personnel, and that it must put in place exemplary processes in matters relating to investigations and internal justice.

The Staff Association therefore requests that a working group be set up as quickly as possible, under the aegis of the HR Department and with Staff Association participation in this group.

## Health and Safety

In its annual report, the CERN Medical Service reported problems related to psychosocial well-being: the number of days of long-term sick leave linked to psychosocial problems has increased significantly.

A working group has been launched by HR to properly understand this issue, identify its causes and establish an action plan. The Staff Association is taking part in this study, alongside HR, the Medical Service, HSE and the hierarchy in general. The Staff Association's message to the Enlarged Directorate was that there is no cause for panic, but that CERN cannot ignore the signals being perceived, which reflect suffering at work as well as disorganisation and an economic loss for the services.

## VICO and Elections

### VICO (VIsite COlleagues) (*see Echo No. 264*)

A campaign of short visits to CERN personnel by staff delegates was launched in mid-March and will continue until mid-June.

The aim of this campaign is to meet our colleagues, to initiate a dialogue on subjects of mutual interest, and to answer their questions as far as possible. It is also an opportunity to encourage our colleagues to join the Association, and to suggest that some of them stand in the Staff Council elections planned for November 2017.

### Electoral colleges

Following the restructuring of the Organization in January 2016 and the replacement of career paths by grades, the Staff Association must review the electoral colleges, taking into account the different professional categories, the different sectors/departments/units, the distribution of the number of staff members per department/unit, etc.

We recalled that five seats on the Staff Council are reserved for delegates representing fellows and associated members of the personnel. In reply to a question, the Staff Association indicated that the number of these seats will be increased once interest in the Association among fellows and MPAs has grown, in terms of both members and candidates in the elections; at present only two of these five seats are filled.

We stressed to the Directors and the Heads of departments and units the need for good representation of all professional categories and of all sectors and departments on the Staff Council, and we asked them to help ensure this representativeness.

The presentation ended with a series of questions and answers. The Director-General thanked the Vice-President and the President of the Staff Association for the subjects raised in the presentation and the frank answers to the questions, and invited the Association to come back before the Enlarged Directorate at a later date to continue this constructive dialogue.

Which we will certainly not fail to do, of course!

*The English version of this article will be published in the next Echo.*

### CERN Bulletin

The GAC organises monthly sessions with individual interviews, held on the last Tuesday of each month, except in June, July and December.

The next session will be held on:

**Tuesday 30 May, from 1.30 p.m. to 4.00 p.m. Staff Association meeting room**

The following sessions will take place on Tuesdays 29 August, 26 September, 31 October and 28 November 2017.

The sessions of the Pensioners' Group (Groupement des Anciens) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/.

Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

### CERN Bulletin

**Wednesday 3 May 2017 at 20:00**

CERN Council Chamber


# Kagemusha

Directed by Akira Kurosawa

Japan, 1980, 162 minutes

When a powerful warlord in medieval Japan dies, a poor thief recruited to impersonate him finds difficulty living up to his role and clashes with the spirit of the warlord during turbulent times in the kingdom.

Original version Japanese; English subtitles.

### CERN Bulletin

# Special event

## on Thursday 4 May 2017 at 18:30

CERN Council Chamber

In collaboration with the CERN Running Club and the Women In Technology initiative, the CERN CineClub is happy to announce the screening of the film

# Free to Run

Directed by Pierre Morath

Switzerland, 2016, 99 minutes

Today, all anybody needs to run is the determination and a pair of the right shoes. But just fifty years ago, running was viewed almost exclusively as the domain of elite male athletes who competed on tracks. With insight and propulsive energy, director Pierre Morath traces running's rise back to the 1960s, examining how the liberation movements and newfound sense of personal freedom that defined the era took the sport out of the stadiums and onto the streets, and how legends like Steve Prefontaine, Fred Lebow, and Kathrine Switzer redefined running as a populist phenomenon.

Original version French; English subtitles.

**Come along to watch the film and learn more about the history of popular races and amateur running, and how women had to fight for their right to be free to run! Join us after the screening for drinks in Restaurant 1, so that we can share impressions and discuss the film.**

### CERN Bulletin

# La couleur des jours

## oriSio

**From 2 to 12 May 2017 CERN Meyrin, Main Building**

This work grew out of a strong interest in China and a curiosity about a very ancient medium: lacquer!

I reinterpret this art in an abstract style.

Here I present lacquers on aluminium, worked with plasma and then coloured mainly with pigments.

I want my works to be raw, torn, evanescent, warped, even pierced, yet with a fine sense of depth of colour.

For more information: staff.association@cern.ch | Tel.: 022 766 37 38

### John Baez - Azimuth

This book looks interesting:

• David S. Wilson and Alan Kirman, editors, *Complexity and Evolution: Toward a New Synthesis for Economics*, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

**Agent_Zero** is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

Abstract. Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.

It sounds like a fun and productive project as long as one keeps one’s wits about one. It’s hard to draw conclusions about *human* behavior from such simplified agents. One can argue about this, and of course economists will. But regardless of this, one *can* draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward the benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, *Evonomics*, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.

## April 23, 2017

### The n-Category Cafe

*Guest post by Pierre Cagne*

The Kan Extension Seminar II continues with a third consecutive paper by Kelly, entitled *On clubs and data-type constructors*. It deals with the notion of club, first introduced by Kelly as an attempt to encode theories of categories with structure involving some kind of coherence issues. Astonishingly enough, there is no mention of operads whatsoever in this article. (To be fair, there is a mention of “those Lawvere theories with only associativity axioms”…) Is it because the notion of club was developed in several stages at various time periods, making operads less identifiable among this work? Or did Kelly judge the link between the two notions irrelevant? I am not sure, but anyway I think it is quite interesting to read this article in the light of what we now know about operads.

Before starting with the mathematical content, I would like to thank Alexander, Brendan and Emily for organizing this online seminar. It is a great opportunity to take a deeper look at seminal papers that would have been hard to explore all by oneself. On that note, I am also very grateful for the rich discussions we have with my fellow participants.

### Non symmetric Set-operads

Let us take a look at the simplest kind of operads: non symmetric $\mathsf{Set}$-operads. Those are, informally, collections of *operations* with given arities, closed under composition. The usual way to define them is to endow the category $[\mathbf{N},\mathsf{Set}]$ of $\mathbf{N}$-indexed families of sets with the substitution monoidal product (see Simon’s post): for two such families $R$ and $S$,
$$(R \circ S)_n = \sum_{k_1+\dots+k_m = n} R_m \times S_{k_1} \times \dots \times S_{k_m} \quad \forall n \in \mathbf{N}$$
This monoidal product is better understood when elements of $R_n$ and $S_n$ are thought of as *branchings* with $n$ inputs and one output: $R \circ S$ is then obtained by plugging the outputs of elements of $S$ into the inputs of elements of $R$. A non symmetric operad is defined to be a monoid for that monoidal product, a typical example being the family $(\mathsf{Set}(X^n,X))_{n\in\mathbf{N}}$ for a set $X$.
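The substitution product can be sketched concretely. The following is a minimal illustration of my own (the dict encoding and the name `substitution` are not from the post): an $\mathbf{N}$-indexed family is modelled as a dict mapping each arity to a list of operation labels, and $(R \circ S)_n$ collects the tuples obtained by plugging operations of $S$ into an operation of $R$.

```python
from itertools import product

def substitution(R, S):
    """Substitution product of N-indexed families:
    (R ∘ S)_n = sum over k_1+...+k_m = n of R_m x S_{k_1} x ... x S_{k_m}.

    R, S: dicts mapping an arity m to a list of operation labels.
    Returns the composite family as a dict of the same shape, whose
    elements are tuples (r, s_1, ..., s_m) with r of arity m.
    """
    ops_S = [(k, s) for k, ops in S.items() for s in ops]  # S flattened with arities
    out = {}
    for m, r_ops in R.items():
        for r in r_ops:
            # choose one operation of S for each of the m inputs of r
            for choice in product(ops_S, repeat=m):
                n = sum(k for k, _ in choice)  # total arity after plugging in
                out.setdefault(n, []).append((r,) + tuple(s for _, s in choice))
    return out

# Plugging a unary "id" and a binary "*" into a binary "+"
# yields composite operations of arities 2, 3 and 4.
R = {2: ["+"]}
S = {1: ["id"], 2: ["*"]}
composite = substitution(R, S)
```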

We can now take advantage of the equivalence $[\mathbf{N},\mathsf{Set}] \overset{\sim}{\to} \mathsf{Set}/\mathbf{N}$ to equip the category $\mathsf{Set}/\mathbf{N}$ with a monoidal product. This equivalence maps a family $S$ to the coproduct $\sum_n S_n$ with the canonical map to $\mathbf{N}$, while the inverse equivalence maps a function $a: A \to \mathbf{N}$ to the family of fibers $(a^{-1}(n))_{n\in\mathbf{N}}$. It means that an $\mathbf{N}$-indexed family can be thought of either as a set of operations of arity $n$ for each $n$, or as a bunch of operations, each labeled by an integer giving its arity.
Let us transport the monoidal product of $[\mathbf{N},\mathsf{Set}]$ to $\mathsf{Set}/\mathbf{N}$: given two maps $a: A \to \mathbf{N}$ and $b: B \to \mathbf{N}$, we compute the $\circ$-product of the families of fibers, and then take the coproduct to get
$$A \circ B = \{ (x,y_1,\dots,y_m) : x \in A,\ y_i \in B,\ a(x) = m \}$$
with the map $A \circ B \to \mathbf{N}$ sending $(x,y_1,\dots,y_m) \mapsto \sum_i b(y_i)$. That is, the monoidal product is achieved by computing the following pullback:

where $L$ is the free monoid monad (or *list* monad) on $\mathsf{Set}$. Hence a non symmetric operad is equivalently a monoid in $\mathsf{Set}/\mathbf{N}$ for this monoidal product. In Burroni’s terminology, it would be called an $L$-category with one object.

In my opinion, Kelly’s clubs are a way to generalize this point of view to other kinds of operads, replacing $\mathbf{N}$ by the groupoid $\mathbf{P}$ of bijections (to get symmetric operads) or the category $\mathsf{Fin}$ of finite sets (to get Lawvere theories). Obviously, $\mathsf{Set}/\mathbf{P}$ or $\mathsf{Set}/\mathsf{Fin}$ does not make much sense, but the earlier coproduct functor can easily be understood as a Grothendieck construction that adapts neatly to this context, providing functors:
$$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P},\qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$
Of course, these functors are not equivalences anymore, but that does not prevent us from looking for monoidal products on $\mathsf{Cat}/\mathbf{P}$ and $\mathsf{Cat}/\mathsf{Fin}$ that restrict to the substitution product on the essential images of these functors (i.e. the discrete opfibrations). Before going to the abstract definitions, you might keep in mind the following goal: we are seeking those small categories $\mathcal{C}$ such that $\mathsf{Cat}/\mathcal{C}$ admits a monoidal product reflecting, through the Grothendieck construction, the substitution product in $[\mathcal{C},\mathsf{Set}]$.
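In the $\mathsf{Set}/\mathbf{N}$ picture, the same product can be computed directly from the arity maps. Here is a rough sketch of my own (the encoding and the name `slice_product` are not Kelly’s): an object of $\mathsf{Set}/\mathbf{N}$ is a dict sending each element to its arity, and the product returns $A \circ B$ together with its arity map.

```python
from itertools import product

def slice_product(a, b):
    """Monoidal product on Set/N: given arity maps a: A -> N and b: B -> N
    (encoded as dicts), return A ∘ B = {(x, y_1, ..., y_m) : a(x) = m}
    together with its arity map (x, y_1, ..., y_m) |-> sum_i b(y_i),
    again encoded as a single dict."""
    return {
        (x,) + ys: sum(b[y] for y in ys)
        for x, m in a.items()
        for ys in product(list(b), repeat=m)
    }

# The element ("+", "id", "*") plugs "id" and "*" into the binary "+";
# its arity is 1 + 2 = 3.
ab = slice_product({"+": 2}, {"id": 1, "*": 2})
```

This is the same data as the fiberwise substitution product, just packaged as one map into $\mathbf{N}$ instead of an $\mathbf{N}$-indexed family.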

### Abstract clubs

Recall that in a monoidal category $\mathcal{E}$ with product $\otimes$ and unit $I$, any monoid $M$ with multiplication $m: M \otimes M \to M$ and unit $u: I \to M$ induces a monoidal structure on $\mathcal{E}/M$ as follows: the unit is $u: I \to M$ and the product of $f: X \to M$ by $g: Y \to M$ is the composite
$$X \otimes Y \overset{f \otimes g}{\to} M \otimes M \overset{m}{\to} M$$
Be aware that this monoidal structure depends heavily on the monoid $M$. For example, even if $\mathcal{E}$ is finitely complete and $\otimes$ is the cartesian product, the induced structure on $\mathcal{E}/M$ is almost never the cartesian one. A notable fact about this structure on $\mathcal{E}/M$ is that the monoids in it are exactly the morphisms of monoids with codomain $M$.
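As a toy instance of this slice construction (mine, for illustration only): take $\mathcal{E} = \mathsf{Set}$ with the cartesian product and $M = (\mathbf{N}, +, 0)$. The induced product of $f: X \to M$ and $g: Y \to M$ is the composite $m \circ (f \times g)$ on $X \times Y$:

```python
def slice_tensor(f, g, mult):
    """Product in E/M for E = Set with cartesian ⊗: objects are dicts
    encoding maps into the monoid M; the product of f: X -> M and
    g: Y -> M is the map X × Y -> M sending (x, y) to mult(f(x), g(y))."""
    return {(x, y): mult(f[x], g[y]) for x in f for y in g}

# With M = (N, +, 0), the values (thought of as arities) simply add
# under the product; this is not the operad product on Set/N, which
# instead uses the list monad L, but it shows how the slice structure
# depends on the chosen monoid M.
h = slice_tensor({"p": 2, "q": 0}, {"r": 1}, lambda u, v: u + v)
```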

We will use this property in the monoidal category
$[\mathcal{A},\mathcal{A}]$ of endofunctors on a category
$\mathcal{A}$. I will not say a lot about size issues here, but of
course we assume that there exist enough universes to make sense of
$[\mathcal{A},\mathcal{A}]$ as a category even when $\mathcal{A}$ is
not small but only locally small: that is, if *smallness* is
relative to a universe $\mathbb{U}$, then we posit a universe
$\mathbb{V} \ni \mathbb{U}$ big enough to contain the set of objects
of $\mathcal{A}$, making $\mathcal{A}$ a $\mathbb{V}$-small category
and hence $[\mathcal{A},\mathcal{A}]$ a locally $\mathbb{V}$-small
category. The monoidal product on $[\mathcal{A},\mathcal{A}]$ is just
the composition of endofunctors and the unit is the identity functor
$\mathrm{Id}$. The monoids in that category are precisely the monads
on $\mathcal{A}$, and for any such monad $S: \mathcal{A} \to \mathcal{A}$
with multiplication $n: SS \to S$ and unit $j: \mathrm{Id} \to S$, the
slice category $[\mathcal{A},\mathcal{A}]/S$ inherits a monoidal
structure with unit $j$ and product $\alpha \circ^S \beta$ the
composite
$$T R \xrightarrow{\alpha\beta} S S \xrightarrow{n} S$$
for any $\alpha: T \to S$ and $\beta: R \to S$.
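To make the composite $TR \to SS \to S$ concrete, here is a small sketch of my own (not from the post), taking $S$ to be the list monad on sets: the horizontal composite $\alpha\beta$ applies $\alpha$ and then $\beta$ inside, and the multiplication $n$ flattens.

```python
# Illustrative sketch: S = the list monad, with multiplication n = concat.
# Given components of alpha: T -> S and beta: R -> S as plain functions,
# their product alpha o^S beta is the composite
#   T(R a) --alpha--> [R a] --map beta--> [[a]] --concat (n)--> [a]

def product_over_S(alpha, beta):
    def composite(trx):
        layered = [beta(r) for r in alpha(trx)]   # horizontal composite alpha.beta
        return [x for xs in layered for x in xs]  # n: SS -> S is concatenation
    return composite

# T = Maybe (None or a value); alpha: Maybe -> list
alpha = lambda m: [] if m is None else [m]
# R = pairs; beta: Pair -> list
beta = lambda p: [p[0], p[1]]

gamma = product_over_S(alpha, beta)  # gamma: Maybe (a, a) -> [a]
print(gamma(("x", "y")))  # ['x', 'y']
print(gamma(None))        # []
```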

Now a natural transformation $\gamma$ between two functors
$F, G: \mathcal{A} \to \mathcal{A}$ is said to be *cartesian*
whenever its naturality squares

are pullback diagrams. If $\mathcal{A}$ is finitely complete, as it will be for the rest of the post, it admits in particular a terminal object $1$, and the pasting lemma ensures that we only have to check that the naturality squares of the form

are pullbacks to know whether $\gamma$ is cartesian. Let us denote by $\mathcal{M}$ the (possibly large) set of morphisms in $[\mathcal{A},\mathcal{A}]$ that are cartesian in this sense, and by $\mathcal{M}/S$ the full subcategory of $[\mathcal{A},\mathcal{A}]/S$ whose objects are in $\mathcal{M}$.

**Definition.** A *club in $\mathcal{A}$* is a monad $S$ such that
$\mathcal{M}/S$ is closed under the monoidal product $\circ^S$.

By “closed under $\circ^S$”, it is understood that the unit $j$ of $S$ is in $\mathcal{M}$ and that the product $\alpha \circ^S \beta$ of two elements of $\mathcal{M}$ with codomain $S$ is still in $\mathcal{M}$. A useful alternative characterization is the following:

**Lemma.** A monad $(S, n, j)$ is a club if and only if $n, j \in \mathcal{M}$ and $S\mathcal{M} \subseteq \mathcal{M}$.

It is clear from the definition of $\circ^S$ that the condition is sufficient, as $\alpha \circ^S \beta$ can be written as $n \cdot (S\beta) \cdot (\alpha T)$ via the exchange rule. Conversely, suppose $S$ is a club: $j \in \mathcal{M}$ as it is the monoidal unit; $n \in \mathcal{M}$ because $n = \mathrm{id}_S \circ^S \mathrm{id}_S \in \mathcal{M}$; finally, for any $\alpha: T \to S$ in $\mathcal{M}$, we must have $\mathrm{id}_S \circ^S \alpha = n \cdot (S\alpha) \in \mathcal{M}$, and since $n \in \mathcal{M}$ already, the pasting lemma yields $S\alpha \in \mathcal{M}$.

In particular, this lemma shows that the monoids in $\mathcal{M}/S$, which coincide with monad maps $T \to S$ in $\mathcal{M}$ for some monad $T$, are clubs too. We shall denote the category of these by $\mathbf{Club}(\mathcal{A})/S$.

The lemma also implies that any *cartesian monad*, by which is
meant a pullback-preserving monad with cartesian unit and
multiplication, is automatically a club.

Now note that evaluation at $1$ provides an equivalence $\mathcal{M}/S \xrightarrow{\sim} \mathcal{A}/S1$ whose pseudo-inverse sends a map $f: K \to S1$ to the natural transformation defined pointwise by the pullback

The previous monoidal product on $\mathcal{M}/S$ can be transported to $\mathcal{A}/S1$, where it bears a fairly simple description: given $f: K \to S1$ and $g: H \to S1$, the product, still denoted $f \circ^S g$, is the evaluation at $1$ of the composite $TR \to SS \to S$, where $T \to S$ corresponds to $f$ and $R \to S$ to $g$. Hence the explicit equivalence given above allows us to write this as

**Definition.** By abuse of terminology, a monoid in $\mathcal{A}/S1$
is said to be a *club over $S1$*.

### Examples of clubs

On $\mathsf{Set}$, the free monoid monad $L$ is cartesian, hence a club on $\mathsf{Set}$ in the above sense. Of course, we retrieve as $\circ^L$ the monoidal product of the introduction on $\mathsf{Set}/\mathbf{N}$. Hence, clubs over $\mathbf{N}$ in $\mathsf{Set}$ are exactly the non-symmetric $\mathsf{Set}$-operads.
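Unwinding the transported product $\circ^L$ on $\mathsf{Set}/\mathbf{N}$ gives the familiar substitution formula for non-symmetric operads: an element of $f \circ^L g$ is an operation $k$ of $f$ together with a tuple of $f(k)$ operations of $g$, graded by the sum of their arities. A finite sketch of my own (for finite sets of operations; the names are illustrative):

```python
from itertools import product as tuples

# Illustrative sketch of the substitution product on Set/N for finite
# objects: an object is (K, f) with K a finite set of operations and
# f: K -> N the arity map.
def substitution(K, f, H, g):
    """(K, f) o^L (H, g): elements are pairs (k, (h_1, ..., h_{f(k)})),
    graded by g(h_1) + ... + g(h_{f(k)})."""
    elements = [(k, hs) for k in K for hs in tuples(H, repeat=f(k))]
    arity = lambda elt: sum(g(h) for h in elt[1])
    return elements, arity

# Example: one binary operation substituted with one unary operation.
K, f = ["mul"], lambda k: 2   # "mul" has arity 2
H, g = ["id"], lambda h: 1    # "id" has arity 1
elems, ar = substitution(K, f, H, g)
print(elems, ar(elems[0]))  # [('mul', ('id', 'id'))] 2
```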

Considering $\mathsf{Cat}$ as a $1$-category, the free finite-coproduct category monad $F$ on $\mathsf{Cat}$ is a club in the above sense. This can be shown directly through the characterization we stated earlier: its unit and multiplication are cartesian and it maps cartesian transformations to cartesian transformations. Moreover, the obvious monad map $P \to F$ is cartesian, where $P$ is the free strict symmetric monoidal category monad on $\mathsf{Cat}$. Hence it follows for free that $P$ is also a club on $\mathsf{Cat}$. Note that the groupoid $\mathbf{P}$ of bijections is $P1$ and the category $\mathsf{Fin}$ of finite sets is $F1$.
So it is now a matter of careful bookkeeping to establish that the functors (given by the Grothendieck construction) $$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P}, \qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$ are strong monoidal, where the domain categories are given Kelly’s substitution product. In other words, this exhibits symmetric $\mathsf{Set}$-operads and non-enriched Lawvere theories as special clubs over $\mathbf{P}$ and $\mathsf{Fin}$.

We could say that we are done: we have a polished abstract notion of club that encompasses the different notions of operads on $\mathsf{Set}$ that we are used to. But what about operads on other categories? Also, the above monads $P$ and $F$ are actually $2$-monads on $\mathsf{Cat}$ when it is seen as a $2$-category. Can we extend the notion to this enrichment?

### Enriched clubs

We shall fix a cosmos $\mathcal{V}$ to enrich over (and denote as
usual the underlying ordinary notions by a $0$-index), but we want it
to have good properties, so that *finite completeness* makes sense
in this enriched framework. Hence we ask that $\mathcal{V}$ be locally
finitely presentable as a closed category (see
David’s post). Taking a look at what we did in the ordinary case, we see
that it relies heavily on the possibility of defining slice
categories, which is not possible in full generality. Hence we ask that
$\mathcal{V}$ be semicartesian, meaning that the monoidal unit of
$\mathcal{V}$ is its terminal object: then for a
$\mathcal{V}$-category $\mathcal{B}$, the slice category
$\mathcal{B}/B$ is defined to have elements $1 \to \mathcal{B}(X, B)$
as objects, and the space of morphisms between such
$f: 1 \to \mathcal{B}(X, B)$ and $f': 1 \to \mathcal{B}(X', B)$ is given
by the following pullback in $\mathcal{V}_0$:

If we also want to be able to talk about the category of enriched clubs over something, we should be able to make a $\mathcal{V}$-category out of the monoids in a monoidal $\mathcal{V}$-category. Again, this is a priori not possible: the space of monoid maps between $(M, m, i)$ and $(N, n, j)$ is supposed to interpret “the subspace of those $f: M \to N$ such that $fi = j$ and $fm(x, y) = n(fx, fy)$ for all $x, y$”, where the latter equation has two occurrences of $f$ on the right. Hence we ask that $\mathcal{V}$ actually be a cartesian cosmos, so that the interpretation of such a subspace is the joint equalizer of

Moreover, these hypotheses also resolve the set-theoretical issues: because of all the hypotheses on $\mathcal{V}$, the underlying $\mathcal{V}_0$ identifies with the category $\mathrm{Lex}[\mathcal{T}_0, \mathsf{Set}]$ of $\mathsf{Set}$-valued left exact functors from the finitely presentable objects of $\mathcal{V}_0$. Hence, for a $\mathcal{V}$-category $\mathcal{A}$, the category of $\mathcal{V}$-endofunctors $[\mathcal{A},\mathcal{A}]$ is naturally a $\mathcal{V}'$-category for the cartesian cosmos $\mathcal{V}' = \mathrm{Lex}[\mathcal{T}_0, \mathsf{Set}']$, where $\mathsf{Set}'$ is the category of $\mathbb{V}$-small sets for a universe $\mathbb{V}$ big enough to contain the set of objects of $\mathcal{A}$. Hence we do not worry too much about size issues and consider everything to be a $\mathcal{V}$-category; the careful reader will replace $\mathcal{V}$ by $\mathcal{V}'$ when necessary.

In the context of categories enriched over a locally finitely presentable cartesian closed cosmos $\mathcal{V}$, everything we did in the ordinary case enriches directly. We call a $\mathcal{V}$-natural transformation $\alpha: T \to S$ cartesian just when it is so as a natural transformation $T_0 \to S_0$, and denote the set of these by $\mathcal{M}$. For a $\mathcal{V}$-monad $S$ on $\mathcal{A}$, the category $\mathcal{M}/S$ is the full subcategory of the slice $[\mathcal{A},\mathcal{A}]/S$ spanned by the objects in $\mathcal{M}$.

**Definition.** A *$\mathcal{V}$-club on $\mathcal{A}$* is a
$\mathcal{V}$-monad $S$ such that $\mathcal{M}/S$ is closed under
the induced $\mathcal{V}$-monoidal product of
$[\mathcal{A},\mathcal{A}]/S$.

Now comes the fundamental proposition about enriched clubs:

**Proposition.** A $\mathcal{V}$-monad $S$ is a $\mathcal{V}$-club if
and only if $S_0$ is an ordinary club.

In that case, the category of monoids in $\mathcal{M}/S$ consists
of the clubs $T$ together with a $\mathcal{V}$-monad map
$1 \to [\mathcal{A},\mathcal{A}](T, S)$ in $\mathcal{M}$. We will still
denote it $\mathbf{Club}(\mathcal{A})/S$; its underlying ordinary
category is $\mathbf{Club}(\mathcal{A}_0)/S_0$. We can once again take
advantage of the $\mathcal{V}$-equivalence
$\mathcal{M}/S \simeq \mathcal{A}/S1$ to equip the latter with a
$\mathcal{V}$-monoidal product, and abuse terminology to call its
monoids *$\mathcal{V}$-clubs over $S1$*. Proving all that
carefully requires notions of enriched factorization systems that are
of no use for this post.

So basically, the slogan is: as long as $\mathcal{V}$ is a cartesian cosmos which is locally presentable as a closed category, everything works the same way as in the ordinary case, and $(-)_0$ preserves and reflects clubs.

### Examples of enriched clubs

As we said earlier, $F$ and $P$ are $2$-monads on $\mathsf{Cat}$, and the underlying $F_0$ and $P_0$ (earlier just denoted $F$ and $P$) are ordinary clubs. So $F$ and $P$ are $\mathsf{Cat}$-clubs, maybe better called $2$-clubs. Moreover, the map $P_0 \to F_0$ mentioned earlier is easily promoted to a $2$-natural transformation, making $\mathbf{P}$ a $2$-club over $\mathsf{Fin}$.

The free monoid monad $L$ on a cartesian cosmos $\mathcal{V}$ is a $\mathcal{V}$-club, and the clubs over $L1$ are precisely the non-symmetric $\mathcal{V}$-operads.

Last but not least, a quite surprising example at first sight. Any small ordinary category $\mathcal{A}_0$ is naturally enriched in its category of presheaves $\mathrm{Psh}(\mathcal{A}_0)$, as the full subcategory of the cartesian cosmos $\mathcal{V}=\mathrm{Psh}(\mathcal{A}_0)$ spanned by the representables. Concretely, the space of morphisms between $A$ and $B$ is given by the presheaf $$\mathcal{A}(A,B): C \mapsto \mathcal{A}_0(A \times C, B).$$ Hence a $\mathcal{V}$-endofunctor $S$ on $\mathcal{A}$ is the data of a map $A \mapsto SA$ on objects, together with, for any $A,B$, a $\mathcal{V}$-natural transformation
$\sigma_{A,B}: \mathcal{A}(A,B) \to \mathcal{A}(SA,SB)$ satisfying some axioms. Now fixing $A,C \in \mathcal{A}$, the collection of maps $$(\sigma_{A,B})_C : \mathcal{A}_0(A\times C,B) \to \mathcal{A}_0(SA \times C, SB)$$ is equivalently, via Yoneda, a collection of morphisms $$\tilde{\sigma}_{A,C} \in \mathcal{A}_0(SA\times C, S(A \times C)).$$ The axioms that $\sigma$ satisfies as a $\mathcal{V}$-enriched natural transformation make $\tilde{\sigma}$ a strength for the endofunctor $S_0$.
Along this translation, a strong monad on $\mathcal{A}$ is then just a $\mathrm{Psh}(\mathcal{A}_0)$-monad. And it is very common, when modelling side effects by monads in computer science, to end up with strong cartesian monads. As cartesian monads, they are in particular ordinary clubs on $\mathcal{A}_0$. Hence, those are $\mathrm{Psh}(\mathcal{A}_0)$-monads whose underlying ordinary monad is a club: that is, they are $\mathrm{Psh}(\mathcal{A}_0)$-clubs on $\mathcal{A}$.
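The strength/strong-monad dictionary above can be made concrete in a few lines of code. This is my own minimal Python sketch for the list monad on $\mathsf{Set}$ (the names `fmap`, `unit`, `mult`, `strength` are illustrative, not from Kelly's paper):

```python
# Illustrative sketch only (not from the article): the list monad T on Set
# is strong, with strength t_{A,C} : T(A) x C -> T(A x C) obtained by
# mapping "pair with c" over the functor.

def fmap(f, xs):
    """Functor action of the list functor T."""
    return [f(x) for x in xs]

def unit(a):
    """Monad unit eta_A : A -> T(A)."""
    return [a]

def mult(xss):
    """Monad multiplication mu_A : T(T(A)) -> T(A), i.e. flattening."""
    return [x for xs in xss for x in xs]

def strength(ta, c):
    """Strength t_{A,C} : T(A) x C -> T(A x C)."""
    return fmap(lambda a: (a, c), ta)

# Unit axiom of a strong monad: t . (eta x id) = eta
assert strength(unit("a"), 42) == unit(("a", 42))

# Multiplication axiom: t . (mu x id) = mu . T(t) . t
tta, c = [[1, 2], [3]], "c"
lhs = strength(mult(tta), c)
rhs = mult(fmap(lambda p: strength(p[0], p[1]), strength(tta, c)))
assert lhs == rhs
```

Every set-functor given by such a "polynomial" construction carries this strength for free, which is one informal reason strong monads are so ubiquitous in semantics of side effects.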

In conclusion, let me point out that there is much more in Kelly's article than presented here, especially on *local factorisation systems* and their link to (replete) reflective subcategories with a left exact reflexion. It is by the way quite surprising that he does not stay in full generality longer, as one could define an abstract club in just that framework. Maybe there is just no interesting example to come up with at that level of generality…

Also, a great many examples of clubs come from never-published work of Robin Cockett (or at least, I was not able to find it), so these motivations are quite difficult to follow.

Going a little further in the generalization, the careful reader will have noticed that we did not say anything about *coloured* operads. For those, we would not look at slice categories of the form $\mathcal{A}/S1$, but at categories of spans with one leg pointing to $SC$ (morally mapping an operation to its coloured arity) and the other one to $C$ (morally picking the output colour), where $C$ is the object of colours. Those spans actually appear implicitly above whenever a map of the form $!: X \to 1$ is involved (morally, this is the map picking the "only output colour" in a non-coloured operad). This somehow should be contained somewhere in Garner's work on *double clubs* or in Shulman's and Cruttwell's unified framework for generalized multicategories. I am looking forward to learning more about that in the comments!

## April 22, 2017

### Lubos Motl - string vacua and pheno

Woit is a typical failing-grade student who simply isn't, and has never been, the right material for college. His inability to learn string theory is a well-known aspect of this fact. But most people in the world – and maybe even most physics students – misunderstand string theory. His low math-related intelligence, however, is often manifested in things that *are* comprehensible to all average or better students of physics.

Two years ago, Woit argued that the West Coast metric is the wrong one.

Now, unless you are a complete idiot, you must understand that the choice of the metric tensor – either \(({+}{-}{-}{-})\) or \(({-}{+}{+}{+})\) – is a pure convention. The metric tensor \(g^E_{\mu\nu}\) of the first culture is simply equal to minus the metric tensor of the second culture \(g^W_{\mu\nu}\), i.e. \(g^E_{\mu\nu} = - g^W_{\mu\nu}\), and every statement or formula written with one set of conventions may obviously be translated to a statement written in the other, and vice versa. The equations or statements basically differ just by some signs. The translation from one convention to another is always possible and is no more mysterious than the translation from British to U.S. English or vice versa.
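The convention point can even be checked mechanically. Here is a minimal sketch of my own (not anyone's published code) computing the same invariant interval in both signatures:

```python
# Sketch: the same physical displacement evaluated in the East Coast
# (-,+,+,+) and West Coast (+,-,-,-) conventions. The two metrics differ
# by an overall sign, so every invariant simply flips sign; the physics
# (timelike vs spacelike, proper time) is identical.

def interval(dt, dx, dy, dz, signature):
    """ds^2 of a displacement in either signature convention (c = 1)."""
    if signature == "east":   # (-,+,+,+)
        return -dt**2 + dx**2 + dy**2 + dz**2
    else:                     # "west": (+,-,-,-)
        return dt**2 - dx**2 - dy**2 - dz**2

d = (2.0, 1.0, 0.0, 0.0)          # a timelike displacement
east = interval(*d, "east")
west = interval(*d, "west")

assert east == -west              # a pure sign convention
assert (east < 0) == (west > 0)   # both conventions agree: timelike
```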

How stupid do you have to be to misunderstand this point, that there can't be any "wrong" convention for the sign? And how many people are willing to believe that someone's inability to get this simple point is compatible with the credibility of his comments about

*string theory*?

Well, this individual has brought us a new ludicrous triviality of the same type,

Two Pet Peeves

We're told that we mustn't use the same notation for a Lie group and a Lie algebra. Why? Because Tony Zee, Pierre Ramond, and partially Howard Georgi were using the unified notation and Woit "remember[s] being very confused about this when I first started studying the subject". Well, Mr Woit, you were confused simply because you have never been college material. But it's easier to look for flaws in Lie groups and Lie algebras than in your own worthless existence, right?

Many physicists use the same symbols for Lie groups and the corresponding Lie algebras for a simple reason: they – or at least their behavior near the identity (or any other point on the group manifold) – are completely equivalent. Except for some global behavior, the information about the Lie group is completely equivalent to the information about the corresponding Lie algebra. They're just two languages to talk about the same thing.

Just to be sure, in my and Dr Zahradník's textbook on linear algebra, we used the separate symbols and I love the fraktur fonts. In Czechia and maybe elsewhere, most people who are familiar with similar fonts at all call them "Schwabacher" but strictly speaking, Textura, Rotunda, Schwabacher, and Fraktur are four different typefaces. Schwabacher is older and was replaced by Fraktur in the 16th century. In 1941, Hitler decided that there were too many typos in the newspapers and that foreigners couldn't decode Fraktur, which diminished the importance of Germany abroad, so he banned Fraktur and replaced it with Antiqua.

When we published our textbook, I was bragging about the extensive index that was automatically created by a \({\rm \LaTeX}\) macro. I told somebody: Tell me any word and you will see that we can find it in the index. In front of several witnesses, the first person wanted to humiliate me so he said: "A broken bone." So I abruptly responded: "The index doesn't include a 'broken bone' literally but there's a fracture in it!" ;-) Yes, I did include a comment about the font in the index. You know, the composition of the index was as simple as placing a command like \placeInTheIndex{fraktura} in a given place of the source. After several compilations, the correct index was automatically created. I remember that in 1993 when I began to type it, one compilation of the book took 15 minutes on the PCs in the computer lab of our hostel! When we received new PCs with a 90 MHz frequency, the speed was almost doubled. ;-)

OK, I don't want to review elementary things because some readers know them and wouldn't learn anything new, while others don't know these things and a brief introduction wouldn't help them. But there is a simple relationship between a Lie algebra and a Lie group. You may obtain the elements of the group by a simple exponentiation of an element of a Lie algebra. For this reason, all the "structure coefficients" \(f_{ij}{}^k\) that remember the structure of commutators\[

[T_i,T_j] = f_{ij}{}^k T_k

\] contain the same information as all the curvature information about the group manifold near the identity. The Lie algebra simply *is* the tangent space of the group manifold at the identity (or any element), and all the commutators in the Lie algebra are equivalent to the information about the distortions that a projection of the neighborhood of the identity in the group manifold to a flat space causes.
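The exponentiation and the structure constants can be illustrated numerically. A minimal sketch of my own for \({\mathfrak so}(3)\) and \(SO(3)\), using plain Python lists and no external libraries:

```python
import math

# Hedged illustration: exponentiating an so(3) Lie-algebra element gives an
# SO(3) group element, and the structure constants f_{ij}^k are epsilon_{ijk}.

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(3)]
            for i in range(3)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def smul(c, A):
    return [[c * A[i][j] for j in range(3)] for i in range(3)]

def expm(A, terms=30):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    result = [[float(i == j) for j in range(3)] for i in range(3)]
    power = [[float(i == j) for j in range(3)] for i in range(3)]
    for n in range(1, terms):
        power = smul(1.0 / n, matmul(power, A))
        result = madd(result, power)
    return result

# so(3) generators: (L_i)_{jk} = -epsilon_{ijk}
L1 = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
L2 = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
L3 = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

# Structure constants: [L_1, L_2] = L_3 (and cyclic permutations)
comm = madd(matmul(L1, L2), smul(-1, matmul(L2, L1)))
assert comm == L3

# Exponentiation lands in the group: exp(theta*L3) is a rotation about z
theta = 0.3
R = expm(smul(theta, L3))
assert abs(R[0][0] - math.cos(theta)) < 1e-9
assert abs(R[1][0] - math.sin(theta)) < 1e-9
```

The `assert comm == L3` line is exactly the displayed commutation relation with \(f_{12}{}^3 = 1\), and the last two checks confirm that the tangent-space data integrates to the familiar rotation matrix.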

We often use the same symbols because it's harder to write the gothic fonts. More importantly,

whenever a theory, a solution, or a situation is connected with a particular Lie group, it's also connected with the corresponding Lie algebra, and vice versa!

That's the real reason why it doesn't matter whether you talk about a Lie group or a Lie algebra. We use their labels for "identification purposes" and the identification is the same whether you have a Lie group or a Lie algebra in mind. A very simple example:

There exist two rank-16, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are \(SO(32)\) and \(E_8\times E_8\), respectively.

I wrote the sentence in two ways. The first one sort of talks about the group manifolds while the second talks about Lie algebras. The information is obviously almost completely equivalent.

There exist two rank-16, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are (or have the Lie algebras) \({\mathfrak so}(32)\) and \({\mathfrak e}_8\oplus {\mathfrak e}_8\), respectively.

Well, except for subtleties – the global choices and identifications in the group manifold that don't affect the behavior of the group manifold in the vicinity of the identity element. If you want to be careful about these subtleties, you need to talk about the group manifolds, not just Lie algebras, because the Lie algebras "forget" the information about these global issues.

So you might want to be accurate and talk about the Lie groups in 10 dimensions – and say that the allowed heterotic gauge groups are \(E_8\times E_8\) and \(SO(32)\). However, this effort of yours would actually make things *worse*, because when you use a language that has the ambition of being correct about the global issues, it's your responsibility to be correct about them, indeed, and chances are that your first guess will be wrong!

In particular, the "\(SO(32)\)" heterotic string also contains spinors. So a somewhat smart person could say that the gauge group of that heterotic string is actually \(Spin(32)\), not \(SO(32)\). However, that would be about as wrong as \(SO(32)\) itself – almost no improvement – because the actual perturbative gauge group of this heterotic theory is isomorphic to\[

Spin(32) / \ZZ_2

\] where the \(\ZZ_2\) is chosen in such a way that the group is *not* isomorphic to \(SO(32)\). It's another \(\ZZ_2\) from the center isomorphic to \(\ZZ_2\times \ZZ_2\) that allows left-handed spinors but not the right-handed ones! By the way, funnily, the S-dual theory is type I superstring theory whose gauge group – arising from Chan-Paton factors of the open strings – seems to be \(O(32)\). However, the global form of the gauge group gets modified by D-particles: the other half of \(O(32)\) beyond \(SO(32)\) is broken, and spinors of \(Spin(32)\) are allowed by the D-particles, so non-perturbatively, the gauge group of type I superstring theory agrees with that of the heterotic S-dual theory, including the global subtleties.

(Peter Woit also ludicrously claims that physicists only need three groups, \(U(1)\), \(SU(2)\), \(SO(3)\). That may have been almost correct in the 1920s but it's surely not true in 21st-century particle physics. If you're an undergraduate with plans to do particle physics and someone offers to teach you about symplectic or exceptional groups, and perhaps a few others, you shouldn't refuse.)

You don't need to talk about string theory to encounter similar subtleties. Ask a simple question. What is the gauge group of the Standard Model? Well, people will normally answer \(SU(3)\times SU(2)\times U(1)\). But what they actually mean is just the statement that the Lie algebra of the gauge group is\[

{\mathfrak su}(3) \oplus {\mathfrak su}(2) \oplus {\mathfrak u}(1).

\] Note that the simple, Cartesian \(\times\) product of Lie groups gets translated to the direct \(\oplus\) sum of the Lie algebras – the latter are linear vector spaces. OK, so the statement that the Lie algebra of the gauge group of the Standard Model is the displayed expression above is correct.

But if you have the ambition to talk about the precise group manifolds, those know about all the "global subtleties", and it turns out that \(SU(3)\times SU(2)\times U(1)\) is *not* isomorphic to the Standard Model gauge group. Instead, the Standard Model gauge group is\[

[SU(3)\times SU(2)\times U(1)] / \ZZ_6.

\] The quotient by \(\ZZ_6\) must be present because all the fields of the Standard Model have a correlation between the hypercharge \(Y\) modulo \(1/6\) and the spin under the \(SU(2)\) as well as the representation under the \(SU(3)\). It is therefore impossible to construct states that wouldn't be invariant under this \(\ZZ_6\) even *a priori*, which means that this \(\ZZ_6\) acts trivially even on the original Hilbert space and "it's not there".
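The stated correlation modulo \(1/6\) is easy to check field by field. The following Python sketch is my own reconstruction (the field list and the phase formula are assumptions based on the standard hypercharge normalization, not taken from the post): the candidate \(\ZZ_6\) generator acts on a field of triality \(t\), \(SU(2)\)-doublet flag \(d\), and hypercharge \(Y\) by the phase \(\exp\left(2\pi i\,(t/3 + d/2 + Y)\right)\), which should equal \(1\) on every Standard Model field.

```python
import cmath

# Hedged sketch: check that the Z_6 generator acts trivially on each
# Standard Model field, listed as (name, triality t, doublet flag d, Y).
fields = [
    ("Q_L", 1, 1,  1/6),   # left-handed quark doublet
    ("u_R", 1, 0,  2/3),
    ("d_R", 1, 0, -1/3),
    ("L_L", 0, 1, -1/2),   # left-handed lepton doublet
    ("e_R", 0, 0, -1.0),
    ("H",   0, 1,  1/2),   # Higgs doublet
]

for name, t, d, Y in fields:
    phase = cmath.exp(2j * cmath.pi * (t / 3 + d / 2 + Y))
    assert abs(phase - 1) < 1e-12, name   # the Z_6 acts trivially
```

A field violating the correlation – say an \(SU(2)\) doublet with \(Y=0\) – would pick up a nontrivial phase, which is exactly why such a \(\ZZ_6\) could not be quotiented out if such fields existed.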

We must quotient by the \(\ZZ_6\) for the same reason why we usually say that the Standard Model gauge group doesn't contain an \(E_8\) factor. You could also say that there's an \(E_8\) factor except that all fields transform as singlets. ;-) We don't do it – when we say that there is a symmetry or a gauge group, we want at least something to transform nontrivially.

OK, you see that the analysis of the correlations of the discrete charges modulo \(1/6\) may be subtle. We usually don't care about these details when we want to determine much more important things – how many gauge bosons there are and what their couplings are. These important things are given purely by the Lie algebra which is why our statements about the identity of the gauge group should mostly be understood as statements about Lie algebras.

At some level, you may want to be picky and discuss the global properties of the gauge group and correlations. But you usually don't need to know these answers for anything else. The knowledge of these facts is usually only good for its own sake. You can't calculate any couplings from it, and so on. That's why our sentences should be assumed not to talk about these details at all – and/or be sloppy about these details.

(Just to be sure, the global subtleties, centers of the group, differences between \(SO(N)\) and \(O(N)\) and \(Spin(N)\), differences for even and odd \(N\), or dependence on \(N\) modulo 8, may still lead to interesting physical consequences and consistency checks and several papers of mine, especially about the heterotic matrix models, were obsessed with these details, too. But this kind of concerns only represents a minority of physicists' interests, especially in the case of beginners.)

By the way, the second "pet peeve" of Woit's is that one should distinguish real and complexified versions of the same Lie algebras (and groups). Well, I agree you should distinguish them. But at some general analytic or algebraic level, all algebras and other structures should always be understood as the complexified ones – and only afterwards may we impose some reality conditions on fields (and therefore on the allowed symmetries, too). So I would say that to a large extent, even this complaint of Woit's reflects his misunderstanding of something important: the most important information about the Lie groups is hiding in the structure constants of the corresponding Lie algebra, and those are identical for all Lie groups with the same Lie algebra, and they're also identical for real and complex versions of the groups.

(By the way, he pretends to be very careful about the complexification, but he writes the condition for matrix elements of an \(SU(2)\) matrix as \(\alpha^2+\beta^2=1\) instead of \(|\alpha|^2+|\beta|^2 = 1\). Too bad. You just shouldn't insist on people's distinguishing non-essential things about the complexification if you can't even write the essential ones correctly yourself.)

In the futile conversations about the foundations of quantum mechanics, I often hear or read comments like:

Please, don't use the confusing word "observation", which makes it look like quantum mechanics depends on what is an observation and what isn't, and it's scary.

Well, the reason why my – and Heisenberg's – statements look like we are saying that quantum mechanics depends on observations is that quantum mechanics depends on observations, indeed. So the dissatisfied laymen or beginners really ask the physicists to use language that would strengthen the listeners' belief that classical physics is still basically right. Except that it's not! We mostly use this language – including the word "observation" – because it really *is* essential in the new framework of physics.

In the same way, failing-grade students such as Peter Woit may be constantly asking whether a physicist talks about a Lie group or the corresponding Lie algebra. They are basically complaining:

Georgi, Ramond, Zee, don't use this notation that looks like it suggests that the Lie group and the Lie algebra are basically the same thing even though they are something completely different.

The problem is, of course, that the failing-grade students such as Peter Woit are wrong. Georgi, Ramond, Zee, and others often use the same symbols for the Lie groups and the Lie algebras because they *really are* basically the same thing. And it's just too bad if you don't understand this tight relationship – basically an equivalence.

I think that there exist many lousy teachers of mathematics and physics who are similar to Peter Woit. They don't understand the *substance* – what is really important, what is true. So they focus on what they understand – arbitrarily invented rules that the students are obliged to parrot for the teacher to feel more important. So the poor students who have such teachers are often punished for using a different metric tensor convention or for using the wrong font for a Lie algebra. These teachers don't understand the power and beauty of mathematics and physics and they're working hard to make sure that their students won't understand them, either.

by Luboš Motl (noreply@blogger.com) at April 22, 2017 01:16 PM

### ZapperZ - Physics and Physicists

Unfortunately, I will not be participating in it, because I'm flying off to start my vacation. However, I have the March for Science t-shirt, and will be wearing it all day. So I may not be with all of you who will be participating in it today, but I'll be there in spirit.

And yes, I have written to my elected officials in Washington DC to let them know how devastating the Trump budget proposal is to science and the economic future of this country. Unfortunately, I may be preaching to the choir, because all three of them (2 Senators and the 1 Representative of my district) are Democrats whom I expect to oppose the Trump budget as it is anyway.

Anyhow, to those of you who will be marching, YOU GO, BOYS AND GIRLS!

Zz.

## April 21, 2017

### Clifford V. Johnson - Asymptotia

I’ll be at Silicon Valley Comic Con this weekend, talking on two panels about science and its intersection with film on the one hand (tonight at 7pm if my flight is not too delayed), and non-fiction comics (see my book to come) on the other (Saturday at 12:30 or so). … Click to continue reading this post

The post Silicon Valley appeared first on Asymptotia.

### ZapperZ - Physics and Physicists

What he is arguing is that scientists should learn the mindset of the arts and literature, while those in the humanities and the arts should learn the mindset of science. College courses should not be tailored in such a way that the mindset of the home department is lost, and that a course in math, let's say, has been devolved into something palatable to an arts major.

I especially like his summary at the end:

One of the few good reasons is that a mindset that embraces ambiguity is something useful for scientists to see and explore a bit. By the same token, though, the more rigorous and abstract scientific mindset is something that is equally worthy of being experienced and explored by the more literarily inclined. A world in which physics majors are more comfortable embracing divergent perspectives, and English majors are more comfortable with systematic problem solving would be a better world for everyone.

I think we need to differentiate between changing the mindset and tailoring a course for a specific need. I've taught a physics class for mainly life-science majors. The topics that we covered were almost identical to those offered to engineering/physics majors, with the exception that they did not involve any calculus. But other than that, the course had the same rigor and coverage. The thing that made it specific to the group of students is that many of the examples that I used came out of biology and medicine. These were what I used to keep the students' interest, and to show them the relevance of what they were studying to their major area. But the systematic and analytical approach to the subject was still there. In fact, I consciously emphasized the techniques and skills in analyzing and solving a problem, and made them as important as the material itself. In other words, this is the "mindset" that Chad Orzel was referring to that we should not lose when the subject is taught to non-STEM majors.

Zz.

### Clifford V. Johnson - Asymptotia

Well, I've been meaning to tell you about this for some time, but I've been distracted by many other things. Last year I had the pleasure of working closely with the writers and producers on the forthcoming series on National Geographic entitled "Genius". (Promotional photo above borrowed from the show's website.) The first season, starting on Tuesday, is about Einstein – his life and work. It is a ten-episode arc. I'm going to venture that this is a rather new kind of TV show that I really hope does well, because it could open the door to longer, more careful treatments of subjects that are usually considered too "difficult" for general audiences, or that just get badly handled in the short duration of a two-hour movie.

Since reviews are already coming out, let me urge you to keep an open mind, and bear in mind that the reviewers (at the time of writing) have only seen the two or three episodes that have been sent to them for review. A review based on two or three episodes of a series like this (which is more like a ten hour movie - you know how these newer forms of "long form TV" work) is akin to a review based on watching the first 25-35 minutes of a two hour film. You can get a sense of tone and so forth from such a short sample, but not much can be gleaned about content to come. So remember that when the various opinion pieces appear in the next few weeks.

So... content. That's what I spent a lot of time helping them with. I do this sort of thing for movies and TV a lot, as you know, but this was a far [...] Click to continue reading this post

The post Advising on Genius: Helping Bring a Real Scientist to Screen appeared first on Asymptotia.

## April 19, 2017

### ZapperZ - Physics and Physicists

It explains the reason why we don't believe that the proton spin is due just to the 3 quarks that make up the proton, and in the process, you get an idea how complicated things can be inside a proton.

There are three good reasons that these three components might not add up so simply.

Expect the same with a neutron.

- The quarks aren't free, but are bound together inside a small structure: the proton. Confining an object can shift its spin, and all three quarks are very much confined.
- There are gluons inside, and gluons spin, too. The gluon spin can effectively "screen" the quark spin over the span of the proton, reducing its effects.
- And finally, there are quantum effects that delocalize the quarks, preventing them from being in exactly one place like particles and requiring a more wave-like analysis. These effects can also reduce or alter the proton's overall spin.

Zz.

### The n-Category Cafe

I’ve just finished teaching a seminar course officially called “Functional Equations”, but really more about the concepts of entropy and diversity.

I’m grateful to the participants — from many parts of mathematics, biology and physics, at levels from undergraduate to professor — who kept coming and contributing, week after week. It was lots of fun, and I learned a great deal.

This post collects together all the material in one place. First, the notes:

- Tom Leinster,
*Functional Equations*. Rough and ready course notes, University of Edinburgh, 2017.

Now, the posts I wrote every week:

- I. Cauchy’s equation
- II. Shannon entropy
- III. Explaining relative entropy
- IV. A simple characterization of relative entropy
- V. Expected surprise
- VI. Using probability theory to solve functional equations
- VII. The $p$-norms
- VIII. Measuring biodiversity
- IX. Entropy on a metric space
- X. Value
- XI. The diversity of a metacommunity

by leinster (Tom.Leinster@ed.ac.uk) at April 19, 2017 04:19 PM

### The n-Category Cafe

The eleventh and final installment of the functional equations course can be described in two ways:

From one perspective, I talked about conditional entropy, mutual information, and a very appealing analogy between these concepts and the most basic primary-school Venn diagrams.

From another, it was about diversity across a metacommunity, that is, an ecological community divided into smaller communities (e.g. geographical sites).
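As a toy illustration of that Venn-diagram analogy (my own, not part of the course notes): for a joint distribution of \((X, Y)\), the "areas" satisfy \(I(X;Y) = H(X) + H(Y) - H(X,Y)\), just like \(|A \cap B| = |A| + |B| - |A \cup B|\), with conditional entropy playing the role of the non-overlapping parts.

```python
import math
from collections import Counter

def H(dist):
    """Shannon entropy (in bits) of a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A small joint distribution p(x, y)
joint = {("a", 0): 0.25, ("a", 1): 0.25, ("b", 0): 0.5}

px, py = Counter(), Counter()
for (x, y), p in joint.items():
    px[x] += p      # marginal of X
    py[y] += p      # marginal of Y

mutual = H(px) + H(py) - H(joint)   # I(X;Y): the overlap of the two "discs"
cond = H(joint) - H(py)             # H(X|Y): the part of H(X) outside it

assert mutual >= 0
assert abs((cond + mutual) - H(px)) < 1e-12   # H(X|Y) + I(X;Y) = H(X)
```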

The notes begin on page 44 here.

by leinster (Tom.Leinster@ed.ac.uk) at April 19, 2017 04:05 PM

### Emily Lakdawalla - The Planetary Society Blog

### Lubos Motl - string vacua and pheno

String/M-theory is the most beautiful, powerful, and predictive theory we know – and, most likely, the #1 with these adjectives among those that are mathematically possible – but the degree of one's appreciation for its exceptional credentials depends on one's general knowledge of physics, especially quantum mechanics.

Wednesday papers: Arkani-Hamed et al. show that the amplituhedron is all about sign flips. Maldacena et al. study the double-trace deformations that make a wormhole traversable. Among other things, they argue that cloning is avoided because the extraction (by "Bob") eliminates the interior copy of the quantum information.

*Click to see an animation (info).*

Quantum mechanics was basically discovered at one point in the mid 1920s and forced physics to make a one-time quantum jump. On the other hand, it also defines a trend because the novelties of quantum mechanics may be taken more or less seriously, exploited more or less cleverly and completely, and as physics was evolving towards more advanced, stringy theories and explanations of things, the role of the quantum mechanical thinking was undoubtedly increasing.

When we say "classical string theory", it is a slightly ambiguous term. We can take various classical limits of various theories that emerge from string theory, e.g. the classical field theory limit of some effective field theories in the spacetime. But the most typical representation of "classical string theory" is given by the dull yellow animation above. A classical string is literally a curve in a pre-existing spacetime that oscillates according to a wave equation of a sort.

OK, on that picture, you see a vibrating rope. It is not better or more exceptional than an oscillating membrane, a Chladni pattern, a little green man with Parkinson's disease, or anything else that moves and jiggles. The power of string theory only emerges once you consider the real, adult theory where all the observables such as the positions of points along the string are given by non-commuting operators.

Just to be sure, the rule that "observable = measurable quantities are associated with non-commuting operators" is what I *mean* by quantum mechanics.

What does quantum mechanics do for a humble string like the yellow string above?

**First, it makes the spectrum of vibrations discrete.**

Classically, you may change the initial state of the vibrating string arbitrarily and continuously, and the energy carried by the string is therefore continuous, too. That's not the case in quantum mechanics. Quantum mechanics got its name from the quantized, discrete eigenvalues of the energy. A vibrating string is basically equivalent to a collection of infinitely many harmonic oscillators. Each quantum mechanical harmonic oscillator only carries an integer number of excitations, not a continuous amount of energy.

The discreteness of the spectrum – which depends on quantum mechanics for understandable reasons – is obviously needed for strings in string theory to reproduce the finite number of particle species we know in particle physics – or the countable set we may know in the future. Without the quantization, the number of species would be uncountably infinite. The species would form a continuum. There would be not just an electron and a muon but also an "elemuon" and all other things in between, in an infinite-dimensional space.
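The "string = infinitely many oscillators" statement can be made concrete with a toy count (my own illustration, for one transverse direction and with the zero-point constant dropped): the harmonics have frequencies \(\omega_k = k\omega_1\), each holds an integer number of quanta \(N_k\), so the energy is \(\omega_1 n\) with \(n=\sum_k k N_k\), and the number of states at level \(n\) is the partition number \(p(n)\).

```python
# Allowed energies: E = w_1 * n, n = 0, 1, 2, ... -- a discrete spectrum.
# The degeneracy at level n is the number of integer partitions of n,
# i.e. the number of ways to write n = sum_k k * N_k.

def level_degeneracy(n):
    """Count partitions of n via dynamic programming over harmonics k."""
    ways = [1] + [0] * n            # ways[m] = partitions of m using parts <= k
    for k in range(1, n + 1):       # allow harmonic k
        for m in range(k, n + 1):
            ways[m] += ways[m - k]
    return ways[n]

print([level_degeneracy(n) for n in range(8)])  # [1, 1, 2, 3, 5, 7, 11, 15]
```

These partition numbers are exactly the level degeneracies that make the one-dimensional counting of string states explode combinatorially rather than continuously.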

**Quantum mechanics is needed for some vibrating strings to act as gravitons and other exceptional particles.**

String theory predicts gravity. It makes Einstein's general relativity – and the curved spacetime and gravitational waves that result from it – unavoidable. Why is it so? It's because some of the low-energy vibrating strings, when they're added into the spacetime, have exactly the same effect as a deformation of the underlying geometry – or other low-energy fields defining the background.

Why is it so? It's ultimately because of the state-operator correspondence. The internal dynamics of a string depends on the underlying spacetime geometry. And the spacetime geometry may be changed. But the infinitesimal change of the action etc. for a string is equivalent to the interaction of the string with another, "tiny" string that is equivalent to the geometry change.

We may determine the right vibration of the "tiny" string that makes the previous sentence work because for every operator on the world sheet (2D history of a fundamental string), there exists a state of the string in the Hilbert space of the stringy vibrations. And this state-operator correspondence totally depends on quantum mechanics, too.

In classical physics, the number of observables – any function \(f(x_i,p_i)\) on a phase space – is vastly greater than the number of states. The states are just points given by the coordinates \((x_i,p_i)\) themselves. It's not hard to see that the first set is much greater – an infinite-dimensional vector space – than the second. However, quantum mechanics increases the number of states (by allowing all the superpositions) and reduces the number of observables (by making them quantized, or respectful towards the quantization of the phase space) and the two numbers become equivalent up to a simple tensoring with the functions of the parameter \(\sigma\) along the string.

I don't want to explain the state-operator correspondence, other blog posts have tried it and it is a rather technical issue in conformal field theory that you should study once you are really serious about learning string theory. But here, I want to emphasize that it wouldn't be possible in any classical world.

Let me point out that the world of the "interpreters" of quantum mechanics who imagine that the wave function is on par with a classical wave *is* a classical world, so it is exactly as impotent as any other classical world.

**T-duality depends on quantum mechanics**

A nice elementary symmetry that you discover in string theory compactified on tori is the so-called T-duality. The compactified string theory on a circle of radius \(R\) is the same as the theory on a circle of radius \(\alpha' / R\) where \(T=1/(2\pi \alpha')\) is the string tension (energy or mass per unit length of the string). Well, this property depends on quantum mechanics as well because the T-duality map exchanges the momentum \(n\) with the winding \(w\), which are two integers.

But in a classical string theory, the winding number \(w\in \ZZ\) would still be integer (it counts how many times a closed string is wrapped around the circle) while the momentum would be continuous, \(n\in\RR\). So they couldn't be related by a permutation symmetry. The T-duality couldn't exist.
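Here is a quick numerical illustration of the quantum statement (my own sketch, in \(\alpha'=1\) units, using the standard closed-string mass formula with level matching): because both \(n\) and \(w\) run over the integers, the spectrum at radius \(R\) coincides exactly with the spectrum at \(1/R\) once they are swapped.

```python
import itertools

# Closed bosonic string on a circle of radius R (alpha' = 1 units):
#   M^2 = (n/R)^2 + (w R)^2 + 2 (N + Nt - 2),  level matching N - Nt = n*w,
# with n = momentum, w = winding, N/Nt = left/right oscillator levels.
# T-duality sends R -> 1/R while swapping n <-> w; since both charges are
# integers in the quantum theory, the spectrum is exactly invariant.

def spectrum(R, qmax=3, levelmax=3):
    masses = set()
    for n, w, N, Nt in itertools.product(range(-qmax, qmax + 1),
                                         range(-qmax, qmax + 1),
                                         range(levelmax + 1),
                                         range(levelmax + 1)):
        if N - Nt == n * w:                        # level matching
            m2 = (n / R) ** 2 + (w * R) ** 2 + 2 * (N + Nt - 2)
            masses.add(round(m2, 10))              # round away float noise
    return masses

R = 1.7
print(spectrum(R) == spectrum(1.0 / R))            # True: T-dual radii agree
```

If the momentum were continuous while the winding stayed integer, as in the classical theory, no such matching of the two spectra would be possible.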

**Enhanced gauge symmetry on a self-dual radius depends on quantum mechanics**

The fancier features of string theory you look at, the more obviously unavoidable quantum mechanics becomes. One of the funny things of bosonic string theory compactified on a circle is that the generic gauge group \(U(1)\times U(1)\) gets enhanced to \(SU(2)\times SU(2)\) on the self-dual radius. Even though you start with a theory where everything is "Abelian" or "linear" in some simple sense – a string propagating on a circle – you discover that the non-Abelian \(SU(2)\) automatically arises if the radius obeys \(R = \alpha' / R\), if it is self-dual.

I have discussed the enhanced symmetries in string theory some years ago but let's shorten the story. Why does the group get enhanced?

First, one must understand that for a generic radius, the unbroken gauge group is \(U(1)\times U(1)\). One gets two \(U(1)\) gauge groups because the gauge fields are basically \(g_{\mu,25}\) and \(B_{\mu,25}\). They arise as "last columns" of a symmetric tensor, the metric tensor, and an antisymmetric tensor, the \(B\)-field. The first (metric tensor-based) \(U(1)\) group is the standard Kaluza-Klein gauge group and it is \(U(1)\) because \(U(1)\) is the isometry group of the compactification manifold. There is another gauge group arising from the gauge field that you get from a pre-existing 2-index gauge field \(B_{\mu\nu}\), a two-form, if you set the second index equal to the compactified direction.

These two gauge fields are permuted by the T-duality symmetry (just like the momentum and winding are permuted, because the momentum and winding are really the charges under these two symmetries).

OK, how do you get the \(SU(2)\)? The funny thing is that the \(U(1)\) gauge bosons are associated, via the operator-state correspondence mentioned above, with the operators on the world sheet\[

(\partial_z X^{25}, \quad \partial_{\bar z} X^{25}).

\] One of them is holomorphic, the other one is anti-holomorphic, we say. T-duality maps these operators to\[

(\partial_z X^{25}, \quad -\partial_{\bar z} X^{25}).

\] so it may be understood as a mirror reflection of the \(X^{25}\) coordinate of the spacetime except that it only acts on the anti-holomorphic (or right-moving) oscillations propagating along the string. That's great. You have something like a discrete T-duality which is just some sign flip or, equivalently, the exchange of the momentum and winding. How do you get a continuous \(SU(2)\), I ask again?

The funny thing is that at the self-dual radius, there are not just two operators like that but six. The holomorphic one, \(\partial_z X^{25}\), becomes just one component of a three-dimensional vector\[

(\partial_z X_L^{25},\,\, :\exp(+i X_L^{25}):,\,\, :\exp(-i X_L^{25}):)

\] Classically, the first operator looks nothing like the last two. If you have a holomorphic function \(X_L^{25}(z)\) of some coordinate \(z\), its \(z\)-derivative seems to be something completely different than its exponential, right? But quantum mechanically, they are almost the same thing! Why is it so?

If you want to describe all physically meaningful properties of three operators like that, the algebra of all their commutators encodes all the information. Just like string theory has the state-operator correspondence that allows you to translate between states and operators, it also has the OPEs – operator-product expansions – that allow you to extract the commutators of operators from the singularities in a decomposition of their products etc.

And it just happens that the singularities in the OPEs of any such operators are compatible with the statement that these three operators are components of a triplet that transforms under an \(SU(2)\) symmetry. So you get one \(SU(2)\) from the left-moving, \(z\)-dependent part \(X_L^{25}\), and one \(SU(2)\) from the \(\bar z\)-dependent \(X_R^{25}\).
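One can see the enhancement numerically without any conformal field theory (again my own toy check in \(\alpha'=1\) units with the same standard mass formula): exactly at the self-dual radius, extra massless states with nonzero momentum or winding appear – they supply the charged gauge bosons of \(SU(2)\times SU(2)\) and their charged scalar partners – while at a generic radius there are none.

```python
# Closed bosonic string on a circle, alpha' = 1 units:
#   M^2 = (n/R)^2 + (w R)^2 + 2 (N + Nt - 2),  level matching N - Nt = n*w.
# Count massless states carrying nonzero momentum and/or winding:
def massless_charged_states(R, qmax=2, levelmax=2):
    count = 0
    for n in range(-qmax, qmax + 1):
        for w in range(-qmax, qmax + 1):
            if (n, w) == (0, 0):
                continue                       # skip the neutral sector
            for N in range(levelmax + 1):
                for Nt in range(levelmax + 1):
                    if N - Nt != n * w:        # level matching
                        continue
                    m2 = (n / R) ** 2 + (w * R) ** 2 + 2 * (N + Nt - 2)
                    if abs(m2) < 1e-9:
                        count += 1
    return count

print(massless_charged_states(1.0))   # 8: extra massless states at R = 1
print(massless_charged_states(1.3))   # 0: none at a generic radius
```

The eight states at \(R=1\) are \((n,w)=(\pm1,\pm1)\) with one unit of oscillator excitation plus \((\pm2,0)\) and \((0,\pm2)\) with none; this toy counting ignores polarizations, but the jump from zero to a finite number exactly at the self-dual point is the fingerprint of the enhanced symmetry.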

All other non-Abelian and sporadic or otherwise cool groups that you get from perturbative string theory arise similarly, and are therefore similarly dependent on quantum mechanics. For example, the monster group in the string theory model explaining the monstrous moonshine only exists because of a similar "equivalence" that is only true at the quantum level.

**Spacetime dimension and sizes of groups are only predictable in quantum mechanics**

String theory is so predictive that it forces you to choose a preferred dimension of the spacetime. The simple bosonic string theory has \(D=26\) and superstring theory, the more realistic and fancy one, similarly demands \(D=10\). This contrasts with the relatively unconstrained, "anything goes" theories of the pre-stringy era.

Polchinski's book contains "seven" ways to calculate the critical dimension, according to the counting by the author. But here, what is important is that all of them depend on a cancellation of some quantum anomalies.

In the covariant quantization, \(D=26\) basically arises as the number of bosonic fields \(X^\mu\) whose conformal anomaly cancels that from the \(bc\) ghost system. The latter has \(c=1-3k^2=-26\) because the relevant constant is \(k=3\): the central charge describes a coefficient in front of a standard term in the conformal anomaly. Well, you need to add \(c=+26\) – from 26 bosons – to get zero. And you need to get zero for the conformal symmetry to hold, even in the quantum theory. And the conformal symmetry is needed for the state-operator correspondence and other things – it is a basic axiom of covariant perturbative string theory.

Alternatively, you may define string theory in the light-cone gauge. The full Lorentz symmetry won't be obvious anymore. You will find out that some commutators\[

[j^{i-},j^{j-}] = \dots

\] in the light-cone coordinates behave almost correctly. Except that when you substitute the "bilinear in stringy oscillators" expressions for the generators \(j^{i-}\), the calculation of the commutator will contain not only the "single contractions" – this part of the calculation is basically copying a classical calculation – but also the "double contraction" terms. And those don't trivially cancel. You will find out that they only cancel for 24 transverse coordinates. Needless to say, the "double contraction" is something invisible at the level of the Poisson brackets. You really need to talk about the "full commutators" – and therefore full quantum mechanics, not just some Poisson-bracket-like approximation – to get these terms at all.

Again, the correct spacetime dimension \(D=26\) or \(D=10\) arises from the cancellation of some quantum anomaly – some new quantum mechanical effects that have the potential of spoiling some symmetries that "trivially" hold in the classical limit that may have inspired you. The prediction couldn't be there if you ignored quantum mechanics.
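Both roads to \(D=26\) reduce to small arithmetic once the quantum inputs – the ghost central charge and the zeta-regularized zero-point sum – are accepted. A two-line check of each (my own illustration):

```python
# 1) Covariant route: the bc ghosts carry central charge c = 1 - 3k^2
#    with k = 3, i.e. c = -26, so D free bosons (c = +D) cancel it
#    precisely when D = 26.
k = 3
c_ghost = 1 - 3 * k ** 2
D = -c_ghost
print(D)      # 26

# 2) Light-cone route: each transverse boson contributes a zero-point
#    energy (1/2) * (1 + 2 + 3 + ...) -> (1/2) * zeta(-1) = -1/24, and
#    the Lorentz algebra closes only if the total (D-2)/24 equals 1.
zeta_minus_1 = -1.0 / 12.0        # zeta-regularized sum 1 + 2 + 3 + ...
a_per_dim = -0.5 * zeta_minus_1   # = 1/24 per transverse direction
D_lc = int(round(1 / a_per_dim)) + 2
print(D_lc)   # 26
```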

**The field equations in the spacetime result from an anomaly cancellation, too.**

If you order perturbative strings to propagate on a curved spacetime background, you may derive Einstein's equations (plus stringy short-distance corrections), which in the vacuum simply demand the Ricci-flatness \[

R_{\mu\nu} = 0.

\] A century ago, Einstein had to discover that this is what the geometry has to obey in the vacuum. It's an elegant equation and, among similarly simple ones, it's basically the unique one that is diffeomorphism-symmetric. And you may derive it from the extremization of the Einstein-Hilbert action, too.

However, string theory is capable of doing all this guesswork for you. In other words, string theory is capable of replacing Einstein's 10 years of work. You may derive the Ricci-flatness from the cancellation of the conformal anomaly, too. You need the world sheet theory to stay invariant under the scaling of the world sheet coordinates, even at the quantum level.

But the world sheet theory depends on the functions\[

g_{\mu\nu} (X^\lambda(\sigma,\tau))

\] and for every point in the spacetime given by the numbers \(\{X^\lambda\}\), you have a whole symmetric tensor \(g_{\mu\nu}\) of parameters that behave like "coupling constants" in the theory. But in a quantum field theory, and the world sheet theory is a quantum field theory, every coupling constant generically "runs". Its value depends on the chosen energy scale \(E\). And the derivative with respect to the scale\[

\frac{dg_{\mu\nu}(X^\lambda)}{d (\ln E)} = \beta_{\mu\nu}(X^\lambda)

\] is known as the beta-function. Here you have as many beta-functions as you have the numbers that determine the metric tensor at each spacetime point. The beta-functions have to vanish for the theory to remain scale-invariant on the world sheet – and you need it. And you will find out that\[

\beta_{\mu\nu}(X^\lambda) = R_{\mu\nu} (X^\lambda).

\] The beta-function is nothing else than the Ricci tensor. Well, it could be the Einstein tensor and there could be extra constants and corrections. But I want to please you with the cool stuff; I hope that you don't doubt that if you want to work with these things, you have to take care of many details that make the exact answers deviate from the most elegant, naive Ansatz with the given amount of beauty.

So Einstein's equations result from the cancellation of the conformal anomaly as well. The very requirement that the theory remains consistent at the quantum level – and the preservation of gauge symmetries is indeed needed for the consistency – is enough to derive the equations for the metric tensor in the spacetime.

Needless to say, this rule generalizes to all the fields that you may get from particular vibrating strings in the spacetime. Dirac, Weyl, Maxwell, Yang-Mills, Proca, Higgs, and other equations of motions for the fields in the spacetime (including all their desirable interactions) may be derived from the scale-invariance of the world sheet theory, too.

In this sense, the logical consistency of the quantum mechanical theory dictates not only the right spacetime dimension and other numbers of degrees of freedom, sizes of groups such as \(E_8\times E_8\) or \(SO(32)\) for the heterotic string (the rank must be \(16\) and the dimension has to be \(496\), among other conditions), but the consistency also determines all the dynamical equations of motion.
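The heterotic numerology is easy to verify by hand – both admissible gauge groups really do have rank \(16\) and dimension \(496\):

```python
# E8 x E8: two copies of the 248-dimensional, rank-8 exceptional group.
dim_E8 = 248
rank_E8xE8 = 8 + 8

# SO(32): dimension n(n-1)/2 and rank n/2 for SO(n) with n = 32.
n = 32
dim_SO32 = n * (n - 1) // 2
rank_SO32 = n // 2

print(2 * dim_E8, dim_SO32, rank_E8xE8, rank_SO32)  # 496 496 16 16
```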

**S-duality, T-duality, mirror symmetry, AdS/CFT and holography, ER-EPR, and so on**

And I could continue. S-duality – the symmetry of the theories under the \(g\to 1/g\) maps of the coupling constant – also depends on quantum mechanics. It's absolutely obvious that no S-duality could ever work in a classical world, not even in quantum field theory. Among other things, S-dualities exchange the elementary electrically charged particles such as electrons with the magnetically charged ones, the magnetic monopoles. But classically, those are very different: electrons are point-like objects with an "intrinsic" charge while the magnetic monopoles are solitonic solutions where the charge is spread over the solution and quantized because of topological considerations.

However, quantum mechanically, they may be related by a permutation symmetry.

Mirror symmetry is an application of T-duality in the Calabi-Yau context, so everything I said about the quantum mechanical dependence of T-duality obviously holds for mirror symmetry, too.

Holography in quantum gravity – as seen in AdS/CFT and elsewhere – obviously depends on quantum mechanics, too. The extra holographic dimension morally arises from the "energy scale" in the boundary theory. But the AdS space has an isometry relating all these dimensions. Classically, an "energy scale" cannot be indistinguishable from a "spacetime coordinate". Classically, the energy and momentum *live* in a spacetime; they have different roles.

Quantum mechanically, there may be such symmetries between energy/momentum and position/time. The harmonic oscillator is a basic template for such a symmetry: \(x\) and \(p\) may be rotated into each other.
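This \(x\)-\(p\) rotation symmetry survives even in a truncated matrix representation of the oscillator. A small numerical check (my own illustration): the Hamiltonian and the canonical commutator are both exactly invariant under the phase-space rotation.

```python
import numpy as np

# Truncated harmonic-oscillator matrices: x and p in a 20-level basis.
Nlev = 20
a = np.diag(np.sqrt(np.arange(1, Nlev)), k=1)     # annihilation operator
x = (a + a.T) / np.sqrt(2)                        # x = (a + a^dag)/sqrt(2)
p = -1j * (a - a.T) / np.sqrt(2)                  # p = -i(a - a^dag)/sqrt(2)

# The phase-space rotation mixing position with momentum:
theta = 0.7
x2 = np.cos(theta) * x + np.sin(theta) * p
p2 = -np.sin(theta) * x + np.cos(theta) * p

H = (x @ x + p @ p) / 2
H2 = (x2 @ x2 + p2 @ p2) / 2
print(np.max(np.abs(H - H2)))    # ~0: H is invariant under the rotation
# The canonical commutator is preserved as well:
print(np.max(np.abs((x2 @ p2 - p2 @ x2) - (x @ p - p @ x))))  # ~0
```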

ER-EPR talks about the quantum entanglement so it's obvious that it would be impossible in a classical world.

I could make the same point about basically *anything* that is attractive about string theory – and even about comparable but less intriguing features of quantum field theories. All these things depend on quantum mechanics. They would be impossible in a classical world.

**Summary: quantum mechanics erases qualitative differences, creates new symmetries, merges concepts, magnifies new degrees of freedom to make singularities harmless.**

Quantum mechanics does a lot of things. You have seen many examples – and there are many others – that quantum mechanics generally allows you to find symmetries between objects that look classically *totally different*. Like the momentum and winding of a string. Or the derivative of \(X\) with the exponential of \(X\) – at the self-dual radius. Or the states and operators. Or elementary particles and composite objects such as magnetic monopoles. And so on, and so on.

Sometimes, the spectrum of a quantity becomes discrete in order for the map or symmetry to be possible.

Sometimes, just the qualitative differences are erased. Sometimes, all the differences are erased and quantum mechanics enables the emergence of exact new symmetries that would be totally crazy within classical physics. Sometimes, these symmetries are combined with some naive ones that already exist classically. \(U(1)\times U(1)\) may be extended to \(SU(2)\times SU(2)\) quantum mechanically. Similarly, \(SO(16)\times SO(16)\) in the fermionic definition or \(U(1)^{16}\) in the bosonic formulation of the heterotic string gets extended to \(E_8\times E_8\). A much smaller, classically visible discrete group gets extended to the monster group in the full quantum string theory explaining the monstrous moonshine.

Whenever a classical theory would be getting dangerously singular, quantum mechanics changes the situation so that either the dangerous states disappear or they're supplemented with new degrees of freedom or another cure. In many typical cases, the "potentially dangerous regime" of a theory – where you could be afraid of an inconsistency – is protected and consistent because quantum mechanics makes all the modifications and additions needed for that regime to be *exactly equivalent* to another theory that you have known – or whose classical limit you have encountered. Quantum mechanics is what allows all the dualities and the continuous connection of all seemingly inequivalent vacua of string/M-theory into one master theory.

All the constraints – on the number of dimensions, sizes of gauge groups, and even equations of motion for the fields in spacetime – arise from the quantum mechanical consistency, e.g. from the anomaly cancellation conditions.

When you become familiar with all these amazing effects of string theory and others, you are *forced* to start to think quantum mechanically. You will understand that the interesting theory – with the uniqueness, predictive power, consistency, symmetries, unification of concepts – is unavoidably just the quantum mechanical one. There is really no cool classical theory. The classical theories that you encounter anywhere in string theory are the *classical limits* of the full theory.

You will unavoidably get rid of the bad habit of thinking of a classical theory as the "primary one", while the quantum mechanical theory is often considered "derived" from it by the beginners (including permanent beginners). Within string/M-theory, it's spectacularly clear that the right relationship is going in the opposite direction. The quantum mechanical theory – with its quantum rules, objects, statements, and relationships – is the primary one while classical theories are just approximations and caricatures that lack the full glory of the quantum mechanical theory.

by Luboš Motl (noreply@blogger.com) at April 19, 2017 06:39 AM

### John Baez - Azimuth

Aaron Goodman of the Stanford Complexity Group invited me to give a talk there on Thursday April 20th. If you’re nearby—like in Silicon Valley—please drop by! It will be in Clark S361 at 4:20 pm.

Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use ‘relative information’: the information of one probability distribution *relative to another*, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.

It turns out that free energy—energy in forms that can actually be *used*, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.
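As a tiny illustration of the replicator/relative-information link (my own toy example, not taken from the slides): for the replicator equation with fixed fitnesses, the relative information \(D(q\|p)\) of the fittest-type distribution \(q\) relative to the population \(p\) can only decrease over time – it acts as a Lyapunov function, so the population "learns" \(q\).

```python
import math

# Replicator equation with fixed fitnesses f_i:
#   dp_i/dt = p_i * (f_i - fbar),   fbar = sum_i p_i f_i.
# With q the point distribution on the fittest type, the relative
# information D(q || p) = -log p_fittest can only decrease.
f = [1.0, 1.3, 2.0]              # toy fitnesses; type 2 is fittest
p = [0.5, 0.3, 0.2]
dt, best = 0.01, 2

def step(p):
    fbar = sum(pi * fi for pi, fi in zip(p, f))
    p = [pi + dt * pi * (fi - fbar) for pi, fi in zip(p, f)]
    s = sum(p)                   # renormalize against floating-point drift
    return [pi / s for pi in p]

D = [-math.log(p[best])]
for _ in range(2000):            # integrate up to t = 20 (forward Euler)
    p = step(p)
    D.append(-math.log(p[best]))

print(all(d2 <= d1 + 1e-12 for d1, d2 in zip(D, D[1:])))  # True: monotone
print(D[0], D[-1])               # shrinks toward 0 as p -> q
```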

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

• Stanford complexity group, YouTube.

You can already see lots of cool talks at this location!

## April 18, 2017

### Symmetrybreaking - Fermilab/SLAC

A new result from the LHCb experiment could be an early indicator of an inconsistency in the Standard Model.

The subatomic universe is an intricate mosaic of particles and forces. The Standard Model of particle physics is a time-tested instruction manual that precisely predicts how particles and forces behave. But it’s incomplete, ignoring phenomena such as gravity and dark matter.

Today the LHCb experiment at the CERN European research center released a result that could be an early indication of new, undiscovered physics beyond the Standard Model.

However, more data is needed before LHCb scientists can definitively claim they’ve found a crack in the world’s most robust roadmap to the subatomic universe.

“In particle physics, you can’t just snap your fingers and claim a discovery,” says Marie-Hélène Schune, a researcher on the LHCb experiment from Le Centre National de la Recherche Scientifique in Orsay, France. “It’s not magic. It’s long, hard work and you must be obstinate when facing problems. We always question everything and never take anything for granted.”

The LHCb experiment records and analyzes the decay patterns of rare hadrons—particles made of quarks—that are produced in the Large Hadron Collider’s energetic proton-proton collisions. By comparing the experimental results to the Standard Model’s predictions, scientists can search for discrepancies. Significant deviations between the theory and experimental results could be an early indication of an undiscovered particle or force at play.

This new result looks at hadrons containing a bottom quark as they transform into hadrons containing a strange quark. This rare decay pattern can generate either two electrons or two muons as byproducts. Electrons and muons are different types or “flavors” of particles called leptons. The Standard Model predicts that the production of electrons and muons should be equally favorable—essentially a subatomic coin toss every time this transformation occurs.

“As far as the Standard Model is concerned, electrons, muons and tau leptons are completely interchangeable,” Schune says. “It’s completely blind to lepton flavors; only the large mass difference of the tau lepton plays a role in certain processes. This 50-50 prediction for muons and electrons is very precise.”

But instead of finding a 50-50 ratio between muons and electrons, the latest results from the LHCb experiment show that it’s more like 40 muons generated for every 60 electrons.

“If this initial result becomes stronger with more data, it could mean that there are other, invisible particles involved in this process that see flavor,” Schune says. “We’ll leave it up to the theorists’ imaginations to figure out what’s going on.”

However, just like any coin-toss, it’s difficult to know if this discrepancy is the result of an unknown favoritism or the consequence of chance. To delineate between these two possibilities, scientists wait until they hit a certain statistical threshold before claiming a discovery, often 5 sigma.

“Five sigma is a measurement of statistical deviation and means there is only a 1-in-3.5-million chance that the Standard Model is correct and our result is just an unlucky statistical fluke,” Schune says. “That’s a pretty good indication that it’s not chance, but rather the first sightings of a new subatomic process.”

Currently, this new result is at approximately 2.5 standard deviations, which means there is about a 1-in-125 possibility that there’s no new physics at play and the experimenters are just the unfortunate victims of statistical fluctuation.
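The sigma-to-probability conversion quoted here is a one-line computation using the one-sided Gaussian tail (the article's 1-in-125 figure for 2.5 sigma presumably rounds or uses a slightly different convention; the 5-sigma figure matches):

```python
from math import erfc, sqrt

# One-sided probability that a Gaussian fluctuation reaches z sigmas:
#   p = (1/2) * erfc(z / sqrt(2))
def p_value(z):
    return 0.5 * erfc(z / sqrt(2))

print(1 / p_value(5.0))   # ~3.5 million: the 5-sigma discovery threshold
print(1 / p_value(2.5))   # ~160: same ballpark as the quoted 1-in-125
```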

This isn’t the first time that the LHCb experiment has seen unexpected behavior in related processes. Hassan Jawahery from the University of Maryland also works on the LHCb experiment and is studying another particle decay involving bottom quarks transforming into charm quarks. He and his colleagues are measuring the ratio of muons to tau leptons generated during this decay.

“Correcting for the large mass differences between muons and tau leptons, we’d expect to see about 25 taus produced for every 100 muons,” Jawahery says. “We measured a ratio of 34 taus for every 100 muons.”

On its own, this measurement is below the line of statistical significance needed to raise an eyebrow. However, two other experiments—the BaBar experiment at SLAC and the Belle experiment in Japan—also measured this process and saw something similar.

“We might be seeing the first hints of a new particle or force throwing its weight around during two independent subatomic processes,” Jawahery says. “It’s tantalizing, but as experimentalists we are still waiting for all these individual results to grow in significance before we get too excited.”

More data and improved experimental techniques will help the LHCb experiment and its counterparts narrow in on these processes and confirm if there really is something funny happening behind the scenes in the subatomic universe.

“Conceptually, these measurements are very simple,” Schune says. “But practically, they are very challenging to perform. These first results are all from data collected between 2011 and 2012 during Run 1 of the LHC. It will be intriguing to see if data from Run 2 shows the same thing.”

### ZapperZ - Physics and Physicists

Things get even weirder if one observer accelerates. Any observer traveling at a constant speed will measure the temperature of empty space as absolute zero. But an accelerated observer will find the vacuum hotter. At least that's what William Unruh, a theorist at the University of British Columbia in Vancouver, Canada, argued in 1976. To a nonaccelerating observer, the vacuum is devoid of particles—so that if he holds a particle detector it will register no clicks. In contrast, Unruh argued, an accelerated observer will detect a fog of photons and other particles, as the number of quantum particles flitting about depends on an observer's motion. The greater the acceleration, the higher the temperature of that fog or "bath."

So obviously, this is a very difficult effect to detect, which explains why we haven't had any evidence for it since it was first proposed in 1976. That is why this new paper is causing heads to turn, because the authors are proposing a test using our existing technology. You may read the two links above to see what they are proposing using our current particle accelerators.

But what is a bit amusing is that there are already skeptics about this methodology of testing, but each camp is arguing it for different reasons.

Skeptics say the experiment won’t work, but they disagree on why. If the situation is properly analyzed, there is no fog of photons in the accelerated frame, says Detlev Buchholz, a theorist at the University of Göttingen in Germany. "The Unruh gas does not exist!" he says. Nevertheless, Buchholz says, the vacuum will appear hot to an accelerated observer, but because of a kind of friction that arises through the interplay of quantum uncertainty and acceleration. So, the experiment might show the desired effect, but that wouldn't reveal the supposed fog of photons in the accelerating frame.

In contrast, Robert O'Connell, a theorist at Louisiana State University in Baton Rouge, insists that in the accelerated frame there is a fog of photons. However, he contends, it is not possible to draw energy out of that fog to produce extra radiation in the lab frame. O'Connell cites a basic bit of physics called the fluctuation-dissipation theorem, which states that a particle interacting with a heat bath will pump as much energy into the bath as it pulls out. Thus, he argues, Unruh's fog of photons exists, but the experiment should not produce the supposed signal anyway.

If there's one thing that experimenters like, it is to prove theorists wrong! :) So whichever way an experiment on this turns out, it is bound to disprove one group of theorists or another. It's a win-win situation! :)

Zz.

### Tommaso Dorigo - Scientificblogging

### Symmetrybreaking - Fermilab/SLAC

While driven by the desire to pursue curiosity, fundamental investigations are the crucial first step to innovation.

When scientists announced their discovery of gravitational waves in 2016, it made headlines all over the world. The existence of these invisible ripples in space-time had finally been confirmed.

It was a momentous feat in basic research, the curiosity-driven search for fundamental knowledge about the universe and the elements within it. Basic (or “blue-sky”) research is distinct from applied research, which is targeted toward developing or advancing technologies to solve a specific problem or to create a new product.

But the two are deeply connected.

“Applied research is exploring the continents you know, whereas basic research is setting off in a ship and seeing where you get,” says Frank Wilczek, a theoretical physicist at MIT. “You might just have to return, or sink at sea, or you might discover a whole new continent. So it’s much more long-term, it’s riskier and it doesn’t always pay dividends.”

When it does, he says, it opens up entirely new possibilities available only to those who set sail into uncharted waters.

Most of physics—especially particle physics—falls under the umbrella of basic research. In particle physics “we’re asking some of the deepest questions that are accessible by observations about the nature of matter and energy—and ultimately about space and time also, because all of these things are tied together,” says Jim Gates, a theoretical physicist at the University of Maryland.

Physicists seek answers to questions about the early universe, the nature of dark energy, and theoretical phenomena, such as supersymmetry, string theory and extra dimensions.

Perhaps one of the most well-known basic researchers was the physicist who predicted the existence of gravitational waves: Albert Einstein.

Einstein devoted his life to elucidating elementary concepts such as the nature of gravity and the relationship between space and time. According to Wilczek, “it was clear that what drove what he did was not the desire to produce a product, or anything so worldly, but to resolve puzzles and perceived imperfections in our understanding.”

In addition to advancing our understanding of the world, Einstein’s work led to important technological developments. The Global Positioning System, for instance, would not have been possible without the theories of special and general relativity. A GPS receiver, like the one in your smart phone, determines its location based on timed signals it receives from the nearest four of a collection of GPS satellites orbiting Earth. Because the satellites are moving so quickly while also orbiting at a great distance from the gravitational pull of Earth, they experience time differently from the receiver on Earth’s surface. Thanks to Einstein’s theories, engineers can calculate and correct for this difference.
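The size of that correction is easy to estimate to first order. Using rough, illustrative numbers for the GPS constellation (orbital radius about 26,600 km), special-relativistic time dilation slows the satellite clocks by roughly 7 microseconds per day, while the weaker gravitational potential at altitude speeds them up by roughly 45 microseconds per day, for a net drift of about 38 microseconds per day; left uncorrected, that would accumulate into kilometers of position error daily. A minimal back-of-the-envelope sketch (round numbers, not official GPS interface values):

```python
import math

# First-order relativistic clock drift for a GPS satellite.
# Illustrative round numbers only.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m
C = 2.998e8            # speed of light, m/s
R_ORBIT = 2.66e7       # GPS orbital radius, m
SECONDS_PER_DAY = 86400.0

# Orbital speed for a circular orbit: v = sqrt(GM/r)
v = math.sqrt(G * M_EARTH / R_ORBIT)

# Special relativity: a moving clock runs slow by ~v^2/(2c^2)
sr_drift = -v**2 / (2 * C**2) * SECONDS_PER_DAY

# General relativity: a clock higher in the potential runs fast
gr_drift = (G * M_EARTH / C**2) * (1/R_EARTH - 1/R_ORBIT) * SECONDS_PER_DAY

net = sr_drift + gr_drift
print(f"SR: {sr_drift*1e6:+.1f} us/day, GR: {gr_drift*1e6:+.1f} us/day, "
      f"net: {net*1e6:+.1f} us/day")  # net comes out near +38 us/day
```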

There’s a long history of serendipitous output from basic research. For example, in 1989 at CERN European research center, computer scientist Tim Berners-Lee was looking for a way to facilitate information-sharing between researchers. He invented the World Wide Web.

While investigating the properties of nuclei within a magnetic field at Columbia University in the 1930s, physicist Isidor Isaac Rabi discovered the basic principles of nuclear magnetic resonance. These principles eventually formed the basis of Magnetic Resonance Imaging, MRI.

It would be another 50 years before MRI machines were widely used—again with the help of basic research. MRI machines require big, superconducting magnets to function. Luckily, around the same time that Rabi’s discovery was being investigated for medical imaging, scientists and engineers at the US Department of Energy’s Fermi National Accelerator Laboratory began building the Tevatron particle accelerator to enable research into the fundamental nature of particles, a task that called for huge amounts of superconducting wire.

“We were the first large, demanding customer for superconducting cable,” says Chris Quigg, a theoretical physicist at Fermilab. “We were spending a lot of money to get the performance that we needed.” The Tevatron created a commercial market for superconducting wire, making it practical for companies to build MRI machines on a large scale for places like hospitals.

Doctors now use MRI to produce detailed images of the insides of the human body, helpful tools in diagnosing and treating a variety of medical complications, including cancer, heart problems, and diseases in organs such as the liver, pancreas and bowels.

Another tool of particle physics, the particle detector, has also been adopted for uses in various industries. In the 1980s, for example, particle physicists developed technology precise enough to detect a single photon. Today doctors use this same technology to detect tumors, heart disease and central nervous system disorders. They do this by conducting positron emission tomography scans, or PET scans. Before undergoing a PET scan, the patient is given a dye containing radioactive tracers, either through an injection or by ingesting or inhaling. The tracers emit antimatter particles, which interact with matter particles and release photons, which are picked up by the PET scanner to create a picture detailed enough to reveal problems at the cellular level.
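The physics behind the scanner fits in one line: each positron-electron annihilation converts the rest mass of the pair into two back-to-back photons of 511 keV each (the electron rest energy, m·c²), and it is the coincident detection of those photon pairs that lets the scanner draw a line through the annihilation point. A quick check of that number:

```python
# Energy of each annihilation photon in a PET scanner:
# the electron (or positron) rest energy E = m_e * c^2.
M_E = 9.109e-31   # electron mass, kg
C = 2.998e8       # speed of light, m/s
KEV = 1.602e-16   # joules per keV

e_kev = M_E * C**2 / KEV
print(f"each annihilation photon carries ~{e_kev:.0f} keV")  # ~511 keV
```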

As Gates says, “a lot of the devices and concepts that you see in science fiction stories will never come into existence unless we pursue the concept of basic research. You’re not going to be able to construct starships unless you do the research now in order to build these in the future.”

It’s unclear what applications could come of humanity’s new knowledge of the existence of gravitational waves.

It could be enough that we have learned something new about how our universe works. But if history gives us any indication, continued exploration will also provide additional benefits along the way.

### Lubos Motl - string vacua and pheno

LHCb: RK*0 = 0.69+0.12-0.08 in 1-6 GeV q^2 bin; new 2.5 sigma deviation from Standard Model adding fuel to B-meson anomalies.

— Jester (@Resonaances) April 18, 2017

Various physicists have mentioned a new announcement by the LHCb collaboration which is smaller than ATLAS and CMS but at least equally assertive.

Another physicist has embedded the key graph, where you should notice that the black crosses sit well below the dotted line where they're predicted to sit:

New result from the @LHCbExperiment showing tantalising hints of lepton non-universality in rare B meson decays https://t.co/jhWnzi0IU6 pic.twitter.com/pZ15KFlFvj

— Greig Cowan (@GreigCowan) April 18, 2017

and we were told about the LHCb PowerPoint presentation from which this graph was taken.

To make the story short, some ratio describing the decays of B-mesons that should be one according to the Standard Model if the electron, muon, and tau are equally behaved – except for their differing masses which are rather irrelevant here – ends up being \[

\Large {\mathcal R}_{K^{*0}} = 0.69^{+0.12}_{-0.08}

\] especially in the interval of momentum transfer \(q^2 \in (1,6)\GeV^2\).

There are some similar deviations at higher values of \(q^2\); it's always about 2.2–2.5 standard deviations below the Standard Model. Sadly, it seems that neither BaBar nor Belle saw these deficits: their mean values are slightly *greater* than one, although their error margins were greater than that of the LHCb collaboration. On the other hand, the deficit seems rather compatible with the LHCb's recent announcements based on a (hopefully) disjoint set of decays.
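A naive back-of-the-envelope number: treating the upper error bar as Gaussian, the gap between the measured central value and the Standard Model expectation of one corresponds to roughly 2.6 sigma, in the same ballpark as the 2.2–2.5 sigma the collaboration quotes from the proper (non-Gaussian) likelihood:

```python
# Naive Gaussian significance of R_K*0 = 0.69 +0.12/-0.08
# versus the Standard Model prediction of 1. The official fit
# quotes 2.2-2.5 sigma; this is only a sanity check.
sm_prediction = 1.0
measured = 0.69
sigma_up = 0.12  # the relevant (upper) error bar for a deficit

z = (sm_prediction - measured) / sigma_up
print(f"naive deviation: {z:.2f} sigma")  # about 2.6 sigma
```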

An obvious reaction is that the deviation in this low-energy range isn't too exciting, anyway, because

However, similar large discrepancy in low q^2 bins which is not expected from new physics https://t.co/aVMjDodu6M

— Jester (@Resonaances) April 18, 2017

Well, unless it's some new physics (new even for Jester) that affects this energy range. ;-)

I find this deviation rather small, and our survival of the 4-sigma excess at \(750\GeV\) should have made us a little bit more demanding when it comes to the significance level that is needed to make us aroused. But those who are interested in existing or potentially emerging experimental anomalies should be aware of this deviation, because the competition in this field is very limited.

by Luboš Motl (noreply@blogger.com) at April 18, 2017 10:07 AM

## April 17, 2017

### ZapperZ - Physics and Physicists

Quantum behavior is rarely seen at the macroscopic level because of the difficulty of maintaining coherence over substantial length and time scales. One of the ways one can extend such scales is by cooling things down to extremely low temperatures, so that decoherence due to thermal scattering is minimized.

So it is with great interest that I read this new paper on an atom interferometer realized with "warm" atomic vapor [1]! You also have access to the actual paper from that link.

While the sensitivity of this technique is, unsurprisingly, significantly lower than that of cold atoms, it has two major advantages:

> However, sensitivity is not the only parameter of relevance for applications, and the new scheme offers two important advantages over cold schemes. The first is that it can acquire data at a rate of 10 kHz, in contrast to the typical 1-Hz rate of cold-atom LPAIs. The second advantage is the broader range of accelerations that can be measured with the same setup. This vapor-cell sensor remains operational over an acceleration range of 88g, several times larger than the typical range of cold LPAIs.
>
> The large bandwidth and dynamic range of the instrument built by Biedermann and co-workers may enable applications like inertial navigation in highly vibrating environments, such as spacecraft or airplanes. What’s more, the new scheme, like all LPAIs, has an important advantage over devices like laser or electromechanical gyroscopes: it delivers acceleration measurements that are absolute, without requiring a reference signal. This opens new possibilities for drift-free inertial navigation devices that work even when signals provided by global satellite positioning systems are not available, such as in underwater navigation.

[1] Phys. Rev. Lett. **118**, 163601 (2017).
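The bandwidth/dynamic-range trade-off quoted above follows from the standard light-pulse atom interferometer response: the phase accumulated under a constant acceleration a is phi = k_eff · a · T², with k_eff the effective two-photon wavevector and T the interrogation time. Shortening T (as a warm-vapor sensor must, given fast thermal motion) cuts sensitivity quadratically but raises both the repetition rate and the largest acceleration that keeps the phase on scale. An illustrative sketch, with numbers of my own choosing (rubidium-like k_eff, arbitrary phase budget), not the paper's parameters:

```python
import math

# Light-pulse atom interferometer: phase = k_eff * a * T^2.
# Illustrative numbers only; k_eff is for two-photon Raman
# transitions on the 780 nm rubidium line.
k_eff = 2 * (2 * math.pi / 780e-9)  # effective wavevector, 1/m
phi_max = math.pi                    # phase excursion kept on scale, rad

def max_acceleration(T):
    """Largest acceleration keeping the phase below phi_max, m/s^2."""
    return phi_max / (k_eff * T**2)

# A 100x shorter interrogation time buys a 10^4 larger range.
for T in (10e-3, 100e-6):
    print(f"T = {T*1e3:6.2f} ms -> a_max ~ {max_acceleration(T):.3g} m/s^2")
```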

## April 15, 2017

### The n-Category Cafe

What is the value of the whole in terms of the values of the parts?

More specifically, given a finite set whose elements have assigned “values” $v_1, \ldots, v_n$ and assigned “sizes” $p_1, \ldots, p_n$ (normalized to sum to $1$), how can we assign a value $\sigma(\mathbf{p}, \mathbf{v})$ to the set in a coherent way?

This seems like a very general question. But in fact, just a few sensible requirements on the function $\sigma$ are enough to pin it down almost uniquely. And the answer turns out to be closely connected to existing mathematical concepts that you probably already know.

Let’s write

$$\Delta_n = \Bigl\{ (p_1, \ldots, p_n) \in \mathbb{R}^n : p_i \geq 0, \sum p_i = 1 \Bigr\}$$

for the set of probability distributions on $\{1, \ldots, n\}$. Assuming that our “values” are positive real numbers, we’re interested in sequences of functions

$$\Bigl( \sigma \colon \Delta_n \times (0, \infty)^n \to (0, \infty) \Bigr)_{n \geq 1}$$

that aggregate the values of the elements to give a value to the whole set. So, if the elements of the set have relative sizes $\mathbf{p} = (p_1, \ldots, p_n)$ and values $\mathbf{v} = (v_1, \ldots, v_n)$, then the value assigned to the whole set is $\sigma(\mathbf{p}, \mathbf{v})$.

Here are some properties that it would be reasonable for $\sigma$ to satisfy.

**Homogeneity** The idea is that whatever “value” means, the value of the set and the value of the elements should be measured in the same units. For instance, if the elements are valued in kilograms then the set should be valued in kilograms too. A switch from kilograms to grams would then multiply both values by 1000. So, in general, we ask that

$$\sigma(\mathbf{p}, c\mathbf{v}) = c \sigma(\mathbf{p}, \mathbf{v})$$

for all $\mathbf{p} \in \Delta_n$, $\mathbf{v} \in (0, \infty)^n$ and $c \in (0, \infty)$.

**Monotonicity** The values of the elements are supposed to make a *positive* contribution to the value of the whole, so we ask that if $v_i \leq v'_i$ for all $i$ then

$$\sigma(\mathbf{p}, \mathbf{v}) \leq \sigma(\mathbf{p}, \mathbf{v}')$$

for all $\mathbf{p} \in \Delta_n$.

**Replication** Suppose that our $n$ elements have the same size and the same value, $v$. Then the value of the whole set should be $n v$. This property says, among other things, that $\sigma$ isn’t an *average*: putting in more elements of value $v$ increases the value of the whole set!

If $\sigma$ is homogeneous, we might as well assume that $v = 1$, in which case the requirement is that

$$\sigma\bigl( (1/n, \ldots, 1/n), (1, \ldots, 1) \bigr) = n.$$

**Modularity** This one’s a basic logical axiom, best illustrated by an example.

Imagine that we’re very ambitious and wish to evaluate the entire planet — or at least, the part that’s land. And suppose we already know the values and relative sizes of every country.

We could, of course, simply put this data into $\sigma$ and get an answer immediately. But we could instead begin by evaluating each *continent*, and then compute the value of the planet using the values and sizes of the continents. If $\sigma$ is sensible, this should give the same answer.

The notation needed to express this formally is a bit heavy. Let $\mathbf{w} \in \Delta_n$; in our example, $n = 7$ (or however many continents there are) and $\mathbf{w} = (w_1, \ldots, w_7)$ encodes their relative sizes. For each $i = 1, \ldots, n$, let $\mathbf{p}^i \in \Delta_{k_i}$; in our example, $\mathbf{p}^i$ encodes the relative sizes of the countries on the $i$th continent. Then we get a probability distribution

$$\mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n) = (w_1 p^1_1, \ldots, w_1 p^1_{k_1}, \,\,\ldots,\,\, w_n p^n_1, \ldots, w_n p^n_{k_n}) \in \Delta_{k_1 + \cdots + k_n},$$

which in our example encodes the relative sizes of all the countries on the planet. (Incidentally, this composition makes $(\Delta_n)$ into an operad, a fact that we’ve discussed many times before on this blog.) Also let

$$\mathbf{v}^1 = (v^1_1, \ldots, v^1_{k_1}) \in (0, \infty)^{k_1}, \,\,\ldots,\,\, \mathbf{v}^n = (v^n_1, \ldots, v^n_{k_n}) \in (0, \infty)^{k_n}.$$

In the example, $v^i_j$ is the value of the $j$th country on the $i$th continent. Then the value of the $i$th continent is $\sigma(\mathbf{p}^i, \mathbf{v}^i)$, so the axiom is that

$$\sigma \bigl( \mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n), (v^1_1, \ldots, v^1_{k_1}, \ldots, v^n_1, \ldots, v^n_{k_n}) \bigr) = \sigma \Bigl( \mathbf{w}, \bigl( \sigma(\mathbf{p}^1, \mathbf{v}^1), \ldots, \sigma(\mathbf{p}^n, \mathbf{v}^n) \bigr) \Bigr).$$

The left-hand side is the value of the planet calculated in a single step, and the right-hand side is its value when calculated in two steps, with continents as the intermediate stage.

**Symmetry** It shouldn’t matter what order we list the elements in. So it’s natural to ask that

$$\sigma(\mathbf{p}, \mathbf{v}) = \sigma(\mathbf{p} \tau, \mathbf{v} \tau)$$

for any $\tau$ in the symmetric group $S_n$, where the right-hand side refers to the obvious $S_n$-actions.

**Absent elements** should count for nothing! In other words, if $p_1 = 0$ then we should have

$$\sigma\bigl( (p_1, \ldots, p_n), (v_1, \ldots, v_n) \bigr) = \sigma\bigl( (p_2, \ldots, p_n), (v_2, \ldots, v_n) \bigr).$$

This isn’t *quite* trivial. I haven’t yet given you any examples of the kind of function that $\sigma$ might be, but perhaps you already have in mind a simple one like this:

$$\sigma(\mathbf{p}, \mathbf{v}) = v_1 + \cdots + v_n.$$

In words, the value of the whole is simply the sum of the values of the parts, regardless of their sizes. But if $\sigma$ is to have the “absent elements” property, this won’t do. (Intuitively, if $p_i = 0$ then we shouldn’t count $v_i$ in the sum, because the $i$th element isn’t actually there.) So we’d better modify this example slightly, instead taking

$$\sigma(\mathbf{p}, \mathbf{v}) = \sum_{i \,:\, p_i > 0} v_i.$$

This function (or rather, sequence of functions) *does* have the “absent elements” property.
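In code, this modified example is a one-liner; the guard on $p_i$ is exactly what gives it the “absent elements” property (a sketch of mine, not notation from the post):

```python
def sigma_0(p, v):
    """Sum the values of the elements that are actually present (p_i > 0)."""
    return sum(vi for pi, vi in zip(p, v) if pi > 0)

# An element of size 0 contributes nothing, however large its value:
assert sigma_0([0.0, 0.5, 0.5], [999.0, 1.0, 1.0]) == 2.0
assert sigma_0([0.5, 0.5], [1.0, 1.0]) == 2.0
```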

**Continuity in positive probabilities** Finally, we ask that for each $\mathbf{v} \in (0, \infty)^n$, the function $\sigma(-, \mathbf{v})$ is continuous on the interior of the simplex $\Delta_n$, that is, continuous over those probability distributions $\mathbf{p}$ such that $p_1, \ldots, p_n > 0$.

Why only over the *interior* of the simplex? Basically because of natural examples of $\sigma$ like the one just given, which is continuous on the interior of the simplex but not on the boundary. Generally, it’s sometimes useful to make a sharp, discontinuous distinction between the cases $p_i > 0$ (presence) and $p_i = 0$ (absence).

Arrow’s famous theorem states that a few apparently mild conditions on a voting system are, in fact, mutually contradictory. The mild conditions above are not mutually contradictory. In fact, there’s a one-parameter family $\sigma_q$ of functions each of which satisfies these conditions. For real $q \neq 1$, the definition is

$$\sigma_q(\mathbf{p}, \mathbf{v}) = \Bigl( \sum_{i \,:\, p_i > 0} p_i^q v_i^{1 - q} \Bigr)^{1/(1 - q)}.$$

For instance, $\sigma_0$ is the example of $\sigma$ given above.

The formula for $\sigma_q$ is obviously invalid at $q = 1$, but it converges to a limit as $q \to 1$, and we define $\sigma_1(\mathbf{p}, \mathbf{v})$ to be that limit. Explicitly, this gives

$$\sigma_1(\mathbf{p}, \mathbf{v}) = \prod_{i \,:\, p_i > 0} (v_i/p_i)^{p_i}.$$

In the same way, we can define $\sigma_{-\infty}$ and $\sigma_\infty$ as the appropriate limits:

$$\sigma_{-\infty}(\mathbf{p}, \mathbf{v}) = \max_{i \,:\, p_i > 0} v_i/p_i, \qquad \sigma_{\infty}(\mathbf{p}, \mathbf{v}) = \min_{i \,:\, p_i > 0} v_i/p_i.$$

And it’s easy to check that for each $q \in [-\infty, \infty]$, the function $\sigma_q$ satisfies all the natural conditions listed above.
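A direct implementation makes the family easy to experiment with. Below is a short sketch of my own (not code from the post) that computes $\sigma_q$ including the limiting cases $q = 1$ and $q = \pm\infty$, and uses it to spot-check the replication axiom numerically:

```python
import math

def sigma(q, p, v):
    """Aggregate value sigma_q(p, v); elements with p_i = 0 are ignored."""
    pairs = [(pi, vi) for pi, vi in zip(p, v) if pi > 0]
    if q == 1:
        # limit as q -> 1: product of (v_i / p_i)^(p_i)
        return math.prod((vi / pi) ** pi for pi, vi in pairs)
    if q == math.inf:
        return min(vi / pi for pi, vi in pairs)
    if q == -math.inf:
        return max(vi / pi for pi, vi in pairs)
    return sum(pi**q * vi**(1 - q) for pi, vi in pairs) ** (1 / (1 - q))

# Replication: four equal elements of value 2 aggregate to 4 * 2 = 8,
# whatever the order q.
for q in (-math.inf, 0, 0.5, 1, 2, math.inf):
    assert abs(sigma(q, [0.25] * 4, [2.0] * 4) - 8.0) < 1e-9
```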

These functions $\sigma_q$ might be unfamiliar to you, but they have some special cases that are quite well-explored. In particular:

Suppose you’re in a situation where the elements don’t have “sizes”. Then it would be natural to take $\mathbf{p}$ to be the uniform distribution $\mathbf{u}_n = (1/n, \ldots, 1/n)$. In that case, $$\sigma_q(\mathbf{u}_n, \mathbf{v}) = \mathrm{const} \cdot \bigl( \sum v_i^{1 - q} \bigr)^{1/(1 - q)},$$ where the constant is a certain power of $n$. When $q \leq 0$, this is exactly a constant times $\|\mathbf{v}\|_{1 - q}$, the $(1 - q)$-norm of the vector $\mathbf{v}$.

Suppose you’re in a situation where the elements don’t have “values”. Then it would be natural to take $\mathbf{v}$ to be $\mathbf{1} = (1, \ldots, 1)$. In that case, $$\sigma_q(\mathbf{p}, \mathbf{1}) = \bigl( \sum p_i^q \bigr)^{1/(1 - q)}.$$ This is the quantity that ecologists know as the Hill number of order $q$ and use as a measure of biological diversity. Information theorists know it as the exponential of the Rényi entropy of order $q$, the special case $q = 1$ being Shannon entropy. And actually, the *general* formula for $\sigma_q$ is very closely related to Rényi relative entropy (which Wikipedia calls Rényi divergence).
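Setting $\mathbf{v} = \mathbf{1}$ gives the Hill numbers directly. A quick sanity check one can run (my sketch, not the post's): a uniform distribution on $n$ species has diversity $n$ at every order $q$, while a skewed distribution scores lower and lower as $q$ grows and the dominant species counts for more:

```python
import math

def hill(q, p):
    """Hill number of order q for a distribution p (all p_i > 0)."""
    if q == 1:
        # limit q -> 1: exponential of the Shannon entropy
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi**q for pi in p) ** (1 / (1 - q))

uniform = [0.25] * 4
skewed = [0.85, 0.05, 0.05, 0.05]
for q in (0, 1, 2):
    print(f"q={q}: uniform {hill(q, uniform):.2f}, skewed {hill(q, skewed):.2f}")
```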

Anyway, the big — and as far as I know, new — result is:

**Theorem** The functions $\sigma_q$ are the only functions $\sigma$ with the seven properties above.

So although the properties above don’t seem that demanding, they actually force our notion of “aggregate value” to be given by one of the functions in the family $(\sigma_q)_{q \in [-\infty, \infty]}$. And although I didn’t even mention the notions of diversity or entropy in my justification of the axioms, they come out anyway as special cases.

I covered all this yesterday in the tenth and penultimate installment of the functional equations course that I’m giving. It’s written up on pages 38–42 of the notes so far. There you can also read how this relates to more realistic measures of biodiversity than the Hill numbers. Plus, you can see an outline of the (quite substantial) proof of the theorem above.

by leinster (Tom.Leinster@ed.ac.uk) at April 15, 2017 10:36 AM

## April 14, 2017

### Tommaso Dorigo - Scientificblogging

### Marco Frasca - The Gauge Connection

When a theory is too hard to solve, people try to consider lower-dimensional cases. This also happened for Yang-Mills theory. The four-dimensional case is notoriously difficult to manage due to the large coupling, while the three-dimensional case has been treated both theoretically and by lattice computations. In the latter case, the ground state energy of the theory is known very precisely (see here). So, a sound theoretical approach from first principles should be able to get that number at the same level of precision. We know that this is the situation for the Standard Model with respect to some experimental results, but a pure Yang-Mills theory has never been observed in nature, and we have to content ourselves with computer data. The reason is that in nature a Yang-Mills theory is only realized in interaction with other kinds of fields, be they scalars, fermions or vector fields.

In these days, I have received the news that my paper on three-dimensional Yang-Mills theory has been accepted for publication in the European Physical Journal C. Here is the table of ground-state values for SU(N) at different values of N, compared to lattice data:

| **N** | **Lattice** | **Theoretical** | **Error** |
|-------|-------------|-----------------|-----------|
| 2     | 4.7367(55)  | 4.744262871     | 0.16%     |
| 3     | 4.3683(73)  | 4.357883714     | 0.2%      |
| 4     | 4.242(9)    | 4.243397712     | 0.03%     |
| ∞     | 4.116(6)    | 4.108652166     | 0.18%     |

These results are strikingly good: the discrepancy is well below 1%. This in turn implies that the underlying theoretical derivation is sound. Besides, the approach also proves successful in four dimensions (see here). My hope is that this marks the beginning of an era of high-precision theoretical computations in strong interactions.
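The relative errors quoted in the table can be reproduced directly from the lattice central values and the theoretical numbers (the `table` layout below is mine):

```python
# (N, lattice central value, theoretical value) from the table above;
# lattice uncertainties in parentheses are dropped for this check.
table = [
    (2, 4.7367, 4.744262871),
    (3, 4.3683, 4.357883714),
    (4, 4.242, 4.243397712),
    ("inf", 4.116, 4.108652166),
]

for n, lattice, theory in table:
    rel_err = abs(theory - lattice) / lattice
    print(f"N={n}: {100 * rel_err:.2f}%")
# N=2: 0.16%
# N=3: 0.24%
# N=4: 0.03%
# N=inf: 0.18%
```

All four discrepancies come out below a quarter of a percent, consistent with the errors stated in the table.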

Andreas Athenodorou & Michael Teper (2017). SU(N) gauge theories in 2+1 dimensions: glueball spectra and k-string tensions. J. High Energ. Phys. 2017: 15. arXiv: 1609.03873

Marco Frasca (2016). Confinement in a three-dimensional Yang-Mills theory. arXiv: 1611.08182

Marco Frasca (2015). Quantum Yang-Mills field theory. Eur. Phys. J. Plus 132: 38. arXiv: 1509.05292

Filed under: Particle Physics, Physics, QCD Tagged: Ground state, Lattice Gauge Theories, Mass Gap, Millenium prize, Yang-Mills theory

## April 13, 2017

### Clifford V. Johnson - Asymptotia

So an unexpected but very welcome message from my publisher a while back was a query to see if I'd be interested in doing the cover for my forthcoming book. Of course, the answer was a very definite yes! (I knew that publishers often want to control that aspect of a book themselves, and while some time ago I made a deliberately vague suggestion about what I thought the cover might be like, I was careful not to try to insert myself into that aspect of production, so this was a genuine surprise.) I'm focusing on physics research during this part of my sabbatical, so this would have to be primarily an "after hours" sort of operation, but it should not take long since I had a clear idea of what to do. I worked up two or three versions of an idea and sent them along to see whether they liked where I was going, and once they picked one (happily, the one I liked most) I set it aside as a thing to work on once I finished a paper (see last post) and the (prep for as well as the actual) trip East to give a physics colloquium (see the post I never got around to doing about that trip).

Then I had terrible delays on the way back that cost me the better part of an extra day getting back. So I worked up some of the nearly final art and layout [...]

The post Quick Oceanside Art… appeared first on Asymptotia.

## April 11, 2017

### Symmetrybreaking - Fermilab/SLAC

Experiments at CERN investigate antiparticles.

What do shrimp, tennis balls and pulsars all have in common? They are all made from matter.

Admittedly, that answer is a cop-out, but it highlights a big, persistent quandary for scientists: Why is everything made from matter when there is a perfectly good substitute—antimatter?

The European laboratory CERN hosts several experiments to ascertain the properties of antimatter particles, which almost never survive in our matter-dominated world.

Particles (such as the proton and electron) have oppositely charged antimatter doppelgangers (such as the antiproton and antielectron). Because they are opposite but equal, a matter particle and its antimatter partner annihilate when they meet.

Antimatter wasn’t always rare. Theoretical and experimental research suggests that there was an equal amount of matter and antimatter right after the birth of our universe. But 13.8 billion years later, only matter-made structures remain in the visible universe.

Scientists have found small differences between the behavior of matter and antimatter particles, but not enough to explain the imbalance that led antimatter to disappear while matter perseveres. Experiments at CERN are working to solve that riddle using three different strategies.

### Antimatter under the microscope

It’s well known that CERN is home to the Large Hadron Collider, the world’s highest-energy particle accelerator. Less known is that CERN also hosts the world’s most powerful particle decelerator—a machine that slows down antiparticles to a near standstill.

The antiproton decelerator is fed by CERN’s accelerator complex. A beam of energetic protons is diverted from CERN’s Proton Synchrotron and into a metal wall, spawning a multitude of new particles, including some antiprotons. The antiprotons are focused into a particle beam and slowed by electric fields inside the antiproton decelerator. From here they are fed into various antimatter experiments, which trap the antiprotons inside powerful magnetic fields.

“All these experiments are trying to find differences between matter and antimatter that are not predicted by theory,” says Will Bertsche, a researcher at University of Manchester, who works in CERN’s antimatter factory. “We’re all trying to address the big question: Why is the universe made up of matter these days and not antimatter?”

By cooling and trapping antimatter, scientists can intimately examine its properties without worrying that their particles will spontaneously encounter a matter companion and disappear. Some of the traps can preserve antiprotons for more than a year. Scientists can also combine antiprotons with positrons (antielectrons) to make antihydrogen.

“Antihydrogen is fascinating because it lets us see how antimatter interacts with itself,” Bertsche says. “We’re getting a glimpse at how a mirror antimatter universe would behave.”

Scientists in CERN’s antimatter factory have measured the mass, charge, light spectrum, and magnetic properties of antiprotons and antihydrogen to high precision. They also look at how antihydrogen atoms are affected by gravity; that is, do the anti-atoms fall up or down? One experiment is even trying to make an assortment of matter-antimatter hybrids, such as a helium atom in which one of the electrons is replaced with an orbiting antiproton.

So far, all their measurements of trapped antimatter match the theory: Except for the opposite charge and spin, antimatter appears completely identical to matter. But these affirmative results don’t deter Bertsche from looking for antimatter surprises. There must be unpredicted disparities between these particle twins that can explain why matter won its battle with antimatter in the early universe.

“There’s something missing in this model,” Bertsche says. “And nobody is sure what that is.”

### Antimatter in motion

The LHCb experiment wants to answer this same question, but they are looking at antimatter particles that are not trapped. Instead, LHCb scientists study how free-range antimatter particles behave as they travel and transform inside the detector.

“We’re recording how unstable matter and antimatter particles decay into showers of particles and the patterns they leave behind when they do,” says Sheldon Stone, a professor at Syracuse University working on the LHCb Experiment. “We can’t make these measurements if the particles aren’t moving.”

The particles-in-motion experiments have already observed some small differences between matter and antimatter particles. In 1964 scientists at Brookhaven National Laboratory noticed that neutral kaons (particles containing a strange quark and a down quark) decay into matter and antimatter particles at slightly different rates, an observation that won them the Nobel Prize in 1980.

The LHCb experiment continues this legacy, looking for even more discrepancies between the metamorphoses of matter and antimatter particles. They recently observed that the daughter particles of certain antimatter baryons (particles containing three quarks) have a slightly different spatial orientation than their matter contemporaries.

But even with the success of uncovering these discrepancies, scientists are still very far from understanding why antimatter all but disappeared.

“Theory tells us that we’re still off by nine orders of magnitude,” Stone says, “so we’re left asking, where is it? What is antimatter’s Achilles heel that precipitated its disappearance?”

### Antimatter in space

Most antimatter experiments based at CERN produce antiparticles by accelerating and colliding protons. But one experiment is looking for feral antimatter freely roaming through outer space.

The Alpha Magnetic Spectrometer is an international experiment supported by the US Department of Energy and NASA. This particle detector was assembled at CERN and is now installed on the International Space Station, where it orbits Earth 400 kilometers above the surface. It records the momentum and trajectory of roughly a billion vagabond particles every month, including a million antimatter particles.

Nomadic antimatter nuclei could be lonely relics from the Big Bang or the rambling residue of nuclear fusion in antimatter stars.

But AMS searches for phenomena not explained by our current models of the cosmos. One of its missions is to look for antimatter that is so complex and robust, there is no way it could have been produced through normal particle collisions in space.

“Most scientists accept that antimatter disappeared from our universe because it is somehow less resilient than matter,” says Mike Capell, a researcher at MIT and a deputy spokesperson of the AMS experiment. “But we’re asking, what if all the antimatter never disappeared? What if it’s still out there?”

If an antimatter kingdom exists, astronomers expect that they would observe mass particle-annihilation fizzing and shimmering at its boundary with our matter-dominated space—which they don’t. Not yet, at least. Because our universe is so immense (and still expanding), researchers on AMS hypothesize that maybe these intersections are too dim or distant for our telescopes.

“We already have trouble seeing deep into our universe,” Capell says. “Because we’ve never seen a domain where matter meets antimatter, we don’t know what it would look like.”

AMS has been collecting data for six years. From about 100 billion cosmic rays, they’ve identified a few strange events with characteristics of antihelium. Because the sample is so tiny, it’s impossible to say whether these anomalous events are the first messengers from an antimatter galaxy or simply part of the chaotic background.

“It’s an exciting result,” Capell says. “However, we remain skeptical. We need data from many more cosmic rays before we can determine the identities of these anomalous particles.”