# Particle Physics Planet

## May 28, 2017

### Christian P. Robert - xi'an's og

accelerating MCMC

I have recently [well, not so recently!] been asked to write a review paper on ways of accelerating MCMC algorithms for the [review] journal WIREs Computational Statistics and would welcome all suggestions towards the goal of accelerating MCMC algorithms. Besides [and including more on]

• coupling strategies using different kernels and switching between them;
• tempering strategies using flatter or lower dimensional targets as intermediary steps, e.g., à la Neal;
• sequential Monte Carlo with particle systems targeting again flatter or lower dimensional targets and adapting proposals to this effect;
• Hamiltonian MCMC, again with connections to Radford (and more generally ways of avoiding rejections);
• adaptive MCMC, obviously;
• Rao-Blackwellisation, just as obviously (in the sense that increasing the precision in the resulting estimates means less simulations).
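One of these strategies can be sketched in a few lines. Below is a minimal parallel-tempering random-walk Metropolis sampler, using flatter (tempered) versions of the target as intermediary chains; the bimodal toy target and the temperature ladder are illustrative choices of mine, not anything prescribed in the post.

```python
import math
import random

def parallel_tempering(logp, temps, n_iter=5000, step=1.0, seed=42):
    """Random-walk Metropolis with parallel tempering.

    logp:  log target density.
    temps: inverse temperatures; temps[0] == 1 is the cold (target) chain,
           smaller values flatten the target so chains cross between modes.
    Returns the draws of the cold chain."""
    rng = random.Random(seed)
    x = [0.0] * len(temps)
    cold = []
    for _ in range(n_iter):
        # within-chain Metropolis update at each temperature
        for i, beta in enumerate(temps):
            prop = x[i] + rng.gauss(0, step)
            if math.log(rng.random()) < beta * (logp(prop) - logp(x[i])):
                x[i] = prop
        # propose swapping the states of a random adjacent pair of chains
        i = rng.randrange(len(temps) - 1)
        log_alpha = (temps[i] - temps[i + 1]) * (logp(x[i + 1]) - logp(x[i]))
        if math.log(rng.random()) < log_alpha:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])
    return cold

# toy bimodal target: equal mixture of N(-3, 1) and N(3, 1)
def logp(t):
    return math.log(math.exp(-0.5 * (t - 3) ** 2) + math.exp(-0.5 * (t + 3) ** 2))

draws = parallel_tempering(logp, temps=[1.0, 0.5, 0.2])
```

A plain random walk with unit steps tends to get stuck in one of the two modes; the hot chain at inverse temperature 0.2 sees a much shallower valley between them, and the swap moves feed its excursions back down to the cold chain.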

Filed under: Statistics Tagged: acceleration of MCMC algorithms, coupling, Hamiltonian Monte Carlo, India, MCMC, Monte Carlo Statistical Methods, motorbike, NUTS, Rajasthan, review, survey, tempering, WIREs

## May 27, 2017

### Peter Coles - In the Dark

A Black Rain Frog

No time for a post today so here’s a picture of a Black Rain Frog…

## May 26, 2017

### Christian P. Robert - xi'an's og

Children of Earth and Sky [book review]

While in Dublin last weekend, I found myself without a book to read and, wandering into a nice bookstore on Grafton Street, I discovered that Guy Gavriel Kay had published a book recently! Now, this was a terrific surprise as his A Song for Arbonne was and remains one of my favourite books.

There are similarities between the two books in that both are inspired by Mediterranean cultures and history: A Song for Arbonne was based upon the late-medieval courts of love in Occitania, while Children of Earth and Sky borrows from the century-long feud between Venezia and the Ottoman empire, with Croatia stuck in between. As acknowledged by the author, this novel stemmed from a visit to Croatia and the suggestion to tell the story of local bandits turned into heroes for fighting the Ottomans. Although I found unravelling the numerous borrowings from history and geography a wee bit tiresome, this is a quite enjoyable pseudo-historical novel, except that the plot is too predictable in having all its main characters cross one another's paths with clockwork regularity, and all the main women characters eventually escape the fate set upon them by highly patriarchal societies. A Song for Arbonne had more tension and urgency, or maybe just made me care more for its central characters.

Filed under: Books, Kids, Travel Tagged: A Song for Arbonne, Children of Earth and Sky, Constantinople, Croatia, fantasy, Guy Gavriel Kay, historical novels, Prague, Venezia

### Emily Lakdawalla - The Planetary Society Blog

The Planetary Society’s Canadian Initiative
It’s an exciting time for Canada in space. It’s also an exciting time for Canadian space advocacy, as The Planetary Society's Global Community Outreach Manager Kate Howells describes.

### Peter Coles - In the Dark

Summertime – Albert Ayler

George Gershwin’s beautiful song Summertime has been recorded countless times in countless ways by countless artists, but if you’re expecting it to be performed as a restful lullaby, as it is normally played, you’ll probably be shocked. This version is a heartbreaking expression of pain and anguish performed by the great Albert Ayler, and it was recorded in Copenhagen in 1963.

P.S. The painting shown in the video is by Matisse….

### Clifford V. Johnson - Asymptotia

Character Design on iPad…

Here's a video glimpse (less than 1 min. long) of my working through designing the main character for the upcoming graphic short story I'm doing for an anthology to be published next year. (See here for more.) There's a clickable still on the right. I had started sketching her out on the subway a few days ago, and then finished some of the groundwork today on the bus, taking a snap at the end. From there I pulled it into Procreate on the iPad Pro, and then drew and painted more refined lines and strokes using an Apple Pencil. Faces are funny things... it wasn't really until the final tweaks at the end that I was happy with the drawing. I had been ready to abandon the whole thing all along, having decided that it was a failed drawing. So you never know. Always good to persist until the end... wherever that is. Last note: This drawing style is more detailed than I hope to use in the story. I will work out simpler versions of her for the story... I hope. Video below.

The post Character Design on iPad… appeared first on Asymptotia.

### Symmetrybreaking - Fermilab/SLAC

First results from search for a dark light

The Heavy Photon Search at Jefferson Lab is looking for a hypothetical particle from a hidden “dark sector.”

In 2015, a group of researchers installed a particle detector just half of a millimeter away from an extremely powerful electron beam. The detector could either start them on a new search for a hidden world of particles and forces called the “dark sector”—or its sensitive parts could burn up in the beam.

Earlier this month, scientists presented the results from that very first test run at the Heavy Photon Search collaboration meeting at the US Department of Energy’s Thomas Jefferson National Accelerator Facility. To the scientists’ delight, the experiment is working flawlessly.

Dark sector particles could be the long-sought components of dark matter, the mysterious form of matter thought to be five times more abundant in the universe than regular matter. To be specific, HPS is looking for a dark-sector version of the photon, the elementary “particle of light” that carries the fundamental electromagnetic force in the Standard Model of particle physics.

Analogously, the dark photon would be the carrier of a force between dark-sector particles. But unlike the regular photon, the dark photon would have mass. That’s why it’s also called the heavy photon.

To search for dark photons, the HPS experiment uses a very intense, nearly continuous beam of highly energetic electrons from Jefferson Lab’s CEBAF accelerator. When slammed into a tungsten target, the electrons radiate energy that could potentially produce the mystery particles. Dark photons are believed to quickly decay into pairs of electrons and their antiparticles, positrons, which leave tracks in the HPS detector.

“Dark photons would show up as an anomaly in our data—a very narrow bump on a smooth background from other processes that produce electron-positron pairs,” says Omar Moreno from SLAC National Accelerator Laboratory, who led the analysis of the first data and presented the results at the collaboration meeting.

The challenge is that, due to the large beam energy, the decay products are compressed very narrowly in the beam direction. To catch them, the detector must be very close to the electron beam. But not too close—the smallest beam movements could make the beam swerve into the detector. Even if the beam doesn’t directly hit the HPS apparatus, electrons interacting in the target can scatter into the detector and cause unwanted signals.

The HPS team implemented a number of precautions to make sure their detector could handle the potentially destructive beam conditions. They installed and carefully aligned a system to intercept any large beam motions, made the detector’s support structure movable to bring the detector close to the beam and measure the exact beam position, and installed a feedback system that would shut the beam down if its motions were too large. They also placed their whole setup in vacuum because interactions of the beam with gas molecules would create too much background. Finally, they cooled the detector to negative 30 degrees Fahrenheit to reduce the effects of radiation damage. These measures allowed the team to operate their experiment so close to the beam.

“That’s maybe as close as anyone has ever come to such a particle beam,” says John Jaros, head of the HPS group at SLAC, which built the innermost part of the HPS detector, the Silicon Vertex Tracker. “So, it was fairly exciting when we gradually decreased the distance between the detector and the beam for the first time and saw that everything worked as planned. A large part of that success lies with the beautiful beams Jefferson Lab provided.”

SLAC’s Mathew Graham, who oversees the HPS analysis group, says, “In addition to figuring out if we can actually do the experiment, the first run also helped us understand the background signals in the experiment and develop the data analysis tools we need for our search for dark photons.”

So far, the team has seen no signs of dark photons. But to be fair, the data they analyzed came from just 1.7 days of accumulated running time. HPS collects data in short spurts when the CLAS experiment, which studies protons and neutrons using the same beam line, is not in use.

A second part of the analysis is still ongoing: The researchers are also closely inspecting the exact location, or vertex, from which an electron-positron pair emerges.

“If a dark photon lives long enough, it might make it out of the tungsten target where it was produced and travel some distance through the detector before it decays into an electron-positron pair,” Moreno says. The detector was specifically designed to observe such a signal.

Jefferson Lab has approved the HPS project for a total of 180 days of experimental time. Slowly but surely, HPS scientists are finding chances to use it.

### Tommaso Dorigo - Scientificblogging

Physics-Inspired Artwork In Venice 2: Symmetries

As I explained in the previous post of this series, students in high schools of the Venice area have been asked to produce artistic works inspired by LHC physics research, and in particular the Higgs boson.

### Peter Coles - In the Dark

The Sundial of Trevithick

Since it’s a lovely sunny day in Cardiff – and already very warm – I thought I’d step outside the office of the Cardiff University Data Innovation Research Institute which is situated in the Trevithick Building and take a picture of our new sundial:

This flat sundial was installed by a company called Border Sundials and is designed very carefully to be as accurate as possible for the particular wall on which it is placed. It’s also corrected for longitude.

However, I took the photograph at about 10.30am, and you’ll notice that it’s showing about 9.30. That’s because it hasn’t been corrected for British Summer Time, so it’s offset by an hour. Moreover, a sundial always shows local solar time rather than the mean time shown on clocks. These differ because of (a) the inclination of the Earth’s equator relative to the plane of its orbit around the Sun and (b) the eccentricity of the Earth’s orbit around the Sun, which means that it does not move at a constant speed. The difference between mean time and solar time can be reconciled using the equation of time. The maximum correction is about 15 minutes, which is large enough to be noticed on a sundial of this type. Often a graph of the equation of time is placed next to a sundial so one can make the correction oneself, but for some reason there isn’t one here.
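The equation-of-time correction is easy to compute numerically. The snippet below uses a common empirical approximation (the coefficients are a standard fit of mine to illustrate the size of the effect, not anything from the post):

```python
import math

def equation_of_time(day_of_year):
    """Approximate equation of time in minutes (apparent solar minus mean time).

    A widely used empirical fit; accurate to within a minute or so, which is
    plenty for reading a garden sundial."""
    b = 2 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# converting a sundial reading to clock time also needs the longitude
# correction and, in summer, the extra hour of British Summer Time
```

Scanning the year with this formula reproduces the familiar behaviour: the correction swings between roughly minus 14 and plus 16 minutes, and nearly vanishes in mid-June.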

The sundial adds quite a lot of interest to what otherwise is a featureless brick wall and we often notice people looking at it outside our office.

### Geraint Lewis - Cosmic Horizons

Falling into a black hole: Just what do you see?
Everyone loves black holes. Immense gravity, a one-way space-time membrane, the possibility of links to other universes. All lovely stuff.

A little trawl of the internets reveals an awful lot of web pages discussing black holes, and discussions about spaghettification, firewalls, lost information, and many other things. Actually, a lot of the stuff out there on the web is nonsense, hand-waving, partly informed guesswork. And one of the questions that gets asked is "What would you see looking out into the universe?"

Some (incorrectly) say that you would never cross the event horizon, a significant misunderstanding of the coordinates of relativity. Others (incorrectly) conclude from this that you actually see the entire future history of the universe play out in front of your eyes.

What we have to remember, of course, is that relativity is a mathematical theory, and instead of hand waving, we can use mathematics to work out what we will see. And that's what I did.

I won't go through the details here, but it is based upon correctly calculating redshifts in relativity and conservation laws embodied in Killing vectors. But the result is an equation, an equation that looks like this
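A reconstruction of that equation, assuming the standard Schwarzschild Killing-vector calculation the post describes, for an observer falling from rest at $r_s$ and receiving radially infalling photons (geometric units, $G=c=1$):

$$\frac{\nu_o}{\nu_e} = \frac{\sqrt{1-\frac{2m}{r_e}}\,\left(\sqrt{1-\frac{2m}{r_s}} - \sqrt{\frac{2m}{r_o}-\frac{2m}{r_s}}\right)}{1-\frac{2m}{r_o}}$$

This form applies for $r_o \le r_s$; the $0/0$ at the horizon $r_o = 2m$ has a finite limit, so nothing special is seen there.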

Here, $r_s$ is the radius from which you start to fall, $r_e$ is the radius at which the photon was emitted, and $r_o$ is the radius at which you receive the photon. On the left-hand side is the ratio of the frequency of the photon at the time of observation to that at emission. If this is bigger than one, the photon is observed to have more energy than it was emitted with, and is blueshifted. If it is less than one, it has less energy, and is redshifted. Oh, and $m$ is the mass of the black hole.

One can throw this lovely equation into python and plot it up. What do you get?
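Such a computation can be sketched as follows. The frequency-ratio formula coded here is my own reconstruction of the standard Schwarzschild result for an observer falling from rest at $r_s$ (geometric units, $G=c=1$), not a quote from the post:

```python
import math

def freq_ratio(r_o, r_e, r_s, m=1.0):
    """Observed/emitted frequency ratio for an observer who falls from rest
    at radius r_s and receives, at radius r_o, a radially infalling photon
    emitted from a static source at radius r_e (Schwarzschild, G = c = 1).

    Reconstructed from the Killing-vector conservation argument; valid for
    r_o <= r_s, and away from the removable 0/0 at the horizon r_o = 2m."""
    energy = math.sqrt(1 - 2 * m / r_s)            # conserved energy per unit mass
    radial = math.sqrt(2 * m / r_o - 2 * m / r_s)  # |dr/dtau| of the falling observer
    return math.sqrt(1 - 2 * m / r_e) * (energy - radial) / (1 - 2 * m / r_o)

# ratio < 1 means redshift; inside the horizon the ratio keeps falling
for r_o in (1.5, 1.0, 0.5, 0.1):
    print(r_o, freq_ratio(r_o, r_e=20.0, r_s=10.0))
```

Sweeping $r_o$ down toward zero for a range of emission radii reproduces the qualitative behaviour described below: inside $r=2m$ the ratio is below one and tends to zero at the singularity.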

So we get a sequence of plots, one for each starting radius of the fall: 2.1m, then 3m, 5m, 10m, and finally 50m. In each of these, each line is a photon emitted from a different distance.

The key conclusion is that within the event horizon (r=2m) photons are generally seen to be redshifted, irrespective of where you start falling from. In fact, in the last moments before you meet your ultimate end in the central singularity, the energy of the observed photon goes to zero and the outside universe is infinitely redshifted and vanishes from view.

How cool is that?

## May 25, 2017

### Christian P. Robert - xi'an's og

Le Monde puzzle [#1009]

An incomprehensible (and again double) Le Monde mathematical puzzle (despite requests for clarification to the authors; the details in brackets are mine):

1. A [non-circular] chain of 63 paper clips can be broken into sub-chains by freeing one clip [from both neighbours] at a time. At a given stage, considering the set of lengths of these sub-chains, the collection of all possible sums of these lengths is a subset of {1,…,63}. What is the minimal number of steps to recover the entire set {1,…,63}? And what is the maximal length L of a chain of paper clips that allows this recovery in 8 steps?
2. A tri-colored chain of 200 paper clips starts with a red, a blue and a green clip. Removing one clip every four clips produces a chain of 50 removed clips identical to the chain of the 50 first clips of the original chain, and a chain of the remaining 150 clips identical to the 150 first clips of the original chain. Deduce the number of green, red, and blue clips.

The first question can be easily tackled by random exploration. Pick one clip at random among the 63, and keep freeing further clips until the set of sums is {1,…,63}. For instance,

rebreak=function(brkz){
  # lengths of the freed clips (1 each) and of the sub-chains between them
  difz=diff(sort(c(0,brkz-1,brkz,63)))
  # approximate the set of achievable sums by cumulating random orderings
  sumz=cumsum(sample(difz))
  for (t in 1:1e3)
    sumz=unique(c(sumz,cumsum(sample(difz))))
  # if some total in 1:63 is missing, free one further clip at random
  if (length(sumz)<63)
    brkz=rebreak(sort(c(brkz,sample((1:63)[-brkz],1))))
  return(brkz)}


where I used sampling to find the set of all possible partial sums. Which leads to a solution with three steps, at positions 5, 22, and 31. This sounds impossibly small but the corresponding lengths are

1 1 1 4 8 16 32

from which one can indeed recover by summation all numbers till 63=2⁶-1. From there, a solution in 8 steps can be found by directly considering the lengths

1 1 1 1 1 1 1 1 9 18 36 72 144 288 576 1152 2304

whose total sum is 4607. And with breaks

10 29 66 139 284 573 1150 2303
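Both coverage claims are easy to machine-check with a small subset-sum routine (the helper names and the bitmask trick are mine; note that with breaks at 10, 29, …, 2303 the final sub-chain has length 2304 = 2303 + 1, so the piece totals reach 4607):

```python
def subset_sums(lengths):
    """Bitmask with bit k set iff some subset of lengths sums to k."""
    mask = 1  # the empty subset gives sum 0
    for piece in lengths:
        mask |= mask << piece
    return mask

def covers_all(lengths):
    """True iff every total from 1 to sum(lengths) is achievable."""
    total = sum(lengths)
    return subset_sums(lengths) == (1 << (total + 1)) - 1

# three breaks at 5, 22, 31: pieces 1 1 1 4 8 16 32 cover 1..63
print(covers_all([1, 1, 1, 4, 8, 16, 32]))
# eight breaks: the seventeen pieces cover 1..4607
print(covers_all([1] * 8 + [9, 18, 36, 72, 144, 288, 576, 1152, 2304]))
```

The doubling pattern is the binary trick in disguise: once the current pieces cover every total up to s, the next piece may be as long as s + 1 without leaving a gap.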

The second puzzle is completely independent. Running another R code reproducing the constraints leads to

tromcol=function(N=200){
  vale=rep(0,N)
  vale[1:3]=1:3
  while (min(vale)==0){
    # removed clips (every fourth one) reproduce the first 50 clips
    vale[4*(1:50)]=vale[1:50]
    # the 150 remaining clips reproduce the first 150 clips
    vale[-(4*(1:50))]=vale[1:150]}
  return(c(sum(vale==1),sum(vale==2),sum(vale==3)))}


and to 120 red clips, 46 blue clips and 34 green clips.

Filed under: Books, Kids Tagged: competition, Le Monde, mathematical puzzle, prime numbers, rank statistics

### Peter Coles - In the Dark

The Art of Jupiter

This amazing closeup image is of the North polar region of Jupiter. It was taken by NASA’s Juno spacecraft. Here’s a wider view:

I think it will take scientists quite some time to figure out what is going on in all those complex vortex structures!

In the meantime, though, I think these pictures and the others that have been released can be enjoyed as works of art! As a matter of fact, this one reminds me of van Gogh’s Starry Night...

### Clifford V. Johnson - Asymptotia

Laying it All Out

So, this is what the early stage of the graphic short story laying out process looks like. For me. I actually do it old school with pencil and paper, and actual laying out. You can click for a larger view but I've blurred out some bits - because spoilers.

So...20 pages works nicely. 16? Hmmmm...

The post Laying it All Out appeared first on Asymptotia.

### Clifford V. Johnson - Asymptotia

At NPR West

Well, that was fun. And the NPR West studios in Culver City are fantastic.

I'll let you know when the piece, about science consulting for the entertainment industry, appears. Unless I really made a pig's ear of the interview in which case I may well forget to post it. ;)

The post At NPR West appeared first on Asymptotia.

### Peter Coles - In the Dark

Yellow Stars, Red Stars and Bayesian Inference

I came across a paper on the arXiv yesterday with the title ‘Why do we find ourselves around a yellow star instead of a red star?’. Here’s the abstract:

M-dwarf stars are more abundant than G-dwarf stars, so our position as observers on a planet orbiting a G-dwarf raises questions about the suitability of other stellar types for supporting life. If we consider ourselves as typical, in the anthropic sense that our environment is probably a typical one for conscious observers, then we are led to the conclusion that planets orbiting in the habitable zone of G-dwarf stars should be the best place for conscious life to develop. But such a conclusion neglects the possibility that K-dwarfs or M-dwarfs could provide more numerous sites for life to develop, both now and in the future. In this paper we analyze this problem through Bayesian inference to demonstrate that our occurrence around a G-dwarf might be a slight statistical anomaly, but only the sort of chance event that we expect to occur regularly. Even if M-dwarfs provide more numerous habitable planets today and in the future, we still expect mid G- to early K-dwarfs stars to be the most likely place for observers like ourselves. This suggests that observers with similar cognitive capabilities as us are most likely to be found at the present time and place, rather than in the future or around much smaller stars.

Although astrobiology is not really my province, I was intrigued enough to read on, until I came to the following paragraph in which the authors attempt to explain how Bayesian inference works:

We approach this problem through the framework of Bayesian inference. As an example, consider a fair coin that is tossed three times in a row. Suppose that all three tosses turn up Heads. Can we conclude from this experiment that the coin must be weighted? In fact, we can still maintain our hypothesis that the coin is fair because the chances of getting three Heads in a row is 1/8. Many events with a probability of 1/8 occur every day, and so we should not be concerned about an event like this indicating that our initial assumptions are flawed. However, if we were to flip the same coin 70 times in a row with all 70 turning up Heads, we would readily conclude that the experiment is fixed. This is because the probability of flipping 70 Heads in a row is about $10^{-22}$, which is an exceedingly unlikely event that has probably never happened in the history of the universe. This informal description of Bayesian inference provides a way to assess the probability of a hypothesis in light of new evidence.

Obviously I agree with the statement right at the end, that ‘Bayesian inference provides a way to assess the probability of a hypothesis in light of new evidence’. That’s certainly what Bayesian inference does, but this ‘informal description’ is really a frequentist rather than a Bayesian argument, in that it only mentions the probability of given outcomes, not the probability of different hypotheses…
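The contrast is easy to make concrete: a Bayesian treats the fairness hypothesis itself as carrying a probability that the evidence updates. Here is a toy version; the two-headed alternative and the 50:50 prior are illustrative choices of mine, not from either paper:

```python
from fractions import Fraction

def posterior_fair(n_heads, prior_fair=Fraction(1, 2)):
    """Posterior probability that a coin is fair after n_heads heads in a row,
    comparing 'fair' (P(heads) = 1/2) against a two-headed coin (P(heads) = 1),
    starting from the given prior probability of fairness."""
    like_fair = Fraction(1, 2) ** n_heads   # likelihood of the data under fairness
    like_biased = Fraction(1)               # a two-headed coin always lands heads
    num = prior_fair * like_fair
    return num / (num + (1 - prior_fair) * like_biased)

print(posterior_fair(3))    # three heads: fairness survives, if dented
print(posterior_fair(70))   # seventy heads: fairness is all but ruled out
```

The point is that the output is a probability of a hypothesis, which is exactly what the quoted paragraph never computes.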

Anyway, I was so unconvinced by this description’ that I stopped reading at that point and went and did something else. Since I didn’t finish the paper I won’t comment on the conclusions, although I am more than usually sceptical. You might disagree of course, so read the paper yourself and form your own opinion! For me, it goes in the file marked Bad Statistics!

### Emily Lakdawalla - The Planetary Society Blog

Pretty Pictures of the Cosmos: Waltzing Through the Universe
Award-winning astrophotographer Adam Block brings us more of his stunning images of the universe—this time of cosmic dances through space.

### The n-Category Café

A Type Theory for Synthetic ∞-Categories

One of the observations that launched homotopy type theory is that the rule of identity-elimination in Martin-Löf’s identity types automatically generates the structure of an $\infty$-groupoid. In this way, homotopy type theory can be viewed as a “synthetic theory of $\infty$-groupoids.”

It is natural to ask whether there is a similar directed type theory that describes a “synthetic theory of $(\infty,1)$-categories” (or even higher categories). Interpreting types directly as (higher) categories runs into various problems, such as the fact that not all maps between categories are exponentiable (so that not all $\prod$-types exist), and that there are numerous different kinds of “fibrations” given the various possible functorialities and dimensions of categories appearing as fibers. The 2-dimensional directed type theory of Licata and Harper has semantics in 1-categories, with a syntax that distinguishes between co- and contra-variant dependencies; but since the 1-categorical structure is “put in by hand”, it’s not especially synthetic and doesn’t generalize well to higher categories.

An alternative approach was independently suggested by Mike and by Joyal, motivated by the model of homotopy type theory in the category of Reedy fibrant simplicial spaces, which contains as a full subcategory the $\infty$-cosmos of complete Segal spaces, which we call Rezk spaces. It is not possible to model ordinary homotopy type theory directly in the Rezk model structure, which is not right proper, but we can model it in the Reedy model structure and then identify internally some “types with composition,” which correspond to Segal spaces, and “types with composition and univalence,” which correspond to the Rezk spaces.

Almost five years later, we are finally developing this approach in more detail. In a new paper now available on the arXiv, Mike and I give definitions of Segal and Rezk types motivated by these semantics, and demonstrate that these simple definitions suffice to develop the synthetic theory of $(\infty,1)$-categories. So far this includes functors, natural transformations, co- and contravariant type families with discrete fibers ($\infty$-groupoids), the Yoneda lemma (including a “dependent” Yoneda lemma that looks like “directed identity-elimination”), and the theory of coherent adjunctions.

## Cofibrations and extension types

One of the reasons this took so long to happen is that it required a technical innovation to become feasible. To develop the synthetic theory of Segal and Rezk types, we need to detect the semantic structure of the simplicial spaces model internally, and it seems that the best way to do this is to axiomatize the presence of a strict interval $2$ (a totally ordered set with distinct top and bottom elements). This is the geometric theory of which simplicial sets are the classifying topos (and of which simplicial spaces are the classifying $(\infty,1)$-topos). We can then define an arrow in a type $A$ to be a function $2\to A$.

However, often we want to talk about arrows with specified source and target. We can of course define the type $\mathrm{hom}_A(x,y)$ of such arrows to be $\sum_{f:2\to A} (f(0)=x)\times (f(1)=y)$, but since we are in homotopy type theory, the equalities $f(0)=x$ and $f(1)=y$ are data, i.e. homotopical paths, that have to be carried around everywhere. When we start talking about 2-simplices and 3-simplices with specified boundaries as well, the complexity becomes unmanageable.

The innovation that solves this problem is to introduce a notion of cofibration in type theory, with a corresponding type of extensions. If $i:A\to B$ is a cofibration and $X:B\to \mathcal{U}$ is a type family dependent on $B$, while $f:\prod_{a:A} X(i(a))$ is a section of $X$ over $i$, then we introduce an extension type $\langle \prod_{b:B} X(b) \mid^i_f\rangle$ consisting of “those dependent functions $g:\prod_{b:B} X(b)$ such that $g(i(a)) \equiv f(a)$ — note the strict judgmental equality! — for any $a:A$”. This is modeled semantically by a “Leibniz” or “pullback-corner” map. In particular, we can define $\mathrm{hom}_A(x,y) = \langle \prod_{t:2} A \mid^{0,1}_{[x,y]} \rangle$, the type of functions $f:2\to A$ such that $f(0)\equiv x$ and $f(1)\equiv y$ strictly, and so on for higher simplices.

General extension types along cofibrations were first considered by Mike and Peter Lumsdaine for a different purpose. In addition to the pullback-corner semantics, they are inspired by the path-types of cubical type theory, which replace the inductively specified identity types of ordinary homotopy type theory with a similar sort of restricted function-type out of the cubical interval. Our paper introduces a general notion of “type theory with shapes” and extension types that includes the basic setup of cubical type theory as well as our simplicial type theory, along with potential generalizations to Joyal’s “disks” for a synthetic theory of $(\infty,n)$-categories.

## Simplices in the theory of a strict interval

In simplicial type theory, the cofibrations are the “inclusions of shapes” generated by the coherent theory of a strict interval, which is axiomatized by the interval $2$, top and bottom elements $0,1:2$, and an inequality relation $\le$ satisfying the strict interval axioms.

Simplices can then be defined as

$$\Delta^n := \{ \langle t_1,\ldots,t_n\rangle \mid t_n \le t_{n-1} \le \cdots \le t_2 \le t_1 \}$$

Note that the 1-simplex $\Delta^1$ agrees with the interval $2$.

Boundaries, e.g. of the 2-simplex, can be defined similarly:

$$\partial\Delta^2 := \{ \langle t_1,t_2\rangle : 2\times 2 \mid (0 \equiv t_2 \le t_1) \vee (t_2 \equiv t_1) \vee (t_2 \le t_1 \equiv 1) \}$$

making the inclusion of the boundary of a 2-simplex into a cofibration.

## Segal types

For any type $A$ with terms $x,y:A$ define

$$\mathrm{hom}_A(x,y) := \langle 2\to A \mid^{\partial\Delta^1}_{[x,y]} \rangle$$

That is, a term $f:\mathrm{hom}_A(x,y)$, which we call an arrow from $x$ to $y$ in $A$, is a function $f:2\to A$ so that $f(0)\equiv x$ and $f(1)\equiv y$. For $f:\mathrm{hom}_A(x,y)$, $g:\mathrm{hom}_A(y,z)$, and $h:\mathrm{hom}_A(x,z)$, a similar extension type

$$\mathrm{hom}_A(x,y,z,f,g,h) := \langle \Delta^2\to A \mid^{\partial\Delta^2}_{[x,y,z,f,g,h]} \rangle$$

has terms that we interpret as witnesses that $h$ is the composite of $f$ and $g$. We define a Segal type to be a type in which any composable pair of arrows admits a unique (composite, witness) pair. In homotopy type theory, this may be expressed by saying that $A$ is Segal if and only if for all $f:\mathrm{hom}_A(x,y)$ and $g:\mathrm{hom}_A(y,z)$ the type

$$\sum_{h:\mathrm{hom}_A(x,z)} \mathrm{hom}_A(x,y,z,f,g,h)$$

is contractible. A contractible type is in particular inhabited, and an inhabitant in this case defines a term $g\circ f:\mathrm{hom}_A(x,z)$ that we refer to as the composite of $f$ and $g$, together with a 2-simplex witness $\mathrm{comp}(g,f) : \mathrm{hom}_A(x,y,z,f,g,g\circ f)$.

Somewhat surprisingly, this single contractibility condition characterizing Segal types in fact ensures coherent categorical structure at all dimensions. The reason is that if $A$ is Segal, then the type $X\to A$ is also Segal for any type or shape $X$. For instance, applying this result in the case $X=2$ allows us to prove that the composition operation in any Segal type is associative. In an appendix we prove a conjecture of Joyal that in the simplicial spaces model this condition really does characterize exactly the Segal spaces, as usually defined.

## Discrete types

An example of a Segal type is a discrete type, which is one for which the map

$$\mathrm{idtoarr} : \prod_{x,y:A} (x =_A y) \to \mathrm{hom}_A(x,y)$$

defined by identity elimination by sending the reflexivity term to the identity arrow, is an equivalence. In a discrete type, the $\infty$-groupoid structure encoded by the identity types is equivalent to the $(\infty,1)$-category structure encoded by the hom types. More precisely, a type $A$ is discrete if and only if it is Segal, as well as Rezk-complete (in the sense to be defined later on), and moreover “every arrow is an isomorphism”.

## The dependent Yoneda lemma

If $AA$ and $BB$ are Segal types, then any function $f:A\to Bf:A\to B$ is automatically a “functor”, since by composition it preserves 2-simplices and hence witnesses of composition. However, not every type family $C:A\to 𝒰C:A\to \mathcal\left\{U\right\}$ is necessarily functorial; in particular, the universe $𝒰\mathcal\left\{U\right\}$ is not Segal — its hom-types ${\mathrm{hom}}_{𝒰}\left(X,Y\right)\hom_\left\{\mathcal\left\{U\right\}\right\}\left(X,Y\right)$ consist intuitively of “spans and higher spans”. We say that $C:A\to 𝒰C:A\to \mathcal\left\{U\right\}$ is covariant if for any $f:{\mathrm{hom}}_{A}\left(x,y\right)f:\hom_A\left(x,y\right)$ and $u:C\left(x\right)u:C\left(x\right)$, the type

$\sum_{v:C(y)} \Big\langle \prod_{t:2} C(f(t)) \,\Big\vert^{\partial\Delta^1}_{[u,v]} \Big\rangle$

of “liftings of $f$ starting at $u$” is contractible. An inhabitant of this type consists of a point $f_{*}(u) : C(y)$, which we call the (covariant) transport of $u$ along $f$, along with a witness $\mathrm{trans}(f,u)$. As with Segal types, this single contractibility condition suffices to ensure that this action is coherently functorial. It also ensures that the fibers $C(x)$ are discrete, and that the total space $\sum_{x:A} C(x)$ is Segal.

In particular, for any Segal type $A$ and any $a : A$, the hom-functor $\mathrm{hom}_A(a,-) : A \to \mathcal{U}$ is covariant. The Yoneda lemma says that for any covariant $C : A \to \mathcal{U}$, evaluation at $(a, \mathrm{id}_a)$ defines an equivalence

$\Big(\prod_{x:A} \mathrm{hom}_A(a,x) \to C(x)\Big) \simeq C(a)$

The usual proof of the Yoneda lemma applies, except that it’s simpler since we don’t need to check naturality or functoriality; in the “synthetic” world all of that comes for free.

More generally, we have a dependent Yoneda lemma, which says that for any covariant $C : \big(\sum_{x:A} \mathrm{hom}_A(a,x)\big) \to \mathcal{U}$, we have a similar equivalence

$\Big(\prod_{x:A} \prod_{f:\mathrm{hom}_A(a,x)} C(x,f)\Big) \simeq C(a, \mathrm{id}_a).$

This should be compared with the universal property of identity-elimination (path induction) in ordinary homotopy type theory, which says that for any type family $C : \big(\sum_{x:A} (a = x)\big) \to \mathcal{U}$, evaluation at $(a, \mathrm{refl}_a)$ defines an equivalence

$\Big(\prod_{x:A} \prod_{f:a=x} C(x,f)\Big) \simeq C(a, \mathrm{refl}_a).$

In other words, the dependent Yoneda lemma really is a “directed” generalization of path induction.
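The undirected principle being generalized here can be written down in a proof assistant. The following is a minimal Lean 4 sketch of ordinary path induction (the theorem name is ours, not from the paper):

```lean
-- The J rule: to prove C x p for every x and every path p : a = x,
-- it suffices to handle the single case (a, rfl).
theorem pathInduction {A : Type} (a : A)
    (C : (x : A) → a = x → Prop)
    (c : C a rfl) :
    ∀ (x : A) (p : a = x), C x p := by
  intro x p
  cases p   -- collapse p to rfl, reducing the goal to C a rfl
  exact c
```

The dependent Yoneda lemma replaces the identity type $a = x$ by the hom-type, with the identity arrow playing the role of `rfl`.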

## Rezk types

When is an arrow $f : \mathrm{hom}_A(x,y)$ in a Segal type an isomorphism? Classically, $f$ is an isomorphism just when it has a two-sided inverse, but in homotopy type theory more care is needed, for the same reason that we have to be careful when defining what it means for a function to be an equivalence: we want the notion of “being an isomorphism” to be a mere proposition. We could use analogues of any of the equivalent notions of equivalence in Chapter 4 of the HoTT Book, but the simplest is the following:

$\mathrm{isiso}(f) := \Big(\sum_{g:\mathrm{hom}_A(y,x)} g \circ f = \mathrm{id}_x\Big) \times \Big(\sum_{h:\mathrm{hom}_A(y,x)} f \circ h = \mathrm{id}_y\Big)$

An element of this type consists of a left inverse and a right inverse, together with witnesses that the respective composites with $f$ are identities. It is easy to prove that $g = h$, so that $f$ is an isomorphism if and only if it admits a two-sided inverse; but the point is that any two terms of the type $\mathrm{isiso}(f)$ are equal (i.e., $\mathrm{isiso}(f)$ is a mere proposition), which would not be the case for the more naive definition.
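The argument that the two inverses agree can be mirrored in plain Lean 4, stated for ordinary functions rather than arrows in a Segal type (a sketch; the names and the function-level setting are ours):

```lean
-- Bi-invertibility, mirroring isiso(f): a separate left inverse g and
-- right inverse h, with witnesses for each composite.
structure IsIso {A B : Type} (f : A → B) where
  g  : B → A              -- left inverse
  h  : B → A              -- right inverse
  gf : ∀ a, g (f a) = a   -- g ∘ f = id
  fh : ∀ b, f (h b) = b   -- f ∘ h = id

-- The two inverses agree: g b = g (f (h b)) = h b.
theorem left_eq_right {A B : Type} {f : A → B}
    (i : IsIso f) (b : B) : i.g b = i.h b := by
  have e : f (i.h b) = b := i.fh b
  calc i.g b = i.g (f (i.h b)) := by rw [e]
    _ = i.h b := i.gf (i.h b)
```

In the synthetic setting the same two-line calculation runs inside the hom-types of a Segal type.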

The type of isomorphisms from $x$ to $y$ in $A$ is then defined to be

$(x \cong_A y) := \sum_{f:\mathrm{hom}_A(x,y)} \mathrm{isiso}(f).$

Identity arrows are in particular isomorphisms, so by identity-elimination there is a map

$\prod_{x,y:A} (x =_A y) \to (x \cong_A y)$

and we say that a Segal type $A$ is Rezk complete if this map is an equivalence, in which case $A$ is a Rezk type.

Similarly, it is somewhat delicate to define homotopy-correct types of adjunction data that are contractible when inhabited. In the final section of our paper, we compare transposing adjunctions, by which we mean functors $f : A \to B$ and $u : B \to A$ (i.e. functions between Segal types) together with a fiberwise equivalence

$\prod_{a:A,\,b:B} \mathrm{hom}_A(a, u b) \simeq \mathrm{hom}_B(f a, b)$

with various notions of diagrammatic adjunctions, specified in terms of units and counits and higher coherence data.

The simplest of these, which we refer to as a quasi-diagrammatic adjunction, is specified by a pair of functors as above, natural transformations $\eta : \mathrm{Id}_A \to u f$ and $\epsilon : f u \to \mathrm{Id}_B$ (a “natural transformation” is just an arrow in a function-type between Segal types), and witnesses $\alpha$ and $\beta$ to both of the triangle identities. The incoherence of this type of data has been observed in bicategory theory (it is not cofibrant as a 2-category) and in $(\infty,1)$-category theory (as a subcomputad of the free homotopy coherent adjunction it is not parental). One homotopically correct type of adjunction data is a half-adjoint diagrammatic adjunction, which additionally has a witness that $f\alpha : \epsilon \circ f u \epsilon \circ f \eta u \to \epsilon$ and $\beta u : \epsilon \circ \epsilon f u \circ f \eta u \to \epsilon$ commute with the naturality isomorphism for $\epsilon$.

We prove that given Segal types $A$ and $B$ and functors $f : A \to B$ and $u : B \to A$, the type of half-adjoint diagrammatic adjunctions between them is equivalent to the type of transposing adjunctions. More precisely, if in the notion of transposing adjunction we interpret “equivalence” as a “half-adjoint equivalence”, i.e. a pair of maps $\phi$ and $\psi$ with homotopies $\phi \psi = 1$ and $\psi \phi = 1$ and a witness to one of the triangle identities for an adjoint equivalence (this is another of the coherent notions of equivalence from the HoTT Book), then these data correspond exactly under the Yoneda lemma to those of a half-adjoint diagrammatic adjunction.

This suggests similar correspondences for other kinds of coherent equivalences. For instance, if we interpret transposing adjunctions using the “bi-invertibility” notion of coherent equivalence (a specification of separate left and right inverses, as we used above to define isomorphisms in a Segal type), we obtain upon Yoneda-fication a new notion of coherent diagrammatic adjunction, consisting of a unit $\eta$ and two counits $\epsilon, \epsilon'$, together with witnesses that $\eta, \epsilon$ satisfy one triangle identity and $\eta, \epsilon'$ satisfy the other.

Finally, if the types $A$ and $B$ are not just Segal but Rezk, we can show that adjoints are literally unique, not just “unique up to isomorphism”. That is, given a functor $u : B \to A$ between Rezk types, the “type of left adjoints to $u$” is a mere proposition.

## May 24, 2017

### ZapperZ - Physics and Physicists

What Every Physics Major Should Know?
Chad Orzel took on the silly tweet posted by Sean Carroll about what HE thinks every physics major should know.

Over the weekend, cosmologist and author Sean Carroll tweeted about what physics majors should know, namely that "the Standard Model is an SU(3)xSU(2)xU(1) gauge theory, and know informally what that means." My immediate reaction to this was pretty much in line with Brian Skinner's, namely that this is an awfully specific and advanced bit of material to be a key component of undergraduate physics education. (I'm assuming an undergrad context here, because you wouldn't usually talk about a "major" at the high school or graduate school levels.)

I categorize the tweet by Carroll as silly because he has no evidence to back up WHY this is such an important piece of information and knowledge for EVERY physics major. I hate to make my own silly generalization, but I'm going to here. This type of assertion sounds like it is a typical comment made by a theorist working on an esoteric subject matter. There! I've said it, and I'm sure I've offended many people already!

I would like to make another assertion, which is that there are PLENTY (even a majority?) of physics majors who got their undergraduate degree without "informally" knowing the meaning of "...the Standard Model is an SU(3)xSU(2)xU(1) gauge theory...", AND..... went on to have a meaningful career in physics. Anyone care to dispute me on that?

If that is true, then Carroll's assertion is meaningless, because there appears to be NO valid reason for why a physics major needs to know that. He/she needs to know QM, CM, and E&M. That much I will give. Orzel even listed these and other subject areas that a typical undergraduate in physics is assumed to know. But a gauge symmetry in the Standard Model? Is this even in the Physics GRE?

Considering that about HALF of B.Sc degree recipients in physics do not go on to graduate school, I can think of many other, MORE IMPORTANT skills and knowledge with which we should equip physics majors. We are trying to make physics majors more "employable" in the marketplace, especially in the private sector. Comments by Carroll simply reinforce the DISCONNECT that many physics departments have in how they train and educate their students without paying attention to their employment possibilities beyond research and academia. This is highly irresponsible!

I'm glad that Orzel took this head on, because Sean Carroll should know better... or maybe he doesn't, and that's the problem!

Zz.

### Lubos Motl - string vacua and pheno

Hep-ph arXiv conquered by GAMBIT
Most of the new hep-ph papers on the arXiv were released by the same large collaboration called GAMBIT which stands for The Global And Modular BSM Inference Tool. Note that BSM stands for Beyond the Standard Model. Most but not all BSM models that people study or want to study are supersymmetric.

This stack of cards may actually be seen in the lower right corner of all graphs produced by GAMBIT. ;-)

Click at the hyperlink to learn about their project. I have always called for the creation of such systems and it's great that one of them seems to be born by now.

Much of the work of model builders is really about some routine work – one works with some new fields and interaction terms in the Lagrangian, uses some methods to calculate particle physics predictions, scans the parameter spaces, computes probability distributions, and compares predictions with the experimental data, etc.

A key word is "routine": Quantum field theory and its applications are nontrivial and one needs to learn many prerequisites before she gets to this level. On the other hand, it's a finite amount of knowledge and the technology has almost always the same character, independently of the particular model beyond the Standard Model that one proposes.

So this collaboration of 30 model builders proposes their code to systematize much of the work. With this help of the computer, lots of human work should be saved and the work should become faster and more effective. Many smart brains could be saved for some more creative work, especially some serious thinking about string theory. We sometimes talk about occupations that may be replaced with robots in a decade – most model builders may be among them.

I believe that experimental teams such as those at the LHC should join and/or develop their own programs that may basically produce the usual ATLAS/CMS papers about all the channels parameterized by a particular theory as outcomes generated by the same program run with different arguments or parameters. Don't the authors of the hundreds of ATLAS/CMS papers feel that they're doing boring work that would be better done by a computer?

At any rate, most of today's new hep-ph papers are all about GAMBIT. It's the nine papers [1], [5-7], [9-13].

You may download all the codes. It's weird that most of the archives are either 43 or 44 or 45 megabytes in size although they seem to have different content. The code is supposed to run on supercomputers such as Prometheus, but I think that Prometheus shouldn't be "absolutely required". In March, Physics World published a story about GAMBIT.

Under this avalanche of papers, it's easy to overlook a new paper by Nanopoulos, Li, and Maxin, who still seem excited about the $${\mathcal F}$$-$$SU(5)$$ models even though they had to raise the gluino mass to $$1.9$$-$$2.3\,{\rm TeV}$$.

## May 23, 2017

### Andrew Jaffe - Leaves on the Line

Not-quite hacked

This week, the New York Times, The Wall Street Journal and Twitter, along with several other news organizations, have all announced that they were attacked by (most likely) Chinese hackers.

I am not quite happy to join their ranks: for the last few months, the traffic on this blog has been vastly dominated by attempts to get into the various back-end scripts that run this site, either by direct password hacks or just denial-of-service attacks. In fact, I only noticed it because the hackers exceeded my bandwidth allowance by a factor of a few (and cost me a few hundred bucks in over-usage charges from my host in the process, unfortunately).

I’ve since attempted to block the attacks by denying access to the IP addresses which have been the most active (mostly from domains that look like 163data.com.cn, for what it’s worth). So, my apologies if any of this results in any problems for anyone else trying to access the blog.

### Andrew Jaffe - Leaves on the Line

JSONfeed

More technical stuff, but I’m trying to re-train myself to actually write on this blog, so here goes…

For no good reason other than it was easy, I have added a JSONfeed to this blog. It can be found at http://andrewjaffe.net/blog/feed.json, and accessed from the bottom of the right-hand sidebar if you’re actually reading this at andrewjaffe.net.

What does this mean? JSONfeed is an idea for a sort-of successor to something called RSS, which may stand for really simple syndication, a format for encapsulating the contents of a blog like this one so it can be indexed, consumed, and read in a variety of ways without explicitly going to my web page. RSS was created by developer, writer, and all-around web-and-software guru Dave Winer, who also arguably invented — and was certainly part of the creation of — blogs and podcasting. Five or ten years ago, so-called RSS readers were starting to become a common way to consume news online. NetNewsWire was my old favourite on the Mac, although its original versions by Brent Simmons were much better than the current incarnation by a different software company; I now use something called Reeder. But the most famous one was Google Reader, which Google discontinued in 2013, thereby killing off most of the RSS-reader ecosystem.

But RSS is not dead: RSS readers still exist, and it is still used to store and transfer information between web pages. Perhaps most importantly, it is the format behind subscriptions to podcasts, whether you get them through Apple or Android or almost anyone else.

But RSS is kind of clunky, because it’s built on something called XML, an ugly but readable format for structuring information in files (HTML, used for the web, with all of its < and > “tags”, is a close cousin). Nowadays, people use a simpler family of formats called JSON for many of the same purposes as XML, but it is quite a bit easier for humans to read and write, and (not coincidentally) quite a bit easier to create computer programs to read and write.

So, finally, two more web-and-software developers/gurus, Brent Simmons and Manton Reece, realised they could use JSON for the same purposes as RSS. Simmons is behind NetNewsWire and Reece’s most recent project is an “indie microblogging” platform (think Twitter without the giant company behind it), so they both have an interest in these things. And because JSON is so comparatively easy to use, there is already code that I could easily add to this blog so it would have its own JSONfeed. So I did it.
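To give a flavour of how little is involved, here is an illustrative Python sketch (not the code this blog actually uses) that builds a minimal feed following the jsonfeed.org version 1 layout; the feed title and URLs are taken from this post, the posts themselves are placeholders:

```python
import json

def make_feed(posts):
    """Build a minimal JSON Feed (version 1) from (url, title, text) tuples."""
    feed = {
        # Required top-level fields in the JSON Feed v1 spec:
        "version": "https://jsonfeed.org/version/1",
        "title": "Leaves on the Line",
        # Suggested fields identifying the blog and the feed itself:
        "home_page_url": "http://andrewjaffe.net/blog/",
        "feed_url": "http://andrewjaffe.net/blog/feed.json",
        "items": [
            {
                "id": url,          # "id" is the only required item field
                "url": url,
                "title": title,
                "content_text": text,
            }
            for (url, title, text) in posts
        ],
    }
    return json.dumps(feed, indent=2)
```

A reader fetches `feed.json`, parses it with any JSON library, and walks the `items` array — no XML parser required, which is rather the point.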

So it’s easy to create a JSONfeed. What there aren’t — so far — are any newsreaders like NetNewsWire or Reeder that can ingest them. (In fact, Maxime Vaillancourt apparently wrote a web-based reader in about an hour, but it may already be overloaded…). Still, looking forward to seeing what happens.

### Symmetrybreaking - Fermilab/SLAC

LHC swings back into action

Protons are colliding once again in the Large Hadron Collider.

This morning at CERN, operators nudged two high-energy beams of protons into a collision course inside the world’s largest and most energetic particle accelerator, the Large Hadron Collider. These first stable beams inside the LHC since the extended winter shutdown usher in another season of particle hunting.

The LHC’s 2017 run is scheduled to last until December 10. The improvements made during the winter break will ensure that scientists can continue to search for new physics and study rare subatomic phenomena. The machine exploits Albert Einstein’s principle that energy and matter are equivalent and enables physicists to transform ordinary protons into the rare massive particles that existed when our universe was still in its infancy.

“Every time the protons collide, it’s like panning for gold,” says Richard Ruiz, a theorist at Durham University. “That’s why we need so much data. It’s very rare that the LHC produces something interesting like a Higgs boson, the subatomic equivalent of a huge gold nugget. We need to find lots of these rare particles so that we can measure their properties and be confident in our results.”

During the LHC’s four-month winter shutdown, engineers replaced one of its main dipole magnets and carried out essential upgrades and maintenance work. Meanwhile, the LHC experiments installed new hardware and revamped their detectors. Over the last several weeks, scientists and engineers have been performing the final checks and preparations for the first “stable beams” collisions.

“There’s no switch for the LHC that instantly turns it on,” says Guy Crockford, an LHC operator. “It’s a long process, and even if it’s all working perfectly, we still need to check and calibrate everything. There’s a lot of power stored in the beam and it can easily damage the machine if we’re not careful.”

In preparation for data-taking, the LHC operations team first did a cold checkout of the circuits and systems without beam and then performed a series of dress rehearsals with only a handful of protons racing around the machine.

“We set up the machine with low intensity beams that are safe enough that we could relax the safety interlocks and make all the necessary tweaks and adjustments,” Crockford says. “We then deliberately made the proton beams unstable to check that all the loose particles were caught cleanly. It’s a long and painstaking process, but we need complete confidence in our settings before ramping up the beam intensity to levels that could easily do damage to the machine.”

The LHC started collisions for physics with only three proton bunches per beam. Over the course of the next month, the operations team will gradually increase the number of proton bunches until they have 2760 per beam. The higher proton intensity greatly increases the rate of collisions, enabling the experiments to collect valuable data at a much faster rate.

“We’re always trying to improve the machine and increase the number of collisions we deliver to the experiments,” Crockford says. “It’s a personal challenge to do a little better every year.”

### Emily Lakdawalla - The Planetary Society Blog

NASA's 2018 budget request is here, and we broke down the details
The White House's 2018 federal budget request includes $19.1 billion for NASA, which is a 3 percent drop from 2017. We broke down the details.

### Andrew Jaffe - Leaves on the Line

SDSS 1416+13B

It’s not that often that I can find a reason to write about both astrophysics and music — my obsessions, vocations and avocations — at the same time. But the recent release of Scott Walker’s (certainly weird, possibly wonderful) new record Bish Bosch has given me just such an excuse: Track 4 is a 21-minute opus of sorts, entitled “SDSS1416+13B (Zercon, A Flagpole Sitter)”.

The title seems a random collection of letters, numbers and words, but that’s not what it is: SDSS1416+13B is the (very slightly mangled) identification of an object in the Sloan Digital Sky Survey (SDSS) catalog — 1416+13B means that it is located at Right Ascension 14h16m and Declination 13° (actually, its full name is SDSS J141624.08+134826.7, which gives the location more precisely) and “B” denotes that it’s actually the second of two objects (the other one is unsurprisingly called “A”).

In fact it’s a pretty interesting object: it was actually discovered not by SDSS alone, but by cross-matching with another survey, the UK Infrared Deep Sky Survey (UKIDSS), and looking at the images by eye. It turns out that the two components are a binary system made up of two brown dwarfs — objects that aren’t massive enough to burn hydrogen via nuclear fusion, but are more massive than even the heaviest planets, often big enough to form at the centre of their own stellar systems, and heavy enough to have some nuclear reactions in their core. In fact, the UKIDSS survey has been one of the best ways to find such comparatively cool objects; my colleagues Daniel Mortlock and Steve Warren found one of the coolest known brown dwarfs in UKIDSS in 2007, using techniques very similar to those they also used to find the most distant quasar yet known, recounted by Daniel in a guest-post here.
Like that object, SDSS1416+13B is one of the coolest such objects ever found.

What does all this have to do with Scott Walker? I have no idea. Since he started singing as a member of the Walker Brothers in the 60s — and even more so since his 70s solo records — Walker has been known for his classical-sounding baritone, though with his mannered, massive vibrato, he always sounds a bit like a rocker’s caricature of a classical singer. I’ve always thought it was more force of personality than actual skill that drew people — especially here in the UK — to him. His latest, Bish Bosch, the third in a purported trilogy of records he’s made since resurfacing in the mid-1990s, veers between mannered art-songs and rock’n’roll, silences punctuated with electric guitars, fart-sounds and trumpets. The song “SDSS1416” itself is an (I assume intentionally funny?) screed, alternating sophomoric insults (my favourite is “don’t go to a mind reader, go to a palmist; I know you’ve got a palm”) with recitations of Roman numerals and, finally, the only link to observations of a brown dwarf I can find, “Infrared, infrared/ I could drop/ into the darkness.” Your guess is as good as mine. It’s compelling, but I can’t tell if that’s as an epic or as a train wreck.

### Andrew Jaffe - Leaves on the Line

Science as metaphor

In further pop-culture crossover news, I was pleased to see this paragraph in John Keane’s review of Alan Ryan’s “On Politics” in this weekend’s Financial Times:

> Ryan sees this period [the 1940s] as the point of triumph of liberal democracy against its Fascist and Stalinist opponents. Closer attention shows this decade was instead a moment of what physicists call dark energy: the universe of meaning of democracy underwent a dramatic expansion, in defiance of the cosmic gravity of contemporary events. The ideal of monitory democracy was born.

Not a bad metaphor.
Nice to see that the author, a professor of Politics from Sydney, is paying attention to the stuff that really matters.

### Andrew Jaffe - Leaves on the Line

Quantum debrief

A week ago, I finished my first time teaching our second-year course in quantum mechanics. After a bit of a taster in the first year, the class concentrates on the famous Schrödinger equation, which describes the properties of a particle under the influence of an external force. The simplest version of the equation is just

$i\hbar \frac{\partial\psi}{\partial t} = \hat{H}\psi.$

This relates the so-called wave function, ψ, to what we know about the external forces governing its motion, encoded in the Hamiltonian operator, Ĥ. The wave function gives the probability (technically, the probability amplitude) for getting a particular result for any measurement: its position, its velocity, its energy, etc. (See also this excellent public work by our department’s artist-in-residence.)

Over the course of the term, the class builds up the machinery to predict the properties of the hydrogen atom, which is the canonical real-world system for which we need quantum mechanics to make predictions. This is certainly a sensible endpoint for the 30 lectures. But it did somehow seem like a very old-fashioned way to teach the course. Even back in the 1980s when I first took a university quantum mechanics class, we learned things in a way more closely related to the way quantum mechanics is used by practicing physicists: the mathematical details of Hilbert spaces, path integrals, and Dirac notation.

Today, an up-to-date quantum course would likely start from the perspective of quantum information, distilling quantum mechanics down to its simplest constituents: qubits, systems with just two possible states (instead of the infinite possibilities usually described by the wave function). The interactions become less important, superseded by the information carried by those states.
Really, it should be thought of as a full year-long course, and indeed much of the good stuff comes in the second term when the students take “Applications of Quantum Mechanics”, in which they study those atoms in greater depth, learn about fermions and bosons, and ultimately understand the structure of the periodic table of elements. Later on, they can take courses in the mathematical foundations of quantum mechanics, and, yes, on quantum information, quantum field theory and on the application of quantum physics to much bigger objects in “solid-state physics”.

Despite these structural questions, I was pretty pleased with the course overall: the entire two-hundred-plus students take it at the beginning of their second year; thirty lectures, ten ungraded problem sheets and seven in-class problems called “classworks”. Still to come: a short test right after New Year’s and the final exam in June. Because it was my first time giving these lectures, and because it’s such an integral part of our teaching, I stuck to the same notes and problems as my recent predecessors (so many, many thanks to my colleagues Paul Dauncey and Danny Segal). Once the students got over my funny foreign accent, bad board handwriting, and worse jokes, I think I was able to get across the mathematics, the physical principles and, eventually, the underlying weirdness of quantum physics.

I kept to the standard Copenhagen Interpretation of quantum physics, in which we think of the aforementioned wavefunction as a real, physical thing, which evolves under that Schrödinger equation — except when we decide to make a measurement, at which point it undergoes what we call collapse, randomly and seemingly against causality: this was Einstein’s “spooky action at a distance”, which seemed to indicate nature playing dice with our Universe, in contrast to the purely deterministic physics of Newton and Einstein’s own relativity.
No one is satisfied with Copenhagen, although a more coherent replacement has yet to be found (I won’t enumerate the possibilities here, except to say that I find the proliferating multiverse of Everett’s Many-Worlds interpretation ontologically extravagant, and Chris Fuchs’ Quantum Bayesianism compelling but incomplete). I am looking forward to getting this year’s SOLE results to find out for sure, but I think the students learned something, or at least enjoyed trying to, although the applause at the end of each lecture seemed somewhat tinged with British irony.

### Tommaso Dorigo - Scientificblogging

Physics-Inspired Artwork In Venice 1: Sub-Lime

This is the first of a series of posts that will publish the results of artistic work by high-school students of three schools in Venice, who participate in a contest and exposition connected to the initiative "Art and Science across Italy", an initiative of the network CREATIONS, funded by the Horizon 2020 programme

read more

## May 22, 2017

### CERN Bulletin

47th Relay Race!

On Thursday June 1st at 12.15, Fabiola Gianotti, our Director-General, will fire the starting shot for the 47th Relay Race. This Race is above all a festive CERN event, open to runners and walkers, as well as the people cheering them on throughout the race, and those who wish to participate in the various activities organised between 11.30 and 14.30 out on the lawn in front of Restaurant 1.

In order to make this sports event accessible to everyone, our Director-General will allow for flexible lunch hours on the day, applicable to all members of the personnel. An alert for the closure of roads will be sent out on the day of the event.

The Staff Association and the CERN Running Club thank you in advance for your participation and your continued support throughout the years. This year the CERN Running Club has announced the participation of locally and internationally renowned runners, no less!
A bit over a week from the Relay Race of 1st June, the number of teams is going up nicely (already almost 40). Among them, we will have three teams this year from our main partner Berthie Sport, and I can tell you that they are not coming just for fun! The ladies’ team has been built from the best runners in the Pays de Gex, including Laetitia Matlet, winner of the Challenge des courses à pied du Pays de Gex, and Isabelle Marchand, winner of the Foulées de Crozet. But the most impressive will definitely be the men’s team, with the presence of several top-level runners:

• Tristan Le Lay, triathlete of European level, 4:15 on half ironman
• Pierre Baque, winner of the SaintéLyon Relay, 1:10 on half marathon
• Ludovic Pommeret, winner of UTMB, one of the top ultra-trail runners in the world!

You can start placing your bets on the new race record! :)

### CERN Bulletin

Concert

Small Capella

Friday 2 June at 18.00
CERN Meyrin, Main Auditorium
Free admission

The Moscow chamber choir Small Capella arose within the walls of Children‘s musical school No. 10, and evolved over the years into a mixed choir of people of various ages and occupations, open to anyone fond of choral music. The repertoire includes Russian and foreign classical music, sacred music, folk songs, and contemporary choral compositions. The concert will include solo vocal and piano pieces.

### CERN Bulletin

Offers for our members

Summer is coming, enjoy our offers for the aquatic parks!

Walibi:
Tickets "Zone terrestre": 24 € instead of 30 €.
Access to Aqualibi: 5 € instead of 6 € on presentation of your SA member ticket.
Free for children under 100 cm.
Car park free.

* * * * *

Aquaparc:
Day ticket:
Children: 33 CHF instead of 39 CHF
Adults: 33 CHF instead of 49 CHF
Bonus! Free for children under 5.

### CERN Bulletin

29th June 2017 – Ordinary General Assembly of the Staff Association!
In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA). This year the OGA will be held on Thursday, 29 June 2017, from 15.30 to 17.30 in the Main Auditorium, Meyrin (500-1-001). During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted for approval to the members. This is the occasion to get a global view of the activities of the SA and its management, and an opportunity to express your opinion, particularly by taking part in votes. Other items are listed on the agenda, as proposed by the Staff Council. # Who can vote? Ordinary members (MPE) of the SA can take part in all votes. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them. # Who can give their opinion, and how? The Ordinary General Assembly is also the opportunity for members of the SA to express themselves through the addition of discussion points to the agenda. For these points to be put to a vote, the request must be submitted in writing to the President of the Staff Association at least 20 days before the General Assembly, and be supported by at least 20 members of the SA. Additionally, members of the SA can ask the OGA to discuss a specific point once the agenda has been exhausted, but no decision shall be taken based on these discussions. # Can we contest the decisions? Any decision taken by the Ordinary General Assembly can be contested through a referendum, as defined in the Statutes of the Staff Association. Do not hesitate: take part in your Ordinary General Assembly on 29 June 2017. Come and make your voice count, and seize this occasion to talk with your staff delegates! Statutes of the CERN Staff Association.
### CERN Bulletin GAC-EPA The GAC organises drop-in sessions with individual interviews, held on the last Tuesday of every month, except in June, July and December. The next session will be held on: Tuesday 30 May, from 1.30 pm to 4.00 pm Staff Association meeting room The sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association. Information: http://gac-epa.org/ Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php ## May 21, 2017 ### Tommaso Dorigo - Scientificblogging Europe In My Region 2017 A couple of weeks ago the European Commission launched the campaign "Europe in my region 2017", an initiative aimed at informing the general public about the projects funded by the European Community in their area of residence or activity. There are open day events scheduled almost everywhere, a blog contest, a photo contest, and other initiatives of interest. read more ## May 20, 2017 ### Geraint Lewis - Cosmic Horizons The Chick Peck Problem So, my plans for my blog through 2017 have not quite gone to plan, but things have been horrendously busy, and it seems like the rest of the year is likely to continue this way. But I did get a chance to do some recreational mathematics, spurred on by a story in the news. It's to do with a problem presented at the 2017 Raytheon MATHCOUNTS® National Competition and reported in the New York Times. Here's the question as presented in the press: Kudos to 13 year old Texan, Luke Robitaille, who got this right. With a little thought, you should be able to realise that the answer is 25. For any particular chick, there are four potential outcomes, each with equal probability.
Either the chick is • pecked from the left • pecked from the right • pecked from left and right • not pecked at all Only one of these options results in the chick being unpecked, and so the expected number of unpecked chicks in a circle of 100 is one quarter of this number, or 25. ABC journalist and presenter extraordinaire, Leigh Sales, tweeted But it's the kind of maths that makes me ask more questions. While 25 is the expected number of unpecked chicks, what is the distribution of unpecked chicks? What I mean by this is that, because they peck left or right at random, there might be 24 unpecked chicks for one group of a hundred chicks, 25 for the next, and 23 for the next. So, the question is, given a large number of 100 chick experiments, what's the distribution of unpecked chicks? I tried to find an analytic solution, but my brain is old, and so I resorted to good old numerical methods. Generate lots of experiments on a computer, and see what the outcomes look like. But there is an issue that we have to think about, namely the question of how many different configurations of chicks pecking left and right there can be. Well, left and right are two options, and for 100 chicks, the number of possible left and right configurations is $2^{100}$. That's a lot of possibilities! How are we going to sample these? Well, if we treat a 0 as "chick pecks to the left", and 1 as "chick pecks to the right", then choosing a random integer between 0 and $2^{100}-1$ and representing it as a binary number gives a random sample of the pecking order (pecking order, get it!). As an example, all chicks pecking to the left would be 100 0s in binary, whereas all the chicks pecking to the right would be 100 1s in binary. Let's try a randomly drawn integer in the range. We get (in base 10) 333483444300232384702347234.
In binary this is 0000000000010001001111011001110111011011110101101010111100001100011100100110100100000111011111100010 So, the first bunch of chicks peck to the left, then we have a mix of right and left pecks. But how many of these chicks are unpecked (remembering the original question)? Well, any particular chick will be unpecked if the chick to its left pecks to the left, and the chick to its right pecks to the right. So, we're looking for sequences of '001' and '011', with the middle digit representing the chick we are interested in. So, we can put this into a little python code (had to learn it, all the cool kids are using it these days) and this is what I have. There is a little extra in there to account for the fact that the chicks are sitting in a circle, but as you can see, the code is quite compact. OK. Let's run it for the 100 chicks in the question. What do we get? Yay! The unpecked counts peak at 25, but there is a nice distribution (which, I am sure, must have an analytic solution somewhere). But given the simplicity of the code, I can easily change the number of chicks. What about 10 chicks in a circle? Hmmm. Interesting. What about 1000 chicks? And 33 chicks? The most likely number of unpecked chicks is 8, but again, a nice distribution. Now, you might be sitting there wondering why the heck I am doing this? Well, firstly, because it is fun! And interesting! It's a question, and it is fun to find the answer. Secondly, it is not obvious what the distribution would be, how complex it would be to derive, or even if it exists, and so a numerical approach allows us to find an answer. Finally, I can easily generalise this to questions like "what if left pecks are more likely than right pecks by a factor of two; what would the distribution be like?" It would just take a couple of lines of code and I would have an answer.
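The post's own code appears only as an image; here is a minimal Python sketch along the same lines (the function names are mine, not the author's). A chick is unpecked exactly when its left neighbour pecks left and its right neighbour pecks right, i.e. the '001'/'011' patterns described above:

```python
import random
from collections import Counter

def unpecked(pecks):
    """Count unpecked chicks in a circle, where pecks[i] is 0 if
    chick i pecks to its left and 1 if it pecks to its right."""
    n = len(pecks)
    count = 0
    for i in range(n):
        # Chick i is unpecked if its left neighbour pecks left (away from
        # it) and its right neighbour pecks right (away from it) -- the
        # '001' and '011' patterns, with wrap-around for the circle.
        if pecks[(i - 1) % n] == 0 and pecks[(i + 1) % n] == 1:
            count += 1
    return count

def peck_distribution(n_chicks=100, trials=100_000):
    """Histogram of the number of unpecked chicks over many random circles."""
    tally = Counter()
    for _ in range(trials):
        circle = [random.randint(0, 1) for _ in range(n_chicks)]
        tally[unpecked(circle)] += 1
    return tally
```

Histogramming `peck_distribution(100)` peaks at the expected value of 25, and changing `n_chicks` reproduces the 10-, 33- and 1000-chick experiments; biasing the 0/1 draw is the "couple of lines" generalisation mentioned at the end of the post.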
And if you can't see how such curiosity-led examinations are integral to science, then you don't know what science is. ## May 19, 2017 ### Emily Lakdawalla - The Planetary Society Blog Here's what you need to know about the Electron rocket, which is set to launch from New Zealand Rocket Lab's Electron, a light-lift launcher for small satellites, is ready to make its debut test flight from a peninsula in New Zealand. ### Emily Lakdawalla - The Planetary Society Blog Orbital ATK discusses Antares rocket's future, confirms new NASA cargo mission Company officials say they have no plans to retire Antares, which has secured its first mission order under the second round of NASA's commercial cargo flights, known as CRS-2. ## May 18, 2017 ### ZapperZ - Physics and Physicists "Difficult" and "Easy" Are Undefined This post comes about because in an online forum, someone asked if it is "easier" to heat something than to cool it down. The issue for me here isn't the subject of the question, which is heating and cooling an object, but rather that the person asking the question thinks that the "measure" here is "easiness". I'm sure this person, and many others, didn't even think twice to realize that this is a rather vague and ambiguous question. After all, it is common to ask if something is easy or difficult. Yet, if you think about it carefully, this is really asking for something that is undefined. First of all, the measure of something being "easy" or "difficult" is itself subjective. What is easy to some can easily be difficult to others (see what I did there?). Meryl Streep can easily memorize pages and pages of dialog, something that I find difficult to do because I am awful at memorization. And yet, I'm sure I can solve many types of differential equations that she would find difficult. So already, there is a degree of subjectiveness to this.
But what is more important here is that, in science, for something to be considered a valid description, it must be QUANTIFIABLE. In other words, a number associated with that description can be measured or obtained. Let's apply this to an example. I can ask: how difficult or easy is it to stop a 100 kg moving mass? So, what am I actually asking here when I ask if it is "easy" or "difficult"? It is vague. However, I can specify that if I use less force to make the object come to a complete stop over a specific distance, then this is EASIER than if I have to use a larger force to do the same thing. Now THAT is more well-defined, because I am using "easy" or "difficult" as a measure of the amount of force I have to apply. In fact, I can omit the words "easy" and "difficult" entirely, and simply ask for the force needed to stop the object. That is a question that is well-defined and quantifiable, such that a quantitative comparison can be made. Let's come back to the original question that was the impetus of this post. This person asked if it is easier to heat things than to cool things. So the question now is, what does it mean for it to be "easy" to heat or cool things? One measure can be: for a constant rate of heat transfer, how long does it take to heat or cool the object by the same change in temperature? In this case, the time taken to heat or cool the object by the same amount of temperature change is the measure of "easy" or "difficult". One can compare the time taken to heat the object by, say, 5 Celsius, versus the time taken to cool the object by the same temperature change. Now this is a more well-defined question. I bring this up because I often see many ordinary conversations, discussions, news reports, etc. in which the statements and descriptions made appear to be clear and to make sense, when in reality many of them are really empty statements that are ambiguous, and sometimes meaningless.
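The stopping-mass example can be made fully quantitative with the work-energy theorem: a constant force $F$ applied over a distance $d$ removes kinetic energy $\frac{1}{2}mv^2$, so $F = mv^2/2d$. A small sketch (the function name and the example numbers are mine, chosen for illustration):

```python
def stopping_force(mass_kg, speed_m_s, distance_m):
    """Constant force (in newtons) needed to bring a moving mass to rest
    over a given distance, via the work-energy theorem: F*d = (1/2)*m*v**2."""
    return 0.5 * mass_kg * speed_m_s ** 2 / distance_m

# "Easier" now has a number attached: stopping the 100 kg mass in half
# the distance requires exactly twice the force.
force_5m = stopping_force(100, 10, 5.0)
force_2p5m = stopping_force(100, 10, 2.5)
```

With the question phrased this way, "is it easier?" becomes "is the force smaller?", which any two people can agree on.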
Describing something as easy or difficult appears to be a "simple" and clear statement or description, but if you think about it carefully, it isn't! Ask yourself if the criteria to classify something as easy, easier, difficult, more difficult, etc. are plainly evident and universally agreed upon. Is the statement that "such and such undermines so-and-so" actually clear on what it is saying? What exactly does "undermine" mean in this case, and what is the measure of it? Science/Physics education has the ability to impart this kind of analytical skill, and this kind of thinking, to students, especially those who are not specializing in STEM subjects. In science, the nature of the question we ask can often be as important as the answers that we seek. This is because unless we clearly define what it is that we are asking, we can't know where to look for the answers. This is a lesson that many people in the public need to learn and to be aware of, especially in deciphering many of the things we see in the media right now. It is why science education is invaluable to everyone. Zz. ### Clifford V. Johnson - Asymptotia Writing Hat! Well, yesterday evening and today I've got an entirely different hat - SF short story writer! First let me apologize for faking it to all my friends reading this who are proper short story writers, with membership cards and so on. Let me go on to explain: I don't think I'm allowed to tell you the full details yet, but the current editor of an annual science fiction anthology got in touch back in February and told me about an idea they wanted to try out. They normally have their usual batch of excellent science fiction stories (from various writers) in the book, ending with a survey of some visual material such as classic SF covers, etc., but this year they decided to do something different. Instead of the visual survey thing, why not have one of the stories be visual?
In other words, a graphic novella (I suppose that's what you'd call it). After giving them several opportunities to correct their obvious error, which went a bit like this: [...] Click to continue reading this post The post Writing Hat! appeared first on Asymptotia. ## May 17, 2017 ### Tommaso Dorigo - Scientificblogging Guest Post: Dorigo’s Anomaly! And The Social Psychology Of Professional Discourse In Physics, By Alex Durig Dr. Alex Durig (see picture) is a professional freelance writer, with a PhD in social psychology from Indiana University (1992). He has authored seven books in his specialization of perception and logic. He claims to have experienced great frustration resolving his experience of perception and logic when it comes to physics, but he says he no longer feels crazy, ever since Anomaly! was published. So I am offering this space to him to hear what he has to say about that... ------ On Dorigo's Anomaly! and the Social Psychology of Professional Discourse in Physics, by Alex Durig read more ## May 16, 2017 ### Symmetrybreaking - Fermilab/SLAC The facts and nothing but the facts At a recent workshop on blind analysis, researchers discussed how to keep their expectations out of their results. Scientific experiments are designed to determine facts about our world. But in complicated analyses, there’s a risk that researchers will unintentionally skew their results to match what they were expecting to find. To reduce or eliminate this potential bias, scientists apply a method known as “blind analysis.” Blind studies are probably best known from their use in clinical drug trials, in which patients are kept in the dark about—or blind to—whether they’re receiving an actual drug or a placebo. This approach helps researchers judge whether their results stem from the treatment itself or from the patients’ belief that they are receiving it. Particle physicists and astrophysicists do blind studies, too. 
The approach is particularly valuable when scientists search for extremely small effects hidden among background noise that point to the existence of something new, not accounted for in the current model. Examples include the much-publicized discoveries of the Higgs boson by experiments at CERN’s Large Hadron Collider and of gravitational waves by the Advanced LIGO detector. “Scientific analyses are iterative processes, in which we make a series of small adjustments to theoretical models until the models accurately describe the experimental data,” says Elisabeth Krause, a postdoc at the Kavli Institute for Particle Astrophysics and Cosmology, which is jointly operated by Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “At each step of an analysis, there is the danger that prior knowledge guides the way we make adjustments. Blind analyses help us make independent and better decisions.” Krause was the main organizer of a recent workshop at KIPAC that looked into how blind analyses could be incorporated into next-generation astronomical surveys that aim to determine more precisely than ever what the universe is made of and how its components have driven cosmic evolution. ### Black boxes and salt One outcome of the workshop was a finding that there is no one-size-fits-all approach, says KIPAC postdoc Kyle Story, one of the event organizers. “Blind analyses need to be designed individually for each experiment.” The way the blinding is done needs to leave researchers with enough information to allow a meaningful analysis, and it depends on the type of data coming out of a specific experiment. A common approach is to base the analysis on only some of the data, excluding the part in which an anomaly is thought to be hiding. The excluded data is said to be in a “black box” or “hidden signal box.” Take the search for the Higgs boson. 
Using data collected with the Large Hadron Collider until the end of 2011, researchers saw hints of a bump as a potential sign of a new particle with a mass of about 125 gigaelectronvolts. So when they looked at new data, they deliberately quarantined the mass range around this bump and focused on the remaining data instead. They used that data to make sure they were working with a sufficiently accurate model. Then they “opened the box” and applied that same model to the untouched region. The bump turned out to be the long-sought Higgs particle. That worked well for the Higgs researchers. However, as scientists involved with the Large Underground Xenon experiment reported at the workshop, the “black box” method of blind analysis can cause problems if the data you’re expressly not looking at contains rare events crucial to figuring out your model in the first place. LUX has recently completed one of the world’s most sensitive searches for WIMPs—hypothetical particles of dark matter, an invisible form of matter that is five times more prevalent than regular matter. LUX scientists have done a lot of work to guard LUX against background particles—building the detector in a cleanroom, filling it with thoroughly purified liquid, surrounding it with shielding and installing it under a mile of rock. But a few stray particles make it through nonetheless, and the scientists need to look at all of their data to find and eliminate them. For that reason, LUX researchers chose a different blinding approach for their analyses. Instead of using a “black box,” they use a process called “salting.” LUX scientists not involved in the most recent LUX analysis added fake events to the data—simulated signals that just look like real ones. Just like the patients in a blind drug trial, the LUX scientists didn’t know whether they were analyzing real or placebo data. Once they completed their analysis, the scientists that did the “salting” revealed which events were false. 
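In outline, salting amounts to mixing simulated events into the data stream while a hidden key records which events are real. A toy sketch of the idea (my own illustration under that description, not the actual LUX analysis code):

```python
import random

def salt(real_events, fake_events, seed=42):
    """Blind a dataset by 'salting': mix simulated events into the real
    ones. Returns the shuffled events (what the analysts see) and a hidden
    key marking which events are real, revealed only at unblinding."""
    tagged = [(e, True) for e in real_events] + [(e, False) for e in fake_events]
    random.Random(seed).shuffle(tagged)
    events = [e for e, _ in tagged]            # visible to analysts
    key = [is_real for _, is_real in tagged]   # kept secret by the salters
    return events, key

def unblind(events, key):
    """Once the analysis is frozen, discard the salted (fake) events."""
    return [e for e, is_real in zip(events, key) if is_real]
```

The essential point is the separation of roles: the people holding `key` are not the people running the analysis, so expectations cannot steer the result.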
A similar technique was used by LIGO scientists, who eventually made the first detection of extremely tiny ripples in space-time called gravitational waves. ### High-stakes astronomical surveys The Blind Analysis workshop at KIPAC focused on future sky surveys that will make unprecedented measurements of dark energy and the Cosmic Microwave Background—observations that will help cosmologists better understand the evolution of our universe. Dark energy is thought to be a force that is causing the universe to expand faster and faster as time goes by. The CMB is a faint microwave glow spread out over the entire sky. It is the oldest light in the universe, left over from the time the cosmos was only 380,000 years old. To shed light on the mysterious properties of dark energy, the Dark Energy Science Collaboration is preparing to use data from the Large Synoptic Survey Telescope, which is under construction in Chile. With its unique 3.2-gigapixel camera, LSST will image billions of galaxies, the distribution of which is thought to be strongly influenced by dark energy. “Blinding will help us look at the properties of galaxies picked for this analysis independent of the well-known cosmological implications of preceding studies,” DESC member Krause says. One way the collaboration plans on blinding its members to this prior knowledge is to distort the images of galaxies before they enter the analysis pipeline. Not everyone in the scientific community is convinced that blinding is necessary. Blind analyses are more complicated to design than non-blind analyses and take more time to complete. Some scientists participating in blind analyses inevitably spend time looking at fake data, which can feel like a waste. Yet others strongly advocate for going blind. KIPAC researcher Aaron Roodman, a particle-physicist-turned-astrophysicist, has been using blinding methods for the past 20 years. 
“Blind analyses have already become pretty standard in the particle physics world,” he says. “They’ll be also crucial for taking bias out of next-generation cosmological surveys, particularly when the stakes are high. We’ll only build one LSST, for example, to provide us with unprecedented views of the sky.” ### John Baez - Azimuth The Dodecahedron, the Icosahedron and E8 Here you can see the slides of a talk I’m giving: The dodecahedron, the icosahedron and E8, Annual General Meeting of the Hong Kong Mathematical Society, Hong Kong University of Science and Technology. It’ll take place at 10:50 am on Saturday May 20th in Lecture Theatre G. You can see the program for the whole meeting here. The slides are in the form of webpages, and you can see references and some other information tucked away at the bottom of each page. In preparing this talk I learned more about the geometric McKay correspondence, which is a correspondence between the simply-laced Dynkin diagrams (also known as ADE Dynkin diagrams) and the finite subgroups of $\mathrm{SU}(2).$ There are different ways to get your hands on this correspondence, but the geometric way is to resolve the singularity in $\mathbb{C}^2/\Gamma$ where $\Gamma \subset \mathrm{SU}(2)$ is such a finite subgroup. The variety $\mathbb{C}^2/\Gamma$ has a singularity at the origin–or more precisely, the point coming from the origin in $\mathbb{C}^2.$ To make singularities go away, we ‘resolve’ them. And when you take the ‘minimal resolution’ of this variety (a concept I explain here), you get a smooth variety $S$ with a map $\pi \colon S \to \mathbb{C}^2/\Gamma$ which is one-to-one except at the origin. The points that map to the origin lie on a bunch of Riemann spheres. There’s one of these spheres for each dot in some Dynkin diagram—and two of these spheres intersect iff their two dots are connected by an edge!
In particular, if $\Gamma$ is the double cover of the rotational symmetry group of the dodecahedron, the Dynkin diagram we get this way is $E_8$: The basic reason $\mathrm{E}_8$ is connected to the icosahedron is that the icosahedral group is generated by rotations of orders 2, 3 and 5 while the $\mathrm{E}_8$ Dynkin diagram has ‘legs’ of length 2, 3, and 5 if you count right: In general, whenever you have a triple of natural numbers $a,b,c$ obeying $\displaystyle{ \frac{1}{a} + \frac{1}{b} + \frac{1}{c} > 1}$ you get a finite subgroup of $\mathrm{SU}(2)$ that contains rotations of orders $a,b,c,$ and a simply-laced Dynkin diagram with legs of length $a,b,c.$ The three most exciting cases are: $(a,b,c) = (2,3,3)$: the tetrahedron, and $E_6,$ $(a,b,c) = (2,3,4)$: the octahedron, and $E_7,$ $(a,b,c) = (2,3,5)$: the icosahedron, and $E_8.$ But the puzzle is this: why does resolving the singular variety $\mathbb{C}^2/\Gamma$ give a smooth variety with a bunch of copies of the Riemann sphere $\mathbb{C}\mathrm{P}^1$ sitting over the singular point at the origin, with these copies intersecting in a pattern given by a Dynkin diagram? It turns out the best explanation is in here: • Klaus Lamotke, Regular Solids and Isolated Singularities, Vieweg & Sohn, Braunschweig, 1986. In a nutshell, you need to start by blowing up $\mathbb{C}^2$ at the origin, getting a space $X$ containing a copy of $\mathbb{C}\mathrm{P}^1$ on which $\Gamma$ acts. The space $X/\Gamma$ has further singularities coming from the rotations of orders $a, b$ and $c$ in $\Gamma$. When you resolve these, you get more copies of $\mathbb{C}\mathrm{P}^1,$ which intersect in the pattern given by a Dynkin diagram with legs of length $a,b$ and $c.$ I would like to understand this better, and more vividly. I want a really clear understanding of the minimal resolution $S.$ For this I should keep rereading Lamotke’s book, and doing more calculations.
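The inequality $\frac{1}{a} + \frac{1}{b} + \frac{1}{c} > 1$ above can be checked by brute force. A quick sketch (restricting to $a \le b \le c$ with $a \ge 2$, as in the three exceptional cases listed; the function name is mine):

```python
from fractions import Fraction

def ade_triples(bound=10):
    """Ordered triples a <= b <= c (with a >= 2) satisfying
    1/a + 1/b + 1/c > 1, the condition from the post that yields a finite
    subgroup of SU(2) and a simply-laced Dynkin diagram with those legs."""
    return [(a, b, c)
            for a in range(2, bound)
            for b in range(a, bound)
            for c in range(b, bound)
            # exact rational arithmetic avoids floating-point edge cases
            if Fraction(1, a) + Fraction(1, b) + Fraction(1, c) > 1]
```

Running this shows that besides the infinite family $(2,2,n)$, only $(2,3,3)$, $(2,3,4)$ and $(2,3,5)$ survive; $(2,3,6)$ already gives exactly $1$ and so fails the strict inequality.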
I do, however, have a nice vivid picture of the singular space $\mathbb{C}^2/\Gamma.$ For that, read my talk! I’m hoping this will lead, someday, to an equally appealing picture of its minimal resolution. ## May 15, 2017 ### Clifford V. Johnson - Asymptotia Bolt those Engines Down… I've a train to catch and so I did not have time to think of a better title. Sorry. Anyway, for those of you who follow the more technical side of what I do, above is a screenshot of the abstract of a paper to appear tomorrow/today on the arXiv. I'll try to find some time to say more about it, but I can't promise anything since I've got to finish writing another paper today (on the train ride), and then turn myself away from all this for a little while to work on some other things. The abstract should be [...] Click to continue reading this post The post Bolt those Engines Down… appeared first on Asymptotia. ## May 12, 2017 ### The n-Category Cafe Unboxing Algebraic Theories of Generalised Arities Guest post by José Siqueira We began our journey in the second Kan Extension Seminar with a discussion of the classical concept of Lawvere theory, facilitated by Evangelia. Together with the concept of a model, this technology allows one to encapsulate the behaviour of algebraic structures defined by collections of $n$-ary operations subject to axioms (such as the ever-popular groups and rings) in a functorial setting, with the added flexibility of easily transferring such structures to arbitrary underlying categories $\mathcal{C}$ with finite products (rather than sticking with $\mathbf{Set}$), naturally leading to important notions such as that of a Lie group.
Throughout the seminar, many features of Lawvere theories and connections to other concepts were unearthed and natural questions were addressed — notably for today’s post, we have established a correspondence between Lawvere theories and finitary monads in $\mathbf{Set}$ and discussed the notion of operad, how things go in the enriched context and what changes if you tweak the definitions to allow for more general kinds of limit. We now conclude this iteration of the seminar by bringing to the table “Monads with arities and their associated theories”, by Clemens Berger, Paul-André Melliès and Mark Weber, which answers the (perhaps last) definitional “what-if”: what goes on if you allow for operations of more general arities. At this point I would like to thank Alexander Campbell, Brendan Fong and Emily Riehl for the amazing organization and support of this seminar, as well as my fellow colleagues, whose posts, presentations and comments drafted a more user-friendly map to traverse this subject. #### Allowing general arities Recall that a Lawvere theory can be defined as a pair $(I,L)$, where $L$ is a small category with finite coproducts and $I\colon \aleph_0 \to L$ is an identity-on-objects finite-coproduct-preserving functor.
To this data we associate a nerve functor $\nu_{\aleph_0}\colon \mathbf{Set} \to \mathrm{PSh}(\aleph_0)$, which takes a set $X$ to its $\aleph_0$-nerve $\nu_{\aleph_0}(X)\colon \aleph_0^{\mathrm{op}} \to \mathbf{Set}$, the presheaf $\mathbf{Set}(i_{\aleph_0}(-), X)$ — the $\aleph_0$-nerve of a set $X$ thus takes a finite cardinal $n$ to $X^n$, up to isomorphism. It is easy to check that $\nu_{\aleph_0}$ is faithful, but it is also full, with $\alpha \cong \nu_{\aleph_0}(\alpha_1)$ for each natural transformation $\alpha\colon \nu_{\aleph_0}(X) \to \nu_{\aleph_0}(X')$, seeing $\alpha_1$ as a function $X \to X'$. This allows us to regard sets as presheaves over the small category $\aleph_0$, and as $\nu_{\aleph_0}(X)([n]) = \mathbf{Set}([n], X) \cong X^n$, the $\aleph_0$-nerves can be used to encode all possible $n$-ary operations on sets. To capture this behaviour of $\aleph_0$, we are inclined to make the following definition: Definition. Let $\mathcal{C}$ be a category and $\mathcal{A}$ be a full small subcategory of $\mathcal{C}$.
We say $\mathcal{A}$ is a dense generator of $\mathcal{C}$ if its associated nerve functor $\nu_{\mathcal{A}}\colon \mathcal{C} \to \mathrm{PSh}(\mathcal{A})$ is fully faithful, where $\nu_{\mathcal{A}}(X) = \mathcal{C}(\imath_{\mathcal{A}}(-), X)$ for each $X \in \mathcal{C}$. The idea is that we can replace $\mathbf{Set}$ and $\aleph_0$ in the original definition of Lawvere theory by a category $\mathcal{C}$ with a dense generator $\mathcal{A}$. This allows us to have operations with arities more diverse than simply finite cardinals, while still retaining “good behaviour” — if we think about the dense generator as giving the “allowed arities”, we end up being able to extend all the previous concepts and make the following analogies: We’ll now discuss each generalised concept and important/useful properties. If $(I,L)$ is a Lawvere theory, the restriction functor $I^{\ast}\colon \mathrm{PSh}(L) \to \mathrm{PSh}(\aleph_0)$ induces a monad $I^{\ast} I_!$, where $I_!$ is left Kan extension along $I$. This monad preserves the essential image of the nerve functor $\nu_{\aleph_0}$, and in fact this condition reduces to preservation of coproducts by $I$ (refer to 3.5 in the paper for further details).
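The $\aleph_0$-nerve described above can be made concrete for finite sets, where $\mathbf{Set}([n], X) \cong X^n$ is just the set of $n$-tuples. A toy Python illustration (my own, only to make the counting tangible):

```python
from itertools import product

def nerve(X, n):
    """Evaluate the aleph_0-nerve of a finite set X at the cardinal n:
    the set of functions [n] -> X, i.e. n-tuples of elements of X,
    realising Set([n], X) ~ X^n."""
    return list(product(X, repeat=n))
```

So `len(nerve(X, n))` is `len(X) ** n`, and `nerve(X, n)` is exactly the domain on which any $n$-ary operation on $X$ is defined.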
If $M$ is a model of $L^{\mathrm{op}}$ on $\mathbf{Set}$ in the usual sense (i.e. $M\colon L^{\mathrm{op}} \to \mathbf{Set}$ preserves finite products), we can see that its restriction along $I$ is isomorphic to the $\aleph_0$-nerve of $MI([1])$ by arguing that $$(I^{\ast} M)[n] = MI[n] = M \underbrace{\Big(\coprod_n I[1]\Big)}_{\text{in } L} = M \underbrace{\Big(\prod_n I[1]\Big)}_{\text{in } L^{\mathrm{op}}} \cong \prod_n MI[1] \cong MI[1]^n \cong \nu_{\aleph_0}(MI[1])[n],$$ and so we may want to define: Definition. Let $\mathcal{C}$ be a category with a dense generator $\mathcal{A}$. A theory with arities $\mathcal{A}$ on $\mathcal{C}$ is a pair $(\Theta, j)$, where $j\colon \mathcal{A} \to \Theta$ is a bijective-on-objects functor such that the induced monad $j^{\ast} j_!$ on $\mathrm{PSh}(\mathcal{A})$ preserves the essential image of the associated nerve functor $\nu_{\mathcal{A}}$.
A $\Theta$-model is a presheaf on $\Theta$ whose restriction along $j$ is isomorphic to some $\mathcal{A}$-nerve. Again, for $\mathcal{A} = \aleph_0$, this requirement on models says a $\Theta$-model $M$ restricts to powers of some object: $I^{\ast} M(-) = MI(-) \cong X^{|-|}$ for some set $X$, the outcome we wanted for models of Lawvere theories. A morphism of models is still just a natural transformation between them as presheaves, and a morphism of theories $(\Theta_1, j_1) \to (\Theta_2, j_2)$ is a functor $\theta \colon \Theta_1 \to \Theta_2$ that intertwines the arity functors, i.e. $j_2 = \theta j_1$. We'll write $Mod(\Theta)$ for the full subcategory of $PSh(\Theta)$ consisting of the models of $\Theta$, and $Th(\mathcal{C}, \mathcal{A})$ for the category of theories with arities $\mathcal{A}$ on $\mathcal{C}$. We aim to prove a result establishing an equivalence between $Th(\mathcal{C}, \mathcal{A})$ and some category of monads, mirroring the situation between Lawvere theories and finitary monads on $\mathbf{Set}$.
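For instance (a sketch in the classical case, using the conventions above), if $(\Theta, j)$ comes from the Lawvere theory of monoids, a $\Theta$-model amounts to a monoid:

```latex
% (\Theta, j): the theory of monoids; M: a \Theta-model.
% Restriction along j forces M[n] \cong X^n for a single set X.
% The multiplication arrow w : [1] \to [2] in \Theta is sent by the
% presheaf M to a binary operation
M(w) \colon M[2] \cong X^{2} \longrightarrow M[1] \cong X,
% and the unit arrow e : [1] \to [0] to an element
M(e) \colon M[0] \cong X^{0} \cong 1 \longrightarrow X,
% with associativity and unit laws expressed by equations between
% arrows of \Theta.
```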
#### Dense generators and nerves

Having a dense generator is desirable because we can then mimic the following situation. Recall that if $\mathcal{D}$ is small and $F \colon \mathcal{D} \to \mathbf{Set}$ is a functor, then we can form a diagram of shape $(\ast \downarrow F)^{op}$ over $[\mathcal{D}, \mathbf{Set}]$ by composing the opposite of the natural projection functor $(\ast \downarrow F) \to \mathcal{D}$ with the Yoneda embedding. We may then consider the cocone

$$\mu = \big(\mu_{(d,x)} = \mu_x \colon \mathcal{D}(d,-) \to F \mid (d,x) \in (\ast \downarrow F)^{op}\big),$$

where $\mu_x$ is the natural transformation corresponding to $x \in F(d)$ via the Yoneda lemma, and find that it is actually a colimit, canonically expressing $F$ as a colimit of representable functors. If you are so inclined, you might want to look at this as the coend identity

$$F(-) = \int^{d \in \mathcal{D}} F(d) \times \mathcal{D}(-,d)$$

when $F$ is a presheaf on $\mathcal{D}$.
Likewise, for $X$ an object of a category $\mathcal{C}$ with dense generator $\mathcal{A}$, there is an associated diagram $a_X \colon \mathcal{A}/X \to \mathcal{C}$, which comes equipped with an obvious natural transformation to the constant functor on $X$, whose $(A \xrightarrow{f} X)$-component is simply $f$ itself. This is called the $\mathcal{A}$-cocone over $X$: it is just the cocone of vertex $X$ under the diagram $a_X$ of shape $\mathcal{A}/X$ in $\mathcal{C}$ whose legs consist of all morphisms $A \to X$ with $A \in \mathcal{A}$. Note that if $\mathcal{A}$ is small (as is the case here), then this diagram is small; moreover, if $\mathcal{C} = PSh(\mathcal{A})$, the slice category $\mathcal{A}/X$ reduces to the category of elements of the presheaf $X$, and this construction gives the Yoneda cocone under $X$. One can show that

Proposition. A small full subcategory $\mathcal{A}$ of $\mathcal{C}$ is a dense generator precisely when the $\mathcal{A}$-cocones are actually colimit-cocones in $\mathcal{C}$.

This canonically makes every object $X$ of $\mathcal{C}$ a colimit of objects in $\mathcal{A}$, and in view of this result it makes sense to define:

Definition. Let $\mathcal{C}$ be a category with a dense generator $\mathcal{A}$. A monad $T$ on $\mathcal{C}$ is a monad with arities $\mathcal{A}$ when $\nu_{\mathcal{A}} T$ takes the $\mathcal{A}$-cocones of $\mathcal{C}$ to colimit-cocones in $PSh(\mathcal{A})$.
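Concretely, in the motivating case this proposition recovers a familiar fact (my example, using the identifications above):

```latex
% C = Set, A = aleph_0: the A-cocone over a set X has one leg for
% each map f : n \to X from a finite cardinal, and density says
X \;\cong\; \operatorname*{colim}_{(n \to X)\,\in\, \aleph_0/X} n,
% i.e. every set is canonically the colimit of the finite sets
% mapping into it.
```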
That is, the monad has arities $\mathcal{A}$ whenever scrambling the nerve functor by first applying $T$ does not undermine its capacity to turn $\mathcal{A}$-cocones into colimits, which in turn preserves the status of $\mathcal{A}$ as a dense generator, morally speaking. The Nerve Theorem makes this statement precise:

The Nerve Theorem. Let $\mathcal{C}$ be a category with a dense generator $\mathcal{A}$. For any monad $T$ with arities $\mathcal{A}$, the full subcategory $\Theta_T$ spanned by the free $T$-algebras on objects of $\mathcal{A}$ is a dense generator of the Eilenberg-Moore category $\mathcal{C}^T$. The essential image of the associated nerve functor is spanned by those presheaves whose restriction along $j_T$ belongs to the essential image of $\nu_{\mathcal{A}}$, where $j_T \colon \mathcal{A} \to \Theta_T$ is obtained by restricting the free $T$-algebra functor.

The proof given relies on an equivalent characterization of monads with arities: a monad $T$ on a category $\mathcal{C}$ with dense generator $\mathcal{A}$ is a monad with arities $\mathcal{A}$ if and only if the "generalised lifting (pseudocommutative) diagram"

$$\begin{matrix} \mathcal{C}^T & \overset{\nu_T}{\longrightarrow} & PSh(\Theta_T) \\ {}_{U}\downarrow && \downarrow_{j_T^{\ast}} \\ \mathcal{C} & \underset{\nu_{\mathcal{A}}}{\longrightarrow} & PSh(\mathcal{A}) \end{matrix}$$

is an exact adjoint square, meaning the mate $(j_T)_! \nu_{\mathcal{A}} \Rightarrow \nu_T F$ of the invertible $2$-cell implicit in the above square is also invertible, where $F$ is the free $T$-algebra functor. Note that $j_T^{\ast}$ is monadic, so this diagram indeed gives some sort of lifting of the nerve functor on $\mathcal{C}$ to the level of monad algebras.

We can build on this result a little bit. Let $\alpha$ be a regular cardinal (at this point you might want to check David's discussion of finite presentability).

Definition. A category $\mathcal{C}$ is $\alpha$-accessible if it has $\alpha$-filtered colimits and a dense generator $\mathcal{A}$ comprised only of $\alpha$-presentable objects such that $\mathcal{A}/X$ is $\alpha$-filtered for each object $X$ of $\mathcal{C}$. If in addition the category is cocomplete, we say it is locally $\alpha$-presentable.

If $\mathcal{C}$ is $\alpha$-accessible, there is a god-given choice of dense generator: we take $\mathcal{A}$ to be a skeleton of the full subcategory $\mathcal{C}(\alpha)$ spanned by the $\alpha$-presentable objects of $\mathcal{C}$.
As all objects in $\mathcal{A}$ are $\alpha$-presentable, the associated nerve functor preserves $\alpha$-filtered colimits, and so any monad $T$ preserving $\alpha$-filtered colimits is a monad with arities $\mathcal{A}$. The essential image of $\nu_{\mathcal{A}}$ is spanned by the $\alpha$-flat presheaves on $\mathcal{A}$ (meaning presheaves whose categories of elements are $\alpha$-filtered). As a consequence, any given object in an $\alpha$-accessible category is canonically an $\alpha$-filtered colimit of $\alpha$-presentable objects, and we can prove:

Theorem (Gabriel-Ulmer, Adámek-Rosický). If a monad $T$ on an $\alpha$-accessible category $\mathcal{C}$ preserves $\alpha$-filtered colimits, then its category of algebras $\mathcal{C}^T$ is $\alpha$-accessible, with a dense generator $\Theta_T$ spanned by the free $T$-algebras on (a skeleton $\mathcal{A}$ of) the $\alpha$-presentable objects $\mathcal{C}(\alpha)$. Moreover, this category of algebras is equivalent to the full subcategory of $PSh(\Theta_T)$ spanned by those presheaves whose restriction along $j_T$ is $\alpha$-flat.

Proof. We know $T$ is a monad with arities $\mathcal{A}$. That $\Theta_T$ is a dense generator as stated follows from its definition and the Nerve Theorem. Now, $\mathcal{C}^T$ has $\alpha$-filtered colimits since $\mathcal{C}$ has them and $T$ preserves them.
As the forgetful functor $U \colon \mathcal{C}^T \to \mathcal{C}$ preserves $\alpha$-filtered colimits (a monadic functor creates all colimits that $\mathcal{C}$ has and $T$ preserves), it follows that the free algebra functor preserves $\alpha$-presentability: $\mathcal{C}^T(FA, -) \cong \mathcal{C}(A, U(-))$ preserves $\alpha$-filtered colimits whenever $A$ is $\alpha$-presentable, and so objects of $\Theta_T$ are $\alpha$-presentable. One can then check that each $\mathcal{A}/X$ is $\alpha$-filtered. $\square$

Note that this theorem says, for $\alpha = \aleph_0$, that if a monad on sets is finitary, then its category of algebras (i.e. models for the associated classical Lawvere theory) is accessible, with a dense generator given by all the free $T$-algebras on finite sets: this is because a finitely presentable (i.e. $\aleph_0$-presentable) set is precisely the same as a finite set. As a consequence, the typical "algebraic gadgets" are canonically colimits of free ones on finitely many generators.

#### Theories and monads (with arities) are equivalent

If $T$ is a monad with arities $\mathcal{A}$, then $(\Theta_T, j_T)$ is a theory with arities $\mathcal{A}$. The Nerve Theorem then guarantees that $\nu_T \colon \mathcal{C}^T \to PSh(\Theta_T)$ induces an equivalence of categories between $\Theta_T$-models and $T$-algebras, since its essential image is, by definition, the category of $\Theta_T$-models and the functor is fully faithful.
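To make $\nu_T$ concrete, here is a hedged sketch for the free-monoid monad $T$ on $\mathbf{Set}$ (so $\mathcal{C}^T \simeq \mathbf{Mon}$ and $\alpha = \aleph_0$):

```latex
% \Theta_T consists of the free monoids F(n) on finite sets, and the
% nerve of a monoid M is its presheaf of finite powers:
\nu_T(M)[F(n)] \;=\; \mathbf{Mon}(F(n), M) \;\cong\; M^{n}.
% The equivalence above says M is recovered from these powers
% together with the (contravariant) action of all monoid maps
% F(m) \to F(n), i.e. substitutions of m words in n letters.
```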
This gives us hope that the situation with Lawvere theories and finitary monads can be extended, and this is indeed the case: the assignment $T \mapsto (\Theta_T, j_T)$ extends to a functor $\mathbf{Mnd}(\mathcal{C}, \mathcal{A}) \to \mathbf{Th}(\mathcal{C}, \mathcal{A})$, which forms an equivalence of categories together with the functor $\mathbf{Th}(\mathcal{C}, \mathcal{A}) \to \mathbf{Mnd}(\mathcal{C}, \mathcal{A})$ that takes a theory $(\Theta, j)$ to the monad $\rho_{\mathcal{A}} T \nu_{\mathcal{A}}$ with arities $\mathcal{A}$, where $\rho_{\mathcal{A}}$ is a choice of right adjoint to $\nu_{\mathcal{A}} \colon \mathcal{C} \to EssIm(\nu_{\mathcal{A}})$. When $\mathcal{C} = \mathbf{Set}$ and $\mathcal{A} = \aleph_0$, we recover the Lawvere theory/finitary monad equivalence.

#### Relation to operads and examples

Certain kinds of theories with arities are equivalent to operads. Namely, there is a notion of homogeneous globular theory that corresponds to globular (Batanin) operads. Similarly, there is a notion of $\Gamma$-homogeneous theory that corresponds to symmetric operads.
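Returning for a moment to the round trip from theories to monads: in the classical case the induced monad can be described by a standard coend formula (my gloss; the paper does not phrase it this way):

```latex
% For C = Set, A = aleph_0 and a Lawvere theory (I, L), the induced
% finitary monad admits the coend description
T(X) \;\cong\; \int^{[n] \in \aleph_0} L([1], [n]) \times X^{n},
% an element being an n-ary operation together with an n-tuple of
% arguments, modulo compatibility with reindexing; for the theory of
% monoids this yields the free monoid (list) monad.
```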
The remainder of the paper gives other equivalent definitions of monads with arities and builds a couple of examples, such as the free groupoid monad, which is a monad with arities given by (finite, connected) acyclic graphs. A notable example is that dagger categories arise as models of a theory on involutive graphs with non-trivial arities.

## May 11, 2017

### ZapperZ - Physics and Physicists

Initial Employment Of US Physics Bachelors

The AIP has released the latest statistics on the initial employment of physics bachelor's degree holders from the classes of 2013 and 2014. Almost half of the degree holders left school to go into the workforce, with about 54% going on to graduate school.

This is a significant percentage, and as educators, we need to make sure we prepare physics graduates for such a career path and not assume that they will all go on to graduate school. This means designing a program in which they have valuable and usable skills by the time they graduate.

Zz.

## May 10, 2017

### ZapperZ - Physics and Physicists

Dad Sat In On Student's Physics Class

A dad finally had it with his son's disruptive behavior in a high school physics class and made his threat come true: he sat next to his son during his physics class.

The dad explained that his son 'likes to be the life of the party, which gets him in trouble from time to time.'

'For some reason I said, "hey, if we get another call I'm going to show up in school and sit beside you in class,"' he said. Unfortunately for the 17-year-old, that call did come.

The thing these news reports didn't clarify is whether this student behaves this way in all of his classes. If so, why is the physics teacher the only one reporting it? If not, why does this student act up only in his physics class?

Sometimes, a lot of information is missing from a news report.

Zz.
### Tommaso Dorigo - Scientificblogging

Practical Tools Of The Improvised Speaker

Yesterday I visited the Liceo "Benedetti" of Venice, where 40 students are preparing their artwork for a project of communicating science with art that will culminate in an exhibit at the Palazzo del Casinò of the Lido of Venice, during the week of the EPS conference in July.

## May 09, 2017

### Symmetrybreaking - Fermilab/SLAC

CERN unveils new linear accelerator

Linac 4 will replace an older accelerator as the first step in the complex that includes the LHC.

At a ceremony today, the CERN European research center inaugurated its newest accelerator. Linac 4 will eventually become the first step in CERN's accelerator chain, delivering proton beams to a wide range of experiments, including those at the Large Hadron Collider. After an extensive testing period, Linac 4 will be connected to CERN's accelerator complex during a long technical shutdown in 2019-20. Linac 4 will replace Linac 2, which was put into service in 1978, and will feed the CERN accelerator complex with particle beams of higher energy.

"We are delighted to celebrate this remarkable accomplishment," says CERN Director General Fabiola Gianotti. "Linac 4 is a modern injector and the first key element of our ambitious upgrade program, leading to the High-Luminosity LHC. This high-luminosity phase will considerably increase the potential of the LHC experiments for discovering new physics and measuring the properties of the Higgs particle in more detail."

"This is an achievement not only for CERN, but also for the partners from many countries who contributed in designing and building this new machine," says CERN Director for Accelerators and Technology Frédérick Bordry.
"We also today celebrate and thank the wide international collaboration that led this project, demonstrating once again what can be accomplished by bringing together the efforts of many nations."

The linear accelerator is the first essential element of an accelerator chain. In the linear accelerator, the particles are produced and receive their initial acceleration. The density and intensity of the particle beams are also shaped in the linac. Linac 4 is an almost 90-meter-long machine sitting 12 meters below the ground. It took nearly 10 years to build.

Linac 4 will send negative hydrogen ions, consisting of a hydrogen atom with an additional electron, to CERN's Proton Synchrotron Booster, which further accelerates the negative ions and removes the electrons. Linac 4 will bring the beam up to an energy of 160 million electronvolts, more than 3 times the energy of its predecessor. The increase in energy, together with the use of hydrogen ions, will enable a doubling of the beam intensity delivered to the LHC, contributing to an increase in the luminosity of the LHC by 2021.

Luminosity is a parameter indicating the number of particles colliding within a defined amount of time. The peak luminosity of the LHC is planned to be increased by a factor of 5 by the year 2025. This will make it possible for the experiments to accumulate about 10 times more data over the period 2025 to 2035 than before.

Editor's note: This article is based on a CERN press release.

### Symmetrybreaking - Fermilab/SLAC

Understanding the unknown universe

The authors of We Have No Idea remind us that there are still many unsolved mysteries in science.

What is dark energy? Why aren't we made of antimatter? How many dimensions are there? These are a few of the many unanswered questions that Jorge Cham, creator of the online comic Piled Higher and Deeper, and Daniel Whiteson, an experimental particle physicist at the University of California, Irvine, explain in their new book, We Have No Idea.
In the process, they remind readers of one key point: when it comes to our universe, there's a lot we still don't know.

The duo started working together in 2008 after Whiteson reached out to Cham, asking if he'd be willing to help create physics cartoons. "I always thought physics was well connected to the way comics work," Whiteson says. "Because, what's a Feynman diagram but a little cartoon of particles hitting each other?" (Feynman diagrams are pictures commonly used in particle physics papers that represent the interactions of subatomic particles.)

Before working on this book, the pair made a handful of popular YouTube videos on topics like dark matter, extra dimensions and the Higgs boson. Many of these subjects are also covered in We Have No Idea.

One of the main motivators of this latest project was to address a "certain apathy toward science," Cham says. "I think we both came into it having this feeling that the general public either thinks scientists have everything figured out, or they don't really understand what scientists are doing."

To get at this issue, the pair focused on topics that even someone without a science background could find compelling. "You don't need 10 years of physics background to know [that] questions about how the universe started or what it's made of are interesting," Whiteson says. "We tried to find questions that were gut-level approachable."

Another key theme of the book, the authors say, is the line between what science can and cannot tell us. While some of the possible solutions to the universe's mysteries have testable predictions, others (such as string theory) currently do not. "We wanted questions that were accessible yet answerable," says Whiteson. "We wanted to show people that there were deep, basic, simple questions that we all had, but that the answers were out there."

Many scientists are hard at work trying to fill the gaping holes in our knowledge about the universe.
Particle physicists, for example, are exploring a number of these questions, such as those about the nature of antimatter and mass.

Artwork by Jorge Cham

Some lines of inquiry have brought different research communities together. Dark matter searches, for example, were primarily the realm of cosmologists, who probe large-scale structures of the universe. However, as the focus shifted to finding out what particle (or particles) dark matter was made of, this area of study started to attract astrophysicists as well.

Why are people trying to answer these questions? "I think science is an expression of humanity and our curiosity to know the answers to basic questions we ask ourselves: Who are we? Why are we here? How does the world work?" Whiteson says. "On the other hand, questions like these lead to understanding, and understanding leads to being able to have greater power over the environment to solve our problems."

In the very last chapter of the book, the authors explain the idea of a "testable universe," or the parts of the universe that fall within the bounds of science. In the Stone Age, when humans had very few tools at their disposal, the testable universe was very small. But it increased as people built telescopes, satellites and particle colliders, and it continues to expand with ongoing advances in science and technology. "That's the exciting thing," Cham says. "Our ability to answer these questions is growing."

Some mysteries of the universe still live in the realm of philosophy. But tomorrow, next year or a thousand years from now, a scientist may come along and devise an experiment that will be able to find the answers. "We're in a special place in history when most of the world seems explained," Whiteson says. Thousands of years ago, basic questions, such as why fire burns or where rain comes from, were still largely a mystery. "These days, all those mysteries seem answered, but the truth is, there's a lot of mysteries left.
[If] you want to make a massive imprint on human intellectual history, there's plenty of room for that."

## May 06, 2017

### The n-Category Cafe

A Discussion on Notions of Lawvere Theories

Guest post by Daniel Cicala

The Kan Extension Seminar II continues with a discussion of the paper Notions of Lawvere Theory by Stephen Lack and Jiří Rosický.

In his landmark thesis, William Lawvere introduced a method to the study of universal algebra that was vastly more abstract than those previously used. This method actually turns certain mathematical stuff, structure, and properties into a mathematical object! This is achieved with a Lawvere theory: a bijective-on-objects, product-preserving functor

$$T \colon \aleph^{\text{op}}_0 \to \mathbf{L},$$

where $\aleph_0$ is a skeleton of the category $\mathbf{FinSet}$ and $\mathbf{L}$ is a category with finite products. The analogy between algebraic gadgets and Lawvere theories reads as: stuff, structure, and properties correspond respectively to $1$, morphisms, and commuting diagrams.

To get an actual instance, or a model, of an algebraic gadget from a Lawvere theory, we take a product-preserving functor $m \colon \mathbf{L} \to \mathbf{Set}$. A model picks out a set $m(1)$ and $n$-ary operations $m(f) \colon m(1)^n \to m(1)$ for every $\mathbf{L}$-morphism $f \colon n \to 1$. To read more about classical Lawvere theories, you can read Evangelia Aleiferi's discussion of Hyland and Power's paper on the topic.

With this elegant perspective on universal algebra, we do what mathematicians are wont to do: generalize it. However, there is much to consider when undertaking such a project. Firstly, what elements of the theory ought to be generalized?
Lack and Rosický provide a clear answer to this question. They generalize along the following three tracks:

• consider a class of limits besides finite products,
• replace the base category $\mathbf{Set}$ with some other suitable category, and
• enrich everything.

Another important consideration is determining exactly how far to generalize. Why not just go as far as possible? Here are two reasons. First, there are a number of results in this paper that stand up to further generalization if one doesn't care about constructibility. A second limiting factor is that one should ensure that central properties still hold. In Notions of Lawvere Theory, the properties lifted from classical Lawvere theories are

• the correspondence between Lawvere theories and monads,
• that algebraic functors have left adjoints, and
• that models form reflective subcategories of certain functor categories.

Before starting the discussion of the paper, I would like to take a moment to thank Alexander, Brendan and Emily for running this seminar. I have truly learned a lot and have enjoyed wonderful conversations with everyone involved.

#### Replacing finite limits

To find a suitable class of limits to replace finite products, we require the concept of presentability. The best entry point is to learn about local finite presentability, which David Myers has discussed here. With a little modification to the ideas there, we define notions of local strong finite presentability and local $\Phi$-presentability for a class of limits $\Phi$.

We begin with sifted colimits, which are those $\mathbf{Set}$-valued colimits that commute with finite products. Note the similarity of this definition to the commutativity property of filtered colimits. Of course, filtered colimits are also sifted.
Another example is a reflexive pair, that is, a diagram of the shape given by a parallel pair of arrows $d_0, d_1 \colon A \to B$ together with a common section $s \colon B \to A$ satisfying $d_0 s = d_1 s = \mathrm{id}$.

Anyway, we now look at the strongly finitely presentable objects in a category $\mathbf{C}$. These are those objects $x$ whose representable $\mathbf{C}(x,-) \colon \mathbf{C} \to \mathbf{Set}$ preserves sifted colimits. Denote the full subcategory of these by $\mathbf{C}_{\text{sfp}}$. Some simple examples include $\mathbf{Set}_{\text{sfp}}$, which consists of the finite sets, and $\mathbf{Ab}_{\text{sfp}}$, which consists of the finitely generated free abelian groups. Also, given a category $\mathbf{C}$ of models for a Lawvere theory, $\mathbf{C}_{\text{sfp}}$ is exactly those finitely presentable objects that are regular projective.

A category $\mathbf{C}$ is locally strongly finitely presentable if it is cocomplete, $\mathbf{C}_{\text{sfp}}$ is small, and any $\mathbf{C}$-object is a sifted colimit of a diagram in $\mathbf{C}_{\text{sfp}}$. There is also a nice characterization (Theorem 3.1 in the paper) stating that $\mathbf{C}$ is locally strongly finitely presentable if and only if $\mathbf{C}_{\text{sfp}}$ has finite coproducts and we can identify $\mathbf{C}$ with the category of finite-product-preserving functors $\mathbf{C}^{\text{op}}_{\text{sfp}} \to \mathbf{Set}$.
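To see that strong finite presentability is genuinely stronger than finite presentability, here is a small check in $\mathbf{Ab}$ (my example, not the paper's):

```latex
% In Ab, the sfp objects are the finitely generated free abelian
% groups, so \mathbb{Z}/2 is finitely presentable but not sfp.
% Indeed, the quotient q : \mathbb{Z} \to \mathbb{Z}/2 is the
% reflexive coequalizer of its kernel pair, yet
\mathrm{Ab}(\mathbb{Z}/2, \mathbb{Z}) \;=\; 0,
% so the identity of \mathbb{Z}/2 admits no lift along q, and
% \mathrm{Ab}(\mathbb{Z}/2, -) fails to preserve this sifted colimit.
```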
One of the most important results of Notions of Lawvere Theory was in expanding the theory to encompass sifted (weighted) colimits. More on this later. We can play this game with any class of limits $\Phi$. Before defining $\Phi$-presentability, here is a bit of jargon.

Definition. A functor is $\Phi$-flat if colimits weighted by it commute with $\Phi$-limits.

We call an object $x$ of a category $\mathbf{C}$ $\Phi$-presentable if $\mathbf{C}(x,-) \colon \mathbf{C} \to \mathbf{Set}$ preserves $\Phi$-flat colimits. Given the full subcategory $\mathbf{C}_\Phi$ of $\Phi$-presentable objects, we call $\mathbf{C}$ locally $\Phi$-presentable if it is cocomplete, $\mathbf{C}_{\Phi}$ is small, and any $\mathbf{C}$-object is a $\Phi$-flat colimit of a diagram in $\mathbf{C}_{\Phi}$. Fortunately, we retain the characterization that $\mathbf{C}$ is locally $\Phi$-presentable if and only if $\mathbf{C}_{\Phi}$ has $\Phi$-colimits and $\mathbf{C}$ is equivalent to the category $\Phi$-$\mathbf{Cts}(\mathbf{C}^{\text{op}}_\Phi, \mathbf{Set})$ of $\Phi$-continuous functors $\mathbf{C}^{\text{op}}_\Phi \to \mathbf{Set}$. Important results in Notions of Lawvere Theory use the assumption of $\Phi$-presentability.

Let's come back to Lawvere theories. From this point on, we fix a symmetric monoidal closed category $\mathcal{V}$ that is both complete and cocomplete.
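Two orienting instances of $\Phi$-flatness over $\mathbf{Set}$ (my summary, tying this definition back to the colimits discussed above):

```latex
% Specialising \Phi-flatness over Set:
%   \Phi = finite limits    =>  \Phi-flat weights = filtered,
%   \Phi = finite products  =>  \Phi-flat weights = sifted.
% E.g. for \Phi = finite products, flatness of a weight W means
\operatorname*{colim}^{W} \Big( \textstyle\prod_{i \in I} F_i \Big)
  \;\cong\; \textstyle\prod_{i \in I} \operatorname*{colim}^{W} F_i
% for finite I, which is exactly the sifted condition used above.
```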
Also, $\Phi$ will refer to a class of weights over $\mathcal{V}$. Our first task will be to determine what class of limits can replace finite products in the classical case. To this end, we take the following assumption.

Axiom A. $\Phi$-continuous weights are $\Phi$-flat.

This axiom is an analogue of the fact that filtered colimits commute with finite limits in $\mathbf{Set}$. But for what classes of limits $\Phi$ does this hold? To answer this question, we fix a sound doctrine $\mathbb{D}$. Very roughly, a sound doctrine is a collection of small categories whose limits behave nicely with respect to certain colimits. After putting some small assumptions on the underlying category $\mathcal{V}_0$, which we'll sweep under the rug, define $\mathcal{V}_{\mathbb{D}}$ to be the full sub-$\mathcal{V}$-category consisting of those objects $x$ such that $[x,-] \colon \mathcal{V} \to \mathcal{V}$ preserves $\mathbb{D}$-flat colimits. Let $\Phi$ be the class of limits "built from" conical $\mathbb{D}$-limits and $\mathcal{V}_{\mathbb{D}}$-powers, in the sense that we take $\phi \in \Phi$ if

• any $\mathcal{V}$-category with conical $\mathbb{D}$-limits and $\mathcal{V}_{\mathbb{D}}$-powers also admits $\phi$-weighted limits, and
• any $\mathcal{V}$-functor preserving conical $\mathbb{D}$-limits and $\mathcal{V}_{\mathbb{D}}$-powers also preserves $\phi$-weighted limits.

The fancy way of saying this is that $\Phi$ is the saturation of the class of conical $\mathbb{D}$-limits and $\mathcal{V}_{\mathbb{D}}$-powers.
It's easy enough to see that $\Phi$ contains the conical $\mathbb{D}$-limits and $\mathcal{V}_{\mathbb{D}}$-powers. Having constructed a class of limits $\Phi$ from a sound doctrine $\mathbb{D}$, we use the following theorem to show that $\Phi$ satisfies the axiom above.

Theorem. Let $\mathcal{K}$ be a small $\mathcal{V}$-category with $\Phi$-weighted limits and $F \colon \mathcal{K} \to \mathcal{V}$ be a $\mathcal{V}$-functor. The following are equivalent:

• $F$ is $\mathbb{D}$-continuous;
• $F$ is $\Phi$-flat;
• $F$ is $\Phi$-continuous.

In particular, the first item allows us to construct $\Phi$ using sound limits, and the equivalence between the second and third items is precisely the axiom of interest. Here are some examples.

Example. Let $\mathbb{D}$ be the collection of all finite categories. We will also take $\mathcal{V}_0$ to be locally finitely presentable, with the additional requirement that the monoidal unit $I$ is finitely presentable, as is the tensor product of two finitely presentable objects. Examples of such a $\mathcal{V}$ are the categories of sets, abelian groups, modules over a commutative ring, chain complexes, categories, groupoids, and simplicial sets. Then $\Phi$, as constructed from $\mathbb{D}$ above, gives a good notion of $\mathcal{V}$-enriched finite limits.

Example. A second example, and one of the main contributions of Notions of Lawvere Theory, is when $\mathbb{D}$ is the class of all finite discrete categories. Here, we take our $\mathcal{V}$ as in the first example, though we do not require the monoidal unit to be strongly finitely presentable.
We do this because, by requiring the monoidal unit to be strongly finitely presentable, we would lose the example where $\mathcal{V}$ is the category of directed graphs, which happens to be a key example, particularly for realizing categories as models of a Lawvere theory. In this case, the induced class $\Phi$ gives an enriched version of the strongly finite limits discussed above. This $\Phi$ generalizes finite products in the sense that they coincide when $\mathcal{V}$ is $\mathbf{Set}$.

## Correspondence between Lawvere theories and monads

Now that we've gotten our hands on some suitable limits, let's see how we can obtain the classical correspondence between Lawvere theories and monads. Naturally, we'll be assuming axiom A. In addition, we fix a $\mathcal{V}$-category $\mathcal{K}$ that satisfies the following.

Axiom B1. $\mathcal{K}$ is locally $\Phi$-presentable.

This axiom implies, as in our discussion above, that $\mathcal{K} \cong \Phi$-$\mathbf{Cts}(\mathcal{K}_{\Phi}^{\mathrm{op}}, \mathcal{V})$. This is not particularly restrictive, as presheaf $\mathcal{V}$-categories are locally $\Phi$-presentable. Now, define a Lawvere $\Phi$-theory on $\mathcal{K}$ to be a bijective-on-objects $\mathcal{V}$-functor $g \colon \mathcal{K}_{\Phi}^{\mathrm{op}} \to \mathcal{L}$ that preserves $\Phi$-limits. A striking difference between a Lawvere $\Phi$-theory and the classical notion is that the former does not require $\mathcal{L}$ to have the limits under consideration.
This makes defining the models of a Lawvere $\Phi$-theory a subtler issue than in the classical case. Instead of defining a model to be a $\Phi$-continuous functor, as one might expect, we define the category of models $\mathbf{Mod}(\mathcal{L})$ by a pullback square. To understand what a model looks like, use the intuition for a pullback in the category $\mathbf{Set}$ and the fact that $\mathcal{K}$ is equivalent to $\Phi$-$\mathbf{Cts}(\mathcal{K}_{\Phi}^{\mathrm{op}}, \mathcal{V})$. So a model will be a $\mathcal{V}$-functor $\mathcal{L} \to \mathcal{V}$ whose restriction along $g$ is $\Phi$-continuous.

The other major player in this section is the category of $\Phi$-flat monads $\mathbf{Mnd}_{\Phi}(\mathcal{K})$. We claim that there is an equivalence between $\mathbf{Law}_{\Phi}(\mathcal{K})$ and $\mathbf{Mnd}_{\Phi}(\mathcal{K})$. To verify this, we construct a pair of functors between $\mathbf{Law}_{\Phi}(\mathcal{K})$ and $\mathbf{Mnd}_{\Phi}(\mathcal{K})$. The first under consideration is $\mathrm{mnd} \colon \mathbf{Law}_{\Phi}(\mathcal{K}) \to \mathbf{Mnd}_{\Phi}(\mathcal{K})$. We define this with the help of the following proposition.

Proposition.
The functor $u$ from the above pullback diagram is monadic via a $\Phi$-flat monad $t$. Hence, a $\Phi$-theory $\mathcal{L}$ gives a monadic functor $u \colon \mathbf{Mod}(\mathcal{L}) \to \mathcal{K}$ that yields a monad $t$ on $\mathcal{K}$. Moreover, this monad preserves all the limits required to be an object in $\mathbf{Mnd}_{\Phi}(\mathcal{K})$. So, define $\mathrm{mnd}(\mathcal{L}) = t$.

Next, we define a functor $\mathrm{th} \colon \mathbf{Mnd}_{\Phi}(\mathcal{K}) \to \mathbf{Law}_{\Phi}(\mathcal{K})$. Consider a monad $t$ in $\mathbf{Mnd}_{\Phi}(\mathcal{K})$. As usual, $t$ factors through the Eilenberg-Moore category $\mathcal{K} \to \mathcal{K}^{t}$, which we precompose with the inclusion $\mathcal{K}_{\Phi} \to \mathcal{K}$, giving $f \colon \mathcal{K}_{\Phi} \to \mathcal{K}^{t}$. Now, defining a $\mathcal{V}$-category $\mathcal{G}$ that has objects from $\mathcal{K}_{\Phi}$ and $\mathcal{G}(x,y) = \mathcal{K}^{t}(fx,fy)$, we factor $f = r\ell$, where $\ell$ is bijective-on-objects and $r$ is fully faithful. This factorization is unique up to unique isomorphism.
Define $\mathrm{th}(t) = \mathcal{G}^{\mathrm{op}}$. At this point, we have functors $\mathrm{mnd} \colon \mathbf{Law}_{\Phi}(\mathcal{K}) \to \mathbf{Mnd}_{\Phi}(\mathcal{K})$ and $\mathrm{th} \colon \mathbf{Mnd}_{\Phi}(\mathcal{K}) \to \mathbf{Law}_{\Phi}(\mathcal{K})$, so let's turn our attention to showing that these are mutual weak inverses. The first step is to show that the category of algebras $\mathcal{K}^{t}$ for a given monad $t$ is the category of models $\mathbf{Mod}(\mathrm{th}(t))$.

Theorem 6.6. The $\mathcal{V}$-functor $\mathcal{K}^{t}(r-,-) \colon \mathcal{K}^{t} \to [\mathrm{th}(t)^{\mathrm{op}}, \mathcal{V}]$, $x \mapsto \mathcal{K}^{t}(r-,x)$, restricts to an isomorphism of $\mathcal{V}$-categories $\mathcal{K}^{t} \cong \mathbf{Mod}(\mathrm{th}(t))$.
This theorem gives us that $\mathrm{mnd} \circ \mathrm{th} \cong \mathrm{id}$. The next theorem gives us the other direction.

Theorem 6.7. There is an isomorphism $\mathrm{th} \circ \mathrm{mnd} \cong \mathrm{id}$.

Let's sketch the proof. Let $g \colon \mathcal{K}_{\Phi}^{\mathrm{op}} \to \mathcal{T}$ be a Lawvere $\Phi$-theory. If we denote $\mathrm{mnd}(\mathcal{T})$ by $t$, we get $\mathrm{th} \circ \mathrm{mnd}(\mathcal{T}) = \mathrm{th}(t) = \mathcal{G}^{\mathrm{op}}$ via the factorization in which $\ell$ is bijective-on-objects and $r$ is fully faithful. It remains to show that $\mathcal{T} = \mathcal{G}^{\mathrm{op}}$. Let's compute the image of a $\mathcal{K}_{\Phi}$-object $x$ in $\mathbf{Mod}(\mathrm{th}(t))$. For this, recall that we have $\mathcal{K} \simeq \Phi$-$\mathbf{Cts}(\mathcal{K}_{\Phi}^{\mathrm{op}}, \mathcal{V})$ by assumption. Embedding $x$ into $\Phi$-$\mathbf{Cts}(\mathcal{K}_{\Phi}^{\mathrm{op}}, \mathcal{V})$ gives us $\mathcal{K}_{\Phi}(-,x) \colon \mathcal{K}_{\Phi}^{\mathrm{op}} \to \mathcal{V}$. This, in turn, is mapped to the left Kan extension $\mathrm{Lan}_{g}(\mathcal{K}_{\Phi}(-,x)) \colon \mathcal{T} \to \mathcal{V}$ along $g \colon \mathcal{K}_{\Phi}^{\mathrm{op}} \to \mathcal{T}$ (the Lawvere $\Phi$-theory we began with). Here, we can compute that $\mathrm{Lan}_{g}(\mathcal{K}_{\Phi}(-,x))$ is $\mathcal{T}(-,gx)$, which identifies the factorization above. Therefore, $\mathcal{T} = \mathcal{G}^{\mathrm{op}}$ as desired.

## Many-sorted theories

Moving from single-sorted to many-sorted theories, we will take a different assumption on our $\mathcal{V}$-category $\mathcal{K}$.

Axiom B2. $\mathcal{K}$ is a $\mathcal{V}$-category with $\Phi$-limits such that the Yoneda inclusion $\mathcal{K} \to [\mathcal{K}^{\mathrm{op}}, \mathcal{V}]$ has a $\Phi$-continuous left adjoint.

This requirement on $\mathcal{K}$ is not overly restrictive, as it holds for all presheaf $\mathcal{V}$-categories, and for all Grothendieck topoi when $\mathcal{V}$ is $\mathbf{Set}$.
The nice thing about this assumption is that we can compute all colimits and $\Phi$-limits in $\mathcal{K}$ by passing to $[\mathcal{K}^{\mathrm{op}}, \mathcal{V}]$, where they commute, and then reflecting back. Generalizing Lawvere theories here is a bit simpler than in the previous section. Indeed, call any small $\mathcal{V}$-category $\mathcal{L}$ with $\Phi$-limits a $\Phi$-theory. Notice that we no longer have a bijective-on-objects functor involved in the definition. That functor forced single-sortedness; with it no longer constraining the structure, we have the possibility of many sorts. Also, a $\Phi$-theory here does have all $\Phi$-limits, unlike in the single-sorted case. This allows for a much simpler definition of a model. Indeed, the category of models for a $\Phi$-theory $\mathcal{L}$ is the full subcategory $\Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{K})$ of $[\mathcal{L}, \mathcal{K}]$.

Presently, we are interested in generalizing two important properties of Lawvere theories to $\Phi$-theories. The first is that algebraic functors have left adjoints. The second is the reflectivity of models.

Algebraic functors have left adjoints. A morphism of $\Phi$-theories is a $\Phi$-continuous $\mathcal{V}$-functor $g \colon \mathcal{L} \to \mathcal{L}'$.
Any such morphism induces a pullback $\mathcal{V}$-functor $g^{\ast} \colon \Phi$-$\mathbf{Cts}(\mathcal{L}', \mathcal{K}) \to \Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{K})$ between model $\mathcal{V}$-categories. We call such functors $\Phi$-algebraic. And yes, these do have left adjoints, just as in the context of classical Lawvere theories.

Theorem. Let $\mathcal{L}$ and $\mathcal{L}'$ be $\Phi$-theories and $g \colon \mathcal{L} \to \mathcal{L}'$ a morphism between them. Given a model $m \colon \mathcal{L} \to \mathcal{K}$, the left Kan extension $\mathrm{Lan}_{g} m \colon \mathcal{L}' \to \mathcal{K}$ is a model.

What is happening here? Of course, pulling back by $g$ gives a way to turn models of $\mathcal{L}'$ into models of $\mathcal{L}$ – this is the algebraic functor $g^{\ast}$. But the left Kan extension along $g$ gives a way to turn a model $m$ of $\mathcal{L}$ into a model of $\mathcal{L}'$. This theorem says that this process gives a functor $g_{\ast} \colon \Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{K}) \to \Phi$-$\mathbf{Cts}(\mathcal{L}', \mathcal{K})$ given by $m \mapsto \mathrm{Lan}_{g} m$.
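Concretely, when the left Kan extension exists it can be computed pointwise. Assuming $\mathcal{K}$ has the relevant copowers (written $\odot$), the standard coend formula reads:

```latex
% Pointwise (coend) formula for the left adjoint g_* = Lan_g,
% in the notation above, for b an object of L':
(\mathrm{Lan}_{g}\, m)(b) \;\cong\; \int^{a \in \mathcal{L}} \mathcal{L}'(g a,\, b) \odot m a
% Naturality in b makes this a V-functor L' -> K,
% and the theorem says it is again Phi-continuous, i.e. a model.
```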
We can prove this theorem for $\mathcal{K} = \mathcal{V}$ without requiring axiom B2. The axiom is used to extend the result to a general $\mathcal{V}$-category $\mathcal{K}$. The existence of the left adjoint $\ell$ to the Yoneda embedding $y$ of $\mathcal{K}$ gives a factorization $\mathrm{Lan}_{g} m = \ell\, \mathrm{Lan}_{g} (y m)$. The proof then reduces to showing that $\mathrm{Lan}_{g} (y m)$ is $\Phi$-continuous, since we are already assuming that $\ell$ is. But because the codomain of $\mathrm{Lan}_{g} (y m)$ is $[\mathcal{K}^{\mathrm{op}}, \mathcal{V}]$, we can rest on the fact that we have proven the result for $\mathcal{K} = \mathcal{V}$. Limits are taken pointwise, after all. Actually, the left adjoint to $g^{\ast}$ exists more generally, but our assumptions on $\mathcal{K}$ allow us to compute it explicitly with left Kan extensions.

Reflectivity of models. Having discussed left adjoints of algebraic functors, we now move on to show that categories of models $\Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{V})$ are reflective in $[\mathcal{L}, \mathcal{V}]$. Consider the free-forgetful (ordinary) adjunction $F \dashv U$ between $\mathcal{V}$-categories and those $\mathcal{V}$-categories with $\Phi$-limits and functors preserving them, and let $\mathcal{L}$ be a $\mathcal{V}$-category in the image of $U$. Note that $\mathcal{L}$ is then a $\Phi$-theory.
It follows from this adjunction that $\Phi$-$\mathbf{Cts}(F\mathcal{L}, \mathcal{V})$ is equivalent to the category $[\mathcal{L}, \mathcal{V}]$. Moreover, since $\mathcal{L}$ has $\Phi$-limits, the inclusion $\mathcal{L} \hookrightarrow F\mathcal{L}$ has a right adjoint $R$, inducing an algebraic functor $R^{\ast} \colon \Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{V}) \to \Phi$-$\mathbf{Cts}(F\mathcal{L}, \mathcal{V}) \simeq [\mathcal{L}, \mathcal{V}]$. But we just showed that algebraic functors have left adjoints, giving us the following theorem.

Theorem. $\Phi$-$\mathbf{Cts}(\mathcal{L}, \mathcal{V})$ is reflective in $[\mathcal{L}, \mathcal{V}]$.

As promised, in the two general contexts corresponding to axioms B1 and B2, we have the Lawvere theory-monad correspondence, that algebraic functors have left adjoints, and that categories of models are reflective.

## An example

After all of that abstract nonsense, let's get our feet back on the ground. Here is an example, courtesy of Nishizawa and Power, of realizing categories with a chosen terminal object as the models of a generalized Lawvere theory. Let $\mathbf{0}$ denote the empty category, $\mathbf{1}$ the terminal category, and $\mathbf{2}$ the category $\{a \to b\}$ with two objects and a single arrow between them.
We will also take $\mathcal{K} = \mathcal{V} = \mathbf{Cat}$. The class of limits $\Phi$ here is that of finite $\mathbf{Cat}$-powers. We define a Lawvere $\Phi$-theory $\mathcal{L}$ to be the $\mathbf{Cat}$-category obtained by formally adding to $\mathbf{Cat}_{\mathrm{fp}}^{\mathrm{op}}$ (the opposite of the full subcategory on the finitely presentable objects) two arrows, $\tau \colon \mathbf{0} \to \mathbf{1}$ and $\sigma \colon \mathbf{1} \to \mathbf{2}$, and then closing up under finite $\mathbf{Cat}$-powers, modulo certain commutative diagrams. Now $\mathcal{L}$ is the Lawvere $\Phi$-theory for a category with a chosen terminal object. A model of $\mathcal{L}$ is a $\Phi$-continuous $\mathbf{Cat}$-functor $\mathcal{L} \to \mathbf{Cat}$. This means that if $M$ is a model, it must preserve powers, and so certain diagrams must commute, involving $\mathrm{dom}$ and $\mathrm{cod}$, which choose the domain and codomain, and the diagonal functor $\Delta$. The commutativity of these diagrams witnesses the preservation of the relevant powers. Let's parse these diagrams out. The category we get from the model $M$ is $M\mathbf{1}$, and the distinguished terminal object $t$ is chosen by $M\tau$.
The first two diagrams provide a morphism $x \to t$ for every object $x$ in $M\mathbf{1}$. The third diagram gives the identity map on $t$. The uniqueness of maps into $t$ follows from the functoriality of $M\sigma$ and $\mathrm{cod}$. Conversely, given a category $\mathbf{C}$ with a chosen terminal object $t$, define a model $M \colon \mathcal{L} \to \mathbf{Cat}$ by sending $\mathbf{1} \to \mathbf{C}$ and $\mathbf{1}^{x} \to \mathbf{C}^{x}$ for $\mathcal{L}$-objects $x$. Also, let $M\tau$ choose $t$, and let $M\sigma$ send each $\mathcal{L}$-object $x$ to the unique map $! \colon x \to t$.

## May 05, 2017

### John Baez - Azimuth

Phosphorus Sulfides

I think of sulfur and phosphorus as clever chameleons of the periodic table: both come in many different forms, called allotropes. There's white phosphorus, red phosphorus, violet phosphorus and black phosphorus; and there are about two dozen allotropes of sulfur, with an intricate phase diagram. So I should have guessed that sulfur and phosphorus combine to make many different compounds. But I never thought about this until yesterday!

I'm a great fan of diamonds, not for their monetary value but for the math of their crystal structure. In a diamond the carbon atoms do not form a lattice in the strict mathematical sense (which is more restrictive than the sense of this word in crystallography). The reason is that there aren't translational symmetries carrying any atom to any other. Instead, there are two lattices of atoms, shown as red and blue in a picture by Greg Egan. Each atom has 4 nearest neighbors arranged at the vertices of a regular tetrahedron; the tetrahedra centered at the blue atoms are 'right-side up', while those centered at the red atoms are 'upside down'.
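The tetrahedral geometry can be checked numerically. Placing a carbon at the origin, its four nearest neighbors sit at alternating vertices of a surrounding cube (the standard textbook coordinates, not taken from the post), and every pair of bonds meets at $\arccos(-1/3)$:

```python
import math

# In diamond, a carbon at the origin has 4 nearest neighbours at
# alternating vertices of a surrounding cube.
neighbours = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def bond_angle(u, v):
    """Angle in degrees between the bonds from the origin to u and to v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# All 6 pairs of bonds meet at the same angle, arccos(-1/3) ~ 109.47 degrees.
angles = [bond_angle(u, v)
          for i, u in enumerate(neighbours)
          for v in neighbours[i + 1:]]
print(angles[0])
```

Each dot product here is $-1$ and each bond has length $\sqrt{3}$, giving $\cos\theta = -1/3$ for every pair.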
Having thought about this a lot, I was happy to read about adamantane. It's a compound with 10 carbons and 16 hydrogens. There are 4 carbons at the vertices of a regular tetrahedron, and 6 along the edges, but the edges bend out in such a way that the carbons form a tiny piece of a diamond crystal. Yesterday I learned that phosphorus decasulfide, P4S10, follows the same pattern. The angles deviate slightly from the value of $\arccos(-1/3) \approx 109.4712^\circ$ that we'd have in a fragment of a mathematically ideal diamond crystal, but that's to be expected.

It turns out there are lots of other phosphorus sulfides! Here are some of them:

Puzzle 1. Why does each of these compounds have exactly 4 phosphorus atoms?

I don't know the answer! I can't believe it's impossible to form phosphorus–sulfur compounds with some other number of phosphorus atoms, but the Wikipedia article containing this chart says:

All known molecular phosphorus sulfides contain a tetrahedral array of four phosphorus atoms. P4S2 is also known but is unstable above −30 °C.

All these phosphorus sulfides contain at most 10 sulfur atoms. If we remove one sulfur from phosphorus decasulfide we can get the 'alpha form' of P4S9. There's also a beta form, shown in the chart above. Some of the phosphorus sulfides have pleasing symmetries, like the alpha form of P4S4 or the epsilon form of P4S6. Others look awkward: the alpha form of P4S5 is an ungainly beast. They all seem to have a few things in common:

• There are 4 phosphorus atoms.
• Each phosphorus atom is connected to 3 or 4 atoms, at most one of which is phosphorus.
• Each sulfur atom is connected to 1 or 2 atoms, which must all be phosphorus.

The pictures seem pretty consistent about showing a 'double bond' when a sulfur atom is connected to just 1 phosphorus. However, they don't show a double bond when a phosphorus atom is connected to just 3 sulfurs.

Puzzle 2.
Can you draw molecules obeying the 3 rules listed above that aren't on the chart?

Of all the phosphorus sulfides, P4S10 is not only the biggest and most symmetrical, it's also the most widely used. Humans make thousands of tons of the stuff! It's used for producing organic sulfur compounds. People also make P4S3: it's used in strike-anywhere matches. This molecule is not on the chart I showed you, and it also violates one of the rules I made up.

Somewhat confusingly, P4S10 is not only called phosphorus decasulfide: it's also called phosphorus pentasulfide. Similarly, P4S3 is called phosphorus sesquisulfide. Since the prefix 'sesqui-' means 'one and a half', there seems to be some kind of division by 2 going on here.

## May 04, 2017

### Symmetrybreaking - Fermilab/SLAC

Sterile neutrino search hits roadblock at reactors

A new result from the Daya Bay collaboration reveals both limitations and strengths of experiments studying antineutrinos at nuclear reactors.

As nuclear reactors burn through fuel, they produce a steady flow of particles called neutrinos. Neutrinos interact so rarely with other matter that they can flow past the steel and concrete of a power plant's containment structures and keep on moving through anything else that gets in their way. Physicists interested in studying these wandering particles have taken advantage of this fact by installing neutrino detectors nearby. A recent result using some of these detectors demonstrated both their limitations and strengths.

### The reactor antineutrino anomaly

In 2011, a group of theorists noticed that several reactor-based neutrino experiments had been publishing the same surprising result: they weren't detecting as many neutrinos as they thought they would. Or rather, to be technically correct, they weren't seeing as many antineutrinos as they thought they would; nuclear reactors actually produce the antimatter partners of the elusive particles.
About 6 percent of the expected antineutrinos just weren't showing up. They called it "the reactor antineutrino anomaly."

The case of the missing neutrinos was a familiar one. In the 1960s, the Davis experiment, located in the Homestake Mine in South Dakota, reported a shortage of neutrinos coming from processes in the sun. Other experiments confirmed the finding. In 2001, the Sudbury Neutrino Observatory in Ontario demonstrated that the missing neutrinos weren't missing at all; they had only undergone a bit of a costume change. Neutrinos come in three types. Scientists discovered that neutrinos could transform from one type to another. The missing neutrinos had changed into a different type of neutrino that the Davis experiment couldn't detect.

Since 2011, scientists have wondered whether the reactor antineutrino anomaly was a sign of an undiscovered type of neutrino, one that was even harder to detect, called a sterile neutrino. A new result from the Daya Bay experiment in China not only casts doubt on that theory, it also casts doubt on the idea that scientists understand their model of reactor processes well enough at this time to use it to search for sterile neutrinos.

### The word from Daya Bay

The Daya Bay experiment studies antineutrinos coming from six nuclear reactors on the southern coast of China, about 35 miles northeast of Hong Kong. The reactors are powered by the fission of uranium. Over time, the amount of uranium inside the reactor decreases while the amount of plutonium increases. The fuel is changed, or cycled, about every 18 months.

The main goal of the Daya Bay experiment was to look for the rarest of the known neutrino oscillations. It did that, making a groundbreaking discovery after just nine weeks of data-taking. But that wasn't the only goal of the experiment.
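The oscillation Daya Bay measured is usually summarized by the two-flavor survival probability $P = 1 - \sin^2(2\theta_{13})\,\sin^2(1.27\,\Delta m^2 L/E)$. The sketch below uses illustrative parameter values close to published ones ($\sin^2 2\theta_{13} \approx 0.084$, $\Delta m^2 \approx 2.5\times10^{-3}\ \mathrm{eV}^2$); these numbers are my additions, not figures from this article:

```python
import math

def survival_probability(L_m, E_MeV, sin2_2theta=0.084, dm2_eV2=2.5e-3):
    """Two-flavor electron-antineutrino survival probability.

    L_m: baseline in meters; E_MeV: antineutrino energy in MeV.
    Standard approximation: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E).
    """
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Right at the reactor, nothing has oscillated yet...
print(survival_probability(0.0, 4.0))      # 1.0
# ...while around ~2 km, for typical ~4 MeV reactor antineutrinos,
# the deficit approaches its maximum.
print(survival_probability(1800.0, 4.0))
```

This is why the far detector halls sit a couple of kilometers from the cores: the first oscillation minimum for few-MeV antineutrinos lands near that baseline.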
“We realized right from the beginning that it is important for Daya Bay to address as many interesting physics problems as possible,” says Daya Bay co-spokesperson Kam-Biu Luk of the University of California, Berkeley and the US Department of Energy's Lawrence Berkeley National Laboratory.

For this result, Daya Bay scientists took advantage of their enormous collection of antineutrino data to expand their investigation to the reactor antineutrino anomaly. Using data from more than 2 million antineutrino interactions and information about when the power plants refreshed the uranium in each reactor, Daya Bay physicists compared the measurements of antineutrinos coming from different parts of the fuel cycle: early ones dominated by uranium through later ones dominated by both uranium and plutonium.

In theory, the type of fuel producing the antineutrinos should not affect the rate at which they transform into sterile neutrinos. According to Bob Svoboda, chair of the Department of Physics at the University of California, Davis, “a neutrino wouldn't care how it got made.”

But Daya Bay scientists found that the shortage of antineutrinos existed only in processes dominated by uranium. Their conclusion is that, once again, the missing neutrinos aren't actually missing. This time, the problem of the missing antineutrinos seems to stem from our understanding of how uranium burns in nuclear power plants. The predictions for how many antineutrinos the scientists should detect may have been overestimated.

“Most of the problem appears to come from the uranium-235 model (uranium-235 is a fissile isotope of uranium), not from the neutrinos themselves,” Svoboda says. “We don't fully understand uranium, so we have to take any anomaly we measured with a grain of salt.”

This knock against the reactor antineutrino anomaly does not disprove the existence of sterile neutrinos. Other, non-reactor experiments have seen different possible signs of their influence.
But it does put a damper on the only evidence of sterile neutrinos to have come from reactor experiments so far.

Other reactor neutrino experiments, such as NEOS in South Korea and PROSPECT in the United States, will fill in some missing details. NEOS scientists directly measured antineutrinos coming from reactors in the Hanbit nuclear power complex using a detector placed about 80 feet away, a distance some scientists believe is optimal for detecting sterile neutrinos should they exist. PROSPECT scientists will make the first precision measurement of antineutrinos coming from a highly enriched uranium core, one that does not produce plutonium as it burns.

### A silver lining

The Daya Bay result offers the most detailed demonstration yet of scientists’ ability to use neutrino detectors to peer inside running nuclear reactors.

“As a study of reactors, this is a tour de force,” says theorist Alexander Friedland of SLAC National Accelerator Laboratory. “This is an explicit demonstration that the composition of the reactor fuel has an impact on the neutrinos.”

Some scientists are interested in monitoring nuclear power plants to find out if nuclear fuel is being diverted to build nuclear weapons. “Suppose I declare my reactor produces 100 kilograms of plutonium per year,” says Adam Bernstein of Lawrence Livermore National Laboratory. “Then I operate it in a slightly different way, and at the end of the year I have 120 kilograms.” That 20-kilogram surplus, left unmeasured, could potentially be moved into a weapons program.

Current monitoring techniques involve checking what goes into a nuclear power plant before the fuel cycle begins and then checking what comes out after it ends. In the meantime, what happens inside is a mystery. Neutrino detectors allow scientists to understand what’s going on in a nuclear reactor in real time. Scientists have known for decades that neutrino detectors could be useful for nuclear nonproliferation purposes.
Scientists studying neutrinos at the Rovno Nuclear Power Plant in Ukraine first demonstrated that neutrino detectors could differentiate between uranium and plutonium fuel. Most of the experiments have done this by looking at changes in the aggregate number of antineutrinos coming from a detector. Daya Bay showed that neutrino detectors could track the plutonium inventory in nuclear fuel by studying the energy spectrum of antineutrinos produced.

“The most likely use of neutrino detectors in the near future is in so-called ‘cooperative agreements,’ where a $20-million-scale neutrino detector is installed in the vicinity of a reactor site as part of a treaty,” Svoboda says. “The site can be monitored very reliably without having to make intrusive inspections that bring up issues of national sovereignty.”

Luk says he is dubious that the idea will take off, but he agrees that Daya Bay has shown that neutrino detectors can give an incredibly precise report. “This result is the best demonstration so far of using a neutrino detector to probe the heartbeat of a nuclear reactor.”

## May 03, 2017

### ZapperZ - Physics and Physicists

The US 2017 Omnibus Budget
Finally, the US Congress has a 2017 budget, and I'm glad they didn't follow the disastrous budget proposal of Donald Trump. Neither NSF nor the DOE Office of Science fared badly, though NSF did worse than I expected. Still, what a surprise to see an increase in funding for HEP after years of neglect and budget cuts.

The Office of Science supports six research programs, and there were winners and losers among them. On the plus side, advanced scientific computing research, which funds much of DOE's supercomputing capabilities, gets a 4.2% increase to $647 million. High energy physics gets a boost of 3.8% to $825 million. Basic energy sciences, which funds work in chemistry, material science, and condensed matter physics and runs most of DOE's large user facilities, gets a bump up of 1.2% to $1.872 billion. Nuclear physics gets a 0.8% raise to $622 million; biological and environmental research inches up 0.5% to $612 million. In contrast, the fusion energy sciences program sees its budget fall a whopping 13.2% to $380 million.

Physics funding will remain challenging for the foreseeable future, but at least this will not cause a major panic. I've been highly critical of the US Congress on many issues, but I will tip my hat to them this time for standing up to the ridiculous budget that came out of the Trump administration earlier.

Zz.

## May 02, 2017

### Symmetrybreaking - Fermilab/SLAC

Mystery glow of Milky Way likely not dark matter

According to the Fermi LAT collaboration, the galaxy’s excessive gamma-ray glow likely comes from pulsars, the remains of collapsed ancient stars.

A mysterious gamma-ray glow at the center of the Milky Way is most likely caused by pulsars, the incredibly dense, rapidly spinning cores of collapsed ancient stars that were up to 30 times more massive than the sun.

That’s the conclusion of a new analysis by an international team of astrophysicists on the Fermi LAT collaboration. The findings cast doubt on previous interpretations of the signal as a potential sign of dark matter, a form of matter that accounts for 85 percent of all matter in the universe but that so far has evaded detection.

“Our study shows that we don’t need dark matter to understand the gamma-ray emissions of our galaxy,” says Mattia Di Mauro from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the US Department of Energy's SLAC National Accelerator Laboratory. “Instead, we have identified a population of pulsars in the region around the galactic center, which sheds new light on the formation history of the Milky Way.”

Di Mauro led the analysis, which looked at the glow with the Large Area Telescope on NASA’s Fermi Gamma-ray Space Telescope, which has been orbiting Earth since 2008. The LAT, a sensitive “eye” for gamma rays, the most energetic form of light, was conceived of and assembled at SLAC, which also hosts its operations center.

The collaboration’s findings, submitted to The Astrophysical Journal for publication, are available as a preprint.

### A mysterious glow

Dark matter is one of the biggest mysteries of modern physics. Researchers know that dark matter exists because it bends light from distant galaxies and affects how galaxies rotate. But they don’t know what the substance is made of. Most scientists believe it’s composed of yet-to-be-discovered particles that almost never interact with regular matter other than through gravity, making it very hard to detect them.

One way scientific instruments might catch a glimpse of dark matter particles is when the particles either decay or collide and destroy each other. “Widely studied theories predict that these processes would produce gamma rays,” says Seth Digel, head of KIPAC’s Fermi group. “We search for this radiation with the LAT in regions of the universe that are rich in dark matter, such as the center of our galaxy.”

Previous studies have indeed shown that there are more gamma rays coming from the galactic center than expected, fueling some scientific papers and media reports that suggest the signal might hint at long-sought dark matter particles. However, gamma rays are produced in a number of other cosmic processes, which must be ruled out before any conclusion about dark matter can be drawn. This is particularly challenging because the galactic center is extremely complex, and astrophysicists don’t know all the details of what’s going on in that region.

Most of the Milky Way’s gamma rays originate in gas between the stars that is lit up by cosmic rays, charged particles produced in powerful star explosions called supernovae. This creates a diffuse gamma-ray glow that extends throughout the galaxy. Gamma rays are also produced by supernova remnants, pulsars—collapsed stars that emit “beams” of gamma rays like cosmic lighthouses—and more exotic objects that appear as points of light.

“Two recent studies by teams in the US and the Netherlands have shown that the gamma-ray excess at the galactic center is speckled, not smooth as we would expect for a dark matter signal,” says KIPAC’s Eric Charles, who contributed to the new analysis. “Those results suggest the speckles may be due to point sources that we can’t see as individual sources with the LAT because the density of gamma-ray sources is very high and the diffuse glow is brightest at the galactic center.”

### Remains of ancient stars

The new study takes the earlier analyses to the next level, demonstrating that the speckled gamma-ray signal is consistent with pulsars.

“Considering that about 70 percent of all point sources in the Milky Way are pulsars, they were the most likely candidates,” Di Mauro says. “But we used one of their physical properties to come to our conclusion. Pulsars have very distinct spectra—that is, their emissions vary in a specific way with the energy of the gamma rays they emit. Using the shape of these spectra, we were able to model the glow of the galactic center correctly with a population of about 1,000 pulsars and without introducing processes that involve dark matter particles.”
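For intuition about the "distinct spectra" mentioned here: gamma-ray pulsars are conventionally modelled as a power law with an exponential cutoff at a few GeV. A minimal sketch of that standard parametrization follows; the particular parameter values are illustrative choices of mine, not those of the Fermi-LAT fit.

```python
import math

def pulsar_spectrum(E, gamma=1.5, E_cut=3.0):
    """dN/dE ~ E^-gamma * exp(-E / E_cut), with E and E_cut in GeV.

    The exponential cutoff at a few GeV is the spectral fingerprint that
    helps distinguish a pulsar population from smoother alternatives.
    """
    return E ** (-gamma) * math.exp(-E / E_cut)

# The flux falls as a power law at low energies and dies off quickly
# above the cutoff:
for E in (1.0, 3.0, 10.0, 30.0):
    print(f"{E:5.1f} GeV: {pulsar_spectrum(E):.3e}")
```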

The team is now planning follow-up studies with radio telescopes to determine whether the identified sources are emitting their light as a series of brief light pulses—the trademark that gives pulsars their name.

Discoveries in the halo of stars around the center of the galaxy, the oldest part of the Milky Way, also reveal details about the evolution of our galactic home, just as ancient remains teach archaeologists about human history.

“Isolated pulsars have a typical lifetime of 10 million years, which is much shorter than the age of the oldest stars near the galactic center,” Charles says. “The fact that we can still see gamma rays from the identified pulsar population today suggests that the pulsars are in binary systems with companion stars, from which they leach energy. This extends the life of the pulsars tremendously.”

### Dark matter remains elusive

The new results add to other data that are challenging the interpretation of the gamma-ray excess as a dark matter signal.

“If the signal were due to dark matter, we would expect to see it also at the centers of other galaxies,” Digel says. “The signal should be particularly clear in dwarf galaxies orbiting the Milky Way. These galaxies have very few stars, typically don’t have pulsars and are held together because they have a lot of dark matter. However, we don’t see any significant gamma-ray emissions from them.”

The researchers believe that a recently discovered strong gamma-ray glow at the center of the Andromeda galaxy, the major galaxy closest to the Milky Way, may also be caused by pulsars rather than dark matter.

But the last word may not have been spoken. Although the Fermi-LAT team studied a large area of 40 degrees by 40 degrees around the Milky Way’s galactic center (the diameter of the full moon is about half a degree), the extremely high density of sources in the innermost four degrees makes it very difficult to see individual ones and rule out a smooth, dark matter-like gamma-ray distribution, leaving some room for dark matter signals to hide.

This work was funded by NASA and the DOE Office of Science, as well as agencies and institutes in France, Italy, Japan and Sweden.

Editor's note: A version of this article was originally published by SLAC National Accelerator Laboratory.

### John Baez - Azimuth

Diamondoids

I have a new favorite molecule: adamantane. As you probably know, someone is said to be ‘adamant’ if they are unshakeable, immovable, inflexible, unwavering, uncompromising, resolute, resolved, determined, firm, rigid, or steadfast. But ‘adamant’ is also a legendary mineral, and the etymology is the same as that for ‘diamond’.

The molecule adamantane, shown above, features 10 carbon atoms arranged just like a small portion of a diamond crystal! It’s a bit easier to see this if you ignore the 16 hydrogen atoms and focus on the carbon atoms and bonds between those:

It’s a somewhat strange shape.

Puzzle 1. Give a clear, elegant description of this shape.

Puzzle 2. What is its symmetry group? This is really two questions: I’m asking about the symmetry group of this shape as an abstract graph, but also the symmetry group of this graph as embedded in 3d Euclidean space, counting both rotations and reflections.

Puzzle 3. How many ‘kinds’ of carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of vertices, how many orbits are there? (Again this is really two questions, depending on which symmetry group we use.)

Puzzle 4. How many kinds of bonds between carbon atoms does adamantane have? In other words, when we let the symmetry group of this graph act on the set of edges, how many orbits are there? (Again, this is really two questions.)
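For the abstract-graph half of Puzzle 2, a brute-force count is small enough to run directly. This is a sketch using only the standard library, and the vertex labelling is my own: the 10-carbon skeleton has four CH carbons of degree 3 with one CH₂ bridge carbon (degree 2) between each pair of them, and any automorphism must preserve degree, so only 4! × 6! = 17,280 candidate permutations need checking.

```python
from itertools import permutations

# Carbon skeleton of adamantane: four degree-3 carbons (0..3), and one
# degree-2 bridge carbon (4..9) between each pair of them.
branch = range(4)
pairs = [(i, j) for i in branch for j in branch if i < j]
bridge = {p: 4 + k for k, p in enumerate(pairs)}

edges = set()
for (i, j), b in bridge.items():
    edges.add(frozenset((i, b)))
    edges.add(frozenset((j, b)))

# Count permutations of the 10 vertices that preserve adjacency.
# Degree-preservation means branch and bridge vertices only permute
# among themselves.
count = 0
for pb in permutations(branch):
    for ps in permutations(range(4, 10)):
        perm = dict(enumerate(pb))
        perm.update({4 + k: ps[k] for k in range(6)})
        if all(frozenset((perm[a], perm[c])) in edges for a, c in map(tuple, edges)):
            count += 1

print(count)  # -> 24
```

Each permutation of the four branch carbons extends in exactly one way to the bridges, so the graph has 24 automorphisms.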

You can see the relation between adamantane and a diamond if you look carefully at a diamond crystal, as shown in this image by H. K. D. H. Bhadeshia:

or this one by Greg Egan:

Even with these pictures at hand, I find it a bit tough to see the adamantane pattern lurking in the diamond! Look again:

Adamantane has an interesting history. The possibility of its existence was first suggested by a chemist named Decker at a conference in 1924. Decker called this molecule ‘decaterpene’, and registered surprise that nobody had made it yet. After some failed attempts, it was first synthesized by the Croatian-Swiss chemist Vladimir Prelog in 1941. He later won the Nobel prize for his work on stereochemistry.

However, long before it was synthesized, adamantane was isolated from petroleum by the Czech chemists Landa, Machacek and Mzourek! They did it in 1932. They only managed to make a few milligrams of the stuff, but we now know that petroleum naturally contains between 0.0001% and 0.03% adamantane!

Adamantane can be crystallized:

but ironically, the crystals are rather soft. It’s all that hydrogen. It’s also amusing that adamantane has an odor: supposedly it smells like camphor!

Adamantane is just the simplest of the molecules called diamondoids. Here are a few:

2 is called diamantane.

3 is called triamantane.

4 is called isotetramantane, and it comes in two mirror-image forms.

Here are some better pictures of diamantane:

People have done lots of chemical reactions with diamondoids. Here are some things they’ve done with the next one, pentamantane:

Many different diamondoids occur naturally in petroleum. Though the carbon in diamonds is not biological in origin, the carbon in diamondoids found in petroleum is. This was shown by studying ratios of carbon isotopes.

Eric Drexler has proposed using diamondoids for nanotechnology, but he’s talking about larger molecules than those shown here.

For more fun along these lines, try:

Diamonds and triamonds, Azimuth, 11 April 2016.

## April 28, 2017

### Symmetrybreaking - Fermilab/SLAC

See Boston University physicist Tulika Bose's answers to readers’ questions about research at the Large Hadron Collider.


## April 27, 2017

### John Baez - Azimuth

Biology as Information Dynamics (Part 2)

Here’s a video of the talk I gave at the Stanford Complexity Group:

You can see slides here:

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.
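As a concrete toy version of the "evolution as learning" idea in the abstract (a minimal example of my own, not from the talk): under the replicator equation with fixed fitnesses, the relative information of the eventual winning distribution with respect to the current population only ever decreases.

```python
import math

fitness = [1.0, 2.0, 3.0]        # fixed fitness for each replicator type
x = [1 / 3, 1 / 3, 1 / 3]        # initial population fractions

def step(x, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - <f>)."""
    mean = sum(xi * fi for xi, fi in zip(x, fitness))
    return [xi + dt * xi * (fi - mean) for xi, fi in zip(x, fitness)]

# With these fitnesses the population converges to type 3, so the
# Kullback-Leibler divergence D(q || x) with q = (0, 0, 1) is -log x_3.
divergence = -math.log(x[2])
for _ in range(1000):
    x = step(x)
    new_div = -math.log(x[2])
    assert new_div <= divergence  # relative information acts as a Lyapunov function
    divergence = new_div

print(f"x = {[round(xi, 4) for xi in x]}, D = {divergence:.4f}")
```

The mean fitness never exceeds the maximum fitness, so the winning type's share grows monotonically and the divergence shrinks, which is the "learning" in miniature.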

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.

### Symmetrybreaking - Fermilab/SLAC

Did you see it?

Boston University physicist Tulika Bose explains why there's more than one large, general-purpose particle detector at the Large Hadron Collider.

Physicist Tulika Bose of the CMS experiment at CERN explains how the CMS and ATLAS experiments complement one another at the Large Hadron Collider.

Video: Ask Symmetry - Why is there more than one detector at the Large Hadron Collider?

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Tulika Bose's answers to readers' questions about the LHC on Twitter here.

### Axel Maas - Looking Inside the Standard Model

A shift in perspective - or: what makes an electron an electron?
We have recently published a new paper. It is based partly on the master's thesis of my student Larissa Egger, but also involves another scientist from a different university. In this paper, we look at a quite fundamental question: How do we distinguish the matter particles? What makes an electron an electron and a muon a muon?

In a standard treatment, this identity is just an integral part of the particle. However, results from the late 1970s and early 1980s, as well as our own research, point in a somewhat different direction. I described the basic idea some time back. The idea was that what we perceive as an electron is not really just an electron: it is itself composed of two particles, a Higgs and something I would call a constituent electron. Back then, we were just thinking about how to test this idea.

This took some time.

We thought this was an outrageous question, calling into question things that seemed almost certain.

Now we see: Oh, this was just the beginning. And things got crazier at every step.

But as theoreticians, if we determine the consequences of a theory, we should not stop because something sounds crazy. Almost everything we take for granted today, like quantum physics, sounded crazy in the beginning. But if you have reason to believe that a theory is right, then you have to take it seriously. And then its consequences are what they are. Of course, we may just have made an error somewhere. But that remains to be checked, preferably by independent research groups. After all, at some point it is hard to see the forest for the trees. But so far, we are convinced that we have made at most quantitative errors, not qualitative ones. So the concept appears sound to us. And therefore I keep on writing about it here.

The older work was just the beginning. We just followed its suggestion to take the standard model of particle physics not only seriously, but also literally.

I will start out with the leptons, i.e. the electron, muon, and tauon, as well as the three neutrinos. I will come back to the quarks later.

The first thing we established was that it is indeed possible to think of particles like the electron as a kind of bound state of other particles, without upsetting what we have measured in experiments. We also gave an estimate of what would be necessary to test this statement experimentally. Though really exact numbers are, as always, complicated, we believe that the next generation of experiments colliding electrons and positrons could detect the difference between the conventional picture and our results. In fact, the way they are currently designed makes them ideally suited to do so. However, they will not provide a measurement before roughly 2035 or so. We also understand quite well why we would need these machines to see the effect. So right now, we will have to sit and wait. Keep your fingers crossed that they will be built, if you are interested in the answer.

Naturally, we therefore asked ourselves whether there is an alternative. The unfortunate thing is that you need at least enough energy to copiously produce the Higgs to test this. The only existing machine able to do so is the LHC at CERN. However, the LHC collides protons, so we had to discuss whether the same effect also occurs for protons. Now, a proton is much more complicated than any lepton, because it is already built from quarks and gluons. Still, what we found is the following: if we take the standard model seriously as a theory, then a proton cannot be a theoretically well-defined entity if it is made only of three quarks. Rather, it needs to have some kind of Higgs component. And this should be felt somehow. However, for the same reason as with the leptons, only the LHC could test it. And here comes the problem. Because the proton is made up of three quarks, it already has a very complicated structure. Furthermore, even at the LHC, the effect of the additional Higgs component will likely be tiny. In fact, probably the best chance to probe it will be if this Higgs component can be linked to the production of the heaviest known quark, the top quark. The reason is that the top quark is very sensitive to the Higgs. While the LHC indeed produces a lot of top quarks, producing a top quark linked to a Higgs is much harder. Even the strongest such effect has not yet been seen beyond doubt. And what we find will only be a (likely small) correction to it. There is still a chance, but this will need much more data. But the LHC will keep on running for a long time. So maybe it will be enough. We will see.

So, this is what we did. In fact, all of this will be part of the review I am writing, so more will be said about it there.

If you are still reading, I want to give you some more of the really weird stuff that came out.

The first is that life is actually even more complicated. Even without all of what I have written above, there are actually two types of electrons in the standard model: one which is affected by the weak interaction, and one which is not. Other than that, they are the same: they have the same mass, and they are electromagnetically identical. The same is true for all leptons and quarks. The matter all around us is actually a mixture of both types. However, the subtle effects I have been talking about so far only affect the type which feels the weak interaction. There is a technical reason for this (the weak interaction is a so-called gauge symmetry). However, it makes detecting everything even harder, because it only works if we get the 'right' type of an electron.

The second is that electrons and quarks come in three sets of four particles each, the so-called generations or families. The only difference between these copies is their mass. Other than that, there is no difference that we know of; we cannot exclude one, but no experiment says otherwise with sufficient confidence. This is one of the central mysteries of particle physics. It occupies, and keeps occupying, many physicists. Now, we had the following idea: if we provide internal structure to the members of a family, could it be that the different generations are just different arrangements of that internal structure? That such things are possible in principle is known already from atoms. Here, the problem is even more involved, because of the two types of each of the quarks and leptons. This was just a speculation. However, we found that it is, at least logically, possible. Unfortunately, it is still too complicated to provide definite quantitative predictions of how this can be tested. But at least it seems not to be at odds with what we already know. If this were true, it would be a major step in understanding particle physics. But we are still far, far away from that. Still, we are motivated to continue down this road.

## April 25, 2017

### Symmetrybreaking - Fermilab/SLAC

Archaeology meets particle physics

Undergraduates search for hidden tombs in Turkey using cosmic-ray muons.

While the human eye is an amazing feat of evolution, it has its limitations. What we can see tells only a sliver of the whole story. Often, it is what is on the inside that counts.

To see a broken femur, we pass X-rays through a leg and create an image on a metal film. Archaeologists can use a similar technique to look for ancient cities buried in hillsides. Instead of using X-rays, they use muons, particles that are constantly raining down on us from the upper atmosphere.

Muons are heavy cousins of the electron and are produced when single-atom meteorites called cosmic rays collide with the Earth’s atmosphere. Hold your hand up and a few muons will pass through it every second.

Physics undergraduates at Texas Tech University, led by Professors Nural Akchurin and Shuichi Kunori, are currently developing detectors that will act like an X-ray film and record the patterns left behind by muons as they pass through hillsides in Turkey. Archaeologists will use these detectors to map the internal structure of hills and look for promising places to dig for buried archaeological sites.

Like X-rays, muons are readily absorbed by thick, dense materials but can traverse through lighter materials. So they can be stopped by rock but move easily through the air in a buried cavern.

The detector under development at Texas Tech will measure the number of cosmic-ray muons that make it through the hill. An unexpected excess could mean that there's a hollow subterranean structure facilitating the muons' passage.

“We’re looking for a void, or a tomb, that the archaeologists can investigate to learn more about the history of the people that were buried there,” says Hunter Cymes, one of the students working on the project.

The technique of using cosmic muons to probe for subterranean structures was developed almost half a century ago. Luis Alvarez, a Nobel Laureate in Physics, first used this technique to look inside the Second Pyramid of Chephren, one of the three great pyramids of Egypt. Since then, it has been used for many different applications, including searching for hidden cavities in other pyramids and estimating the lava content of volcanoes.

According to Jason Peirce, another undergraduate student working on this project, those previous applications had resolutions of about 10 meters. “We’re trying to make that smaller, somewhere in the range of 2 to 5 meters, to find a smaller room than what’s previously been done.”

They hope to accomplish this by using an array of scintillators, a type of plastic that can be used to detect particles. “When a muon passes through it, it absorbs some of that energy and creates light,” says student Hunter Cymes. That light can then be detected and measured and the data stored for later analysis.

Unfortunately, muons with enough energy to travel through a hill and reach the detector are relatively rare, meaning that the students will need to develop robust detectors which can collect data over a long period of time. Just like it’s hard to see in dim light, it’s difficult to reconstruct the internal structure of a hill with only a handful of muons.
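A back-of-the-envelope Poisson estimate shows why such long exposures are needed. All the numbers here are assumptions of mine for illustration, not the group's actual rates:

```python
# Assumed through-hill muon rate and the fractional excess a void would
# produce; both values are illustrative, not measured.
base_rate = 2.0      # muons per hour reaching the detector through rock
excess = 0.10        # a hollow chamber lets ~10% more muons through

# With Poisson counting statistics, the significance of the excess after
# an exposure time T is roughly (excess * base_rate * T) / sqrt(base_rate * T),
# so reaching n_sigma requires:
n_sigma = 5.0
T_hours = (n_sigma / excess) ** 2 / base_rate
print(f"~{T_hours:.0f} hours ({T_hours / 24:.0f} days) of counting")  # -> ~1250 hours (52 days)
```

Doubling the rate (a bigger detector) or the excess (a bigger void) cuts the required exposure dramatically, which is why detector size and resolution trade off against counting time.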

Aashish Gupta, another undergraduate working on this project, is currently developing a simulation of cosmic-ray muons, the hill, and the detector prototype. The group hopes to use the simulation to guide their design process by predicting how well different designs will work and how much data they will need to take.

As Peirce describes it, they are “getting some real, hands-on experience putting this together while also keeping in mind that we need to have some more of these results from the simulation to put together the final design.”

They hope to finish building the prototype detector within the next few months and are optimistic about having a final design by next fall.

## April 24, 2017

### Symmetrybreaking - Fermilab/SLAC

A tiny droplet of the early universe?

Particles seen by the ALICE experiment hint at the formation of quark-gluon plasma during proton-proton collisions.

About 13.8 billion years ago, the universe was a hot, thick soup of quarks and gluons—the fundamental components that eventually combined into protons, neutrons and other hadrons.

Scientists can produce this primitive particle soup, called the quark-gluon plasma, in collisions between heavy ions. But for the first time physicists on an experiment at the Large Hadron Collider have observed particle evidence of its creation in collisions between protons as well.

The LHC collides protons during the majority of its run time. This new result, published in Nature Physics by the ALICE collaboration, challenges long-held notions about the nature of those proton-proton collisions and about possible phenomena that were previously missed.

“Many people think that protons are too light to produce this extremely hot and dense plasma,” says Livio Bianchi, a postdoc at the University of Houston who worked on this analysis. “But these new results are making us question this assumption.”

Scientists at the LHC and at the US Department of Energy’s Brookhaven National Laboratory’s Relativistic Heavy Ion Collider, or RHIC, have previously created quark-gluon plasma in gold-gold and lead-lead collisions.

In the quark-gluon plasma, mid-sized quarks—such as strange quarks—freely roam and eventually bond into bigger, composite particles (similar to the way quartz crystals grow within molten granite rocks as they slowly cool). These hadrons are ejected as the plasma fizzles out and serve as a telltale signature of their soupy origin. ALICE researchers noticed numerous proton-proton collisions emitting strange hadrons at an elevated rate.

“In proton collisions that produced many particles, we saw more hadrons containing strange quarks than predicted,” says Rene Bellwied, a professor at the University of Houston. “And interestingly, we saw an even bigger gap between the predicted number and our experimental results when we examined particles containing two or three strange quarks.”

From a theoretical perspective, a proliferation of strange hadrons is not enough to definitively confirm the existence of quark-gluon plasma. Rather, it could be the result of some other unknown processes occurring at the subatomic scale.

“This measurement is of great interest to quark-gluon-plasma researchers who wonder how a possible QGP signature can arise in proton-proton collisions,” says Urs Wiedemann, a theorist at CERN. “But it is also of great interest for high energy physicists who have never encountered such a phenomenon in proton-proton collisions.”

Earlier research at the LHC found that the spatial orientation of particles produced during some proton-proton collisions mirrored the patterns created during heavy-ion collisions, suggesting that maybe these two types of collisions have more in common than originally predicted. Scientists working on the ALICE experiment will need to explore multiple characteristics of these strange proton-proton collisions before they can confirm if they are really seeing a miniscule droplet of the early universe.

“Quark-gluon plasma is a liquid, so we also need to look at the hydrodynamic features,” Bianchi says. “The composition of the escaping particles is not enough on its own.”

This finding comes from data collected during the first run of the LHC, between 2009 and 2013. More research over the next few years will help scientists determine whether the LHC can really make quark-gluon plasma in proton-proton collisions.

“We are very excited about this discovery,” says Federico Antinori, spokesperson of the ALICE collaboration. “We are again learning a lot about this extreme state of matter. Being able to isolate the quark-gluon-plasma-like phenomena in a smaller and simpler system, such as the collision between two protons, opens up an entirely new dimension for the study of the properties of the primordial state that our universe emerged from.”

Other experiments, such as those using RHIC, will provide more information about the observable traits and experimental characteristics of quark-gluon plasmas at lower energies, enabling researchers to gain a more complete picture of the characteristics of this primordial particle soup.

“The field makes far more progress by sharing techniques and comparing results than we would be able to with one facility alone,” says James Dunlop, a researcher at RHIC. “We look forward to seeing further discoveries from our colleagues in ALICE.”

### John Baez - Azimuth

Complexity Theory and Evolution in Economics

This book looks interesting:

• David S. Wilson and Alan Kirman, editors, Complexity and Evolution: Toward a New Synthesis for Economics, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

Agent_Zero is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

Abstract. Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.

It sounds like a fun and productive project as long as one keeps one’s wits about one. It’s hard to draw conclusions about human behavior from such simplified agents. One can argue about this, and of course economists will. But regardless, one can draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward the benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, Evonomics, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.
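The mechanism Couzin describes can be illustrated with a toy simulation — a hedged sketch of my own, not his actual model, and the numbers (a 55%-reliable shared cue, 60%-reliable private cues, agents trusting the shared cue half the time) are invented for illustration. Each agent sees one shared "high-correlation" cue plus an independent private cue, and the group decides by majority vote:

```python
import random

def group_decision(n_agents, p_shared, p_private, rng):
    """One collective decision by majority vote.

    Each agent follows the single shared (high-correlation) cue half the
    time, and otherwise follows its own private (low-correlation) cue."""
    shared_correct = rng.random() < p_shared   # one cue, seen by everyone
    correct_votes = 0
    for _ in range(n_agents):
        if rng.random() < 0.5:                 # agent relies on the shared cue
            correct_votes += shared_correct
        else:                                  # agent relies on a private cue
            correct_votes += rng.random() < p_private
    return correct_votes > n_agents / 2

def accuracy(n_agents, trials=20000, seed=0):
    """Fraction of trials in which the group decision is correct."""
    rng = random.Random(seed)
    return sum(group_decision(n_agents, 0.55, 0.60, rng)
               for _ in range(trials)) / trials
```

Directionally this reproduces the effect described above: the large group locks onto the marginally reliable shared cue (its accuracy approaches 55%), while the small group still benefits from the diversity of private information.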

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.
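As a purely illustrative sketch of the idea — the class name, update rules, and coefficients below are my own invention, not Epstein’s actual specification — an Agent_Zero-style actor can be modelled as an emotional and a deliberative module feeding a disposition, with a social module that couples each agent to the dispositions of the others:

```python
import random

class ToyAgent:
    """Sketch of an Agent_Zero-style actor: an emotional and a deliberative
    module feed a disposition, and a social module couples agents together.
    Update rules here are invented for illustration, not Epstein's own."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.affect = 0.0   # emotional module: conditioned response, decays
        self.belief = 0.0   # deliberative module: running risk estimate

    def observe(self, adverse_event):
        self.affect = 0.9 * self.affect + (0.5 if adverse_event else 0.0)
        self.belief = 0.95 * self.belief + (0.05 if adverse_event else 0.0)

    def disposition(self):
        return self.affect + self.belief

    def acts(self, others):
        # Social module: an agent can be pushed past its threshold by the
        # dispositions of others even when its own evidence is weak.
        social = sum(o.disposition() for o in others) / max(len(others), 1)
        return self.disposition() + 0.5 * social > self.threshold

def simulate(n_agents=10, steps=50, p_event=0.3, seed=0):
    """Return how many agents act after `steps` rounds of observation."""
    rng = random.Random(seed)
    agents = [ToyAgent() for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            a.observe(rng.random() < p_event)  # each agent sees its own events
    return sum(a.acts([b for b in agents if b is not a]) for a in agents)
```

The social term is what makes collective dynamics possible at all: an agent whose own affect and belief are below threshold can still be tipped into acting by the dispositions of its neighbours.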

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.

## April 23, 2017

### The n-Category Cafe

On Clubs and Data-Type Constructors

Guest post by Pierre Cagne

The Kan Extension Seminar II continues with a third consecutive paper by Kelly, entitled On clubs and data-type constructors. It deals with the notion of club, first introduced by Kelly as an attempt to encode theories of categories with structure involving some kind of coherence issues. Astonishingly enough, there is no mention of operads whatsoever in this article. (To be fair, there is a mention of “those Lawvere theories with only associativity axioms”…) Is it because the notion of club was developed in several stages at various time periods, making operads less identifiable among this work? Or does Kelly judge the link between the two notions irrelevant? I am not sure, but in any case I think it is quite interesting to read this article in the light of what we now know about operads.

Before starting with the mathematical content, I would like to thank Alexander, Brendan and Emily for organizing this online seminar. It is a great opportunity to take a deeper look at seminal papers that would have been hard to explore all by oneself. On that note, I am also very grateful for the rich discussions we have with my fellow participants.

### Non-symmetric Set-operads

Let us take a look at the simplest kind of operads: non-symmetric $\mathsf{Set}$-operads. These are, informally, collections of operations with given arities, closed under composition. The usual way to define them is to endow the category $[\mathbf{N},\mathsf{Set}]$ of $\mathbf{N}$-indexed families of sets with the substitution monoidal product (see Simon’s post): for two such families $R$ and $S$, $$(R \circ S)_n = \sum_{k_1+\dots+k_m = n} R_m \times S_{k_1} \times \dots \times S_{k_m} \quad \forall n \in \mathbf{N}$$ This monoidal product is better understood when elements of $R_n$ and $S_n$ are thought of as branchings with $n$ inputs and one output: $R\circ S$ is then obtained by plugging the outputs of elements of $S$ into the inputs of elements of $R$. A non-symmetric operad is defined to be a monoid for that monoidal product, a typical example being the family $(\mathsf{Set}(X^n,X))_{n\in\mathbf{N}}$ for a set $X$.
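Concretely, the substitution product can be sketched in a few lines of code — an illustrative encoding of my own, with an $\mathbf{N}$-indexed family represented as a dict mapping each arity to a finite set of operation labels:

```python
from itertools import product

def compositions(n, m):
    """All m-tuples (k_1, ..., k_m) of non-negative integers with sum n."""
    if m == 0:
        if n == 0:
            yield ()
        return
    for k in range(n + 1):
        for rest in compositions(n - k, m - 1):
            yield (k,) + rest

def substitute(R, S, n):
    """(R . S)_n as a set of tuples (r, s_1, ..., s_m), where r has arity m
    and the arities of the s_i sum to n.  R, S map arities to label sets."""
    out = set()
    for m, ops in R.items():
        for ks in compositions(n, m):
            for r in ops:
                for ss in product(*(S.get(k, set()) for k in ks)):
                    out.add((r,) + ss)
    return out
```

For instance, with `R = {2: {"b"}}` (one binary operation), `substitute(R, R, 4)` contains the single shape `("b", "b", "b")`: a binary branching grafted onto both inputs of another, giving four leaves in total.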

We can now take advantage of the equivalence $[\mathbf{N},\mathsf{Set}] \overset{\sim}{\to} \mathsf{Set}/\mathbf{N}$ to equip the category $\mathsf{Set}/\mathbf{N}$ with a monoidal product. This equivalence maps a family $S$ to the coproduct $\sum_n S_n$ with the canonical map to $\mathbf{N}$, while the inverse equivalence maps a function $a\colon A \to \mathbf{N}$ to the family of fibers $(a^{-1}(n))_{n\in\mathbf{N}}$. It means that an $\mathbf{N}$-indexed family can be thought of either as a set of operations of arity $n$ for each $n$, or as a bunch of operations, each labeled by an integer giving its arity. Let us transport the monoidal product of $[\mathbf{N},\mathsf{Set}]$ to $\mathsf{Set}/\mathbf{N}$: given two maps $a\colon A \to \mathbf{N}$ and $b\colon B \to \mathbf{N}$, we compute the $\circ$-product of the families of fibers, and then take the coproduct to get $$A\circ B = \{ (x,y_1,\dots,y_m) : x \in A,\ y_i \in B,\ a(x) = m \}$$ with the map $A\circ B \to \mathbf{N}$ sending $(x,y_1,\dots,y_m)\mapsto \sum_i b(y_i)$. That is, the monoidal product is achieved by computing the following pullback:

where $L$ is the free monoid monad (or list monad) on $\mathsf{Set}$. Hence a non-symmetric operad is equivalently a monoid in $\mathsf{Set}/\mathbf{N}$ for this monoidal product. In Burroni’s terminology, it would be called an $L$-category with one object.
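The same product is easy to compute in the $\mathsf{Set}/\mathbf{N}$ picture (again an illustrative encoding of my own): an object is a set of labels with an arity map, and $A\circ B$ pairs each $x\in A$ with an $a(x)$-tuple of elements of $B$, graded by the total arity $\sum_i b(y_i)$ — exactly the pullback of $a\colon A\to\mathbf{N}$ against $LB\to\mathbf{N}$, where $LB$ is the set of lists of elements of $B$:

```python
from itertools import product

def slice_product(A, a, B, b):
    """A . B = {(x, (y_1, ..., y_m)) : x in A, y_i in B, a(x) = m},
    returned together with its arity map (x, ys) -> sum_i b(y_i)."""
    AB = [(x, ys) for x in A for ys in product(B, repeat=a(x))]
    def arity(pair):
        _, ys = pair
        return sum(b(y) for y in ys)
    return AB, arity
```

With one binary label and two labels of arities 0 and 1, the product has the four expected elements, of total arities 0, 1, 1 and 2.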

In my opinion, Kelly’s clubs are a way to generalize this point of view to other kinds of operads, replacing $\mathbf{N}$ by the groupoid $\mathbf{P}$ of bijections (to get symmetric operads) or the category $\mathsf{Fin}$ of finite sets (to get Lawvere theories). Obviously, $\mathsf{Set}/\mathbf{P}$ or $\mathsf{Set}/\mathsf{Fin}$ does not make much sense, but the earlier coproduct functor can easily be understood as a Grothendieck construction that adapts neatly to this context, providing functors: $$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P},\qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$ Of course, these functors are no longer equivalences, but this does not prevent us from looking for monoidal products on $\mathsf{Cat}/\mathbf{P}$ and $\mathsf{Cat}/\mathsf{Fin}$ that restrict to the substitution product on the essential images of these functors (i.e. the discrete opfibrations). Before going to the abstract definitions, you might keep in mind the following goal: we are seeking those small categories $\mathcal{C}$ such that $\mathsf{Cat}/\mathcal{C}$ admits a monoidal product reflecting, through the Grothendieck construction, the substitution product in $[\mathcal{C},\mathsf{Set}]$.

### Abstract clubs

Recall that in a monoidal category $\mathcal{E}$ with product $\otimes$ and unit $I$, any monoid $M$ with multiplication $m\colon M\otimes M\to M$ and unit $u\colon I\to M$ induces a monoidal structure on $\mathcal{E}/M$ as follows: the unit is $u\colon I\to M$ and the product of $f\colon X\to M$ by $g\colon Y\to M$ is the composite $$X\otimes Y \overset{f\otimes g}{\to} M\otimes M \overset{m}{\to} M$$ Be aware that this monoidal structure depends heavily on the monoid $M$. For example, even if $\mathcal{E}$ is finitely complete and $\otimes$ is the cartesian product, the induced structure on $\mathcal{E}/M$ is almost never the cartesian one. A notable fact about this structure on $\mathcal{E}/M$ is that its monoids are exactly the morphisms of monoids with codomain $M$.
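A minimal concrete instance of this construction (a sketch under my own encoding choices): take $\mathcal{E}=\mathsf{Set}$ with the cartesian product and a monoid $M$; the product of $f\colon X\to M$ and $g\colon Y\to M$ in $\mathsf{Set}/M$ is the map $X\times Y\to M$, $(x,y)\mapsto f(x)\cdot g(y)$:

```python
def slice_monoid_product(f, X, g, Y, mult):
    """Monoidal product on Set/M induced by a monoid (M, mult, unit):
    the product of f: X -> M and g: Y -> M is  mult . (f x g)  on X x Y."""
    XY = [(x, y) for x in X for y in Y]
    def h(pair):
        x, y = pair
        return mult(f(x), g(y))
    return XY, h
```

With $M=(\mathbf{N},+,0)$ and $f=g=\mathrm{len}$ on sets of strings, the product map sends a pair of strings to the sum of their lengths — which is exactly how the arity grading on $A\circ B$ in the previous section arises, and also why the induced structure is not the cartesian one (the projection-based structure would forget $g$ entirely).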

We will use this property in the monoidal category $[\mathcal{A},\mathcal{A}]$ of endofunctors on a category $\mathcal{A}$. I will not say a lot about size issues here, but of course we assume that there exist enough universes to make sense of $[\mathcal{A},\mathcal{A}]$ as a category even when $\mathcal{A}$ is not small but only locally small: that is, if smallness is relative to a universe $\mathbb{U}$, then we posit a universe $\mathbb{V}\ni\mathbb{U}$ big enough to contain the set of objects of $\mathcal{A}$, making $\mathcal{A}$ a $\mathbb{V}$-small category and hence $[\mathcal{A},\mathcal{A}]$ a locally $\mathbb{V}$-small category. The monoidal product on $[\mathcal{A},\mathcal{A}]$ is just composition of endofunctors, and the unit is the identity functor $\mathrm{Id}$. The monoids in that category are precisely the monads on $\mathcal{A}$, and for any such monad $S\colon\mathcal{A}\to\mathcal{A}$ with multiplication $n\colon SS\to S$ and unit $j\colon\mathrm{Id}\to S$, the slice category $[\mathcal{A},\mathcal{A}]/S$ inherits a monoidal structure with unit $j$ and product $\alpha\circ^S\beta$ given by the composite $$TR \overset{\alpha\beta}{\to} SS \overset{n}{\to} S$$ for any $\alpha\colon T\to S$ and $\beta\colon R\to S$.

Now a natural transformation $\gamma$ between two functors $F,G\colon\mathcal{A}\to\mathcal{A}$ is said to be cartesian whenever the naturality squares

are pullback diagrams. If $\mathcal{A}$ is finitely complete, as it will be for the rest of the post, it admits in particular a terminal object $1$, and the pasting lemma ensures that we only have to check the pullback property for the naturality squares of the form

to know whether $\gamma$ is cartesian. Let us denote by $\mathcal{M}$ the (possibly large) set of morphisms in $[\mathcal{A},\mathcal{A}]$ that are cartesian in this sense, and by $\mathcal{M}/S$ the full subcategory of $[\mathcal{A},\mathcal{A}]/S$ whose objects are in $\mathcal{M}$.

Definition. A club in $\mathcal{A}$ is a monad $S$ such that $\mathcal{M}/S$ is closed under the monoidal product $\circ^S$.

By “closed under $\circ^S$”, it is understood that the unit $j$ of $S$ is in $\mathcal{M}$ and that the product $\alpha\circ^S\beta$ of two elements of $\mathcal{M}$ with codomain $S$ is still in $\mathcal{M}$. A useful alternative characterization is the following:

Lemma. A monad $(S,n,j)$ is a club if and only if $n,j\in\mathcal{M}$ and $S\mathcal{M}\subseteq\mathcal{M}$.

It is clear from the definition of $\circ^S$ that the condition is sufficient, as $\alpha\circ^S\beta$ can be written as $n\cdot(S\beta)\cdot(\alpha T)$ via the exchange rule. Now suppose $S$ is a club: $j\in\mathcal{M}$ as it is the monoidal unit; $n\in\mathcal{M}$ comes from $\mathrm{id}_S\circ^S\mathrm{id}_S\in\mathcal{M}$; finally, for any $\alpha\colon T\to S$ in $\mathcal{M}$, we should have $\mathrm{id}_S\circ^S\alpha = n\cdot(S\alpha)\in\mathcal{M}$, and since $n\in\mathcal{M}$ already, this yields $S\alpha\in\mathcal{M}$ by the pasting lemma.

In particular, this lemma shows that monoids in $\mathcal{M}/S$, which coincide with monad maps $T\to S$ in $\mathcal{M}$ for some monad $T$, are clubs too. We shall denote the category of these by $\mathbf{Club}(\mathcal{A})/S$.

The lemma also implies that any cartesian monad, by which is meant a pullback-preserving monad with cartesian unit and multiplication, is automatically a club.

Now note that evaluation at $1$ provides an equivalence $\mathcal{M}/S \overset{\sim}{\to} \mathcal{A}/S1$ whose pseudo-inverse is given, for a map $f\colon K\to S1$, by the natural transformation defined pointwise as the pullback

The previous monoidal product on $\mathcal{M}/S$ can be transported to $\mathcal{A}/S1$ and bears a fairly simple description: given $f\colon K\to S1$ and $g\colon H\to S1$, the product, still denoted $f\circ^S g$, is the evaluation at $1$ of the composite $TR\to SS\to S$, where $T\to S$ corresponds to $f$ and $R\to S$ to $g$. Hence the explicit equivalence given above allows us to write this as

Definition. By abuse of terminology, a monoid in $\mathcal{A}/S1$ is said to be a club over $S1$.

### Examples of clubs

On $\mathsf{Set}$, the free monoid monad $L$ is cartesian, hence a club on $\mathsf{Set}$ in the above sense. Of course, we retrieve as $\circ^L$ the monoidal product of the introduction on $\mathsf{Set}/\mathbf{N}$. Hence clubs over $\mathbf{N}$ in $\mathsf{Set}$ are exactly the non-symmetric $\mathsf{Set}$-operads.

Considering $\mathsf{Cat}$ as a $1$-category, the free finite-coproduct category monad $F$ on $\mathsf{Cat}$ is a club in the above sense. This can be shown directly through the characterization we stated earlier: its unit and multiplication are cartesian and it maps cartesian transformations to cartesian transformations. Moreover, the obvious monad map $P\to F$ is cartesian, where $P$ is the free strict symmetric monoidal category monad on $\mathsf{Cat}$. Hence it yields for free that $P$ is also a club on $\mathsf{Cat}$. Note that the groupoid $\mathbf{P}$ of bijections is $P1$ and the category $\mathsf{Fin}$ of finite sets is $F1$. So it is now a matter of careful bookkeeping to establish that the functors (given by the Grothendieck construction) $$[\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P},\qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin}$$ are strong monoidal, where the domain categories are given Kelly’s substitution product. In other words, this exhibits symmetric $\mathsf{Set}$-operads and unenriched Lawvere theories as special clubs over $\mathbf{P}$ and $\mathsf{Fin}$.

We could say that we are done: we have a polished abstract notion of club that encompasses the different notions of operads on $\mathsf{Set}$ that we are used to. But what about operads on other categories? Also, the above monads $P$ and $F$ are actually $2$-monads on $\mathsf{Cat}$ when it is seen as a $2$-category. Can we extend the notion to this enrichment?

### Enriched clubs

We shall fix a cosmos $\mathcal{V}$ to enrich over (and denote as usual the underlying ordinary notions by a $0$-index), but we want it to have good properties, so that finite completeness makes sense in this enriched framework. Hence we ask that $\mathcal{V}$ be locally finitely presentable as a closed category (see David’s post). Taking a look at what we did in the ordinary case, we see that it relies heavily on the possibility of defining slice categories, which is not possible in full generality. Hence we ask that $\mathcal{V}$ be semicartesian, meaning that the monoidal unit of $\mathcal{V}$ is its terminal object: then for a $\mathcal{V}$-category $\mathcal{B}$, the slice category $\mathcal{B}/B$ is defined to have elements $1\to\mathcal{B}(X,B)$ as objects, and the space of morphisms between such $f\colon 1\to\mathcal{B}(X,B)$ and $f'\colon 1\to\mathcal{B}(X',B)$ is given by the following pullback in $\mathcal{V}_0$:

If we also want to be able to talk about the category of enriched clubs over something, we should be able to make a $\mathcal{V}$-category out of the monoids in a monoidal $\mathcal{V}$-category. Again, this is a priori not possible in general: the space of monoid maps between $(M,m,i)$ and $(N,n,j)$ is supposed to interpret “the subspace of those $f\colon M\to N$ such that $fi=j$ and $fm(x,y)=n(fx,fy)$ for all $x,y$”, where the latter equation has two occurrences of $f$ on the right. Hence we ask that $\mathcal{V}$ actually be a cartesian cosmos, so that the interpretation of such a subspace is the joint equalizer of

Moreover, these hypotheses also resolve the set-theoretical issues: because of all the hypotheses on $\mathcal{V}$, the underlying $\mathcal{V}_0$ identifies with the category $\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}]$ of $\mathsf{Set}$-valued left exact functors from the finitely presentable objects of $\mathcal{V}_0$. Hence, for a $\mathcal{V}$-category $\mathcal{A}$, the category of $\mathcal{V}$-endofunctors $[\mathcal{A},\mathcal{A}]$ is naturally a $\mathcal{V}'$-category for the cartesian cosmos $\mathcal{V}'=\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}']$, where $\mathsf{Set}'$ is the category of $\mathbb{V}$-small sets for a universe $\mathbb{V}$ big enough to contain the set of objects of $\mathcal{A}$. Hence we do not worry too much about size issues and consider everything to be a $\mathcal{V}$-category; the careful reader will replace $\mathcal{V}$ by $\mathcal{V}'$ when necessary.

In the context of categories enriched over a locally finitely presentable cartesian closed cosmos $\mathcal{V}$, everything we did in the ordinary case enriches directly. We call a $\mathcal{V}$-natural transformation $\alpha\colon T\to S$ cartesian just when it is so as a natural transformation $T_0\to S_0$, and denote the set of these by $\mathcal{M}$. For a $\mathcal{V}$-monad $S$ on $\mathcal{A}$, the category $\mathcal{M}/S$ is the full subcategory of the slice $[\mathcal{A},\mathcal{A}]/S$ spanned by the objects in $\mathcal{M}$.

Definition. A $\mathcal{V}$-club on $\mathcal{A}$ is a $\mathcal{V}$-monad $S$ such that $\mathcal{M}/S$ is closed under the induced $\mathcal{V}$-monoidal product of $[\mathcal{A},\mathcal{A}]/S$.

Now comes the fundamental proposition about enriched clubs:

Proposition. A $\mathcal{V}$-monad $S$ is a $\mathcal{V}$-club if and only if $S_0$ is an ordinary club.

In that case, the category of monoids in $\mathcal{M}/S$ consists of the clubs $T$ together with a $\mathcal{V}$-monad map $1\to[\mathcal{A},\mathcal{A}](T,S)$ in $\mathcal{M}$. We will still denote it $\mathbf{Club}(\mathcal{A})/S$, and its underlying ordinary category is $\mathbf{Club}(\mathcal{A}_0)/S_0$. We can once again take advantage of the $\mathcal{V}$-equivalence $\mathcal{M}/S\simeq\mathcal{A}/S1$ to equip the latter with a $\mathcal{V}$-monoidal product, and abuse terminology to call its monoids $\mathcal{V}$-clubs over $S1$. Proving all of this carefully requires notions of enriched factorization systems that are of no use for this post.

So basically, the slogan is: as long as $\mathcal{V}$ is a cartesian cosmos which is locally presentable as a closed category, everything works the same way as in the ordinary case, and $(-)_0$ preserves and reflects clubs.

### Examples of enriched clubs

As we said earlier, $F$ and $P$ are $2$-monads on $\mathsf{Cat}$, and the underlying $F_0$ and $P_0$ (earlier just denoted $F$ and $P$) are ordinary clubs. So $F$ and $P$ are $\mathsf{Cat}$-clubs, maybe better called $2$-clubs. Moreover, the map $P_0 \to F_0$ mentioned earlier is easily promoted to a $2$-natural transformation making $P$ a $2$-club over $\mathsf{Fin}$.

The free monoid monad $L$ on a cartesian cosmos $\mathcal{V}$ is a $\mathcal{V}$-club, and the clubs over $L1$ are precisely the non-symmetric $\mathcal{V}$-operads.
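The correspondence between clubs over $L1$ and non-symmetric operads can be made concrete. Here is a minimal Python sketch (my own illustration, not from Kelly's paper): over $\mathrm{Set}$, $L1$ is the set of natural-number arities, the "leg to $L1$" of a club assigns each operation its arity, and the simplest example is the endomorphism operad of a set.

```python
# A toy non-symmetric operad (illustration only).  Over Set, the free monoid
# monad L sends X to the set of lists over X, so L1 is the set of
# natural-number arities.  A club over L1 assigns each operation an arity;
# the simplest example is the endomorphism operad O(n) = maps A^n -> A.

def compose(f, gs, ks):
    """Operadic composition: plug g_1,...,g_n (of arities ks) into f."""
    def h(*args):
        out, i = [], 0
        for g, k in zip(gs, ks):
            out.append(g(*args[i:i + k]))
            i += k
        return f(*out)
    return h

# Example over A = {0, 1}: plug two binary ANDs into a binary OR, obtaining
# an operation of arity 2 + 2 = 4 (the arity map is additive, mirroring the
# monoid structure of L1).
f_or = lambda x, y: x | y
f_and = lambda x, y: x & y
h = compose(f_or, [f_and, f_and], [2, 2])
assert h(1, 1, 0, 0) == 1   # (1 & 1) | (0 & 0)
assert h(0, 1, 1, 0) == 0   # (0 & 1) | (1 & 0)
```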

Last but not least, a quite surprising example at first sight. Any small ordinary category $\mathcal{A}_0$ is naturally enriched in its category of presheaves $\mathrm{Psh}(\mathcal{A}_0)$, as the full subcategory of the cartesian cosmos $\mathcal{V} = \mathrm{Psh}(\mathcal{A}_0)$ spanned by the representables. Concretely, the space of morphisms between $A$ and $B$ is given by the presheaf $$\mathcal{A}(A,B) : C \mapsto \mathcal{A}_0(A \times C, B).$$ Hence a $\mathcal{V}$-endofunctor $S$ on $\mathcal{A}$ is the data of a map $A \mapsto SA$ on objects, together with, for any $A,B$, a $\mathcal{V}$-natural transformation $\sigma_{A,B} : \mathcal{A}(A,B) \to \mathcal{A}(SA,SB)$ satisfying some axioms. Now fixing $A,C \in \mathcal{A}$, the collection of $$(\sigma_{A,B})_C : \mathcal{A}_0(A \times C, B) \to \mathcal{A}_0(SA \times C, SB)$$ is equivalently, via Yoneda, a collection of $$\tilde{\sigma}_{A,C} \in \mathcal{A}_0(SA \times C, S(A \times C)).$$ The axioms that $\sigma$ satisfies as a $\mathcal{V}$-enriched natural transformation make $\tilde{\sigma}$ a strength for the endofunctor $S_0$.
Along this translation, a strong monad on $\mathcal{A}$ is then just a $\mathrm{Psh}(\mathcal{A}_0)$-monad. And it is very common, when modelling side effects by monads in Computer Science, to end up with strong cartesian monads. As cartesian monads, they are in particular ordinary clubs on $\mathcal{A}_0$. Hence, those are $\mathrm{Psh}(\mathcal{A}_0)$-monads whose underlying ordinary monad is a club: that is, they are $\mathrm{Psh}(\mathcal{A}_0)$-clubs on $\mathcal{A}$.
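To make the notion of strength concrete, here is a minimal Python sketch (my own example, not from the post), using the list monad (the free monoid monad on $\mathrm{Set}$), whose strength pairs a value with every element of a list.

```python
# Minimal sketch: the list monad on Set is strong.  Its strength
# st_{A,B} : A x T(B) -> T(A x B) pairs a value with every element.

def unit(x):
    return [x]

def bind(m, f):
    return [y for x in m for y in f(x)]

def strength(a, mb):
    """st_{A,B} : A x T(B) -> T(A x B) for the list monad T."""
    return [(a, b) for b in mb]

# One strength axiom: compatibility with the unit, st(a, unit(b)) == unit((a, b)).
assert strength(1, unit(2)) == unit((1, 2))

# Naturality in B: mapping f after strengthening equals strengthening after
# mapping f.
f = lambda b: b * 10
lhs = [(a, f(b)) for (a, b) in strength(1, [2, 3])]
rhs = strength(1, [f(b) for b in [2, 3]])
assert lhs == rhs
print(lhs)  # [(1, 20), (1, 30)]
```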

In conclusion, let me point out that there is much more in Kelly’s article than presented here, especially on local factorisation systems and their link to (replete) reflective subcategories with a left exact reflection. It is by the way quite surprising that he does not stay in full generality longer, as one could define an abstract club in just that framework. Maybe there is just no interesting example to come up with at that level of generality…

Also, a great deal of the examples of clubs comes from never-published work of Robin Cockett (or at least, I was not able to find it), so these motivations are quite difficult to follow.

Going a little further in the generalization, the cautious reader should have noticed that we did not say anything about coloured operads. For then we would not have to look at slice categories of the form $\mathcal{A}/S1$, but at categories of spans with one leg pointing to $SC$ (morally mapping an operation to its coloured arity) and the other one to $C$ (morally picking the output colour), where $C$ is the object of colours. Those spans actually appear above implicitly whenever a map of the form $! : X \to 1$ is involved (morally, this is the map picking the “only output colour” in a non-coloured operad). This somehow should be contained somewhere in Garner’s work on double clubs or in Shulman’s and Cruttwell’s unified framework for generalized multicategories. I am looking forward to learning more about that in the comments!

## April 22, 2017

### Lubos Motl - string vacua and pheno

Physicists, smart folks use same symbols for Lie groups, algebras for good reasons
I have always been amazed by the sheer stupidity and tastelessness of the people who aren't ashamed of the likes of Peter Woit. He is obviously a mediocre man with no talents, no achievements, no ethics, and no charisma but because of the existence of many people who have no taste and who want to have a leader in their jihad against modern physics, he was allowed to talk about physics as if his opinions mattered.

Woit is a typical failing-grade student who simply isn't and has never been the right material for college. His inability to learn string theory is a well-known aspect of this fact. But most people in the world – and maybe even most of the physics students – misunderstand string theory. But his low math-related intelligence is often manifested in things that are comprehensible to all average or better students of physics.

Two years ago, Woit argued that
the West Coast metric is the wrong one.
Now, unless you are a complete idiot, you must understand that the choice of the metric tensor – either $$({+}{-}{-}{-})$$ or $$({-}{+}{+}{+})$$ – is a pure convention. The metric tensor $$g^E_{\mu\nu}$$ of the first culture is simply equal to minus the metric tensor of the second culture $$g^W_{\mu\nu}$$, i.e. $$g^E_{\mu\nu} = - g^W_{\mu\nu}$$, and every statement or formula written with one set of conventions may obviously be translated to a statement written in the other, and vice versa. The equations or statements basically differ just by some signs. The translation from one convention to another is always possible and is no more mysterious than the translation from British to U.S. English or vice versa.

How stupid do you have to be to misunderstand this point, that there can't be any "wrong" convention for the sign? And how many people are willing to believe that someone's inability to get this simple point is compatible with the credibility of his comments about string theory?

Well, this individual has brought us a new ludicrous triviality of the same type,
Two Pet Peeves
We're told that we mustn't use the same notation for a Lie group and a Lie algebra. Why? Because Tony Zee, Pierre Ramond, and partially Howard Georgi were using the unified notation and Woit "remember[s] being very confused about this when I first started studying the subject". Well, Mr Woit, you were confused simply because you have never been college material. But it's easier to look for flaws in Lie groups and Lie algebras than in your own worthless existence, right?

Many physicists use the same symbols for Lie groups and the corresponding Lie algebras for a simple reason: they – or at least their behaviors near the identity (or any other point on the group manifold) – are completely equivalent. Except for some global behavior, the information about the Lie group is completely equivalent to the information about the corresponding Lie algebra. They're just two languages to talk about the same thing.

Just to be sure, in my and Dr Zahradník's textbook on linear algebra, we used the separate symbols and I love the fraktur fonts. In Czechia and maybe elsewhere, most people who are familiar with similar fonts at all call them "Schwabacher" but strictly speaking, Textura, Rotunda, Schwabacher, and Fraktur are four different typefaces. Schwabacher is older and was replaced by Fraktura in the 16th century. In 1941, Hitler decided that there were too many typos in the newspapers and that foreigners couldn't decode Fraktura which diminishes the importance of Germany abroad, so he banned Fraktura and replaced it with Antiqua.

When we published our textbook, I was bragging about the extensive index that was automatically created by a $${\rm \LaTeX}$$ macro. I told somebody: Tell me any word and you will see that we can find it in the index. In front of several witnesses, the first person wanted to humiliate me so he said: "A broken bone." So I abruptly responded: "The index doesn't include a 'broken bone' literally but there's a fracture in it!" ;-) Yes, I did include a comment about the font in the index. You know, the composition of the index was as simple as placing the command like \placeInTheIndex{fraktura} in a given place of the source. After several compilations, the correct index was automatically created. I remember that in 1993 when I began to type it, one compilation of the book took 15 minutes on the PCs in the computer lab of our hostel! When we received new 90 MHz PCs, the speed was almost doubled. ;-)

OK, I don't want to review elementary things because some readers know them and wouldn't learn anything new, while others don't know these things and a brief introduction wouldn't help them. But there is a simple relationship between a Lie algebra and a Lie group. You may obtain the elements of the group by a simple exponentiation of an element of a Lie algebra. For this reason, all the "structure coefficients" $$f_{ij}{}^k$$ that remember the structure of commutators $$[T_i,T_j] = f_{ij}{}^k T_k$$ contain the same information as all the curvature information about the group manifold near the identity. The Lie algebra simply is the tangent space of the group manifold at the identity (or any element) and all the commutators in the Lie algebra are equivalent to the information about the distortions that a projection of the neighborhood of the identity in the group manifold to a flat space causes.
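As a concrete sketch (my own illustration, standard textbook material): the generators $$T_i = \sigma_i/2$$ of $$su(2)$$ have structure constants given by the epsilon symbol, and exponentiating an algebra element lands in the group $$SU(2)$$.

```python
import numpy as np

# su(2) generators T_i = sigma_i / 2 (illustration of the algebra/group link).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s1 / 2, s2 / 2, s3 / 2]

# Structure constants in the physicists' convention [T_i, T_j] = i f_ijk T_k:
# for su(2), f_ijk is the totally antisymmetric epsilon symbol.
comm = T[0] @ T[1] - T[1] @ T[0]
assert np.allclose(comm, 1j * T[2])            # [T_1, T_2] = i T_3

# Exponentiating an algebra element gives a group element:
# U = exp(i * theta * T_3) is unitary with unit determinant, i.e. in SU(2).
theta = 0.7
U = np.diag(np.exp(1j * theta * np.diag(T[2])))
assert np.allclose(U.conj().T @ U, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U), 1.0)       # det = 1
```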

We often use the same symbols because it's harder to write the gothic fonts. More importantly,
whenever a theory, a solution, or a situation is connected with a particular Lie group, it's also connected with the corresponding Lie algebra, and vice versa!
That's the real reason why it doesn't matter whether you talk about a Lie group or a Lie algebra. We use their labels for "identification purposes" and the identification is the same whether you have a Lie group or a Lie algebra in mind. A very simple example:
There exist two rank-8, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are $$SO(32)$$ and $$E_8\times E_8$$, respectively.

There exist two rank-8, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are (or have the Lie algebras) $${\mathfrak so}(32)$$ and $${\mathfrak e}_8\oplus {\mathfrak e}_8$$, respectively.
I wrote the sentence in two ways. The first one sort of talks about the group manifolds while the second talks about Lie algebras. The information is obviously almost completely equivalent.

Well, except for subtleties – the global choices and identifications in the group manifold that don't affect the behavior of the group manifold in the vicinity of the identity element. If you want to be careful about these subtleties, you need to talk about the group manifolds, not just Lie algebras, because the Lie algebras "forget" the information about these global issues.

So you might want to be accurate and talk about the Lie groups in 10 dimensions – and say that the allowed heterotic gauge groups are $$E_8\times E_8$$ and $$SO(32)$$. However, this effort of yours would actually make things worse because when you use a language that has the ambition of being correct about the global issues, it's your responsibility to be correct about them, indeed, and chances are that your first guess will be wrong!

In particular, the "$$SO(32)$$" heterotic string also contains spinors. So a somewhat smart person could say that the gauge group of that heterotic string is actually $$Spin(32)$$, not $$SO(32)$$. However, that would be about as wrong as $$SO(32)$$ itself – almost no improvement – because the actual perturbative gauge group of this heterotic theory is isomorphic to $$Spin(32) / \ZZ_2$$ where the $$\ZZ_2$$ is chosen in such a way that the group is not isomorphic to $$SO(32)$$. It's another $$\ZZ_2$$ from the center isomorphic to $$\ZZ_2\times \ZZ_2$$ that allows left-handed spinors but not the right-handed ones! By the way, funnily, the S-dual theory is type I superstring theory whose gauge group – arising from Chan-Paton factors of the open strings – seems to be $$O(32)$$. However, the global form of the gauge group gets modified by D-particles, the other half of $$O(32)$$ beyond $$SO(32)$$ is broken, and spinors of $$Spin(32)$$ are allowed by the D-particles, so non-perturbatively, the gauge group of type I superstring theory agrees with that of the heterotic S-dual theory including the global subtleties.

(Peter Woit also ludicrously claims that physicists only need three groups, $$U(1),SU(2), SO(3)$$. That may have been almost correct in the 1920s but it's surely not true in the 21st century particle physics. If you're an undergraduate with plans to do particle physics and someone offers you to quickly learn about symplectic or exceptional groups, and perhaps a few others, you shouldn't refuse it.)

You don't need to talk about string theory to encounter similar subtleties. Ask a simple question. What is the gauge group of the Standard Model? Well, people will normally answer $$SU(3)\times SU(2)\times U(1)$$. But what they actually mean is just the statement that the Lie algebra of the gauge group is $${\mathfrak su}(3) \oplus {\mathfrak su}(2) \oplus {\mathfrak u}(1)$$. Note that the simple, Cartesian $$\times$$ product of Lie groups gets translated to the direct $$\oplus$$ sum of the Lie algebras – the latter are linear vector spaces. OK, so the statement that the Lie algebra of the gauge group of the Standard Model is the displayed expression above is correct.

But if you have the ambition to talk about the precise group manifolds, those know about all the "global subtleties" and it turns out that $$SU(3)\times SU(2)\times U(1)$$ is not isomorphic to the Standard Model gauge group. Instead, the Standard Model gauge group is $$[SU(3)\times SU(2)\times U(1)] / \ZZ_6$$. The quotient by $$\ZZ_6$$ must be present because all the fields of the Standard Model have a correlation between the hypercharge $$Y$$ modulo $$1/6$$ and the spin under the $$SU(2)$$ as well as the representation under the $$SU(3)$$. It is therefore impossible to construct states that wouldn't be invariant under this $$\ZZ_6$$ even a priori which means that this $$\ZZ_6$$ acts trivially even on the original Hilbert space and "it's not there".
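The triviality of the $$\ZZ_6$$ can be checked field by field. A small sketch (my own check, using the standard hypercharge assignments with $$Q = T_3 + Y$$): the generator of the $$\ZZ_6$$ acts on a field with $$SU(3)$$ triality $$t$$, $$SU(2)$$ doublet-ness $$d$$, and hypercharge $$Y$$ by the phase $$\exp[2\pi i(t/3 + d/2 + Y)]$$, which is trivial precisely when $$t/3 + d/2 + Y$$ is an integer.

```python
from fractions import Fraction as F

# Check (standard quantum numbers, my own tabulation): the Z_6 generator acts
# on a field as exp(2*pi*i*(t/3 + d/2 + Y)), where t is the SU(3) triality
# (1 for a 3, -1 for a 3bar, 0 for a singlet), d is 1 for an SU(2) doublet
# and 0 for a singlet, and Y is the hypercharge normalized so that Q = T3 + Y.
fields = {                    # (triality t, doublet d, hypercharge Y)
    "Q_L":   (1, 1, F(1, 6)),
    "u_R":   (1, 0, F(2, 3)),
    "d_R":   (1, 0, F(-1, 3)),
    "L_L":   (0, 1, F(-1, 2)),
    "e_R":   (0, 0, F(-1)),
    "Higgs": (0, 1, F(1, 2)),
}

# The Z_6 acts trivially iff t/3 + d/2 + Y is an integer for every field.
for name, (t, d, Y) in fields.items():
    phase = F(t, 3) + F(d, 2) + Y
    assert phase.denominator == 1, name
print("Z_6 acts trivially on every Standard Model field")
```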

The $$\ZZ_6$$ must be divided by for the same reasons why we usually say that the Standard Model gauge group doesn't contain an $$E_8$$ factor. You could also say that there's also an $$E_8$$ factor except that all fields transform as a singlet. ;-) We don't do it – when we say that there is a symmetry or a gauge group, we want at least something to transform nontrivially.

OK, you see that the analysis of the correlations of the discrete charges modulo $$1/6$$ may be subtle. We usually don't care about these details when we want to determine much more important things – how many gauge bosons there are and what their couplings are. These important things are given purely by the Lie algebra which is why our statements about the identity of the gauge group should mostly be understood as statements about Lie algebras.

At some level, you may want to be picky and discuss the global properties of the gauge group and correlations. But you usually don't need to know these answers for anything else. The knowledge of these facts is usually only good for its own sake. You can't calculate any couplings from it, and so on. That's why our sentences should be assumed not to talk about these details at all – and/or be sloppy about these details.

(Just to be sure, the global subtleties, centers of the group, differences between $$SO(N)$$ and $$O(N)$$ and $$Spin(N)$$, differences for even and odd $$N$$, or dependence on $$N$$ modulo 8, may still lead to interesting physical consequences and consistency checks and several papers of mine, especially about the heterotic matrix models, were obsessed with these details, too. But this kind of concerns only represents a minority of physicists' interests, especially in the case of beginners.)

By the way, the second "pet peeve" by Woit is that one should distinguish real and complexified versions of the same Lie algebras (and groups). Well, I agree you should distinguish them. But at some general analytic or algebraic level, all algebras and other structures should always be understood as the complexified ones – and only afterwards, we may impose some reality conditions on fields (and therefore the allowed symmetries, too). So I would say that to a large extent, even this complaint of Woit reflects his misunderstanding of something important – the fact that the most important information about the Lie groups is hiding in the structure constants of the corresponding Lie algebra, and those are identical for all Lie groups with the same Lie algebra, and they're also identical for real and complex versions of the groups.

(By the way, he pretends to be very careful about the complexification, but he writes the condition for matrix elements of an $$SU(2)$$ matrix as $$\alpha^2+\beta^2=1$$ instead of $$|\alpha|^2+|\beta|^2 = 1$$. Too bad. You just shouldn't insist on people's distinguishing non-essential things about the complexification if you can't even write the essential ones correctly yourself.)

In the futile conversations about the foundations of quantum mechanics, I often hear or read comments like:
Please, don't use the confusing word "observation" which makes it look like quantum mechanics depends on what is an observation and what isn't etc. and it's scary.
Well, the reason why my – and Heisenberg's – statements look like we are saying that quantum mechanics depends on observations is that quantum mechanics depends on observations, indeed. So the dissatisfied laymen or beginners really ask the physicists to use the language that would strengthen the listeners' belief that classical physics is still basically right. Except that it's not! We mostly use this language – including the word "observation" – because it really is essential in the new framework of physics.

In the same way, failing-grade students such as Peter Woit may be constantly asking whether a physicist talks about a Lie group or the corresponding Lie algebra. They are basically complaining:
Georgi, Ramond, Zee, don't use this notation that looks like it suggests that the Lie group and the Lie algebra are basically the same thing even though they are something completely different.
The problem is, of course, that the failing-grade students such as Peter Woit are wrong. Georgi, Ramond, Zee, and others often use the same symbols for the Lie groups and the Lie algebras because they really are basically the same thing. And it's just too bad if you don't understand this tight relationship – basically an equivalence.

I think that there exist many lousy teachers of mathematics and physics that are similar to Peter Woit. Those don't understand the substance – what is really important, what is true. So they focus on what they understand – arbitrarily invented rules what the students are obliged to parrot for the teacher to feel more important. So the poor students who have such teachers are often being punished for using a different metric tensor convention once or for using a wrong font for a Lie algebra. These teachers don't understand the power and beauty of mathematics and physics and they're working hard to make sure that their students won't understand them, either.

## April 21, 2017

### Sean Carroll - Preposterous Universe

Marching for Science

The March for Science, happening tomorrow 22 April in Washington DC and in satellite events around the globe (including here in LA), is on the one hand an obviously good idea, and at the same time quite controversial. As in many controversies, both sides have their good points!

Marching for science is a good idea because 1) science is good, 2) science is in some ways threatened, and 3) marching to show support might in some way ameliorate that threat. Admittedly, as with all rallies of support, there is a heavily emotive factor at work — even if it had no effect whatsoever, many people are motivated to march in favor of ideas they believe in, just because it feels good to show support for things you feel strongly about. Nothing wrong with that at all.

But in a democracy, marching in favor of things is a little  more meaningful than that. Even if it doesn’t directly cause politicians to change their minds (“Wait, people actually like science? I’ll have to revise my stance on a few key pieces of upcoming legislation…”), it gets ideas into the general conversation, which can lead to benefits down the road. Support for science is easy to take for granted — we live in a society where even the most anti-science forces try to portray their positions as being compatible with a scientific outlook of some sort, even if it takes doing a few evidentiary backflips to paper over the obvious inconsistencies. But just because the majority of people claim to be in favor of science, that doesn’t mean they will actually listen to what science has to say, much less vote to spend real money supporting it. Reminding them how much the general public is pro-science is an important task.

Charles Plateau, Reuters. Borrowed from The Atlantic.

Not everyone sees it that way. Scientists, bless their hearts, like to fret and argue about things, as I note in this short essay at The Atlantic. (That piece is basically what I’ll be saying when I give my talk tomorrow noonish at the LA march — so if you can’t make it, you can get the gist at the link. If you will be marching in LA — spoiler alert.) A favorite source of fretting and worrying is “getting science mixed up with politics.” We scientists, the idea goes, are seekers of eternal truths — or at least we should aim to be — and that lofty pursuit is incompatible with mucking around in tawdry political battles. Or more pragmatically, there is a worry that if science is seen to be overly political, then one political party will react by aligning itself explicitly against science, and that won’t be good for anyone. (Ironically, this latter argument is an attempt at being strategic and political, rather than a seeker of universal truths.)

I don’t agree, as should be clear. First, science is political, like it or not. That’s because science is done by human beings, and just about everything human beings do is political. Science isn’t partisan — it doesn’t function for the benefit of one party over the other. But if we look up “political” in the dictionary, we get something like “of or relating to the affairs of government,” or more broadly “related to decisions applying to all members of a group.” It’s hard to question that science is inextricably intertwined with this notion of politics. The output of science, which purports to be true knowledge of the world, is apolitical. But we obtain that output by actually doing science, which involves hard questions about what questions to ask, what research to fund, and what to do with the findings of that research. There is no way to pretend that politics has nothing to do with the actual practice of science. Great scientists, from Einstein on down, have historically been more than willing to become involved in political disputes when the stakes were sufficiently high.

It would certainly be bad if scientists tarnished their reputations as unbiased researchers by explicitly aligning “science” with any individual political party. And we can’t ignore the fact that various high-profile examples of denying scientific reality — Darwinian evolution comes to mind, or more recently the fact that human activity is dramatically affecting the Earth’s climate — are, in our current climate, largely associated with one political party more than the other one. But people of all political persuasions will occasionally find scientific truths to be a bit inconvenient. And more importantly, we can march in favor of science without having to point out that one party is working much harder than the other one to undermine it. That’s a separate kind of march.

It reminds me of this year’s Super Bowl ads. Though largely set in motion before the election ever occurred, several of the ads were labeled as “anti-Trump” after the fact. But they weren’t explicitly political; they were simply stating messages that would, in better days, have been considered anodyne and unobjectionable, like “people of all creeds and ethnicities should come together in harmony.” If you can’t help but perceive a message like that as a veiled attack on your political philosophy, maybe your political philosophy needs a bit of updating.

Likewise for science. This particular March was, without question, created in part because people were shocked into fear by the prospect of power being concentrated in the hands of a political party that seems to happily reject scientific findings that it deems inconvenient. But it grew into something bigger and better: a way to rally in support of science, full stop.

That’s something everyone should be able to get behind. It’s a mistake to think that the best way to support science is to stay out of politics. Politics is there, whether we like it or not. (And if we don’t like it, we should at least respect it — as unappetizing as the process of politics may be at times, it’s a necessary part of how we make decisions in a representative democracy, and should be honored as such.) The question isn’t “should scientists play with politics, or rise above it?” The question is “should we exert our political will in favor of science, or just let other people make the decisions and hope for the best?”

Democracy can be difficult, exhausting, and heartbreaking. It’s a messy, chaotic process, a far cry from the beautiful regularities of the natural world that science works to uncover. But participating in democracy as actively as we can is one of the most straightforward ways available to us to make the world a better place. And there aren’t many causes more worth rallying behind than that of science itself.

## April 19, 2017

### The n-Category Cafe

Functional Equations, Entropy and Diversity: A Seminar Course

I’ve just finished teaching a seminar course officially called “Functional Equations”, but really more about the concepts of entropy and diversity.

I’m grateful to the participants — from many parts of mathematics, biology and physics, at levels from undergraduate to professor — who kept coming and contributing, week after week. It was lots of fun, and I learned a great deal.

This post collects together all the material in one place. First, the notes:

Now, the posts I wrote every week:

### Lubos Motl - string vacua and pheno

All of string theory's power, beauty depends on quantum mechanics
Wednesday papers: Arkani-Hamed et al. show that the amplituhedron is all about sign flips. Maldacena et al. study the double-trace deformations that make a wormhole traversable. Among other things, they argue that the cloning is avoided because the extraction (by "Bob") eliminates the interior copy of the quantum information.
String/M-theory is the most beautiful, powerful, and predictive theory we know – and, most likely, the #1 with these adjectives among those that are mathematically possible – but the degree of one's appreciation for its exceptional credentials depends on one's general knowledge of physics, especially quantum mechanics.

Click to see an animation (info).

Quantum mechanics was basically discovered at one point in the mid 1920s and forced physics to make a one-time quantum jump. On the other hand, it also defines a trend because the novelties of quantum mechanics may be taken more or less seriously, exploited more or less cleverly and completely, and as physics was evolving towards more advanced, stringy theories and explanations of things, the role of the quantum mechanical thinking was undoubtedly increasing.

When we say "classical string theory", it is a slightly ambiguous term. We can take various classical limits of various theories that emerge from string theory, e.g. the classical field theory limit of some effective field theories in the spacetime. But the most typical representation of "classical string theory" is given by the dull yellow animation above. A classical string is literally a curve in a pre-existing spacetime that oscillates according to a wave equation of a sort.

OK, on that picture, you see a vibrating rope. It is not better or more exceptional than an oscillating membrane, a Chladni pattern, a little green man with Parkinson's disease, or anything else that moves and jiggles. The power of string theory only emerges once you consider the real, adult theory where all the observables such as the positions of points along the string are given by non-commuting operators.

Just to be sure, the rule that "observable = measurable quantities are associated with non-commuting operators" is what I mean by quantum mechanics.

What does quantum mechanics do for a humble string like the yellow string above?

First, it makes the spectrum of vibrations discrete.

Classically, you may change the initial state of the vibrating string arbitrarily and continuously, and the energy carried by the string is therefore continuous, too. That's not the case in quantum mechanics. Quantum mechanics got its name from the quantized, discrete eigenvalues of the energy. A vibrating string is basically equivalent to a collection of infinitely many harmonic oscillators. Each quantum mechanical harmonic oscillator only carries an integer number of excitations, not a continuous amount of energy.

The discreteness of the spectrum – which depends on quantum mechanics for understandable reasons – is obviously needed for strings in string theory to coincide with a finite number of particle species we know in particle physics – or a countable one that we may know in the future. Without the quantization, the number of species would be uncountably infinite. The species would form a continuum. There would be not just an electron and a muon but also elemuon and all other things in between, in an infinite-dimensional space.
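A toy count of this discreteness (my own illustration): for a single quantized transverse coordinate, the states at total level $$N$$ are labeled by partitions of $$N$$ into positive integers, so the degeneracy is the partition number $$p(N)$$: a discrete list, not a continuum.

```python
# Toy count: one quantized transverse string coordinate is a tower of
# oscillators a_{-1}, a_{-2}, ...; a state of total level N is a multiset of
# positive integers summing to N, so its degeneracy is the partition number
# p(N) -- a discrete spectrum rather than a continuum.

def partitions(n):
    """p(n) via the standard dynamic-programming recurrence."""
    p = [1] + [0] * n
    for part in range(1, n + 1):          # allow parts of size `part`
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print([partitions(n) for n in range(6)])  # [1, 1, 2, 3, 5, 7]
```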

Quantum mechanics is needed for some vibrating strings to act as gravitons and other exceptional particles.

String theory predicts gravity. It makes Einstein's general relativity – and the curved spacetime and gravitational waves that result from it – unavoidable. Why is it so? It's because some of the low-energy vibrating strings, when they're added into the spacetime, have exactly the same effect as a deformation of the underlying geometry – or other low-energy fields defining the background.

Why is it so? It's ultimately because of the state-operator correspondence. The internal dynamics of a string depends on the underlying spacetime geometry. And the spacetime geometry may be changed. But the infinitesimal change of the action etc. for a string is equivalent to the interaction of the string with another, "tiny" string that is equivalent to the geometry change.

We may determine the right vibration of the "tiny" string that makes the previous sentence work because for every operator on the world sheet (2D history of a fundamental string), there exists a state of the string in the Hilbert space of the stringy vibrations. And this state-operator correspondence totally depends on quantum mechanics, too.

In classical physics, the number of observables – any function $$f(x_i,p_i)$$ on a phase space – is vastly greater than the number of states. The states are just points given by the coordinates $$(x_i,p_i)$$ themselves. It's not hard to see that the first set is much greater – an infinite-dimensional vector space – than the second. However, quantum mechanics increases the number of states (by allowing all the superpositions) and reduces the number of observables (by making them quantized, or respectful towards the quantization of the phase space) and the two numbers become equivalent up to a simple tensoring with the functions of the parameter $$\sigma$$ along the string.

I don't want to explain the state-operator correspondence, other blog posts have tried it and it is a rather technical issue in conformal field theory that you should study once you are really serious about learning string theory. But here, I want to emphasize that it wouldn't be possible in any classical world.

Let me point out that the world of the "interpreters" of quantum mechanics who imagine that the wave function is on par with a classical wave is a classical world, so it is exactly as impotent as any other classical world.

T-duality depends on quantum mechanics

A nice elementary symmetry that you discover in string theory compactified on tori is the so-called T-duality. The compactified string theory on a circle of radius $$R$$ is the same as the theory on a circle of radius $$\alpha' / R$$, where $$T=1/(2\pi \alpha')$$ is the string tension (energy or mass per unit length of the string). Well, this property depends on quantum mechanics as well because the T-duality map exchanges the momentum $$n$$ with the winding $$w$$, which are two integers.

But in a classical string theory, the winding number $$w\in \ZZ$$ would still be integer (it counts how many times a closed string is wrapped around the circle) while the momentum would be continuous, $$n\in\RR$$. So they couldn't be related by a permutation symmetry. The T-duality couldn't exist.
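
This invariance of the spectrum is easy to check numerically (a sketch in units $$\alpha'=1$$, keeping only the zero-mode part of the closed-string mass formula and omitting the oscillator contributions):

```python
# Zero-mode part of the closed-string mass formula in alpha' = 1 units:
# M^2 = (n/R)^2 + (w R)^2, with integer momentum n and winding w.
# T-duality maps (n, w, R) -> (w, n, 1/R) and must leave the spectrum intact.
def mass_sq(n, w, R):
    return (n / R) ** 2 + (w * R) ** 2

R = 2.7
checks = [
    abs(mass_sq(n, w, R) - mass_sq(w, n, 1.0 / R)) < 1e-12
    for n in range(-3, 4)
    for w in range(-3, 4)
]
print(all(checks))  # True: the spectrum is invariant under the swap
```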

Enhanced gauge symmetry on a self-dual radius depends on quantum mechanics

The fancier features of string theory you look at, the more obviously unavoidable quantum mechanics becomes. One of the funny things of bosonic string theory compactified on a circle is that the generic gauge group $$U(1)\times U(1)$$ gets enhanced to $$SU(2)\times SU(2)$$ on the self-dual radius. Even though you start with a theory where everything is "Abelian" or "linear" in some simple sense – a string propagating on a circle – you discover that the non-Abelian $$SU(2)$$ automatically arises if the radius obeys $$R = \alpha' / R$$, if it is self-dual.

I have discussed the enhanced symmetries in string theory some years ago but let's shorten the story. Why does the group get enhanced?

First, one must understand that for a generic radius, the unbroken gauge group is $$U(1)\times U(1)$$. One gets two $$U(1)$$ gauge groups because the gauge fields are basically $$g_{\mu,25}$$ and $$B_{\mu,25}$$. They arise as "last columns" of a symmetric tensor, the metric tensor, and an antisymmetric tensor, the $$B$$-field. The first (metric tensor-based) $$U(1)$$ group is the standard Kaluza-Klein gauge group and it is $$U(1)$$ because $$U(1)$$ is the isometry group of the compactification manifold. There is another gauge group arising from the gauge field that you get from a pre-existing 2-index gauge field $$B_{\mu\nu}$$, a two-form, if you set the second index equal to the compactified direction.

These two gauge fields are permuted by the T-duality symmetry (just like the momentum and winding are permuted, because the momentum and winding are really the charges under these two symmetries).

OK, how do you get the $$SU(2)$$? The funny thing is that the $$U(1)$$ gauge bosons are associated, via the operator-state correspondence mentioned above, with the operators on the world sheet $(\partial_z X^{25}, \quad \partial_{\bar z} X^{25}).$ One of them is holomorphic, the other one is anti-holomorphic, we say. T-duality maps these operators to $(\partial_z X^{25}, \quad -\partial_{\bar z} X^{25}),$ so it may be understood as a mirror reflection of the $$X^{25}$$ coordinate of the spacetime except that it only acts on the anti-holomorphic (or right-moving) oscillations propagating along the string. That's great. You have something like a discrete T-duality which is just some sign flip or, equivalently, the exchange of the momentum and winding. How do you get a continuous $$SU(2)$$, I ask again?

The funny thing is that at the self-dual radius, there are not just two operators like that but six. The holomorphic one, $$\partial_z X^{25}$$, becomes just one component of a three-dimensional vector $(\partial_z X_L^{25},\quad :\!\exp(+i X_L^{25})\!:,\quad :\!\exp(-i X_L^{25})\!:).$ Classically, the first operator looks nothing like the last two. If you have a holomorphic function $$X_L^{25}(z)$$ of some coordinate $$z$$, its $$z$$-derivative seems to be something completely different than its exponential, right? But quantum mechanically, they are almost the same thing! Why is it so?

If you want to describe all physically meaningful properties of three operators like that, the algebra of all their commutators encodes all the information. Just like string theory has the state-operator correspondence that allows you to translate between states and operators, it also has the OPEs – operator-product expansions – that allow you to extract the commutators of operators from the singularities in a decomposition of their products etc.

And it just happens that the singularities in the OPEs of any such operators are compatible with the statement that these three operators are components of a triplet that transforms under an $$SU(2)$$ symmetry. So you get one $$SU(2)$$ from the left-moving, $$z$$-dependent part $$X_L^{25}$$, and one $$SU(2)$$ from the $$\bar z$$-dependent $$X_R^{25}$$.
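
Concretely – with normalizations of $$X_L^{25}$$ glossed over, so take this only as a sketch of the standard result – the zero modes of the three holomorphic currents close into the $$su(2)$$ algebra:

```latex
J^3 \sim \oint \frac{dz}{2\pi i}\, i\,\partial_z X_L^{25},
\qquad
J^\pm \sim \oint \frac{dz}{2\pi i}\, {:}\,e^{\pm i X_L^{25}}\,{:}
\\[6pt]
[J^3, J^\pm] = \pm J^\pm,
\qquad
[J^+, J^-] = 2\,J^3
```

The second line is exactly the $$su(2)$$ commutation relations, which is what promotes the Kaluza-Klein $$U(1)$$ to $$SU(2)$$ at the self-dual point.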

All other non-Abelian and sporadic or otherwise cool groups that you get from perturbative string theory arise similarly, and are therefore similarly dependent on quantum mechanics. For example, the monster group in the string theory model explaining the monstrous moonshine only exists because of a similar "equivalence" that is only true at the quantum level.

Spacetime dimension and sizes of group are only predictable in quantum mechanics

String theory is so predictive that it forces you to choose a preferred dimension of the spacetime. The simple bosonic string theory has $$D=26$$ and superstring theory, the more realistic and fancy one, similarly demands $$D=10$$. This contrasts with the relatively unconstrained, "anything goes" theories of the pre-stringy era.

Polchinski's book contains "seven" ways to calculate the critical dimension, according to the counting by the author. But here, what is important is that all of them depend on a cancellation of some quantum anomalies.

In the covariant quantization, $$D=26$$ basically arises as the number of bosonic fields $$X^\mu$$ whose conformal anomaly cancels that of the $$bc$$ ghost system. The latter has $$c=1-3k^2=-26$$ because some constant is $$k=3$$: the central charge describes the coefficient in front of a standard term in the conformal anomaly. Well, you need to add $$c=+26$$ – from 26 bosons – to get zero. And you need to get zero for the conformal symmetry to hold, even in the quantum theory. And the conformal symmetry is needed for the state-operator correspondence and other things – it is a basic axiom of covariant perturbative string theory.
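
The arithmetic of this cancellation is simple enough to script (a trivial sketch – just the bookkeeping described above):

```python
# Conformal-anomaly bookkeeping for the covariant bosonic string:
# each boson X^mu contributes c = +1, while the bc ghost system
# contributes c = 1 - 3k^2 with k = 3, i.e. c = -26; the total must vanish.
def bc_ghost_c(k=3):
    return 1 - 3 * k ** 2

def total_c(D):
    return D * 1 + bc_ghost_c()

print(bc_ghost_c(), total_c(26))  # -26 0
```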

Alternatively, you may define string theory in the light-cone gauge. The full Lorentz symmetry won't be obvious anymore. You will find out that some commutators $[j^{i-},j^{j-}] = \dots$ in the light-cone coordinates behave almost correctly. Except that when you substitute the "bilinear in stringy oscillators" expressions for the generators $$j^{i-}$$, the calculation of the commutator will contain not only the "single contraction" terms – this part of the calculation is basically copying a classical calculation – but also the "double contraction" terms. And those don't trivially cancel. You will find out that they only cancel for 24 transverse coordinates. Needless to say, the "double contraction" is something invisible at the level of the Poisson brackets. You really need to talk about the "full commutators" – and therefore full quantum mechanics, not just some Poisson-bracket-like approximation – to get these terms at all.
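
One standard place where the number 24 shows up (a textbook computation, sketched here in zeta-function regularization): the transverse zero-point energies add up to

```latex
a \;=\; \frac{D-2}{2}\sum_{n=1}^{\infty} n
\;\;\xrightarrow{\;\zeta\text{-regularization}\;}\;\;
\frac{D-2}{2}\,\zeta(-1) \;=\; -\,\frac{D-2}{24},
```

and closure of the quantum Lorentz algebra requires this constant to equal $$-1$$, i.e. $$D-2=24$$ and $$D=26$$.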

Again, the correct spacetime dimension $$D=26$$ or $$D=10$$ arises from the cancellation of some quantum anomaly – some new quantum mechanical effects that have the potential of spoiling some symmetries that "trivially" hold in the classical limit that may have inspired you. The prediction couldn't be there if you ignored quantum mechanics.

The field equations in the spacetime result from an anomaly cancellation, too.

If you order perturbative strings to propagate on a curved spacetime background, you may derive Einstein's equations (plus stringy short-distance corrections), which in the vacuum simply demand the Ricci-flatness $R_{\mu\nu} = 0.$ A century ago, Einstein had to discover that this is what the geometry has to obey in the vacuum. It's an elegant equation and, among similarly simple ones, it's basically the unique one that is diffeomorphism-symmetric. And you may derive it from the extremization of the Einstein-Hilbert action, too.

However, string theory is capable of doing all this guesswork for you. In other words, string theory is capable of replacing Einstein's 10 years of work. You may derive the Ricci-flatness from the cancellation of the conformal anomaly, too. You need the world sheet theory to stay invariant under the scaling of the world sheet coordinates, even at the quantum level.

But the world sheet theory depends on the functions $g_{\mu\nu} (X^\lambda(\sigma,\tau))$ and for every point in the spacetime given by the numbers $$\{X^\lambda\}$$, you have a whole symmetric tensor $$g_{\mu\nu}$$ of parameters that behave like "coupling constants" in the theory. But in a quantum field theory – and the world sheet theory is a quantum field theory – every coupling constant generically "runs". Its value depends on the chosen energy scale $$E$$. And the derivative with respect to the scale $\frac{dg_{\mu\nu}(X^\lambda)}{d (\ln E)} = \beta_{\mu\nu}(X^\lambda)$ is known as the beta-function. Here you have as many beta-functions as you have numbers determining the metric tensor at each spacetime point. The beta-functions have to vanish for the theory to remain scale-invariant on the world sheet – and you need that invariance. And you will find out that $\beta_{\mu\nu}(X^\lambda) = R_{\mu\nu} (X^\lambda).$ The beta-function is nothing else than the Ricci tensor. Well, it could be the Einstein tensor and there could be extra constants and corrections. But I want to please you with the cool stuff; I hope that you don't doubt that if you want to work with these things, you have to take care of many details that make the exact answers deviate from the most elegant, naive Ansatz with the given amount of beauty.
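
In the simplest setting (dilaton and $$B$$-field switched off, leading order in $$\alpha'$$ – a sketch, not the full statement), the metric beta-function reads

```latex
\beta^{g}_{\mu\nu} \;=\; \alpha'\, R_{\mu\nu} \;+\; O(\alpha'^2),
```

so demanding world-sheet scale invariance, $$\beta^{g}_{\mu\nu}=0$$, reproduces the vacuum Einstein equations $$R_{\mu\nu}=0$$ up to stringy corrections.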

So Einstein's equations result from the cancellation of the conformal anomaly as well. The very requirement that the theory remains consistent at the quantum level – and the preservation of gauge symmetries is indeed needed for the consistency – is enough to derive the equations for the metric tensor in the spacetime.

Needless to say, this rule generalizes to all the fields that you may get from particular vibrating strings in the spacetime. Dirac, Weyl, Maxwell, Yang-Mills, Proca, Higgs, and other equations of motions for the fields in the spacetime (including all their desirable interactions) may be derived from the scale-invariance of the world sheet theory, too.

In this sense, the logical consistency of the quantum mechanical theory dictates not only the right spacetime dimension and other numbers of degrees of freedom, sizes of groups such as $$E_8\times E_8$$ or $$SO(32)$$ for the heterotic string (the rank must be $$16$$ and the dimension has to be $$496$$, among other conditions), but the consistency also determines all the dynamical equations of motion.

S-duality, T-duality, mirror symmetry, AdS/CFT and holography, ER-EPR, and so on

And I could continue. S-duality – the symmetry of the theories under the $$g\to 1/g$$ maps of the coupling constant – also depends on quantum mechanics. It's absolutely obvious that no S-duality could ever work in a classical world, not even in quantum field theory. Among other things, S-dualities exchange the elementary electrically charged particles such as electrons with the magnetically charged ones, the magnetic monopoles. But classically, those are very different: electrons are point-like objects with an "intrinsic" charge while the magnetic monopoles are solitonic solutions where the charge is spread over the solution and quantized because of topological considerations.

However, quantum mechanically, they may be related by a permutation symmetry.

Mirror symmetry is an application of T-duality in the Calabi-Yau context, so everything I said about the quantum mechanical dependence of T-duality obviously holds for mirror symmetry, too.

Holography in quantum gravity – as seen in AdS/CFT and elsewhere – obviously depends on quantum mechanics, too. The extra holographic dimension morally arises from the "energy scale" in the boundary theory. But the AdS space has an isometry relating all these dimensions. Classically, "energy scale" cannot be indistinguishable from a "spacetime coordinate". Classically, the energy and momentum live in a spacetime, they have different roles.

Quantum mechanically, there may be such symmetries between energy/momentum and position/timing. The harmonic oscillator is a basic template for such a symmetry: $$x$$ and $$p$$ may be rotated to each other.
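
In units where $$m=\omega=\hbar=1$$ (a standard fact, spelled out here only for concreteness), the oscillator Hamiltonian makes this rotation symmetry manifest:

```latex
H = \tfrac{1}{2}\left(p^2 + x^2\right),
\qquad
a = \frac{x + i p}{\sqrt{2}},
\qquad
(x + i p) \;\to\; e^{-i\theta}\,(x + i p)
```

The phase-space rotation $$(x,p)\to(x\cos\theta + p\sin\theta,\,-x\sin\theta + p\cos\theta)$$ leaves $$H$$ invariant and acts on the annihilation operator $$a$$ as a mere phase.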

ER-EPR talks about the quantum entanglement so it's obvious that it would be impossible in a classical world.

I could make the same point about basically anything that is attractive about string theory – and even about comparably but less intriguing features of quantum field theories. All these things depend on quantum mechanics. They would be impossible in a classical world.

Summary: quantum mechanics erases qualitative differences, creates new symmetries, merges concepts, magnifies new degrees of freedom to make singularities harmless.

Quantum mechanics does a lot of things. You have seen many examples – and there are many others – that quantum mechanics generally allows you to find symmetries between objects that look classically totally different. Like the momentum and winding of a string. Or the derivative of $$X$$ with the exponential of $$X$$ – at the self-dual radius. Or the states and operators. Or elementary particles and composite objects such as magnetic monopoles. And so on, and so on.

Sometimes, the spectrum of a quantity becomes discrete in order for the map or symmetry to be possible.

Sometimes, just the qualitative differences are erased. Sometimes, all the differences are erased and quantum mechanics enables the emergence of exact new symmetries that would be totally crazy within classical physics. Sometimes, these symmetries are combined with some naive ones that already exist classically. $$U(1)\times U(1)$$ may be extended to $$SU(2)\times SU(2)$$ quantum mechanically. Similarly, $$SO(16)\times SO(16)$$ in the fermionic definition or $$U(1)^{16}$$ in the bosonic formulation of the heterotic string gets extended to $$E_8\times E_8$$. A much smaller, classically visible discrete group gets extended to the monster group in the full quantum string theory explaining the monstrous moonshine.

Whenever a classical theory would be getting dangerously singular, quantum mechanics changes the situation so that either the dangerous states disappear or they're supplemented with new degrees of freedom or another cure. In many typical cases, the "potentially dangerous regime" of a theory – where you could be afraid of an inconsistency – is protected and consistent because quantum mechanics makes all the modifications and additions needed for that regime to be exactly equivalent to another theory that you have known – or whose classical limit you have encountered. Quantum mechanics is what allows all the dualities and the continuous connection of all seemingly inequivalent vacua of string/M-theory into one master theory.

All the constraints – on the number of dimensions, sizes of gauge groups, and even equations of motion for the fields in spacetime – arise from the quantum mechanical consistency, e.g. from the anomaly cancellation conditions.

When you become familiar with all these amazing effects of string theory and others, you are forced to start to think quantum mechanically. You will understand that the interesting theory – with the uniqueness, predictive power, consistency, symmetries, unification of concepts – is unavoidably just the quantum mechanical one. There is really no cool classical theory. The classical theories that you encounter anywhere in string theory are the classical limits of the full theory.

You will unavoidably get rid of the bad habit of thinking of a classical theory as the "primary one", while the quantum mechanical theory is often considered "derived" from it by the beginners (including permanent beginners). Within string/M-theory, it's spectacularly clear that the right relationship is going in the opposite direction. The quantum mechanical theory – with its quantum rules, objects, statements, and relationships – is the primary one while classical theories are just approximations and caricatures that lack the full glory of the quantum mechanical theory.

## April 18, 2017

### Symmetrybreaking - Fermilab/SLAC

A new search to watch from LHCb

A new result from the LHCb experiment could be an early indicator of an inconsistency in the Standard Model.

The subatomic universe is an intricate mosaic of particles and forces. The Standard Model of particle physics is a time-tested instruction manual that precisely predicts how particles and forces behave. But it’s incomplete, ignoring phenomena such as gravity and dark matter.

Today the LHCb experiment at the CERN European research center released a result that could be an early indication of new, undiscovered physics beyond the Standard Model.

However, more data is needed before LHCb scientists can definitively claim they’ve found a crack in the world’s most robust roadmap to the subatomic universe.

“In particle physics, you can’t just snap your fingers and claim a discovery,” says Marie-Hélène Schune, a researcher on the LHCb experiment from Le Centre National de la Recherche Scientifique in Orsay, France. “It’s not magic. It’s long, hard work and you must be obstinate when facing problems. We always question everything and never take anything for granted.”

The LHCb experiment records and analyzes the decay patterns of rare hadrons—particles made of quarks—that are produced in the Large Hadron Collider’s energetic proton-proton collisions. By comparing the experimental results to the Standard Model’s predictions, scientists can search for discrepancies. Significant deviations between the theory and experimental results could be an early indication of an undiscovered particle or force at play.

This new result looks at hadrons containing a bottom quark as they transform into hadrons containing a strange quark. This rare decay pattern can generate either two electrons or two muons as byproducts. Electrons and muons are different types or “flavors” of particles called leptons. The Standard Model predicts that the production of electrons and muons should be equally favorable—essentially a subatomic coin toss every time this transformation occurs.

“As far as the Standard Model is concerned, electrons, muons and tau leptons are completely interchangeable,” Schune says. “It’s completely blind to lepton flavors; only the large mass difference of the tau lepton plays a role in certain processes. This 50-50 prediction for muons and electrons is very precise.”

But instead of finding a 50-50 ratio between muons and electrons, the latest results from the LHCb experiment show that it’s more like 40 muons generated for every 60 electrons.

“If this initial result becomes stronger with more data, it could mean that there are other, invisible particles involved in this process that see flavor,” Schune says. “We’ll leave it up to the theorists’ imaginations to figure out what’s going on.”

However, just like any coin toss, it's difficult to know whether this discrepancy is the result of an unknown favoritism or merely the consequence of chance. To distinguish between these two possibilities, scientists wait until they hit a certain statistical threshold before claiming a discovery, often 5 sigma.

“Five sigma is a measurement of statistical deviation and means there is only a 1-in-3.5-million chance that the Standard Model is correct and our result is just an unlucky statistical fluke,” Schune says. “That’s a pretty good indication that it’s not chance, but rather the first sightings of a new subatomic process.”

Currently, this new result is at approximately 2.5 standard deviations, which means there is about a 1-in-125 possibility that there’s no new physics at play and the experimenters are just the unfortunate victims of statistical fluctuation.
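
The conversion between "sigmas" and probabilities is just a Gaussian tail integral; here is a minimal sketch using the one-sided convention (the exact numbers quoted in articles depend on one- vs. two-sided choices):

```python
from math import erfc, sqrt

# One-sided Gaussian tail probability for a significance of
# `sigma` standard deviations.
def one_sided_p(sigma):
    return 0.5 * erfc(sigma / sqrt(2))

print(f"5.0 sigma: 1 in {1 / one_sided_p(5.0):,.0f}")  # roughly 1 in 3.5 million
print(f"2.5 sigma: p = {one_sided_p(2.5):.4f}")
```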

This isn’t the first time that the LHCb experiment has seen unexpected behavior in related processes. Hassan Jawahery from the University of Maryland also works on the LHCb experiment and is studying another particle decay involving bottom quarks transforming into charm quarks. He and his colleagues are measuring the ratio of muons to tau leptons generated during this decay.

“Correcting for the large mass differences between muons and tau leptons, we’d expect to see about 25 taus produced for every 100 muons,” Jawahery says. “We measured a ratio of 34 taus for every 100 muons.”

On its own, this measurement is below the line of statistical significance needed to raise an eyebrow. However, two other experiments—the BaBar experiment at SLAC and the Belle experiment in Japan—also measured this process and saw something similar.

“We might be seeing the first hints of a new particle or force throwing its weight around during two independent subatomic processes,” Jawahery says. “It’s tantalizing, but as experimentalists we are still waiting for all these individual results to grow in significance before we get too excited.”

More data and improved experimental techniques will help the LHCb experiment and its counterparts narrow in on these processes and confirm if there really is something funny happening behind the scenes in the subatomic universe.

“Conceptually, these measurements are very simple,” Schune says. “But practically, they are very challenging to perform. These first results are all from data collected between 2011 and 2012 during Run 1 of the LHC. It will be intriguing to see if data from Run 2 shows the same thing.”

### Symmetrybreaking - Fermilab/SLAC

How blue-sky research shapes the future

While driven by the desire to pursue curiosity, fundamental investigations are the crucial first step to innovation.

When scientists announced their discovery of gravitational waves in 2016, it made headlines all over the world. The existence of these invisible ripples in space-time had finally been confirmed.

It was a momentous feat in basic research, the curiosity-driven search for fundamental knowledge about the universe and the elements within it. Basic (or “blue-sky”) research is distinct from applied research, which is targeted toward developing or advancing technologies to solve a specific problem or to create a new product.

But the two are deeply connected.

“Applied research is exploring the continents you know, whereas basic research is setting off in a ship and seeing where you get,” says Frank Wilczek, a theoretical physicist at MIT. “You might just have to return, or sink at sea, or you might discover a whole new continent. So it’s much more long-term, it’s riskier and it doesn’t always pay dividends.”

When it does, he says, it opens up entirely new possibilities available only to those who set sail into uncharted waters.

Most of physics—especially particle physics—falls under the umbrella of basic research. In particle physics “we’re asking some of the deepest questions that are accessible by observations about the nature of matter and energy—and ultimately about space and time also, because all of these things are tied together,” says Jim Gates, a theoretical physicist at the University of Maryland.

Physicists seek answers to questions about the early universe, the nature of dark energy, and theoretical phenomena, such as supersymmetry, string theory and extra dimensions.

Perhaps one of the most well-known basic researchers was the physicist who predicted the existence of gravitational waves: Albert Einstein.

Einstein devoted his life to elucidating elementary concepts such as the nature of gravity and the relationship between space and time. According to Wilczek, “it was clear that what drove what he did was not the desire to produce a product, or anything so worldly, but to resolve puzzles and perceived imperfections in our understanding.”

In addition to advancing our understanding of the world, Einstein’s work led to important technological developments. The Global Positioning System, for instance, would not have been possible without the theories of special and general relativity. A GPS receiver, like the one in your smart phone, determines its location based on timed signals it receives from the nearest four of a collection of GPS satellites orbiting Earth. Because the satellites are moving so quickly while also orbiting at a great distance from the gravitational pull of Earth, they experience time differently from the receiver on Earth’s surface. Thanks to Einstein’s theories, engineers can calculate and correct for this difference.
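
The size of the effect is easy to estimate to leading order (a back-of-the-envelope sketch; the orbital values used below are approximate):

```python
# Leading-order estimate of the daily clock offset for a GPS satellite:
# special relativity slows the moving clock, general relativity speeds up
# the higher (weaker-gravity) clock; the net effect is tens of microseconds.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth radius, m
r_orbit = 2.656e7    # GPS orbital radius, m (approximate)

v = (GM / r_orbit) ** 0.5                  # orbital speed, ~3.9 km/s
sr_us = -(v**2 / (2 * c**2)) * 86400e6     # special-relativistic shift, us/day
gr_us = (GM / c**2) * (1/R_earth - 1/r_orbit) * 86400e6  # gravitational shift

print(round(sr_us, 1), round(gr_us, 1), round(sr_us + gr_us, 1))
# about -7.2 and +45.7, for a net drift of roughly +38 microseconds per day
```

Without the correction, a drift of tens of microseconds per day would translate into kilometers of positioning error, which is why the GPS design had to build Einstein's theories in from the start.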

Illustration by Corinne Mucha

There’s a long history of serendipitous output from basic research. For example, in 1989 at CERN European research center, computer scientist Tim Berners-Lee was looking for a way to facilitate information-sharing between researchers. He invented the World Wide Web.

While investigating the properties of nuclei within a magnetic field at Columbia University in the 1930s, physicist Isidor Isaac Rabi discovered the basic principles of nuclear magnetic resonance. These principles eventually formed the basis of Magnetic Resonance Imaging, MRI.

It would be another 50 years before MRI machines were widely used—again with the help of basic research. MRI machines require big, superconducting magnets to function. Luckily, around the same time that Rabi’s discovery was being investigated for medical imaging, scientists and engineers at the US Department of Energy’s Fermi National Accelerator Laboratory began building the Tevatron particle accelerator to enable research into the fundamental nature of particles, a task that called for huge amounts of superconducting wire.

“We were the first large, demanding customer for superconducting cable,” says Chris Quigg, a theoretical physicist at Fermilab. “We were spending a lot of money to get the performance that we needed.” The Tevatron created a commercial market for superconducting wire, making it practical for companies to build MRI machines on a large scale for places like hospitals.

Doctors now use MRI to produce detailed images of the insides of the human body, helpful tools in diagnosing and treating a variety of medical complications, including cancer, heart problems, and diseases in organs such as the liver, pancreas and bowels.

Another tool of particle physics, the particle detector, has also been adopted for uses in various industries. In the 1980s, for example, particle physicists developed technology precise enough to detect a single photon. Today doctors use this same technology to detect tumors, heart disease and central nervous system disorders. They do this by conducting positron emission tomography scans, or PET scans. Before undergoing a PET scan, the patient is given a dye containing radioactive tracers, either through an injection or by ingesting or inhaling. The tracers emit antimatter particles, which interact with matter particles and release photons, which are picked up by the PET scanner to create a picture detailed enough to reveal problems at the cellular level.

As Gates says, “a lot of the devices and concepts that you see in science fiction stories will never come into existence unless we pursue the concept of basic research. You’re not going to be able to construct starships unless you do the research now in order to build these in the future.”

It’s unclear what applications could come of humanity’s new knowledge of the existence of gravitational waves.

It could be enough that we have learned something new about how our universe works. But if history gives us any indication, continued exploration will also provide additional benefits along the way.

### Lubos Motl - string vacua and pheno

LHCb insists on tension with lepton universality in $$1$$-$$6\GeV^2$$
The number of references to B-mesons on this blog significantly exceeds my degree of excitement about these bound states of quarks and antiquarks but what can I do? They are among the leaders of the revolt against the Standard Model.

Various physicists have mentioned a new announcement by the LHCb collaboration which is smaller than ATLAS and CMS but at least equally assertive.

Another physicist has embedded the key graph where you should notice that the black crosses sit well below the dotted line where they're predicted to sit

and we were told about the LHCb PowerPoint presentation from which this graph was taken.

To make the story short, some ratio describing the decays of B-mesons that should be one according to the Standard Model – if the electron, muon, and tau behave identically, except for their differing masses, which are rather irrelevant here – ends up being $\Large {\mathcal R}_{K^{*0}} = 0.69^{+0.12}_{-0.08}$ especially in the interval of momentum transfer $$q^2 \in (1,6)\GeV^2$$.

There are some similar deviations at higher values of $$q^2$$ – it's always about 2.2-2.5 standard deviations below the Standard Model. Sadly, it seems that neither BaBar nor Belle saw these deficits: their mean values are slightly greater than one, although their error margins were greater than that of the LHCb collaboration. On the other hand, the deficit seems rather compatible with LHCb's recent announcements based on a (hopefully) disjoint set of decays.

An obvious reaction is that the deviation in this low-energy range isn't too exciting, anyway, because

Well, unless it's some new physics (new even for Jester) that affects this energy range. ;-)

I find this deviation rather small and our survival of the 4-sigma excess at $$750\GeV$$ should have made us a little bit more demanding when it comes to the significance level that is needed to make us aroused. But those who are interested in the existing or potentially emerging experimental anomalies should be aware of this deviation because the competition in this field is very limited.

## April 14, 2017

### Marco Frasca - The Gauge Connection

Well below 1%

When a theory is too hard to solve, people try to consider lower-dimensional cases. This also happened for Yang-Mills theory. The four-dimensional case is notoriously difficult to manage due to the large coupling, while the three-dimensional case has been treated both theoretically and by lattice computations. In the latter case, the ground-state energy of the theory is known very precisely (see here). So, a sound theoretical approach from first principles should be able to reproduce that number at the same level of precision. We know that this is the situation for the Standard Model with respect to some experimental results, but a pure Yang-Mills theory has never been seen in nature, and we have to content ourselves with computer data. The reason is that in nature a Yang-Mills theory is realized only in interaction with other kinds of fields, be they scalars, fermions, or vectors.

A few days ago, I received the news that my paper on three-dimensional Yang-Mills theory has been accepted for publication in the European Physical Journal C. Here is the table for the ground state of SU(N) at different values of N, compared to lattice data:

| N | Lattice    | Theoretical | Error |
|---|------------|-------------|-------|
| 2 | 4.7367(55) | 4.744262871 | 0.16% |
| 3 | 4.3683(73) | 4.357883714 | 0.2%  |
| 4 | 4.242(9)   | 4.243397712 | 0.03% |
|   | 4.116(6)   | 4.108652166 | 0.18% |

These results are strikingly good, with agreement well below 1%. This in turn implies that the underlying theoretical derivation is sound. Besides, the approach proves successful also in four dimensions (see here). My hope is that this marks the beginning of an era of high-precision theoretical computations in strong interactions.
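The error column can be recomputed directly from the table; a minimal sketch, taking the relative deviation between the lattice central values and the theoretical predictions with the lattice value as reference:

```python
# Relative deviation (in percent) between lattice central values and the
# theoretical predictions for the SU(N) ground state, from the table above.
rows = [
    ("2", 4.7367, 4.744262871),
    ("3", 4.3683, 4.357883714),
    ("4", 4.242, 4.243397712),
    ("?", 4.116, 4.108652166),  # N value not given in the post; left blank
]

for n, lattice, theory in rows:
    rel_err = abs(theory - lattice) / lattice * 100
    print(f"N = {n}: {rel_err:.2f}%")
```

All four deviations come out below 0.25%, consistent with the "well below 1%" claim (the second row evaluates to 0.24%, quoted as 0.2% in the table).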

Andreas Athenodorou & Michael Teper (2017). SU(N) gauge theories in 2+1 dimensions: glueball spectra and k-string tensions. J. High Energ. Phys. 2017: 15. arXiv: 1609.03873v1

Marco Frasca (2016). Confinement in a three-dimensional Yang-Mills theory. arXiv: 1611.08182v2

Marco Frasca (2015). Quantum Yang-Mills field theory. Eur. Phys. J. Plus 132: 38. arXiv: 1509.05292v2

Filed under: Particle Physics, Physics, QCD Tagged: Ground state, Lattice Gauge Theories, Mass Gap, Millenium prize, Yang-Mills theory

## April 11, 2017

### Symmetrybreaking - Fermilab/SLAC

What’s left to learn about antimatter?

Experiments at CERN investigate antiparticles.

What do shrimp, tennis balls and pulsars all have in common? They are all made from matter.

Admittedly, that answer is a cop-out, but it highlights a big, persistent quandary for scientists: Why is everything made from matter when there is a perfectly good substitute—antimatter?

The European laboratory CERN hosts several experiments to ascertain the properties of antimatter particles, which almost never survive in our matter-dominated world.

Particles (such as the proton and electron) have oppositely charged antimatter doppelgangers (such as the antiproton and antielectron). Because they are opposite but equal, a matter particle and its antimatter partner annihilate when they meet.

Antimatter wasn’t always rare. Theoretical and experimental research suggests that there was an equal amount of matter and antimatter right after the birth of our universe. But 13.8 billion years later, only matter-made structures remain in the visible universe.

Scientists have found small differences between the behavior of matter and antimatter particles, but not enough to explain the imbalance that led antimatter to disappear while matter perseveres. Experiments at CERN are working to solve that riddle using three different strategies.

Illustration by Sandbox Studio, Chicago

### Antimatter under the microscope

It’s well known that CERN is home to the Large Hadron Collider, the world’s highest-energy particle accelerator. Less well known is that CERN also hosts the world’s most powerful particle decelerator—a machine that slows down antiparticles to a near standstill.

The antiproton decelerator is fed by CERN’s accelerator complex. A beam of energetic protons is diverted from CERN’s Proton Synchrotron and into a metal wall, spawning a multitude of new particles, including some antiprotons. The antiprotons are focused into a particle beam and slowed by electric fields inside the antiproton decelerator. From here they are fed into various antimatter experiments, which trap the antiprotons inside powerful magnetic fields.

“All these experiments are trying to find differences between matter and antimatter that are not predicted by theory,” says Will Bertsche, a researcher at the University of Manchester who works in CERN’s antimatter factory. “We’re all trying to address the big question: Why is the universe made up of matter these days and not antimatter?”

By cooling and trapping antimatter, scientists can intimately examine its properties without worrying that their particles will spontaneously encounter a matter companion and disappear. Some of the traps can preserve antiprotons for more than a year. Scientists can also combine antiprotons with positrons (antielectrons) to make antihydrogen.

“Antihydrogen is fascinating because it lets us see how antimatter interacts with itself,” Bertsche says. “We’re getting a glimpse at how a mirror antimatter universe would behave.”

Scientists in CERN’s antimatter factory have measured the mass, charge, light spectrum, and magnetic properties of antiprotons and antihydrogen to high precision. They also look at how antihydrogen atoms are affected by gravity; that is, do the anti-atoms fall up or down? One experiment is even trying to make an assortment of matter-antimatter hybrids, such as a helium atom in which one of the electrons is replaced with an orbiting antiproton.

So far, all their measurements of trapped antimatter match the theory: Except for the opposite charge and spin, antimatter appears completely identical to matter. But these affirmative results don’t deter Bertsche from looking for antimatter surprises. There must be unpredicted disparities between these particle twins that can explain why matter won its battle with antimatter in the early universe.

“There’s something missing in this model,” Bertsche says. “And nobody is sure what that is.”

### Antimatter in motion

The LHCb experiment wants to answer this same question, but they are looking at antimatter particles that are not trapped. Instead, LHCb scientists study how free-range antimatter particles behave as they travel and transform inside the detector.

“We’re recording how unstable matter and antimatter particles decay into showers of particles and the patterns they leave behind when they do,” says Sheldon Stone, a professor at Syracuse University working on the LHCb Experiment. “We can’t make these measurements if the particles aren’t moving.”

The particles-in-motion experiments have already observed some small differences between matter and antimatter particles. In 1964, scientists at Brookhaven National Laboratory noticed that neutral kaons (particles made of a down quark and a strange antiquark) decay into matter and antimatter particles at slightly different rates, an observation that won them the Nobel Prize in 1980.

The LHCb experiment continues this legacy, looking for even more discrepancies between the metamorphoses of matter and antimatter particles. They recently observed that the daughter particles of certain antimatter baryons (particles containing three quarks) have a slightly different spatial orientation than their matter contemporaries.

But even with the success of uncovering these discrepancies, scientists are still very far from understanding why antimatter all but disappeared.

“Theory tells us that we’re still off by nine orders of magnitude,” Stone says, “so we’re left asking, where is it? What is antimatter’s Achilles heel that precipitated its disappearance?”

Illustration by Sandbox Studio, Chicago

### Antimatter in space

Most antimatter experiments based at CERN produce antiparticles by accelerating and colliding protons. But one experiment is looking for feral antimatter freely roaming through outer space.

The Alpha Magnetic Spectrometer is an international experiment supported by the US Department of Energy and NASA. This particle detector was assembled at CERN and is now installed on the International Space Station, where it orbits Earth 400 kilometers above the surface. It records the momentum and trajectory of roughly a billion vagabond particles every month, including a million antimatter particles.

Nomadic antimatter nuclei could be lonely relics from the Big Bang or the rambling residue of nuclear fusion in antimatter stars.

But AMS searches for phenomena not explained by our current models of the cosmos. One of its missions is to look for antimatter that is so complex and robust, there is no way it could have been produced through normal particle collisions in space.

“Most scientists accept that antimatter disappeared from our universe because it is somehow less resilient than matter,” says Mike Capell, a researcher at MIT and a deputy spokesperson of the AMS experiment. “But we’re asking, what if all the antimatter never disappeared? What if it’s still out there?”

If an antimatter kingdom exists, astronomers expect that they would observe mass particle-annihilation fizzing and shimmering at its boundary with our matter-dominated space—which they don’t. Not yet, at least. Because our universe is so immense (and still expanding), researchers on AMS hypothesize that maybe these intersections are too dim or distant for our telescopes.

“We already have trouble seeing deep into our universe,” Capell says. “Because we’ve never seen a domain where matter meets antimatter, we don’t know what it would look like.”

AMS has been collecting data for six years. From about 100 billion cosmic rays, they’ve identified a few strange events with characteristics of antihelium. Because the sample is so tiny, it’s impossible to say whether these anomalous events are the first messengers from an antimatter galaxy or simply part of the chaotic background.

“It’s an exciting result,” Capell says. “However, we remain skeptical. We need data from many more cosmic rays before we can determine the identities of these anomalous particles.”

## April 10, 2017

### Axel Maas - Looking Inside the Standard Model

Making connections inside dead stars

Last time I wrote about our research on neutron stars. There we were concerned with the global properties of neutron stars - their mass and size. But these are determined by the particles inside the star, the quarks and gluons, and by how they influence each other through the strong force.

However, a neutron star is much more than just quarks and gluons bound by gravity and the strong force.

Neutron stars are also affected by the weak force, in a quite subtle way. The weak force can transform a neutron into a proton, an electron and an (anti)neutrino, and back. In a neutron star, this happens all the time. Still, the neutrons remain neutrons most of the time, hence the name neutron star. Looking at this process more microscopically, protons and neutrons consist of quarks: the proton of two up quarks and one down quark, the neutron of one up quark and two down quarks. Thus, what really happens is that a down quark changes into an up quark, an electron and an (anti)neutrino, and back.
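As a small consistency check on this quark-level process, d → u + e⁻ + ν̄, the following sketch verifies that electric charge and baryon number balance on both sides (the quantum-number assignments are the standard textbook ones, not taken from the post):

```python
from fractions import Fraction

# (electric charge in units of e, baryon number) for each participant
down = (Fraction(-1, 3), Fraction(1, 3))
up = (Fraction(2, 3), Fraction(1, 3))
electron = (Fraction(-1), Fraction(0))
antineutrino = (Fraction(0), Fraction(0))

def totals(particles):
    """Sum electric charge and baryon number over a list of particles."""
    return (sum(p[0] for p in particles), sum(p[1] for p in particles))

# d -> u + e- + anti-nu must carry the same quantum numbers on both sides
assert totals([down]) == totals([up, electron, antineutrino])
print("charge and baryon number conserved:", totals([down]))
```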

As noted, this does not happen too often. But that is only true for a neutron star just hanging around. When a neutron star is created in a supernova, it happens very often. In particular, the star which becomes a supernova consists mostly of protons, which have to be converted to neutrons for the neutron star. Another case is when two neutron stars collide. Then this process becomes much more important, and more rapid. The latter is quite exciting, as the consequences may be observable in astronomy in the next few years.

So, how can the process be described? Usually, the weak force is weak, as the name says, and it is therefore usually possible to treat it as a small effect. Such small effects are well described by perturbation theory. This is fine if the neutron star just hangs around. But for collisions, or during formation, the effect is no longer small, and then other methods are necessary. For the same reasons as in the case of inert neutron stars, we cannot use simulations here. But our third possibility, the so-called equations of motion, works.

Therefore Walid Mian, a PhD student of mine, and I used these equations to study how quarks behave if we offer them a background of electrons and (anti)neutrinos. We have published a paper about our results, and I would like to outline what we found.

Unfortunately, we still cannot do the calculations exactly. In particular, we cannot independently vary the amount of electrons and (anti)neutrinos and the strength of their coupling to the quarks. Thus, we can only estimate what a more intense combination of both together means. Since this is qualitatively what we expect to happen during the collision of two neutron stars, this should be a reasonable approximation.

For a very small intensity we do not see anything beyond what we expect from perturbation theory. But the first surprise came already when we cranked up the intensity: new effects showed up much earlier than expected. In fact, they appeared at intensities a factor of 10 to 1000 smaller than expected. Thus, the weak interaction could play a much larger role in such environments than usually assumed. That was the first insight.

The second was that the type of quark - whether it is an up or a down quark - is more relevant than expected. In particular, whether the two quarks have different masses, as in nature, or the same mass makes a big difference. If the masses differ, qualitatively new effects arise, which was not expected in this form.

The observed effects themselves are actually quite interesting: they make the quarks, depending on their type, either more or less sensitive to the weak force. This is important. When neutron stars are created or collide, they become very hot, and their main way of cooling down is to dump (anti)neutrinos into space. This becomes more efficient if the quarks react less to the weak force. Thus, our findings could have consequences for how quickly neutron stars cool.

We also saw that these effects only start to play a role if the quarks can move inside the neutron star over a sufficiently large distance, where sufficiently large means about the size of a neutron. Thus the environment of the neutron star makes itself felt once the quarks start to feel that they do not live in a single neutron, but rather in a neutron star, where the neutrons touch each other. All of the qualitatively new effects then start to appear.

Unfortunately, to estimate how important these new effects really are for the neutron star, we first have to understand what they mean for the neutrons. Essentially, we have to lift our results to a larger scale - what does this mean for the whole neutron? - before we can redo our investigation of the full neutron star with these effects included. Not to mention the impact on a collision, which is even more complicated.

Thus, our current next step is to understand what the weak interaction implies for hadrons, i.e. states of multiple quarks like the neutron. The first step is to understand how the hadron can decay and reform by the weak force, as I described earlier. The decay itself can be described already quite well using perturbation theory. But decay and reforming, or even an endless chain of these processes, cannot yet. To become able to do so is where we head next.