# Particle Physics Planet

## July 27, 2015

### astrobites - astro-ph reader's digest

Shifting the Pillars – Constraining Lithium Production in Big Bang Nucleosynthesis

Title: Constraining Big Bang lithium production with recent solar neutrino data
Authors: Marcell P. Takacs, Daniel Bemmerer, Tamas Szucs, Kai Zuber
First Author’s Institution: Helmholtz-Zentrum Dresden-Rossendorf
Notes: in press at Phys. Rev. D

Guest author Tom McClintock

Today’s post was written by Tom McClintock, a third-year graduate student in Physics at the University of Arizona. His research interests include cosmology and large-scale structure. Tom did his undergrad at Amherst College and an MSc in high performance computing at the University of Edinburgh. In addition to his research, he is in a long-term relationship with ultimate frisbee and Dungeons & Dragons.

Among the tests passed by the standard cosmological model, Big Bang Nucleosynthesis (BBN) may be the most rigorous, in that its predictions of light element abundances are consistent with observations over ten orders of magnitude. All of this production occurs within the first fifteen minutes(!) following the Big Bang, and ceases once the weak reactions producing neutrons fall out of equilibrium. However, for over thirty years there has been tension between theoretical BBN calculations of the lithium-7 abundance and measurements of metal-poor stars, a discrepancy known as the Cosmic Lithium Problem (which astrobites has discussed here and here). BBN calculations within $\Lambda$CDM predict an abundance over three times that found on the surface of Population II stars. Something has to give.

The authors of today’s paper investigate a nuclear physics solution, the reaction rate 3He + 4He $\rightarrow \gamma$ + 7Be, shortened to 3He(a,g)7Be. Production of beryllium-7 is important because 7Be eventually decays to 7Li through electron capture. Nuclear reactions are described by reaction rates, which in turn are described by interaction cross sections, which can be measured by experiments. In the case of 3He(a,g)7Be, any change in the measured cross section affects the theoretical BBN 7Li yield, and thus the compatibility between the standard cosmological model and abundance observations.

In addition, 3He(a,g)7Be is a critical step in both the pp-2 and pp-3 branches of the pp-chain of hydrogen burning in the Sun. Both of these branches also produce electron neutrinos, observable on Earth. The authors use new solar neutrino flux data published by the BOREXINO collaboration in order to constrain the 3He(a,g)7Be reaction rate. From there they recalculate the theoretical 7Li yield and confirm the significant tension between theory and observation.

The Tricky Part

Nuclear reaction cross sections have a temperature-dependent sweet spot, called the Gamow peak, where the reaction rate is maximised. For this reason, it is much easier for experiments to probe cross sections near the Gamow peak; at lower energies there isn’t enough juice to get the nuclei to smash together, and at higher energies they whiz by each other too fast. Unfortunately, the energy range of interest (0.1 – 0.5 MeV) for BBN temperatures (~5 × 10⁸ K) is too low, lying just outside the capabilities of most experiments. Therefore, in order to perform BBN calculations it has been necessary to extrapolate the cross section down to these energies.
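To see where these energy scales come from, one can use the standard textbook estimate of the Gamow peak energy, E0 = 1.22 (Z1² Z2² μ T6²)^(1/3) keV, with μ the reduced mass in atomic mass units and T6 the temperature in units of 10⁶ K. The short sketch below is illustrative only (it is not from the paper, and the temperatures are round numbers):

```python
def gamow_peak_keV(Z1, Z2, mu_amu, T6):
    """Textbook estimate of the Gamow peak energy in keV:
    E0 = 1.22 * (Z1^2 * Z2^2 * mu * T6^2)^(1/3),
    with the reduced mass mu in atomic mass units and T6 in 10^6 K."""
    return 1.22 * (Z1 ** 2 * Z2 ** 2 * mu_amu * T6 ** 2) ** (1.0 / 3.0)

mu = 3.0 * 4.0 / (3.0 + 4.0)  # reduced mass of 3He + 4He, in amu

print(gamow_peak_keV(2, 2, mu, 15.7))   # solar core (~16 MK): roughly 23 keV
print(gamow_peak_keV(2, 2, mu, 500.0))  # BBN (~5e8 K): a few hundred keV
```

The same formula places the solar Gamow peak for this reaction far below the BBN window, which is why a solar constraint probes energies no accelerator experiment has reached.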

Takacs et al. sidestepped this limitation by utilizing the solar neutrino data to constrain the reaction rate at an energy lower than that of BBN, thereby removing the need for extrapolation.

Method

By assuming a standard solar model (SSM) as well as the standard neutrino oscillation model, the authors determine that the predicted neutrino flux depends on a variety of parameters such as solar luminosity, age, opacity, and nuclear reaction rates. They then use calculations of the sensitivity of the neutrino flux to variations in the 3He(a,g)7Be reaction rate in order to write this rate in terms of the observed flux, the expected flux from the SSM, and the best theoretical reaction rate from the SSM. As shown in the figure below, their data point was measured at an energy almost a factor of ten below all previous measurements of the cross section.
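Schematically, the inversion works like this: within the SSM the relevant neutrino fluxes scale as a power of the S-factor, Φ ∝ S34^α, so the observed-to-predicted flux ratio can be converted into a measurement of S34. A hedged Python sketch (α, the flux ratio and the S34 normalisation below are illustrative placeholders, not the paper's numbers):

```python
def s34_from_flux(phi_obs, phi_ssm, s34_ssm, alpha):
    """Invert the SSM power-law scaling Phi ~ S34**alpha to express the
    measured astrophysical S-factor via the observed/predicted flux ratio:
    S34 = S34_SSM * (phi_obs / phi_ssm)**(1/alpha)."""
    return s34_ssm * (phi_obs / phi_ssm) ** (1.0 / alpha)

# Purely illustrative numbers: a 5% flux deficit with alpha = 0.8
# pulls the inferred S34 down by about 6%.
print(s34_from_flux(0.95, 1.00, 0.56, 0.8))  # ~0.525
```

The point of the exercise is that the uncertainty on the flux ratio, rather than an extrapolation, then dominates the error on S34.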

The so-called “astrophysical S-factor” S34 is a parameterization of, and is directly proportional to, the interaction cross section. Takacs et al. were able to measure S at an energy almost a factor of ten lower than the best accelerator experiments. The fit to S is given as a red line, while the blue dashed line is calculated analytically from theory but due to numerical limitations cannot reach energies considered in this paper. The solar Gamow peak is given as the red shaded region, and the blue shaded region indicates the energy range for BBN. This figure is from Takacs et al.

Results

The cross section for 3He(a,g)7Be came out about 5% lower than the value previously used in several BBN calculations, and its precision improved by almost a factor of three, mostly thanks to the elimination of extrapolation. Using this cross section, the authors updated the reaction rate in a public BBN code and found a small increase in the disagreement between the theoretical 7Li abundance and the abundance observed on the surface of metal-poor stars. However, they caution that further work on the SSM may change the error budget for the 3He(a,g)7Be cross section.

This study both confirms and (slightly) exacerbates the cosmic lithium problem, yet it demonstrates how astrophysical processes in our own solar system can serve as probes of fundamental physics. BBN marks the boundary between precision and speculative cosmology, and the lithium problem keeps researchers from pushing this boundary further.

### Christian P. Robert - xi'an's og

Egyptian fractions [Le Monde puzzle #922]

For its summer edition, the Le Monde mathematical puzzle switched to a lighter version with immediate solution. This #922 considers Egyptian fractions, sums of unit fractions (the numerator is always 1) with distinct denominators: 3/4, for instance, is represented as 1/2 + 1/4, and each denominator appears only once. As I discovered when looking online, a lot of people are fascinated with this representation and have devised different algorithms to achieve decompositions with various properties, including Fibonacci, who described the greedy algorithm in his 1202 Liber Abaci. In the current Le Monde edition, the questions were somewhat modest and dealt with the smallest decompositions of 2/5, 5/12, and 50/77 under some additional constraint.
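Fibonacci's greedy step, repeatedly subtracting the largest unit fraction that still fits, takes only a few lines. Here is a minimal Python illustration of the scheme (separate from the R code that follows):

```python
from fractions import Fraction

def greedy_egyptian(a, b):
    """Greedy (Fibonacci, 1202) decomposition of a/b into distinct
    unit fractions; returns the list of denominators."""
    rest = Fraction(a, b)
    denoms = []
    while rest > 0:
        # exact integer ceiling of denominator/numerator:
        # 1/d is the largest unit fraction not exceeding the remainder
        d = -(-rest.denominator // rest.numerator)
        denoms.append(d)
        rest -= Fraction(1, d)
    return denoms

print(greedy_egyptian(2, 5))    # [3, 15]
print(greedy_egyptian(50, 77))  # [2, 7, 154]
```

The denominators it produces are strictly increasing, hence automatically distinct, though the greedy choice can blow up (the classic 5/121 example yields astronomically large denominators).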

Since the issue was covered in so many places, I just spent one hour or so constructing a basic solution à la Fibonacci and then tried to improve it against a length criterion. Here are my R codes (using the numbers library):

library(numbers)  # provides primeFactors() and div()

osiris=function(a,b){
  # can the fraction a/b be simplified?
  diva=primeFactors(a)
  divb=primeFactors(b)
  divc=c(unique(diva),unique(divb))
  while (sum(duplicated(divc))>0){
    # divide out every prime factor common to a and b
    n=divc[duplicated(divc)]
    for (i in n){a=div(a,i);b=div(b,i)}
    diva=primeFactors(a)
    divb=primeFactors(b)
    divc=c(unique(diva),unique(divb))
  }
  return(list(a=a,b=b))
}


a function, presumably superfluous, for simplifying fractions, and

horus=function(a,b,teth=NULL){
  # simplification
  anubis=osiris(a,b)
  a=anubis$a;b=anubis$b
  # decomposition by removing 1/b
  isis=NULL
  if (!(b %in% teth)){
    a=a-1
    isis=c(isis,b)
    teth=c(teth,b)}
  if (a>0){
    # simplification
    anubis=osiris(a,b)
    bet=b;a=anubis$a;b=anubis$b
    if (bet>b){ isis=c(isis,horus(a,b,teth))}else{
      # find the largest unit fraction 1/k below a/b, greedy style
      k=ceiling(b/a)
      while (k %in% teth) k=k+1
      a=k*a-b
      b=k*b
      isis=c(isis,k,horus(a,b,teth=c(teth,k)))
    }}
  return(isis)}


which produces a Fibonacci solution (with the additional inclusion of the original denominator) and

nut=20
seth=function(a,b,isis=NULL){
  # simplification
  anubis=osiris(a,b)
  a=anubis$a;b=anubis$b
  if ((a==1)&(!(b %in% isis))){isis=c(isis,b)}else{
    ra=hapy=ceiling(b/a)
    if (max(a,b)<1e5) hapy=horus(a,b,teth=isis)
    # candidate denominators: the greedy ones plus random perturbations
    k=unique(c(hapy,ceiling(ra/runif(nut,min=.1,max=1))))
    propa=propb=propc=propd=rep(NaN,length.out=length(k))
    bastet=1
    for (i in k[!(k %in% isis)]){
      propa[bastet]=i*a-b
      propb[bastet]=i*b
      propc[bastet]=i
      # length of the resulting decomposition, used as selection criterion
      propd[bastet]=length(horus(i*a-b,i*b,teth=c(isis,i)))
      bastet=bastet+1
    }
    k=propc[order(propd)[1]]
    isis=seth(k*a-b,k*b,isis=c(isis,k))
  }
  return(isis)}


which compares solutions against their lengths. When calling these functions on the three fractions above, the solutions are

> seth(2,5)
[1] 15 3
> seth(5,12)
[1] 12  3
> seth(50,77)
[1]   2 154   7


with no pretension whatsoever to return anything optimal (and with some crashes when the magnitude of the entries grows; try for instance 5/121). For this last counter-example, the alternative horus works quite superbly:

> horus(5,121)
[1] 121 31 3751 1876 7036876


Filed under: Books, Kids, R Tagged: Egyptian fractions, Fibonacci, greedy algorithm, Le Monde, Liber Abaci, mathematical puzzle, numerics, Rhind papyrus

### Tommaso Dorigo - Scientificblogging

New Results From The LHC At 13 TeV!
Well, as some of you may have heard, the restart of the LHC has not been as smooth as we had hoped. In a machine as complex as this, the chance that something gets in the way of a well-planned schedule is quite significant. So there have been slight delays, but the important thing is that the data at 13 TeV centre-of-mass energy are coming, and the first results are being extracted from them.

### Emily Lakdawalla - The Planetary Society Blog

Proposals to Explore the Solar System’s Smallest Worlds
Van Kane rounds up some of the latest NASA Discovery mission proposals aiming to explore our solar system's smallest bodies.

### Peter Coles - In the Dark

inflation, evidence and falsifiability

At the risk of labouring the point, here’s another critique of the Gubitosi et al. paper I posted about a couple of days ago…

Originally posted on Xi'an's Og:

[Ewan Cameron pointed this paper to me and blogged about his impressions a few weeks ago. And then Peter Coles wrote a (properly) critical blog entry yesterday. Here are my quick impressions, as an add-on.]

“As the cosmological data continues to improve with its inevitable twists, it has become evident that whatever the observations turn out to be they will be lauded as proof of inflation.” G. Gubitosi et al.

In an arXiv preprint with the above title, Gubitosi et al. embark upon a generic and critical [and astrostatistical] evaluation of Bayesian evidence and the Bayesian paradigm. Perfect topic and material for another blog post!

“Part of the problem stems from the widespread use of the concept of Bayesian evidence and the Bayes factor (…) The limitations of the existing formalism emerge, however, as soon as we insist on falsifiability as a pre-requisite for a scientific theory (…) the…


### Symmetrybreaking - Fermilab/SLAC

W bosons remain left-handed

A new result from the LHCb collaboration weakens previous hints at the existence of a new type of W boson.

A measurement released today by the LHCb collaboration dumped some cold water on previous results that suggested an expanded cast of characters mediating the weak force.

The weak force is one of the four fundamental forces, along with the electromagnetic, gravitational and strong forces. The weak force acts on quarks, fundamental building blocks of nature, through particles called W and Z bosons.

Just like a pair of gloves, particles can in principle be left-handed or right-handed. The new result from LHCb presents evidence that the W bosons that mediate the weak force are all left-handed; they interact only with left-handed quarks.

This weakens earlier hints from the Belle and BaBar experiments of the existence of right-handed W bosons.

The LHCb experiment at the Large Hadron Collider examined the decays of a heavy and unstable particle called Lambda-b—a baryon consisting of an up quark, down quark and bottom quark. Weak decays can change a bottom quark into either a charm quark, about 1 percent of the time, or into a lighter up quark. The LHCb experiment measured how often the bottom quark in this particle transformed into an up quark, resulting in a proton, muon and neutrino in the final state.

“We found no evidence for a new right-handed W boson,” says Marina Artuso, a Professor of Physics at Syracuse University and a scientist working on the LHCb experiment.

If the scientists on LHCb had seen bottom quarks turning into up quarks more often than predicted, it could have meant that a new interaction with right-handed W bosons had been uncovered, Artuso says. “But our measured value agreed with our model’s value, indicating that the right-handed universe may not be there.”

Earlier experiments by the Belle and BaBar collaborations studied transformations of bottom quarks into up quarks in two different ways: in studies of a single, specific type of transformation, and in studies that ideally included all the different ways the transformation occurs.

If nothing were interfering with the process (like, say, a right-handed W boson), then these two types of studies would give the same value of the bottom-to-up transformation parameter. However, that wasn’t the case.

The difference, however, was small enough that it could have come from calculations used in interpreting the result. Today’s LHCb result makes it seem like right-handed W bosons might not exist after all, at least not in a way that is revealed in these measurements.

Michael Roney, spokesperson for the BaBar experiment, says, "This result not only provides a new, precise measurement of this important Standard Model parameter, but it also rules out one of the interesting theoretical explanations for the discrepancy... which still leaves us with this puzzle to solve."


### ATLAS Experiment

From ATLAS Around the World: Triggers (and dark) matter

To the best of our knowledge, it took the Universe about 13.798 billion years (plus or minus 37 million) to allow funny looking condensates of mostly oxygen, carbon and hydrogen to ponder on their own existence, the fate of the cosmos and all the rest. Some particularly curious specimens became scientists, founded CERN, dug several rings into the ground near Geneva, Switzerland, built the Large Hadron Collider in the biggest ring, and also installed a handful of large detectors along the way. All of that just in order to understand a bit better why we are here in the first place. Well, here we are!

CERN was founded after World War II as a research facility dedicated to peaceful science (in contrast to military research). Germany is one of CERN’s founding members and it is great to be a part of it. Thousands of scientists are associated with CERN from over 100 countries, including some nations that do not have the most relaxed diplomatic relationships with each other. Yet this doesn’t matter at CERN, as we are working hand-in-hand for the greater good of science and technology.

Monitoring and analysing events provided by the LHC. (Picture by R. Stamen)

In the ATLAS collaboration, Germany has institutes from 14 different cities contributing to one of the largest and most complex detectors ever built. My institute, the Kirchhoff-Institut für Physik (KIP) in Heidelberg, was (and is) involved in the development and operation of the trigger mechanism that selects the interesting interactions from the not so interesting ones. Furthermore, we are doing analyses on the data to confirm the Standard Model of Particle Physics or – better yet – to find hints of excess events that point to dark matter particles (although we are still waiting for that…).

But let’s start with the trigger. The interaction rate (that is the rate at which bunches of LHC protons collide within the ATLAS detector) is way too high to save every single event. That is why a selection process is needed to decide which events to save and which to let go. This trigger mechanism is split up into several stages; the first of which handles such high rates that it needs to be implemented using custom hardware, as commercial PCs are not fast enough.
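The cascade of trigger stages can be caricatured in a few lines of Python. Everything here (thresholds, variables, rates) is made up for illustration, and the real level-1 decision runs in custom hardware rather than software:

```python
import random

def level1_accept(event):
    # toy hardware-style decision: one coarse threshold, evaluated fast
    return max(event["et"]) > 20.0

def hlt_accept(event):
    # toy software-stage decision: a slower, more refined selection
    return sum(event["et"]) > 60.0

random.seed(42)
# toy "collisions": each event carries four transverse-energy deposits
events = [{"et": [random.expovariate(1 / 8.0) for _ in range(4)]}
          for _ in range(50000)]

after_l1 = [e for e in events if level1_accept(e)]     # survives level-1
recorded = [e for e in after_l1 if hlt_accept(e)]      # survives both stages
print(len(events), len(after_l1), len(recorded))       # each stage cuts the rate
```

The design point is the same as at ATLAS: a cheap, fast first cut reduces the rate enough that a slower, more careful selection becomes affordable.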

This first stage (also called the level-1 trigger) is what we work on here at KIP. For instance, together with a fellow student, I took care of one of the first timing checks after the long shutdown. This was important, because we wanted to know whether the extensive maintenance that started after Run 1 (during which we had personally installed new hardware) had somehow changed the timing behaviour of the level-1 trigger. Having a correctly timed system is crucial: if you are off by even a few nanoseconds, your trigger starts misbehaving and you might miss Higgs bosons or other interesting events.

In order to determine the timing of our system we used “splash” events. Instead of collisions at the centre of the detector, a “splash” is an energetic spray of a huge number of particles that comes from only one direction (more information on splashes here). They are great for timing the system, because they light up the entire detector like a Christmas tree. Also, they came from the first LHC beam since Run 1 – so it was the first opportunity to see the detector at work. This work was intense and cool. The beam splashes were scheduled over Easter, but we did not care. We gladly spent our holiday together in the ATLAS control room with other highly motivated people who sacrificed their long weekend for science. To see the first beams live in the control room after a long shutdown was a special experience. Extremely enthusiastic!

Murrough Landon (right) and Manuel (left) discussing results from the beam splashes. (Picture by R. Stamen)

But of course, timing is not the only thing that has to be done. We also write the firmware for our hardware, code software (for instance, to monitor our system in real time), plan future upgrades (in both hardware and software) and do even more calibration. Each of these items is important for the operation of the detector and also very exciting to work on. I find it cool to know that the stuff I worked on helps keep ATLAS running.

Once we have the data, what do we do with it? Each student at KIP can choose which topic he or she wants to work on, yet the majority of us study processes related to electroweak interactions. This part of the Standard Model has become even more interesting since the discovery of the Higgs boson and has potential for the discovery of new physics. For example, dark matter. Many models predict that dark matter interacts electroweakly, which is what I am working on. We can search for it in the data by looking for events in which we know particles escaped the detector without interacting with it (leaving “missing transverse energy”; neutrinos do this too) and then comparing the results to models of electroweak coupling to dark matter. The discovery of dark matter would be awesome. The cosmological evidence for dark matter is convincing (for instance, galactic rotation curves or the agreement between observations from the Planck satellite and models such as ΛCDM). It is just a matter of finding it…
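The “missing transverse energy” used in such searches is conceptually simple: the magnitude of the negative vector sum of the visible particles' transverse momenta. A toy sketch (invented numbers, not ATLAS software):

```python
import math

def missing_et(particles):
    """Missing transverse energy: magnitude of the negative vector sum
    of the visible (px, py) transverse-momentum components."""
    px = -sum(p[0] for p in particles)
    py = -sum(p[1] for p in particles)
    return math.hypot(px, py)

# Toy event: three visible particles whose transverse momenta do not
# balance, hinting at something invisible (a neutrino, or dark matter).
visible = [(40.0, 10.0), (-25.0, 5.0), (-5.0, -3.0)]
print(missing_et(visible))  # about 15.62
```

A perfectly balanced event gives zero; a large imbalance, after accounting for neutrinos and detector effects, is the dark-matter signature the analysis looks for.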

Going back to the beginning – literally. I am extremely curious to see what we – those funny-looking condensates of mostly oxygen, carbon and hydrogen – will find out about the Universe, its beginning, end, in-between, composition, geometry, behaviour and countless other aspects. And CERN, and especially the ATLAS collaboration, is a great environment in which to do so.

 Manuel is a PhD student at the Kirchhoff-Institut für Physik at the University of Heidelberg, Germany. He joined ATLAS in 2014 and has since been working on both the level-1 calorimeter trigger and an analysis searching for dark matter. He did his Bachelor’s and Master’s degrees in Physics in Bielefeld, Germany, in the fields of molecular magnetism theory and material science. For his PhD he decided to switch fields and become an experimental particle physicist.

### arXiv blog

How the New Science of Game Stories Could Change the Future of Sports

Every sporting event tells a story. Now the first computational analysis of “game stories” suggests that future sports could be designed to prefer certain kinds of stories over others.

“Serious sport is war minus the shooting.” Many athletes will agree with George Orwell’s famous observation. But many fans might add that the best sport is a form of unscripted storytelling: the dominant one-sided thrashing, the back-and-forth struggle, the improbable comeback, and so on.

### Quantum Diaries

Too exciting not to share

Most physicists agree: physics is far too exciting to be reserved for scientists alone. And for the first time, the European Physical Society (EPS) devoted an entire session to it on Saturday at its ongoing particle physics conference in Vienna. Several speakers reported on a variety of initiatives aimed at sharing the best of particle physics with the general public.

Most of the activities described target students of all ages, from developed and developing countries alike. Kate Shaw, a researcher at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, stressed how science can help solve a range of environmental and development problems. The world needs more scientists, Kate stated. Investing in education, as well as in technological and cultural institutions, plays a key role in building a knowledge-based economy. Fundamental research stimulates the applied sciences through innovation, technology and engineering. She also underlined the importance of including all minorities and young people from low-income families.

Kate founded the “Physics without Frontiers” programme at ICTP and has organised “Masterclasses” (see below) and other activities in Palestine, Egypt, Nepal, Lebanon, Vietnam and Algeria. Not only does she inspire young people to take up studies in science, she also mentors them, helping them gain access to Masters and PhD programmes. Kate received the EPS Outreach Award today “for her work disseminating particle physics in countries that have no well-established programmes”.

Students taking part in a Masterclass in Palestine as part of the “Physics without Frontiers” programme

A Masterclass is a full day of interactive activities designed for school students. Scientists first describe particle physics and the experiment they work on. A shared meal encourages discussion before the students launch into real analyses with real data. Every year, an international Masterclass brings together about 10,000 students from 42 countries. They join scientists at 200 nearby universities or laboratories to carry out genuine physics measurements in international collaboration with the other students. Why not take part in a Masterclass?

These students, as well as other groups, can also take part in a virtual visit to a physics experiment. A scientist on site at the laboratory interacts with the group before showing them around the facilities via a live video link.

Looking for an inspiring activity that is simple, cheap and accessible for a special event, a conference or a group? Invite them on a virtual visit to CERN (ATLAS or CMS). In January, for instance, 500 students from Mumbai took advantage of their “visit” to the IceCube experiment, 12,000 km away at the South Pole, to bombard the scientists with questions.

CERN’s Teacher Programme has already welcomed a thousand participants. Secondary-school teachers from all over the world are given several weeks of eye-opening experiences to make sure they will share their enthusiasm with their students when they return home.

Public lectures and popular science books target a more general audience. Many scientists, myself included, will be happy to come and give a talk near you. Just ask.

Pauline Gagnon

To be notified when new posts appear, follow me on Twitter: @GagnonPauline, or add your name to this mailing list to receive an e-mail notification, or visit my website.

### Quantum Diaries

Too exciting to leave it only to physicists

Most physicists agree: physics is too interesting to leave only to physicists. For the first time, the European Physical Society (EPS) dedicated a whole session to outreach this year at its ongoing particle physics conference in Vienna. Participants reported on a wealth of creative initiatives undertaken by individuals and institutions to share the best of particle physics with the general public.

Most activities described aimed at students of all ages, in developed and developing countries. Kate Shaw, a researcher from the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, stressed how science can help solve various environmental and developmental problems. The world needs more scientists, Kate stated, and investing in education, technology and cultural institutions plays a key role in developing a knowledge-based economy. Fundamental research stimulates applied sciences through innovation, technology and engineering. She also mentioned the importance of reaching out to all minorities and low-income students everywhere.

Kate initiated the “Physics without Frontiers” programme at ICTP and conducted “Masterclasses” (see below) in the Palestinian Territories, Egypt, Lebanon, Nepal, Vietnam and Algeria. Not only does she inspire students to study science, but she also mentors them to help them access Masters and PhD programmes. Kate today received the EPS Outreach Prize “for bringing particle physics to countries with no strong tradition in particle physics”.

Students taking part in a Masterclass in Palestine sponsored by “Physics without Frontiers”

A Masterclass is a full day of interactive activities designed for high-school and undergraduate students. Physicists first describe their fields and their experiment, then the students can interact with them over lunch before launching into real analysis with real data. Every year, an international Masterclass brings together some 10,000 students from 42 countries. They join scientists at 200 nearby universities or research centres, measuring meaningful quantities in collaboration with the other international students. You too could participate in a Masterclass.

Masterclass participants and other groups are also often treated to a virtual visit of a top-notch experiment. A scientist located at the laboratory interacts with the group, then “walks” them through the facilities using a live video connection.

Are you looking for an inspiring activity that is simple, cheap and accessible to all for a special event, conference or group? Treat them to a virtual visit to CERN (ATLAS or CMS). In January, 500 students from Mumbai “visited” the IceCube experiment at the South Pole, 12,000 km away, flooding the scientists with questions.

CERN’s Teacher Programme is also thriving, with one thousand participants so far. High-school teachers from all over the world are treated to unforgettable experiences to make sure they will share their enthusiasm and excitement with their students when they return home.

Public lectures and popular science books aim at more general audiences. Many scientists worldwide, including myself, will be happy to come give a public lecture in your area upon request. Just ask.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline, or sign up on this mailing list to receive an e-mail notification. You can also visit my website.

## July 26, 2015

### Christian P. Robert - xi'an's og

inflation, evidence and falsifiability

[Ewan Cameron pointed this paper to me and blogged about his impressions a few weeks ago. And then Peter Coles wrote a (properly) critical blog entry yesterday. Here are my quick impressions, as an add-on.]

“As the cosmological data continues to improve with its inevitable twists, it has become evident that whatever the observations turn out to be they will be lauded as proof of inflation.” G. Gubitosi et al.

In an arXiv preprint with the above title, Gubitosi et al. embark upon a generic and critical [and astrostatistical] evaluation of Bayesian evidence and the Bayesian paradigm. Perfect topic and material for another blog post!

“Part of the problem stems from the widespread use of the concept of Bayesian evidence and the Bayes factor (…) The limitations of the existing formalism emerge, however, as soon as we insist on falsifiability as a pre-requisite for a scientific theory (…) the concept is more suited to playing the lottery than to enforcing falsifiability: winning is more important than being predictive.” G. Gubitosi et al.

It is somehow quite hard not to quote most of the paper, because prose such as the above abounds. Now, compared with standard practice, the authors introduce a higher level than models, called paradigms, as collections of models. (I wonder what the next level is: monads? universes? paradises?) Each paradigm is associated with a marginal likelihood, obtained by integrating over models and model parameters. This is also the evidence of, or for, the paradigm. And then, assuming a prior on the paradigms, one can compute the posterior over the paradigms… What is the novelty, then, that “forces” falsifiability upon Bayesian testing (or the reverse)?!

“However, science is not about playing the lottery and winning, but falsifiability instead, that is, about winning given that you have bore the full brunt of potential loss, by taking full chances of not winning a priori. This is not well incorporated into the Bayesian evidence because the framework is designed for other ends, those of model selection rather than paradigm evaluation.” G. Gubitosi et al.

The paper starts with a criticism of the Bayes factor in the point-null test of a Gaussian mean, as overly penalising the null against an alternative that is only a power law. Not much new there: it is well known that the Bayes factor does not converge at the same speed under the null and under the alternative… The first proposal of the authors is to consider the distribution of the marginal likelihood of the null model under the [or a] prior predictive encompassing both hypotheses or only the alternative [there is a lack of precision at this stage of the paper], in order to calibrate the observed value against the expected one. What is the connection with falsifiability? The notion that, under the prior predictive, most of the mass is on very low values of the evidence, leading to concluding against the null. If the null is replaced with the alternative marginal likelihood, its mass then becomes concentrated on the largest values of the evidence, which is translated as an unfalsifiable theory. In simpler terms, it means you can never prove a mean θ is different from zero. Not a tremendous item of news, all things considered…

“…we can measure the predictivity of a model (or paradigm) by examining the distribution of the Bayesian evidence assuming uniformly distributed data.”

The alternative is to define a tail probability for the evidence, i.e., the probability of being below an arbitrarily set bound. What remains unclear to me in this notion is the definition of a prior on the data, as it seems to be model dependent, hence prohibits comparisons between models, since these would involve incompatible priors. The paper goes further in that direction by penalising models according to their predictivity, P, as exp{-(1-P²)/P²}. And paradigms as well.
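For concreteness, the proposed penalty is easy to tabulate, which makes its severity plain. The snippet below simply transcribes the exp{-(1-P²)/P²} expression quoted above:

```python
import math

def predictivity_weight(P):
    """Penalty on a model with predictivity P in (0, 1]:
    weight = exp(-(1 - P**2) / P**2).
    A fully predictive model (P = 1) keeps weight 1; the weight
    decays extremely fast as P -> 0."""
    return math.exp(-(1.0 - P ** 2) / P ** 2)

for P in (1.0, 0.5, 0.2):
    print(P, predictivity_weight(P))
```

At P = 0.5 the weight is already exp(-3) ≈ 0.05, and at P = 0.2 it is below 10⁻¹⁰, which illustrates how aggressively unpredictive models (and paradigms) would be down-weighted.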

“(…) theoretical matters may end up being far more relevant than any probabilistic issues, of whatever nature. The fact that inflation is not an unavoidable part of any quantum gravity framework may prove to be its greatest undoing.”

Establishing a principled way to weight models would certainly be a major step in the validation of posterior probabilities as a quantitative tool for Bayesian inference, as hinted at in my 1993 paper on the Lindley-Jeffreys paradox, but I do not see such a principle emerging from the paper. Not only because of the arbitrariness in constructing both the predictivity and the associated prior weight, but also because of the impossibility of defining a joint predictive, that is, a predictive across models, without including the weights of those models. This makes the prior probabilities appear on “both sides” of the defining equation… (And I will not mention the issues of constructing a prior distribution of a Bayes factor that are related to Aitkin‘s integrated likelihood. Nor will I obviously try to enter the cosmological debate about inflation.)

Filed under: Books, pictures, Statistics, University life Tagged: astrostatistics, Bayes factor, Bayesian model choice, Bayesian paradigm, Ewan Cameron, Gottfried Leibnitz, Imperial College London, inflation, Karl Popper, monad, paradigm shift, Peter Coles, quantum gravity

### Clifford V. Johnson - Asymptotia

Santiago
I'm in Santiago, Chile, for a short stay. My first thought, in a very similar thought process to the one I had over ten years ago in a similar context, is one of surprise at how wonderfully far south of the equator I now am! Somehow, just like last time I was in Chile (even further south, in Valdivia), I only properly looked at the latitude on a map when I was most of the way here (due to being somewhat preoccupied with other things right up to leaving), and it is a bit of a jolt. You will perhaps be happy to know that I will refrain from digressions about the Coriolis force and bathtubs, hurricanes and typhoons, and the like. I arrived too early to check into my hotel and so after leaving my bag there I went wandering for a while using the subway, finding a place to sit and have lunch and coffee while watching the world go by. It happened to be at Plaza de Armas. I sketched a piece of what I saw, and that's what you see in the snap above. I think the main building I sketched is in fact the Central Post Office... And that is a bit of some statuary in front of the Metropolitan Cathedral to the left. I like that the main cathedral and post office are next to each other like that. And yes, [...] Click to continue reading this post

### Peter Coles - In the Dark

The Renewed Threat to STEM

A couple of years ago, soon after taking over as Head of the School of Mathematical and Physical Sciences (MPS) at the University of Sussex, I wrote a blog post called The Threat to STEM from HEFCE’s Funding Policies about how the funding policies of the Higher Education Funding Council for England (HEFCE) were extremely biased against STEM disciplines. The main complaint I raised then was that the income per student for science subjects does not adequately reflect the huge expense of teaching these subjects compared to disciplines in the arts and humanities. The point is that universities now charge the same tuition fee for all subjects (usually £9K per annum) while the cost varies hugely across disciplines: science disciplines can cost as much as £16K per annum per student whereas arts subjects can cost as little as £6K. HEFCE makes a small gesture towards addressing this imbalance by providing an additional grant for “high cost” subjects, but that is only just over £1K per annum per student, not enough to make such courses financially viable on their own. And even that paltry contribution has been steadily dwindling.  In effect, fees paid by arts students are heavily subsidising the sciences across the Higher Education sector.

The situation was bad enough before last week’s announcement of an immediate £150M cut in HEFCE’s budget. Once again the axe has fallen hardest on STEM disciplines. Worst of all, a large part of the savings will be made retrospectively, i.e. by clawing back money that had already been allocated and on which institutions had relied in planning their budgets. To be fair, HEFCE had warned institutions that cuts were coming in 2015/16:

This means that any subsequent changes to the funding available to us from Government for 2015-16, or that we have assumed for 2016-17, are likely to affect the funding we are able to distribute to institutions in the 2015-16 academic year. This may include revising allocations after they have already been announced. Accordingly, institutions should plan their budgets prudently.

However, this warning does not mention the possibility of cuts to the current year (i.e. 2014-15). No amount of prudent planning of budgets will help when funding is taken away retrospectively, as is now the case. I should perhaps explain that funding allocations are made by HEFCE in a lagged fashion, based on actual student numbers, so that income for the academic year 2014-15 is received by institutions during 2015-16. In fact my institution, in common with most others, operates a financial year that runs from August 1st to July 31st, and I’ve just been through a lengthy process of setting the budget from August 1st 2015 onward; budgets are what I do most of the time these days, if I’m honest. I thought I had finished that job for the time being, but look:

In October 2015, we will notify institutions of changes to the adjusted 2014-15 teaching grants we announced in March 2015. These revised grant tables will incorporate the pro rata reduction of 2.4 per cent. This reduction, and any other changes for individual institutions to 2014-15 grant, will be implemented through our grant payments from November 2015. We do not intend to reissue 2014-15 grant tables to institutions before October 2015, but institutions will need to reflect any changes relating to 2014-15 in their accounts for that year (i.e. the current academic year). Any cash repayments due will be confirmed as part of the October announcements.

On top of this, any extra students recruited as a result of the government scrapping student number controls won’t attract any support at all from HEFCE, so we will only get the tuition fee. And the government says it wants the number of STEM students to increase? Someone tell me how that makes sense.

What a mess! It’s going to be back to the drawing board for me and my budget. And if a 2.4 per cent cut doesn’t sound much to you then you need to understand it in terms of how University budgets work. It is my job – as the budget holder for MPS – to ensure that the funding that comes in to my School is spent as efficiently and effectively as possible on what the School is meant to do, i.e. teaching and research. To that end I have to match income and expenditure as closely as possible. It is emphatically not the job of the School to make a profit: the target I am given is to return a small surplus (actually 4 per cent of our turnover) to contribute to longer-term investments. I’ve set a budget that does this, but now I’ll have to wait until October to find out how much I have to find in terms of savings to absorb the grant cut. It’s exasperating when people keep moving the goalposts like this. One would almost think the government doesn’t care about the consequences of its decisions, as long as it satisfies its fixation with cuts.

And it’s not only teaching that is going to suffer. Another big slice of savings (£52M) is coming from scrapping the so-called “transitional relief” for STEM departments that lost out as a result of the last Research Excellence Framework. This again is a policy that singles out STEM disciplines for cuts. You can find the previous allocations of transitional relief in an excel spreadsheet here. The cash cuts are largest in large universities with big activities in STEM disciplines – e.g. Imperial College will lose £10.9M previously allocated, UCL about £4.3M, and Cambridge about £4M. These are quite wealthy institutions of course, and they will no doubt cope, but that doesn’t make it any more acceptable for HEFCE to break a promise.

This cut in fact won’t alter my School’s budget either. Although we were disappointed with the REF outcome in terms of league table position, we actually increased our QR income. As an institution the University of Sussex only attracted £237,174 in transitional relief so this cut is small potatoes for us, but that doesn’t make this clawback any more palatable from the point of view of the general state of health of STEM disciplines in the United Kingdom.

These cuts are also directly contrary to the claim that the UK research budget is “ring-fenced”. It clearly isn’t, and with a Comprehensive Spending Review coming up many of us are nervous that these cuts are just a foretaste of much worse things to come. Research Councils are being asked to come up with plans based on a 40% cut in cash.

Be afraid. Be very afraid.

## July 25, 2015

### Christian P. Robert - xi'an's og

Mýrin aka Jar City [book review]

Mýrin (“The Bog”) is the third novel in the Inspector Erlendur series written by Arnaldur Indridason. It contains the major themes of the series, from the fascination for unexplained disappearances in Iceland to Erlendur’s inability to deal with his family responsibilities, to domestic violence, to exhumations. The death that starts the novel takes place in the district of Norðurmýri, “the northern marsh”, not far from the iconic Hallgrimskirkja, and not far either from DeCODE, the genetic company I visited last June and which stores genetic information about close to a million Icelanders, the Íslendingabók. And which plays an important and nefarious role in the current novel. While this episode takes place mostly between Reykjavik and Keflavik, and hence does not offer any foray into Icelandic landscapes, it reflects quite vividly on the cultural pressure still present in recent years to keep rapes and sexual violence a private matter, hidden from an indifferent or worse police force. It also shows how the police missed (in 2001) important genetic clues, being as yet unaware of the immense and frightening possibilities of handling the genetic code of an entire population. (The English and French titles refer to the unauthorised private collections of body parts accumulated [in jars] by doctors after autopsies, families being unaware of the fact.) As usual, solving the case is the least important part of the story, which tells of broken lives and survivors against all odds.

Filed under: Books, Mountains, pictures, Travel Tagged: Arnaldur Indridason, Íslendingabók, book review, deCODE, Iceland, Iceland noir, Jar City, Keflavik, Mýrin, Norðurmýri, Reykjavik

### Peter Coles - In the Dark

On (un)falsifiability of paradigms with Bayesian model selection …

Yesterday’s post is generating quite a lot of traffic for a weekend so I thought I would reblog this piece on the same topic.

Originally posted on Another Astrostatistics Blog:

I noticed an unusual contribution on the philosophy of science with Bayesian model selection by Gubitosi et al. on astro ph the other day, in which some rather bold claims are made, e.g.

“By considering toy models we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor.”

Despite the authors making a number of sniping comments about the sociology of “proof of inflation” claims in astronomy, their meta-reflections did not reach a point of self-awareness at which they were able to escape my own sociological observation: the bolder the claims made by astronomers about Bayes theorem, the narrower their reading of the past literature on the subject. Indeed, in this manuscript there are no references at all to any previous work on the role of Bayes factors in scientific decision making, even from within the astronomical canon (leaving aside the history of statistics); more precisely, it…

View original 572 more words

### Emily Lakdawalla - The Planetary Society Blog

Looking back at Pluto
I don't think anyone was prepared for the beauty -- or the instant scientific discoveries -- in this "lookback" image of Pluto, captured by New Horizons shortly after it flew by.

## July 24, 2015

### Clifford V. Johnson - Asymptotia

Page Samples…!
There's something really satisfying about getting copies of printed pages back from the publisher. Makes it all seem a bit more real. This is a second batch of samples (first batch had some errors resulting from miscommunication, so don't count), and already I think we are converging. The colours are closer to what I intended, although you can't of course see that since the camera I used to take the snap, and the screen you are using, have made changes to them (I'll spare you lots of mumblings about CMYK vs RGB and monitor profiles and various PDF formats and conventions and so forth) and this is all done with pages I redid to fit the new page sizes I talked about in the last post on the book project. Our next step is to work on more paper choices, keeping in mind that this will adjust colours a bit again, and so forth - and we must also keep an eye on things like projected production costs and so forth. Some samples have been mailed to me and I shall get them next week. Looking forward to seeing them. For those who care, the pages you can see have a mixture of digital colours (most of it in fact) and analogue colours (Derwent watercolour pencils, applied [...] Click to continue reading this post

### Tommaso Dorigo - Scientificblogging

Interna
I apologize for my lack of posting in the past few days. I will resume it very soon... So as a means of apology I thought I would explain what I have been up to this week.

### Emily Lakdawalla - The Planetary Society Blog

Help map Mars' south polar region!
The science team of NASA's Mars Reconnaissance Orbiter wants your help in mapping out the weird and wonderful features of Mars' south polar region!

### Peter Coles - In the Dark

Falsifiability versus Testability in Cosmology

A paper came out a few weeks ago on the arXiv that’s ruffled a few feathers here and there so I thought I would make a few inflammatory comments about it on this blog. The article concerned, by Gubitosi et al., has the abstract:

I have to be a little careful as one of the authors is a good friend of mine. Also there’s already been a critique of some of the claims in this paper here. For the record, I agree with the critique and disagree with the original paper: the claim below cannot be justified.

…we illustrate how unfalsifiable models and paradigms are always favoured by the Bayes factor.

If I get a bit of time I’ll write a more technical post explaining why I think that. However, for the purposes of this post I want to take issue with a more fundamental problem I have with the philosophy of this paper, namely the way it adopts “falsifiability” as a required characteristic for a theory to be scientific. The adoption of this criterion can be traced back to the influence of Karl Popper and particularly his insistence that science is deductive rather than inductive. Part of Popper’s claim is just a semantic confusion. It is necessary at some point to deduce what the measurable consequences of a theory might be before one does any experiments, but that doesn’t mean the whole process of science is deductive. As a non-deductivist I’ll frame my argument in the language of Bayesian (inductive) inference.

Popper rejects the basic application of inductive reasoning in updating probabilities in the light of measured data; he asserts that no theory ever becomes more probable when evidence is found in its favour. Every scientific theory begins infinitely improbable, and is doomed to remain so. There is a grain of truth in this, or there can be if the space of possibilities is infinite. Standard methods for assigning priors often spread the unit total probability over an infinite space, leading to a prior probability which is formally zero. This is the problem of improper priors. But this is not a killer blow to Bayesianism. Even if the prior is not strictly normalizable, the posterior probability can be. In any case, given sufficient relevant data the cycle of experiment-measurement-update of probability assignment usually soon leaves the prior far behind. Data usually count in the end.
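As a toy illustration of data leaving the prior behind, consider a conjugate normal model with a nearly flat prior (standing in for the improper limit); the posterior is proper from the first observation, and its spread shrinks as data accumulate. This is my own sketch with invented numbers, not tied to any particular paper:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma = 2.0, 1.0      # data-generating mean and known noise scale

# A nearly flat conjugate prior N(m0, s0^2); as s0 -> infinity this becomes
# the improper uniform prior, yet the posterior below remains proper.
m0, s0 = 0.0, 1e6

post_sds = []
for n in (1, 10, 1000):
    x = rng.normal(true_mu, sigma, n)
    post_prec = 1.0 / s0**2 + n / sigma**2                     # posterior precision
    post_mean = (m0 / s0**2 + x.sum() / sigma**2) / post_prec  # shrinks toward x-bar
    post_sd = post_prec ** -0.5
    post_sds.append(post_sd)
    print(n, post_mean, post_sd)
```

Even with a prior that is formally almost uniform over the whole real line, the posterior standard deviation falls like 1/√n and the posterior mean homes in on the sample mean: the data count in the end.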

I believe that deductivism fails to describe how science actually works in practice and is actually a dangerous road to start out on. It is indeed a very short ride, philosophically speaking, from deductivism (as espoused by, e.g., David Hume) to irrationalism (as espoused by, e.g., Paul Feyerabend).

The idea by which Popper is best known is the dogma of falsification. According to this doctrine, a hypothesis is only said to be scientific if it is capable of being proved false. In real science certain “falsehood” and certain “truth” are almost never achieved. The claimed detection of primordial B-mode polarization in the cosmic microwave background by BICEP2 was hailed by some as “proof” of cosmic inflation, which it wouldn’t have been even if it hadn’t subsequently been shown not to be a cosmological signal at all. What we now know to be the failure of BICEP2 to detect primordial B-mode polarization doesn’t disprove inflation either.

Theories are simply more probable or less probable than the alternatives available on the market at a given time. The idea that experimental scientists struggle through their entire life simply to prove theorists wrong is a very strange one, although I definitely know some experimentalists who chase theories like lions chase gazelles. The disparaging implication that scientists live only to prove themselves wrong comes from concentrating exclusively on the possibility that a theory might be found to be less probable than a challenger. In fact, evidence neither confirms nor discounts a theory absolutely; it either makes the theory more probable (supports it) or makes it less probable (undermines it). For a theory to be scientific it must be capable of having its probability influenced in this way, i.e. amenable to being altered by incoming data, i.e. evidence. The right criterion for a scientific theory is therefore not falsifiability but testability. It follows straightforwardly from Bayes’ theorem that a testable theory will not predict all things with equal facility. Scientific theories generally do have untestable components. Any theory has its interpretation, which is the untestable penumbra that we need to supply to make it comprehensible to us. But whatever can be tested can be regarded as scientific.
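A two-line illustration of the point, with invented numbers: a theory whose likelihood discriminates among datasets can gain (or lose) probability through Bayes' theorem, whereas one that predicts all outcomes with equal facility has a flat likelihood and so never moves.

```python
import numpy as np

# Two rival "theories" for a run of binary observations.
# T makes a definite prediction (each outcome is 1 with probability 0.8);
# U stands in for a theory that predicts every outcome with equal facility,
# so its likelihood is the same whatever the data say.
outcomes = np.array([0, 1, 1, 1, 0, 1, 1, 1, 1, 1])  # toy data

def likelihood(p, data):
    """Probability of the observed binary sequence under success probability p."""
    k = data.sum()
    return p**k * (1 - p)**(len(data) - k)

prior_odds = 1.0                       # start agnostic between T and U
bayes_factor = likelihood(0.8, outcomes) / likelihood(0.5, outcomes)
posterior_odds = bayes_factor * prior_odds

print(bayes_factor)   # about 6.87: these data shift probability toward T
```

Had the data come out against T's prediction, the same update would have lowered its odds; U's flat likelihood can never produce a Bayes factor different from 1 against another flat rival, which is exactly what makes it untestable.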

So I think the Gubitosi et al. paper starts on the wrong foot by focussing exclusively on “falsifiability”. The issue of whether a theory is testable is complicated in the context of inflation because prior probabilities for most observables are difficult to determine with any confidence because we know next to nothing about either (a) the conditions prevailing in the early Universe prior to the onset of inflation or (b) how properly to define a measure on the space of inflationary models. Even restricting consideration to the simplest models with a single scalar field, initial data are required for the scalar field (and its time derivative) and there is also a potential whose functional form is not known. It is therefore a far from trivial task to assign meaningful prior probabilities on inflationary models and thus extremely difficult to determine the relative probabilities of observables and how these probabilities may or may not be influenced by interactions with data. Moreover, the Bayesian approach involves comparing probabilities of competing theories, so we also have the issue of what to compare inflation with…

The question of whether cosmic inflation (whether in general concept or in the form of a specific model) is testable or not seems to me to boil down to whether it predicts all possible values of relevant observables with equal ease. A theory might be testable in principle, but not testable at a given time if the available technology at that time is not able to make measurements that can distinguish between that theory and another. Most theories have to wait some time before experiments can be designed and built to test them. On the other hand a theory might be untestable even in principle, if it is constructed in such a way that its probability can’t be changed at all by any amount of experimental data. As long as a theory is testable in principle, however, it has the right to be called scientific. If the current available evidence can’t test it we need to do better experiments. In other words, there’s a problem with the evidence, not the theory.

Gubitosi et al. are correct in identifying the important distinction between the inflationary paradigm, which encompasses a large set of specific models each formulated in a different way, and an individual member of that set. I also agree – in contrast to many of my colleagues – that it is actually difficult to argue that the inflationary paradigm is currently ~~falsifiable~~ testable. But that doesn’t necessarily mean that it isn’t scientific. A theory doesn’t have to have been tested in order to be testable.

### astrobites - astro-ph reader's digest

Nature’s Starships Vol. II – A Hitchhiker’s Guide to the Bloody Cold Beginnings

Title: Nature’s Starships II: Simulating the Synthesis of Amino Acids in Meteorite Parent Bodies
Authors: Alyssa K. Cobb, Ralph E. Pudritz, Ben K. D. Pearce.
First author’s institution: Origins Institute, McMaster University, Hamilton, Canada.
Status: Accepted for publication in Astrophysical Journal.

### Life, the Universe and … Amino Acids!

In Nature’s Starships Vol. I we learned that amino acids, the fundamental building blocks of proteins, might have formed in the interiors of planetesimals, asteroid-sized rocks in the early solar system that would later grow to form planets. Why would that matter to us? Because, according to the ‘late veneer’ hypothesis, all of Earth’s surface water (and possibly organics) was added by impactors long after the formation of the bulk Earth – in other words our planet was polluted by miniature versions of itself, harbouring the essential ingredients for life’s boiling pot!
The authors of Nature’s Starships I found an increased abundance of amino acids only in a very specific subclass of the most primitive meteorites and interpreted this as evidence for specific layers within planetesimals, where temperature and pressure are optimal for the needed chemical reactions to happen.

### A Deep Thought about … chemistry! (again)

To test this hypothesis Cobb et al. ran computer models of amino acid synthesis with temperature and pressure conditions they expect in planetesimals. Figure 1 shows the surprising results of these considerations.

Fig. 1: The influence of different temperatures on the total yield of amino acids from chemical reactions within the planetesimal interior. Each data point (and the corresponding line) represents the yield for a different type of planetesimal. An increase in temperature corresponds to a different layer within the planetesimal. The blue and red dashed lines correspond to the minimum and maximum values of data measured in meteorites. The total yield of amino acids is nearly unaffected by changing the temperature (i.e., the total amino acid abundance does not vary much with temperature). This means that the specific location within the planetesimal does not matter for amino acid production! Source: Cobb et al. (2015)

Therefore, it seems that the layer in which the amino acids formed is not crucially important for their yield. As explained in Nature’s Starships I, the reaction rate of amino acids seems to be related to water content: far fewer amino acids have been found in meteorites associated with very dry (or heated) environments than in those associated with watery environments. Thus, it seems that we need more water to achieve an increase in reaction potential and to change the picture above.

### So Long, and Thanks for All the … Water!

Artist's concept of an ice line transition within a protoplanetary disk, in this case for the TW Hydrae system. Credit: Bill Saxton and Alexandra Angelich, NRAO/AUI/NSF

From planet formation models it is fairly well known that during the early era of the solar system the water snow line, the distance from the Sun beyond which water is present as ice because of the reduced sunlight, was located at roughly 2-2.5 astronomical units (around today's asteroid belt). This means planetesimals which were present around this location during the planet formation epoch featured huge gradients in water abundance – planetesimals within the snow line contained less water than their outer companions. This transition in the chemical state is artistically shown in Figure 2.

This is very peculiar, since most meteorites that fell on Earth are thought to originate from the asteroid belt, where the snow line used to be. Therefore, instead of scaling the total amino acid yield with different temperatures, the authors vary the total water content within the planetesimals, which is shown in Figure 2.

Fig. 2: The total abundance of amino acids in parts per billion according to the model of Cobb et al. (black line). The colored areas correspond to the spread in the available meteorite data, and the dots represent single measurements of specific amino acid abundances within the most important samples for this study. The left plot shows the data points not corrected for weathering effects (when meteorites lie on the surface for a long time, they incorporate water into their bulk material which was not present in space); the right-hand side accounts for that effect. Source: Cobb et al. (2015)

So, when including the effects of weathering on the surface, it seems that the expected amino acid abundances are consistent with observations and fit well with the idea that water content is the dominant factor for amino acid production. As is often the case in today's literature, this finding underlines the deep connections between different fields and the need for interdisciplinary studies, in this case to achieve a better understanding of a possible formation pathway of organic molecules. What do we get from this study with regards to astrobiology? The formation of amino acids within planetesimals is not an isolated process. You can't look at a planetesimal decoupled from its surroundings; instead, its physical and chemical properties are directly inherited from the astrochemistry of the protoplanetary disk.

### Sean Carroll - Preposterous Universe

Guest Post: Aidan Chatwin-Davies on Recovering One Qubit from a Black Hole

The question of how information escapes from evaporating black holes has puzzled physicists for almost forty years now, and while we’ve learned a lot we still don’t seem close to an answer. Increasingly, people who care about such things have been taking more seriously the intricacies of quantum information theory, and learning how to apply that general formalism to the specific issues of black hole information.

Now two students and I have offered a small contribution to this effort. Aidan Chatwin-Davies is a grad student here at Caltech, while Adam Jermyn was an undergraduate who has now gone on to do graduate work at Cambridge. Aidan came up with a simple method for getting out one “quantum bit” (qubit) of information from a black hole, using a strategy similar to “quantum teleportation.” Here’s our paper that just appeared on the arXiv:

How to Recover a Qubit That Has Fallen Into a Black Hole
Aidan Chatwin-Davies, Adam S. Jermyn, Sean M. Carroll

We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall unitary black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods only require the ability to perform measurements from outside the event horizon and to collect the Hawking radiation emitted after the state of interest is dropped into the black hole.

It’s a very specific — i.e. not very general — method: you have to have done measurements on the black hole ahead of time, and then drop in one qubit, and we show how to get it back out. Sadly it doesn’t work for two qubits (or more), so there’s no obvious way to generalize the procedure. But maybe the imagination of some clever person will be inspired by this particular thought experiment to come up with a way to get out two qubits, and we’ll be off.

I’m happy to host this guest post by Aidan, explaining the general method behind our madness.

If you were to ask someone on the bus which of Stephen Hawking’s contributions to physics he or she thought was most notable, the answer that you would almost certainly get is his prediction that a black hole should glow as if it were an object with some temperature. This glow is made up of thermal radiation which, unsurprisingly, we call Hawking radiation. As the black hole radiates, its mass slowly decreases and it shrinks. So, if you waited long enough and were careful not to enlarge the black hole by throwing stuff back in, then eventually it would completely evaporate away, leaving behind nothing but a bunch of Hawking radiation.

At first glance, this phenomenon of black hole evaporation challenges a central notion in quantum theory, which is that it should not be possible to destroy information. Suppose, for example, that you were to toss a book, or a handful of atoms in a particular quantum state, into the black hole. As the black hole evaporates into a collection of thermal Hawking particles, what happens to the information that was contained in that book or in the state of (what were formerly) your atoms? One possibility is that the information actually is destroyed, but then we would have to contend with some pretty ugly foundational consequences for quantum theory. Instead, it could be that the information is preserved in the state of the leftover Hawking radiation, albeit highly scrambled and difficult to distinguish from a thermal state. Besides being very pleasing on philosophical grounds, the latter possibility is also supported by evidence from the AdS/CFT correspondence. Moreover, if the process of converting a black hole to Hawking radiation conserves information, then a stunning result of Hayden and Preskill says that for sufficiently old black holes, any information that you toss in comes back out almost as fast as possible!

Even so, exactly how information leaks out of a black hole and how one would go about converting a bunch of Hawking radiation to a useful state is quite mysterious. On that note, what we did in a recent piece of work was to propose a protocol whereby, under very modest and special circumstances, you can toss one qubit (a single unit of quantum information) into a black hole and then recover its state, and hence the information that it carried.

More precisely, the protocol describes how to recover a single qubit that is encoded in the spin angular momentum of a particle, i.e., a spin qubit. Spin is a property that any given particle possesses, just like mass or electric charge. For particles that have spin equal to 1/2 (like those that we consider in our protocol), at least classically, you can think of spin as a little arrow which points up or down and says whether the particle is spinning clockwise or counterclockwise about a line drawn through the arrow. In this classical picture, whether the arrow points up or down constitutes one classical bit of information. According to quantum mechanics, however, spin can actually exist in a superposition of being part up and part down; these proportions constitute one qubit of quantum information.

So, how does one throw a spin qubit into a black hole and get it back out again? Suppose that Alice is sitting outside of a black hole, the properties of which she is monitoring. From the outside, a black hole is characterized by only three properties: its total mass, total charge, and total spin. This latter property is essentially just a much bigger version of the spin of an individual particle and will be important for the protocol.

Next, suppose that Alice accidentally drops a spin qubit into the black hole. First, she doesn’t panic. Instead, she patiently waits and collects one particle of Hawking radiation from the black hole. Crucially, when a Hawking particle is produced by the black hole, a bizarro version of the same particle is also produced, but just behind the black hole’s horizon (boundary) so that it falls into the black hole. This bizarro ingoing particle is the same as the outgoing Hawking particle, but with opposite properties. In particular, its spin state will always be flipped relative to the outgoing Hawking particle. (The outgoing Hawking particle and the ingoing particle are entangled, for those in the know.)

The picture so far is that Alice, who is outside of the black hole, collects a single particle of Hawking radiation whilst the spin qubit that she dropped and the ingoing bizarro Hawking particle fall into the black hole. When the dropped particle and the bizarro particle fall into the black hole, their spins combine with the spin of the black hole—but remember! The bizarro particle’s spin was highly correlated with the spin of the outgoing Hawking particle. As such, the new combined total spin of the black hole becomes highly correlated with the spin of the outgoing Hawking particle, which Alice now holds. So, Alice measures the black hole’s new total spin state. Then, essentially, she can exploit the correlations between her held Hawking particle and the black hole to transfer the old spin state of the particle that she dropped into the hole to the Hawking particle that she now holds. Alice’s lost qubit is thus restored. Furthermore, Alice didn’t even need to know the precise state that her initial particle was in to begin with; the qubit is recovered regardless!

That’s the protocol in a nutshell. If the words “quantum teleportation” mean anything to you, then you can think of the protocol as a variation on the quantum teleportation protocol where the transmitting party is the black hole and measurement is performed in the total angular momentum basis instead of the Bell basis. Of course, this is far from a resolution of the information problem for black holes. However, it is certainly a neat trick which shows, in a special set of circumstances, how to “bounce” a qubit of quantum information off of a black hole.
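Since the protocol is a variation on quantum teleportation, a small simulation of the standard teleportation protocol can make the "exploit the correlations" step concrete. The sketch below is my own NumPy statevector illustration of ordinary teleportation, not the authors' actual protocol (which measures total angular momentum rather than the Bell basis): a qubit entangled with one half of a pair is recovered from the other half after a joint measurement and a conditional correction.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
P0 = np.diag([1., 0.])  # projector onto |0>
P1 = np.diag([0., 1.])  # projector onto |1>

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# The qubit that "falls in": an arbitrary superposition a|0> + b|1>.
a, b = 0.6, 0.8
psi_in = np.array([a, b])

# Qubit 0 is the lost qubit; qubits 1 and 2 play the role of the
# entangled ingoing/outgoing pair, prepared in a Bell state.
state = np.kron(psi_in, [1., 0., 0., 0.])               # |psi>|00>
state = kron3(I2, H, I2) @ state                        # H on qubit 1
state = (kron3(I2, P0, I2) + kron3(I2, P1, X)) @ state  # CNOT 1 -> 2

# The joint-measurement step: entangle qubit 0 with qubit 1, rotate,
# then read out both (here we enumerate all four possible outcomes).
state = (kron3(P0, I2, I2) + kron3(P1, X, I2)) @ state  # CNOT 0 -> 1
state = kron3(H, I2, I2) @ state                        # H on qubit 0

# Conditional correction on qubit 2, depending on the outcome (m0, m1).
corrections = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}
for (m0, m1), fix in corrections.items():
    proj = kron3(P1 if m0 else P0, P1 if m1 else P0, I2)
    branch = proj @ state
    branch = branch / np.linalg.norm(branch)
    branch = kron3(I2, I2, fix) @ branch
    # Qubits 0 and 1 are now definite, so qubit 2's state sits at
    # indices 4*m0 + 2*m1 and 4*m0 + 2*m1 + 1 of the state vector.
    psi_out = branch[4 * m0 + 2 * m1: 4 * m0 + 2 * m1 + 2]
    assert np.allclose(psi_out, psi_in)  # the qubit is recovered
print("qubit recovered in every measurement branch")
```

Note that, just as in the protocol described above, the recovery works for every measurement outcome without ever needing to know `a` and `b`.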


### Quantum Diaries

Dark matter and dark energy better watch out

Dark matter and dark energy feature prominently at the European Physical Society conference on particle physics in Vienna. Although physicists now understand the basic constituents of matter pretty well, everything one sees on Earth, in stars and in galaxies, all this matter accounts for only 5% of the total content of the Universe. Not surprising, then, that so much effort is deployed to elucidate the nature of dark matter (27% of the Universe) and dark energy (68%).

Since the Big Bang, the Universe has not only been expanding; the expansion is also accelerating. So which energy fuels this acceleration? We call it dark energy. It remains completely unknown, but the Dark Energy Survey (DES) team is determined to get some answers. To do so, they are surveying a quarter of the southern sky, mapping the location, shape and distribution of astronomical objects such as galaxy clusters (large groups of galaxies) and supernovae (exploding stars). Their goal is to record information on 300 million galaxies and 2500 supernovae.

Galaxies formed thanks to gravity, which allowed matter to cluster despite the dispersive effect of dark energy, whose accelerating expansion scatters matter apart. The DES scientists essentially study how large structures such as galaxy clusters evolved over time by looking at objects at various distances, whose light comes from different epochs in the past. With more data, they hope to better understand the dynamics of the expansion.

Dark matter is just as unknown. So far, it has manifested itself only through its gravitational effects. We can “feel” its presence but we cannot see it, since it emits no light, unlike the regular matter found in stars and supernovae. It is as if the whole Universe were full of ghosts. A dozen detectors, using different techniques, are trying to find dark matter particles.

Not easy to catch such elusive particles when no one knows how, or even whether, they interact with matter. Moreover, these particles must interact very rarely with regular matter (otherwise, they would already have been found), so the name of the game is to use massive detectors, in the hope that a nucleus from one of the detector atoms will recoil when hit by a dark matter particle, inducing a small but detectable vibration in the detector. The experiments search a range of possibilities, depending on the mass of the dark matter particles and on how often they can interact with matter.

The plot below shows how often dark matter particles could interact with a nucleus (vertical axis) as a function of their mass (horizontal axis). This spans a wide region of possibilities one must test. The various curves indicate what has been achieved so far by different experiments. All possibilities above the curves are excluded. The left part of the plot is harder to probe since the lighter the dark matter particle is, the smaller the vibration it induces.
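To make the logic of reading such an exclusion plot concrete, here is a toy sketch in Python; the curve values below are invented for illustration and are not real experimental limits:

```python
import numpy as np

# Each experiment reports, at every candidate mass, the smallest
# interaction cross section it can rule out; everything above that
# curve is excluded. These numbers are made up for illustration.
masses_gev = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
limit_cm2 = np.array([1e-38, 1e-41, 1e-43, 1e-44, 1e-43])

def is_excluded(mass_gev, cross_section_cm2):
    # Linear interpolation between tabulated points (real limits are
    # usually interpolated in log-log space; this is a toy version).
    limit = np.interp(mass_gev, masses_gev, limit_cm2)
    return bool(cross_section_cm2 > limit)

print(is_excluded(10.0, 1e-40))  # above the curve: excluded -> True
print(is_excluded(10.0, 1e-45))  # below the curve: still allowed -> False
```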

The CRESST Collaboration uses small crystals operating at extremely low temperature. They are sensitive to the temperature rise that would occur if a dark matter particle deposited the smallest amount of energy. This allowed them to succeed where tens of previous experiments had failed: looking for very light particles. This is shown on the plot by the solid red curve in the upper left corner. All possibilities above are now excluded. So far, this area was only accessible to the Large Hadron Collider (LHC) experiments (results not shown here) but only when making various theoretical hypotheses. CRESST has just opened a new world of possibilities and they will sweep nearly the entire area in the coming years. Light dark matter particles better watch out.

Pauline Gagnon

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification. You can also visit my website.

### Quantum Diaries

Fermilab magnet team helps bring brighter beams to APS Upgrade Project at Argonne

Argonne National Laboratory was attracted to the expertise of this Fermilab magnet team. The team recently developed a pre-prototype magnet for Argonne’s APS Upgrade Project. Photo: Doug Howard, Fermilab

A magnet two meters long sits in the Experiment Assembly Area of the Advanced Photon Source at Argonne National Laboratory. The magnet, built by Fermilab’s Technical Division, is fire engine red and has on its back a copper coil that doesn’t quite reach from one end to the other. An opening on one end of the magnet’s steel casing gives it the appearance of a rectangular alligator with its mouth slightly ajar.

“It’s a very pretty magnet,” said Argonne’s Glenn Decker, associate project manager for the accelerator. “It’s simple and it’s easy to understand conceptually. It’s been a very big first step in the APS Upgrade.”

The APS is a synchrotron light source that accelerates electrons to nearly the speed of light and then uses magnets to steer them around a circular storage ring the size of a major-league baseball stadium. As the electrons bend, they release energy in the form of synchrotron radiation: light that spans the energy range from visible light to X-rays. This radiation can be used for a number of applications, such as microscopy and spectroscopy.
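For a sense of scale, the "critical energy" of bending-magnet synchrotron radiation follows a standard rule of thumb. The beam energy and field below are illustrative values for an APS-like ring, not official APS parameters:

```python
# Standard rule of thumb for the critical photon energy of bending-magnet
# synchrotron radiation: E_c [keV] ~= 0.665 * E_beam^2 [GeV^2] * B [T].
def critical_energy_kev(beam_energy_gev, field_tesla):
    return 0.665 * beam_energy_gev ** 2 * field_tesla

# Illustrative numbers: a 7 GeV beam in a ~0.6 T bending field radiates
# with a critical energy of roughly 20 keV, i.e. hard X-rays.
print(critical_energy_kev(7.0, 0.6))
```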

In 2013, the federal Basic Energy Sciences Advisory Committee, which advises the Director of the Department of Energy’s Office of Science, recommended a more ambitious approach to upgrades of U.S. light sources. The APS Upgrade will create a world-leading facility by using new state-of-the-art magnets to tighten the focus of the APS electron beam and dramatically increase the brightness of its X-rays, expanding its experimental capabilities by orders of magnitude.

Instead of the APS’ present magnet configuration, which uses two bending magnets in each of 40 identical sectors, the upgraded ring will deploy seven bending magnets per sector to produce a brighter, highly focused beam.

Because the APS Upgrade requires hundreds of magnets — many of them quite unusual — Argonne called on experts at Fermilab and Brookhaven National Laboratory for assistance in magnet design and development.

Fermilab took on the task of designing, building and testing a pre-prototype for a groundbreaking M1 magnet — the first in the string of bending magnets that makes up the new APS arrangement.

“At Fermilab we have the whole cycle,” said Fermilab’s Vladimir Kashikhin, who is in charge of magnet designs and simulations. “Because of our experience in magnet technology and the people who can simulate and fabricate magnets and make magnetic measurements, we are capable of making any type of accelerator magnet.”

The M1’s magnetic field is strong at one end and tapers off at the other end, reducing the impact of processes that increase the beam size, producing a brighter beam. Because of this change in field, this magnet is different from anything Fermilab had ever built. But by May, Fermilab’s team had completed and tested the magnet and shipped it to Argonne, where it charged triumphantly through a series of tests.

“The magnetic field shape they were asking for was a little bit challenging,” said Dave Harding, the principal investigator leading the project at Fermilab. “Getting the shape of the steel to produce that distribution and magnetic field required some tinkering. But we did it.”

Although this pre-prototype magnet is unlikely to be installed in the complete storage ring, scientists working in this collaboration view the M1 development as an opportunity to learn about technical difficulties, validate their designs and strengthen their skills.

“Getting our hands on some real hardware injected a dose of reality into our process,” Decker said. “We’re going to take the lessons we learned from this M1 magnet and fold them into the next iteration of the magnet. We’re looking forward to a continuing collaboration with Fermilab’s Technical Division on magnetic measurements and refinement of our magnet designs, working toward the next world-leading hard X-ray synchrotron light source.”

Ali Sundermier

### Emily Lakdawalla - The Planetary Society Blog

Jupiter's changing face, 2009-2015
Damian Peach's photo-documentation of Jupiter helps us monitor the giant planet's ever-changing patterns of belts, zones, storms, and barges, during a time when no orbiting missions are there to take pictures.

### Ben Still - Neutrino Blog

Pentaquark Series 4: Pentaquark Prediction and Search
This is the fourth in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. Post 3 here. Today we discuss the prediction of pentaquarks and first tentative sightings.

 The pentaquark might be a whole new type of particle containing 4 quarks and 1 antiquark within itself.
The particle announced by LHCb last week would have to be composed of four quarks and one antiquark in some currently unknown arrangement. All five quarks could be contained within a single particle, a true pentaquark, or they could form a bound pair of one Baryon and one Meson: a Baryon-Meson molecule. From what we have discussed so far about the strong force, there should be nothing stopping us from creating pentaquarks or Baryon-Meson molecules. A white strong-charge Baryon plus a white strong-charge Meson would simply result in a white strong-charge bound molecule. And with 4 quarks and 1 antiquark we can also create a white-charge pentaquark in a number of different ways:

red + green + blue + red + anti-red = white
red + green + blue + green + anti-green = white
red + green + blue + blue + anti-blue = white

 Or the pentaquark might be a bound state of a Baryon and Meson.
In 1997 Dmitri Diakonov, Victor Petrov, and Maxim Polyakov [1] employed methods similar to Gell-Mann's Eightfold Way, using the symmetries of the quarks to predict not only the existence but also the expected masses of pentaquark particles. Again like Gell-Mann, they predicted a pattern in these symmetries called an Exotic Baryon anti-decuplet: exotic because these particles (or combinations thereof) are not constructed in the same way as other Baryons; baryon because they share some properties with Baryons (there is at least one baryon's worth of quarks making up these particles); anti-decuplet because there were 10 particles, as in Gell-Mann's decuplet, but with the pattern pointing in the opposite direction. I have drawn one representation of this anti-decuplet below using my LEGO analogy*. This is just one of a number of patterns that can be, and have been, drawn from quark symmetries.

 The Exotic Baryon Anti-decuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the states as Baryon-Meson molecules.
 The Exotic Baryon Anti-decuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the pentaquark states as 4 quark, 1 antiquark bound states.

With the prediction out there, it was now the job of the experimentalists to smash particles into one another and sift through the debris to see if any of these particles existed. They chose to focus their searches on the particles at the extreme points of the anti-decuplet triangle, because the combinations of lighter particles produced when these pentaquarks decay can only be explained by such exotic states. Let us take the Θ+ as an example.

 Detection of particles used to reconstruct the pentaquark state. Borrowed from here.
The Θ+ can be identified experimentally by the fact that it is uniquely strange. The Θ+ contains an anti-strange quark, while an ordinary three-quark baryon can only contain a strange quark, because no three-quark baryon contains an antiquark. We can say that the Θ+ has a strangeness opposite to that of all traditional Baryons, and this is something that can be identified in particle detectors. The Θ+ is similar to Baryons in that it carries the same quantity known as baryon number, related to the colour charge of the quarks and antiquarks. Both pentaquarks and three-quark Baryons have a baryon number of 1: each quark has baryon number +1/3 and each antiquark has baryon number -1/3. Experiments have shown that strangeness and baryon number must be conserved when a particle decays to other, lighter particles. By tracking strangeness and baryon number, experiments are able to pick out groups of particles that could only have come from the decay of a pentaquark. As we will discuss in future posts, this shows up in experimental data as an excess of events at a single particle mass, sitting on top of a broad background of other possible sources.
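Here is a toy sketch of that bookkeeping in Python; the particle list and quantum numbers are simplified for illustration rather than a full decay table:

```python
# Simplified quantum numbers (baryon number, strangeness) per particle.
# The Theta+ gets strangeness +1 from its anti-strange quark.
NUMBERS = {
    "Theta+": (1, +1),
    "neutron": (1, 0),
    "proton": (1, 0),
    "K+": (0, +1),   # meson containing an anti-strange quark
    "K-": (0, -1),   # meson containing a strange quark
}

def conserved(parent, products):
    totals = tuple(sum(NUMBERS[p][i] for p in products) for i in range(2))
    return NUMBERS[parent] == totals

# Theta+ -> neutron + K+ balances both numbers, as a pentaquark decay must:
print(conserved("Theta+", ["neutron", "K+"]))   # True
# An ordinary baryon plus a strange (not anti-strange) meson does not:
print(conserved("Theta+", ["proton", "K-"]))    # False
```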

In 2003 the LEPS experiment in Japan published a paper [2] suggesting evidence that a particle with a mass the same as the Θ+ (within errors) had been seen in its detectors. Over the next year this claim was followed by some nine other experiments, all saying that they too had seen an excess in their data around the predicted Θ+ mass. The evidence for this pentaquark seemed compelling, but there were some problems and questions surrounding the data. In some cases the number of background events was underestimated, which exaggerated any excesses there might have been. Some experiments chose specific techniques to enhance data around the predicted mass of the Θ+. And when considering the results of all ten experiments, the masses they determined, although similar, varied far more than one would expect from the theory. It was obvious that further experiments were needed, with much more data, if the existence of the Θ+ were to be confirmed or refuted.

*Notice I have not combined the quarks into a pentaquark particle but instead left them next to one another as a Baryon-Meson molecule.

Next time: The search continues - the rollercoaster years leading up to the 2015 LHCb discovery.

### arXiv blog

Deep Neural Nets Can Now Recognize Your Face in Thermal Images

Matching an infrared image of a face to its visible light counterpart is a difficult task, but one that deep neural networks are now coming to grips with.

One problem with infrared surveillance videos or infrared CCTV images is that it is hard to recognize the people in them. Faces look different in the infrared and matching these images to their normal appearance is a significant unsolved challenge.

### Christian P. Robert - xi'an's og

astronomical evidence

As I have a huge arXiv backlog and an even higher non-arXiv backlog, I cannot be certain I will find time to comment on those three recent and quite exciting postings connecting ABC with astro- and cosmo-statistics [thanks to Ewan for pointing out those to me!]:


## July 23, 2015

### Emily Lakdawalla - The Planetary Society Blog

A New Way to Prepare Samples of Mars for Return to the Earth
Mars 2020, NASA’s next and yet-to-be-named Mars rover, will be the first mission to collect and prepare samples of the martian surface for return to Earth. The rover's engineering team has proposed a new sampling caching strategy that differs from previous concepts in some interesting ways.

### Symmetrybreaking - Fermilab/SLAC

A new first for T2K

The Japan-based neutrino experiment has seen its first three candidate electron antineutrinos.

Scientists on the T2K neutrino experiment in Japan announced today that they have spotted their first possible electron antineutrinos.

When the T2K experiment first began taking data in January 2010, it studied a beam of neutrinos traveling 295 kilometers from the J-PARC facility in Tokai, on the east coast, to the Super-Kamiokande detector in Kamioka in western Japan. Neutrinos rarely interact with matter, so they can stream straight through the earth from source to detector.

From May 2014 to June 2015, scientists used a different beamline configuration to produce predominantly the antimatter partners of neutrinos, antineutrinos. After scientists eliminated signals that could have come from other particles, three candidate electron antineutrino events remained.

T2K scientists hope to determine if there is a difference in the behavior of neutrinos and antineutrinos.

“That is the holy grail of neutrino physics,” says Chang Kee Jung of State University of New York at Stony Brook, who until recently served as international co-spokesperson for the experiment.

If scientists caught neutrinos and their antiparticles acting differently, it could help explain how matter came to dominate over antimatter after the big bang. The big bang should have produced equal amounts of each, which would have annihilated one another completely, leaving nothing to form our universe. And yet, here we are; scientists are looking for a way to explain that.

“In the current paradigm of particle physics, this is the best bet,” Jung says.

Scientists have previously seen differences in the ways that other matter and antimatter particles behave, but the differences have never been enough to explain our universe. Whether neutrinos and antineutrinos act differently is still an open question.

Neutrinos come in three types: electron neutrinos, muon neutrinos and tau neutrinos. As they travel, they morph from one type to another. T2K scientists want to know if there’s a difference between the oscillations of muon neutrinos and muon antineutrinos. A possible upgrade to the Super-Kamiokande detector could help with future data-taking.
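The morphing mentioned above is usually quantified as an oscillation probability, which in the standard two-flavour approximation takes a simple closed form. The parameter values below are illustrative, merely in the rough range relevant for a T2K-like baseline, not the experiment's fitted values:

```python
import math

def oscillation_probability(theta, dm2_ev2, length_km, energy_gev):
    """Two-flavour P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2

# Illustrative values: a 295 km baseline (J-PARC to Super-Kamiokande),
# a ~0.6 GeV beam, near-maximal mixing, and dm2 ~ 2.5e-3 eV^2.
p = oscillation_probability(theta=0.72, dm2_ev2=2.5e-3, length_km=295, energy_gev=0.6)
print(round(p, 3))
```

The baseline and beam energy of such an experiment are chosen so that this probability sits near its first maximum, which is why T2K's 295 km / sub-GeV combination is no accident.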

One other currently operating experiment can look for this matter-antimatter difference: the NOvA experiment, which studies a beam that originates at Fermilab near Chicago with a detector near the Canadian border in Minnesota.

“This result shows the principle of the experiment is going to work,” says Indiana University physicist Mark Messier, co-spokesperson for the NOvA experiment. “With more data, we will be on the path to answering the big questions.”

It might take T2K and NOvA data combined to get scientists closer to the answer, Jung says, and it will likely take until the construction of the even larger DUNE neutrino experiment in South Dakota to get a final verdict.


### Ben Still - Neutrino Blog

Pentaquark Series 3: Antiquarks and Anti-colour
This is the third in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. Post 2 here. Today we discuss particles that can be made from less than three quarks.
 Antiparticles have opposite properties like electric charge.

## Antiquarks and Anti-colour

In the last post I mentioned that particles made from quarks must be strong-charge neutral. This can be achieved if each quark is colour charged with a primary colour of light (red, green, or blue) so that the overall colour charge of the particle is white. There is another way to build particles with a neutral, white, overall strong charge, but for this we must talk about antiparticles. The three generations of fundamental particle also have mirror versions of themselves: the antiparticles. When you look into a mirror, left becomes right but you still look the same size. A similar thing is true in the particle world: mirror antiparticle versions of particles have the same mass, but they see the world in opposite ways, in how they feel and interact through the forces of nature. If an electron has a negative electric charge, then its antimatter version, the positron, has a positive electric charge. The anti-electron (positron) was first seen in experiment in 1932 (the same year the neutron was discovered), and since then it has been confirmed that antiparticles do indeed exist for all three generations of particle.

Antiquarks, the antimatter versions of the quarks, also have their electric charges mirrored from positive to negative. As antiquarks also feel the strong force, they must have their strong colour charges mirrored too. But what is an anti-colour? Let us think about the colours produced when mixing the primary colours of light. If we shine white light through a prism, refraction splits it into a rainbow. Looking at the rainbow spectrum (diagram below), we see that directly in between the primary colours blue and green there is the colour cyan. It turns out that if we mix pure blue and pure green light we see cyan as a result. As it is made up from two primary colours (green and blue), cyan is said to be a secondary colour. In between green and red in the rainbow is another secondary colour: yellow. We perceive the colour yellow from a mixture of green and red light. What about a third secondary colour?

 Rainbow spectrum of white light.

 Mixing the three primary colours of light to make the secondary colours and white.
The only mixture of primary colours not yet mentioned is red and blue; but wait - the colour in the middle of blue (at one end of the spectrum) and red (at the other end) is green. As I have already mentioned, green is a primary colour, so it can't be a secondary as well. The third and final secondary colour is not in fact a true rainbow colour at all, but one constructed by our mind. If we see blue and red light mixed we do not end up at green; instead we perceive the colour magenta. If you were to shine magenta light through a prism to split it into its component rainbow colours, you would see only the blue and red parts of the rainbow. In this magenta 'rainbow' the middle green part would be entirely missing (see the spectra at the bottom). In this sense magenta is the anti-green - everything that green is not. To demonstrate this, look at the optical illusion below (gif "borrowed" from Steve Mould) and stare at the centre cross. Do you see a green circle appearing? Now look away from the cross: a green circle is not present at all. What is in fact happening is that a magenta circle is missing from the pattern, not that a green one is appearing. Your mind is putting green where there is a lack of magenta!

 Magenta: the anti-green. Image "borrowed" from Steve Mould

So magenta is anti-green. It turns out that all three secondary colours are in fact the anti-colours we are looking for to serve as the strong charges of our antiquarks. Cyan, if split by a prism, contains no red, only green and blue, so it is anti-red. Yellow contains no blue in its spectrum, just red and green, so it is anti-blue. We can then say that the opposite, antiparticle versions of the red, green, and blue strong charges of quarks are cyan, magenta, and yellow.

 The whole set of Quarks and Antiquarks that are known to exist; they are one half of the building blocks that make up all particles in our visible Universe.

Now what happens if we combine a quark with an antiquark? Magenta is made from red and blue; add green to it and you have white light. Yellow is made from red and green; add blue and you get white light. Cyan is made from blue and green; add red to it and you get white light. So to create white, strong-charge-neutral particles from quarks and antiquarks, you need only one quark and one antiquark. A green quark and a magenta antiquark, a red quark and a cyan antiquark, or a blue quark and a yellow antiquark would all make valid particles. These quark-antiquark combinations are a group of particles called Mesons.

Just like the Baryons, there is a pattern, theorised by Gell-Mann in his Eightfold Way, for the possible Meson particles that can be made from up, down, and strange quarks: the Meson Octet (below). Mesons do not survive very long, because particles and antiparticles are not very stable around one another. Generally, when a particle meets its own antiparticle they annihilate one another to produce pure energy. Mesons, being built from a quark and an antiquark, take the first opportunity available to turn into either pure energy or a number of lighter particles. The middle row of the Meson Octet contains particles called pions (π), which play a role in keeping protons and neutrons together in the nucleus and also in the production of neutrino particle beams.

 The Meson Octet shows all possible Mesons that can be constructed with up, down, strange, anti-up, anti-down, and anti-strange quarks.

 The refracted spectrum, or 'rainbow', of the secondary colours of light.
 The refracted spectrum, or 'rainbow', of the primary colours of light.

### Ben Still - Neutrino Blog

Pentaquark Series 2: Rule of Three...
This is the second in a series of posts I am releasing over the next two weeks, aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb (previous post here). Today we discuss why quarks like to come in threes.

 The two charges of the electromagnetic force and the three charges of the strong force.

## Rule of three …

Protons, neutrons, and other particles that are made up of 3 quarks are called Baryons. But why do they all have 3 quarks? Why not 4 or 6 or 10? It all comes down to the way the strong force, responsible for binding the quarks together, works. The electromagnetic force has two possible charges, which we label positive electric charge (like protons) and negative electric charge (like electrons). These different charges attract, which is the reason electrons remain orbiting the proton-rich nucleus of an atom. The strong force, it seems, has not two but three possible charges! As there is no clear way to describe these in terms of positive and negative numbers, another analogy had to be found. The best way to think of strong charge is as colours of light.

 Overlapping light
**Disclaimer** Before I start talking of colours of light I want to clarify that I am not talking of the colours and mixing that you may have come across when using paints or other pigments in art. Colours of light add to each other when mixed to create new colours. Colours of paint and other pigments mix subtractively: each pigment absorbs some colours and changes the light that is reflected from it.

The three primary colours of light we see are red, green, and blue. The reason we have decided upon these colours is a selfish biological one; our eyes have evolved to be sensitive in particular to these three colours individually. When these three colours are combined, added together, they form what we perceive as white light. If we assigned the three primary colours of light to the three possible strong charges we could say that a quark can have a strong charge of red, green, or blue.

 It doesn't matter which of the quarks has which strong charge, just that there is one of each primary colour.
An atom is electrically neutral because it has a balance of positively charged protons in the nucleus and negatively charged electrons surrounding it; a helium nucleus contains two protons and has two electrons surrounding it, which means the electric charge is +2 − 2 = 0. In the same vein, a proton has to be strong-force neutral: it must have a balance of the three strong charges, being composed of one green-charged quark, one red-charged quark, and one blue-charged quark. Which of the two up quarks or the one down quark carries each colour doesn't matter - the fact is just that we need one of each to make a stable proton.

We can then say that the stable proton is white, as green plus blue plus red light equals white. The same rule applies to all other particles made in a similar way, the group of particles known as Baryons. Almost any combination of three quarks can create a Baryon, as long as the Baryon is white in strong charge. Remember, I am in no way saying that quarks have colour in the traditional sense, because we cannot see quarks in the traditional sense - assigning them a colour is an analogy that fits the way in which the strong force behaves. Below are diagrams showing Murray Gell-Mann's mathematical idea for explaining the experimental data of the time, called the Eightfold Way. These two diagrams show all the ways you can create Baryons from up, down, and strange quark building blocks. The particle made of three strange quarks at the very bottom of the second diagram (the Baryon Decuplet) is the Ω particle that Gell-Mann predicted to exist, and its discovery helped win him the Nobel Prize in 1969.
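The colour bookkeeping above is easy to play with in code. Below is a toy sketch (my own illustration, not real QCD mathematics) that represents the three strong charges as RGB triples, builds anti-colours as their complements, and checks whether a combination comes out white:

```python
# Toy model of strong-charge "colour" accounting (an analogy, not real QCD):
# each strong charge is an RGB triple, anti-colours are complements, and a
# combination is "white" (colour-neutral) when every channel carries the
# same total charge.

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def anti(colour):
    """Complement of a colour: cyan = anti-red, magenta = anti-green, etc."""
    return tuple(1 - c for c in colour)

def is_white(*colours):
    """True if every RGB channel sums to the same value across all quarks."""
    totals = [sum(c[i] for c in colours) for i in range(3)]
    return len(set(totals)) == 1

# A Baryon: one quark of each colour.
assert is_white(RED, GREEN, BLUE)       # proton-like combination is white
# A Meson: a quark plus the matching anti-colour.
assert is_white(GREEN, anti(GREEN))     # green + magenta is white
# Two quarks alone are not colour-neutral.
assert not is_white(RED, GREEN)
```

The same `is_white` check passes for the three-quark Baryon combinations and the quark-antiquark Meson pairs, and fails for anything unbalanced, which is the toy version of why those are the groupings nature allows.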

 The Baryon Octet: the central combination of quarks manifests itself as two distinct types of particle, so there are eight in all, hence the name Octet.

 The Baryon Decuplet: shows more possible Baryons made using the up, down, and strange quarks. In the 60s the heaviest quark known was the strange quark.

### Peter Coles - In the Dark

Verity

Something rather different from my usual poetry postings. This poem was written in memory of the celebrated cricketer Hedley Verity, who was wounded in action at Catania in Sicily and taken prisoner; he later died of his wounds in a Prisoner-of-War camp at the age of 38. It was a tragic end to a life that had given so much to the world of cricket.

The following is a brief account of his playing career taken from the website where I found the poem. You can find a longer biography here.

Verity was born in 1905 within sight of Headingley Cricket Ground. It seems strange to think that Verity was originally turned down by Yorkshire at trials in 1926, but he was eventually given a chance by the county in 1930 and, of course, became a fixture until the start of the war. He was the natural successor to that other great Yorkshire left-arm spinner, Wilfred Rhodes, whose career drew to a close in 1930 after an amazing 883 games for the county. Verity was never going to get close – Hitler saw to that – but he did turn out for Yorkshire 278 times and in that time he produced some remarkable bowling analyses.

In 1931 he took ten for 36 off 18.4 overs against Warwickshire at Leeds, but incredibly he bettered these figures the following season by taking ten for ten in 19.4 overs against Nottinghamshire, also at Headingley. They remain the county’s best bowling figures for an innings while Verity’s 17 for 91 against Essex at Leyton in 1933 remain Yorkshire’s best bowling in a match. Verity claimed nine wickets in an innings seven times for Yorkshire. He took 100 wickets in a season nine times and took 200 wickets in three consecutive seasons between 1935-37. He ended with 1,956 first-class wickets at an average of 14.9, took five wickets in an innings 164 times and ten wickets in a match 54 times. On 1 September, 1939, in the last first-class match before war was declared, he took seven for nine at Hove against Sussex.

The year after he first appeared for Yorkshire, Verity made his England debut against New Zealand at The Oval, finishing the game with four wickets. After that summer he was ignored until 1932/33, the Bodyline Series, in which he took 11 wickets, including Bradman twice. By the time his career was over, Verity had dismissed Bradman ten times, a figure matched only by Grimmett. As with his domestic career, Verity’s international performances threw up some astonishing bowling figures. He took eight for 43 and finished with match figures of 15 for 104 against Australia at Lord’s in 1934. His stamina was demonstrated during the 1938-39 tour of South Africa when he bowled 95.6 eight-ball overs in an innings at Durban, taking four for 184. By the time war arrived, Verity had taken 144 wickets at an average of 24.37.

During the war he was a captain in the Green Howards. He sustained his wounds in the battle of Catania in Sicily and died on 31 July, 1943. His grave is at Caserta Military Cemetery, some 16 miles from Naples.

Ironically, the poet, Drummond Allison, was also killed in action during World War 2.

The ruth and truth you taught have come full-circle
On that fell island all whose history lies,
Far now from Bramhall Lane and far from Scarborough
You recollect how foolish are the wise.

On this great ground more marvellous than Lord’s
– Time takes more spin than nineteen thirty four –
You face at last that vast that Bradman-shaming
Batsman whose cuts obey no natural law.

Run up again, as gravely smile as ever,
Veer without fear your left unlucky arm
In His so dark direction, but no length
However lovely can disturb the harm
That is His style, defer the winning drive
Or shake the crowd from their uproarious calm.

by Drummond Allison (1921-1943).

### arXiv blog

How Next-Generation Fabrics Will Keep You Cool in Summer Heat

Fabrics that are transparent in the infrared can radiate body heat at rates that will significantly reduce the burden on power-hungry air-conditioning systems.

### astrobites - astro-ph reader's digest

The Middle Child of Exoplanet Characterization

Authors: Bjoern Benneke and Sara Seager

First Author’s Institution: MIT

Paper Status: Accepted to ApJ, 2012

“Atmospheric retrievalists” are the middle child of exoplanet studies. The exoplanet saga usually goes as follows: A group of astronomers observe planet X for _____ hours and get fantastic data! Theorists agree that this planet could be ______. In this rendition of the story there are two groups of people doing the work: the observationalists and the theorists. Here is where the problem comes into play. There is a whole other branch of exoplanet studies that actually builds the bridge between what we observe and what we model. Sadly, there have been no astrobites to date on any of these techniques. Therefore, in today’s astrobite I would like to take some time to acknowledge the work that has been done in this field, through a relatively old paper by Benneke & Seager.

When I talk about exoplanet studies, I am specifically talking about characterization of different planetary atmospheres. Take for example GJ 1214 b and WASP 12b. In both these cases, observers studied them in transit to get relatively good transmission spectroscopy of their planetary atmospheres. But, the long-standing question is: what information about the planet can you get out of these transmission spectra? If you were looking at individual absorption features of certain gases, you would certainly be able to ascertain something about what gases are present in the atmosphere. However, what if you want to probe deeper than that? It’s much more useful to ask not only if you could tell if a gas was present in an atmosphere but also if you could tell how much of the gas was present. If there was methane in hypothetical Planet X, is there 40% methane or only 1% methane? It is here where the work by Benneke & Seager becomes crucial. In order to uniquely constrain these planetary atmospheres they follow this procedure:

1. Analyze all of your spectral features
2. Decide what parameters you are interested in recovering
3. Produce millions of models with those parameters
4. Decide which best fit your data
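To make the four steps concrete, here is a minimal sketch. Everything in it is a stand-in: the single-Gaussian "transmission spectrum" forward model, the noise level, and the one-parameter grid are invented for illustration, whereas a real retrieval like Benneke & Seager's uses full radiative-transfer models and many parameters:

```python
import numpy as np

# Toy version of the four-step retrieval loop: pick a parameter (a
# hypothetical gas mixing ratio), generate many model spectra, and keep the
# one that best fits the "observed" spectrum. All numbers are illustrative.

wave = np.linspace(1.0, 2.0, 200)          # wavelength grid (micron)

def forward_model(mixing_ratio):
    """Toy transit depth: flat baseline plus one absorption feature whose
    depth scales with the assumed mixing ratio."""
    return 0.01 + 0.002 * mixing_ratio * np.exp(-((wave - 1.4) / 0.1) ** 2)

# Step 1: "observe" a spectrum (here: a model plus photon noise).
rng = np.random.default_rng(42)
truth = 0.3                                 # the "true" mixing ratio
data = forward_model(truth) + rng.normal(0.0, 1e-5, wave.size)

# Steps 2-4: choose the parameter, generate many models, keep the best fit.
grid = np.linspace(0.0, 1.0, 1001)
chi2 = [np.sum((forward_model(m) - data) ** 2) for m in grid]
best = grid[int(np.argmin(chi2))]
print(f"best-fit mixing ratio: {best:.2f}")
```

The grid search recovers a value close to the input 0.3; the real problem is harder only because the model is expensive and the parameter space is multi-dimensional.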

When put like this, it seems natural that the retrievalist often becomes the middle child. Let’s be honest, no one is interested in spending too much time on the statistical analysis of a paper. Almost 100% of the time, you care about the observation and the results. But this four-step process is anything but simple. So it behooves us to go through it step by step and carefully understand what assumptions are going into the work.

Analysis of spectral features

Below is a model of an exoplanet transmission spectrum for a mystery planet X. In reality, our observations don’t look nearly as good as these models (yet), but this tells us what features we need to measure when deriving planet properties from transmission spectra. The first thing you’ll notice is the number of features that are needed in order to get absolute mixing ratios.

On the left is an example transmission spectrum of a planetary atmosphere. Labeled are specific features that are important for atmospheric characterization. The diagram on the right shows how these spectral features combine to give you different planetary parameters.

For reference, I would take a look at the WASP 12b observation, where they observed a single water feature from 1.1 to 1.7 micron. In that study, though, they were able to get an approximate mixing ratio for water, which completely disagrees with the figure you see above. The difference here is that Benneke & Seager are trying to build an atmospheric retrieval technique that will work on not only gaseous planets, like WASP 12b, but also super-Earth planets like GJ 1214b. The subtlety, therefore, lies in what you assume is the dominant constituent of your atmosphere. Gaseous planets like Jupiter have huge surface gravities, so they can retain massive hydrogen-helium atmospheres. Smaller planets can’t hold on to the hydrogen and helium and end up with much smaller atmospheres made up of a heavier gas. Earth, for example, ended up with mostly nitrogen and Venus ended up with mostly carbon dioxide. Therefore, if you know you are looking at a gaseous planet, you can assume it’s mostly hydrogen and helium. This is why, for WASP 12b, they were able to get an approximate water abundance even though they didn’t have access to all the features outlined in that first figure.

Knowing this information, let’s take a look at figure one again. Because you don’t see any hydrogen features and you don’t see any nitrogen features you might be inclined to say that our planet X is dominated by carbon dioxide and methane. Sadly, this would certainly lead you down the wrong path since nitrogen and hydrogen are what we call “spectrally inactive gases”. Meaning, although they don’t appear in the spectra, they might still be there and might still be making up most of the atmospheric component! This is a striking fact and it does not bode well for us in terms of trying to figure out what these planetary atmospheres are made of. This is why we need so many spectral features to get any planet information at all.

What Parameters to Recover

This is a philosophical question along with a scientific one. Technically, you would like to recover as much information as possible about these exoplanets. But at what point do you draw the line? These exoplanet atmospheres only offer us a small peek into what is happening to the planet as a whole. Therefore, our models must retain a certain level of simplicity so that we are not “over-fitting” any of our features. For example, let’s say someone blindfolded you and asked you to determine the composition of a bite of food. If you have a nice palate, you’d probably be able to determine bulk ingredients: chicken, carrots, peas, etc. If you are an expert you might be able to say something along the lines of: 80% chicken, 10% carrots, 10% peas. If you tried to then determine how the chef spent his time preparing the dish, that might be a stretch. At that point, we would say you are putting too much weight on just that one bite you took, and are therefore “over-fitting” the data. In their model, Benneke and Seager determine that the parameters we can get from planetary spectra are the following:

1. Volume mixing ratios of atmospheric constituents: i.e., planet X is 99% hydrogen, 0.5% carbon dioxide, 0.5% nitrogen.
2. Surface or cloud deck pressure: i.e., planet X has a surface pressure of 1 bar, like Earth. Or maybe, we can’t see a surface at all! Instead, we are looking at a thick layer of clouds with a pressure of 0.001 bars!
3. Planet radius: i.e., if planet X does have a very high cloud layer, where is the surface? Or, if the planet has no surface at all (like a gaseous Jupiter-type planet), where do we define the deepest atmospheric layer?
4. Planetary albedo: i.e., how much light is planet X absorbing, versus reflecting

Modeling Thousands of Spectra and Statistics

In order to pin down these four parameters, we need to find the exact combination of parameters that best fits the spectrum of planet X. And as you might imagine, trying to guess what those might be is worse than trying to guess lottery ticket numbers! So the only way to make it work is to try thousands of times! Benneke & Seager generate about 100,000 of these models before they can move on to their statistical analysis. Now you can appreciate why it’s so sad that atmospheric retrievalists get forgotten about! Once the 100,000 models have been created, each model is carefully compared to the spectrum of planet X (the data) in order to determine a best match. This is done through very complicated statistical models, which guest writer Ben Nelson did a wonderful job of explaining in this post. In the spirit of completeness I’ll give a short summary here.

The basic principle is to start with four initial conditions for your four parameters and calculate the probability that those are correct. Chances are they will not be… So we jump to another combination of those four parameters and test those out. Is the probability of those four parameters being correct higher than for your initial guess? If not, go back to your initial guess and try again. A great analogy for this is the idea of climbing up a mountain. You start at the bottom of the mountain with a very low probability of being correct and your goal is to make it to the very top. In some cases, you might accidentally choose a path that leads you down the mountain. But if you evaluate your altitude (i.e. your probability) at regular intervals you would hope to make it to the top as fast as possible. The same idea applies here to picking four correct parameters in a sea of an almost infinite number of choices.
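That mountain-climbing description is essentially the Metropolis algorithm. Below is a minimal one-parameter sketch in which a toy Gaussian "probability mountain" stands in for the real four-parameter posterior: propose a step, always accept uphill moves, and occasionally accept downhill moves so the walker can escape false summits:

```python
import math
import random

# Minimal Metropolis random walk: the target is a toy Gaussian "mountain"
# peaked at x = 2, standing in for the probability of a parameter combination.

def log_prob(x):
    return -0.5 * (x - 2.0) ** 2      # the "summit" is at x = 2

random.seed(0)
x, samples = -5.0, []                  # start far down the mountain
for _ in range(20000):
    proposal = x + random.gauss(0.0, 0.5)
    # Uphill moves are always accepted; downhill moves are accepted with
    # probability equal to the ratio of the two probabilities.
    if math.log(random.random()) < log_prob(proposal) - log_prob(x):
        x = proposal
    samples.append(x)

burned = samples[5000:]                # discard the climb-up ("burn-in")
mean = sum(burned) / len(burned)
print(f"sample mean ≈ {mean:.1f}")     # close to the summit at 2
```

After the burn-in, the walker spends its time near the summit, so the retained samples map out the high-probability region rather than just its single best point.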

The last figure I will show is this pyramid-like structure that appears at the end of Benneke and Seager’s work. Each leg of this pyramid is the mixing ratio of a different type of gaseous species: oxygen, carbon, nitrogen and hydrogen/helium. Therefore, it’s a great insight into the amount of parameter space we are dealing with (remember that getting mixing ratios is just one of the four parameters we are trying to fit). But more importantly, it gives us insights into what can be gained from these retrieval techniques. Here, they’ve fed three different “pretend” observations through their retrieval technique: one hot Neptune-like planet (green), one nitrogen-rich planet (red) and one methane-rich planet (blue). You can see by the very distinct groups of dots that you can really tell the difference between these planets with this complicated retrieval technique. This is a great sanity check.

This pyramid-like structure is called a quaternary diagram. It illustrates a large range of parameter space for different gaseous species: hydrogen/helium, nitrogen, carbon and oxygen. The different groupings of points show the retrieval results for three different planets. The fact that they are grouped in very different regions shows that we could theoretically tell the difference between these three kinds of planets: a hot Neptune-like planet (green), a hot methane-rich planet (blue) and a hot nitrogen-rich planet (red). Main point: atmospheric retrieval works!!

The study of exoplanet atmospheres will take off with the launch of the James Webb Space Telescope, and it is so important that the technique for retrieving planetary parameters from spectra works well! Without retrievalists, we have no way of testing our theoretical models against the real data. So even though they are the middle child of exoplanet characterization, we should really spend some time taking note of what they are doing!

## July 22, 2015

### Symmetrybreaking - Fermilab/SLAC

Underground plans

The Super-Kamiokande collaboration has approved a project to improve the sensitivity of the Super-K neutrino detector.

Super-Kamiokande, buried under about 1 kilometer of mountain rock in Kamioka, Japan, is one of the largest neutrino detectors on Earth. Its tank is full of 50,000 tons (about 13 million gallons) of ultrapure water, which it uses to search for signs of notoriously difficult-to-catch particles.

Recently members of the Super-K collaboration gave the go-ahead to a plan to make the detector a thousand times more sensitive with the help of a chemical compound called gadolinium sulfate.

Neutrinos are made in a variety of natural processes. They are also produced in nuclear reactors, and scientists can create beams of neutrinos in particle accelerators. These particles are electrically neutral, have little mass and interact only weakly with matter—characteristics that make them extremely difficult to detect even though trillions fly through any given detector each second.

Super-K catches about 30 neutrinos that interact with the hydrogen and oxygen in the water molecules in its tank each day. It keeps its water ultrapure with a filtration system that removes bacteria, ions and gases.

Scientists take extra precautions both to keep the ultrapure water clean and to avoid contact with the highly corrosive substance.

“Somebody once dropped a hammer into the tank,” says experimentalist Mark Vagins of the University of Tokyo's Kavli Institute for the Physics and Mathematics of the Universe. “It was chrome-plated to look nice and shiny. Eventually we found the chrome and not the hammer.”

When a neutrino interacts in the Super-K detector, it creates other particles that travel through the water faster than light can travel in water, creating a flash of blue Cherenkov light. The tank is lined with about 13,000 phototube detectors that can see the light.

#### Looking for relic neutrinos

On average, several massive stars explode as supernovae every second somewhere in the universe. If theory is correct, all supernovae to have exploded throughout the universe’s 13.8 billion years have thrown out trillions upon trillions of neutrinos. That means the cosmos would glow in a faint background of relic neutrinos—if scientists could just find a way to see even a fraction of those ghostlike particles.

For about half of the year, the Super-K detector is used in the T2K experiment, which produces a beam of neutrinos in Tokai, Japan, some 183 miles (295 kilometers) away, and aims it at Super-K. During the trip to the detector, some of the neutrinos change from one type of neutrino to another. T2K studies that change, which could give scientists hints as to why our universe holds so much more matter than antimatter.

But the T2K beam doesn’t run continuously during that half year. Instead, researchers send a beam pulse every few seconds, and each pulse lasts just a few microseconds. Super-K still detects neutrinos from natural processes while scientists are running T2K.

In 2002, at a neutrino meeting in Munich, Germany, experimentalist Vagins and theorist John Beacom of The Ohio State University began thinking of how they could better use Super-K to spy the universe’s relic supernova neutrinos.

“For at least a few hours we were standing there in the Munich subway station somewhere deep underground, hatching our underground plans,” Beacom says.

To pick out the few signals that come from neutrino events, you have to battle a constant clatter of background noise from other particles. Other incoming cosmic particles such as muons (the electron’s heavier cousin), or even electrons emitted by naturally occurring radioactive substances in rock, can produce signals that look like the ones scientists hope to find from neutrinos. No one wants to claim a discovery that later turns out to be a signal from a nearby rock.

Super-K already guards against some of this background noise by being buried underground. But some unwanted particles can get through, and so scientists need ways to separate the signals they want from deceiving background signals.

Vagins and Beacom settled on an idea—and a name for the next stage of the experiment:  Gadolinium Antineutrino Detector Zealously Outperforming Old Kamiokande, Super! (GADZOOKS!). They proposed to add 100 tons of the compound gadolinium sulfate—Gd2(SO4)3—to Super-K’s ultrapure water.

When a neutrino interacts with a molecule, it releases a charged lepton (a muon, electron, tau or one of their antiparticles) along with a neutron. Neutrons are thousands of times more likely to interact with the gadolinium sulfate than with another water molecule. So when a neutrino traverses Super-K and interacts with a molecule, its muon, electron, or antiparticle (Super-K can’t see tau particles) will generate a first pulse of light, and the neutron will create a second pulse of light: “two pulses, like a knock-knock,” Beacom says.

By contrast, a background muon or electron will make only one light pulse.

To extract only the neutrino interactions, scientists will use GADZOOKS! to focus on the two-signal events and throw out the single-signal events, reducing the background noise considerably.
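The "knock-knock" selection can be sketched as a simple delayed-coincidence filter. The pulse times and the 200-microsecond pairing window below are illustrative assumptions, not Super-K's actual reconstruction cuts:

```python
# Sketch of the delayed-coincidence idea behind GADZOOKS!: a neutrino gives a
# prompt lepton flash followed shortly after by the neutron capture on
# gadolinium, while background muons/electrons give a single flash. The
# window length here is an assumed, illustrative value.

COINCIDENCE_WINDOW_US = 200.0   # assumed pairing window (microseconds)

def find_knock_knock(pulse_times_us):
    """Return pairs of pulses close enough in time to count as one event."""
    pulses = sorted(pulse_times_us)
    pairs = []
    i = 0
    while i + 1 < len(pulses):
        if pulses[i + 1] - pulses[i] <= COINCIDENCE_WINDOW_US:
            pairs.append((pulses[i], pulses[i + 1]))
            i += 2              # both pulses consumed by this candidate
        else:
            i += 1              # lone pulse: likely background, skip it
    return pairs

# A genuine prompt+capture pair, followed by an isolated background flash.
print(find_knock_knock([100.0, 130.0, 5000.0]))   # [(100.0, 130.0)]
```

Keeping only the paired flashes and discarding lone ones is, in miniature, how the gadolinium doping suppresses the single-pulse background.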

#### The prototype

But you can’t just add 100 tons of a chemical compound to a huge detector without doing some tests first. So Vagins and colleagues built a scaled-down version, which they called Evaluating Gadolinium’s Action on Detector Systems (EGADS). At 0.4 percent the size of Super-K, it uses 240 of the same phototubes and 200 tons (52,000 gallons) of ultrapure water.

Over the past several years, Vagins’ team has worked extensively to show the benefits of their idea. One aspect of their efforts has been to build a filtration system that removes everything from the ultrapure water except for the gadolinium sulfate. They presented their results at a collaboration meeting in late June.

On June 27, the Super-K team officially approved the proposal to add gadolinium sulfate but renamed the project SuperK-Gd. The next steps are to drain Super-K to check for leaks and fix them, replace any burned out phototubes, and then refill the tank.

But this process must be coordinated with T2K, says Masayuki Nakahata, the Super-K collaboration spokesperson.

Once the tank is refilled with ultrapure water, scientists will add in the 100 tons of gadolinium sulfate. Once the compound is added, the current filtration system could remove it any time researchers would like, Vagins says.

“But I believe that once we get this into Super-K and we see the power of it, it’s going to become indispensable,” he says. “It’s going to be the kind of thing that people wouldn’t want to give up the extra physics once they’re used to it.”

Like what you see? Sign up for a free subscription to symmetry!

### ZapperZ - Physics and Physicists

The Standard Model Interactive Chart
Symmetry has published an interactive chart of the Standard Model of elementary particles. It is almost like a periodic table, but with only the most basic, necessary information. A rather useful link when you need just the basic info.

Zz.

### CERN Bulletin

CERN Bulletin Issue No. 30-31/2015
Link to e-Bulletin Issue No. 30-31/2015. Link to all articles in this issue.

## July 21, 2015

### astrobites - astro-ph reader's digest

Supernova with magnetar as origin of gamma-ray burst

Authors: J. Greiner et al.

First author’s affiliation: Max-Planck-Institut für Extraterrestrische Physik, Garching and Excellence Cluster Universe, Technische Universität München

Paper status: published in Nature

Beware! It is summer time, and today’s Astrobite deals with two very hot topics that are prevalent on any astronomy-related TV program: supernovae and gamma-ray bursts. Observations of supernovae have changed mankind’s world view significantly over the past centuries. Supernovae are crucially important for stellar nucleosynthesis because they produce all elements heavier than iron. Moreover, the observation of SN 1572 by Tycho Brahe was extremely important: even though he interpreted it as a new star rather than a dying star, his observation gave strong evidence that the universe beyond the solar system cannot be static, as the Aristotelian world view postulated. (Fun fact: his actual name in Danish is “Tyge Brahe”, which is pronounced like this.)

The first gamma-ray burst observations, though much more recent in history than SN observations, were enthralling from a scientific aspect, and maybe even more so from a political-history aspect. During the Cold War, the US military launched the Vela satellites to monitor potential nuclear bomb activities in the Soviet Union. In 1967, two of these satellites (Vela 3 and Vela 4) detected a strong signal of gamma radiation that could not be explained by nuclear activities. Later on, researchers in Los Alamos analysed more datasets and found several other gamma-ray burst events. After detailed analysis they concluded that these flashes of gamma rays must be due to very energetic explosions far away from the solar system, and they gave the phenomenon the name gamma-ray burst. Eventually, they published their results six years later in ApJ, triggering plenty of papers dealing with the phenomenon. Nowadays, there is agreement among astronomers that a specific type of supernova (Type Ic) is linked to at least some of the bursts in the most common group of gamma-ray bursts. These commonly observed gamma-ray bursts last for more than 2 seconds, which is quite long compared to the other bursts; hence, they are called long-lasting gamma-ray bursts, and 70% of all observed gamma-ray bursts belong to this group.

Finding a link between gamma-ray bursts and magnetars

The authors of today’s paper extend the picture further to gamma-ray bursts with even longer lifetimes than common long-lasting bursts. They observed a so-called ultra-long-duration gamma-ray burst, which is a burst that lasts for more than 10 thousand seconds (more than 2.8 hours). This particular burst (GRB 111209A) was observed for about 50 days, and it turned out that the burst evolved in time and colour in a very similar way to known GRBs related to Type Ic supernovae. However, there are some substantial differences in the spectrum at long wavelengths compared to the spectra observed in supernovae linked with ordinary long-lasting gamma-ray bursts. The observed spectrum hints at a significantly lower metal abundance in the vicinity of this supernova, and the authors interpret the observations in the following way:

The supernova is the result of the death of a very heavy star. During the explosion, the inner part of the former heavy star might collapse and form a fast-rotating neutron star with an extremely strong magnetic field around it. The object is therefore called a magnetar, and it is extremely energetic. If this energy – or a significant part of it – is released in the form of a bipolar jet, it has two consequences in Greiner et al.’s interpretation:

1. it causes a strong and long-lasting gamma-ray burst.
2. the supernova gets powered with additional energy, also increasing its duration.

Finally, the authors test their idea with a physical model that considers the additional energy injection from a magnetar into the supernova spectrum, and they find that it fits the data pretty well, although the best fit of a supernova powered by radioactive decay is also within the error bars (compare the dark and light blue curves to the dark blue points in Figure 1, which is Figure 2 in the paper). However, Greiner et al. rule out radioactive decay as the reason for the observed curve, because the derived mass of 56Ni significantly exceeds the mass found in known gamma-ray-burst-related supernovae.
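The shapes being compared in that fit are easy to sketch. A magnetar's spin-down luminosity falls off as a power law, while radioactive heating follows the 56Ni → 56Co decay chain; the decay timescales below are the standard values, but the normalisations and the spin-down time are illustrative assumptions, not the paper's fitted numbers:

```python
import math

# Compare the time dependence of the two candidate power sources for the
# supernova light curve. Amplitudes (l0) and the spin-down time T_M are
# illustrative placeholders; only the decay timescales are physical constants.

T_NI, T_CO = 8.8, 111.3          # 56Ni and 56Co e-folding times (days)
T_M = 10.0                       # assumed magnetar spin-down time (days)

def magnetar_luminosity(t_days, l0=1.0):
    """Magnetar spin-down input: L(t) = L0 / (1 + t/t_m)^2."""
    return l0 / (1.0 + t_days / T_M) ** 2

def radioactive_luminosity(t_days, l0=1.0):
    """Simplified two-exponential Ni -> Co decay heating."""
    return l0 * (math.exp(-t_days / T_CO) - math.exp(-t_days / T_NI))

for t in (1.0, 10.0, 50.0):
    print(t, magnetar_luminosity(t), radioactive_luminosity(t))
```

The point of such a comparison is that the two mechanisms decline differently at late times, which is what lets a fit to a 50-day light curve distinguish them.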

Fig. 1: Light curve of the supernova suggested to be linked to the ultra-long gamma-ray burst (111209A), plotted together with the evolution of known gamma-ray-burst supernovae (941, 061aj, 98bw) as well as super-luminous supernovae (PTF11rks, PS1-10bzj). The dark blue line shows the best fit when considering the spectrum of a supernova enriched by a magnetar. The light blue line illustrates the best fit if the supernova was powered by radioactive decay.

The attractive part of their interpretation is that it links two distinct explanations. The considered supernova is more luminous than all known gamma-ray burst related supernovae, though it is less luminous than super-luminous supernovae that are linked to the formation of magnetars. Loosely speaking, you can summarize their interpretation in this way: The particular ultra-long-lasting gamma-ray burst observation combined with the particular supernova is a hybrid of two known effects.

Featured image: Artist’s impression of a magnetar; credit: ESA/ATG medialab

### Lubos Motl - string vacua and pheno

The $$2\TeV$$ LHC excess could prove string theory
On Friday, I praised the beauty of the left-right-symmetric models that replace the hypercharge $$U(1)_Y$$ by a new $$SU(2)_R$$ group. They could explain the excess that especially ATLAS but also (in a different search) CMS seems to be seeing at the invariant mass around $$1.9\TeV$$, an excess that I placed at the first place of attractiveness among the known bumps at the LHC.

A random picture of intersecting D-branes

Alternatively, if that bump were real, it could have been a sign of compositeness, a heavy scalar (instead of a spin-one boson), or a triboson pretending to be a diboson. However, on Sunday, six string phenomenologists proposed a much more exciting explanation:
Stringy origin of diboson and dijet excesses at the LHC
The multinational corporation (SUNY, Paris, Munich, Taiwan, Bern, Boston) consisting of Anchordoqui, Antoniadis, Goldberg, Huang, Lüst, and Taylor argues that the bump has the required features to grow into the first package of exclusive collider evidence in favor of string theory – yes, I mean the theory that stinky brainless chimps yell is disconnected from experiments.

Why would such an ambitious conclusion follow from such a seemingly innocent bump on the road? We need just a little bit of patience to understand this point.

They agree with the defenders of the left-right-symmetric explanation of the bump that the particle that decays to produce the bump is a new spin-one boson, namely a $$Z'$$. But its corresponding $$U(1)_a$$ symmetry may be anomalous: there may exist a mixed anomaly in the $$U(1)_a\,SU(2)_L\,SU(2)_L$$ triangle, with two copies of the regular electroweak $$SU(2)$$ gauge group. An anomaly in the gauge group would mean that the field theory is inconsistent. In the typical field theory constructions, the right multiplicities and charges of the spectrum are needed to cancel the anomaly. However, string theory has one more trick that may cancel gauge anomalies. It's a trick that actually launched the First Superstring Revolution in 1984.

It's the Green-Schwarz mechanism.

In 1984, Green and Schwarz figured out how the anomaly cancellation works in type I superstring theory with the $$SO(32)$$ gauge group – where the anomaly is given by a hexagon diagram in $$d=10$$, much like it needs a triangle in $$d=4$$ – but the same trick may apply even after compactification. The new spin-one gauge field turns out to transform surprisingly nontrivially under a gauge invariance of a seemingly independent field, a two-index field, and the hexagon is then cancelled against a 2+4 tree diagram with the exchange of that two-index field.

In the $$d=4$$ case, we may see that this Green-Schwarz mechanism makes the previously anomalous $$U(1)_a$$ gauge boson massive – and the "Stückelberg" mass is just an order of magnitude or so lower than the string scale (which they therefore assume to be $$M_s\approx 20\TeV$$). This is normally viewed as an extremely high energy scale which is why these possibilities don't enter the conventional quantum field theoretical models.

But string theory may also be around the corner – in the case of some stringy braneworld models, particularly the intersecting braneworlds. In these braneworlds, which are very concrete stringy realizations of the "old large dimensions" paradigm, the Standard Model fields live on stacks of branes; they have the form of open strings whose basic duty is to stay attached to a D-brane. Some string modes (particles) live near the intersections of the D-brane stacks because one of their endpoints is attached to one stack and the other to the other stack, and the strings always want to be short, so as not to carry insanely high energy.

To make the story short, the anomaly-producing triangle diagram may also be interpreted as the Feynman diagram for a decay of the new $$Z'$$ boson of the $$U(1)_a$$ group into two $$SU(2)_L$$ gauge bosons. When the latter pair is decomposed into the basis of the usual particles we know, the decays may be

$\begin{aligned} Z' &\to W^+ W^-,\\ Z' &\to Z^0 Z^0,\\ Z' &\to Z^0 \gamma. \end{aligned}$

All three of these decays are unavoidable in the Green-Schwarz-mechanism-based models – and the relative branching ratios are pretty much given. Note that $$W^0\equiv W_3$$ is a mixture of $$Z^0$$ and $$\gamma$$, so all three pairs built from $$Z^0$$ and $$\gamma$$ would seem possible, but the Landau-Yang theorem implies that the $$\gamma\gamma$$ decay of $$Z'$$ is forbidden (the rate is zero) for symmetry reasons.

Their storyline is so predictive that they may tell you that the new coupling constant is $$g_a\approx 0.36$$, too.

So if their explanation is right, the bump near $$2\TeV$$ will keep growing – it may already be growing now: the first Run II results will be announced at EPS-HEP in Vienna, a meeting that starts tomorrow (follow the conference website)! Only about 1 inverse femtobarn of $$13\TeV$$ data has been accumulated in 2015 so far – much less than the 20-30/fb at $$8\TeV$$ in 2012. And if the authors of the paper discussed here are right, one more thing is true: the decay channel $$Z\gamma$$ of the new particle will soon be detected as well – and it will be a smoking gun for low-scale string theory!

No known consistent field theory predicts a nonzero $$Z\gamma$$ decay rate of the new massive gauge boson. The string-theoretical Green-Schwarz mechanism mixes what looks like a field-theoretical tree-level diagram with a one-loop diagram. Their being on equal footing implies that the regular QFT-like perturbation theory breaks down and instead, there is a hidden loop inside a vertex of the would-be tree-level diagram. This loop can't be expanded in terms of regular particles in a loop, however: it implies some stringy compositeness of the particles and processes.

A smoking gun. This particular one is, however, a smoking gun of something other than string theory.

This sounds too good to be true, but it may be true. I still think it's very unlikely, but these smart authors obviously think it's a totally sensible scenario. It's hard to figure out whether they really impartially believe that these low-scale intersecting braneworlds are likely, or whether their belief mostly boils down to wishful thinking.

If these ideas were right, we could observe megatons of stringy physics with finite-price colliders!

### Clifford V. Johnson - Asymptotia

Ian McKellen on Fresh Air!
I had a major treat last night! While making myself an evening meal I turned on the radio to find Ian McKellen (whose voice and delivery I love so very much I can listen to him slowly reading an arbitrarily long list of random numbers) being interviewed by Dave Davies on NPR's Fresh Air. It was of course delightful, and some of the best radio I've enjoyed in a while (and I listen to a ton of good radio every day, between having either NPR or BBC Radio 4 on most of the time) since it was the medium at its simple best - a splendid conversation with an interesting, thoughtful, well-spoken person. They also played and discussed a number of clips from his work, recent (I've been hugely excited to see Mr. Holmes, just released) and less recent (less well known delights such as Gods and Monsters -you should see it if you have not- and popular material like the first Hobbit film), and spoke at length about his private and public life and the intersection between the two, for example how his coming out as gay in 1988 positively affected his acting, and why.... There's so much in that 35 minutes! [...] Click to continue reading this post

### Lubos Motl - string vacua and pheno

A new LHC Kaggle contest: discover "$$\tau \to 3 \mu$$" decay
A year ago, the Kaggle.com machine learning contest server, along with the ATLAS Collaboration at the LHC, organized a contest in which you were asked to determine whether a collision of two protons involved the Higgs boson (which later decayed to a $$\tau^+\tau^-$$ pair, with one tau decaying leptonically and the other hadronically). To make the story short, there's a new similar contest out there:
Identify an unknown decay phenomenon
Again, you will submit a file in which each "test" collision is labeled as either "interesting" or "uninteresting". But in this case, you may actually discover a phenomenon that is believed not to exist at the LHC, according to the state-of-the-art theory (the Standard Model)!

The Higgs contest was all about simulated data. The data looked real but they were not, and several technicalities were switched off in the simulation to simplify things. Incredibly enough, here you are going to work with the real data from the relevant detector at the LHC, the LHCb detector: the LHCb collaboration is the co-organizer.

For each test event, you will have to announce a probability $$P_i$$ that the event involved the following decay of a tau:

$\tau^\pm \to \mu^\pm \mu^+\mu^-$

The tau lepton decays to three muons. The charge is conserved but the lepton number is not: among the decay products, the negative muon and the positive muon cancel, but there's still another muon – and it was created from a tau. The $$L_\mu$$ and $$L_\tau$$ conservation laws are violated.

At leading order in the Standard Model, the probability of such a decay is zero. I believe that the actual predicted rate is nonzero but unmeasurably tiny. New physics, however, would allow this "flavor-violating" process to take place.

To show you the unexpected relationships between different TRF blog posts, let me tell you that the blog post right before this one talked about the $$Z'$$ boson and this new spin-one particle could actually cause this "so far non-existent" process.

In fact, this option appears in the logo of the contest! The $$\tau^\pm$$ lepton decays to one $$\mu^\pm$$ and a virtual $$Z'$$, and the virtual $$Z'$$ decays to $$\mu^+\mu^-$$. The first vertex violates the flavor numbers but it's not so shocking for a new heavy particle to couple to leptons in this "non-diagonal" way.

The LHCb contest is harder than the Higgs contest in several respects:

1. Lower prizes: $7k, $5k, and $3k for the winner, silver medal, and bronze medal. It's harder to write difficult programs if you're less financially motivated. But LHCb is smaller than ATLAS, so you should have expected that. ;-)
2. No sharing of scripts: you won't be permitted to share your scripts for this contest, so everyone has to start from "scratch". Sadly, you may still use your programs and experience from other projects, so the machine learning folks will still have a huge advantage, perhaps a bigger one than in the Higgs contest.
3. Agreement and correlation pre-checks: to make things worse, your submission won't be counted at all if it fails to pass two tests: the agreement test and the correlation test. This feature of the contest, along with the previous one, will make the leaderboard much smaller than in the Higgs contest. The two tests reflect the fact that the dataset is composed of several groups of events – real collisions, simulated realistic ones, and simulated new-physics ones for verification purposes.
4. Larger files to download: in total, you have to download 400 MB worth of ZIP files that decompress to many gigabytes.
5. Messy details of the LHC are kept: lots of the technical details that make the real life of experimental physicists hard were kept – although translated to machine-learning-friendly conventions. Also, the evaluation metric is more sophisticated – a weighted area under the curve (the curve relating the number of false positives and false negatives).
6. And I forgot about 3 more complications that have scared me...

An ambitious contestant may view all these vices as virtues (or at least some of them).
After all, money corrupts and sucks; sharing encourages losers to accidentally mix with the skillful guys; it's good for the submissions to pass some extra tests so that one doesn't coincidentally submit garbage; and all these difficulties will keep the leaderboard of true competitors shorter and easier to follow (instead of the 2,000 people in the Higgs contest). I vaguely guess that the final, private leaderboard will be much closer to the preliminary, public one (there was a substantial change in the Higgs contest, sadly for your humble correspondent LOL). The reason for this belief of mine is that the contestants submit a larger number of guesses, they're continuous numbers, and the evaluation metric is a more continuous function of those, too. So the room for overfitting will probably be much lower than in the Higgs contest.

So far, there are only 13 people in the leaderboard and it's plausible that the total number will remain very low throughout the contest. If you write a single script that passes the tests at all, chances are high that you will immediately be placed very high in the leaderboard. At any rate, you have 2 months left to win this contest and proudly announce it to the world on this blog and in The Wall Street Journal. Your solution may be much more useful than in the Higgs case; technicalities weren't eliminated, so your ideas may be used directly. And what you may discover is a genuinely new, surprising process – but one that may actually already be present in the LHCb data (as the hints of a $$Z'$$ and flavor-violating Higgs decays suggest). Good luck.

Correction: the Higgs prizes were just $7k, $4k, and $2k, so this contest actually has better prizes. The money comes from CERN, Intel, two subdivisions of Yandex (a Russian Google competitor), and universities in Zurich, Warwick, Poland, and Russia.
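The contest's actual evaluation metric is a weighted area under the ROC curve, with extra agreement and correlation checks that are specific to the competition and not reproduced here. As a minimal sketch of the unweighted core idea, the AUC equals the probability that a randomly chosen signal event is scored above a randomly chosen background event (the Mann-Whitney identity); the function name and toy data below are mine:

```python
def roc_auc(labels, scores):
    """Unweighted ROC AUC via the Mann-Whitney identity: the probability
    that a random signal event outscores a random background event,
    counting ties as half a win."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = signal (tau -> 3 mu candidate), 0 = background
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)
print(auc)  # 8 of the 9 signal/background pairs are correctly ordered
```

The weighting in the real metric emphasizes particular regions of the curve; this plain version only illustrates what "area under the curve" measures.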

### Symmetrybreaking - Fermilab/SLAC

The Standard Model of particle physics

Explore the elementary particles that make up our universe.

The Standard Model is a kind of periodic table of the elements for particle physics. But instead of listing the chemical elements, it lists the fundamental particles that make up the atoms that make up the chemical elements, along with any other particles that cannot be broken down into any smaller pieces.

The complete Standard Model took a long time to build. Physicist J.J. Thomson discovered the electron in 1897, and scientists at the Large Hadron Collider found the final piece of the puzzle, the Higgs boson, in 2012.

Use this interactive model (based on a design by Walter Murch for the documentary Particle Fever) to explore the different particles that make up the building blocks of our universe.

### Up Quark

- Discovered: 1968
- Mass: 2.3 MeV
- Discovered at: SLAC
- Charge: 2/3
- Generation: First
- Spin: 1/2

Up and down quarks make up protons and neutrons, which make up the nucleus of every atom.

### Charm Quark

- Discovered: 1974
- Mass: 1.275 GeV
- Discovered at: Brookhaven & SLAC
- Charge: 2/3
- Generation: Second
- Spin: 1/2

In 1974, two independent research groups conducting experiments at two independent labs discovered the charm quark, the fourth quark to be found. The surprising discovery forced physicists to reconsider how the universe works at the smallest scale.

### Top Quark

- Discovered: 1995
- Mass: 173.21 GeV
- Discovered at: Fermilab
- Charge: 2/3
- Generation: Third
- Spin: 1/2

The top quark is the heaviest quark discovered so far. It has about the same weight as a gold atom. But unlike an atom, it is a fundamental, or elementary, particle; as far as we know, it is not made of smaller building blocks.

### Down Quark

- Discovered: 1968
- Mass: 4.8 MeV
- Discovered at: SLAC
- Charge: -1/3
- Generation: First
- Spin: 1/2

Nobody knows why, but a down quark is just a little bit heavier than an up quark. If that weren’t the case, the protons inside every atom would decay and the universe would look very different.

### Strange Quark

- Discovered: 1947
- Mass: 95 MeV
- Discovered at: Manchester University
- Charge: -1/3
- Generation: Second
- Spin: 1/2

Scientists discovered particles with “strange” properties many years before it became clear that those strange properties were due to the fact that they all contained a new, “strange” kind of quark. Theorist Murray Gell-Mann was awarded the Nobel Prize for introducing the concepts of strangeness and quarks.

### Bottom Quark

- Discovered: 1977
- Mass: 4.18 GeV
- Discovered at: Fermilab
- Charge: -1/3
- Generation: Third
- Spin: 1/2

This particle is a heavier cousin of the down and strange quarks. Its discovery confirmed that all elementary building blocks of ordinary matter come in three different versions.

### Electron

- Discovered: 1897
- Mass: 0.511 MeV
- Discovered at: Cavendish Laboratory
- Charge: -1
- Generation: First
- Spin: 1/2

The electron powers the world. It is the lightest particle with an electric charge and a building block of all atoms. The electron belongs to the family of charged leptons.

### Muon

- Discovered: 1937
- Mass: 105.66 MeV
- Discovered at: Caltech & Harvard
- Charge: -1
- Generation: Second
- Spin: 1/2

The muon is a heavier version of the electron. It rains down on us as it is created in collisions of cosmic rays with the Earth’s atmosphere. When it was discovered in 1937, a physicist asked, “Who ordered that?”

### Tau

- Discovered: 1976
- Mass: 1776.82 MeV
- Discovered at: SLAC
- Charge: -1
- Generation: Third
- Spin: 1/2

The discovery of this particle in 1976 completely surprised scientists. It was the first discovery of a particle of the so-called third generation. It is the third and heaviest of the charged leptons, heavier than both the electron and the muon.

### Electron Neutrino

- Discovered: 1956
- Mass: <2 eV
- Discovered at: Savannah River Plant
- Charge: 0
- Generation: First
- Spin: 1/2

Measurements and calculations in the 1920s led to the prediction of the existence of an elusive particle without electric charge, the neutrino. But it wasn’t until 1956 that scientists observed the signal of an electron neutrino interacting with other particles. Nuclear reactions in the sun and in nuclear power plants produce electron antineutrinos.

### Muon Neutrino

- Discovered: 1962
- Mass: <0.19 MeV
- Discovered at: Brookhaven
- Charge: 0
- Generation: Second
- Spin: 1/2

Neutrinos come in three flavors. The muon neutrino was first discovered in 1962. Neutrino beams from accelerators are typically made up of muon neutrinos and muon antineutrinos.

### Tau Neutrino

- Discovered: 2000
- Mass: <18.2 MeV
- Discovered at: Fermilab
- Charge: 0
- Generation: Third
- Spin: 1/2

Based on theoretical models and indirect observations, scientists expected to find a third generation of neutrino. But it took until 2000 for scientists to develop the technologies to identify the particle tracks created by tau neutrino interactions.

### Photon

- Discovered: 1923
- Mass: <1x10^-18 eV
- Discovered at: Washington University
- Charge: 0
- Spin: 1

The photon is the only elementary particle visible to the human eye—but only if it has the right energy and frequency (color). It transmits the electromagnetic force between charged particles.

### Gluon

- Discovered: 1979
- Mass: 0
- Discovered at: DESY
- Charge: 0
- Spin: 1

The gluon is the glue that holds together quarks to form protons, neutrons and other particles. It mediates the strong nuclear force.

### Z Boson

- Discovered: 1983
- Mass: 91.1876 GeV
- Discovered at: CERN
- Charge: 0
- Spin: 1

The Z boson is the electrically neutral cousin of the W boson and a heavy relative of the photon. Together, these particles explain the electroweak force.

### W Boson

- Discovered: 1983
- Mass: 80.385 GeV
- Discovered at: CERN
- Charge: ±1
- Spin: 1

The W boson is the only force carrier that has an electric charge. It’s essential for weak nuclear reactions: Without it, the sun would not shine.

### Higgs Boson

- Discovered: 2012
- Mass: 125.7 GeV
- Discovered at: CERN
- Charge: 0
- Spin: 0

Discovered in 2012, the Higgs boson was the last missing piece of the Standard Model puzzle. It is a different kind of particle from the other force carriers: it gives mass to the quarks as well as to the W and Z bosons. Whether it also gives mass to neutrinos remains to be discovered.
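For readers who prefer data to tables, the entries above can be encoded in a small data structure. This is just a sketch transcribing a few rows of the interactive table (not an exhaustive listing; the class and field names are mine):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Particle:
    name: str
    kind: str        # "quark", "lepton", or "boson"
    charge: str      # electric charge in units of e
    spin: str
    mass: str        # as quoted in the table above
    discovered: int

# A few rows transcribed from the entries above (not the full table):
PARTICLES = [
    Particle("up quark", "quark", "2/3", "1/2", "2.3 MeV", 1968),
    Particle("electron", "lepton", "-1", "1/2", "0.511 MeV", 1897),
    Particle("electron neutrino", "lepton", "0", "1/2", "<2 eV", 1956),
    Particle("photon", "boson", "0", "1", "<1x10^-18 eV", 1923),
    Particle("Higgs boson", "boson", "0", "0", "125.7 GeV", 2012),
]

# Select the spin-1/2 matter particles in this sample:
fermions = [p.name for p in PARTICLES if p.spin == "1/2"]
print(fermions)
```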

Launch the interactive model »

### ZapperZ - Physics and Physicists

Yoichiro Nambu
This is a bit late, but I will kick myself if I don't acknowledge the passing of Yoichiro Nambu this past week. This person, if you've never heard of his name before, was truly a GIANT in physics, and not just in elementary particle physics. His work transcends any single field of physics and has had a significant impact in condensed matter.

I wrote an entry on his work when he won the Nobel prize a few years ago. His legacy will live on long after him.

Zz.

## July 20, 2015

### arXiv blog

Robotic Surgery Linked To 144 Deaths Since 2000

Surgery involving robots is far from perfect, according to a new study of death rates during medical procedures involving robotic equipment and techniques.

Robotic surgeons were involved in the deaths of 144 people between 2000 and 2013, according to records kept by the U.S. Food and Drug Administration. And some forms of robotic surgery are much riskier than others: the death rate for head, neck, and cardiothoracic surgery is almost 10 times higher than for other forms of surgery.

### Sean Carroll - Preposterous Universe

Why is the Universe So Damn Big?

I love reading io9, it’s such a fun mixture of science fiction, entertainment, and pure science. So I was happy to respond when their writer George Dvorsky emailed to ask an innocent-sounding question: “Why is the scale of the universe so freakishly large?”

You can find the fruits of George’s labors at this io9 post. But my own answer went on at sufficient length that I might as well put it up here as well. Of course, as with any “Why?” question, we need to keep in mind that the answer might simply be “Because that’s the way it is.”

Whenever we seem surprised or confused about some aspect of the universe, it’s because we have some pre-existing expectation for what it “should” be like, or what a “natural” universe might be. But the universe doesn’t have a purpose, and there’s nothing more natural than Nature itself — so what we’re really trying to do is figure out what our expectations should be.

The universe is big on human scales, but that doesn’t mean very much. It’s not surprising that humans are small compared to the universe, but big compared to atoms. That feature does have an obvious anthropic explanation — complex structures can only form on in-between scales, not at the very largest or very smallest sizes. Given that living organisms are going to be complex, it’s no surprise that we find ourselves at an in-between size compared to the universe and compared to elementary particles.

What is arguably more interesting is that the universe is so big compared to particle-physics scales. The Planck length, from quantum gravity, is 10^{-33} centimeters, and the size of an atom is roughly 10^{-8} centimeters. The difference between these two numbers is already puzzling — that’s related to the “hierarchy problem” of particle physics. (The size of atoms is fixed by the length scale set by electroweak interactions, while the Planck length is set by Newton’s constant; the two distances are extremely different, and we’re not sure why.) But the scale of the universe is roughly 10^29 centimeters across, which is enormous by any scale of microphysics. It’s perfectly reasonable to ask why.

Part of the answer is that “typical” configurations of stuff, given the laws of physics as we know them, tend to be very close to empty space. (“Typical” means “high entropy” in this context.) That’s a feature of general relativity, which says that space is dynamical, and can expand and contract. So you give me any particular configuration of matter in space, and I can find a lot more configurations where the same collection of matter is spread out over a much larger volume of space. So if we were to “pick a random collection of stuff” obeying the laws of physics, it would be mostly empty space. Which our universe is, kind of.

Two big problems with that. First, even empty space has a natural length scale, which is set by the cosmological constant (energy of the vacuum). In 1998 we discovered that the cosmological constant is not quite zero, although it’s very small. The length scale that it sets (roughly, the distance over which the curvature of space due to the cosmological constant becomes appreciable) is indeed the size of the universe today — about 10^26 centimeters. (Note that the cosmological constant itself is inversely proportional to this length scale — so the question “Why is the cosmological-constant length scale so large?” is the same as “Why is the cosmological constant so small?”)

This raises two big questions. The first is the “coincidence problem”: the universe is expanding, but the length scale associated with the cosmological constant is a constant, so why are they approximately equal today? The second is simply the “cosmological constant problem”: why is the cosmological-constant scale so enormously larger than the Planck scale, or even than the atomic scale? It’s safe to say that right now there are no widely-accepted answers to either of these questions.

So roughly: the answer to “Why is the universe so big?” is “Because the cosmological constant is so small.” And the answer to “Why is the cosmological constant so small?” is “Nobody knows.”

But there’s yet another wrinkle. Typical configurations of stuff tend to look like empty space. But our universe, while relatively empty, isn’t *that* empty. It has over a hundred billion galaxies, with a hundred billion stars each, and over 10^50 atoms per star. Worse, there are maybe 10^88 particles (mostly photons and neutrinos) within the observable universe. That’s a lot of particles! A much more natural state of the universe would be enormously emptier than that. Indeed, as space expands the density of particles dilutes away — we’re headed toward a much more natural state, which will be much emptier than the universe we see today.

So, given what we know about physics, the real question is “Why are there so many particles in the observable universe?” That’s one angle on the question “Why is the entropy of the observable universe so small?” And of course the density of particles was much higher, and the entropy much lower, at early times. These questions are also ones to which we have no good answers at the moment.

### John Baez - Azimuth

The Game of Googol

Here’s a puzzle from a recent issue of Quanta, an online science magazine:

Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?

You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?

At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the largest number—even if you’ve seen one of those numbers!

But “obviously” is not a proof. Sometimes “obvious” things are wrong!

It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.

If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:

• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.

Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two followup puzzles. So don’t look down there unless you want a spoiler.

I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.

### The history of this puzzle

I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:

I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.

Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:

Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.

So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!
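Gardner's stopping rule can be simulated. The classical "secretary problem" analysis says that skipping roughly n/e slips and then stopping at the first record maximizes your chance of ending on the overall maximum, at about 37% for large n. The sketch below (function names are mine) draws the slip values uniformly at random purely for simulation purposes; the strategy itself only uses relative comparisons, so the distribution doesn't matter:

```python
import math
import random

def play_googol(slips, cutoff):
    """Skip the first `cutoff` slips, then stop at the first slip beating
    everything seen so far; if none does, take the last slip."""
    best_seen = max(slips[:cutoff]) if cutoff else float("-inf")
    for x in slips[cutoff:-1]:
        if x > best_seen:
            return x
    return slips[-1]

def win_rate(n, cutoff, trials=50_000):
    """Fraction of games in which the strategy ends on the true maximum."""
    wins = 0
    for _ in range(trials):
        slips = [random.random() for _ in range(n)]
        wins += play_googol(slips, cutoff) == max(slips)
    return wins / trials

random.seed(2)
n = 20
rate = win_rate(n, round(n / math.e))  # cutoff ≈ n/e = 7
print(rate)  # roughly 0.38 for n = 20
```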

There’s been a lot of work on Googol. Julien Berestycki writes:

• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.

One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame).

### Egan’s solution

Okay, here is Greg Egan’s solution, paraphrased a bit:

Pick some function $f : \mathbb{R} \to \mathbb{R}$ such that:

$\displaystyle{ \lim_{x \to -\infty} f(x) = 0 }$

$\displaystyle{ \lim_{x \to +\infty} f(x) = 1 }$

$f$ is monotonically increasing: if $x > y$ then $f(x) > f(y)$

There are lots of functions like this, for example

$\displaystyle{f(x) = \frac{e^x}{e^x + 1} }$

Next, pick one of the first player’s hands at random. If the number you are shown is $x,$ compute $f(x).$ Then generate a uniformly distributed random number $z$ between 0 and 1. If $z$ is less than or equal to $f(x)$ guess that $x$ is the larger number, but if $z$ is greater than $f(x)$ guess that the larger number is in the other hand.

The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.

This is

$\frac{1}{2} f(x) + \frac{1}{2} (1 - f(y)) = \frac{1}{2} + \frac{1}{2} (f(x) - f(y))$

This is strictly greater than $\frac{1}{2}$ since $x > y$ so $f(x) - f(y) > 0$.

So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50%. If the numbers in the other player’s hands are very large, or very small, your chance will be just slightly more than 50%.
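Egan's randomized strategy is easy to check numerically. The sketch below (function names are mine) uses the logistic $f(x) = e^x/(e^x+1)$ mentioned above; with the adversary holding 1 and 2, the predicted success probability is $\frac{1}{2} + \frac{1}{2}(f(2) - f(1)) \approx 0.575$:

```python
import math
import random

def f(x):
    """Monotone function with limits 0 at -infinity and 1 at +infinity."""
    return math.exp(x) / (math.exp(x) + 1.0)

def play_round(a, b):
    """Peek at a random hand, stick with probability f(shown), else switch.
    Returns True if we end up holding the larger number."""
    shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
    choice = shown if random.random() <= f(shown) else hidden
    return choice == max(a, b)

random.seed(0)
trials = 200_000
wins = sum(play_round(1.0, 2.0) for _ in range(trials))
print(wins / trials)  # close to 0.5 + (f(2) - f(1)) / 2 ≈ 0.575
```

If the two numbers were, say, 1000 and 1001, both f-values would be indistinguishable from 1 and the advantage would shrink toward zero, exactly as the text says.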

### Followup puzzles

Here are two more puzzles:

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

But watch out—here come Egan’s solutions to those!

### Solutions

Egan writes:

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Answer: If we adopt a deterministic strategy, that means there is a function $S: \mathbb{R} \to \{0,1\}$ that tells us whether or not we stick with the number $x$ when we see it. If $S(x)=1$ we stick with it, if $S(x)=0$ we swap it for the other number.

If the two numbers are $x$ and $y,$ with $x > y,$ then the probability of success will be:

$P = 0.5 + 0.5(S(x)-S(y))$

This is exactly the same as the formula we obtained when we stuck with $x$ with probability $f(x),$ but we have specialised to functions $S$ valued in $\{0,1\}.$

We can only guarantee a more than 50% chance of choosing the larger number if $S$ is monotonically increasing everywhere, i.e. $S(x) > S(y)$ whenever $x > y.$ But this is impossible for a function valued in $\{0,1\}.$ To prove this, define $x_0$ to be any number in $[1,2]$ such that $S(x_0)=0;$ such an $x_0$ must exist, otherwise $S$ would be constant on $[1,2]$ and hence not monotonically increasing. Similarly define $x_1$ to be any number in $[-2,-1]$ such that $S(x_1) = 1.$ We then have $x_0 > x_1$ but $S(x_0) < S(x_1).$
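The impossibility can also be seen concretely. For a deterministic threshold rule (one representative of the functions $S$ above), the first player defeats it by placing both numbers on the same side of the threshold, pinning the success rate at exactly 50%. A quick simulation, with names of my own choosing:

```python
import random

def threshold_player(shown, hidden, threshold=0.0):
    """Deterministic rule S: stick iff the shown number >= threshold."""
    return shown if shown >= threshold else hidden

def success_rate(a, b, trials=100_000):
    """Fraction of rounds in which the threshold rule picks max(a, b)."""
    wins = 0
    for _ in range(trials):
        shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
        wins += threshold_player(shown, hidden) == max(a, b)
    return wins / trials

random.seed(1)
# The adversary places both numbers on the same side of the threshold:
r_above = success_rate(5.0, 7.0)    # always stick  -> win iff shown the max
r_below = success_rate(-7.0, -5.0)  # always switch -> win iff hidden the max
print(r_above, r_below)  # both hover around 0.50
```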

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

Answer: As Philip Gibbs noted, a deterministic pseudo-random number generator is still deterministic. Using a specific sequence of algorithmically random bits

$(b_1, b_2, \dots )$

to construct a number $z$ between $0$ and $1$ means $z$ takes on the specific value:

$z_0 = \sum_i b_i 2^{-i}$

So rather than sticking with $x$ with probability $f(x)$ for our monotonically increasing function $f,$ we end up always sticking with $x$ if $z_0 \le f(x),$ and always swapping if $z_0 > f(x).$ This is just using a function $S:\mathbb{R} \to \{0,1\}$ as in Puzzle 2, with:

$S(x) = 0$ if $x < f^{-1}(z_0)$

$S(x) = 1$ if $x \ge f^{-1}(z_0)$

So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.
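To make the reduction concrete, here is a small sketch (the logistic f and the particular value of z0 are my illustrative choices): once z0 is fixed, the strategy collapses to a threshold rule, and the first player can neutralize it by putting both numbers on the same side of the threshold.

```python
import math

f = lambda t: 1 / (1 + math.exp(-t))      # a monotone f valued in (0, 1)
f_inv = lambda p: math.log(p / (1 - p))   # its inverse

z0 = 0.73   # a fixed number, however "randomly" it was generated

# The induced deterministic rule from the argument above:
S = lambda x: 1 if z0 <= f(x) else 0

# Knowing only that some fixed z0 exists, the first player picks
# both numbers on the same side of the threshold f_inv(z0):
t = f_inv(z0)
x, y = t + 1.0, t + 2.0                   # S(x) = S(y) = 1
win_prob = 0.5 + 0.5 * (S(max(x, y)) - S(min(x, y)))
print(win_prob)  # 0.5: the guaranteed advantage is gone
```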

Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution of numbers $z,$ and the situation where we have a specific number $z_0,$ generated by any means whatsoever.

We could generate $z_0$ using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly $z_0$ is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!

This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if for any particular play we make the other player has a move that ensures our defeat. That’s just how randomness works.

## July 19, 2015

### Ben Still - Neutrino Blog

Pentaquark Series 1: What Are Quarks?
This is the first in a series of posts I will release over the next two weeks aimed at covering the physics behind Pentaquarks, the history of "discovery", and the implications of the latest results from LHCb. We start off today by first answering the question:

## What Are Quarks?

 Quarks are building blocks that cannot be broken into smaller things.
Quarks are a group of fundamental particles that are indivisible, meaning that they cannot be broken into smaller pieces. They are building blocks that combine in groups to make up a whole zoo of other (composite) particles. They were first thought up by physicists Murray Gell-Mann and George Zweig while attempting to mathematically explain the vast array of new particles popping up in experiments throughout the 1950s and 1960s.
The debris that results from smashing protons into each other was seen in experiments to be a whole lot messier than the debris from two electrons colliding head-on. Gell-Mann and others reasoned that this would happen if the proton were not a single entity like the electron but instead, like a bag of groceries, contained multiple particles within itself.

The menagerie of particles being discovered each week at particle accelerators could, in Gell-Mann’s model, all be explained as different composites of just a few types of truly fundamental particles. The multiple that seemed to fit the data in most cases was three, and Gell-Mann took the spelling for his ‘kwork’ from a passage in James Joyce’s ‘Finnegans Wake’ - “Three quarks for Muster Mark”. Proof of Gell-Mann’s model came when a particle he predicted in 1962 to exist (which he called the Ω-) was seen in an experiment at Brookhaven National Laboratory in the US in 1964. Gell-Mann received the Nobel Prize in Physics in 1969 for this work, which was the birth of the quark.

We know today that the proton is made up from three quarks: two up quarks and one down quark. The naming of ‘up’ and ‘down’ shows that some poetry disappeared in naming the individual types of quark! The up and down quarks have the lightest masses of all of the quarks (they would weigh the least if we could practically weigh something so small!). The fact that they are so light also means they are the most stable of all of the quarks: experiment has shown us that the heavier a particle is, the shorter its lifespan. Just like high-fashion models, particles are constantly wanting to become as light as possible.

 Protons and neutrons are each made from three quark building blocks.

A neutron is also composed of a grouping of three quarks: one up quark and two down. The lifetime of a neutron sitting by itself is limited because, although moderately stable (metastable), it knows it can still become a lighter proton. The change from a neutron to a proton (plus an electron and an antineutrino) is known as radioactive beta decay. Experiments around the world have been looking closely at protons to see if they, like the neutron, change into something lighter. To date not a single experiment has seen a proton decay into anything else, which suggests that the proton is immortal and certainly the most stable composite particle we know of.

The up and down quarks are part of what is known as the first generation of fundamental particles. For reasons which we do not know, Nature has presented us with two more generations. The only difference between particles in each generation is their mass: generation 1 particles are lightest, generation 2 particles are heavier than generation 1 but in turn lighter than generation 3, which are the heaviest. All of the other properties of the particles, such as the way they feel forces (e.g. their electric charge), seem to remain the same. The heavier versions of the down quark are called the strange quark in generation 2 and the bottom quark in generation 3. The heavier versions of the up quark are called the charm quark in generation 2 and the top quark in generation 3.

Heavy particles are made in particle accelerators like the LHC thanks to Einstein’s most famous equation, E=mc², which tells us that the mass of new particles (m) can be created from lots of energy (E). The heavier the particles we want to make, the higher the energy to which we have to accelerate protons in our accelerator before smashing them together. Remember I said heavy particles are unstable: it turns out that the heavier they get, the more unstable they become, which means any heavy particle made with quarks from generations 2 or 3 is usually not around for very long.
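As a back-of-the-envelope illustration of that equation (the ~173 GeV top-quark mass is the commonly quoted value, not a number from this post):

```python
# E = m c^2, with particle masses quoted in GeV/c^2 as is customary:
# a mass of m GeV/c^2 needs m GeV of collision energy to be created.
eV_in_joules = 1.602176634e-19   # 1 eV in joules (exact in SI since 2019)
top_mass_GeV = 173.0             # ~top quark mass, GeV/c^2 (assumed value)

E_GeV = top_mass_GeV             # energy needed, in GeV
E_joules = E_GeV * 1e9 * eV_in_joules
print(E_joules)   # ~2.8e-8 J: tiny in everyday terms, huge per particle
```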

Next Post: Rule of Three - Why are there not a different number of quarks in protons and other similar particles?

### Lubos Motl - string vacua and pheno

Glimpsed particles that the LHC may confirm
The LHC is back in business. Many of us have watched the webcast today. There was a one-hour delay at the beginning. Then they lost the beam once. And things went pretty much smoothly afterwards. After a 30-month coffee break, the collider is collecting actual data to be used in future papers, at a center-of-mass energy of $$13\TeV$$.

So far, no black hole has destroyed the Earth.

It's possible that the LHC will discover nothing new, at least for years. But it is in no way inevitable. I would say that it's not even "very likely". We have various theoretical reasons to expect one discovery or another. A theory-independent vague argument is that the electroweak scale has no deep reason to be too special. And every time we added an order of magnitude to the energies, we saw something new.

But in this blog post, I would like to recall some excesses – inconclusive but tantalizing upward deviations from the Standard Model predictions – that have been mentioned on this blog. Most of them emerged from ATLAS or CMS analyses at the LHC. Some of them may be confirmed soon.

Please submit your corrections if some of the "hopeful hints" have been killed. And please submit those that I forgot.

The hints below will be approximately sorted from those that I consider most convincing at this moment. The energy at the beginning is the estimated mass of a new particle.
I omitted LHC hints older than November 2011 but you may see that the number of possible deviations has been nontrivial.

The most accurate photographs of the Standard Model's elementary particles provided by CERN so far. The zoo may have to be expanded.

Stay tuned.

### The n-Category Cafe

Category Theory 2015

Just a quick note: you can see lots of talk slides here:

Category Theory 2015, Aveiro, Portugal, June 14-19, 2015.

The Giry monad, tangent categories, Hopf monoids in duoidal categories, model categories, topoi… and much more!

### The n-Category Cafe

In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect that some readers will be as grateful as I).

As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.

Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when such is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?

To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.

Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.

1. HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.

2. HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs.

3. One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.

4. Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strange-looking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.

5. Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTT-like type theory.

6. Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)

There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.

1. Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no pre-written libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)

2. Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.

3. I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.

4. Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.

5. Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types; both are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all $x:W$, $P(x)$”, it suffices to prove some number of other statements involving the predicate $P$. The most familiar example is induction on the natural numbers, which says that in order to prove “for all $n\in\mathbb{N}$, $P(n)$” it suffices to prove $P(0)$ and “for all $n\in\mathbb{N}$, if $P(n)$ then $P(n+1)$”. When using proof by induction, you need to isolate $P$ as a predicate on $n$, specialize to $n=0$ to check the base case, write down $P(n)$ as the inductive hypothesis, then replace $n$ by $n+1$ to find what you have to prove in the induction step. The students in an intro to proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of $P$, and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven. But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type $W$” and it immediately decides what $P$ is and presents you with a list of the remaining goals that have to be proven.
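For the familiar natural-number case, this is what the experience looks like in practice (a minimal sketch in Lean 4 syntax; Lean is my choice of illustration, not one named in the post): the `induction` tactic works out the predicate itself and hands back the base case and the inductive step as goals.

```lean
-- Proving "for all n, 0 + n = n" by induction: the tactic unifies the
-- goal with the induction principle for Nat and presents the two
-- remaining goals automatically.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- goal: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih]   -- goal: 0 + (k + 1) = k + 1
```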

To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most user-friendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.

## July 18, 2015

### Georg von Hippel - Life on the lattice

LATTICE 2015, Days Three and Four
Due to the one-day shift of the entire conference programme relative to other years, Thursday instead of Wednesday was the short day. In the morning, there were parallel sessions. The most remarkable thing to be reported from those (from my point of view) is that MILC are generating a=0.03 fm lattices now, which handily beats the record for the finest lattice spacing; they are observing some problems with the tunnelling of the topological charge at such fine lattice spacings, but appear hopeful that the lattices can be useful.

After the lunch break, excursions were offered. I took the trip to Himeji to see Himeji Castle, a very remarkable five-story wooden building that due to its white exterior is also known as the "White Heron Castle". During the trip, typhoon Nangka approached, so the rains cut our enjoyment of the castle park a bit short (though seeing koi in a pond with the rain falling into it had a certain special appeal, the enjoyment of which I in my Western ignorance suppose might be considered a form of Japanese wabi aesthetics).

As the typhoon resolved into a rainstorm, the programme wasn't cancelled or changed, and so today's plenary programme started with a talk on some formal developments in QFT by Mithat Ünsal, who reviewed trans-series, Lefschetz thimbles, and Borel summability as different sides of the same coin. I'm far too ignorant of these more formal field theory topics to do them justice, so I won't try a detailed summary. Essentially, it appears that the expansion of certain theories around the saddle points corresponding to instantons is determined by their expansion around the trivial vacuum, and the ambiguities arising in the Borel resummation of perturbative series when the Borel transform has a pole on the positive real axis can in some way be connected to this phenomenon, which may allow for a way to resolve the ambiguities.

Next, Francesco Sannino spoke about the "bright, dark, and safe" sides of the lattice. The bright side referred to the study of visible matter, in particular to the study of technicolor models as a way of implementing the spontaneous breaking of electroweak symmetry, without the need for a fundamental scalar introducing numerous tunable parameters, and with the added benefits of removing the hierarchy problem and the problem of φ4 triviality. The dark side referred to the study of dark matter in the context of composite dark matter theories, where one should remember that if the visible 5% of the mass of the universe require three gauge groups for their description, the remaining 95% are unlikely to be described by a single dark matter particle and a homogeneous dark energy. The safe side referred to the very current idea of asymptotic safety, which is of interest especially in quantum gravity, but might also apply to some extension of the Standard Model, making it valid at all energy scales.

After the coffee break, the traditional experimental talk was given by Toru Iijima of the Belle II collaboration. The Belle II detector is now beginning commissioning at the upcoming SuperKEKB accelerator, which will provide greatly improved luminosity to allow for precise tests of the Standard Model in the flavour sector. In this, Belle II will be complementary to LHCb, because it will have far lower backgrounds, allowing for precision measurements of rare processes, while not being able to access as high energies. Most of the measurements planned at Belle II will require lattice inputs to interpret, so there is a challenge to our community to come up with sufficiently precise and reliable predictions for all required flavour observables. Besides quark flavour physics, Belle II will also search for lepton flavour violation in τ decays, try to improve the phenomenological prediction for (g-2)μ by measuring the cross section for e+e- -> hadrons more precisely, and search for exotic charmonium- and bottomonium-like states.

Closely related was the next talk, a review of progress in heavy flavour physics on the lattice given by Carlos Pena. While simulations of relativistic b quarks at the physical mass will become a possibility in the not-too-distant future, for the time being heavy-quark physics is still dominated by the use of effective theories (HQET and NRQCD) and methods based either on appropriate extrapolations from the charm quark mass region, or on the Fermilab formalism, which is sort of in-between. For the leptonic decay constants of heavy-light mesons, there are now results from all formalisms, which generally agree very well with each other, indicating good reliability. For the semileptonic form factors, there has been a lot of development recently, but to obtain precision at the 1% level, good control of all systematics is needed, and this includes the momentum-dependence of the form factors. The z-expansion, and extended versions thereof allowing for simultaneous extrapolation in the pion mass and lattice spacing, has the advantage of allowing for a test of its convergence properties by checking the unitarity bound on its coefficients.

After the coffee break, there were parallel sessions again. In the evening, the conference banquet took place. Interestingly, the (excellent) food was not Japanese, but European (albeit with a slight Japanese twist in seasoning and presentation).

### Georg von Hippel - Life on the lattice

LATTICE 2015, Day Five
In a marked deviation from the "standard programme" of the lattice conference series, Saturday started off with parallel sessions, one of which featured my own talk.

The lunch break was relatively early, therefore, but first we all assembled in the plenary hall for the conference group photo (a new addition to the traditions of the lattice conference), which was followed by afternoon plenary sessions. The first of these was devoted to finite temperature and density, and started with Harvey Meyer giving the review talk on finite-temperature lattice QCD. The thermodynamic properties of QCD are by now relatively well-known: the transition temperature is agreed to be around 155 MeV, chiral symmetry restoration and the deconfinement transition coincide (as well as that can be defined in the case of a crossover), and the number of degrees of freedom is compatible with a plasma of quarks and gluons above the transition, but the thermodynamic potentials approach the Stefan-Boltzmann limit only slowly, indicating that there are strong correlations in the medium. Below the transition, the hadron resonance gas model describes the data well. The Columbia plot describing the nature of the transition as a function of the light and strange quark masses is being further solidified: the size of the lower left-hand corner first-order region is being measured, and the nature of the left-hand border (most likely O(4) second-order) is being explored. Beyond these static properties, real-time properties are beginning to be studied through the finite-temperature spectral functions. One interesting point was that there is a difference between the screening masses (spatial correlation lengths) and quasiparticle masses (from the spectral function) in any given channel, which may even tend in opposite directions as functions of the temperature (as seen for the pion channel).

Next, Szabolcs Borsanyi spoke about fluctuations of conserved charges at finite temperature and density. While of course the sum of all outcoming conserved charges in a collision must equal the sum of the ingoing ones, when considering a subvolume of the fireball, this can be best described in the grand canonical ensemble, as charges can move into and out of the subvolume. The quark number susceptibilities are then related to the fluctuating phase of the fermionic determinant. The methods being used to avoid the sign problem include Taylor expansions, fugacity expansions and simulations at imaginary chemical potential, all with their own strengths and weaknesses. Fluctuations can be used as a thermometer to measure the freeze-out temperature.

Lastly, Luigi Scorzato reviewed the Lefschetz thimble, which may be a way out of the sign problem (e.g. at finite chemical potential). The Lefschetz thimble is a higher-dimensional generalization of the concept of steepest-descent integration, in which the integral of exp(S(z)) for complex S(z) is evaluated by finding the stationary points of S and integrating along the curves passing through them along which the imaginary part of S is constant. On such Lefschetz thimbles, a Langevin algorithm can be defined, allowing for a Monte Carlo evaluation of the path integral in terms of Lefschetz thimbles. In quantum-mechanical toy models, this seems to work already, and there appears to be hope that this might be a way to avoid the sign problem of finite-density QCD.

After the coffee break, the last plenary session turned to physics beyond the Standard Model. Daisuke Kadoh reviewed the progress in putting supersymmetry onto the lattice, which is still a difficult problem due to the fact that the finite differences which replace derivatives on a lattice do not respect the Leibniz rule, introducing SUSY-breaking terms when discretizing. The ways past this are either imposing exact lattice supersymmetries or fine-tuning the theory so as to remove the SUSY-breaking in the continuum limit. Some theories in both two and four dimensions have been simulated successfully, including N=1 Super-Yang-Mills theory in four dimensions. Given that there is no evidence for SUSY in nature, lattice SUSY is of interest especially for the purpose of verifying the ideas of gauge-gravity duality from the Super-Yang-Mills side, and in one and two dimensions, agreement with the predictions from gauge-gravity duality has been found.

The final plenary speaker was Anna Hasenfratz, who reviewed Beyond-the-Standard-Model calculations in technicolor-like theories. If the Higgs is to be a composite particle, there must be some spontaneously broken symmetry that keeps it light, either a flavour symmetry (pions) or a scale symmetry (dilaton). There are in fact a number of models that have a light scalar particle, but the extrapolation of these theories is rendered difficult by the fact that this scalar is (and for phenomenologically interesting models would have to be) lighter than the (techni-)pion, and thus the usual formalism of chiral perturbation theory may not work. Many models of strong BSM interactions have been and are being studied using a large number of different methods, with not always conclusive results. A point raised towards the end of the talk was that for theories with a conformal IR fixed-point, universality might be violated (and there are some indications that e.g. Wilson and staggered fermions seem to give qualitatively different behaviour for the beta function in such cases).

The conference ended with some well-deserved applause for the organizing team, who really ran the conference very smoothly even in the face of a typhoon. Next year's lattice conference will take place in Southampton (England/UK) from 24th to 30th July 2016. Lattice 2017 will take place in Granada (Spain).

### Lubos Motl - string vacua and pheno

Pentaquark discovery claimed by LHCb
Due to confinement, quarks are obsessive about the creation of bound states. Most typically, we have quark-antiquark pairs (mesons) and triplets of quarks (baryons). All of those states have lots of extra gluons and quark-antiquark pairs.

Quark bound states simpler than the pentaquark

The word "pentaquark" means "five quarks". They are hypothetical particles made out of five quarks-or-antiquarks. The Greek prefix is being used to remember the times when Greece was an advanced country, some 2,000 years ago. These bags of 5 particles have to contain 4 quarks and 1 antiquark, or vice versa, because $$4-1$$ and $$1-4$$ are the only multiples of 3 among the allowed numbers $$x-(5-x)$$ and the divisibility by three is needed for the particle to be color-neutral.
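The counting can be spelled out in a couple of lines (a trivial enumeration, just restating the divisibility argument above):

```python
# For a bag of 5 constituents, q quarks and 5 - q antiquarks, the net
# quark number q - (5 - q) must be a multiple of 3 for the state to be
# colour-neutral. Enumerate the allowed splits:
allowed = [(q, 5 - q) for q in range(6) if (q - (5 - q)) % 3 == 0]
print(allowed)  # [(1, 4), (4, 1)]
```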

This word has appeared once on this blog. About 9.7 years ago, I wrote about a seminar at which Peter Ouyang had claimed that pentaquarks didn't exist, for some subtle technical reasons. (Well, the plural has appeared thrice on TRF.)

Well, this Gentleman will surely find today's paper by the LHCb collaboration controversial. The experimenters released this preprint
Observation of $$J/\psi p$$ resonances consistent with pentaquark states in $$\Lambda^0_b\to J/\psi K^- p$$ decays
The particular pentaquark that is apparently being observed is a "pentaquark-charmonium" state.

They looked at lots and lots of decays of the $$\Lambda_b^0$$ hyperon, a well-known cousin of the neutron (or the proton). The neutron has the quark content $$udd$$. Replace one $$d$$ by the bottom quark $$b$$ and you get $$udb$$ which is the content of the neutral bottom Lambda hyperon.

This beast is created many times in the LHC collisions. And it often decays to three particles: the $$J/\psi$$ meson, also called the charmonium (the content is $$c\bar c$$), discovered by Richter's and Ting's teams in 1974; the negative kaon with the $$s\bar u$$ content; and our beloved $$uud$$ proton.

This is a three-body final state but one may calculate the invariant mass of two of the final particles, the charmonium and the proton, and they experimentally find two clear peaks. They correspond to resonances with
1. mass $$4.380\pm 0.008\pm 0.029 \GeV$$, width $$205\pm 18\pm 86\MeV$$
2. mass $$4.450\pm 0.002\pm 0.003 \GeV$$, width $$39\pm 5\pm 19\MeV$$
Each of these two resonances is seen at a significance level exceeding nine sigma, so there's no possibility that it's just some "fluke". The interpretation could hypothetically be different from a "pentaquark" – whose quark content is $$uudc\bar c$$ – but the observed widths are rather large, tens to hundreds of megaelectronvolts (the decay is fast), so there is not enough time for changes of the quark content.
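The invariant-mass variable behind these peaks is just the Minkowski norm of the summed four-momenta of the $$J/\psi$$ and the proton. A minimal sketch with illustrative momenta (in GeV; these numbers are made up for the example, not LHCb data):

```python
import math

def four_momentum(m, px, py, pz):
    """Build an on-shell four-momentum (E, px, py, pz) for a particle of mass m."""
    return (math.sqrt(m**2 + px**2 + py**2 + pz**2), px, py, pz)

def inv_mass(p1, p2):
    """Invariant mass of a two-particle system: sqrt(E^2 - |p|^2) of the summed four-momenta."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Illustrative example: a J/psi (3.097 GeV) and a proton (0.938 GeV)
jpsi = four_momentum(3.097, 1.0, 0.0, 2.0)
proton = four_momentum(0.938, -0.2, 0.1, 1.5)
print(inv_mass(jpsi, proton))  # ≈ 4.30 GeV for these made-up momenta
```

Computing this quantity for many real decays and histogramming the results is what produces the bumps quoted above.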

I would think that these new resonances are pentaquarks, indeed, when it comes to the number of dominant valence quarks. Another question is whether all claims that have been made about pentaquarks in the theoretical literature are correct. I would be much less certain about that... There is a school of thought that interprets these new states as hadronic molecules. Because the counterpart of the fine-structure constant for the strong force is so close to one, I have a problem with the very concept of separating multi-quark bound states from "molecules". The difference between them can't be parametrically separated. But just to be sure, it is plausible that the description of the states as "molecules" will turn out to be useful.

Those impatient people who love to talk about "deadlines" for discovery may want to know that the pentaquarks were first hypothesized in the 1960s, half a century ago. Even though they don't really require any extraordinary energies of the experiments, it took quite some time to observe them.

See e.g. the BBC for a short story or a CERN press release.

### Clifford V. Johnson - Asymptotia

Goodbye Nambu
One of the towering giants of the field, Yoichiro Nambu, passed away a short while ago, at age 94. He made a remarkably wide range of major (foundational) contributions to various fields, from condensed matter through particle physics, to string theory. His 2008 Nobel Prize was for work that was a gateway for other Nobel Prize-winning work, for example 2012's Higgs particle work. He was an inspiration to us all. Here's an excellent 1995 Scientific American piece (updated a bit in 2008) about him, which nicely characterises some of his style and contributions, with comments from several notable physicists. Here is a University of Chicago obituary, a Physics World one, one by Hirosi Ooguri, and one from the New York Times. There are several others worth reading too. Since everyone is talking more about his wonderful work on symmetry-breaking (and rightly so), I've put up (on the board above) instead the Nambu-Goto action governing the motion of a relativistic string (written with a slight abuse of notation). This action, and its generalisations, is a cornerstone of string theory, and you'll find it in pretty much every text on the subject. Enjoy. Thank you, Professor Nambu. -cvj Click to continue reading this post

## July 17, 2015

### Sean Carroll - Preposterous Universe

Yoichiro Nambu

It was very sad to hear yesterday that Yoichiro Nambu has died. He was aged 94, so it was after a very long and full life.

Nambu was one of the greatest theoretical physicists of the 20th century, although not one with a high public profile. Among his contributions:

• Being the first to really understand spontaneous symmetry breaking in quantum field theory, work for which he won a (very belated) Nobel Prize in 2008. We now understand the pion as a (pseudo-) “Nambu-Goldstone boson.”
• Suggesting that quarks might come in three colors, and those colors might be charges for an SU(3) gauge symmetry, giving rise to force-carrying particles called gluons.
• Proposing the first relativistic string theory, based on what is now called the Nambu-Goto action.

So — not too shabby.

But despite his outsized accomplishments, Nambu was quiet, proper, it’s even fair to say “shy.” He was one of those physicists who talked very little, and was often difficult to understand when he did talk, but if you put in the effort to follow him you would invariably be rewarded. One of his colleagues at the University of Chicago, Bruce Winstein, was charmed by the fact that Nambu was an experimentalist at heart; at home, apparently, he kept a little lab, where he would tinker with electronics to take a break from solving equations.

Any young person in science might want to read this profile of Nambu by his former student Madhusree Mukerjee. In it, Nambu tells of when he first came to the US from Japan, to be a postdoctoral researcher at the Institute for Advanced Study in Princeton. “Everyone seemed smarter than I,” Nambu recalls. “I could not accomplish what I wanted to and had a nervous breakdown.”

If Yoichiro Nambu can have a nervous breakdown because he didn’t feel smart enough, what hope is there for the rest of us?

Here are a few paragraphs I wrote about Nambu and spontaneous symmetry breaking in The Particle at the End of the Universe.

A puzzle remained: how do we reconcile the idea that photons have mass inside a superconductor with the conviction that the underlying symmetry of electromagnetism forces the photon to be massless?

This problem was tackled by a number of people, including American physicist Philip Anderson, Soviet physicist Nikolai Bogoliubov, and Japanese-American physicist Yoichiro Nambu. The key turned out to be that the symmetry was indeed there, but that it was hidden by a field that took on a nonzero value in the superconductor. According to the jargon that accompanies this phenomenon, we say the symmetry is “spontaneously broken”: the symmetry is there in the underlying equations, but the particular solution to those equations in which we are interested doesn’t look very symmetrical.

Yoichiro Nambu, despite the fact that he won the Nobel Prize in 2008 and has garnered numerous other honors over the years, remains relatively unknown outside physics. That’s a shame, as his contributions are comparable to those of better-known colleagues. Not only was he one of the first to understand spontaneous symmetry breaking in particle physics, but he was also the first to propose that quarks carry color, to suggest the existence of gluons, and to point out that certain particle properties could be explained by imagining that the particles were really tiny strings, thus launching string theory. Theoretical physicists admire Nambu’s accomplishments, but his inclination is to avoid the limelight.

For several years in the early 2000’s I was a faculty member at the University of Chicago, with an office across the hall from Nambu’s. We didn’t interact much, but when we did he was unfailingly gracious and polite. My major encounter with him was one time when he knocked on my door, hoping that I could help him with the email system on the theory group computers, which tended to take time off at unpredictable intervals. I wasn’t much help, but he took it philosophically. Peter Freund, another theorist at Chicago, describes Nambu as a “magician”: “He suddenly pulls a whole array of rabbits out of his hat and, before you know it, the rabbits reassemble in an entirely novel formation and by God, they balance the impossible on their fluffy cottontails.” His highly developed sense of etiquette, however, failed him when he was briefly appointed as department chair: reluctant to explicitly say “no” to any question, he would indicate disapproval by pausing before saying “yes.” This led to a certain amount of consternation among his colleagues, once they realized that their requests hadn’t actually been granted.

After the BCS theory of superconductivity was proposed, Nambu began to study the phenomenon from the perspective of a particle physicist. He put his finger on the key role played by spontaneous symmetry breaking, and began to wonder about its wider applicability. One of Nambu’s breakthroughs was to show (partly in collaboration with Italian physicist Giovanni Jona-Lasinio) how spontaneous symmetry breaking could happen even if you weren’t inside a superconductor. It could happen in empty space, in the presence of a field with a nonzero value — a clear precursor to the Higgs field. Interestingly, this theory also showed how a fermion field could start out massless, but gain mass through the process of symmetry breaking.

As brilliant as it was, Nambu’s suggestion of spontaneous symmetry breaking came with a price. While his models gave masses to fermions, they also predicted a new massless boson particle — exactly what particle physicists were trying to avoid, since they didn’t see any such particles created by the nuclear forces. Soon thereafter, Scottish physicist Jeffrey Goldstone argued that this wasn’t just an annoyance: this kind of symmetry breaking necessarily gave rise to massless particles, now called “Nambu-Goldstone bosons.” Pakistani physicist Abdus Salam and American physicist Steven Weinberg then collaborated with Goldstone in promoting this argument to what seemed like an air-tight proof, now called “Goldstone’s theorem.”

One question that must be addressed by any theory of broken symmetry is, what is the field that breaks the symmetry? In a superconductor the role is played by the Cooper pairs, composite states of electrons. In the Nambu/Jona-Lasinio model, a similar effect happens with composite nucleons. Starting with Goldstone’s 1961 paper, however, physicists became comfortable with the idea of simply positing a set of new fundamental boson fields whose job it was to break symmetries by taking on a nonzero value in empty space. The kind of fields required are known as “scalar” fields, which is a way of saying they have no intrinsic spin. The gauge fields that carry forces, although they are also bosons, have spin equal to one.

If the symmetry weren’t broken, all the fields in Goldstone’s model would behave in exactly the same way, as massive scalar bosons, due to the requirements of the symmetry. When the symmetry is broken, the fields differentiate themselves. In the case of a global symmetry (a single transformation all throughout space), which is what Goldstone considered, one field remains massive, while the others become massless Nambu-Goldstone bosons — that’s Goldstone’s theorem.

### astrobites - astro-ph reader's digest

Telltale Signs of Dwarf Galaxies

Smaller but Not Lesser

Galaxies come in many different shapes and sizes. Dwarf galaxies are the most common type of galaxy in the Universe. Compared to the Milky Way and other normal-sized galaxies that house hundreds of billions of stars, dwarf galaxies typically boast only a few billion stars. As a result, they have smaller masses and lower surface brightness (a measure of luminosity averaged over area, in units of mag/arcsec2). These dwarf galaxies hang around larger galaxies and are good companions to their larger and more massive counterparts. For instance, we currently know of 43 dwarf galaxies that orbit our Milky Way (the Small and Large Magellanic Clouds are the Milky Way’s most massive dwarfs) and ~50 that are contained within the Local Group, with membership belonging mostly to the Milky Way and Andromeda.

What is so interesting about dwarf galaxies that people are so keen on finding them? Besides their sheer number, dwarf galaxies are non-negligible players in the cosmology playground: they are more dark-matter dominated than larger galaxies and serve as laboratories to test dark matter theories. One of the recurring obstacles to the standard cosmological model we adopt, known as LCDM (Lambda Cold Dark Matter), is that simulations predict many more Milky Way dwarf satellites than we observe, also known as the “missing satellites” problem. Simulations also predict some massive dwarf galaxies to host more dark matter than we observe, another cosmology buzzword known as the “too big to fail” problem. In the hope of resolving these cosmological issues, dwarf galaxies are being scrutinized more carefully, and the expedition to uncover more dwarfs is more vibrant than ever.

A New Way of Looking

So now we’re pumped up to go and search for more dwarf galaxies. But wait. Current search strategies (which look for dwarf galaxies by resolving their stellar populations) are biased against finding low surface brightness and low Galactic latitude dwarfs, a factor that could have contributed to the “missing satellites” problem. Figure 1 illustrates the biases in current dwarf search methods. Fainter dwarfs are simply more difficult to find, while the increasing number of foreground stars at low Galactic latitudes hampers dwarf searches in that region of the sky. This paper proposes to use a special type of variable star, RR Lyrae (RRL), as a tool for discovering faint and low Galactic latitude dwarfs.

Fig 1 – Left plot shows size versus luminosity for confirmed (closed points) and new (open points) dwarf galaxies in the Milky Way, while the right plot shows luminosity versus distance for the same points. Notice the dearth of faint dwarf galaxies in the upper left corner of the left plot and the lack of distant dwarfs in the lower right corner of the right plot. These show that current search methods work best at finding close-by and bright dwarfs, but are biased against distant faint dwarfs. [Figure 1 in paper.]

What is this group of stars so exotically named, you ask? RR Lyrae (RRL) are a class of short-period (0.2–1 day) variable stars found in old and metal-poor populations. They have light curves that are easy to distinguish and are good distance indicators, since their intrinsic luminosity is already well-determined. Because the Milky Way halo consists mostly of old stellar populations, RRL are particularly abundant there. This is where it gets interesting: at least one RRL star has been found in each known (low-luminosity) Milky Way dwarf (see this earlier paper). Based on this premise, a single RRL star is hypothesized to be an indicator of an extremely low luminosity dwarf; this paper investigates the detectability of dwarf galaxies using RRL in simulations and ranks the performance of time-domain surveys in RRL (and thus dwarf galaxy) discovery.
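To see why a well-determined intrinsic luminosity makes RRL good distance indicators: the distance follows directly from the distance modulus. A minimal sketch, assuming a typical RRL visual absolute magnitude of M_V ≈ 0.6 (a standard textbook value, not a number from the paper):

```python
def rrl_distance_kpc(m_app, M_abs=0.6):
    """Distance in kpc from the distance modulus m - M = 5 * log10(d / 10 pc).
    M_abs ~ 0.6 is a typical RR Lyrae visual absolute magnitude (assumed value)."""
    d_pc = 10 ** ((m_app - M_abs + 5) / 5)
    return d_pc / 1000.0

# An RRL observed at apparent magnitude 19.1 would lie at roughly
print(round(rrl_distance_kpc(19.1), 1))  # → 50.1 (kpc), i.e. halo distances
```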

Groups of RR Lyrae as telltales of Dwarf Galaxies

To assess the improvement in the discovery rate of faint dwarf galaxies using RRL, the authors simulated RRL spatial distributions in tens of thousands of fake dwarf galaxies. They then searched for groups of 2 or more RRL, which is the minimum group size for an effective search for dwarf galaxies at halo distances (i.e., at d > 50 kpc). Figure 2 shows the detectability of simulated dwarf galaxies as groups of 2 or more RRL. Notice the huge portion of white in the figure? It means that a large portion of discovery space at low surface brightness and low Galactic latitude can be opened up by this method of identifying dwarf galaxies as groups of RRL.

Fig 2 – The detectability of simulated dwarf galaxies as groups of 2 or more RR Lyrae stars, as a function of size (x-axis) and luminosity (y-axis). The detectability of dwarfs is also dependent on how common we think RRL stars occur; left plot is for less common while right plot is for more common occurrence. The blue closed and open points are confirmed and new Milky Way dwarfs while the red dashed lines indicate constant surface brightness. Identification rate of low-surface brightness dwarf galaxies through RRL is high, as shown through the white regions. [Figure 7 in paper.]

The authors searched RRL catalogs in the hope of uncovering new dwarf galaxy candidates. Although their search was partially limited by the incompleteness of the catalogs, they managed to find two RRL groups that are not known to be associated with any Milky Way structure. Follow-up of these two groups via deep imaging would be required to confirm their nature. Additionally, the authors rank the performance of current time-domain surveys in terms of dwarf galaxy discovery using this RRL method. Among current surveys, PanSTARRS I (Panoramic Survey Telescope And Rapid Response System) may be our best hope. Its survey region includes a large area at low Galactic latitude and it is deep enough to reveal low surface brightness structures. With PanSTARRS I, we hope to find ~15% of Milky Way dwarf galaxies. Looking ahead, LSST (Large Synoptic Survey Telescope) would crush all existing surveys in discovering new dwarf galaxies. It is wide enough, fast enough, and deep enough to find ~45% of all Milky Way dwarfs, which would constitute the most complete possible census of the Milky Way dwarfs, according to the authors’ predictions.

In efforts to mine for new astrophysical objects, astronomers always strive for completeness and efficiency. In this case, a complete and unbiased sample of dwarf galaxies would enable all sorts of interesting statistical studies, especially in terms of resolving the “missing satellites” problem mentioned at the beginning. By exploiting RRL as proxies for dwarf galaxies, the authors propose a new way to find the low luminosity and low Galactic latitude dwarf galaxies that current search strategies are biased against. Based on the detectability studies mentioned above, this method should unveil more dwarf galaxies, especially via PanSTARRS I and LSST. It will be interesting to see how many we actually find using this neat little method.

### Lubos Motl - string vacua and pheno

Symmetry magazine, papers about the $$2\TeV$$ $$W_R$$-like bumps
A good idea to get used to left-right-symmetric models
Sad news: Yoichiro Nambu died of a heart attack on July 5th. This forefather of string theory and other things shared the 2008 Nobel prize in physics.
When I listed some of the excesses seen at the LHC that the ongoing run will either confirm or disprove, the #1 bump I mentioned was the $$2\TeV$$ bump of ATLAS that looks like a new $$W$$-like boson decaying to two normal electroweak bosons. The local significance was about 3.5 sigma and the global one was 2.5 sigma. Moreover, CMS saw similar (but weaker) effects at a nearby place.

It seems increasingly clear that the high-energy phenomenological community actually agrees with my choice of the "most interesting bump of all". The Symmetry Magazine published by SLAC+Fermilab just printed a story
Something goes bump in the data

The CMS and ATLAS experiments at the LHC see something mysterious, but it’s too soon to pop the Champagne
where two ladies describe the same bump. And make no mistake about it. Lots of phenomenologists also think that it is an extremely interesting bump because new papers appear on a daily basis.

For example, a new paper today suggests that this bump results from a new, heavy Higgs boson, either a charged one or a neutral one. And another hep-ph paper released today discusses this excess and another one which may be a hint of light supersymmetry and/or left-right-symmetric models. I am pretty sure that the number of experts who are excited by this excess is large.

Supersymmetry has been discussed many times but I think that I haven't ever written about the concept of the left-right-symmetric models. They contain an idea that is less revolutionary or far-reaching than supersymmetry, I think, but it's still remarkably cool and it is a potentially important step towards grand unification, too.

Left-right-symmetric models: more natural than the Standard Model?

In the Standard Model, the electromagnetic $$U(1)_{\rm em}$$ group generated by the electric charge $$Q$$ is embedded into a larger group, the electroweak group $$SU(2)_W\times U(1)_Y$$, composed of the electroweak isospin and the hypercharge. The electric charge is written as a combination of two generators of the electroweak group:

$Q = \frac Y2 + I_{3L}$

Quarks and leptons are described as Dirac, four-component spinors, but to describe their electroweak interactions, these four-component spinors have to be divided into two two-component Weyl spinors. The left-handed and right-handed parts of the spinor (particle) have to be treated independently.

In the Standard Model, the left-handed components of the quarks and leptons (and, similarly, the right-handed components of their antiparticles – produced by the Hermitian conjugate fields) transform nontrivially, as a doublet, under $$SU(2)_W$$. That's why they interact with the $$W$$-bosons. The left-handed electron and the left-handed neutrino are combined into a doublet – they are therefore analogous particles (they behave exactly the same at energies much higher than the Higgs mass). However, these particles' right-handed spinorial parts (well, at least the right-handed electron: the right-handed neutrinos don't have to exist, as far as direct experimental evidence goes) have to be treated as singlets.

These doublets' and singlets' values of the hypercharge $$Y$$ have to be adjusted in the right way for the value of $$Q$$, given by the sum above, to be the same for the left-handed electron and the right-handed one – and similarly for the down-type and up-type quarks, too. So in effect, the Standard Model depends on the assignment of lots of independent values of $$Y$$ to singlets and doublets that have the property that all (charged) quarks and leptons may be combined into full Dirac fermions. You always find pairs of Weyl spinors for which the values of $$Q$$ match.

You might say that this structure of the Standard Model is contrived and the Standard Model doesn't explain why you may always construct the whole Dirac spinors with a uniform value of $$Q$$. The separate treatment of the left-handed and right-handed parts of the fermions may look "contrived" to you by itself. Isn't there a more symmetric treatment that explains why the Dirac spinors exist (why the $$Q$$ agrees for both parts) etc.?

Yes, there is an extension of the Standard Model that explains that, the left-right-symmetric models!

The group $$U(1)_Y$$ is extended into a new $$SU(2)$$ – namely $$SU(2)_R$$ (right), while the original $$SU(2)_W$$ (weak) is renamed as $$SU(2)_L$$ (left) – and the hypercharge $$Y$$ is rewritten as

$Y = 2I_{3R} + (B-L)$

You can see that the adjustable part of the hypercharge "became" the third component of the new, right-handed isospin $$SU(2)_R$$ group. However, the original hypercharge wasn't always an integer, like $$2I_{3R}$$ is, so there has to be some extra additive shift. If you combine this new "definition" of $$Y$$ with the Standard Model formula for $$Q$$, you get a pretty neat formula for the electric charge in the left-right-symmetric models:

$Q = I_{3L} + I_{3R} + \frac{B-L} 2$

The left-handed fermions are doublets under $$SU(2)_L$$ but singlets under $$SU(2)_R$$; it's reversed for the right-handed fermions. The list of possible values of the sum of the first two terms is therefore the same for the left-handed particles as it is for the right-handed ones. The parts of a doublet always differ by $$\Delta I_3=\pm 1$$ i.e. by $$\Delta Q=1$$, which is right. So the only other thing you have to verify is the additive shift. It is clearly the average value of the electric charge of the "Diracized doublets" because the average of $$I_{3L}+I_{3R}$$ is zero – as for every non-Abelian group.

But that's the correct value. The electron-neutrino Diracized doublet has the average $$Q$$ equal to $$(-1+0)/2=-1/2$$ which matches $$(B-L)/2$$ because $$B=0$$ and $$L=1$$. Similarly, the quark Diracized doublets have the average electric charge $$(+2/3-1/3)/2=+1/6$$ which agrees with $$(B-L)/2$$ because $$B=+1/3$$ and $$L=0$$ for quarks.
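These assignments can be checked in a few lines of Python, using exact rational arithmetic; a minimal sketch reproducing the charge formula and the quantum numbers used above:

```python
from fractions import Fraction as F

def charge(i3l, i3r, b, l):
    """Electric charge in the left-right-symmetric model:
    Q = I_3L + I_3R + (B - L)/2."""
    return i3l + i3r + (F(b) - F(l)) / 2

# Left-handed fermions have I_3R = 0; right-handed fermions have I_3L = 0.
assert charge(F(1, 2), 0, 0, 1) == 0              # left-handed neutrino
assert charge(F(-1, 2), 0, 0, 1) == -1            # left-handed electron
assert charge(0, F(-1, 2), 0, 1) == -1            # right-handed electron: same Q
assert charge(F(1, 2), 0, F(1, 3), 0) == F(2, 3)  # left-handed up quark
assert charge(0, F(1, 2), F(1, 3), 0) == F(2, 3)  # right-handed up quark: same Q
print("left- and right-handed charges match")
```

The asserts confirm that each left-handed fermion and its right-handed partner get the same $$Q$$, which is exactly the statement that full Dirac spinors exist.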

In some counting, we have simplified the assignment of charges and representations to the known quarks and leptons. Why? We have copied the tricky assignment of the $$SU(2)_W$$ representation twice – both for the left-handed particles and the right-handed ones. So the addition of the new $$SU(2)_R$$ group hasn't added any "arbitrariness" at all. And concerning the $$U(1)$$ charges, we have replaced the seemingly arbitrary assignment of many values of $$Y$$ to the left-handed doublets and the right-handed singlets by the many fewer assignments of the $$(B-L)$$ charge generating the $$U(1)_{B-L}$$ group. And only two values of $$B-L$$ had to be chosen – one for leptons and one for quarks!

So the left-right-symmetric models "explain" the representations and charges of all the quarks and leptons "more naturally" than the Standard Model. Moreover, the new $$U(1)_{B-L}$$ factor of the gauge group that we had to add may naturally arise from grand unified theories (GUT). You just need a good model that also breaks the grand unified symmetry – and you always need some mechanism that breaks the new group $$SU(2)_R$$ that the left-right-symmetric models added. That breaking has to occur at a higher energy scale than the known electroweak $$SU(2)_L=SU(2)_W$$ symmetry breaking, perhaps those $$2\TeV$$ or so. But it can be done. Viable models like that exist.

Just to be sure: We extended $$U(1)_Y$$ into an $$SU(2)_R$$ group – if we treat $$(B-L)$$ to be a "constant" because that group is probably broken at much higher energies than the LHC energies – which basically means that the gauge boson $$Z$$ of $$U(1)_Y$$ (well, it's mixed with $$I_{3L}$$ and a photon, but let's identify the generator with the $$Z$$-boson) is extended to the list of three bosons, $$Z$$ and $$W_R^\pm$$, and the latter two might have the mass of $$2\TeV$$ and they may try to emphasize their existence in the form of the ATLAS bump.

I do find it totally plausible that by the end of the year, the LHC will confirm the existence of these new particles and in 2016, people will already be saying that "it had to be obvious" that the new status quo model, the left-right-symmetric model, is nicer and more natural and people including geniuses like Weinberg et al. had been stupid for almost 50 years when they avoided the self-evident left-right extensions of the Standard Model!

Stay tuned. ;-)

Off-topic but physics: Weyl points were finally detected – they were theorized in 1929. The lesson is the same as with other recent discoveries such as the pentaquarks. Sometimes it just takes time to see things experimentally.

### CERN Bulletin

And why not at CERN?
Teleworking or remote working, flexible working hours and other arrangements favouring a better work-life balance have been adopted by many companies and organizations! Made possible by the development of new technologies, including the Internet, remote working appeals to more and more staff, as well as to more and more companies, which find advantages in it in terms of space management, safety (fewer home-to-work commutes), sustainable development (less pollution), and the motivation and well-being of their personnel. Flexible hours, and even "core hours"1, are also increasingly common practices; companies that have opted for this flexibility point to the autonomy and well-being of their staff, but also to attractiveness and competitiveness. Moreover, studies in the economics of well-being and behavioural economics show that "... companies today still retain this culture of presenteeism, which runs counter to what employees are asking for ..." (see the article in La Tribune de Genève of 21 June 2015 – Heureux au travail ? Une utopie réaliste). These questions were addressed in the context of the 2015 five-yearly review, under the diversity theme "Work-life balance". To this end, CERN commissioned from the OECD a comparative study of current practice in this area at several international organizations (EMBL, ESA, ESO, ITER, UNOG, EC and EPO). The study shows that a large proportion of these organizations already practise, often for a long time, remote working (between two and five days, including occasional teleworking), flexible working hours (ranging from "core hours" to full flexibility (e.g. EMBL and ITER)) and part-time work, with very diverse practices.
It is clear that such changes cannot be adapted to every type of work; activities requiring a physical presence on the CERN site, or those subject to fixed schedules, cannot benefit from them. For other activities, and barring other restrictions to be defined, nothing stands in the way of introducing greater flexibility at CERN, both for the well-being and motivation of the personnel and to meet the demands of competitiveness and attractiveness. The key word is mutual trust. And it works! In various countries (Great Britain, Germany, the United States, etc.), administrations and business leaders have opted for a so-called "liberated" organization, considering the traditional system of controlling and commanding employees archaic; they count on employees' sense of responsibility by giving them more freedom. CERN is reviewing its diversity policy: would this not be an ideal opportunity to bring improvements in this area? That is, in any case, the view of the Staff Association, which through its proposals is betting on the trust, autonomy and maturity of the members of CERN personnel. You can find articles on this theme on our Facebook page https://www.facebook.com/StaffAssociation.Cern and on https://social.cern.ch. The English version of this article will be published in the next issue of the Echo.   1 Time slots during which staff must be present.

### Clifford V. Johnson - Asymptotia

To Fill a Mockingbird
Meanwhile, here at the Aviary (as we're calling the garden because of the ridiculously high level of bird activity there has been in the last few months) there has been some interesting news. Happy news, some would say. This is hard for me since it is all about my arch-nemesis (or one of them) the Mockingbird. Many hours of sleep have been damaged because of them (they do their spectacular vocal antics during both night and day - loudly), and there seem to be more and more of them each year. I've been known to go outside (in various stages of undress) in the wee hours of the morning and thrash long sticks at parts of trees to chase persistent offenders away. Well, we'd noticed that a particular spot in a hedge was being visited regularly some weeks back, and guessed that there might be a nest in there. Then one day last week, two juvenile mockingbirds emerged, practicing their flying! I knew immediately what they were since they have the same markings, but their feathers still have that fluffy/downy clumpiness in places, and of course they were not nearly as acrobatic as their adult counterparts. They hung out on [...] Click to continue reading this post

## July 16, 2015

### CERN Bulletin

LHC Report: intensity ramp-up – familiar demons

The first 2015 scrubbing run ended on Friday, 3 July and successfully delivered a well-scrubbed machine ready for operation with a 50 ns beam. This opened the way for the first phase of the so-called beam intensity ramp-up. The last couple of weeks have seen the number of bunches increase from 3 to 476 per beam via periods of 50, 144 and 300 bunches per beam.

The graph plots the rate of LHC beam dumps due to single-event effects (SEE) versus beam luminosity. It is an indication of the importance of tackling this issue.

To verify the full and proper functioning of all systems, operators need at least 3 fills and 20 hours of stable beams without significant problems. After 20 hours, an extensive checklist is signed off by the system experts before the next step up in the number of bunches. The systems involved include magnet protection, radio-frequency, beam instrumentation, collimation, operations, feedback, beam dump and injection.

Increasing the total beam intensity poses a number of operational challenges, and the higher beam currents demand attention to a number of important details, ranging from well-optimised injection to sufficiently good control of key beam parameters such as the tune, chromaticity and the closed orbit.

The increase in total beam currents has flushed out two major but familiar issues. The first is the unidentified falling objects (UFOs) – micrometre-sized dust particles falling through the beam and generating localised beam loss. These have been observed as expected and, with the increase in intensity, have caused beam dumps and even magnet quenches at high energy. A particular worry was the unidentified lying object (ULO) near Point 8, which has been associated with particularly big UFOs. Things have been quiet in this region recently, but earlier this week two ramps were lost due to UFOs in the vicinity of the ULO. In Run 1, the number of UFOs observed decreased with time. It is hoped that we will experience the same sort of conditioning in Run 2.

A lot of experience was gained in Run 1 on the effect of radiation on tunnel electronics. This issue was addressed through a well-coordinated campaign during LS1, which deployed a number of wide-ranging measures to alleviate the effects, including the relocation and improved shielding of electronics. However, a specific weakness in components of the quench protection system has emerged following the increase in beam intensity. Although it does not compromise machine safety, the weakness has led to some premature beam dumps. Test mitigation measures have been deployed in the tunnel and hopefully the problem will not compromise progress too much.

Next week the LHC moves into a five-day machine development period to be followed by a two-week scrubbing period aimed at preparing the machine for 25 ns operation.

### CERN Bulletin

Presidents' Words
In the context of the sixtieth anniversary of the Staff Association, we asked former presidents to tell us about their years of presidency. We continue in this issue of Echo with the contribution of Franco Francia.

During my term as President of the Staff Association (January 1978 – June 1980), a major topic was the Review of Social and Economic Conditions (RESCO), the first major revision of the CERN Staff Rules and Regulations. The salary scale at the time, before the revision, had a parabolic shape. For an organization like CERN, which already had a third of its staff with a university-level education, this shape implied too large a growth of the total salary bill compared to the cost of investment and maintenance of the CERN facilities. We thus flattened the curve by stopping automatic advancement in grades 12 to 14 for three years. This measure, although restrictive for senior staff, made the CERN budget more acceptable in the long term to the Member States. It was a measure that went against the interests of one category of staff but which, in our analysis, protected the existence of the Organization, and hence was in the interest of all. The CERN Management and the vast majority of physicists agreed. However, the engineers and some physicists, believing that we were fanatic egalitarians, revolted and created a parallel association whose purpose was to defend the higher grades. The parallel association was finally dissolved when the initiators of the revolt were elected to the Staff Association Committee. Some of them, once they had taken a position of responsibility in the Staff Association, used it to satisfy their frustrations, sometimes endangering the Organization. In hindsight, I can say that we acted correctly, but that our action was not sufficiently explained and discussed with all staff categories.
For such painful adventures not to happen again, it is essential that all staff be aware that the decision-making mechanisms for financing CERN have been carefully designed, yet are fragile and highly dependent on the political will of each Member State. The principles that I have always followed, during as well as after my term, are as follows:

- Explain the role of CERN to its staff and to the neighbouring populations.
- Ensure attractive salaries and professionally interesting working conditions. This promotes motivation at work and develops attachment to the Organization.

To ensure a long life for CERN, the following conditions should be met:

- CERN, its employees and the Member States must defend Europe and continue to defend it. The current dismantling of Europe would be fatal to CERN (trivial maybe, but some do not realize this fact).
- If a new accelerator were to be constructed after the LHC, it should be in this region. Its design should be such that physicists from all countries may take part (the "grid" will be very helpful).
- The current policy of CERN is the right one. Its evolution towards a worldwide organization is desirable.
- The balance between basic research activities and enough flexibility to facilitate knowledge transfer to industry is necessary.
- The Staff Association should foster the collective spirit of the current young employees through a variety of initiatives. Currently the individualist and opportunistic spirit which has spread worldwide deteriorates relations between co-workers everywhere.
- We must fight racism and intolerance by all means.
- Internally, every time a major problem occurs, the Staff Association should, whenever possible, act in concert with Management in order not to weaken the joint action.

Finally, I am convinced that the Staff Association will find the right answers to all questions, provided it puts the general interest of the employees and the Organization before individual interests and opportunism.
The following issues of Gravitons provide analyses and suggestions which are still useful today:

- N° 9 – Évolution de la recherche (Ugo Amaldi)
- N° 10 – État de la recherche sur la gravitation (M. Jacob); Arrangement CERN – Commission européenne (O. Barbalat)
- N° 14 – Greatly reduce the radiation dose (G. Charpak)
- N° 15 – Interview with J. Lefrançois (Chairman of the Scientific Policy Committee)
- N° 16 – The scientific policy at CERN (A. de Rujula and L. Foà)
- N° 20 – CERN's history from 1954 to 1998 (M. Gigliarelli and F. Francia)
- N° 21 – French-Geneva Campus in Archamp (Y. Lemoigne)
- N° 22 – L'Opinion des jeunes (M. Goossens); SESAME Centre (H. Schopper)
- N° 23 – Editorial (F. Francia); Data GRID (F. Gagliardi)
- N° 25 – Editorial and several interesting articles
- N° 26 – CERN's legal evolution (J.M. Dufour)
- N° 27 – Two interesting interviews (A. Rubio and C. Benvenuti)

### CERN Bulletin

Collection for Philippines
Following the devastating Typhoon Haiyan that hit the Philippines in autumn 2013, a collection of funds to help the victims was organised at CERN. An amount of 16 950 CHF was contributed and forwarded to Caritas Switzerland. On Tuesday 14 July, we received a message from Caritas thanking all the contributors for their generosity and sharing the project's progress and the results obtained so far. You can find the report on our website: http://staff-association.web.cern.ch/sites/staff-association.web.cern.ch/files/Docs/Rapport_Philippines.pdf

### Symmetrybreaking - Fermilab/SLAC

Something goes bump in the data

The CMS and ATLAS experiments at the LHC see something mysterious, but it’s too soon to pop the Champagne.

An unexpected bump in data gathered during the first run of the Large Hadron Collider is stirring the curiosity of scientists on the two general-purpose LHC experiments, ATLAS and CMS.

CMS scientists first published this bump in 2014. But because it was compatible with being a statistical fluke, they made no claim that they had observed a new particle. Recently ATLAS confirmed that they also see a bump in roughly the same place, and this time it’s bigger and stronger.

“Both ATLAS and CMS are developing new search techniques that are greatly improving our ability to search for new particles,” says Ayana Arce, an assistant professor of physics at Duke University. “We can look for new physics in ways we couldn’t before.”

Unlike the pronounced peak that recently led to the discovery of pentaquarks, these two studies are in their nascent stages. And scientists aren’t quite sure what they’re seeing yet… or if they’re seeing anything at all.

If this bump matures into a sharp peak during the second run of the LHC, it could indicate the existence of a new heavy particle with 2000 times the mass of a proton. The discovery of a new and unpredicted particle would revolutionize our understanding of the laws of nature. But first, scientists have to rule out false leads.

“It’s like trying to pick up a radio station,” says theoretical physicist Bogdan Dobrescu of Fermi National Accelerator Laboratory who co-authored a paper on the bump in CMS and ATLAS data. “As you tune the dial, you think you’re beginning to hear voices through the static, but you can’t understand what they’re saying, so you keep tuning until you hear a clear voice.”

On the heels of the Higgs boson discovery in the first run of the LHC, scientists must navigate a tricky environment where people are hungry for new results while relying on data that is slow to gather and laborious to interpret.

The data physicists are analyzing are particle decay patterns around 2 TeV, or 2000 GeV.

“We can’t see short-lived particles directly, but we can reconstruct their mass based on what they transform into during their decay,” says Jim Olsen, a professor of physics at Princeton University. “For instance, we found the Higgs boson because we saw more pairs of W bosons, Z bosons and photons at 125 GeV than our background models predicted.”
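The reconstruction Olsen describes boils down to summing the four-momenta of the decay products and computing the invariant mass, m = √(E² − |p|²). A minimal sketch in Python; the photon four-momenta are made-up illustrative values (not real event data), chosen so the pair reconstructs to the Higgs mass:

```python
import math

def invariant_mass(products):
    """Invariant mass (GeV) of a set of decay products, each given as a
    four-momentum tuple (E, px, py, pz) in GeV: m^2 = E^2 - |p|^2."""
    E, px, py, pz = (sum(p[i] for p in products) for i in range(4))
    m2 = E**2 - px**2 - py**2 - pz**2
    return math.sqrt(max(m2, 0.0))  # clamp tiny negative rounding errors

# Two massless photons (so E = |p|) from a hypothetical diphoton decay:
photon1 = (62.5, 60.0, 17.5, 0.0)
photon2 = (62.5, -60.0, -17.5, 0.0)
print(invariant_mass([photon1, photon2]))  # -> 125.0 (GeV)
```

In a real analysis, a new particle shows up as an excess of events whose reconstructed invariant mass clusters at one value, on top of a smooth background.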

Considering that the heaviest particle of the Standard Model, the top quark, has a mass of 173 GeV, if this bump is real and not a fluctuation, it indicates a significantly heavier particle than those covered in the Standard Model.

While the theories being batted around at this early stage disagree on the particulars, most agree that, so far, this data bump best fits the properties of an extended Standard Model gauge boson.

The gauge bosons are the force-carrying particles that enable matter particles to interact with each other. The heaviest bosons are the W and Z bosons, which carry the weak force. An extended Standard Model predicts comparable particles at higher energies, heavier versions known as W prime and Z prime (or W’ and Z’). Several theorists suggest the bump at 2 TeV could be a type of W prime.

But LHC physicists aren’t practicing their Swedish for the Nobel ceremony yet. Unexpected bumps are common and almost always fizzle out with more data. For instance, in 2003 an international collaboration working on the Belle experiment at the KEK accelerator laboratory in Japan saw an apparent contradiction to the Standard Model’s predictions in the decay patterns of particles containing bottom quarks.

“It was really striking,” says Olsen. “The probability that the signal was due to sheer statistical fluctuation was only about one in 10,000.”

Seven years later, after inundating their analysis with heaps of fresh data, the original contradiction from the Belle experiment withered and died, and from its ashes arose a stronger result that perfectly matched the predictions of the Standard Model.
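For context, the "one in 10,000" probability in the Belle episode corresponds to roughly 3.7 standard deviations under the usual one-sided Gaussian convention, well short of the 5-sigma discovery threshold (p ≈ 2.9 × 10⁻⁷). The conversion can be sketched with Python's standard library (function names are my own):

```python
from statistics import NormalDist

def p_to_sigma(p):
    """Significance (standard deviations) of a one-sided p-value."""
    return NormalDist().inv_cdf(1.0 - p)

def sigma_to_p(z):
    """One-sided p-value of a z-sigma excess."""
    return 1.0 - NormalDist().cdf(z)

print(round(p_to_sigma(1e-4), 2))  # -> 3.72: Belle's "one in 10,000"
print(sigma_to_p(5))               # -> ~2.87e-07: the discovery threshold
```

This is why a seemingly tiny probability of fluctuation is not, by itself, enough to claim a discovery: many analyses each get many chances to fluctuate.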

But scientists also haven’t written off this new bump as a statistical fluctuation. In fact, the closer they look, the more exciting it becomes.

With most anomalies in the data, one experiment will see it while the other one won’t—a clear indication of a statistical fluctuation. But in this case, both CMS and ATLAS independently reported the same observation. And not only do both experiments see it, they see it at roughly the same energy across several different types of analyses.

“This is kind of like what we saw with the Higgs,” says JoAnne Hewett, a theoretical physicist at SLAC National Accelerator Laboratory who co-authored a paper theorizing the bump could be a type of W prime particle. “The Higgs just started showing up as 2- to 3-sigma bumps in a few different channels in the two different experiments. But there were also false leads with the Higgs.”

Scientists are seeing more Z boson and W boson pairs popping up at 2 TeV than the Standard Model predicts. But besides this curious excess of events, they haven’t identified any sort of clear pattern.

“Theorists come up with the models that predict the patterns we should see if there is some type of new physics influencing our experimental data,” Olsen says. “So if this bump is new physics, then our models should predict what else we should see.”

Even though this bump is far too small to signify a discovery and presents no predictable pattern, its presence across multiple different analyses from both CMS and ATLAS is intriguing and suspicious. Scientists will have to patiently wait for more data before they can flesh out what it actually is.

“We will soon have a lot more data from the second run of the LHC, and both experiments will be able to look more closely at this anomaly,” Arce says. “But I think it would almost be too lucky if we discovered a new particle this soon into the second run of the LHC.”

The latest results from these two studies will be presented at the European Physical Society conference in Vienna at the end of the month.

Like what you see? Sign up for a free subscription to symmetry!

### Georg von Hippel - Life on the lattice

LATTICE 2015, Day Two
Hello again from Lattice 2015 in Kobe. Today's first plenary session began with a review talk on hadronic structure calculations on the lattice given by James Zanotti. James did an excellent job summarizing the manifold activities in this core area of lattice QCD, which is also of crucial phenomenological importance given situations such as the proton radius puzzle. It is now generally agreed that excited-state effects are one of the more important issues facing hadron structure calculations, especially in the nucleon sector, and that these (possibly together with finite-volume effects) are likely responsible for the observed discrepancies between theory and experiment for quantities such as the axial charge of the nucleon. Many groups are studying the charges and form factors of the nucleon, and some have moved on to more complicated quantities, such as transverse momentum distributions. Newer ideas in the field include the use of the Feynman-Hellmann theorem to access quantities that are difficult to access through the traditional three-point-over-two-point ratio method, such as form factors at very high momentum transfer, and quantities with disconnected diagrams (such as nucleon strangeness form factors).

Next was a review of progress in light flavour physics by Andreas Jüttner, who likewise gave an excellent overview of this also phenomenologically very important core field. Besides the "standard" quantities, such as the leptonic pion and kaon decay constants and the semileptonic K-to-pi form factors, more difficult light-flavour quantities are now being calculated, including the bag parameter BK and other quantities related to both Standard Model and BSM neutral kaon mixing, which require the incorporation of long-distance effects, including those from charm quarks. Given the emergence of lattice ensembles at the physical pion mass, the analysis strategies of groups are beginning to change, with the importance of global ChPT fits receding. Nevertheless, the lattice remains important in determining the low-energy constants of Chiral Perturbation Theory. Some groups are also using newer theoretical developments to study quantities once believed to be outside the purview of lattice QCD, such as final-state photon corrections to meson decays, or the timelike pion form factor.

After the coffee break, the Ken Wilson Award for Excellence in Lattice Field Theory was announced. The award goes to Stefan Meinel for his substantial and timely contributions to our understanding of the physics of the bottom quark using lattice QCD. In his acceptance talk, Stefan reviewed his recent work on determining |Vub|/|Vcb| from decays of Λb baryons measured by the LHCb collaboration. There has long been a discrepancy between the inclusive and exclusive (from B -> πlν) determinations of Vub, which might conceivably be due to a new (BSM) right-handed coupling. Since LHCb measures the decay widths for Λb to both pμν and Λcμν, combining these with lattice determinations of the corresponding Λb form factors allows for a precise determination of |Vub|/|Vcb|. The results agree well with the exclusive determination from B -> πlν, and fully agree with CKM unitarity. There are, however, still other channels (such as b -> sμ+μ- and b -> cτν) in which there is still potential for new physics, and LHCb measurements are pending.

This was followed by a talk by Maxwell T. Hansen (now a postdoc at Mainz) on three-body observables from lattice QCD. The well-known Lüscher method relates two-body scattering amplitudes to the two-body energy levels in a finite volume. The basic steps in the derivation are to express the full momentum-space propagator in terms of a skeleton expansion involving the two-particle irreducible Bethe-Salpeter kernel, to express the difference between the two-particle reducible loops in finite and infinite volume in terms of two-particle cuts, and to reorganize the skeleton expansion by the number of cuts to reveal that the poles of the propagator (i.e. the energy levels) in finite volume are related to the scattering matrix. For three-particle systems, the skeleton expansion becomes more complicated, since there can now be situations involving two-particle interactions and a spectator particle, and intermediate lines can go on-shell between different two-particle interactions. Treating a number of other technical issues such as cusps, Max and collaborators have been able to derive a Lüscher-like formula for three-body scattering in the case of scalar particles with a Z2 symmetry forbidding 2-to-3 couplings. Various generalizations remain to be explored.

The day's plenary programme ended with a talk on the Standard Model prediction for direct CP violation in K-> ππ decays by Christopher Kelly. This has been an enormous effort by the RBC/UKQCD collaboration, who have shown that the ΔI=1/2 rule comes from low-energy QCD by way of strong cancellations between the dominant contributions, and have determined ε' from the lattice for the first time. This required the generation of ensembles with an unusual set of boundary conditions (G-parity boundary conditions on the quarks, requiring complex conjugation boundary conditions on the gauge fields) in space to enforce a moving pion ground state, as well as the precise evaluation of difficult disconnected diagrams using low modes and stochastic estimators, and treatment of finite-volume effects in the Lellouch-Lüscher formalism. Putting all of this together with the non-perturbative renormalization (in the RI-sMOM scheme) of ten operators in the electroweak Hamiltonian gives a result which currently still has three times the experimental error, but is systematically improvable, with better-than-experimental precision expected in maybe five years.

In the afternoon there were parallel sessions again, and in the evening, the poster session took place. Food ran out early, but it was pleasant to see free-form smearing being improved upon and used to very good effect by Randy Lewis, Richard Woloshyn and students.

## July 15, 2015

### ZapperZ - Physics and Physicists

Pentaquark Discovery - Here We Go Again!
I read with a combination of excitement and skepticism the report that LHCb may have seen not one, but two pentaquarks. The skepticism is justified because previous claims of the discovery of such particles have turned out to be false. Still, this one comes with a 9-sigma statistical significance.

The LHCb team is confident that the particles are indeed pentaquarks that comprise two up quarks, one down quark, one charm quark and one anticharm quark. "Benefitting from the large data set provided by the LHC, and the excellent precision of our detector, we have examined all possibilities for these signals, and conclude that they can only be explained by pentaquark states," explains LHCb physicist Tomasz Skwarnicki of Syracuse University in the US.

As always, and as with any other new and important claim, time will tell as more analysis and experiments are done. The public and the media, especially, need to understand that this is still a work in progress, as with any scientific endeavor.

Zz.

### Symmetrybreaking - Fermilab/SLAC

Miraculous WIMPs

What are WIMPs, and what makes them such popular dark matter candidates?

Invisible dark matter accounts for 85 percent of all matter in the universe, affecting the motion of galaxies, bending the path of light and influencing the structure of the entire cosmos. Yet we don’t know much for certain about its nature.

Most dark matter experiments are searching for a class of particles called WIMPs, or weakly interacting massive particles.

“Weakly interacting” means that WIMPs barely ever “talk” to regular matter. They don’t often bump into other matter and also don’t emit light—properties that could explain why researchers haven’t been able to detect them yet.

Created in the early universe, they would be heavy (“massive”) and slow-moving enough to gravitationally clump together and form structures observed in today’s universe.

Scientists predict that dark matter is made of particles. But that assumption is based on what they know about the nature of regular matter, which makes up only about 4 percent of the universe.

WIMPs gained popularity in the late 1970s and early 1980s, when scientists realized that particles that naturally pop out in models of Supersymmetry could potentially explain the seemingly unrelated cosmic mystery of dark matter.

Supersymmetry, developed to fill gaps in our understanding of known particles and forces, postulates that each fundamental particle has a yet-to-be-discovered superpartner. It turns out that the lightest one of the bunch has properties that make it a top contender for dark matter.

“The lightest supersymmetric WIMP is stable and is not allowed to decay into other particles,” says theoretical physicist Tim Tait of the University of California, Irvine. “Once created in the big bang, many of these WIMPs would therefore still be around today and could have gone unnoticed because they rarely produce a detectable signal.”

When researchers use the properties of the lightest supersymmetric particle to calculate how many of them would still be around today, they end up with a number that matches closely the amount of dark matter experimentally observed—a link referred to as the “WIMP miracle.” Many researchers believe it could be more than coincidence.
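The numerical coincidence behind the "WIMP miracle" is often quoted as the textbook estimate Ω h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩: a weak-scale annihilation cross-section lands close to the observed dark matter density Ω h² ≈ 0.12. A rough illustrative sketch of that back-of-envelope relation, not a precise relic-abundance calculation:

```python
def relic_abundance(sigma_v):
    """Rough WIMP relic abundance Omega*h^2 for a thermally averaged
    annihilation cross-section sigma_v in cm^3/s (textbook estimate)."""
    return 3e-27 / sigma_v

weak_scale_xsec = 3e-26   # cm^3/s: typical weak-interaction strength
observed = 0.12           # Omega*h^2 inferred from cosmological data

print(relic_abundance(weak_scale_xsec))  # ~0.1, close to the observed 0.12
```

A full calculation solves the Boltzmann equation for freeze-out, but this one-liner captures why a generic weak-scale particle gives roughly the right abundance.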

“But WIMPs are also popular because we know how to look for them,” says dark matter hunter Thomas Shutt of Stanford University and SLAC National Accelerator Laboratory. “After years of developments, we finally know how to build detectors that have a chance of catching a glimpse of them.”

Shutt is co-founder of the LUX experiment and one of the key figures in the development of the next-generation LUX-ZEPLIN experiment. He is one member of the group of scientists trying to detect WIMPs as they traverse large, underground detectors.

Other scientists hope to create them in powerful particle collisions at CERN’s Large Hadron Collider. “Most supersymmetric theories estimate the mass of the lightest WIMP to be somewhere above 100 gigaelectronvolts, which is well within LHC’s energy regime,” Tait says. “I myself and others are very excited about the recent LHC restart. There is a lot of hope to create dark matter in the lab.”

A third way of searching for WIMPs is to look for revealing signals reaching Earth from space. Although individual WIMPs are stable, when two of them collide they annihilate into other particles. This process should leave behind detectable amounts of radiation. Researchers therefore point their instruments at astronomical objects rich in dark matter, such as dwarf satellite galaxies orbiting our Milky Way or the center of the Milky Way itself.

“Dark matter interacts with regular matter through gravitation, impacting structure formation in the universe,” says Risa Wechsler, a researcher at Stanford and SLAC. “If dark matter is made of WIMPs, our predictions of the distribution of dark matter based on this assumption must also match our observations.”

Wechsler and others calculate, for example, how many dwarf galaxies our Milky Way should have and participate in research efforts under way to determine if everything predicted can also be found experimentally.

So how would researchers know for sure that dark matter is made of WIMPs? “We would need to see conclusive evidence for WIMPs in more than one experiment, ideally using all three ways of detection,” Wechsler says.

In the light of today’s mature detection methods, dark matter hunters should be able to find WIMPs in the next five to 10 years, Shutt, Tait and Wechsler say. Time will tell if scientists have the right idea about the nature of dark matter.

Like what you see? Sign up for a free subscription to symmetry!

## July 14, 2015

### Tommaso Dorigo - Scientificblogging

Marek Karliner: Not A Pentaquark, But A Molecule - As He And Rosner Predicted
The reported observation of a resonant state of a J/psi meson and a proton in the decay of the Lambda_b baryon by the LHCb collaboration, broadcast by CERN today, is a very intriguing new piece of the puzzle of hadron spectroscopy - a topic on which many brilliant minds have spent their life in the course of the last half century.

### Sean Carroll - Preposterous Universe

Infinite Monkey Cage

The Infinite Monkey Cage is a British science/entertainment show put on by the dynamic duo of physicist Brian Cox and comedian Robin Ince. It exists as a radio program, a podcast, and an occasional live show. There are laughs, a bit of education, and some guests for the hosts to spar with. The popular-science ecosystem is a lot different in the UK than it is here in the US; scientists and science communicators can generally have a much higher profile, and a show like this can really take off.

So it was a great honor for me to appear as one of the guests when the show breezed through LA back in March. It was a terrific event, as you might guess from the other guests: comedian Joe Rogan, TV writer David X. Cohen, and one Eric Idle, who used to play in the Rutles. And now selected bits of the program can be listened to at home, courtesy of this handy podcast link, or directly on iTunes.

Be sure to check out the other stops on the IMC tour of the US, which included visits to NYC, Chicago, and San Francisco, featuring many friends-of-the-blog along the way.

These guys, of course, are heavy hitters, so you never know who is going to show up at one of these things. Their relationship with Eric Idle goes back quite a ways, and he actually composed and performed a theme song for the show (below). Naturally, since he was on stage in LA, they asked him to do a live version, which was a big hit. And there in the band, performing on ukulele for just that one song, was Jeff Lynne, of the Electric Light Orchestra. Maybe a bit under-utilized in this context, but why not get the best when you can?

### Symmetrybreaking - Fermilab/SLAC

LHC physicists discover five-quark particle

Pentaquarks are no longer just a theory.

Protons and neutrons, which together with electrons form atoms, are made up of even smaller particles called quarks. Protons and neutrons contain three quarks each.

Scientists at the LHCb experiment at the Large Hadron Collider just discovered two particles made up of not three, not four, but five quarks—the first observed pentaquarks. Scientists have been searching for this class of particles for about 50 years.

“The pentaquark is not just any new particle,” says LHCb spokesperson Guy Wilkinson of University of Oxford. “Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we’re all made, is constituted.”

Quarks were first theorized in 1964 by physicists Murray Gell-Mann and George Zweig. The two independently proposed that several of the particles thought to be fundamental—unable to be broken down into smaller parts—were actually made up of smaller particles called quarks. Quarks were eventually found to come in six types, called up, down, charm, strange, top and bottom.

Gell-Mann predicted that some known particles, such as the pion, were made up of two quarks, and others, such as the proton and neutron, were made up of three quarks. But he also postulated that particles made up of four or five quarks could exist.

Last year, researchers on the LHCb experiment at CERN confirmed the existence of a particle containing four quarks. But until recently, five-quark particles remained elusive.

“Ever since Gell-Mann published his theory, researchers have actively looked for these pentaquarks, but all the previous searches turned out to be false,” says Syracuse University physicist Sheldon Stone. “Since then, the LHCb collaboration has been particularly deliberate in this study.”

In the end, however, researchers on the LHCb experiment found the pentaquark outside of a pentaquark study; they serendipitously stumbled upon it while investigating something completely different.

“We asked a graduate student [Nathan Jurik of Syracuse University] to examine what we thought was an uninteresting and minor source of background events, just in case it happened to be a nasty source of experimental noise,” Stone says. “He did it begrudgingly but came back with a big smile on his face because there was a huge and unexpected signal. We told him to forget about what he was working on and focus on this instead.”

What Jurik had found was a surprising feature of the decay of the Lambda-b particle—a baryon consisting of an up quark, down quark and bottom quark.

“There was a sharp peak in the data that we couldn’t explain,” Stone says. “We kept hoping it would go away, but it never did. No matter what we did we couldn’t get an explanation for why it was there, so we took a closer look and discovered it was this new particle.”

The Lambda-b particle is unstable and quickly decays into three other particles: a J-psi particle (a charm and anti-charm quark bundled together), a proton (two up quarks and a down quark) and a kaon (a strange quark paired with an anti-up quark).

Researchers discovered that sometimes, before the Lambda-b fragments into these byproducts, it transforms into a new particle consisting of five quarks: two up quarks, a down quark, a charm quark and an anti-charm quark.

This newly discovered particle comes in two varieties—one with a spin of 3/2 and the other with a spin of 5/2. Scientists on the LHCb experiment named these two new pentaquarks Pc(4450)+ and Pc(4380)+.

Jurik says this will make for a more interesting PhD thesis.

“At some moments towards the start of this analysis I was feeling disheartened because it was a complicated analysis and it wasn’t clear if it was going to pay off,” he says. “I was not expecting this. It’s kind of amazing—not many physicists can say they helped discover a new state of matter at my age.”

This discovery of a five-quark hadron completes Gell-Mann’s original hypothesis and will enable researchers to study how quarks stick together and form the scaffolding for solid matter.

It will also allow physicists to tune the equations that predict the behavior of all known forms of matter in the universe. This is particularly important for the investigation of new particles and forces.

Thus far LHCb researchers have identified the spin, mass and lifetime of this pentaquark pair, but what they haven’t figured out is how the five quarks bind together.

“The quarks could be tightly bound or loosely bound,” says Syracuse University physicist Tomasz Skwarnicki. “If they are tightly bound, all the particles pull on each other, but if it is loosely bound, it would look more like a meson-baryon molecule, meaning a clump of three quarks would loosely bind to a clump of two quarks.”

For Stone, this discovery shows that fundamental physics research still holds many surprises, some of which show up in unexpected places.

“We didn’t go out looking for a pentaquark,” Stone says, “It’s a particle that found us.”

Like what you see? Sign up for a free subscription to symmetry!