Particle Physics Planet


April 29, 2017

Peter Coles - In the Dark

Championship Update

Well, the plot thickens.

The penultimate round of matches this weekend has seen another twist in the story of this  year’s Championship.

Last night Newcastle United played Cardiff City here in Cardiff, beating the home side 2-0. I didn’t go to the match, but there seem to have been plenty of Newcastle fans in town last night.

That result meant that Newcastle United were still 2nd, but only one point behind leaders Brighton and Hove Albion.

A win for them this afternoon at home against lowly Bristol City would have given them the Championship. Surprisingly, however, they lost 1-0.

The title race, somewhat unexpectedly, therefore goes to the last round of matches next Sunday. If Brighton win, they are Champions. If they don't, and Newcastle win, then Newcastle United are champions; the same is true if Brighton lose and Newcastle draw (the latter courtesy of goal difference). If Newcastle lose, or if both sides draw, then Brighton are champions.

Given the way this season has gone it seems rather fitting that it will be decided in the final round of matches. May the best team finish top (as long as it’s Newcastle)!

And in other news, to crown an excellent weekend for Newcastle supporters, Sunderland got relegated from the Premiership.


by telescoper at April 29, 2017 06:48 PM

April 28, 2017

Symmetrybreaking - Fermilab/SLAC

#AskSymmetry Twitter chat with Tulika Bose

See Boston University physicist Tulika Bose's answers to readers’ questions about research at the Large Hadron Collider.

View the story "#AskSymmetry Twitter chat with Tulika Bose 4/28/17" on Storify: http://storify.com/Symmetry/asksymmetry-twitter-chat-with-tulika-bose

April 28, 2017 07:59 PM

Christian P. Robert - xi'an's og

Camembert day?!

Google France is celebrating the 256th anniversary of the birth of Marie Harel, who, according to local legend, invented Camembert cheese. Enjoy (if you can!).


Filed under: Kids, pictures, Wines Tagged: camembert, Google, Marie Harel, Normandie fort et vert, Normandy

by xi'an at April 28, 2017 05:18 PM

Peter Coles - In the Dark

Precision Cosmology!

Well, look what the postman brought me today!

Hot off the press, here is a textbook by my friend and erstwhile collaborator Bernard Jones. As you will see, it even has an endorsement by me on the back cover. I think it's a very fine book indeed and it will be immensely useful for cosmologists young and old alike!


by telescoper at April 28, 2017 03:26 PM

Emily Lakdawalla - The Planetary Society Blog

Trusty Cassini survives first dive between Saturn and its rings
Cheers erupted in the Von Karman auditorium at the Jet Propulsion Laboratory early Thursday morning as a squiggly green line on a graph developed a crisp, tall peak, signifying that the Cassini spacecraft was calling home after surviving its first plunge between Saturn and its ring system.

April 28, 2017 02:29 PM

Christian P. Robert - xi'an's og

Bayes Comp 2018

After a rather extended wait, I learned today of the dates of the next MCMski conference, now called Bayes Comp, in Barcelona, Spain, March 26-29, next year (2018). With a cool webpage! (While the ski termination has been removed from the conference name, there are ski resorts located not too far from Barcelona, in the Pyrenees.) Just unfortunate that it happens at the same dates as the ENAR 2018 meeting. (And with the Gregynog Statistical Conference!)


Filed under: Mountains, pictures, Statistics, Travel, University life, Wines Tagged: Barcelona, BayesComp, Bayesian Computing Section, ENAR 2018, Gregynog Statistical Conference, MCMSki, Monte Carlo Statistical Methods, ski resorts, Spain, University of Warwick

by xi'an at April 28, 2017 12:18 PM

Peter Coles - In the Dark

A Joe Morello Drum Master Class

After a busy morning, I reckon it’s time for a pause and a quick blog post. I stumbled across this clip of a great drum solo a while ago and immediately bookmarked it for future posting. As happens most times I do that I then forgot about it, only finding it again right now so I thought I’d post it before I forget again.

This is the great Joe Morello at the very peak of his prowess in 1964, with the Dave Brubeck Quartet with whom he recorded over 60 albums. That band pioneered the use of unusual time signatures in jazz, such as 3/4, 7/4, 13/4, 9/8 and most famously in their big hit Take Five which is in 5/4 time throughout; they recorded a number of other tracks in which the time signature shifts backwards and forwards between, e.g., 7/4 and the standard 4/4.

A few points struck me watching this clip. The first is that it’s a great example of the use of the ‘trad’ grip which is with the left hand under the stick, passing between the thumb and index finger and between the second and third fingers, thusly:

The right stick is usually held with an overhand grip. Most jazz drummers (whether they play ‘trad’ jazz or not) use this grip. Most rock drummers on the other hand use a ‘balanced’ grip in which both sticks are held with an overhand grip. You might think holding the left-hand and right-hand sticks the same way is the obvious thing to do, but do bear in mind that people aren’t left-right symmetric and neither are drum kits so it’s really not obvious at all!

The trad grip looks a bit unnatural when you first see it, but it does have an advantage for many of the patterns often used  in jazz. Once you’ve mastered the skill, a slight rotation of the wrist and subtle use of the fingers makes some difficult techniques (e.g. rolls) much easier to do rapidly with this grip than with the balanced grip. I’m not claiming to be a drummer when I say all this, but my Dad was and he did teach me the rudiments. In fact, he thought that drummers who used the balanced grip weren’t proper drummers at all!

(I’ll no doubt get a bunch of angry comments from rock drummers now, but what the hell…)

Anyway you can see Joe Morello using the trad grip to great effect in this clip, in which he displays astonishing speed, accuracy and control. The way he builds that single-stroke roll from about 2:28 is absolutely astonishing. In fact he’s so much in command throughout his solo, that he even has time to adjust his spectacles and move his bass drum a bit closer! Jazz musicians used to joke that atomic clocks could be set to Joe Morello, as he kept time so accurately, but as you can see in this clip he did so much more than beat out a rhythm. It’s only about 3 minutes long but this solo really is a master class.

Joe Morello was never a ‘showy’ musician. He never adopted the popular image of the drummer as the madman who sat at the back of the band that was cultivated by the likes of Gene Krupa in the jazz world and later spread into rock’n’roll. Bespectacled and wearing a suit and tie he looks a bit like a bank clerk, but boy could he play! The expression on Dave Brubeck’s face tells you that he knew he was very lucky to have Joe Morello in his band.

 

 


by telescoper at April 28, 2017 12:02 PM

Emily Lakdawalla - The Planetary Society Blog

Learn the rocket equation, part 1
Have you ever wanted to learn the fundamental physics behind one of the most basic concepts of rocket science? In part one of our two-part series, we explore the foundations of the famous rocket equation.
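
As a small taster of where that series is heading, the Tsiolkovsky rocket equation itself fits in a few lines. The sketch below is my own illustration with made-up numbers, not material from the article.

```python
import math

def delta_v(exhaust_velocity_ms, wet_mass_kg, dry_mass_kg):
    """Tsiolkovsky rocket equation: ideal velocity change of a single stage."""
    return exhaust_velocity_ms * math.log(wet_mass_kg / dry_mass_kg)

# Example values only: a stage with 4.4 km/s exhaust velocity that burns
# from 500 tonnes down to 120 tonnes of total mass.
print(delta_v(4400.0, 500e3, 120e3))  # roughly 6.3 km/s
```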

April 28, 2017 11:00 AM

April 27, 2017

John Baez - Azimuth

Biology as Information Dynamics (Part 2)

Here’s a video of the talk I gave at the Stanford Complexity Group:

You can see slides here:

Biology as information dynamics.

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.
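
For readers who want to see the central quantity concretely, here is a small numerical sketch (my own illustration, not part of the talk): the relative information, or Kullback–Leibler divergence, of one probability distribution with respect to another, for two toy distributions over three types of replicator.

```python
import numpy as np

def kl_divergence(p, q):
    """Relative information of p with respect to q (Kullback-Leibler divergence)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Illustrative numbers only: current population fractions p and some
# reference distribution q.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # always >= 0, and 0 exactly when p equals q
```

Roughly speaking, a quantity of this kind acts as a Lyapunov function for the replicator equation, which is what underlies the 'evolution as learning' reading in the talk.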

I’d given a version of this talk earlier this year at a workshop on Quantifying biological complexity, but I’m glad this second try got videotaped and not the first, because I was a lot happier about my talk this time. And as you’ll see at the end, there were a lot of interesting questions.


by John Baez at April 27, 2017 10:28 PM

Peter Coles - In the Dark

WikiLeeks

I’ve had today off to work on the launch of my new project, called WikiLeeks.

I’m thrilled now to be able to publish our first findings.


by telescoper at April 27, 2017 07:36 PM

Symmetrybreaking - Fermilab/SLAC

Did you see it?

Boston University physicist Tulika Bose explains why there's more than one large, general-purpose particle detector at the Large Hadron Collider.


Physicist Tulika Bose of the CMS experiment at CERN explains how the CMS and ATLAS experiments complement one another at the Large Hadron Collider. 

Ask Symmetry - Why is there more than one detector at the Large Hadron Collider?


Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Tulika Bose's answers to readers' questions about the LHC on Twitter here.

April 27, 2017 06:30 PM

Clifford V. Johnson - Asymptotia

Almost Within Grasp!

I just noticed! The book is now in MIT Press' Fall 2017 catalog, and so you can see the cover and read the blurb they wrote about it! See the full thing here (a pdf; on page 9). Alternatively, here is the online page for it. (I can also reveal what I could not say before: Frank Wilczek kindly agreed to write a foreword for it.)

This. is. so. exciting.

I don't know about how you pre-order yet, but when I do I'll let you know.

-cvj
Click to continue reading this post

The post Almost Within Grasp! appeared first on Asymptotia.

by Clifford at April 27, 2017 06:19 PM

Tommaso Dorigo - Scientificblogging

A Visit To GSI
GSI, the Helmholtz Centre for Heavy Ion Research, is a laboratory located near the town of Darmstadt, in central Germany, just a few miles away from the Frankfurt airport. The centre was founded in 1969, and has since been a very active facility where heavy elements are studied (six rare heavy ones were in fact discovered there, including the one they named Darmstadtium!), and where a wide-ranging programme of nuclear physics research is carried out.

read more

by Tommaso Dorigo at April 27, 2017 02:18 PM

Axel Maas - Looking Inside the Standard Model

A shift in perspective - or: what makes an electron an electron?
We have recently published a new paper. It is based partially on the master's thesis of my student Larissa Egger, but also involves another scientist from a different university. In this paper, we look at a quite fundamental question: how do we distinguish the matter particles? What makes an electron an electron and a muon a muon?

In a standard treatment, this identity is just an integral part of the particle. However, results from the late 1970s and early 1980s, as well as our own research, point in a somewhat different direction. I have described the basic idea some time back. The idea back then was that what we perceive as an electron is not really just an electron: it itself consists of two particles, a Higgs and something I would call a constituent electron. Back then, we were just thinking about how to test this idea.

This took some time.

We thought this was an outrageous question, putting things that seemed almost certain into question.

Now we see: oh, this was just the beginning. And things got crazier with every step.

But, as a theoretician, if I determine the consequences of a theory, I should not stop because something sounds crazy. Almost everything we take for granted today, like quantum physics, sounded crazy in the beginning. But if you have reason to believe that a theory is right, then you have to take it seriously. And then its consequences are what they are. Of course, we may just have made an error somewhere. But that remains to be checked, preferably by independent research groups. After all, at some point it is hard to see the forest for the trees. But so far, we are convinced that we have made at most quantitative errors, but no qualitative ones. So the concept appears sound to us. And therefore I keep on writing about it here.

The older works were just the beginning. And we just followed their suggestion to take the standard model of particle physics not only seriously, but also literally.

I will start out with the leptons, i.e. electrons, muons, and tauons as well as the three neutrinos. I come back to the quarks later.

The first thing we established was that it is indeed possible to think of particles like the electron as a kind of bound state of other particles, without upsetting what we have measured in experiments. We also gave an estimate of what would be necessary to test this statement in an experiment. Though really exact numbers are, as always, complicated, we believe that the next generation of experiments which collide electrons and positrons could be able to detect differences between the conventional picture and our results. In fact, the way they are currently designed makes them ideally suited to do so. However, they will not provide a measurement before, roughly, 2035 or so. We also understand quite well why we would need these machines to see the effect. So right now, we will have to sit and wait for this. Keep your fingers crossed that they will be built, if you are interested in the answer.

Naturally, we therefore asked ourselves if there is no alternative. The unfortunate thing is that you need at least enough energy to copiously produce the Higgs to test this. The only existing machine able to do so is the LHC at CERN. However, it does so by colliding protons. So we had to discuss whether the same effect also occurs for protons. Now, a proton is much more complicated than any lepton, because it is already built from quarks and gluons. Still, what we found is the following: if we take the standard model seriously as a theory, then a proton cannot be a theoretically well-defined entity if it is only made out of three quarks. Rather, it needs to have some kind of Higgs component. And this should be felt somehow. However, for the same reason as with the leptons, only the LHC could test it. And here comes the problem. Because the proton is made up of three quarks, it already has a very complicated structure. Furthermore, even at the LHC, the effect of the additional Higgs component will likely be tiny. In fact, probably the best chance to probe it will be if this Higgs component can be linked to the production of the heaviest known quark, the top quark. The reason is that the top quark is so very sensitive to the Higgs. While the LHC indeed produces a lot of top quarks, producing a top quark linked to a Higgs is much harder. Even the strongest such effect has not yet been seen beyond doubt. And what we find will only be a (likely small) correction to it. There is still a chance, but this will need much more data. But the LHC will keep on running for a long time. So maybe it will be enough. We will see.

So, this is what we did. In fact, this will all be part of the review I am writing. So, more will be told about this.

If you are still reading, I want to give you some more of the really weird stuff, which came out.

The first is that life is actually even more complicated. Even without all of what I have written about above, there are actually two types of electrons in the standard model: one which is affected by the weak interaction, and one which is not. Other than that, they are the same. They have the same mass, and they are electromagnetically the same. The same is actually true for all leptons and quarks. The matter all around us is actually a mixture of both types. However, the subtle effects I have been talking about so far only affect those which feel the weak interaction. There is a technical reason for this (the weak interaction is a so-called gauge symmetry). However, it makes detecting everything harder, because it only works if we get the 'right' type of electron.

The second is that leptons and quarks come in three sets of four particles each, the so-called generations or families. The only difference between these copies is the mass. Other than that, there is no difference that we know of. We cannot exclude one, but we have no experiment saying otherwise with sufficient confidence. This is one of the central mysteries. It occupies, and keeps occupying, many physicists. Now, we had the following idea: if we provide internal structure to the members of a family, could it be that the different generations are just different arrangements of that internal structure? That such things are in principle possible is known already from atoms. Here, the problem is even more involved, because of the two types of each of the quarks and leptons. This was just a speculation. However, we found that this is, at least logically, possible. Unfortunately, it is as yet too complicated to provide definite quantitative predictions of how this can be tested. But, at least, it seems not to be at odds with what we know already. If this were true, it would be a major step in understanding particle physics. But we are still far, far away from this. Still, we are motivated to continue down this road.

by Axel Maas (noreply@blogger.com) at April 27, 2017 08:29 AM

April 26, 2017

Emily Lakdawalla - The Planetary Society Blog

The first Space Launch System flight will probably be delayed
NASA's new heavy lift rocket is currently scheduled to launch the Orion spacecraft on a test flight next year. But all signs are pointing to a probable delay.

April 26, 2017 11:00 AM

Peter Coles - In the Dark

The STFC ‘Breadth of Programme’ Exercise

I suddenly realized this morning that there was a bit of community service I meant to do when I got back from my vacation, namely to pass on to astronomers and particle physicists a link to the results of the latest Programmatic Review (actually the 'Breadth of Programme' Exercise) produced by the Science and Technology Facilities Council.

It's a lengthy document, running to 89 pages, but it's a must-read if you're in the UK and work in an area of science under the remit of STFC. There was considerable uncertainty about the science funding situation anyway because of BrExit, and that has increased dramatically because of the impending General Election, which will probably kick quite a few things into the long grass, quite possibly delaying the planned reorganization of the research councils. Nevertheless, this document is well worth reading as it will almost certainly inform key decisions that will have to be made whatever happens in the broader landscape. With 'flat cash' being the most optimistic scenario, increasing inflation means that some savings will have to be found, so belts will inevitably have to be tightened. Moreover, there are strong strategic arguments that some areas should grow, rather than remain static, which means that others will have to shrink to compensate.

There are 29 detailed recommendations and I can’t discuss them all here, but here are a couple of tasters:

The E-ELT is the European Extremely Large Telescope, in case you didn’t know.

Another one that caught my eye is this:

I’ve never really understood why gravitational-wave research came under ‘Particle Astrophysics’ anyway, but given their recent discovery by Advanced LIGO there is a clear case for further investment in future developments, especially because the UK community is currently rather small.

Anyway, do read the document and, should you be minded to do so, please feel free to comment on it below through the comments box.

 

 


by telescoper at April 26, 2017 10:33 AM

April 25, 2017

Symmetrybreaking - Fermilab/SLAC

Archaeology meets particle physics

Undergraduates search for hidden tombs in Turkey using cosmic-ray muons.


While the human eye is an amazing feat of evolution, it has its limitations. What we can see tells only a sliver of the whole story. Often, it is what is on the inside that counts. 

To see a broken femur, we pass X-rays through a leg and create an image on a metal film. Archaeologists can use a similar technique to look for ancient cities buried in hillsides. Instead of using X-rays, they use muons, particles that are constantly raining down on us from the upper atmosphere. 

Muons are heavy cousins of the electron and are produced when single-atom meteorites called cosmic rays collide with the Earth’s atmosphere. Hold your hand up and a few muons will pass through it every second. 

Physics undergraduates at Texas Tech University, led by Professors Nural Akchurin and Shuichi Kunori, are currently developing detectors that will act like an X-ray film and record the patterns left behind by muons as they pass through hillsides in Turkey. Archaeologists will use these detectors to map the internal structure of hills and look for promising places to dig for buried archaeological sites.

Like X-rays, muons are readily absorbed by thick, dense materials but can traverse through lighter materials. So they can be stopped by rock but move easily through the air in a buried cavern.

The detector under development at Texas Tech will measure the number of cosmic-ray muons that make it through the hill. An unexpected excess could mean that there's a hollow subterranean structure facilitating the muons' passage.

“We’re looking for a void, or a tomb, that the archaeologists can investigate to learn more about the history of the people that were buried there,” says Hunter Cymes, one of the students working on the project.

The technique of using cosmic muons to probe for subterranean structures was developed almost half a century ago. Luis Alvarez, a Nobel Laureate in Physics, first used this technique to look inside the Second Pyramid of Chephren, one of the three great pyramids of Egypt. Since then, it has been used for many different applications, including searching for hidden cavities in other pyramids and estimating the lava content of volcanoes.

According to Jason Peirce, another undergraduate student working on this project, those previous applications had resolutions of about 10 meters. “We’re trying to make that smaller, somewhere in the range of 2 to 5 meters, to find a smaller room than what’s previously been done.”

They hope to accomplish this by using an array of scintillators, a type of plastic that can be used to detect particles. “When a muon passes through it, it absorbs some of that energy and creates light,” says student Hunter Cymes. That light can then be detected and measured and the data stored for later analysis.

Unfortunately, muons with enough energy to travel through a hill and reach the detector are relatively rare, meaning that the students will need to develop robust detectors which can collect data over a long period of time. Just like it’s hard to see in dim light, it’s difficult to reconstruct the internal structure of a hill with only a handful of muons. 

Aashish Gupta, another undergraduate working on this project, is currently developing a simulation of cosmic-ray muons, the hill, and the detector prototype. The group hopes to use the simulation to guide their design process by predicting how well different designs will work and how much data they will need to take.
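
To give a flavour of what such a simulation has to capture, here is a deliberately crude sketch (my own, not the Texas Tech group's code; it assumes a simple exponential attenuation of the muon rate with rock thickness, which is a strong simplification, and every number in it is invented) comparing the expected counts along a line of sight with and without a hidden void.

```python
import numpy as np

ATTENUATION_LENGTH_M = 45.0   # assumed effective attenuation length in rock (toy number)
SURFACE_RATE_HZ = 0.02        # assumed rate of usable muons into one channel (toy number)

def surviving_rate(rock_thickness_m):
    """Expected count rate behind the given thickness of rock (toy model)."""
    return SURFACE_RATE_HZ * np.exp(-rock_thickness_m / ATTENUATION_LENGTH_M)

hill_thickness = 60.0   # metres of rock along this line of sight
void_size = 4.0         # metres of empty space (a tomb-sized cavity)

rate_solid = surviving_rate(hill_thickness)
rate_void = surviving_rate(hill_thickness - void_size)   # muons skip the void

seconds = 30 * 86400     # a 30-day exposure
excess = (rate_void - rate_solid) * seconds
print(f"expected 30-day excess behind the void: about {excess:.0f} muons")
```

The real measurement is much harder (muon attenuation is not a simple exponential, and the rate of muons energetic enough to cross a whole hill is small), but this comparison between lines of sight is essentially what the detector has to resolve.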

As Peirce describes it, they are “getting some real, hands-on experience putting this together while also keeping in mind that we need to have some more of these results from the simulation to put together the final design.”

They hope to finish building the prototype detector within the next few months and are optimistic about having a final design by next fall.

by Jameson O'Reilly at April 25, 2017 02:55 PM

Emily Lakdawalla - The Planetary Society Blog

Curiosity update, sols 1600-1674: The second Bagnold Dunes campaign
The four-stop dune science campaign offered the engineers some time to continue troubleshooting the drill without any pressure to use it for science. They scooped sand at a site called Ogunquit Beach but couldn't complete the planned sample activity because of new developments in the drill inquiry. The rover has now headed onward toward Vera Rubin Ridge.

April 25, 2017 11:00 AM

April 24, 2017

Symmetrybreaking - Fermilab/SLAC

A tiny droplet of the early universe?

Particles seen by the ALICE experiment hint at the formation of quark-gluon plasma during proton-proton collisions.

ALICE detector with its red doors open

About 13.8 billion years ago, the universe was a hot, thick soup of quarks and gluons—the fundamental components that eventually combined into protons, neutrons and other hadrons.

Scientists can produce this primitive particle soup, called the quark-gluon plasma, in collisions between heavy ions. But for the first time physicists on an experiment at the Large Hadron Collider have observed particle evidence of its creation in collisions between protons as well.

The LHC collides protons during the majority of its run time. This new result, published in Nature Physics by the ALICE collaboration, challenges long-held notions about the nature of those proton-proton collisions and about possible phenomena that were previously missed.

“Many people think that protons are too light to produce this extremely hot and dense plasma,” says Livio Bianchi, a postdoc at the University of Houston who worked on this analysis. “But these new results are making us question this assumption.”

Scientists at the LHC and at the US Department of Energy’s Brookhaven National Laboratory’s Relativistic Heavy Ion Collider, or RHIC, have previously created quark-gluon plasma in gold-gold and lead-lead collisions.

In the quark gluon plasma, mid-sized quarks—such as strange quarks—freely roam and eventually bond into bigger, composite particles (similar to the way quartz crystals grow within molten granite rocks as they slowly cool). These hadrons are ejected as the plasma fizzles out and serve as a telltale signature of their soupy origin. ALICE researchers noticed numerous proton-proton collisions emitting strange hadrons at an elevated rate.

“In proton collisions that produced many particles, we saw more hadrons containing strange quarks than predicted,” says Rene Bellwied, a professor at the University of Houston. “And interestingly, we saw an even bigger gap between the predicted number and our experimental results when we examined particles containing two or three strange quarks.”

From a theoretical perspective, a proliferation of strange hadrons is not enough to definitively confirm the existence of quark-gluon plasma. Rather, it could be the result of some other unknown processes occurring at the subatomic scale.

“This measurement is of great interest to quark-gluon-plasma researchers who wonder how a possible QGP signature can arise in proton-proton collisions,” says Urs Wiedemann, a theorist at CERN. “But it is also of great interest for high energy physicists who have never encountered such a phenomenon in proton-proton collisions.”

Earlier research at the LHC found that the spatial orientation of particles produced during some proton-proton collisions mirrored the patterns created during heavy-ion collisions, suggesting that maybe these two types of collisions have more in common than originally predicted. Scientists working on the ALICE experiment will need to explore multiple characteristics of these strange proton-proton collisions before they can confirm if they are really seeing a miniscule droplet of the early universe.

“Quark-gluon plasma is a liquid, so we also need to look at the hydrodynamic features,” Bianchi says. “The composition of the escaping particles is not enough on its own.”

This finding comes from data collected during the first run of the LHC between 2009 and 2013. More research over the next few years will help scientists determine whether the LHC can really make quark-gluon plasma in proton-proton collisions.

“We are very excited about this discovery,” says Federico Antinori, spokesperson of the ALICE collaboration. “We are again learning a lot about this extreme state of matter. Being able to isolate the quark-gluon-plasma-like phenomena in a smaller and simpler system, such as the collision between two protons, opens up an entirely new dimension for the study of the properties of the primordial state that our universe emerged from.” 

Other experiments, such as those using RHIC, will provide more information about the observable traits and experimental characteristics of quark-gluon plasmas at lower energies, enabling researchers to gain a more complete picture of the characteristics of this primordial particle soup.

“The field makes far more progress by sharing techniques and comparing results than we would be able to with one facility alone,” says James Dunlop, a researcher at RHIC. “We look forward to seeing further discoveries from our colleagues in ALICE.”

by Sarah Charley at April 24, 2017 05:10 PM

CERN Bulletin

The Staff Association (SA) at the Enlarged Directorate (ED) meeting!

On 3 April, the Vice-President and the President of the Staff Association presented the Staff Association's plan of activities for 2017 at a meeting of the Enlarged Directorate (Directors and Heads of departments and units) and shared the SA's concerns.

Five topics were addressed, starting with the implementation of the decisions taken in the framework of the 2015 five-yearly review.

Five-yearly review – follow-up (see Echo No. 257)

2016 – Main implementations

Many changes were already put in place in 2016:

  • Revision of the Staff Rules and Regulations in January 2016, for the diversity aspects, and in September 2016, for the new career structure: salary grid with the introduction of grades;
  • Revision of Administrative Circular No. 26 (Rev. 11) on the "Recognition of merit";
  • Placement of staff members in grades and provisional placement in benchmark jobs;
  • Definition of the guidelines for the 2017 MERIT exercise.

The Staff Association was closely involved in these revisions and in their implementation. The concertation process generally worked well in this context: agreements preserving the interests of both the personnel and the Organization were reached.

2017 – First year of the MERIT exercise (see Echo No. 259)

The Staff Association emphasized the following points:

Correction of placement in a benchmark job (see Echo No. 261)

By the end of February 2017, many requests for corrections had already been submitted to the HR Department. These requests came:

  • from staff members (144): mostly requests for a change of benchmark job to a benchmark job in a higher grade range (e.g. from technician in 3-4-5 to technical engineer in 4-5-6) and, to a lesser extent, requests for a change of grade;
  • from the hierarchy (242): mostly changes of benchmark job title within the same grade range.

For the Staff Association, the agreement remains that requests for a change of grade (promotion) must be examined within the framework of the promotion procedure.

On the other hand, we insisted that corrections following placement in the wrong benchmark job, with or without a change of grade range, be examined and processed as soon as possible. These corrections must take effect before 1 July 2017, the date of the official confirmation of placement in a benchmark job.

Personal positions of staff members

Visual presented at the public meeting of 22 September 2016 (see Echo No. 254)

The implementation of the new salary grid has led to the placement of many staff members in "personal positions", that is, salary positions outside the salary grid, either below the minimum of their grade or, more frequently, above the maximum of their grade.

The Staff Association told the ED that it is aware that our colleagues in a personal position, with a salary above the maximum salary of their grade, will not all be able to benefit from a promotion this year; the SA is even aware that, for some of them, there will be no promotion at all.

Nevertheless, we insisted that the case of each colleague in a personal position be considered and given an individual answer.

2017 MERIT guidelines

The Staff Association recalled:

  • that a promotion is a change of grade;
  • that a change of benchmark job reflects a change of functions;
  • that these two concepts differ in their use and therefore follow different procedures;
  • that these procedures apply in the same way across the whole of CERN (CERN-wide);
  • that no numerical guideline is applicable, as decided by the Management and accepted by the Staff Association.

Consequently, the Staff Association expects a maximum of promotions in 2017, while taking into account the need to keep the long-term budget increase under control.

Benchmark jobs over three grades, not two + one

On the basis of the Promotion Guide (see Echo No. 263), moving to the third grade of a benchmark job is analysed and evaluated in the same way as moving from the first to the second grade, on the basis of criteria taking into account the level of the functions held, the experience and expertise acquired, etc.

Moreover, recruitment normally takes place at the first or second grade of a benchmark job, depending on the candidate's experience and expertise; however, hiring at the third grade, although exceptional, remains possible. The recruitment grade(s) must always be specified in the vacancy notice.

In conclusion, any display of grades showing parentheses "1-2-(3)" or a greyed-out third grade is entirely unnecessary given the HR processes and can only be demotivating. We therefore urged that grades be displayed as three grades "1-2-3" and without any greyed-out part.

Warnings

The Staff Association reported information brought to its attention concerning non-compliance with agreed rules, in particular on the following two points:

  • the non-eligibility for promotion of staff members whose salary position is below 110% of the median salary of their grade, which amounts to limiting promotion proposals to staff members whose salary position is at or above 110% of their grade. This is unacceptable and contrary to the rules set by the Management, in agreement with the Staff Association, and valid for the whole of CERN;
  • the refusal of a change of benchmark job for reasons of personal convenience. It should be recalled that the benchmark job assigned to a person must reflect the person's actual functions and not the diplomas obtained or an academic title. Indeed, benchmark jobs must make it possible to have a precise view of the functions carried out at CERN (type and number of posts) and thus help establish resource planning ("capacity planning"). Finally, a person whose functions do not correspond to the assigned benchmark job will be evaluated, in the promotion exercises, on the functions associated with the benchmark job and not on those actually carried out, which will undoubtedly have an impact on that person's career.
    The Staff Association strongly recommended that every person at CERN have the right benchmark job, even if it no longer corresponds to the person's initial diploma.

Three more topics to address

To conclude the implementation of the five-yearly review, three topics remain to be dealt with in 2017:

  • internal mobility,
  • recognition of acquired experience (Validation des Acquis de l'Expérience, VAE),
  • career development interviews.

Three working groups have been launched by the HR Department, with the participation of Staff Association representatives. For the Staff Association, these elements will give new impetus to careers and partly compensate for the losses in advancement decided upon during the five-yearly review.

Concertation

The Staff Association recalled that concertation is a process whereby the Director-General and the Staff Association consult each other in order to reach, as far as possible, a common position. Concertation requires a positive attitude, free of any distrust, and mutual confidence. The Staff Association is firmly committed in this direction, but it notes that concertation is not going as well as we would like. In answer to a question from the Director-General, the example was given of the delayed communication of the minutes and documents of the Standing Concertation Committee, which keeps the Association at arm's length without any objective reason.

Internal investigations and justice

Work on the internal investigation and justice processes is necessary and urgent. This observation is shared by various services and at various levels.

The Staff Association recalled that CERN, as an international organization, has the duties of a State towards its personnel and that it must put in place exemplary processes in matters of investigations and internal justice.

The Staff Association therefore asks that a working group be set up as quickly as possible, under the aegis of the HR Department and with SA participation in this group.

Health and Safety

The CERN Medical Service reported, in its annual report, problems related to psychosocial well-being: the number of days of long-term sick leave linked to psychosocial problems has increased significantly.

A working group has been launched by HR in order to properly understand this issue, identify its causes and establish an action plan. The Staff Association is taking part in this study, alongside HR, the Medical Service, HSE and the hierarchy in general. The SA's message to the ED was that there is no reason to panic, but that CERN cannot ignore the signals being perceived, which reflect suffering at work as well as disorganization and an economic loss for the services.

VICO and Elections

VICO (VIsite COlleagues) (see Echo No. 264)

A campaign of short visits to CERN personnel by staff delegates was launched in mid-March and will continue until mid-June.

The aim of this campaign is to meet our colleagues, to start a dialogue on topics of mutual interest and to answer their questions as far as possible. It is also an opportunity to encourage our colleagues to join the Association and to suggest to some of them that they stand in the Staff Council elections planned for November 2017.

Electoral colleges

Following the restructuring of the Organization in January 2016 and the replacement of career paths by grades, the Staff Association must review the electoral colleges, taking into account the different professional categories, the different sectors / departments / units, the distribution of the number of staff members per department / unit, etc.

We recalled that five seats on the Staff Council are reserved for delegates representing fellows and associated members of the personnel. In answer to a question, the SA indicated that the number of these seats will be increased as soon as interest in the Association among fellows and MPAs grows, in terms of both members and candidates in the elections; currently only two of these five seats are filled.

We stressed to the Directors and Heads of departments and units the need for good representation of all professional categories and of all sectors and departments within the Staff Council, and we asked them to help ensure this representativeness.

The presentation ended with a series of questions and answers. The Director-General thanked the Vice-President and the President of the Staff Association for the topics raised in this presentation and for the frank answers to the questions, and invited the Association to come back before the Enlarged Directorate at a later date to continue this constructive dialogue.

Which we will not fail to do, of course!

The English version of this article will be published in the next Echo.

April 24, 2017 05:04 PM

CERN Bulletin

GAC-EPA

The GAC organizes drop-in sessions with individual interviews, held on the last Tuesday of each month, except in June, July and December.

The next session will be held on:
Tuesday 30 May from 1:30 p.m. to 4:00 p.m.
Staff Association meeting room

The following sessions will take place on Tuesdays 29 August, 26 September, 31 October and 28 November 2017.

The Pensioners' Group sessions are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/.
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

April 24, 2017 02:04 PM

CERN Bulletin

Cine Club

Wednesday 3 May 2017 at 20:00
CERN Council Chamber

Kagemusha


Directed by Akira Kurosawa
Japan, 1980, 162 minutes

When a powerful warlord in medieval Japan dies, a poor thief recruited to impersonate him finds difficulty living up to his role and clashes with the spirit of the warlord during turbulent times in the kingdom. 

Original version Japanese; English subtitles.

April 24, 2017 02:04 PM

CERN Bulletin

Cine Club - Special Event

Special event

on Thursday 4 May 2017 at 18:30
CERN Council Chamber

In collaboration with the CERN Running Club and the Women In Technology initiative, the CERN CineClub is happy to announce the screening of the film

Free to Run

Directed by Pierre Morath
Switzerland, 2016, 99 minutes

Today, all anybody needs to run is the determination and a pair of the right shoes. But just fifty years ago, running was viewed almost exclusively as the domain of elite male athletes who competed on tracks. With insight and propulsive energy, director Pierre Morath traces running's rise to the 1960s, examining how the liberation movements and newfound sense of personal freedom that defined the era took the sport out of the stadiums and onto the streets, and how legends like Steve Prefontaine, Fred Lebow, and Kathrine Switzer redefined running as a populist phenomenon.

Original version French; English subtitles.

http://freetorun.ch/

Come along to watch the film and learn more about the history of popular races and amateur running, and how women had to fight for their right to be free to run! Join us after the screening for drinks in Restaurant 1, so that we can share impressions and discuss the film.

April 24, 2017 02:04 PM

CERN Bulletin

Exhibition

La couleur des jours

oriSio

From 2 to 12 May 2017
CERN Meyrin, Main Building

oriSio - Motus

Following a strong interest in China and a curiosity about a very ancient medium: lacquer!

I reinterpret this art through an abstract style.

Here I present lacquers on aluminium, worked with plasma and then coloured mainly with pigments.

I want my works to be raw, torn, evanescent, warped, even pierced, but with a fine sense of depth of colour.

 

For more information: staff.association@cern.ch | Tel.: 022 766 37 38

April 24, 2017 02:04 PM

John Baez - Azimuth

Complexity Theory and Evolution in Economics

This book looks interesting:

• David S. Wilson and Alan Kirman, editors, Complexity and Evolution: Toward a New Synthesis for Economics, MIT Press, Cambridge Mass., 2016.

You can get some chapters for free here. I’ve only looked carefully at this one:

• Joshua M. Epstein and Julia Chelen, Advancing Agent_Zero.

Agent_Zero is a simple toy model of an agent that’s not the idealized rational actor often studied in economics: rather, it has emotional, deliberative, and social modules which interact with each other to make decisions. Epstein and Chelen simulate collections of such agents and see what they do:

Abstract. Agent_Zero is a mathematical and computational individual that can generate important, but insufficiently understood, social dynamics from the bottom up. First published by Epstein (2013), this new theoretical entity possesses emotional, deliberative, and social modules, each grounded in contemporary neuroscience. Agent_Zero’s observable behavior results from the interaction of these internal modules. When multiple Agent_Zeros interact with one another, a wide range of important, even disturbing, collective dynamics emerge. These dynamics are not straightforwardly generated using the canonical rational actor which has dominated mathematical social science since the 1940s. Following a concise exposition of the Agent_Zero model, this chapter offers a range of fertile research directions, including the use of realistic geographies and population levels, the exploration of new internal modules and new interactions among them, the development of formal axioms for modular agents, empirical testing, the replication of historical episodes, and practical applications. These may all serve to advance the Agent_Zero research program.
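
To make the idea of modular agents a bit more tangible, here is a minimal sketch of my own. It is far cruder than the actual Agent_Zero specification and every parameter in it is invented, but it shows the pattern of an emotional signal, a deliberative estimate and social imitation being combined into a disposition to act.

```python
import random

class ToyAgent:
    """A drastically simplified stand-in for a modular agent (illustrative only)."""

    def __init__(self):
        self.fear = random.uniform(0.0, 0.3)      # emotional module
        self.estimate = random.uniform(0.0, 0.3)  # deliberative module
        self.disposition = 0.0

    def update(self, neighbours, observed_threat):
        self.fear = 0.9 * self.fear + 0.1 * observed_threat          # slow affective drift
        self.estimate = 0.5 * self.estimate + 0.5 * observed_threat  # crude belief update
        social = sum(n.disposition for n in neighbours) / max(len(neighbours), 1)
        self.disposition = self.fear + self.estimate + 0.5 * social  # social module feeds back

    def acts(self, threshold=1.0):
        return self.disposition > threshold

agents = [ToyAgent() for _ in range(20)]
for step in range(40):
    threat = 0.2 if step < 20 else 0.6   # the environment worsens halfway through
    for a in agents:
        a.update([n for n in agents if n is not a], threat)
    if step % 10 == 9:
        print(step + 1, sum(a.acts() for a in agents), "agents above threshold")
```

Running it, nobody acts while the threat is low, and then everyone crosses the threshold together once the shared threat and the social term reinforce each other; that kind of abrupt collective switch is the sort of dynamics the chapter studies far more carefully.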

It sounds like a fun and productive project as long as one keeps one's wits about one. It's hard to draw conclusions about human behavior from such simplified agents. One can argue about this, and of course economists will. But regardless of this, one can draw conclusions about which kinds of simplified agents will engage in which kinds of collective behavior under which conditions.

Basically, one can start mapping out a small simple corner of the huge ‘phase space’ of possible societies. And that’s bound to lead to interesting new ideas that one wouldn’t get from either 1) empirical research on human and animal societies or 2) pure theoretical pondering without the help of simulations.

Here’s an article whose title, at least, takes a vastly more sanguine attitude toward benefits of such work:

• Kate Douglas, Orthodox economics is broken: how evolution, ecology, and collective behavior can help us avoid catastrophe, Evonomics, 22 July 2016.

I’ll quote just a bit:

For simplicity’s sake, orthodox economics assumes that Homo economicus, when making a fundamental decision such as whether to buy or sell something, has access to all relevant information. And because our made-up economic cousins are so rational and self-interested, when the price of an asset is too high, say, they wouldn’t buy—so the price falls. This leads to the notion that economies self-organise into an equilibrium state, where supply and demand are equal.

Real humans—be they Wall Street traders or customers in Walmart—don’t always have accurate information to hand, nor do they act rationally. And they certainly don’t act in isolation. We learn from each other, and what we value, buy and invest in is strongly influenced by our beliefs and cultural norms, which themselves change over time and space.

“Many preferences are dynamic, especially as individuals move between groups, and completely new preferences may arise through the mixing of peoples as they create new identities,” says anthropologist Adrian Bell at the University of Utah in Salt Lake City. “Economists need to take cultural evolution more seriously,” he says, because it would help them understand who or what drives shifts in behaviour.

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias—our tendency to copy successful or prestigious individuals—influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.

This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany—insights that could potentially improve our understanding of financial markets.

Take the popular notion of the “wisdom of the crowd”—the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.

By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions—and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.

That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed—known as high-correlation cues—and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed. So if the widely known information is unreliable, larger groups make poor decisions. Smaller groups, on the other hand, still make good decisions because they rely on a greater diversity of information.
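
That mechanism is easy to caricature in code. The sketch below is my own toy version with invented parameters, not Couzin's published model: every voter sees one widely shared cue (which is usually wrong here) plus an independent private cue, and we tally how often the majority gets it right as the group grows.

```python
import random

def majority_correct(group_size, p_shared_correct=0.35, p_private_correct=0.75,
                     weight_shared=0.4, trials=10000):
    """Fraction of trials in which the group's majority vote is correct."""
    wins = 0
    for _ in range(trials):
        shared_cue = random.random() < p_shared_correct   # the same for everyone
        correct_votes = 0
        for _ in range(group_size):
            if random.random() < weight_shared:
                correct_votes += shared_cue                           # follow the shared cue
            else:
                correct_votes += random.random() < p_private_correct  # use a private cue
        wins += correct_votes > group_size / 2
    return wins / trials

for n in (5, 11, 51, 201):
    print(n, round(majority_correct(n), 3))
```

With these made-up numbers the accuracy falls as the group grows, because the unreliable shared cue increasingly drowns out the diverse private information, which is the qualitative effect described above.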

So when it comes to organising large businesses or financial institutions, “we need to think about leaders, hierarchies and who has what information”, says Couzin. Decision-making structures based on groups of between eight and 12 individuals, rather than larger boards of directors, might prevent over-reliance on highly correlated information, which can compromise collective intelligence. Operating in a series of smaller groups may help prevent decision-makers from indulging their natural tendency to follow the pack, says Kirman.

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling—computer programs that give virtual economic agents differing characteristics that in turn determine interactions. That’s easier said than done: just like economists, biologists usually model relatively simple agents with simple rules of interaction. How do you model a human?

It’s a nut we’re beginning to crack. One attendee at the forum was Joshua Epstein, director of the Center for Advanced Modelling at Johns Hopkins University in Baltimore, Maryland. He and his colleagues have come up with Agent_Zero, an open-source software template for a more human-like actor influenced by emotion, reason and social pressures. Collections of Agent_Zeros think, feel and deliberate. They have more human-like relationships with other agents and groups, and their interactions lead to social conflict, violence and financial panic. Agent_Zero offers economists a way to explore a range of scenarios and see which best matches what is going on in the real world. This kind of sophistication means they could potentially create scenarios approaching the complexity of real life.

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.

For Kirman, viewing economies as complex adaptive systems might help us understand how they evolve over time—and perhaps even suggest ways to make them more robust and adaptable. He’s not alone. Drawing analogies between financial and biological networks, the Bank of England’s research chief Andrew Haldane and University of Oxford ecologist Robert May have together argued that we should be less concerned with the robustness of individual banks than the contagious effects of one bank’s problems on others to which it is connected. Approaches like this might help markets to avoid failures that come from within the system itself, Kirman says.

To put this view of macroeconomics into practice, however, might mean making it more like weather forecasting, which has improved its accuracy by feeding enormous amounts of real-time data into computer simulation models that are tested against each other. That’s not going to be easy.

 


by John Baez at April 24, 2017 12:52 AM

April 23, 2017

The n-Category Cafe

On Clubs and Data-Type Constructors

Guest post by Pierre Cagne

The Kan Extension Seminar II continues with a third consecutive paper by Kelly, entitled On clubs and data-type constructors. It deals with the notion of club, first introduced by Kelly as an attempt to encode theories of categories with structure involving some kind of coherence issues. Astonishingly enough, there is no mention of operads whatsoever in this article. (To be fair, there is a mention of "those Lawvere theories with only associativity axioms"…) Is it because the notion of club was developed in several stages at various time periods, making operads less identifiable among this work? Or does Kelly judge the link between the two notions irrelevant? I am not sure, but anyway I think it is quite interesting to read this article in the light of what we now know about operads.

Before starting with the mathematical content, I would like to thank Alexander, Brendan and Emily for organizing this online seminar. It is a great opportunity to take a deeper look at seminal papers that would have been hard to explore all by oneself. On that note, I am also very grateful for the rich discussions we have with my fellow participants.

Non symmetric Set-operads

Let us take a look at the simplest kind of operads: non symmetric $\mathsf{Set}$-operads. Those are informally collections of operations with given arities closed under compositions. The usual way to define them is to endow the category $[\mathbf{N},\mathsf{Set}]$ of $\mathbf{N}$-indexed families of sets with the substitution monoidal product (see Simon's post): for two such families $R$ and $S$,
$$(R \circ S)_n = \sum_{k_1+\dots+k_m = n} R_m \times S_{k_1} \times \dots \times S_{k_m} \qquad \forall n \in \mathbf{N}$$
This monoidal product is better understood when elements of $R_n$ and $S_n$ are thought of as branchings with $n$ inputs and one output: $R\circ S$ is then obtained by plugging the outputs of elements of $S$ into the inputs of elements of $R$. A non symmetric operad is defined to be a monoid for that monoidal product, a typical example being the family $(\mathsf{Set}(X^n,X))_{n\in\mathbf{N}}$ for a set $X$.
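
To see the substitution product in action, here is a short computational sketch (my own illustration, not something from the paper or the post) that builds the set $(R\circ S)_n$ explicitly for finite families, so one can check for instance that plugging nullary and binary operations into a ternary one produces the expected arities.

```python
from itertools import product

def compositions(n, m):
    """All m-tuples of non-negative integers summing to n."""
    if m == 0:
        return [()] if n == 0 else []
    return [(k,) + rest for k in range(n + 1) for rest in compositions(n - k, m - 1)]

def substitute(R, S, n):
    """Elements of (R o S)_n: pick r in R_m and s_1, ..., s_m in S whose arities
    k_1 + ... + k_m sum to n (the non-symmetric substitution product)."""
    elements = []
    for m, ops in R.items():
        for ks in compositions(n, m):
            blocks = [S.get(k, []) for k in ks]     # S_{k_1} x ... x S_{k_m}
            for r in ops:
                for ss in product(*blocks):
                    elements.append((r,) + ss)
    return elements

# Toy families: R has a single ternary operation, S a nullary and a binary one.
R = {3: ["t"]}
S = {0: ["e"], 2: ["b"]}
print(substitute(R, S, 6))   # [('t', 'b', 'b', 'b')]
print(substitute(R, S, 4))   # ('t', 'b', 'b', 'e') and its two rearrangements
```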

We can now take advantage of the equivalence \([\mathbf{N},\mathsf{Set}] \overset{\sim}{\to} \mathsf{Set}/\mathbf{N}\) to equip the category \(\mathsf{Set}/\mathbf{N}\) with a monoidal product. This equivalence maps a family \(S\) to the coproduct \(\sum_n S_n\) with its canonical map to \(\mathbf{N}\), while the inverse equivalence maps a function \(a:A \to \mathbf{N}\) to the family of fibers \((a^{-1}(n))_{n\in\mathbf{N}}\). It means that an \(\mathbf{N}\)-indexed family can be thought of either as a set of operations of arity \(n\) for each \(n\), or as a bunch of operations each labeled by an integer giving its arity. Let us transport the monoidal product of \([\mathbf{N}, \mathsf{Set}]\) to \(\mathsf{Set}/\mathbf{N}\): given two maps \(a: A \to \mathbf{N}\) and \(b: B \to \mathbf{N}\), we compute the \(\circ\)-product of the families of fibers, and then take the coproduct to get \[ A\circ B = \{ (x,y_1,\dots,y_m) : x \in A,\; y_i \in B,\; a(x) = m \} \] with the map \(A\circ B \to \mathbf{N}\) sending \((x,y_1,\dots,y_m)\mapsto \sum_i b(y_i)\). That is, the monoidal product is achieved by computing the following pullback:

[Diagram: non symmetric operads as pullbacks]

where \(L\) is the free monoid monad (or list monad) on \(\mathsf{Set}\). Hence a non symmetric operad is equivalently a monoid in \(\mathsf{Set}/\mathbf{N}\) for this monoidal product. In Burroni’s terminology, it would be called an \(L\)-category with one object.
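
To make the construction concrete, here is a small Python sketch (my own illustration, not from Kelly's paper; the names are ad hoc) of this product on \(\mathsf{Set}/\mathbf{N}\): an object is a finite set of operation labels with an arity map, and \(A\circ B\) is computed exactly as in the displayed formula.

    from itertools import product

    def substitution_product(A, a, B, b):
        """Compute A o B in Set/N.

        A, B are finite sets (lists) of operation labels; a, b are their arity
        maps (dicts from labels to natural numbers).  The result consists of all
        tuples (x, y_1, ..., y_m) with m = a(x), and the new arity of such a
        tuple is sum_i b(y_i), as in the pullback description above.
        """
        AB, arity = [], {}
        for x in A:
            m = a[x]
            for ys in product(B, repeat=m):  # all ways to plug m operations of B into x
                op = (x,) + ys
                AB.append(op)
                arity[op] = sum(b[y] for y in ys)
        return AB, arity

    # Tiny example: one binary operation composed with a unary and a ternary one.
    A, a = ["mu"], {"mu": 2}
    B, b = ["s", "t"], {"s": 1, "t": 3}
    AB, arity = substitution_product(A, a, B, b)
    print(AB)     # [('mu', 's', 's'), ('mu', 's', 't'), ('mu', 't', 's'), ('mu', 't', 't')]
    print(arity)  # arities 2, 4, 4, 6 respectively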

In my opinion, Kelly’s clubs are a way to generalize this point of view to other kinds of operads, replacing \(\mathbf{N}\) by the groupoid \(\mathbf{P}\) of bijections (to get symmetric operads) or the category \(\mathsf{Fin}\) of finite sets (to get Lawvere theories). Obviously, \(\mathsf{Set}/\mathbf{P}\) or \(\mathsf{Set}/\mathsf{Fin}\) does not make much sense, but the coproduct functor from earlier can easily be understood as a Grothendieck construction that adapts neatly to this context, providing functors: \[ [\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P},\qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin} \] Of course, these functors are not equivalences anymore, but that does not prevent us from looking for monoidal products on \(\mathsf{Cat}/\mathbf{P}\) and \(\mathsf{Cat}/\mathsf{Fin}\) that restrict to the substitution product on the essential images of these functors (i.e. the discrete opfibrations). Before going on to the abstract definitions, you might keep in mind the following goal: we are seeking those small categories \(\mathcal{C}\) such that \(\mathsf{Cat}/\mathcal{C}\) admits a monoidal product reflecting, through the Grothendieck construction, the substitution product in \([\mathcal{C},\mathsf{Set}]\).

Abstract clubs

Recall that in a monoidal category \(\mathcal{E}\) with product \(\otimes\) and unit \(I\), any monoid \(M\) with multiplication \(m: M\otimes M \to M\) and unit \(u: I \to M\) induces a monoidal structure on \(\mathcal{E}/M\) as follows: the unit is \(u: I \to M\) and the product of \(f: X \to M\) by \(g: Y \to M\) is the composite \[ X\otimes Y \overset{f\otimes g}{\to} M \otimes M \overset{m}{\to} M \] Be aware that this monoidal structure depends heavily on the monoid \(M\). For example, even if \(\mathcal{E}\) is finitely complete and \(\otimes\) is the cartesian product, the induced structure on \(\mathcal{E}/M\) is almost never the cartesian one. A notable fact about this structure on \(\mathcal{E}/M\) is that the monoids in it are exactly the morphisms of monoids with codomain \(M\).
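
A simple example to keep in mind: take \(\mathcal{E} = \mathsf{Set}\) with the cartesian product and let \(M\) be the additive monoid \(\mathbf{N}\). The induced product of \(f: X \to \mathbf{N}\) and \(g: Y \to \mathbf{N}\) in \(\mathsf{Set}/\mathbf{N}\) is then \[ X \times Y \to \mathbf{N}, \qquad (x,y) \mapsto f(x) + g(y), \] with unit \(0: 1 \to \mathbf{N}\); think of it as the product of graded sets, where the grades add. It is visibly not the cartesian product of \(\mathsf{Set}/\mathbf{N}\), which would be the fiber product over \(\mathbf{N}\).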

We will use this property in the monoidal category \([\mathcal{A},\mathcal{A}]\) of endofunctors on a category \(\mathcal{A}\). I will not say a lot about size issues here, but of course we assume that there exist enough universes to make sense of \([\mathcal{A},\mathcal{A}]\) as a category even when \(\mathcal{A}\) is not small but only locally small: that is, if smallness is relative to a universe \(\mathbb{U}\), then we posit a universe \(\mathbb{V} \ni \mathbb{U}\) big enough to contain the set of objects of \(\mathcal{A}\), making \(\mathcal{A}\) a \(\mathbb{V}\)-small category, hence \([\mathcal{A},\mathcal{A}]\) a locally \(\mathbb{V}\)-small category. The monoidal product on \([\mathcal{A},\mathcal{A}]\) is just the composition of endofunctors and the unit is the identity functor \(\mathrm{Id}\). The monoids in that category are precisely the monads on \(\mathcal{A}\), and for any such monad \(S:\mathcal{A} \to \mathcal{A}\) with multiplication \(n: SS \to S\) and unit \(j: \mathrm{Id} \to S\), the slice category \([\mathcal{A},\mathcal{A}]/S\) inherits a monoidal structure with unit \(j\) and product \(\alpha \circ^S \beta\) given by the composite \[ T R \overset{\alpha\beta}{\to} S S \overset{n}{\to} S \] for any \(\alpha: T \to S\) and \(\beta: R \to S\).

Now a natural transformation \(\gamma\) between two functors \(F,G: \mathcal{A} \to \mathcal{A}\) is said to be cartesian whenever the naturality squares

[Diagram: cartesian natural transformation]

are pullback diagrams. If \(\mathcal{A}\) is finitely complete, as it will be for the rest of the post, it admits in particular a terminal object \(1\), and the pasting lemma ensures that we only have to check the pullback property for the naturality squares of the form

[Diagram: alternative definition of cartesian natural transformation]

to know whether \(\gamma\) is cartesian. Let us denote by \(\mathcal{M}\) the (possibly large) set of morphisms in \([\mathcal{A},\mathcal{A}]\) that are cartesian in this sense, and denote by \(\mathcal{M}/S\) the full subcategory of \([\mathcal{A},\mathcal{A}]/S\) whose objects are in \(\mathcal{M}\).

Definition. A club in \(\mathcal{A}\) is a monad \(S\) such that \(\mathcal{M}/S\) is closed under the monoidal product \(\circ^S\).

By “closed under \(\circ^S\)”, it is understood that the unit \(j\) of \(S\) is in \(\mathcal{M}\) and that the product \(\alpha \circ^S \beta\) of two elements of \(\mathcal{M}\) with codomain \(S\) is still in \(\mathcal{M}\). A useful alternative characterization is the following:

Lemma. A monad \((S,n,j)\) is a club if and only if \(n,j \in \mathcal{M}\) and \(S\mathcal{M}\subseteq \mathcal{M}\).

It is clear from the definition of \(\circ^S\) that the condition is sufficient, as \(\alpha \circ^S \beta\) can be written as \(n\cdot(S\beta)\cdot(\alpha T)\) via the exchange rule. Now suppose \(S\) is a club: \(j \in \mathcal{M}\) as it is the monoidal unit; \(n \in \mathcal{M}\) comes from \(\mathrm{id}_S \circ^S \mathrm{id}_S \in \mathcal{M}\); finally, for any \(\alpha: T \to S\) in \(\mathcal{M}\), we have \(\mathrm{id}_S \circ^S \alpha = n\cdot(S\alpha) \in \mathcal{M}\), and having already \(n\in\mathcal{M}\), this yields \(S\alpha \in \mathcal{M}\) by the pasting lemma.

In particular, this lemma shows that monoids in \(\mathcal{M}/S\), which coincide with monad maps \(T \to S\) in \(\mathcal{M}\) for some monad \(T\), are clubs too. We shall denote the category of these by \(\mathbf{Club}(\mathcal{A})/S\).

The lemma also implies that any cartesian monad, by which is meant a pullback-preserving monad with cartesian unit and multiplication, is automatically a club.

Now note that evaluation at \(1\) provides an equivalence \(\mathcal{M}/S \overset{\sim}{\to} \mathcal{A}/S1\) whose pseudo-inverse is given, for a map \(f:K \to S1\), by the natural transformation defined pointwise as the pullback

[Diagram: pullback]

The previous monoidal product on \(\mathcal{M}/S\) can be transported to \(\mathcal{A}/S1\) and bears a fairly simple description: given \(f:K \to S1\) and \(g:H \to S1\), the product, still denoted \(f\circ^S g\), is the evaluation at \(1\) of the composite \(TR \to SS \to S\), where \(T \to S\) corresponds to \(f\) and \(R\to S\) to \(g\). Hence the explicit equivalence given above allows us to write this as

[Diagram: clubs as pullbacks]

Definition. By abuse of terminology, a monoid in \(\mathcal{A}/S1\) is said to be a club over \(S1\).

Examples of clubs

On \(\mathsf{Set}\), the free monoid monad \(L\) is cartesian, hence a club on \(\mathsf{Set}\) in the above sense. Of course, we retrieve as \(\circ^L\) the monoidal product of the introduction on \(\mathsf{Set}/\mathbf{N}\). Hence, clubs over \(\mathbf{N}\) in \(\mathsf{Set}\) are exactly the non symmetric \(\mathsf{Set}\)-operads.

Considering \(\mathsf{Cat}\) as a \(1\)-category, the free finite-coproduct category monad \(F\) on \(\mathsf{Cat}\) is a club in the above sense. This can be shown directly through the characterization we stated earlier: its unit and multiplication are cartesian and it maps cartesian transformations to cartesian transformations. Moreover, the obvious monad map \(P \to F\) is cartesian, where \(P\) is the free strict symmetric monoidal category monad on \(\mathsf{Cat}\). Hence it follows for free that \(P\) is also a club on \(\mathsf{Cat}\). Note that the groupoid \(\mathbf{P}\) of bijections is \(P1\) and the category \(\mathsf{Fin}\) of finite sets is \(F1\). So it is now a matter of careful bookkeeping to establish that the functors (given by the Grothendieck construction) \[ [\mathbf{P},\mathsf{Set}] \to \mathsf{Cat}/\mathbf{P}, \qquad [\mathsf{Fin},\mathsf{Set}] \to \mathsf{Cat}/\mathsf{Fin} \] are strong monoidal, where the domain categories are given Kelly’s substitution product. In other words, this exhibits symmetric \(\mathsf{Set}\)-operads and non-enriched Lawvere theories as special clubs over \(\mathbf{P}\) and \(\mathsf{Fin}\).

We could say that we are done: we have a polished abstract notion of club that encompasses the different notions of operads on \(\mathsf{Set}\) that we are used to. But what about operads on other categories? Also, the above monads \(P\) and \(F\) are actually \(2\)-monads on \(\mathsf{Cat}\) when it is seen as a \(2\)-category. Can we extend the notion to this enrichment?

Enriched clubs

We shall fix a cosmos \(\mathcal{V}\) to enrich over (and denote as usual the underlying ordinary notions by a \(0\)-index), but we want it to have good properties, so that finite completeness makes sense in this enriched framework. Hence we ask that \(\mathcal{V}\) be locally finitely presentable as a closed category (see David’s post). Taking a look at what we did in the ordinary case, we see that it relies heavily on the possibility of defining slice categories, which is not possible in full generality. Hence we ask that \(\mathcal{V}\) be semicartesian, meaning that the monoidal unit of \(\mathcal{V}\) is its terminal object: then for a \(\mathcal{V}\)-category \(\mathcal{B}\), the slice category \(\mathcal{B}/B\) is defined to have elements \(1 \to \mathcal{B}(X,B)\) as objects, and the space of morphisms between such \(f:1 \to \mathcal{B}(X,B)\) and \(f':1 \to \mathcal{B}(X',B)\) is given by the following pullback in \(\mathcal{V}_0\):

[Diagram: hom-object of the enriched slice category as a pullback]

If we also want to be able to talk about the category of enriched clubs over something, we should be able to make a \(\mathcal{V}\)-category out of the monoids in a monoidal \(\mathcal{V}\)-category. Again, this is a priori not possible: the space of monoid maps between \((M,m,i)\) and \((N,n,j)\) is supposed to interpret “the subspace of those \(f: M \to N\) such that \(fi=j\) and \(fm(x,y)=n(fx,fy)\) for all \(x,y\)”, where the latter equation has two occurrences of \(f\) on the right. Hence we ask that \(\mathcal{V}\) be actually a cartesian cosmos, so that the interpretation of such a subspace is the joint equalizer of

[Diagrams: the parallel pairs whose joint equalizer is the space of monoid morphisms]

Moreover, these hypotheses also resolve the set-theoretical issues: because of all the hypotheses on \(\mathcal{V}\), the underlying \(\mathcal{V}_0\) identifies with the category \(\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}]\) of \(\mathsf{Set}\)-valued left exact functors from the finitely presentables of \(\mathcal{V}_0\). Hence, for a \(\mathcal{V}\)-category \(\mathcal{A}\), the category of \(\mathcal{V}\)-endofunctors \([\mathcal{A},\mathcal{A}]\) is naturally a \(\mathcal{V}'\)-category for the cartesian cosmos \(\mathcal{V}'=\mathrm{Lex}[\mathcal{T}_0,\mathsf{Set}']\), where \(\mathsf{Set}'\) is the category of \(\mathbb{V}\)-small sets for a universe \(\mathbb{V}\) big enough to contain the set of objects of \(\mathcal{A}\). Hence we do not care so much about size issues and consider everything to be a \(\mathcal{V}\)-category; the careful reader will replace \(\mathcal{V}\) by \(\mathcal{V}'\) when necessary.

In the context of categories enriched over a locally finitely presentable cartesian closed cosmos \(\mathcal{V}\), all we did in the ordinary case is directly enrichable. We call a \(\mathcal{V}\)-natural transformation \(\alpha: T \to S\) cartesian just when it is so as a natural transformation \(T_0 \to S_0\), and denote the set of these by \(\mathcal{M}\). For a \(\mathcal{V}\)-monad \(S\) on \(\mathcal{A}\), the category \(\mathcal{M}/S\) is the full subcategory of the slice \([\mathcal{A},\mathcal{A}]/S\) spanned by the objects in \(\mathcal{M}\).

Definition. A \(\mathcal{V}\)-club on \(\mathcal{A}\) is a \(\mathcal{V}\)-monad \(S\) such that \(\mathcal{M}/S\) is closed under the induced \(\mathcal{V}\)-monoidal product of \([\mathcal{A},\mathcal{A}]/S\).

Now comes the fundamental proposition about enriched clubs:

Proposition. A \(\mathcal{V}\)-monad \(S\) is a \(\mathcal{V}\)-club if and only if \(S_0\) is an ordinary club.

In that case, the category of monoids in \(\mathcal{M}/S\) consists of the clubs \(T\) together with a \(\mathcal{V}\)-monad map \(1 \to [\mathcal{A},\mathcal{A}](T,S)\) in \(\mathcal{M}\). We will still denote it \(\mathbf{Club}(\mathcal{A})/S\), and its underlying ordinary category is \(\mathbf{Club}(\mathcal{A}_0)/S_0\). We can once again take advantage of the \(\mathcal{V}\)-equivalence \(\mathcal{M}/S \simeq \mathcal{A}/S1\) to equip the latter with a \(\mathcal{V}\)-monoidal product, and abuse terminology to call its monoids \(\mathcal{V}\)-clubs over \(S1\). Proving all that carefully requires notions of enriched factorization systems that are of no use for this post.

So basically, the slogan is: as long as \(\mathcal{V}\) is a cartesian cosmos which is locally presentable as a closed category, everything works the same way as in the ordinary case, and \((-)_0\) preserves and reflects clubs.

Examples of enriched clubs

As we said earlier, \(F\) and \(P\) are \(2\)-monads on \(\mathsf{Cat}\), and the underlying \(F_0\) and \(P_0\) (earlier just denoted \(F\) and \(P\)) are ordinary clubs. So \(F\) and \(P\) are \(\mathsf{Cat}\)-clubs, maybe better called \(2\)-clubs. Moreover, the map \(P_0 \to F_0\) mentioned earlier is easily promoted to a \(2\)-natural transformation, making \(\mathbf{P}\) a \(2\)-club over \(\mathsf{Fin}\).

The free monoid monad on a cartesian cosmos \(\mathcal{V}\) is a \(\mathcal{V}\)-club, and the clubs over \(L1\) are precisely the non symmetric \(\mathcal{V}\)-operads.

Last but not least, a quite surprising example at first sight. Any small ordinary category \(\mathcal{A}_0\) is naturally enriched in its category of presheaves \(\mathrm{Psh}(\mathcal{A}_0)\), as the full subcategory of the cartesian cosmos \(\mathcal{V}=\mathrm{Psh}(\mathcal{A}_0)\) spanned by the representables. Concretely, the space of morphisms between \(A\) and \(B\) is given by the presheaf \[ \mathcal{A}(A,B): C \mapsto \mathcal{A}_0(A \times C, B) \] Hence a \(\mathcal{V}\)-endofunctor \(S\) on \(\mathcal{A}\) is the data of a map \(A \mapsto SA\) on objects, together with, for any \(A,B\), a \(\mathcal{V}\)-natural transformation \(\sigma_{A,B}: \mathcal{A}(A,B) \to \mathcal{A}(SA,SB)\) satisfying some axioms. Now, fixing \(A,C \in \mathcal{A}\), the collection of \[ (\sigma_{A,B})_C : \mathcal{A}_0(A\times C,B) \to \mathcal{A}_0(SA \times C, SB) \] is equivalently, via Yoneda, a collection of \[ \tilde{\sigma}_{A,C} : \mathcal{A}_0(SA\times C,S(A \times C)). \] The axioms that \(\sigma\) satisfies as a \(\mathcal{V}\)-enriched natural transformation make \(\tilde \sigma\) a strength for the endofunctor \(S_0\). Along this translation, a strong monad on \(\mathcal{A}\) is then just a \(\mathrm{Psh}(\mathcal{A}_0)\)-monad. And it is very common, when modelling side effects by monads in Computer Science, to end up with strong cartesian monads. As cartesian monads, they are in particular ordinary clubs on \(\mathcal{A}_0\). Hence, those are \(\mathrm{Psh}(\mathcal{A}_0)\)-monads whose underlying ordinary monad is a club: that is, they are \(\mathrm{Psh}(\mathcal{A}_0)\)-clubs on \(\mathcal{A}\).
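
To connect with the programming intuition, here is a minimal Python sketch (my own, with hypothetical names, not taken from Kelly or Cockett) of a strength for the humble list monad, i.e. a map \(SA \times C \to S(A \times C)\) in the direction of the \(\tilde\sigma_{A,C}\) above:

    def list_strength(sa, c):
        """A strength for the list monad.

        sa models an element of S A (a Python list of values of type A) and c is
        a value of type C; the strength pairs every element of the list with c,
        producing an element of S(A x C).
        """
        return [(a, c) for a in sa]

    # Example: an effectful computation producing [1, 2, 3], paired with a fixed context value.
    print(list_strength([1, 2, 3], "ctx"))   # [(1, 'ctx'), (2, 'ctx'), (3, 'ctx')]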

In conclusion, let me point out that there is much more in Kelly’s article than presented here, especially on local factorisation systems and their link to (replete) reflective subcategories with a left exact reflection. It is, by the way, quite surprising that he does not stay in full generality longer, as one could define an abstract club in just that framework. Maybe there is just no interesting example to come up with at that level of generality…

Also, a great deal of the examples of clubs come from never-published work of Robin Cockett (or at least, I was not able to find it), so these motivations are quite difficult to follow.

Going a little further in the generalization, the cautious reader will have noticed that we did not say anything about coloured operads. For those, we would not have to look at slice categories of the form \(\mathcal{A}/S1\), but at categories of spans with one leg pointing to \(SC\) (morally mapping an operation to its coloured arity) and the other one to \(C\) (morally picking the output colour), where \(C\) is the object of colours. Those spans actually appear implicitly above whenever a map of the form \(!:X \to 1\) is involved (morally, this is the map picking the “only output colour” in a non-coloured operad). This should somehow be contained in Garner’s work on double clubs or in Shulman’s and Cruttwell’s unified framework for generalized multicategories. I am looking forward to learning more about that in the comments!

by riehl (eriehl@math.jhu.edu) at April 23, 2017 06:01 AM

April 22, 2017

Lubos Motl - string vacua and pheno

Physicists, smart folks use same symbols for Lie groups, algebras for good reasons
I have always been amazed by the sheer stupidity and tastelessness of the people who aren't ashamed of the likes of Peter Woit. He is obviously a mediocre man with no talents, no achievements, no ethics, and no charisma but because of the existence of many people who have no taste and who want to have a leader in their jihad against modern physics, he was allowed to talk about physics as if his opinions mattered.

Woit is a typical failing-grade student who simply isn't and has never been the right material for college. His inability to learn string theory is a well-known aspect of this fact. But most people in the world – and maybe even most of the physics students – misunderstand string theory. But his low math-related intelligence is often manifested in things that are comprehensible to all average or better students of physics.

Two years ago, Woit argued that
the West Coast metric is the wrong one.
Now, unless you are a complete idiot, you must understand that the choice of the metric tensor – either \(({+}{-}{-}{-})\) or \(({-}{+}{+}{+})\) – is a pure convention. The metric tensor \(g^E_{\mu\nu}\) of the first culture is simply equal to minus the metric tensor of the second culture \(g^W_{\mu\nu}\), i.e. \(g^E_{\mu\nu} = - g^W_{\mu\nu}\), and every statement or formula written with one set of conventions may obviously be translated to a statement written in the other, and vice versa. The equations or statements basically differ just by some signs. The translation from one convention to another is always possible and is no more mysterious than the translation from British to U.S. English or vice versa.
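
To give one standard example of such a translation: with the \(({+}{-}{-}{-})\) convention, the mass-shell condition for a particle's four-momentum reads \(p_\mu p^\mu = m^2\), while with \(({-}{+}{+}{+})\) the very same physical statement reads \(p_\mu p^\mu = -m^2\); every scalar product built with the metric simply flips its sign, and nothing physical can depend on the choice.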

How stupid do you have to be to misunderstand this point, that there can't be any "wrong" convention for the sign? And how many people are willing to believe that someone's inability to get this simple point is compatible with the credibility of his comments about string theory?




Well, this individual has brought us a new ludicrous triviality of the same type,
Two Pet Peeves
We're told that we mustn't use the same notation for a Lie group and a Lie algebra. Why? Because Tony Zee, Pierre Ramond, and partially Howard Georgi were using the unified notation and Woit "remember[s] being very confused about this when I first started studying the subject". Well, Mr Woit, you were confused simply because you have never been college material. But it's easier to look for flaws in Lie groups and Lie algebras than in your own worthless existence, right?




Many physicists use the same symbols for Lie groups and the corresponding Lie algebras for a simple reason: they – or at least their behavior near the identity (or any other point on the group manifold) – are completely equivalent. Except for some global behavior, the information about the Lie group is completely equivalent to the information about the corresponding Lie algebra. They're just two languages to talk about the same thing.

Just to be sure, in my and Dr Zahradník's textbook on linear algebra, we used the separate symbols and I love the fraktur fonts. In Czechia and maybe elsewhere, most people who are familiar with similar fonts at all call them "Schwabacher" but strictly speaking, Textura, Rotunda, Schwabacher, and Fraktur are four different typefaces. Schwabacher is older and was replaced by Fraktura in the 16th century. In 1941, Hitler decided that there were too many typos in the newspapers and that foreigners couldn't decode Fraktura which diminishes the importance of Germany abroad, so he banned Fraktura and replaced it with Antiqua.



When we published our textbook, I was bragging about the extensive index that was automatically created by a \({\rm \LaTeX}\) macro. I told somebody: Tell me any word and you will see that we can find it in the index. In front of several witnesses, the first person wanted to humiliate me so he said: "A broken bone." So I abruptly responded: "The index doesn't include a 'broken bone' literally but there's a fracture in it!" ;-) Yes, I did include a comment about the font in the index. You know, the composition of the index was as simple as placing the command like \placeInTheIndex{fraktura} in a given place of the source. After several compilations, the correct index was automatically created. I remember that in 1993 when I began to type it, one compilation of the book took 15 minutes on the PCs in the computer lab of our hostel! When we received new 90 MHz PCs, the speed was almost doubled. ;-)

OK, I don't want to review elementary things because some readers know them and wouldn't learn anything new, while others don't know these things and a brief introduction wouldn't help them. But there is a simple relationship between a Lie algebra and a Lie group. You may obtain the elements of the group by a simple exponentiation of an element of a Lie algebra. For this reason, all the "structure coefficients" \(f_{ij}{}^k\) that remember the structure of commutators\[

[T_i,T_j] = f_{ij}{}^k T_k

\] contain the same information as all the curvature information about the group manifold near the identity. The Lie algebra simply is the tangent space of the group manifold around the identity (or any element) and all the commutators in the Lie algebra are equivalent to the information about the distortions that a projection of the neighborhood of the identity in the group manifold to a flat space causes.
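
As a small numerical illustration of the exponentiation (a sketch of mine, not anything from the posts being discussed): take the \({\mathfrak su}(2)\) generators \(T_a = -\tfrac{i}{2}\sigma_a\) built from the Pauli matrices, so that \([T_a,T_b]=\epsilon_{abc}T_c\), and exponentiate a real combination of them to land in \(SU(2)\).

    import numpy as np
    from scipy.linalg import expm

    # Pauli matrices and the su(2) generators T_a = -(i/2) * sigma_a,
    # chosen so that [T_a, T_b] = eps_{abc} T_c.
    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    T = [-0.5j * s for s in sigma]

    # Structure constants: [T_1, T_2] = T_3 (and cyclic permutations).
    comm = T[0] @ T[1] - T[1] @ T[0]
    print(np.allclose(comm, T[2]))                      # True

    # Exponentiating a real combination of the generators gives an SU(2) element.
    theta = [0.3, -1.2, 0.7]
    g = expm(sum(t * Ta for t, Ta in zip(theta, T)))
    print(np.allclose(g.conj().T @ g, np.eye(2)))       # unitary
    print(np.allclose(np.linalg.det(g), 1.0))           # unit determinant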

We often use the same symbols because it's harder to write the gothic fonts. More importantly,
whenever a theory, a solution, or a situation is connected with a particular Lie group, it's also connected with the corresponding Lie algebra, and vice versa!
That's the real reason why it doesn't matter whether you talk about a Lie group or a Lie algebra. We use their labels for "identification purposes" and the identification is the same whether you have a Lie group or a Lie algebra in mind. A very simple example:
There exist two rank-16, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are \(SO(32)\) and \(E_8\times E_8\), respectively.

There exist two rank-16, dimension-496 heterotic string theories whose gauge groups in the 10-dimensional spacetime are (or have the Lie algebras) \({\mathfrak so}(32)\) and \({\mathfrak e}_8\oplus {\mathfrak e}_8\), respectively.
I wrote the sentence in two ways. The first one sort of talks about the group manifolds while the second talks about Lie algebras. The information is obviously almost completely equivalent.

Well, except for subtleties – the global choices and identifications in the group manifold that don't affect the behavior of the group manifold in the vicinity of the identity element. If you want to be careful about these subtleties, you need to talk about the group manifolds, not just Lie algebras, because the Lie algebras "forget" the information about these global issues.

So you might want to be accurate and talk about the Lie groups in 10 dimensions – and say that the allowed heterotic gauge groups are \(E_8\times E_8\) and \(SO(32)\). However, this effort of yours would actually make things worse because when you use a language that has the ambition of being correct about the global issues, it's your responsibility to be correct about them, indeed, and chances are that your first guess will be wrong!

In particular, the "\(SO(32)\)" heterotic string also contains spinors. So a somewhat smart person could say that the gauge group of that heterotic string is actually \(Spin(32)\), not \(SO(32)\). However, that would be about as wrong as \(SO(32)\) itself – almost no improvement – because the actual perturbative gauge group of this heterotic theory is isomorphic to\[

Spin(32) / \ZZ_2

\] where the \(\ZZ_2\) is chosen in such a way that the group is not isomorphic to \(SO(32)\). It's another \(\ZZ_2\) from the center isomorphic to \(\ZZ_2\times \ZZ_2\) that allows left-handed spinors but not the right-handed ones! By the way, funnily, the S-dual theory is type I superstring theory whose gauge group – arising from Chan-Paton factors of the open strings – seems to be \(O(32)\). However, the global form of the gauge group gets modified by D-particles, the other half of \(O(32)\) beyond \(SO(32)\) is broken, and spinors of \(Spin(32)\) are allowed by the D-particles so non-perturbatively, the gauge group of type I superstring theory agrees with that of the heterotic S-dual theory including the global subtleties.

(Peter Woit also ludicrously claims that physicists only need three groups, \(U(1),SU(2), SO(3)\). That may have been almost correct in the 1920s but it's surely not true in the 21st century particle physics. If you're an undergraduate with plans to do particle physics and someone offers you to quickly learn about symplectic or exceptional groups, and perhaps a few others, you shouldn't refuse it.)

You don't need to talk about string theory to encounter similar subtleties. Ask a simple question. What is the gauge group of the Standard Model? Well, people will normally answer \(SU(3)\times SU(2)\times U(1)\). But what they actually mean is just the statement that the Lie algebra of the gauge group is\[

{\mathfrak su}(3) \oplus {\mathfrak su}(2) \oplus {\mathfrak u}(1).

\] Note that the simple, Cartesian \(\times\) product of Lie groups gets translated to the direct \(\oplus\) sum of the Lie algebras – the latter are linear vector spaces. OK, so the statement that the Lie algebra of the gauge group of the Standard Model is the displayed expression above is correct.
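
For instance, the counting of gauge bosons follows immediately from this Lie algebra: \(\dim {\mathfrak su}(3) + \dim {\mathfrak su}(2) + \dim {\mathfrak u}(1) = 8 + 3 + 1 = 12\), i.e. eight gluons plus the four electroweak bosons (which mix into \(W^\pm\), \(Z\), and the photon).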

But if you have the ambition to talk about the precise group manifolds, those know about all the "global subtleties" and it turns out that \(SU(3)\times SU(2)\times U(1)\) is not isomorphic to the Standard Model gauge group. Instead, the Standard Model gauge group is\[

[SU(3)\times SU(2)\times U(1)] / \ZZ_6.

\] The quotient by \(\ZZ_6\) must be present because all the fields of the Standard Model have a correlation between the hypercharge \(Y\) modulo \(1\) and the spin under the \(SU(2)\) as well as the representation under the \(SU(3)\). It is therefore impossible to construct states that wouldn't be invariant under this \(\ZZ_6\) even a priori, which means that this \(\ZZ_6\) acts trivially even on the original Hilbert space and "it's not there".
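
To make this \(\ZZ_6\) explicit, here is a small numerical check (my own sketch, using the common normalization in which the left-handed quark doublet has \(Y=1/6\)). One choice of generator multiplies a field with \(SU(3)\) triality \(t\), \(SU(2)\) "duality" \(d\) (zero for singlets, one for doublets), and hypercharge \(Y\) by the phase \(e^{2\pi i t/3}\,(-1)^d\,e^{2\pi i Y}\), and this phase equals one for every Standard Model field:

    import numpy as np

    # One generation of Standard Model fields:
    # (SU(3) triality t, SU(2) duality d: 0 = singlet, 1 = doublet, hypercharge Y),
    # with the normalization in which the left-handed quark doublet has Y = 1/6.
    fields = {
        "Q_L":   (1, 1,  1/6),
        "u_R":   (1, 0,  2/3),
        "d_R":   (1, 0, -1/3),
        "L_L":   (0, 1, -1/2),
        "e_R":   (0, 0, -1.0),
        "Higgs": (0, 1,  1/2),
    }

    # Phase by which the candidate Z_6 generator multiplies each field.
    def z6_phase(t, d, Y):
        return np.exp(2j * np.pi * t / 3) * (-1) ** d * np.exp(2j * np.pi * Y)

    for name, (t, d, Y) in fields.items():
        assert np.isclose(z6_phase(t, d, Y), 1.0), name
    print("The candidate Z_6 generator acts trivially on all the fields above.")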

The \(\ZZ_6\) must be divided by for the same reasons why we usually say that the Standard Model gauge group doesn't contain an \(E_8\) factor. You could also say that there's also an \(E_8\) factor except that all fields transform as a singlet. ;-) We don't do it – when we say that there is a symmetry or a gauge group, we want at least something to transform nontrivially.

OK, you see that the analysis of the correlations of the discrete charges modulo \(1/6\) may be subtle. We usually don't care about these details when we want to determine much more important things – how many gauge bosons there are and what their couplings are. These important things are given purely by the Lie algebra which is why our statements about the identity of the gauge group should mostly be understood as statements about Lie algebras.

At some level, you may want to be picky and discuss the global properties of the gauge group and correlations. But you usually don't need to know these answers for anything else. The knowledge of these facts is usually only good for its own sake. You can't calculate any couplings from it, and so on. That's why our sentences should be assumed not to talk about these details at all – and/or be sloppy about these details.

(Just to be sure, the global subtleties, centers of the group, differences between \(SO(N)\) and \(O(N)\) and \(Spin(N)\), differences for even and odd \(N\), or dependence on \(N\) modulo 8, may still lead to interesting physical consequences and consistency checks, and several papers of mine, especially about the heterotic matrix models, were obsessed with these details, too. But this kind of concern only represents a minority of physicists' interests, especially in the case of beginners.)

By the way, the second "pet peeve" by Woit is that one should distinguish real and complexified versions of the same Lie algebras (and groups). Well, I agree you should distinguish them. But at some general analytic or algebraic level, all algebras and other structures should always be understood as the complexified ones – and only afterwards, we may impose some reality conditions on fields (and therefore on the allowed symmetries, too). So I would say that to a large extent, even this complaint of Woit reflects his misunderstanding of something important – the fact that the most important information about the Lie groups is hiding in the structure constants of the corresponding Lie algebra, and those are identical for all Lie groups with the same Lie algebra, and they're also identical for real and complex versions of the groups.

(By the way, he pretends to be very careful about the complexification, but he writes the condition for matrix elements of an \(SU(2)\) matrix as \(\alpha^2+\beta^2=1\) instead of \(|\alpha|^2+|\beta|^2 = 1\). Too bad. You just shouldn't insist on people's distinguishing non-essential things about the complexification if you can't even write the essential ones correctly yourself.)

In the futile conversations about the foundations of quantum mechanics, I often hear or read comments like:
Please, don't use the confusing word "observation" which makes it look like quantum mechanics depends on what is an observation and what isn't etc. and it's scary.
Well, the reason why my – and Heisenberg's – statements look like we are saying that quantum mechanics depends on observations is that quantum mechanics depends on observations, indeed. So the dissatisfied laymen or beginners really ask the physicists to use the language that would strengthen the listeners' belief that classical physics is still basically right. Except that it's not! We mostly use this language – including the word "observation" – because it really is essential in the new framework of physics.

In the same way, failing-grade students such as Peter Woit may be constantly asking whether a physicist talks about a Lie group or the corresponding Lie algebra. They are basically complaining:
Georgi, Ramond, Zee, don't use this notation that looks like it suggests that the Lie group and the Lie algebra are basically the same thing even though they are something completely different.
The problem is, of course, that the failing-grade students such as Peter Woit are wrong. Georgi, Ramond, Zee, and others often use the same symbols for the Lie groups and the Lie algebras because they really are basically the same thing. And it's just too bad if you don't understand this tight relationship – basically an equivalence.

I think that there exist many lousy teachers of mathematics and physics who are similar to Peter Woit. They don't understand the substance – what is really important, what is true. So they focus on what they understand – arbitrarily invented rules that the students are obliged to parrot so that the teacher feels more important. So the poor students who have such teachers are often being punished for using a different metric tensor convention once or for using a wrong font for a Lie algebra. These teachers don't understand the power and beauty of mathematics and physics and they're working hard to make sure that their students won't understand them, either.

by Luboš Motl (noreply@blogger.com) at April 22, 2017 01:16 PM

ZapperZ - Physics and Physicists

Earth Day 2017 - March For Science Day
Today is the March for Science day to coincide with Earth Day 2017.

Unfortunately, I will not be participating in it, because I'm flying off to start my vacation. However, I have the March for Science t-shirt, and will be wearing it all day. So I may not be with all of you who will be participating in it today, but I'll be there in spirit.

And yes, I have written to my elected officials in Washington DC to let them know how devastating the Trump budget proposal is to science and the economic future of this country. Unfortunately, I may be preaching to the choir, because all 3 of them (2 Senators and 1 Representative of my district) are Democrats whom I expect to oppose the Trump budget as it is anyway.

Anyhow, to those of you who will be marching, YOU GO, BOYS AND GIRLS!

Zz.

by ZapperZ (noreply@blogger.com) at April 22, 2017 12:23 PM

April 21, 2017

Clifford V. Johnson - Asymptotia

Silicon Valley

I’ll be at Silicon Valley Comic Con this weekend, talking on two panels about science and its intersection with film on the one hand (tonight at 7pm if my flight is not too delayed), and non-fiction comics (see my book to come) on the other (Saturday at 12:30 or so). … Click to continue reading this post

The post Silicon Valley appeared first on Asymptotia.

by Clifford at April 21, 2017 11:21 PM

ZapperZ - Physics and Physicists

"Physics For Poets" And "Poetry For Physicists"?
Chad Orzel has a very interesting and thought-provoking article that you should read.

What he is arguing is that scientists should learn the mindset of the arts and literature, while those in the humanities and the arts should learn the mindset of science. College courses should not be tailored in such a way that the mindset of the home department is lost and a course in math, let's say, devolves into something palatable to an arts major.

I especially like his summary at the end:

One of the few good reasons is that a mindset that embraces ambiguity is something useful for scientists to see and explore a bit. By the same token, though, the more rigorous and abstract scientific mindset is something that is equally worthy of being experienced and explored by the more literarily inclined. A world in which physics majors are more comfortable embracing divergent perspectives, and English majors are more comfortable with systematic problem solving would be a better world for everyone.

I think we need to differentiate between changing the mindset and tailoring a course for a specific need. I've taught a physics class aimed mainly at life science majors. The topics that we covered were almost identical to those offered to engineering/physics majors, with the exception that they did not involve any calculus. But other than that, the course had the same rigor and coverage. The thing that made it specific to that group of students is that many of the examples I used came out of biology and medicine. These were what I used to keep the students' interest, and to show them the relevance of what they were studying to their major area. But the systematic and analytical approach to the subject was still there. In fact, I consciously emphasized the techniques and skills of analyzing and solving a problem, and made them as important as the material itself. In other words, this is the "mindset" that Chad Orzel was referring to, the one we should not lose when the subject is being taught to non-STEM majors.

Zz.

by ZapperZ (noreply@blogger.com) at April 21, 2017 01:06 PM

Clifford V. Johnson - Asymptotia

Advising on Genius: Helping Bring a Real Scientist to Screen

Well, I've been meaning to tell you about this for some time, but I've been distracted by many other things. Last year I had the pleasure of working closely with the writers and producers on the forthcoming series on National Geographic entitled "Genius". (Promotional photo above borrowed from the show's website.) The first season, starting on Tuesday, is about Einstein - his life and work. It is a ten-episode arc. I'm going to venture that this is a rather new kind of TV show that I really hope does well, because it could open the door to longer, more careful treatments of subjects that usually are considered too "difficult" for general audiences, or just get badly handled in the short duration of a two-hour movie.

Since reviews are already coming out, let me urge you to keep an open mind, and bear in mind that the reviewers (at the time of writing) have only seen the two or three episodes that have been sent to them for review. A review based on two or three episodes of a series like this (which is more like a ten hour movie - you know how these newer forms of "long form TV" work) is akin to a review based on watching the first 25-35 minutes of a two hour film. You can get a sense of tone and so forth from such a short sample, but not much can be gleaned about content to come. So remember that when the various opinion pieces appear in the next few weeks.

So... content. That's what I spent a lot of time helping them with. I do this sort of thing for movies and TV a lot, as you know, but this was a far [...] Click to continue reading this post

The post Advising on Genius: Helping Bring a Real Scientist to Screen appeared first on Asymptotia.

by Clifford at April 21, 2017 07:21 AM

April 19, 2017

ZapperZ - Physics and Physicists

The Mystery Of The Proton Spin
If you are not familiar with the issues surrounding the origin of the proton's spin quantum number, then this article might help.

It explains the reason why we don't believe that the proton spin is due just to the 3 quarks that make up the proton, and in the process, you get an idea how complicated things can be inside a proton.

There are three good reasons that these three components might not add up so simply.
  1. The quarks aren't free, but are bound together inside a small structure: the proton. Confining an object can shift its spin, and all three quarks are very much confined.
  2. There are gluons inside, and gluons spin, too. The gluon spin can effectively "screen" the quark spin over the span of the proton, reducing its effects.
  3. And finally, there are quantum effects that delocalize the quarks, preventing them from being in exactly one place like particles and requiring a more wave-like analysis. These effects can also reduce or alter the proton's overall spin.
Expect the same with a neutron.

Zz.

by ZapperZ (noreply@blogger.com) at April 19, 2017 08:42 PM

The n-Category Cafe

Functional Equations, Entropy and Diversity: A Seminar Course

I’ve just finished teaching a seminar course officially called “Functional Equations”, but really more about the concepts of entropy and diversity.

I’m grateful to the participants — from many parts of mathematics, biology and physics, at levels from undergraduate to professor — who kept coming and contributing, week after week. It was lots of fun, and I learned a great deal.

This post collects together all the material in one place. First, the notes:

Now, the posts I wrote every week:

by leinster (Tom.Leinster@ed.ac.uk) at April 19, 2017 04:19 PM

The n-Category Cafe

The Diversity of a Metacommunity

The eleventh and final installment of the functional equations course can be described in two ways:

  • From one perspective, I talked about conditional entropy, mutual information, and a very appealing analogy between these concepts and the most basic primary-school Venn diagrams.

  • From another, it was about diversity across a metacommunity, that is, an ecological community divided into smaller communities (e.g. geographical sites).

The notes begin on page 44 here.

Venn diagram showing various entropy measures for a pair of random variables
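For reference, the standard identities that such a diagram encodes (these are textbook facts, not something lifted from the notes) are

\[ H(X,Y) = H(X) + H(Y \mid X), \qquad I(X;Y) = H(X) + H(Y) - H(X,Y), \]

with the mutual information \(I(X;Y)\) playing the role of the overlap region and the conditional entropies playing the role of the two crescents.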

by leinster (Tom.Leinster@ed.ac.uk) at April 19, 2017 04:05 PM

Emily Lakdawalla - The Planetary Society Blog

This weekend, it's the beginning of the end for Cassini
NASA's long-lived Cassini spacecraft is about to buzz Titan for the final time, putting it on course for a spectacular mission finale that concludes in September.

April 19, 2017 11:00 AM

Lubos Motl - string vacua and pheno

All of string theory's power, beauty depends on quantum mechanics
Wednesday papers: Arkani-Hamed et al. show that the amplituhedron is all about sign flips. Maldacena et al. study the double-trace deformations that make a wormhole traversable. Among other things, they argue that the cloning is avoided because the extraction (by "Bob") eliminates the interior copy of the quantum information.
String/M-theory is the most beautiful, powerful, and predictive theory we know – and, most likely, the #1 with these adjectives among those that are mathematically possible – but the degree of one's appreciation for its exceptional credentials depends on one's general knowledge of physics, especially quantum mechanics.



Click to see an animation (info).

Quantum mechanics was basically discovered at one point in the mid 1920s and forced physics to make a one-time quantum jump. On the other hand, it also defines a trend because the novelties of quantum mechanics may be taken more or less seriously, exploited more or less cleverly and completely, and as physics was evolving towards more advanced, stringy theories and explanations of things, the role of the quantum mechanical thinking was undoubtedly increasing.

When we say "classical string theory", it is a slightly ambiguous term. We can take various classical limits of various theories that emerge from string theory, e.g. the classical field theory limit of some effective field theories in the spacetime. But the most typical representation of "classical string theory" is given by the dull yellow animation above. A classical string is literally a curve in a pre-existing spacetime that oscillates according to a wave equation of a sort.




OK, on that picture, you see a vibrating rope. It is not better or more exceptional than an oscillating membrane, a Chladni pattern, a little green man with Parkinson's disease, or anything else that moves and jiggles. The power of string theory only emerges once you consider the real, adult theory where all the observables such as the positions of points along the string are given by non-commuting operators.

Just to be sure, the rule that "observable = measurable quantities are associated with non-commuting operators" is what I mean by quantum mechanics.




What does quantum mechanics do for a humble string like the yellow string above?

First, it makes the spectrum of vibrations discrete.

Classically, you may change the initial state of the vibrating string arbitrarily and continuously, and the energy carried by the string is therefore continuous, too. That's not the case in quantum mechanics. Quantum mechanics got its name from the quantized, discrete eigenvalues of the energy. A vibrating string is basically equivalent to a collection of infinitely many harmonic oscillators. Each quantum mechanical harmonic oscillator only carries an integer number of excitations, not a continuous amount of energy.
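Schematically (and suppressing the zero-point and normal-ordering subtleties), the energy stored in the string's oscillation modes looks like

\[ E = \sum_{n=1}^{\infty} \hbar\omega_n N_n + \text{const}, \qquad N_n \in \{0,1,2,\ldots\}, \quad \omega_n \propto n, \]

so once each occupation number \(N_n\) is forced to be an integer, the spectrum of a single string becomes discrete.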

The discreteness of the spectrum – which depends on quantum mechanics for understandable reasons – is obviously needed for strings in string theory to coincide with a finite number of particle species we know in particle physics – or a countable one that we may know in the future. Without the quantization, the number of species would be uncountably infinite. The species would form a continuum. There would be not just an electron and a muon but also elemuon and all other things in between, in an infinite-dimensional space.

Quantum mechanics is needed for some vibrating strings to act as gravitons and other exceptional particles.

String theory predicts gravity. It makes Einstein's general relativity – and the curved spacetime and gravitational waves that result from it – unavoidable. Why is it so? It's because some of the low-energy vibrating strings, when they're added into the spacetime, have exactly the same effect as a deformation of the underlying geometry – or other low-energy fields defining the background.

Why is it so? It's ultimately because of the state-operator correspondence. The internal dynamics of a string depends on the underlying spacetime geometry. And the spacetime geometry may be changed. But the infinitesimal change of the action etc. for a string is equivalent to the interaction of the string with another, "tiny" string that is equivalent to the geometry change.

We may determine the right vibration of the "tiny" string that makes the previous sentence work because for every operator on the world sheet (2D history of a fundamental string), there exists a state of the string in the Hilbert space of the stringy vibrations. And this state-operator correspondence totally depends on quantum mechanics, too.

In classical physics, the number of observables – any function \(f(x_i,p_i)\) on a phase space – is vastly greater than the number of states. The states are just points given by the coordinates \((x_i,p_i)\) themselves. It's not hard to see that the first set is much greater – an infinite-dimensional vector space – than the second. However, quantum mechanics increases the number of states (by allowing all the superpositions) and reduces the number of observables (by making them quantized, or respectful towards the quantization of the phase space) and the two numbers become equivalent up to a simple tensoring with the functions of the parameter \(\sigma\) along the string.

I don't want to explain the state-operator correspondence, other blog posts have tried it and it is a rather technical issue in conformal field theory that you should study once you are really serious about learning string theory. But here, I want to emphasize that it wouldn't be possible in any classical world.

Let me point out that the world of the "interpreters" of quantum mechanics who imagine that the wave function is on par with a classical wave is a classical world, so it is exactly as impotent as any other world.

T-duality depends on quantum mechanics

A nice elementary symmetry that you discover in string theory compactified on tori is the so-called T-duality. The compactified string theory on a circle of radius \(R\) is the same as the theory on a circle of radius \(\alpha' / R\) where \(T=1/(2 \pi \alpha')\) is the string tension (energy or mass per unit length of the string). Well, this property depends on quantum mechanics as well because the T-duality map exchanges the momentum \(n\) with the winding \(w\) which are two integers.

But in a classical string theory, the winding number \(w\in \mathbb{Z}\) would still be an integer (it counts how many times a closed string is wrapped around the circle) while the momentum would be continuous, \(n\in\mathbb{R}\). So they couldn't be related by a permutation symmetry. The T-duality couldn't exist.
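To see it concretely, it may help to recall the schematic closed-string mass formula (standard conventions, with the oscillator and zero-point contributions suppressed):

\[ M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \ldots \]

This is manifestly invariant under the simultaneous exchange \(n \leftrightarrow w\), \(R \to \alpha'/R\), but only once the momentum \(n\) has been quantized to an integer on the same footing as \(w\).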

Enhanced gauge symmetry on a self-dual radius depends on quantum mechanics

The fancier features of string theory you look at, the more obviously unavoidable quantum mechanics becomes. One of the funny things of bosonic string theory compactified on a circle is that the generic gauge group \(U(1)\times U(1)\) gets enhanced to \(SU(2)\times SU(2)\) on the self-dual radius. Even though you start with a theory where everything is "Abelian" or "linear" in some simple sense – a string propagating on a circle – you discover that the non-Abelian \(SU(2)\) automatically arises if the radius obeys \(R = \alpha' / R\), if it is self-dual.

I have discussed the enhanced symmetries in string theory some years ago but let's shorten the story. Why does the group get enhanced?

First, one must understand that for a generic radius, the unbroken gauge group is \(U(1)\times U(1)\). One gets two \(U(1)\) gauge groups because the gauge fields are basically \(g_{\mu,25}\) and \(B_{\mu,25}\). They arise as "last columns" of a symmetric tensor, the metric tensor, and an antisymmetric tensor, the \(B\)-field. The first (metric tensor-based) \(U(1)\) group is the standard Kaluza-Klein gauge group and it is \(U(1)\) because \(U(1)\) is the isometry group of the compactification manifold. There is another gauge group arising from the gauge field that you get from a pre-existing 2-index gauge field \(B_{\mu\nu}\), a two-form, if you set the second index equal to the compactified direction.

These two gauge fields are permuted by the T-duality symmetry (just like the momentum and winding are permuted, because the momentum and winding are really the charges under these two symmetries).

OK, how do you get the \(SU(2)\)? The funny thing is that the \(U(1)\) gauge bosons are associated, via the operator-state correspondence mentioned above, with the operators on the world sheet\[

(\partial_z X^{25}, \quad \partial_{\bar z} X^{25}).

\] One of them is holomorphic, the other one is anti-holomorphic, we say. T-duality maps these operators to\[

(\partial_z X^{25}, \quad -\partial_{\bar z} X^{25}).

\] so it may be understood as a mirror reflection of the \(X^{25}\) coordinate of the spacetime except that it only acts on the anti-holomorphic (or right-moving) oscillations propagating along the string. That's great. You have something like a discrete T-duality which is just some sign flip or, equivalently, the exchange of the momentum and winding. How do you get a continuous \(SU(2)\), I ask again?

The funny thing is that at the self-dual radius, there are not just two operators like that but six. The holomorphic one, \(\partial_z X^{25}\), becomes just one component of a three-dimensional vector\[

(\partial_z X_L^{25},\,\, :\exp(+i X_L^{25}):, :\exp(-i X_L^{25}):)

\] Classically, the first operator looks nothing like the last two. If you have a holomorphic function \(X_L^{25}(z)\) of some coordinate \(z\), its \(z\)-derivative seems to be something completely different than its exponential, right? But quantum mechanically, they are almost the same thing! Why is it so?

If you want to describe all physically meaningful properties of three operators like that, the algebra of all their commutators encodes all the information. Just like string theory has the state-operator correspondence that allows you to translate between states and operators, it also has the OPEs – operator-product expansions – that allow you to extract the commutators of operators from the singularities in a decomposition of their products etc.

And it just happens that the singularities in the OPEs of any such operators are compatible with the statement that these three operators are components of a triplet that transforms under an \(SU(2)\) symmetry. So you get one \(SU(2)\) from the left-moving, \(z\)-dependent part \(X_L^{25}\), and one \(SU(2)\) from the \(\bar z\)-dependent \(X_R^{25}\).

All other non-Abelian and sporadic or otherwise cool groups that you get from perturbative string theory arise similarly, and are therefore similarly dependent on quantum mechanics. For example, the monster group in the string theory model explaining the monstrous moonshine only exists because of a similar "equivalence" that is only true at the quantum level.

Spacetime dimension and sizes of group are only predictable in quantum mechanics

String theory is so predictive that it forces you to choose a preferred dimension of the spacetime. The simple bosonic string theory has \(D=26\) and superstring theory, the more realistic and fancy one, similarly demands \(D=10\). This contrasts with the relatively unconstrained, "anything goes" theories of the pre-stringy era.

Polchinski's book contains "seven" ways to calculate the critical dimension, according to the counting by the author. But here, what is important is that all of them depend on a cancellation of some quantum anomalies.

In the covariant quantization, \(D=26\) basically arises as the number of bosonic fields \(X^\mu\) whose conformal anomaly cancels that from the \(bc\) ghost system. The latter has \(c=1-3k^2=-26\) because some constant is \(k=3\): the central charge describes a coefficient in front of a standard term to the conformal anomaly. Well, you need to add \(c=+26\) – from 26 bosons – to get zero. And you need to get zero for the conformal symmetry to hold, even in the quantum theory. And the conformal symmetry is needed for the state-operator correspondence and other things – it is a basic axioms of covariant perturbative string theory.

Alternatively, you may define string theory in the light-cone gauge. The full Lorentz symmetry won't be obvious anymore. You will find out that some commutators\[

[j^{i-},j^{j-}] = \dots

\] in the light-cone coordinates behave almost correctly. Except that when you substitute the "bilinear in stringy oscillators" expressions for the generators \(j^{i-}\), the calculation of the commutator will contain not only the "single contractions" – this part of the calculation is basically copying a classical calculation – but also the "double contraction" terms. And those don't trivially cancel. You will find out that they only cancel for 24 transverse coordinates. Needless to say, the "double contraction" is something invisible at the level of the Poisson brackets. You really need to talk about the "full commutators" – and therefore full quantum mechanics, not just some Poisson-bracket-like approximation – to get these terms at all.
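As a compressed sketch of one standard version of this argument (not the full commutator computation): in the light-cone gauge the open-string mass formula picks up a regularized sum of zero-point energies,

\[ \alpha' M^2 = N + \frac{D-2}{2}\sum_{n=1}^{\infty} n \;\to\; N - \frac{D-2}{24}, \]

using \(\zeta(-1)=-1/12\). The first excited level \(N=1\) is a transverse vector with \(D-2\) components, and Lorentz invariance forces it to be massless, so \(1-(D-2)/24=0\), i.e. \(D=26\).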

Again, the correct spacetime dimension \(D=26\) or \(D=10\) arises from the cancellation of some quantum anomaly – some new quantum mechanical effects that have the potential of spoiling some symmetries that "trivially" hold in the classical limit that may have inspired you. The prediction couldn't be there if you ignored quantum mechanics.

The field equations in the spacetime result from an anomaly cancellation, too.

If you order perturbative strings to propagate on a curved spacetime background, you may derive Einstein's equations (plus stringy short-distance corrections), which in the vacuum simply demand the Ricci-flatness \[

R_{\mu\nu} = 0.

\] A century ago, Einstein had to discover that this is what the geometry has to obey in the vacuum. It's an elegant equation and, among similarly simple ones, it's basically the unique one that is diffeomorphism-symmetric. And you may derive it from the extremization of the Einstein-Hilbert action, too.

However, string theory is capable of doing all this guesswork for you. In other words, string theory is capable of replacing Einstein's 10 years of work. You may derive the Ricci-flatness from the cancellation of the conformal anomaly, too. You need the world sheet theory to stay invariant under the scaling of the world sheet coordinates, even at the quantum level.

But the world sheet theory depends on the functions\[

g_{\mu\nu} (X^\lambda(\sigma,\tau))

\] and for every point in the spacetime given by the numbers \(\{X^\lambda\}\), you have a whole symmetric tensor \(g_{\mu\nu}\) of parameters that behave like "coupling constants" in the theory. But in a quantum field theory, and the world sheet theory is a quantum field theory, every coupling constant generically "runs". Its value depends on the chosen energy scale \(E\). And the derivative with respect to the scale\[

\frac{dg_{\mu\nu}(X^\lambda)}{d (\ln E)} = \beta_{\mu\nu}(X^\lambda)

\] is known as the beta-function. Here you have as many beta-functions as you have the numbers that determine the metric tensor at each spacetime point. The beta-functions have to vanish for the theory to remain scale-invariant on the world sheet – and you need it. And you will find out that\[

\beta_{\mu\nu}(X^\lambda) = R_{\mu\nu} (X^\lambda).

\] The beta-function is nothing other than the Ricci tensor. Well, it could be the Einstein tensor and there could be extra constants and corrections. But I want to please you with the cool stuff; I hope that you don't doubt that if you want to work with these things, you have to take care of many details that make the exact answers deviate from the most elegant, naive Ansatz with the given amount of beauty.
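For the record, in the conventions of the standard textbooks the leading term is usually quoted as

\[ \beta_{\mu\nu} = \alpha' R_{\mu\nu} + O(\alpha'^2), \]

with additional contributions from the dilaton and the \(B\)-field once those backgrounds are switched on; the qualitative message, that the vanishing of the world-sheet beta-function is Einstein's equation, is unchanged.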

So Einstein's equations result from the cancellation of the conformal anomaly as well. The very requirement that the theory remains consistent at the quantum level – and the preservation of gauge symmetries is indeed needed for the consistency – is enough to derive the equations for the metric tensor in the spacetime.

Needless to say, this rule generalizes to all the fields that you may get from particular vibrating strings in the spacetime. Dirac, Weyl, Maxwell, Yang-Mills, Proca, Higgs, and other equations of motion for the fields in the spacetime (including all their desirable interactions) may be derived from the scale-invariance of the world sheet theory, too.

In this sense, the logical consistency of the quantum mechanical theory dictates not only the right spacetime dimension and other numbers of degrees of freedom, sizes of groups such as \(E_8\times E_8\) or \(SO(32)\) for the heterotic string (the rank must be \(16\) and the dimension has to be \(496\), among other conditions), but the consistency also determines all the dynamical equations of motion.

S-duality, T-duality, mirror symmetry, AdS/CFT and holography, ER-EPR, and so on

And I could continue. S-duality – the symmetry of the theories under the \(g\to 1/g\) maps of the coupling constant – also depends on quantum mechanics. It's absolutely obvious that no S-duality could ever work in a classical world, not even in quantum field theory. Among other things, S-dualities exchange the elementary electrically charged particles such as electrons with the magnetically charged ones, the magnetic monopoles. But classically, those are very different: electrons are point-like objects with an "intrinsic" charge while the magnetic monopoles are solitonic solutions where the charge is spread over the solution and quantized because of topological considerations.

However, quantum mechanically, they may be related by a permutation symmetry.

Mirror symmetry is an application of T-duality in the Calabi-Yau context, so everything I said about the quantum mechanical dependence of T-duality obviously holds for mirror symmetry, too.

Holography in quantum gravity – as seen in AdS/CFT and elsewhere – obviously depends on quantum mechanics, too. The extra holographic dimension morally arises from the "energy scale" in the boundary theory. But the AdS space has an isometry relating all these dimensions. Classically, "energy scale" cannot be indistinguishable from a "spacetime coordinate". Classically, the energy and momentum live in a spacetime, they have different roles.

Quantum mechanically, there may be such symmetries between energy/momentum and position/timing. The harmonic oscillator is a basic template for such a symmetry: \(x\) and \(p\) may be rotated to each other.

ER-EPR talks about the quantum entanglement so it's obvious that it would be impossible in a classical world.

I could make the same point about basically anything that is attractive about string theory – and even about comparably but less intriguing features of quantum field theories. All these things depend on quantum mechanics. They would be impossible in a classical world.

Summary: quantum mechanics erases qualitative differences, creates new symmetries, merges concepts, magnifies new degrees of freedom to make singularities harmless.

Quantum mechanics does a lot of things. You have seen many examples – and there are many others – that quantum mechanics generally allows you to find symmetries between objects that look classically totally different. Like the momentum and winding of a string. Or the derivative of \(X\) with the exponential of \(X\) – at the self-dual radius. Or the states and operators. Or elementary particles and composite objects such as magnetic monopoles. And so on, and so on.

Sometimes, the spectrum of a quantity becomes discrete in order for the map or symmetry to be possible.

Sometimes, just the qualitative differences are erased. Sometimes, all the differences are erased and quantum mechanics enables the emergence of exact new symmetries that would be totally crazy within classical physics. Sometimes, these symmetries are combined with some naive ones that already exist classically. \(U(1)\times U(1)\) may be extended to \(SU(2)\times SU(2)\) quantum mechanically. Similarly, \(SO(16)\times SO(16)\) in the fermionic definition or \(U(1)^{16}\) in the bosonic formulation of the heterotic string gets extended to \(E_8\times E_8\). A much smaller, classically visible discrete group gets extended to the monster group in the full quantum string theory explaining the monstrous moonshine.

Whenever a classical theory would be getting dangerously singular, quantum mechanics changes the situation so that either the dangerous states disappear or they're supplemented with new degrees of freedom or another cure. In many typical cases, the "potentially dangerous regime" of a theory – where you could be afraid of an inconsistency – is protected and consistent because quantum mechanics makes all the modifications and additions needed for that regime to be exactly equivalent to another theory that you have known – or whose classical limit you have encountered. Quantum mechanics is what allows all the dualities and the continuous connection of all seemingly inequivalent vacua of string/M-theory into one master theory.

All the constraints – on the number of dimensions, sizes of gauge groups, and even equations of motion for the fields in spacetime – arise from the quantum mechanical consistency, e.g. from the anomaly cancellation conditions.

When you become familiar with all these amazing effects of string theory and others, you are forced to start to think quantum mechanically. You will understand that the interesting theory – with the uniqueness, predictive power, consistency, symmetries, unification of concepts – is unavoidably just the quantum mechanical one. There is really no cool classical theory. The classical theories that you encounter anywhere in string theory are the classical limits of the full theory.

You will unavoidably get rid of the bad habit of thinking of a classical theory as the "primary one", while the quantum mechanical theory is often considered "derived" from it by the beginners (including permanent beginners). Within string/M-theory, it's spectacularly clear that the right relationship is going in the opposite direction. The quantum mechanical theory – with its quantum rules, objects, statements, and relationships – is the primary one while classical theories are just approximations and caricatures that lack the full glory of the quantum mechanical theory.

by Luboš Motl (noreply@blogger.com) at April 19, 2017 06:39 AM

John Baez - Azimuth

Stanford Complexity Group

Aaron Goodman of the Stanford Complexity Group invited me to give a talk there on Thursday April 20th. If you’re nearby—like in Silicon Valley—please drop by! It will be in Clark S361 at 4:20 pm.

Here’s the idea. Everyone likes to say that biology is all about information. There’s something true about this—just think about DNA. But what does this insight actually do for us, quantitatively speaking? To figure this out, we need to do some work.

Biology is also about things that make copies of themselves. So it makes sense to figure out how information theory is connected to the replicator equation—a simple model of population dynamics for self-replicating entities.

To see the connection, we need to use ‘relative information’: the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.
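(For readers who haven't met it: for probability distributions \(p\) and \(q\) on the same finite set, the relative information is usually defined as

\[ I(p \,\|\, q) = \sum_i p_i \ln \frac{p_i}{q_i}, \]

which is non-negative and vanishes exactly when \(p = q\). The definition is standard; the notation here is mine rather than necessarily the one used in the slides.)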

It turns out that free energy—energy in forms that can actually be used, not just waste heat—is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information.

But there’s a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher’s fundamental theorem of natural selection, and more.

So this is what I’ll talk about! You can see my slides here:

• John Baez, Biology as information dynamics.

but my talk will be videotaped, and it’ll eventually be put here:

• Stanford Complexity Group, YouTube.

You can already see lots of cool talks at this location!

 


by John Baez at April 19, 2017 05:06 AM

April 18, 2017

Symmetrybreaking - Fermilab/SLAC

A new search to watch from LHCb

A new result from the LHCb experiment could be an early indicator of an inconsistency in the Standard Model.

The subatomic universe is an intricate mosaic of particles and forces. The Standard Model of particle physics is a time-tested instruction manual that precisely predicts how particles and forces behave. But it’s incomplete, ignoring phenomena such as gravity and dark matter.

Today the LHCb experiment at CERN, the European research center, released a result that could be an early indication of new, undiscovered physics beyond the Standard Model.

However, more data is needed before LHCb scientists can definitively claim they’ve found a crack in the world’s most robust roadmap to the subatomic universe.

“In particle physics, you can’t just snap your fingers and claim a discovery,” says Marie-Hélène Schune, a researcher on the LHCb experiment from Le Centre National de la Recherche Scientifique in Orsay, France. “It’s not magic. It’s long, hard work and you must be obstinate when facing problems. We always question everything and never take anything for granted.”

The LHCb experiment records and analyzes the decay patterns of rare hadrons—particles made of quarks—that are produced in the Large Hadron Collider’s energetic proton-proton collisions. By comparing the experimental results to the Standard Model’s predictions, scientists can search for discrepancies. Significant deviations between the theory and experimental results could be an early indication of an undiscovered particle or force at play.

This new result looks at hadrons containing a bottom quark as they transform into hadrons containing a strange quark. This rare decay pattern can generate either two electrons or two muons as byproducts. Electrons and muons are different types or “flavors” of particles called leptons. The Standard Model predicts that the production of electrons and muons should be equally favorable—essentially a subatomic coin toss every time this transformation occurs.

“As far as the Standard Model is concerned, electrons, muons and tau leptons are completely interchangeable,” Schune says. “It’s completely blind to lepton flavors; only the large mass difference of the tau lepton plays a role in certain processes. This 50-50 prediction for muons and electrons is very precise.”

But instead of finding a 50-50 ratio between muons and electrons, the latest results from the LHCb experiment show that it’s more like 40 muons generated for every 60 electrons.

“If this initial result becomes stronger with more data, it could mean that there are other, invisible particles involved in this process that see flavor,” Schune says. “We’ll leave it up to the theorists’ imaginations to figure out what’s going on.”

However, just like any coin toss, it’s difficult to know if this discrepancy is the result of an unknown favoritism or simply the consequence of chance. To distinguish between these two possibilities, scientists wait until they hit a certain statistical threshold before claiming a discovery, often 5 sigma.

“Five sigma is a measurement of statistical deviation and means there is only a 1-in-3.5-million chance that the Standard Model is correct and our result is just an unlucky statistical fluke,” Schune says. “That’s a pretty good indication that it’s not chance, but rather the first sightings of a new subatomic process.”

Currently, this new result is at approximately 2.5 standard deviations, which means there is about a 1-in-125 possibility that there’s no new physics at play and the experimenters are just the unfortunate victims of statistical fluctuation.
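For readers who want to translate between "sigmas" and "1 in N" odds themselves, a short script along the following lines does the job. This is just an illustration of the one-sided Gaussian-tail convention, not code from the experiment; two-sided conventions give somewhat different numbers, which is part of why quoted odds vary.

from scipy.stats import norm

def one_in_n(sigma):
    # One-sided Gaussian tail probability P(Z > sigma), expressed as "1 in N" odds.
    return 1.0 / norm.sf(sigma)

print(f"5.0 sigma -> about 1 in {one_in_n(5.0):,.0f}")  # roughly 1 in 3.5 million
print(f"2.5 sigma -> about 1 in {one_in_n(2.5):,.0f}")  # of order 1 in a couple hundred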

This isn’t the first time that the LHCb experiment has seen unexpected behavior in related processes. Hassan Jawahery from the University of Maryland also works on the LHCb experiment and is studying another particle decay involving bottom quarks transforming into charm quarks. He and his colleagues are measuring the ratio of muons to tau leptons generated during this decay.

“Correcting for the large mass differences between muons and tau leptons, we’d expect to see about 25 taus produced for every 100 muons,” Jawahery says. “We measured a ratio of 34 taus for every 100 muons.”

On its own, this measurement is below the line of statistical significance needed to raise an eyebrow. However, two other experiments—the BaBar experiment at SLAC and the Belle experiment in Japan—also measured this process and saw something similar.

“We might be seeing the first hints of a new particle or force throwing its weight around during two independent subatomic processes,” Jawahery says. “It’s tantalizing, but as experimentalists we are still waiting for all these individual results to grow in significance before we get too excited.”

More data and improved experimental techniques will help the LHCb experiment and its counterparts narrow in on these processes and confirm if there really is something funny happening behind the scenes in the subatomic universe.

“Conceptually, these measurements are very simple,” Schune says. “But practically, they are very challenging to perform. These first results are all from data collected between 2011 and 2012 during Run 1 of the LHC. It will be intriguing to see if data from Run 2 shows the same thing.”

by Sarah Charley at April 18, 2017 10:06 PM

ZapperZ - Physics and Physicists

Testing For The Unruh Effect
A new paper that is to appear in Phys. Rev. Lett. is already getting quite a bit of advanced publicity. In it, the authors proposed a rather simple way to test for the existence of the long-proposed Unruh effect.

Things get even weirder if one observer accelerates. Any observer traveling at a constant speed will measure the temperature of empty space as absolute zero. But an accelerated observer will find the vacuum hotter. At least that's what William Unruh, a theorist at the University of British Columbia in Vancouver, Canada, argued in 1976. To a nonaccelerating observer, the vacuum is devoid of particles—so that if he holds a particle detector it will register no clicks. In contrast, Unruh argued, an accelerated observer will detect a fog of photons and other particles, as the number of quantum particles flitting about depends on an observer's motion. The greater the acceleration, the higher the temperature of that fog or "bath."

So obviously, this is a very difficult effect to detect, which explains why we haven't had any evidence for it since it was first proposed in 1976. That is why this new paper is causing heads to turn, because the authors are proposing a test using our existing technology. You may read the two links above to see what they are proposing using our current particle accelerators.
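For a sense of why the effect is so elusive (the formula below is the standard one, not something quoted in the article): the Unruh temperature seen by an observer with proper acceleration \(a\) is

\[ T = \frac{\hbar a}{2\pi c k_B} \approx 4\times 10^{-21}\ \mathrm{K} \times \frac{a}{1\ \mathrm{m/s^2}}, \]

so even an enormous acceleration of \(10^{20}\ \mathrm{m/s^2}\) corresponds to a temperature below half a kelvin.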

But what is a bit amusing is that there are already skeptics about this methodology of testing, but each camp is arguing it for different reasons.

Skeptics say the experiment won’t work, but they disagree on why. If the situation is properly analyzed, there is no fog of photons in the accelerated frame, says Detlev Buchholz, a theorist at the University of Göttingen in Germany. "The Unruh gas does not exist!" he says. Nevertheless, Buchholz says, the vacuum will appear hot to an accelerated observer, but because of a kind of friction that arises through the interplay of quantum uncertainty and acceleration. So, the experiment might show the desired effect, but that wouldn't reveal the supposed fog of photons in the accelerating frame.

In contrast, Robert O'Connell, a theorist at Louisiana State University in Baton Rouge, insists that in the accelerated frame there is a fog of photons. However, he contends, it is not possible to draw energy out of that fog to produce extra radiation in the lab frame. O'Connell cites a basic bit of physics called the fluctuation-dissipation theorem, which states that a particle interacting with a heat bath will pump as much energy into the bath as it pulls out. Thus, he argues, Unruh's fog of photons exists, but the experiment should not produce the supposed signal anyway.

If there's one thing that experimenters like, it is to prove theorists wrong! :) So whichever way an experiment on this turns out, it is bound to disprove one group of theorists or another. It's a win-win situation! :)

Zz.

by ZapperZ (noreply@blogger.com) at April 18, 2017 08:06 PM

Tommaso Dorigo - Scientificblogging

LHCb Measures Unity, Finds 0.6
With slightly anticlimactic timing, considering the just-ended orgy of new results presented at winter conferences in particle physics (which I touched on here), the LHCb collaboration today outed the results of a measurement of unity, drawing attention to the fact that unity was found to be not equal to 1.0.

read more

by Tommaso Dorigo at April 18, 2017 03:15 PM

Symmetrybreaking - Fermilab/SLAC

How blue-sky research shapes the future

While driven by the desire to pursue curiosity, fundamental investigations are the crucial first step to innovation.


When scientists announced their discovery of gravitational waves in 2016, it made headlines all over the world. The existence of these invisible ripples in space-time had finally been confirmed. 

It was a momentous feat in basic research, the curiosity-driven search for fundamental knowledge about the universe and the elements within it. Basic (or “blue-sky”) research is distinct from applied research, which is targeted toward developing or advancing technologies to solve a specific problem or to create a new product.

But the two are deeply connected.

“Applied research is exploring the continents you know, whereas basic research is setting off in a ship and seeing where you get,” says Frank Wilczek, a theoretical physicist at MIT. “You might just have to return, or sink at sea, or you might discover a whole new continent. So it’s much more long-term, it’s riskier and it doesn’t always pay dividends.” 

When it does, he says, it opens up entirely new possibilities available only to those who set sail into uncharted waters. 

Most of physics—especially particle physics—falls under the umbrella of basic research. In particle physics “we’re asking some of the deepest questions that are accessible by observations about the nature of matter and energy—and ultimately about space and time also, because all of these things are tied together,” says Jim Gates, a theoretical physicist at the University of Maryland. 

Physicists seek answers to questions about the early universe, the nature of dark energy, and theoretical phenomena, such as supersymmetry, string theory and extra dimensions. 

Perhaps one of the most well-known basic researchers was the physicist who predicted the existence of gravitational waves: Albert Einstein. 

Einstein devoted his life to elucidating elementary concepts such as the nature of gravity and the relationship between space and time. According to Wilczek, “it was clear that what drove what he did was not the desire to produce a product, or anything so worldly, but to resolve puzzles and perceived imperfections in our understanding.” 

In addition to advancing our understanding of the world, Einstein’s work led to important technological developments. The Global Positioning System, for instance, would not have been possible without the theories of special and general relativity. A GPS receiver, like the one in your smart phone, determines its location based on timed signals it receives from the nearest four of a collection of GPS satellites orbiting Earth. Because the satellites are moving so quickly while also orbiting at a great distance from the gravitational pull of Earth, they experience time differently from the receiver on Earth’s surface. Thanks to Einstein’s theories, engineers can calculate and correct for this difference.
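The size of the correction is worth a rough worked number (these are the standard textbook figures, not ones quoted in the article): special relativity slows the orbiting clocks by roughly 7 microseconds per day, while the weaker gravity at orbital altitude speeds them up by roughly 45 microseconds per day, for a net drift of about 38 microseconds per day. Left uncorrected, that timing offset would translate into a ranging error of roughly

\[ c\,\Delta t \approx (3\times 10^{8}\ \mathrm{m/s}) \times (38\times 10^{-6}\ \mathrm{s}) \approx 11\ \mathrm{km} \]

accumulating every day.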

Illustration by Corinne Mucha

There’s a long history of serendipitous output from basic research. For example, in 1989 at CERN, the European research center, computer scientist Tim Berners-Lee was looking for a way to facilitate information-sharing between researchers. He invented the World Wide Web.

While investigating the properties of nuclei within a magnetic field at Columbia University in the 1930s, physicist Isidor Isaac Rabi discovered the basic principles of nuclear magnetic resonance. These principles eventually formed the basis of Magnetic Resonance Imaging, MRI. 

It would be another 50 years before MRI machines were widely used—again with the help of basic research. MRI machines require big, superconducting magnets to function. Luckily, around the same time that Rabi’s discovery was being investigated for medical imaging, scientists and engineers at the US Department of Energy’s Fermi National Accelerator Laboratory began building the Tevatron particle accelerator to enable research into the fundamental nature of particles, a task that called for huge amounts of superconducting wire. 

“We were the first large, demanding customer for superconducting cable,” says Chris Quigg, a theoretical physicist at Fermilab. “We were spending a lot of money to get the performance that we needed.” The Tevatron created a commercial market for superconducting wire, making it practical for companies to build MRI machines on a large scale for places like hospitals. 

Doctors now use MRI to produce detailed images of the insides of the human body, helpful tools in diagnosing and treating a variety of medical complications, including cancer, heart problems, and diseases in organs such as the liver, pancreas and bowels. 

Another tool of particle physics, the particle detector, has also been adopted for uses in various industries. In the 1980s, for example, particle physicists developed technology precise enough to detect a single photon. Today doctors use this same technology to detect tumors, heart disease and central nervous system disorders. They do this by conducting positron emission tomography scans, or PET scans. Before undergoing a PET scan, the patient is given a dye containing radioactive tracers, either through an injection or by ingesting or inhaling. The tracers emit antimatter particles, which interact with matter particles and release photons, which are picked up by the PET scanner to create a picture detailed enough to reveal problems at the cellular level. 

As Gates says, “a lot of the devices and concepts that you see in science fiction stories will never come into existence unless we pursue the concept of basic research. You’re not going to be able to construct starships unless you do the research now in order to build these in the future.”

It’s unclear what applications could come of humanity’s new knowledge of the existence of gravitational waves.

It could be enough that we have learned something new about how our universe works. But if history gives us any indication, continued exploration will also provide additional benefits along the way.

by Diana Kwon at April 18, 2017 03:10 PM

Lubos Motl - string vacua and pheno

LHCb insists on tension with lepton universality in \(1\)-\(6\GeV^2\)
The number of references to B-mesons on this blog significantly exceeds my degree of excitement about these bound states of quarks and antiquarks but what can I do? They are among the leaders of the revolt against the Standard Model.


Various physicists have mentioned a new announcement by the LHCb collaboration which is smaller than ATLAS and CMS but at least equally assertive.

Another physicist has embedded the key graph where you should notice that the black crosses sit well below the dotted line where they're predicted to sit


and we were told about the LHCb PowerPoint presentation where this graph was taken from.




To make the story short, some ratio describing the decays of B-mesons that should be one according to the Standard Model if the electron, muon, and tau are equally behaved – except for their differing masses which are rather irrelevant here – ends up being \[

\Large {\mathcal R}_{K^{*0}} = 0.69^{+0.12}_{-0.08}

\] especially in the interval of momentum transfer \(q^2 \in (1,6)\GeV^2\).




There are some similar deviations at higher values of \(q^2\); it's always about 2.2-2.5 standard deviations below the Standard Model. Sadly, it seems that neither BaBar nor Belle saw these deficits: their mean values are slightly greater than one, although their error margins were greater than that of the LHCb collaboration. On the other hand, the deficit seems rather compatible with LHCb's recent announcements based on a (hopefully) disjoint set of decays.
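(A crude back-of-envelope check, mine rather than the collaboration's: treating the larger error bar as a symmetric Gaussian uncertainty gives \((1-0.69)/0.12 \approx 2.6\) standard deviations below unity, slightly above the 2.2-2.5 range quoted above; the difference presumably reflects the asymmetric errors and the full likelihood treatment.)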

An obvious reaction is that the deviation in this low-energy range isn't too exciting, anyway, because


Well, unless it's some new physics (new even for Jester) that affects this energy range. ;-)

I find this deviation rather small and our survival of the 4-sigma excess at \(750\GeV\) should have made us a little bit more demanding when it comes to the significance level that is needed to make us aroused. But those who are interested in the existing or potentially emerging experimental anomalies should be aware of this deviation because the competition in this field is very limited.

by Luboš Motl (noreply@blogger.com) at April 18, 2017 10:07 AM

April 17, 2017

ZapperZ - Physics and Physicists

Hot Atoms Interferometer
This work will not catch media attention because it isn't "sexy", but damn, it is astonishing nevertheless.

Quantum behavior is rarely seen clearly at the macroscopic level because of the difficulty of maintaining coherence over substantial length and time scales. One of the ways one can extend such scales is by cooling things down to extremely low temperatures so that decoherence due to thermal scattering is minimized.

So it is with great interest that I read this new paper on an atom interferometer that has been realized with "warm" atomic vapor [1]! You also have access to the actual paper from that link.

While the sensitivity of this technique is significantly and unsurprisingly lower than that of cold atoms, it has 2 major advantages:

However, sensitivity is not the only parameter of relevance for applications, and the new scheme offers two important advantages over cold schemes. The first is that it can acquire data at a rate of 10 kHz, in contrast to the typical 1-Hz rate of cold-atom LPAIs. The second advantage is the broader range of accelerations that can be measured with the same setup. This vapor-cell sensor remains operational over an acceleration range of 88g, several times larger than the typical range of cold LPAIs.

The large bandwidth and dynamic range of the instrument built by Biedermann and co-workers may enable applications like inertial navigation in highly vibrating environments, such as spacecraft or airplanes. What’s more, the new scheme, like all LPAIs, has an important advantage over devices like laser or electromechanical gyroscopes: it delivers acceleration measurements that are absolute, without requiring a reference signal. This opens new possibilities for drift-free inertial navigation devices that work even when signals provided by global satellite positioning systems are not available, such as in underwater navigation.

And again, let me highlight the direct and clear application of something that initially appeared to be a purely academic, knowledge-driven curiosity. This really is an application of the principle of superposition in quantum mechanics, i.e. the Schrödinger cat.

This is an amazing experimental accomplishment.

Zz.

[1] G. W. Biedermann et al., Phys. Rev. Lett. 118, 163601 (2017).

by ZapperZ (noreply@blogger.com) at April 17, 2017 10:07 PM

April 15, 2017

The n-Category Cafe

Value

What is the value of the whole in terms of the values of the parts?

More specifically, given a finite set whose elements have assigned “values” \(v_1, \ldots, v_n\) and assigned “sizes” \(p_1, \ldots, p_n\) (normalized to sum to \(1\)), how can we assign a value \(\sigma(\mathbf{p}, \mathbf{v})\) to the set in a coherent way?

This seems like a very general question. But in fact, just a few sensible requirements on the function \(\sigma\) are enough to pin it down almost uniquely. And the answer turns out to be closely connected to existing mathematical concepts that you probably already know.

Let’s write

\[ \Delta_n = \Bigl\{ (p_1, \ldots, p_n) \in \mathbb{R}^n : p_i \geq 0, \sum p_i = 1 \Bigr\} \]

for the set of probability distributions on \(\{1, \ldots, n\}\). Assuming that our “values” are positive real numbers, we’re interested in sequences of functions

\[ \Bigl( \sigma \colon \Delta_n \times (0, \infty)^n \to (0, \infty) \Bigr)_{n \geq 1} \]

that aggregate the values of the elements to give a value to the whole set. So, if the elements of the set have relative sizes \(\mathbf{p} = (p_1, \ldots, p_n)\) and values \(\mathbf{v} = (v_1, \ldots, v_n)\), then the value assigned to the whole set is \(\sigma(\mathbf{p}, \mathbf{v})\).

Here are some properties that it would be reasonable for \(\sigma\) to satisfy.

Homogeneity  The idea is that whatever “value” means, the value of the set and the value of the elements should be measured in the same units. For instance, if the elements are valued in kilograms then the set should be valued in kilograms too. A switch from kilograms to grams would then multiply both values by 1000. So, in general, we ask that

\[ \sigma(\mathbf{p}, c\mathbf{v}) = c \sigma(\mathbf{p}, \mathbf{v}) \]

for all \(\mathbf{p} \in \Delta_n\), \(\mathbf{v} \in (0, \infty)^n\) and \(c \in (0, \infty)\).

Monotonicity  The values of the elements are supposed to make a positive contribution to the value of the whole, so we ask that if \(v_i \leq v'_i\) for all \(i\) then

\[ \sigma(\mathbf{p}, \mathbf{v}) \leq \sigma(\mathbf{p}, \mathbf{v}') \]

for all \(\mathbf{p} \in \Delta_n\).

Replication  Suppose that our \(n\) elements have the same size and the same value, \(v\). Then the value of the whole set should be \(n v\). This property says, among other things, that \(\sigma\) isn’t an average: putting in more elements of value \(v\) increases the value of the whole set!

If \(\sigma\) is homogeneous, we might as well assume that \(v = 1\), in which case the requirement is that

\[ \sigma\bigl( (1/n, \ldots, 1/n), (1, \ldots, 1) \bigr) = n. \]

Modularity  This one’s a basic logical axiom, best illustrated by an example.

Imagine that we’re very ambitious and wish to evaluate the entire planet — or at least, the part that’s land. And suppose we already know the values and relative sizes of every country.

We could, of course, simply put this data into <semantics>σ<annotation encoding="application/x-tex">\sigma</annotation></semantics> and get an answer immediately. But we could instead begin by evaluating each continent, and then compute the value of the planet using the values and sizes of the continents. If <semantics>σ<annotation encoding="application/x-tex">\sigma</annotation></semantics> is sensible, this should give the same answer.

The notation needed to express this formally is a bit heavy. Let <semantics>wΔ n<annotation encoding="application/x-tex">\mathbf{w} \in \Delta_n</annotation></semantics>; in our example, <semantics>n=7<annotation encoding="application/x-tex">n = 7</annotation></semantics> (or however many continents there are) and <semantics>w=(w 1,,w 7)<annotation encoding="application/x-tex">\mathbf{w} = (w_1, \ldots, w_7)</annotation></semantics> encodes their relative sizes. For each <semantics>i=1,,n<annotation encoding="application/x-tex">i = 1, \ldots, n</annotation></semantics>, let <semantics>p iΔ k i<annotation encoding="application/x-tex">\mathbf{p}^i \in \Delta_{k_i}</annotation></semantics>; in our example, <semantics>p i<annotation encoding="application/x-tex">\mathbf{p}^i</annotation></semantics> encodes the relative sizes of the countries on the <semantics>i<annotation encoding="application/x-tex">i</annotation></semantics>th continent. Then we get a probability distribution

$$ \mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n) = (w_1 p^1_1, \ldots, w_1 p^1_{k_1}, \,\,\ldots, \,\, w_n p^n_1, \ldots, w_n p^n_{k_n}) \in \Delta_{k_1 + \cdots + k_n}, $$

which in our example encodes the relative sizes of all the countries on the planet. (Incidentally, this composition makes $(\Delta_n)$ into an operad, a fact that we’ve discussed many times before on this blog.) Also let

$$ \mathbf{v}^1 = (v^1_1, \ldots, v^1_{k_1}) \in (0, \infty)^{k_1}, \,\,\ldots,\,\, \mathbf{v}^n = (v^n_1, \ldots, v^n_{k_n}) \in (0, \infty)^{k_n}. $$

In the example, $v^i_j$ is the value of the $j$th country on the $i$th continent. Then the value of the $i$th continent is $\sigma(\mathbf{p}^i, \mathbf{v}^i)$, so the axiom is that

$$ \sigma \bigl( \mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n), (v^1_1, \ldots, v^1_{k_1}, \ldots, v^n_1, \ldots, v^n_{k_n}) \bigr) = \sigma \Bigl( \mathbf{w}, \bigl( \sigma(\mathbf{p}^1, \mathbf{v}^1), \ldots, \sigma(\mathbf{p}^n, \mathbf{v}^n) \bigr) \Bigr). $$

The left-hand side is the value of the planet calculated in a single step, and the right-hand side is its value when calculated in two steps, with continents as the intermediate stage.

Symmetry  It shouldn’t matter what order we list the elements in. So it’s natural to ask that

$$ \sigma(\mathbf{p}, \mathbf{v}) = \sigma(\mathbf{p} \tau, \mathbf{v} \tau) $$

for any $\tau$ in the symmetric group $S_n$, where the right-hand side refers to the obvious $S_n$-actions.

Absent elements should count for nothing! In other words, if $p_1 = 0$ then we should have

$$ \sigma\bigl( (p_1, \ldots, p_n), (v_1, \ldots, v_n)\bigr) = \sigma\bigl( (p_2, \ldots, p_n), (v_2, \ldots, v_n)\bigr). $$

This isn’t quite trivial. I haven’t yet given you any examples of the kind of function that $\sigma$ might be, but perhaps you already have in mind a simple one like this:

$$ \sigma(\mathbf{p}, \mathbf{v}) = v_1 + \cdots + v_n. $$

In words, the value of the whole is simply the sum of the values of the parts, regardless of their sizes. But if $\sigma$ is to have the “absent elements” property, this won’t do. (Intuitively, if $p_i = 0$ then we shouldn’t count $v_i$ in the sum, because the $i$th element isn’t actually there.) So we’d better modify this example slightly, instead taking

$$ \sigma(\mathbf{p}, \mathbf{v}) = \sum_{i \,:\, p_i > 0} v_i. $$

This function (or rather, sequence of functions) does have the “absent elements” property.

Continuity in positive probabilities  Finally, we ask that for each $\mathbf{v} \in (0, \infty)^n$, the function $\sigma(-, \mathbf{v})$ is continuous on the interior of the simplex $\Delta_n$, that is, continuous over those probability distributions $\mathbf{p}$ such that $p_1, \ldots, p_n > 0$.

Why only over the interior of the simplex? Basically because of natural examples of $\sigma$ like the one just given, which is continuous on the interior of the simplex but not the boundary. Generally, it’s sometimes useful to make a sharp, discontinuous distinction between the cases $p_i > 0$ (presence) and $p_i = 0$ (absence).

 

Arrow’s famous theorem states that a few apparently mild conditions on a voting system are, in fact, mutually contradictory. The mild conditions above are not mutually contradictory. In fact, there’s a one-parameter family $\sigma_q$ of functions each of which satisfies these conditions. For real $q \neq 1$, the definition is

$$ \sigma_q(\mathbf{p}, \mathbf{v}) = \Bigl( \sum_{i \,:\, p_i > 0} p_i^q v_i^{1 - q} \Bigr)^{1/(1 - q)}. $$

For instance, $\sigma_0$ is the example of $\sigma$ given above.

The formula for $\sigma_q$ is obviously invalid at $q = 1$, but it converges to a limit as $q \to 1$, and we define $\sigma_1(\mathbf{p}, \mathbf{v})$ to be that limit. Explicitly, this gives

$$ \sigma_1(\mathbf{p}, \mathbf{v}) = \prod_{i \,:\, p_i > 0} (v_i/p_i)^{p_i}. $$

In the same way, we can define $\sigma_{-\infty}$ and $\sigma_\infty$ as the appropriate limits:

$$ \sigma_{-\infty}(\mathbf{p}, \mathbf{v}) = \max_{i \,:\, p_i > 0} v_i/p_i, \qquad \sigma_{\infty}(\mathbf{p}, \mathbf{v}) = \min_{i \,:\, p_i > 0} v_i/p_i. $$

And it’s easy to check that for each $q \in [-\infty, \infty]$, the function $\sigma_q$ satisfies all the natural conditions listed above.
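
Since the axioms are concrete, it is also easy to play with this family numerically. Here is a minimal sketch in Python (mine, not from the post) of $\sigma_q$, treating $q = 1$ and $q = \pm\infty$ via the limiting formulas above and dropping entries with $p_i = 0$; the two checks at the end illustrate replication and homogeneity.

    import numpy as np

    # Illustrative sketch of the family sigma_q described above (my code, not the post's).
    # Entries with p_i = 0 are dropped, implementing the "absent elements" convention.
    def sigma(q, p, v):
        p, v = np.asarray(p, float), np.asarray(v, float)
        keep = p > 0
        p, v = p[keep], v[keep]
        if q == 1:
            return float(np.prod((v / p) ** p))
        if q == np.inf:
            return float(np.min(v / p))
        if q == -np.inf:
            return float(np.max(v / p))
        return float(np.sum(p**q * v**(1 - q)) ** (1.0 / (1 - q)))

    p = np.full(4, 0.25)
    print(sigma(0.5, p, np.ones(4)))           # replication: 4 equal elements of value 1 -> 4.0
    print(sigma(0.5, p, 3 * np.ones(4)),       # homogeneity: tripling all values...
          3 * sigma(0.5, p, np.ones(4)))       # ...triples the aggregate value (both print 12.0)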

These functions $\sigma_q$ might be unfamiliar to you, but they have some special cases that are quite well-explored. In particular:

  • Suppose you’re in a situation where the elements don’t have “sizes”. Then it would be natural to take $\mathbf{p}$ to be the uniform distribution $\mathbf{u}_n = (1/n, \ldots, 1/n)$. In that case, $\sigma_q(\mathbf{u}_n, \mathbf{v}) = const \cdot \bigl( \sum v_i^{1 - q} \bigr)^{1/(1 - q)}$, where the constant is a certain power of $n$. When $q \leq 0$, this is exactly a constant times $\|\mathbf{v}\|_{1 - q}$, the $(1 - q)$-norm of the vector $\mathbf{v}$.

  • Suppose you’re in a situation where the elements don’t have “values”. Then it would be natural to take $\mathbf{v}$ to be $\mathbf{1} = (1, \ldots, 1)$. In that case, $\sigma_q(\mathbf{p}, \mathbf{1}) = \bigl( \sum p_i^q \bigr)^{1/(1 - q)}$. This is the quantity that ecologists know as the Hill number of order $q$ and use as a measure of biological diversity. Information theorists know it as the exponential of the Rényi entropy of order $q$, the special case $q = 1$ being Shannon entropy. And actually, the general formula for $\sigma_q$ is very closely related to Rényi relative entropy (which Wikipedia calls Rényi divergence).
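
As a small self-contained illustration of this “no values” case (with an example distribution of my own choosing), the Hill numbers and their $q \to 1$ limit can be checked directly:

    import numpy as np

    # Hill number of order q for a probability distribution p.
    def hill(q, p):
        p = p[p > 0]
        if q == 1:
            return np.exp(-np.sum(p * np.log(p)))   # q -> 1 limit: exp of Shannon entropy
        return np.sum(p**q) ** (1.0 / (1 - q))

    p = np.array([0.5, 0.25, 0.125, 0.125])          # example distribution, my choice
    print(hill(0, p))                    # 4.0: the number of species present
    print(hill(0.999, p), hill(1, p))    # the q -> 1 limit approaches exp(Shannon entropy)
    print(hill(2, p))                    # ~2.91: the inverse Simpson concentration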

Anyway, the big — and as far as I know, new — result is:

Theorem  The functions $\sigma_q$ are the only functions $\sigma$ with the seven properties above.

So although the properties above don’t seem that demanding, they actually force our notion of “aggregate value” to be given by one of the functions in the family $(\sigma_q)_{q \in [-\infty, \infty]}$. And although I didn’t even mention the notions of diversity or entropy in my justification of the axioms, they come out anyway as special cases.

I covered all this yesterday in the tenth and penultimate installment of the functional equations course that I’m giving. It’s written up on pages 38–42 of the notes so far. There you can also read how this relates to more realistic measures of biodiversity than the Hill numbers. Plus, you can see an outline of the (quite substantial) proof of the theorem above.

by leinster (Tom.Leinster@ed.ac.uk) at April 15, 2017 10:36 AM

April 14, 2017

Tommaso Dorigo - Scientificblogging

Waiting For Jupiter
This evening I am blogging from a residence in Sesto val Pusteria, a beautiful mountain village in the Italian Alps. I came here for a few days of rest after a crazy work schedule in the past few days - the reason why my blogging has been intermittent. Sesto is surrounded by glorious mountains, and hiking around here is marvelous. But right now, as I sip a non-alcoholic beer (pretty good), chilling out after a day out, my thoughts are focused 500,000,000 kilometers away.

read more

by Tommaso Dorigo at April 14, 2017 06:05 PM

Marco Frasca - The Gauge Connection

Well below 1%


When a theory is too hard to solve, people try to consider lower-dimensional cases. This also happened for Yang-Mills theory. The four-dimensional case is notoriously difficult to manage due to the large coupling, while the three-dimensional case has been treated both theoretically and by lattice computations. In this latter case, the ground state energy of the theory is known very precisely (see here). So, a sound theoretical approach from first principles should be able to get that number at the same level of precision. We know that this is the situation for the Standard Model with respect to some experimental results, but a pure Yang-Mills theory has not been seen in nature and we have to content ourselves with computer data. The reason is that a Yang-Mills theory is realized in nature only in interaction with other kinds of fields, be they scalars, fermions or vector-like.

In these days, I have received the news that my paper on three-dimensional Yang-Mills theory has been accepted for publication in the European Physical Journal C. Here is the table for the ground state of SU(N) at different values of N, compared to lattice data:

N    Lattice       Theoretical    Error
2    4.7367(55)    4.744262871    0.16%
3    4.3683(73)    4.357883714    0.2%
4    4.242(9)      4.243397712    0.03%
     4.116(6)      4.108652166    0.18%

These results are strikingly good and the agreement is well below 1%. This in turn implies that the underlying theoretical derivation is sound. Besides, the approach also proves to be successful in four dimensions (see here). My hope is that this marks the beginning of the era of high-precision theoretical computations in strong interactions.

Andreas Athenodorou, & Michael Teper (2017). SU(N) gauge theories in 2+1 dimensions: glueball spectra and k-string tensions J. High Energ. Phys. (2017) 2017: 15 arXiv: 1609.03873v1

Marco Frasca (2016). Confinement in a three-dimensional Yang-Mills theory arXiv arXiv: 1611.08182v2

Marco Frasca (2015). Quantum Yang-Mills field theory Eur. Phys. J. Plus (2017) 132: 38 arXiv: 1509.05292v2


Filed under: Particle Physics, Physics, QCD Tagged: Ground state, Lattice Gauge Theories, Mass Gap, Millenium prize, Yang-Mills theory

by mfrasca at April 14, 2017 12:58 PM

April 13, 2017

Clifford V. Johnson - Asymptotia

Quick Oceanside Art…

So an unexpected but very welcome message from my publisher a while back was a query to see if I'd be interested in doing the cover for my forthcoming book. Of course, the answer was a very definite yes! (I knew that publishers often want to control that aspect of a book themselves, and while some time ago I made a deliberately vague suggestion about what I thought the cover might be like, I was careful not to try to insert myself into that aspect of production, so this was a genuine surprise.) I'm focusing on physics research during this part of my sabbatical, so this would have to be primarily an "after hours" sort of operation, but it should not take long since I had a clear idea of what to do. I worked up two or three versions of an idea and sent it along to see whether they liked where I was going, and once they picked one (happily, the one I liked most) I set it aside as a thing to work on once I had finished a paper (see last post) and the (prep for as well as the actual) trip East to give a physics colloquium (see the post I never got around to doing about that trip).

Then I had terrible delays on the way back that cost me the better part of an extra day getting back. So I worked up some of the nearly final art and layout [...] Click to continue reading this post

The post Quick Oceanside Art… appeared first on Asymptotia.

by Clifford at April 13, 2017 09:51 PM

April 11, 2017

Symmetrybreaking - Fermilab/SLAC

What’s left to learn about antimatter?

Experiments at CERN investigate antiparticles.


What do shrimp, tennis balls and pulsars all have in common? They are all made from matter.

Admittedly, that answer is a cop-out, but it highlights a big, persistent quandary for scientists: Why is everything made from matter when there is a perfectly good substitute—antimatter?

The European laboratory CERN hosts several experiments to ascertain the properties of antimatter particles, which almost never survive in our matter-dominated world. 

Particles (such as the proton and electron) have oppositely charged antimatter doppelgangers (such as the antiproton and antielectron). Because they are opposite but equal, a matter particle and its antimatter partner annihilate when they meet.

Antimatter wasn’t always rare. Theoretical and experimental research suggests that there was an equal amount of matter and antimatter right after the birth of our universe. But 13.8 billion years later, only matter-made structures remain in the visible universe.

Scientists have found small differences between the behavior of matter and antimatter particles, but not enough to explain the imbalance that led antimatter to disappear while matter perseveres. Experiments at CERN are working to solve that riddle using three different strategies. 

Illustration by Sandbox Studio, Chicago

Antimatter under the microscope

It’s well known that CERN is home to the Large Hadron Collider, the world’s highest-energy particle accelerator. Less well known is that CERN also hosts the world’s most powerful particle decelerator—a machine that slows down antiparticles to a near standstill.

The antiproton decelerator is fed by CERN’s accelerator complex. A beam of energetic protons is diverted from CERN’s Proton Synchrotron and into a metal wall, spawning a multitude of new particles, including some antiprotons. The antiprotons are focused into a particle beam and slowed by electric fields inside the antiproton decelerator. From here they are fed into various antimatter experiments, which trap the antiprotons inside powerful magnetic fields.

“All these experiments are trying to find differences between matter and antimatter that are not predicted by theory,” says Will Bertsche, a researcher at University of Manchester, who works in CERN’s antimatter factory. “We’re all trying to address the big question: Why is the universe made up of matter these days and not antimatter?”

By cooling and trapping antimatter, scientists can intimately examine its properties without worrying that their particles will spontaneously encounter a matter companion and disappear. Some of the traps can preserve antiprotons for more than a year. Scientists can also combine antiprotons with positrons (antielectrons) to make antihydrogen.

“Antihydrogen is fascinating because it lets us see how antimatter interacts with itself,” Bertsche says. “We’re getting a glimpse at how a mirror antimatter universe would behave.”

Scientists in CERN’s antimatter factory have measured the mass, charge, light spectrum, and magnetic properties of antiprotons and antihydrogen to high precision. They also look at how antihydrogen atoms are affected by gravity; that is, do the anti-atoms fall up or down? One experiment is even trying to make an assortment of matter-antimatter hybrids, such as a helium atom in which one of the electrons is replaced with an orbiting antiproton.

So far, all their measurements of trapped antimatter match the theory: Except for the opposite charge and spin, antimatter appears completely identical to matter. But these affirmative results don’t deter Bertsche from looking for antimatter surprises. There must be unpredicted disparities between these particle twins that can explain why matter won its battle with antimatter in the early universe.  

“There’s something missing in this model,” Bertsche says. “And nobody is sure what that is.”

Antimatter in motion

The LHCb experiment wants to answer this same question, but they are looking at antimatter particles that are not trapped. Instead, LHCb scientists study how free-range antimatter particles behave as they travel and transform inside the detector.

“We’re recording how unstable matter and antimatter particles decay into showers of particles and the patterns they leave behind when they do,” says Sheldon Stone, a professor at Syracuse University working on the LHCb Experiment. “We can’t make these measurements if the particles aren’t moving.”

The particles-in-motion experiments have already observed some small differences between matter and antimatter particles. In 1964 scientists at Brookhaven National Laboratory noticed that neutral kaons (particles made of a down quark and a strange antiquark) decay into matter and antimatter particles at slightly different rates, an observation that won them the Nobel Prize in 1980.

The LHCb experiment continues this legacy, looking for even more discrepancies between the metamorphoses of matter and antimatter particles. They recently observed that the daughter particles of certain antimatter baryons (particles containing three quarks) have a slightly different spatial orientation than their matter contemporaries.

But even with the success of uncovering these discrepancies, scientists are still very far from understanding why antimatter all but disappeared.

“Theory tells us that we’re still off by nine orders of magnitude,” Stone says, “so we’re left asking, where is it? What is antimatter’s Achilles heel that precipitated its disappearance?”

Illustration by Sandbox Studio, Chicago

Antimatter in space

Most antimatter experiments based at CERN produce antiparticles by accelerating and colliding protons. But one experiment is looking for feral antimatter freely roaming through outer space.

The Alpha Magnetic Spectrometer is an international experiment supported by the US Department of Energy and NASA. This particle detector was assembled at CERN and is now installed on the International Space Station, where it orbits Earth 400 kilometers above the surface. It records the momentum and trajectory of roughly a billion vagabond particles every month, including a million antimatter particles.

Nomadic antimatter nuclei could be lonely relics from the Big Bang or the rambling residue of nuclear fusion in antimatter stars. 

But AMS searches for phenomena not explained by our current models of the cosmos. One of its missions is to look for antimatter that is so complex and robust, there is no way it could have been produced through normal particle collisions in space.

“Most scientists accept that antimatter disappeared from our universe because it is somehow less resilient than matter,” says Mike Capell, a researcher at MIT and a deputy spokesperson of the AMS experiment. “But we’re asking, what if all the antimatter never disappeared? What if it’s still out there?”

If an antimatter kingdom exists, astronomers expect that they would observe mass particle-annihilation fizzing and shimmering at its boundary with our matter-dominated space—which they don’t. Not yet, at least. Because our universe is so immense (and still expanding), researchers on AMS hypothesize that maybe these intersections are too dim or distant for our telescopes.

“We already have trouble seeing deep into our universe,” Capell says. “Because we’ve never seen a domain where matter meets antimatter, we don’t know what it would look like.”

AMS has been collecting data for six years. From about 100 billion cosmic rays, they’ve identified a few strange events with characteristics of antihelium. Because the sample is so tiny, it’s impossible to say whether these anomalous events are the first messengers from an antimatter galaxy or simply part of the chaotic background.

“It’s an exciting result,” Capell says. “However, we remain skeptical. We need data from many more cosmic rays before we can determine the identities of these anomalous particles.”

by Sarah Charley at April 11, 2017 04:05 PM

April 10, 2017

Axel Maas - Looking Inside the Standard Model

Making connections inside dead stars
Last time I wrote about our research on neutron stars. In that case we were concerned with the properties of neutron stars - its mass and size. But these are determined by the particles inside the star, the quarks and gluons and how they influence each other by the strong force.

However, a neutron star is much more than just quarks and gluons bound by gravity and the strong force.

Neutron stars are also affected by the weak force. This happens in a quite subtle way. The weak force can transform a neutron into a proton, an electron and an (anti)neutrino, and back. In a neutron star, this happens all the time. Still, the neutrons are neutrons most of the time, hence the name neutron stars. Looking into this process more microscopically, the protons and neutrons consist of quarks: the proton of two up quarks and a down quark, and the neutron of one up quark and two down quarks. Thus, what really happens is that a down quark changes into an up quark, an electron and an (anti)neutrino, and back.

As noted, this does not happen too often. But this is actually only true for a neutron star just hanging around. When neutron stars are created in a supernova, this happens very often. In particular, the star which becomes a supernova is mostly protons, which have to be converted to neutrons for the neutron star. Another case is when two neutron stars collide. Then this process becomes much more important, and more rapid. The latter is quite exciting, as the consequences may be observable in astronomy in the next few years.

So, how can the process be described? Usually, the weak force is weak, as the name says. Thus, it is usually possible to consider it a small effect. Such small effects are well described by perturbation theory. This is OK, if the neutron star just hangs around. But for collisions, or forming, the effect is no longer small. And then other methods are necessary. For the same reasons as in the case of inert neutron stars we cannot use simulations to do so. But our third possibility, the so-called equations of motion, work.

Therefore Walid Mian, a PhD student of mine, and I used these equations to study how quarks behave if we offer them a background of electrons and (anti)neutrinos. We have published a paper about our results, and I would like to outline what we found.

Unfortunately, we still cannot do the calculations exactly. So, in a sense, we cannot independently vary the amount of electrons and (anti)neutrinos, and the strength of their coupling to the quarks. Thus, we can only estimate what a more intense combination of both together means. Since this is qualitatively what we expect to happen during the collision of two neutron stars, this should be a reasonable approximation.

For a very small intensity we do not see anything but what we expect in perturbation theory. But the first surprise came already when we cranked up the intensity: new effects showed up much earlier than expected. In fact, they started to be there at intensities some factor of 10-1000 smaller than expected. Thus, the weak interaction could play a much larger role in such environments than usually assumed. That was the first insight.

The second was that the type of quark - whether it is an up or a down quark - is more relevant than expected. In particular, whether they have different masses, as in nature, or the same mass makes a big difference. If the masses are different, qualitatively new effects arise, which was not expected in this form.

The observed effects themselves are actually quite interesting: They make the quarks, depending on their type, either more sensitive or less sensitive to the weak force. This is important. When neutron stars are created or collide, they become very hot. The main way to get cooler is by dumping (anti)neutrinos into space. This becomes more efficient if the quarks react less to the weak force. Thus, our findings could have consequences on how quickly neutron stars could become colder.

We also saw that these effects only start to play a role if the quark can move inside the neutron star over a sufficiently large distance. Here, sufficiently large means about the size of a neutron. Thus the environment of a neutron star shows itself already when the quarks start to feel that they do not live in a single neutron, but rather in a neutron star, where the neutrons touch each other. All of the qualitatively new effects then started to appear.

Unfortunately, to estimate how important these new effects really are for the neutron star, we first have to understand what they mean for the neutrons. Essentially, we have to somehow carry our results over to a larger scale - what does this mean for the whole neutron? - before we can recreate our investigation of the full neutron star with these effects included. Not even to mention the impact on a collision, which is even more complicated.

Thus, our current next step is to understand what the weak interaction implies for hadrons, i.e. states of multiple quarks like the neutron. The first step is to understand how the hadron can decay and reform by the weak force, as I described earlier. The decay itself can be described already quite well using perturbation theory. But decay and reforming, or even an endless chain of these processes, cannot yet. To become able to do so is where we head next.

by Axel Maas (noreply@blogger.com) at April 10, 2017 01:09 PM

Symmetrybreaking - Fermilab/SLAC

Urban Sketchers visit Fermilab

The group brought their on-site drawing practice to the particle physics laboratory.

Group of about 30 people looks up at the camera holding their sketches

In March, about 30 participants in the Chicago chapter of the artist network Urban Sketchers visited Fermi National Accelerator Laboratory, located in west Chicagoland, and sketched their hearts out. They drew buildings, interiors and scenes of nature from the laboratory environment, capturing the laboratory's most iconic building, Wilson Hall, along with restored prairie land and the popular bison herd on site.

Urban Sketchers holds monthly “sketch crawls,” as they’re called. Their mission is to “show the world, one drawing at a time.”

Sketcher Harold Goldfus drew scenes of art and architecture.

“I regard myself as primarily a figurative artist. At the Urban Sketchers Chicago outing, I expected to sketch figures at Fermilab with hints of the environment in the background,” Goldfus said. “Instead, I found myself taken with the architecture and aesthetics of the interior of Wilson Hall, and decided on a more unconventional approach.”

The sketch crawl was organized by Peggy Condon and Wes Douglas from Urban Sketchers Chicago along with Fermilab Art Gallery curator Georgia Schwender.

“I was very inspired by Fermilab’s strong commitment to the arts. I didn’t expect this for a world-renowned scientific research institution,” said sketcher Lynne Fairchild. “I really appreciated that they found so many ways to honor the arts and culture: the art gallery, lecture series, the awe-inspiring sculptures on the campus, and the design of Wilson Hall, especially the beauty of the atrium.”

Editor's note: Fermilab previously posted a version of this article.

by Leah Hesla at April 10, 2017 01:00 PM

April 06, 2017

John Baez - Azimuth

Periodic Patterns in Peptide Masses

Gheorghe Craciun is a mathematician at the University of Wisconsin who recently proved the Global Attractor Conjecture, which since 1974 was the most famous conjecture in mathematical chemistry. This week he visited U. C. Riverside and gave a talk on this subject. But he also told me about something else—something quite remarkable.

The mystery

A peptide is basically a small protein: a chain of made of fewer than 50 amino acids. If you plot the number of peptides of different masses found in various organisms, you see peculiar oscillations:

These oscillations have a period of about 14 daltons, where a ‘dalton’ is roughly the mass of a hydrogen atom—or more precisely, 1/12 the mass of a carbon atom.

Biologists had noticed these oscillations in databases of peptide masses. But they didn’t understand them.

Can you figure out what causes these oscillations?

It’s a math puzzle, actually.

Next I’ll give you the answer, so stop looking if you want to think about it first.

The solution

Almost all peptides are made of 20 different amino acids, which have different masses, which are almost integers. So, to a reasonably good approximation, the puzzle amounts to this: if you have 20 natural numbers m_1, ... , m_{20}, how many ways can you write any natural number N as a finite ordered sum of these numbers? Call it F(N) and graph it. It oscillates! Why?

(We count ordered sums because the amino acids are stuck together in a linear way to form a protein.)
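
Here is a minimal dynamic-programming sketch (my code, with toy masses) of what F(N) counts; swapping in the 20 amino-acid masses listed further below should reproduce the oscillating counts.

    # Count ordered sums ("compositions") of N from a fixed list of part masses.
    # Toy masses here; the real amino-acid masses appear near the end of the post.
    def count_ordered_sums(masses, n_max):
        F = [0] * (n_max + 1)
        F[0] = 1                                   # the empty sum
        for N in range(1, n_max + 1):
            F[N] = sum(F[N - m] for m in masses if m <= N)
        return F

    print(count_ordered_sums([1, 2], 10))          # Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, ...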

There’s a well-known way to write down a formula for F(N). It obeys a linear recurrence:

F(N) = F(N - m_1) + \cdots + F(N - m_{20})

and we can solve this using the ansatz

F(N) = x^N

Then the recurrence relation will hold if

x^N = x^{N - m_1} + x^{N - m_2} + \dots + x^{N - m_{20}}

for all N. But this is fairly easy to achieve! If m_{20} is the biggest mass, we just need this polynomial equation to hold:

x^{m_{20}} = x^{m_{20} - m_1} + x^{m_{20} - m_2} + \dots + 1

There will be a bunch of solutions, about m_{20} of them. (If there are repeated roots things get a bit more subtle, but let’s not worry about that.) To get the actual formula for F(N) we need to find the right linear combination of functions x^N where x ranges over all the roots. That takes some work. Craciun and his collaborator Shane Hubler did that work.

But we can get a pretty good understanding with a lot less work. In particular, the root x with the largest magnitude will make x^N grow the fastest.

If you haven’t thought about this sort of recurrence relation it’s good to look at the simplest case, where we just have two masses m_1 = 1, m_2 = 2. Then the numbers F(N) are the Fibonacci numbers. I hope you know this: the Nth Fibonacci number is the number of ways to write N as the sum of an ordered list of 1’s and 2’s!

1

1+1,   2

1+1+1,   1+2,   2+1

1+1+1+1,   1+1+2,   1+2+1,   2+1+1,   2+2

If I drew edges between these sums in the right way, forming a ‘family tree’, you’d see the connection to Fibonacci’s original rabbit puzzle.

In this example the recurrence gives the polynomial equation

x^2 = x + 1

and the root with largest magnitude is the golden ratio:

\Phi = 1.6180339...

The other root is

1 - \Phi = -0.6180339...

With a little more work you get an explicit formula for the Fibonacci numbers in terms of the golden ratio:

\displaystyle{ F(N) = \frac{1}{\sqrt{5}} \left( \Phi^{N+1} - (1-\Phi)^{N+1} \right) }
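
A quick numerical check of this formula (my code), using the counts listed by hand above together with the next one, 8 ways for N = 5:

    from math import sqrt

    phi = (1 + sqrt(5)) / 2
    for N, ways in zip(range(1, 6), [1, 2, 3, 5, 8]):
        assert round((phi**(N + 1) - (1 - phi)**(N + 1)) / sqrt(5)) == ways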

But right now I’m more interested in the qualitative aspects! In this example both roots are real. The example from biology is different.

Puzzle 1. For which lists of natural numbers m_1 < \cdots < m_k are all the roots of

x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1

real?

I don’t know the answer. But apparently this kind of polynomial equation always has one root with the largest possible magnitude, which is real and has multiplicity one. I think it turns out that F(N) is asymptotically proportional to x^N where x is this root.

But in the case that’s relevant to biology, there’s also a pair of roots with the second largest magnitude, which are not real: they’re complex conjugates of each other. And these give rise to the oscillations!

For the masses of the 20 amino acids most common in life, the roots look like this:

The aqua root at right has the largest magnitude and gives the dominant contribution to the exponential growth of F(N). The red roots have the second largest magnitude. These give the main oscillations in F(N), which have period 14.28.

For the full story, read this:

• Shane Hubler and Gheorghe Craciun, Periodic patterns in distributions of peptide masses, BioSystems 109 (2012), 179–185.

Most of the pictures here are from this paper.

My main question is this:

Puzzle 2. Suppose we take many lists of natural numbers m_1 < \cdots < m_k and draw all the roots of the equations

x^{m_k} = x^{m_k - m_1} + x^{m_k - m_2} + \cdots + 1

What pattern do we get in the complex plane?

I suspect that this picture is an approximation to the answer you’d get to Puzzle 2:

If you stare carefully at this picture, you’ll see some patterns, and I’m guessing those are hints of something very beautiful.

Earlier on this blog we looked at roots of polynomials whose coefficients are all 1 or -1:

The beauty of roots.

The pattern is very nice, and it repays deep mathematical study. Here it is, drawn by Sam Derbyshire:

But now we’re looking at polynomials where the leading coefficient is 1 and all the rest are -1 or 0. How does that change things? A lot, it seems!

By the way, the 20 amino acids we commonly see in biology have masses ranging between 57 and 186. It’s not really true that all their masses are different. Here are their masses:

57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115, 128, 128, 129, 131, 137, 147, 156, 163, 186

I pretended that none of the masses m_i are equal in Puzzle 2, and I left out the fact that only about 1/9th of the coefficients of our polynomial are nonzero. This may affect the picture you get!
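
In case you want to reproduce the dominant root and the 14.28-dalton period yourself, here is a short sketch (my code) that builds the characteristic polynomial from the masses above, repeats and all, and finds its roots numerically. Computing roots of a degree-186 polynomial this way is numerically a bit delicate, but it is good enough to pick out the dominant real root and the complex pair responsible for the oscillations.

    import numpy as np

    masses = [57, 71, 87, 97, 99, 101, 103, 113, 113, 114, 115,
              128, 128, 129, 131, 137, 147, 156, 163, 186]

    deg = max(masses)
    coeffs = np.zeros(deg + 1)        # coeffs[k] multiplies x**(deg - k)
    coeffs[0] = 1.0                   # leading term x**186
    for m in masses:
        coeffs[m] -= 1.0              # repeated masses give coefficient -2

    roots = np.roots(coeffs)
    by_size = roots[np.argsort(-np.abs(roots))]
    dominant = by_size[0]                                        # real, sets the exponential growth of F(N)
    second = next(r for r in by_size[1:] if abs(r.imag) > 1e-9)  # complex pair behind the oscillations

    print("dominant root:", dominant)
    print("second-largest-magnitude root:", second)
    print("oscillation period ~", 2 * np.pi / abs(np.angle(second)), "daltons")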


by John Baez at April 06, 2017 04:20 PM

The n-Category Cafe

Applied Category Theory

The American Mathematical Society is having a meeting here at U. C. Riverside during the weekend of November 4th and 5th, 2017. I’m organizing a session on Applied Category Theory, and I’m looking for people to give talks.

The goal is to start a conversation about applications of category theory, not within pure math or fundamental physics, but to other branches of science and engineering — especially those where the use of category theory is not already well-established! For example, my students and I have been applying category theory to chemistry, electrical engineering, control theory and Markov processes.

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

More precisely, please read the information there and then click on the link in blue to submit an abstract. It should then magically fly through cyberspace to me! Abstracts are due September 12th, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

We’ll be having some interesting plenary talks:

  • Paul Balmer, UCLA, An invitation to tensor-triangular geometry.

  • Pavel Etingof, MIT, Double affine Hecke algebras and their applications.

  • Monica Vazirani, U.C. Davis, Combinatorics, categorification, and crystals.

by john (baez@math.ucr.edu) at April 06, 2017 01:48 AM

John Baez - Azimuth

Applied Category Theory

The American Mathematical Society is having a meeting here at U. C. Riverside during the weekend of November 4th and 5th, 2017. I’m organizing a session on Applied Category Theory, and I’m looking for people to give talks.

The goal is to start a conversation about applications of category theory, not within pure math or fundamental physics, but to other branches of science and engineering—especially those where the use of category theory is not already well-established! For example, my students and I have been applying category theory to chemistry, electrical engineering, control theory and Markov processes.

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

General information about abstracts, American Mathematical Society.

More precisely, please read the information there and then click on the link on that page to submit an abstract. It should then magically fly through cyberspace to me! Abstracts are due September 12th, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

Fall Western Sectional Meeting, U. C. Riverside, Riverside, California, 4–5 November 2017.

We’ll be having some interesting plenary talks:

• Paul Balmer, UCLA, An invitation to tensor-triangular geometry.

• Pavel Etingof, MIT, Double affine Hecke algebras and their applications.

• Monica Vazirani, U.C. Davis, Combinatorics, categorification, and crystals.


by John Baez at April 06, 2017 01:31 AM

April 04, 2017

Tommaso Dorigo - Scientificblogging

Winter 2017 LHC Results: The Higgs Is Still There, But...
Snow is melting in the Alps, and particle physicists, who have flocked to La Thuile for exciting ski conferences in the past weeks, are now back to their usual occupations. The pressure of the deadline is over: results have been finalized and approved, preliminary conference notes have been submitted, talks have been given. The period starting now, the one immediately following presentation of new results, when the next deadline (summer conferences!) is still far away, is more productive in terms of real thought and new ideas. Hopefully we'll come up with some new way to probe the standard model or to squeeze more information from those proton-proton collisions, lest we start to look like accountants!

read more

by Tommaso Dorigo at April 04, 2017 05:32 PM

Symmetrybreaking - Fermilab/SLAC

WIMPs in the dark matter wind

We know which way the dark matter wind should blow. Now we just have to find it.


Picture yourself in a car, your hand surfing the breeze through the open window. Hold your palm perpendicular to the wind and you can feel its force. Now picture the car slowing, rolling up to a stop sign, and feel the force of the wind lessen until it—and the car—stop. 

This wind isn’t due to the weather. It arises because of your motion relative to air molecules. Simple enough to understand and known to kids, dogs and road-trippers the world over. 

This wind has an analogue in the rarefied world of particle astrophysics called the “dark matter wind,” and scientists are hoping it will someday become a valuable tool in their investigations into that elusive stuff that apparently makes up about 85 percent of the mass in the universe. 

In the analogy above, the air molecules are dark matter particles called WIMPs, or weakly interacting massive particles. Our sun is the car, racing around the Milky Way at about 220 kilometers per second, with the Earth riding shotgun. Together, we move through a halo of dark matter that encompasses our galaxy. But our planet is a rowdy passenger; it moves from one side of the sun to the other in its orbit.

When you add the Earth’s velocity of 30 kilometers per second to the sun’s, as happens when both are traveling in the same direction (toward the constellation Cygnus), then the dark matter wind feels stronger. More WIMPs are moving through the planet than if it were at rest, resulting in a greater number of detections by experiments. Subtract that velocity when the Earth is on the other side of its orbit, and the wind feels weaker, resulting in fewer detections.
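
As a rough back-of-the-envelope sketch (my numbers beyond the 220 and 30 km/s above are assumptions: an effective projection factor of about one half for the tilt of Earth's orbit relative to the sun's motion, and a peak in early June), the detector-frame speed through the halo varies over the year roughly like this. The detection-rate modulation quoted below depends on detector thresholds and the WIMP velocity distribution, so it isn't simply equal to this speed modulation.

    import numpy as np

    v_sun, v_earth = 220.0, 30.0      # km/s, from the article
    tilt_factor = 0.5                 # assumed effective projection of Earth's orbital velocity
    peak_day = 152                    # assumed: the velocities add most strongly in early June

    day = np.arange(365)
    v = v_sun + tilt_factor * v_earth * np.cos(2 * np.pi * (day - peak_day) / 365.25)
    print("max %.0f km/s, min %.0f km/s, peak-to-peak ~%.0f%% of the mean"
          % (v.max(), v.min(), 100 * (v.max() - v.min()) / v.mean()))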

Astrophysicists have been thinking about the dark matter wind for decades. Among the first, way back in 1986, were theorist David Spergel of Princeton University and colleagues Katherine Freese of the University of Michigan and Andrzej K. Drukier (now in private industry, but still looking for WIMPs).

“We looked at how the Earth’s motion around the sun should cause the number of dark matter particles detected to vary on a regular basis by about 10 percent a year,” Spergel says.

At least that’s what should happen—if our galaxy really is embedded in a circular, basically homogeneous halo of dark matter, and if dark matter is really made up of WIMPs. 

Illustration by Corinne Mucha

The Italian experiment DAMA/NaI and its upgrade DAMA/Libra claim to have been seeing this seasonal modulation for decades, a claim that has yet to be conclusively supported by any other experiments. CoGeNT, an experiment in the Soudan Underground Laboratory in Minnesota, seemed to back them up for a time, but now the signals are thought to be caused by other sources such as high-energy gamma rays hitting a layer of material just outside the germanium of the detector, resulting in a signal that looks much like a WIMP.

Actually confirming the existence of the dark matter wind is important for one simple reason: the pattern of modulation can’t be explained by anything but the presence of dark matter. It’s what’s called a “model-independent” phenomenon. No natural backgrounds—no cosmic rays, no solar neutrinos, no radioactive decays—would show a similar modulation. The dark matter wind could provide a way to continue exploring dark matter, even if the particles are light enough that experiments cannot distinguish them from almost massless particles called neutrinos, which are constantly streaming from the sun and other sources.

“It’s a big, big prize to go after,” says Jocelyn Monroe, a physics professor at Royal Holloway University of London, who currently works on two dark matter detection experiments, DEAP-3600 at SNOLAB, in Canada, and DMTPC. “If you could correlate detections with the direction in which the planet is moving you would have unambiguous proof” of dark matter.

At the same time Spergel and his colleagues were exploring the wind’s seasonal modulation, he also realized that this correlation could extend far beyond a twice-per-year variation in detection levels. The location of the Earth in its orbit would affect the direction in which nucleons, the particles that make up the nucleus of an atom, recoil when struck by WIMPs. A sensitive-enough detector should see not only the twice-yearly variations, but even daily variations, since the detector constantly changes its orientation to the dark matter wind as the Earth rotates. 

“I had initially thought that it wasn’t worth writing up the paper because no experiment had the sensitivity to detect the recoil direction,” he says. “However, I realized that if I pointed out the effect, clever experimentalists would eventually figure out a way to detect it.”

Monroe, as the leader of the DMTPC collaboration, is a member of the clever experimentalist set. The DMTPC, or Dark Matter Time-Projection Chamber, is one of a small number of direct detection experiments that are designed to track the actual movements of recoiling atoms. 

Instead of semiconductor crystals or liquefied noble gases, these experiments use low-pressure gases as their target material. DMTPC, for example, uses carbon tetrafluoride. If a WIMP hits a molecule of carbon tetrafluoride, the low pressure in the chamber means that molecule has room to move—up to about 2 millimeters. 

“Making the detector is super hard,” Monroe says. “It has to map a 2-millimeter track in 3D.” Not to mention that reducing the number of molecules in a detector chamber reduces the chances for a dark matter particle to hit one. According to Monroe, DMTPC will deal with that issue by fabricating an array of 1-cubic-meter-sized modules. The first module has already been constructed and a worldwide collaboration of scientists from five different directional dark matter experiments (including DMTPC) are working on the next step together: a much larger directional dark matter array called the CYGNUS (for CosmoloGY with NUclear recoilS) experiment.

When and if such directional dark matter detectors raise their metaphorical fingers to test the direction of the dark matter wind, Monroe says they’ll be able to see far more than just seasonal variations in detections. Scientists will be able to see variations in atomic recoils not on a seasonal basis, but on a daily basis. Monroe envisions a sort of dark matter telescope with which to study the structure of the halo in our little corner of the Milky Way.

Or not.

There’s always a chance that this next generation of dark matter detectors, or the generation after, still won’t see anything. 

Even that, Monroe says, is progress.

“If we’re still looking in 10 years we might be able to say it’s not WIMPs but something even more exotic. As far as we can tell right now, dark matter has got to be something new out there.” 

by Lori Ann White at April 04, 2017 02:24 PM

Lubos Motl - string vacua and pheno

ATLAS: locally 3.3-sigma \(ZH\) evidence for a new \(3\TeV\) boson
About two dozen new ATLAS and CMS papers seem absolutely well-behaved. It's hard to find even a glimpse of an emerging deviation from the Standard Model. A week ago, I mentioned an outstanding B-meson anomaly which is 4.9-sigma strong.



Here I want to mention the upper-left plot of Figure 3 on Page 12 of ATLAS'
Search for Heavy Resonances Decaying to a \(W\) or \(Z\) Boson and a Higgs Boson in the \(q\bar q^{(\prime)} b\bar b\) Final State in \(pp\) Collisions at \(\sqrt s = 13\TeV\) with the ATLAS Detector
You may also look at Page 14 of the paper, Figure 4, where the Brazilian bands show a wide 3-sigmaish excess near \(m_{Z'}\sim 3\TeV\).




The local significance is 3.3 sigma, the global significance is quantified as 2.2 sigma. So it's nowhere near a discovery but it's still among the strongest deviations from the Standard Model that you may find in any new LHC paper published in 2017 so far.




As the picture embedded at the top shows, about 3 events were predicted in the interval of masses \(3,000-3,050\GeV\) but 10 events were observed. Not bad. Correct me if you can read the numbers more accurately.
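
For a rough feel for those numbers (my estimate, not the collaboration's statistical procedure, which models the background shape and its uncertainty), a naive Poisson calculation with 3 expected and 10 observed events gives a local significance in the right ballpark:

    from scipy.stats import norm, poisson

    expected, observed = 3, 10                     # numbers eyeballed from the plot above
    p_local = poisson.sf(observed - 1, expected)   # P(n >= 10 | mean 3), background-only
    print(p_local, norm.isf(p_local))              # ~1e-3, i.e. roughly 3 sigma locally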

Clearly, if this apparent fluke were a real signal, there could be a new \(Z'\)-boson whose mass would be close to \(3\TeV\). That's amusing especially because in September 2015, CMS announced an electron-positron pair whose invariant mass was \(m\sim 2.9\TeV\) – it was the energy record-breaker at that moment and a higher scorer than what was expected – and that one could have been a new \(Z'\)-boson, too.

I think that the probability is some 98% that this ATLAS excess is a fluke but if you want to be intrigued by some existing – not yet outdated – deviations from the Standard Model, this could be one of your choices.

by Luboš Motl (noreply@blogger.com) at April 04, 2017 01:01 PM

April 03, 2017

Symmetrybreaking - Fermilab/SLAC

Art intimates physics

Artist Chris Henschke’s latest piece inspired by particle physics mixes constancy with unpredictability, the natural with the synthetic.

Green piece of an accelerator machinery and a plate of fruit on a lit table

Artist Chris Henschke has spent more than a decade exploring the intersection of art and physics. His pieces bring invisible properties and theoretical concepts to light through still images, sound and video.

His latest piece, called “Song of the Phenomena,” gives new life to a retired piece of equipment once used by a long-time collaborator of Henschke, University of Melbourne and Australian Synchrotron physicist Mark Boland.

Crossing paths

The story of “Song of the Phenomena” begins in the 1990s. In 1991, Henschke enrolled in the University of Melbourne to study science, but he turned to sound design instead. Boland entered the same university to study physics.

Personal computers were just entering the market. Sound designers and animators began coding basic programs, and Henschke joined in. “I was always interested in making sounds and music, interested in light and art and physics and nature and how it all combines—either in our heads or the devices that mediate between us and nature,” he says.

Boland completed his thesis in physics at the Australian Radiation Laboratory (now called the Australian Radiation Protection and Nuclear Safety Agency). He was testing a new type of electron detector in a linear accelerator, or linac. The linac used radio waves to guide electrons through a series of accelerator cavities, which imparted more and more energy to the particles as they moved through.

That particular linac spent more than 20 years with the Australian Radiation Protection and Nuclear Safety Agency, where medical physics professionals used it to accelerate electrons to different energies to create calibration standards for radiation oncology treatments. Once they no longer needed it, Boland’s former advisor contacted him to ask if he’d like the accelerator or any of its still-working parts. He said yes, though he was unsure what he would do with it.

An artist’s view

In 2007 Henschke came to the Australian Synchrotron as part of an artist-in-residence program. Boland was familiar with his artwork; he had seen Henschke’s first piece exploring particle physics in the pages of Symmetry. Boland grew up with an appreciation for art; he says his parents made sure of that by “dragging” him through many galleries in his youth.

When Henschke and Boland met, they got into an hours-long conversation about physics. “We hit it off, we resonated,” Boland says, “and we’ve been working together ever since.”

Since that first residency program, Henschke has spent significant time at the Australian Synchrotron facility and at CERN European research center and has taken shorter trips to the DESY German national research center.

His process of creating artwork echoes the scientific process and the setup of an experiment, Boland says. Henschke thinks through the role that each piece of the artwork plays. Everything is where it is for a reason.

“He’s a perfectionist, he doesn't settle for second best,” Boland says. “He has the same level of professionalism and tenacity as an artist as a physicist does. It’s as if there’s a five-sigma quality test on his work as well.”

Song of the Phenomena

Video of Song of the Phenomena

Once accelerator, now art

Boland mentioned the linac he had to Henschke during a conversation in early 2016. “Chris ran with it,” Boland says. “He took it and made it into his installation.”

Henschke discovered the machine hums at 220 hertz—the musical note of A—as it produces its resonant waves. “In a sense, particle accelerators are gigantic, high-energy synthesizers because they are creating high-energy waves at very specific frequencies and amplitudes,” Henschke says.

Henschke explored different aspects of the machine, still unsure how each part would come together as a final piece of art. “I have to let it speak to me, I have to let it speak for itself,” he says.

Finally it dawned on him; the art could be an echo of the accelerator’s past.

The accelerator no longer accelerates electrons. Instead Henschke feeds it a steady supply of electrons and their antimatter partners, positrons. He does this by placing it beside a pile of bananas, which release the particles as their potassium decays. (Using decaying fruit was a nod to Dutch still-life vanitas paintings, Henschke says.)

Observers cannot see the electrons and positrons in the piece, but they can hear them. Henschke ensured this by adding a Geiger counter, which emits a chirp each time it detects a particle.

Visitors can also hear the accelerator itself. Henschke attached speakers and pumped up the sound of the machine’s natural hum with a stereo amp (a bit too much at first; they blew up an oscilloscope they were using to measure the frequency). He used an AM radio coil to amplify the sound of the accelerator’s electromagnetic field.

“Song of the Phenomena” plays upon resonance, amplification and decay, Henschke says. “It creates this tension between the constant hum of the device versus the unpredictability of the subatomic emission.”

The idea of playing with the analogy between the linac’s resonance and sound resonance is one that Australian Synchrotron Director Andrew Peele appreciates. “A lot of science communication is about how you find analogies that people can engage with, and this is a great example,” Peele says. 

Henschke displayed “Song of the Phenomena” at the Royal Melbourne Institute of Technology Gallery from November 17, 2016, to February 18, 2017. Since then, the apparatus has returned to the Australian Synchrotron, where it sits in a vast, open room where some of the facility’s synchrotron beamline stations used to stand. Scientists meet nearby for a weekly social coffee break.

Henschke is currently writing his thesis for his PhD in experimental art (with Boland as his advisor). In his next project, he hopes to tackle the subject of quantum entanglement.

by Liz Kruesi at April 03, 2017 01:00 PM

March 30, 2017

Axel Maas - Looking Inside the Standard Model

Building a dead star
I have written previously about how we investigate QCD to learn about neutron stars. Neutron stars are the extremely dense, small objects left over after a medium-sized star has exploded as a supernova.

For that, we decided to take a detour: we slightly modified the strong interactions. The reason for this modification was to make numerical simulations possible. In the original version of the theory this is not yet feasible, mainly because nobody has yet developed an algorithm fast enough to deliver a result within our lifetime. With the small changes we made to the theory, this changes. And therefore we now have a (rough) idea of how this theory behaves at densities relevant for neutron stars.

Now Ouraman Hajizadeh, a PhD student of mine, and I went all the way: we used these results to construct a neutron star from them. What we found is written up in a paper, and I will describe here what we learned.

The first insight was that we needed a baseline. Of course, we could compare to what astrophysics tells us about neutron stars, but we do not yet know much about their internal structure. This may change with the newly established gravitational-wave astronomy, but that will take a few years. Thus, we decided to use non-interacting neutrons as the baseline. A neutron star made of such particles is held together only by the gravitational pull and the so-called Pauli principle. This principle forbids certain types of particles, so-called fermions, from occupying the same spots. Neutrons are such fermions. Any difference from such a neutron star therefore has to be attributed to interactions.

The observed neutron stars show the existence of interactions. This is exemplified by their mass. A neutron star made out of non-interacting neutrons can have only masses which are somewhat below the mass of our sun. The heaviest neutron stars we have observed so far are more than twice the mass of our sun. The heaviest possible neutron stars could be a little bit heavier than three times our sun. Everything which is heavier would collapse further, either to a different object unknown to us, or to a black hole.

Now, the theory we investigated differs from the true strong interactions in three ways. One is that we had only one type of quark, rather than the real number. Also, our quark was heavier than the lightest quark in nature. Finally, we had more colors, and also more gluons, than in nature. Thus, our neutron has a somewhat different structure than the real one. But we used this modified version of the neutron to create our baseline, so that we can still see the effect of interactions.

Then, we cranked the machinery. This machinery is a little bit of general relativity plus thermodynamics. The former is not modified, but our theory determines the latter. What we got was a quite interesting result. First, our heaviest neutron star was much heavier than our baseline: roughly 20 to 50 percent heavier than our sun, depending on details and uncertainties. Also, a typical neutron star of this mass showed much less variation of its size than the baseline. For non-interacting neutrons, changing the maximum mass by ten percent changes the radius by a kilometer or so. In our case, this barely changed the radius at all. So our heaviest neutron stars are much more reluctant to change, and interactions indeed change the structure of a neutron star considerably.
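
For readers who want to see what "cranking the machinery" means in practice, here is a minimal sketch (not the authors' actual code) of how one turns an equation of state into a mass-radius relation by integrating the Tolman-Oppenheimer-Volkoff equations of general relativity; the simple polytropic equation of state below is only an illustrative stand-in for the one extracted from the lattice simulations.

# Minimal sketch: integrate the TOV equations for a given equation of state
# (EoS). The polytrope below mimics degenerate neutrons; the real study would
# replace it with the EoS obtained from simulations. Geometric units G = c = 1,
# lengths in km, densities and pressures in km^-2 (illustrative values only).
import numpy as np
from scipy.integrate import solve_ivp

MSUN_KM = 1.4766            # G*M_sun/c^2 in kilometres
K, GAMMA = 100.0, 2.0       # illustrative polytrope: p = K * rho**GAMMA

def rho_of_p(p):
    return (max(p, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    m, p = y
    if p <= 0.0:
        return [0.0, 0.0]
    rho = rho_of_p(p)
    dm_dr = 4.0 * np.pi * r**2 * rho
    dp_dr = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm_dr, dp_dr]

def build_star(p_central):
    """Integrate from the centre outwards until the pressure drops to ~zero."""
    surface = lambda r, y: y[1] - 1e-10 * p_central
    surface.terminal = True
    sol = solve_ivp(tov_rhs, (1e-6, 100.0), [0.0, p_central],
                    events=surface, max_step=0.05, rtol=1e-8)
    return sol.y[0, -1] / MSUN_KM, sol.t[-1]   # mass in M_sun, radius in km

# Scan central pressures to trace the mass-radius curve; its maximum is the
# heaviest stable star this equation of state supports.
for p_c in np.logspace(-4.5, -2.5, 5):
    mass, radius = build_star(p_c)
    print(f"p_c = {p_c:.1e} km^-2  ->  M = {mass:.2f} M_sun, R = {radius:.1f} km")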

Another long-standing question is what the internal structure of a neutron star is. Especially, whether it is a more or less monolithic block, except for a very thin layer close to the surface, or whether it is composed of many different layers, like our earth. In our case, we find indeed a layered structure. There is an outer surface, a kilometer or so thick, and then a different state of matter down to the core. However, the change appears to be quite soft, and there is no hard distinction. Still, our results signal that there are light neutron stars which consist only of the 'surface' material, and only heavier neutron stars have such a core of different stuff. Thus, there could be two classes of neutron stars, with different properties. However, the single-type class is lighter than those which have been observed so far. Such light neutron stars, while apparently stable, seem not, or only rarely, to be formed during the supernovas giving birth to neutron stars.

Of course, the question is, to which extent such qualitative features can be translated to the real case. We can learn more about this by doing the same in other theories. If features turn out to be generic, this points at something which may also happen for the real case. But even our case, which in a certain sense is the simplest possibility, was not trivial. It may take some time to repeat it for other theories.

by Axel Maas (noreply@blogger.com) at March 30, 2017 09:03 AM

March 29, 2017

Tommaso Dorigo - Scientificblogging

The Way I See It
Where by "It" I really mean the Future of mankind. The human race is facing huge new challenges in the XXI century, and we are only starting to get equipped to face them. 

The biggest drama of the past century was arguably caused by the two world conflicts and the subsequent transition to nuclear warfare: humanity had to learn to coexist with the impending threat of global annihilation by thermonuclear war. But today, in addition to that dreadful scenario there are now others we have to cope with.


by Tommaso Dorigo at March 29, 2017 06:58 PM

March 28, 2017

Symmetrybreaking - Fermilab/SLAC

How to make a discovery

Particle physics is a dance between theory and experiment.

Header: How to make a discovery

Meenakshi Narain, a professor of physics at Brown University, remembers working on the DZero experiment at Fermi National Accelerator Laboratory near Chicago in the winter of 1994. She would bring blankets up to her fifth-floor office to keep warm as she sat at her computer going through data in search of the then-undiscovered top quark.

For weeks, her group had been working on deciphering some extra background that originally had not been accounted for. Their conclusions contradicted the collaboration’s original assumptions.

Narain, who was a postdoctoral researcher at the time, talked to her advisor about sharing the group’s result. Her advisor told her that if she had followed the scientific method and was confident in her result, she should talk about it. 

“I had a whole sequence of logic and explanation prepared,” Narain says. “When I presented it, I remember everybody was very supportive. I had expected some pushback or some criticism and nothing like that happened.” 

This, she says, is the scientific process: A multitude of steps designed to help us explore the world we live in.

“In the end the process wins. It’s not about you or me, because we’re all going after the same thing. We want to discover that particle or phenomenon or whatever else is out there collaboratively. That’s the goal.”

Narain’s group’s analysis was essential to the collaboration’s understanding of a signal that turned out to be the elusive top quark.

Inline 1: How to make a discovery
Artwork by Sandbox Studio, Chicago

The modern hypothesis

“The scientific method was not invented overnight,” says Joseph Incandela, vice chancellor for research at the University of California, Santa Barbara. “People used to think completely differently. They thought if it was beautiful it had to be true. It took many centuries for people to realize that this is how you must approach the acquisition of true knowledge that you can verify.”

For particle physicists, says Robert Cahn, a senior scientist at Lawrence Berkeley National Laboratory, the scientific method isn’t so much going from hypothesis to conclusion, but rather “an exploration in which we measure with as much precision as possible a variety of quantities that we hope will reveal something new.

“We build a big accelerator and we might have some ideas of what we might discover, but it’s not as if we say, ‘Here’s the hypothesis and we’re going to prove or disprove it.’ If there’s a scientific method, it’s something much broader than that.” 

Scientific inquiry is more of a continuing conversation between theorists and experimentalists, says Chris Quigg, a distinguished scientist emeritus at Fermilab.

“Theorists in particular spend a lot of time telling stories, making up ideas or elaborating ideas about how something might happen,” he says. “There’s an evolution of our ideas as we engage in dialogue with experiments.” 

An important part of the process, he adds, is that the scientists are trained never to believe their own stories until they have experimental support. 

“We are often reluctant to take our ideas too seriously because we’re schooled to think about ideas as tentative,” Quigg says. “It’s a very good thing to be tentative and to have doubt. Otherwise you think you know all the answers, and you should be doing something else.”

It’s also good to be tentative because “sometimes we see something that looks tantalizingly like a great discovery, and then it turns out not to be,” Cahn says.

At the end of 2015, hints appeared in the data of the two general-purpose experiments at the Large Hadron Collider that scientists had stumbled upon a particle 750 times as massive as a proton. The hints prompted more than 500 scientific papers, each trying to tell the story behind the bump in the data.

“It’s true that if you simply want to minimize wasting your time, you will ignore all such hints until they [reach the traditional uncertainty threshold of] 5 sigma,” Quigg said. “But it’s also true that as long as they’re not totally flaky, as long as it looks possibly true, then it can be a mind-expanding exercise.”

In the case of the 750-GeV bump, Quigg says, you could tell a story in which such a thing might exist and wouldn’t contradict other things that we knew. 

“It helps to take it from just an unconnected observation to something that’s linked to everything else,” Quigg says. “That’s really one of the beauties of scientific theories, and specifically the current state of particle physics. Every new observation is linked to everything else we know, including all the old observations. It’s important that we have enough of a network of observation and interpretation that any new thing has to make sense in the context of other things.”

After collecting more data, physicists eventually ruled out the hints, and the theorists moved on to other ideas.

The importance of uncertainty

But sometimes an idea makes it further than that. Much of the work scientists put into publishing a scientific result involves figuring out how well they know it: What’s the uncertainty and how do we quantify it?

“If there’s any hallmark to the scientific method in particle physics and in closely related fields like cosmology, it’s that our results always come with an error bar,” Cahn says. “A result that doesn’t have an uncertainty attached to it has no value.”

In a particle physics experiment, some uncertainty comes from background, like the data Narain’s group found that mimicked the kind of signal they were looking for from the top quark. 

This is called systematic uncertainty, which is typically introduced by aspects of the experiment that cannot be completely known. 

“When you build a detector, you must make sure that for whatever signal you’re going to see, there is not much possibility to confuse it with the background,” says Helio Takai, a physicist at Brookhaven National Laboratory. “All the elements and sensors and electronics are designed having that in mind. You have to use your previous knowledge from all the experiments that came before.”

Careful study of your systematic uncertainties is the best way to eliminate bias and get reliable results.

“If you underestimate your systematic uncertainty, then you can overestimate the significance of the signal,” Narain says. “But if you overestimate the systematic uncertainty, then you can kill your signal. So, you really are walking this fine line in understanding where the issues may be. There are various ways the data can fool you. Trying to be aware of those ways is an art in itself and it really defines the thinking process.”

Physicists also must think about statistical uncertainty, which, unlike systematic uncertainty, is simply the consequence of having a limited amount of data.

“For every measurement we do, there’s a possibility that the measurement is a wrong measurement just because of all the events that happen at random while we are doing the experiment,” Takai says. “In particle physics, you’re producing many particles, so a lot of these particles may conspire and make it appear like the event you’re looking for.”

You can think of it as putting your hand inside a bag of M&Ms, Takai says. If the first few M&Ms you picked were brown and you didn’t know there were other colors, you would think the entire bag was brown. It wouldn’t be until you finally pulled out a blue M&M that you realized that the bag had more than one color. 
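
A toy simulation in the spirit of Takai's analogy (purely illustrative; the 10 percent blue fraction is made up) shows how easily a small sample can fool you:

# Toy version of the M&M analogy: a bag that is 90% brown and 10% blue.
# How often does a small handful contain no blue M&Ms at all, so that you
# would wrongly conclude the whole bag is brown?
import random

def handful_is_all_brown(n_picked, blue_fraction=0.1):
    return all(random.random() > blue_fraction for _ in range(n_picked))

trials = 100_000
for n in (3, 10, 30):
    fooled = sum(handful_is_all_brown(n) for _ in range(trials)) / trials
    print(f"picking {n:2d} M&Ms: fooled in about {100 * fooled:.1f}% of trials")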

Particle physicists generally want their results to have a statistical significance corresponding to at least 5 sigma, a measure that means that there is only a 0.00003 percent chance of a statistical fluctuation giving an excess as big or bigger than the one observed.
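
As a quick numerical check of that statement (a minimal sketch, not any experiment's analysis code), the one-sided Gaussian tail probability for a given number of sigma can be computed directly:

# Convert a significance in sigma to the one-sided Gaussian tail probability:
# the chance that a pure statistical fluctuation gives an excess at least that
# large. 5 sigma corresponds to roughly 3e-7, i.e. about 0.00003 percent.
from scipy.stats import norm

for n_sigma in (3, 5):
    p = norm.sf(n_sigma)   # survival function = 1 - CDF
    print(f"{n_sigma} sigma  ->  p = {p:.2e}  ({100 * p:.5f} percent)")

# Inverse direction: the significance corresponding to a given p-value.
print(f"p = 2.9e-7  ->  {norm.isf(2.9e-7):.2f} sigma")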

Inline 2: How to make a discovery
Artwork by Sandbox Studio, Chicago

 

The scientific method at work

One of the most stunning recent examples of the scientific method – careful consideration of statistical and systematic uncertainties coming together – came in 2012, at the moment the spokespersons for the ATLAS and CMS experiments at the LHC revealed the discovery of the Higgs boson. 

More than half a century of theory and experimentation led up to that moment. Experiments from the 1950s on had accumulated a wealth of information on particle interactions, but the interactions were only partially understood and seemed to come from disconnected sources.

“But brilliant theoretical physicists found a way to make a single model that gave them a good description of all the known phenomena,” says Incandela, who was spokesperson for the CMS experiment during the Higgs discovery. “It wasn’t guaranteed that the Higgs field existed. It was only guaranteed that this model works for everything we do and have already seen, and we needed to see if there really was a boson that we could find that could tell us in fact that that field is there.”

This led to a generation-long effort to build an accelerator that would reach the extremely high energies needed to produce the Higgs boson, a particle born of the Higgs field, and then two gigantic detectors that could detect the Higgs boson if it appeared.

Building two different detectors would allow scientists to double-check their work. If an identical signal appeared in two separate experiments run by two separate groups of physicists, chances were quite good that it was the real thing. 

“So there you saw a really beautiful application of the scientific method where we confirmed something that was incredibly difficult to confirm, but we did it incredibly well with a lot of fail-safes and a lot of outstanding experimental approaches,” Incandela says. “The scientific method was already deeply engrained in everything we did to the greatest extreme. And so we knew when we saw these things that they were real, and we had to take them seriously.” 

The scientific method is so engrained that scientists don’t often talk about it by name anymore, but implementing it “is what separates the great scientists from the average scientists from the poor scientists,” Incandela says. “It takes a lot of scrutiny and a deep understanding of what you’re doing.”

by Ali Sundermier at March 28, 2017 03:16 PM

Lubos Motl - string vacua and pheno

\(B\)-meson \(b\)-\(s\)-\(\mu\)-\(\mu\) anomaly remains at 4.9 sigma after Moriond
There was no obvious announcement of new physics at Moriond 2017, one that would have settled supersymmetry or other bets in a groundbreaking direction, but that doesn't mean that the Standard Model is absolutely consistent with all observations.

In recent years, the LHCb collaboration has claimed various deviations of their observations of mostly \(B\)-meson decays from the Standard Model predictions. A new paper was released yesterday, summarizing the situation after Moriond 2017:
Status of the \(B\to K^*\mu^+\mu^−\) anomaly after Moriond 2017
Wolfgang Altmannshofer, Christoph Niehoff, Peter Stangl, David M. Straub (the German language is so effective with these one-syllable surnames, isn't it?) and Matthias Rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz have looked at the tension with the newest data.



The Good-lookers, Matterhorn (1975): In the morning, they started their journey at CERN (or in Bern). I've made the would-be witty replacement of Bern with CERN so many times that I am not capable of singing this verse reliably correctly anymore!

The new data include the angular distribution of the decay mentioned in the title, as measured by the major (ATLAS and CMS) detectors.




Microscopically, at the level of quarks and leptons, these decays of the \(B\)-mesons correspond to the\[

b\to s + \mu^+ + \mu^-

\] transformation of the bottom-quark.




There seems to be a deviation from the Standard Model. But they see that the deviation doesn't seem to visibly depend on \(q^2\) and it's independent of the helicities, too. The first fact encourages them to explain the "extra processes" by an extra four-fermion interaction including the fermions \(b,s,\mu,\mu\). There are various tensor structures that allow you to contract the four spinors in the four-fermion interactions and once they look carefully, the deviation from the Standard Model seems to be maximally hiding in the new physics (NP) term in the Hamiltonian:\[

\eq{
\mathcal{H}_{\rm eff} &= -\frac{4 G_F}{\sqrt{2}} V_{tb} V^*_{ts} \frac{e^2}{16\pi^2} \cdot C_9 O_9 + {\rm h.c.},\\
O_9 &= (\bar s \gamma_\mu P_L b) (\bar \ell \gamma^\mu \ell)
}

\] There are numerous other possible terms a priori, up to \(O_{10}\). Also, analogous operators may have primes and the prime indicates the replacement of \(P_L\) with \(P_R\).



If you memorize this song about quarks, you should understand all the four-fermion interactions unless you will conclude that the song is about cheese, as one of the singers did. The ladies from the girl band – those on the first photograph ever posted on the web – are planning a comeback and look for donations.

At any rate, only the evidence in favor of a nonzero coefficient \(C_9\) from new physics seems strong enough to deserve the paper – and the TRF blog post – and the best fit value of \(C_9\) seems to be negative and\[

C_9 = -1.21 \pm 0.22

\] which means that the experimental data indicate that \(C_9\) is nonzero (it should be zero in the Standard Model) at the 4.9-sigma level. Not bad. Well, there is also a similar but weaker anomaly for \(C_{10}\) that multiplies a similar operator with an extra \(\gamma_5\) and whose best fit is:\[

\eq{
O_{10} &= (\bar s \gamma_\mu P_L b) (\bar \ell \gamma^\mu \gamma_5 \ell)\\
C_{10} &= +0.69\pm 0.25
}

\] which differs from the Standard Model's zero by 2.9 sigma. The numbers make it clear that the hypothesis that \(C_{9}=-C_{10}\) is rather compatible with the data, too, within one sigma, and the best fit for this \(C_{9}=-C_{10}\) is \(-0.62\pm 0.14\) or so, a 4.2-sigma deviation from zero (I believe that \(-0.62\pm 0.14\) should really be multiplied by \(\sqrt{2}\) but let me not make this confusion too visible).
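
A crude way to reproduce the flavour of these numbers (a naive Gaussian pull, i.e. the best fit divided by its uncertainty; the significances quoted above come from the full likelihood fit in the paper, so they differ somewhat from these):

# Naive pulls: best-fit Wilson coefficient divided by its quoted uncertainty.
# This assumes a Gaussian likelihood, so it only roughly reproduces the
# significances quoted in the text, which come from the full likelihood scan.
fits = {"C9": (-1.21, 0.22), "C10": (0.69, 0.25), "C9 = -C10": (-0.62, 0.14)}
for name, (best, err) in fits.items():
    print(f"{name:9s}: best fit {best:+.2f} +/- {err:.2f}  ->  |pull| ~ {abs(best) / err:.1f} sigma")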

The German/Ohio authors translate this effect to various other parameterizations of the LFUs (lepton flavor universality parameters) and if I understand the ultimate claim well, they basically say that similar anomalies from ATLAS+CMS, LHCb, and Belle seem to be consistent with each other and with the extra new physics term that was proposed above.

Some skeptics could say that these anomalies could be due to some difficult QCD effects. But the bottom-quark is pretty heavy and is therefore "ignoring" the gluey, sticky environment around itself, so I tend to think that the deviation from the Standard Model is rather exciting.



I've made fun of the German language so I want to make sure that the U.S. readers don't think that they're untouchable. ;-)

If the effect exists, the authors say, clear deviations from the Standard Model could be established by the experiments very soon.

Theoretically, I would try to explain this four-fermion interaction by the exchange of a new gauge boson or a scalar particle but I am not capable of giving you a more refined let alone stringy inspired detailed story about this new effect at this moment.

by Luboš Motl (noreply@blogger.com) at March 28, 2017 01:38 PM

March 24, 2017

Symmetrybreaking - Fermilab/SLAC

A new gem inside the CMS detector

This month scientists embedded sophisticated new instruments in the heart of a Large Hadron Collider experiment.

Close-up of a person wearing a cleanroom mask and a helmet with a light next to colorful cables.

Sometimes big questions require big tools. That’s why a global community of scientists designed and built gigantic detectors to monitor the high-energy particle collisions generated by CERN’s Large Hadron Collider in Geneva, Switzerland. From these collisions, scientists can retrace the footsteps of the Big Bang and search for new properties of nature.

The CMS experiment is one such detector. In 2012, it co-discovered the elusive Higgs boson with its sister experiment, ATLAS. Now, scientists want CMS to push beyond the known laws of physics and search for new phenomena that could help answer fundamental questions about our universe. But to do this, the CMS detector needed an upgrade.

“Just like any other electronic device, over time parts of our detector wear down,” says Steve Nahn, a researcher in the US Department of Energy’s Fermi National Accelerator Laboratory and the US project manager for the CMS detector upgrades. “We’ve been planning and designing this upgrade since shortly after our experiment first started collecting data in 2010.”

The CMS detector is built like a giant onion. It contains layers of instruments that track the trajectory, energy and momentum of particles produced in the LHC’s collisions. The vast majority of the sensors in the massive detector are packed into its center, within what is called the pixel detector. The CMS pixel detector uses sensors like those inside digital cameras but with a lightning fast shutter speed: In three dimensions, they take 40 million pictures every second.

For the last several years, scientists and engineers at Fermilab and 21 US universities have been assembling and testing a new pixel detector to replace the current one as part of the CMS upgrade, with funding provided by the Department of Energy Office of Science and National Science Foundation.

The pixel detector consists of three sections: the innermost barrel section and two end caps called the forward pixel detectors. The tiered and can-like structure gives scientists a near-complete sphere of coverage around the collision point. Because the three pixel detectors fit on the beam pipe like three bulky bracelets, engineers designed each component as two half-moons, which latch together to form a ring around the beam pipe during the insertion process.

Over time, scientists have increased the rate of particle collisions at the LHC. In 2016 alone, the LHC produced about as many collisions as it had in the three years of its first run combined. To be able to differentiate between dozens of simultaneous collisions, CMS needed a brand new pixel detector.

The upgrade packs even more sensors into the heart of the CMS detector. It’s as if CMS graduated from a 66-megapixel camera to a 124-megapixel camera.

Each of the two forward pixel detectors is a mosaic of 672 silicon sensors, robust electronics and bundles of cables and optical fibers that feed electricity and instructions in and carry raw data out, according to Marco Verzocchi, a Fermilab researcher on the CMS experiment.

The multipart, 6.5-meter-long pixel detector is as delicate as raw spaghetti. Installing the new components into a gap the size of a manhole required more than just finesse. It required months of planning and extreme coordination.

“We practiced this installation on mock-ups of our detector many times,” says Greg Derylo, an engineer at Fermilab. “By the time we got to the actual installation, we knew exactly how we needed to slide this new component into the heart of CMS.”

The most difficult part was maneuvering the delicate components around the pre-existing structures inside the CMS experiment.

“In total, the full three-part pixel detector consists of six separate segments, which fit together like a three-dimensional cylindrical puzzle around the beam pipe,” says Stephanie Timpone, a Fermilab engineer. “Inserting the pieces in the right positions and right order without touching any of the pre-existing supports and protections was a well-choreographed dance.”

For engineers like Timpone and Derylo, installing the pixel detector was the last step of a six-year process. But for the scientists working on the CMS experiment, it was just the beginning.

“Now we have to make it work,” says Stefanos Leontsinis, a postdoctoral researcher at the University of Colorado, Boulder. “We’ll spend the next several weeks testing the components and preparing for the LHC restart.”

by Sarah Charley at March 24, 2017 01:00 PM

March 21, 2017

Symmetrybreaking - Fermilab/SLAC

High-energy visionary

Meet Hernán Quintana Godoy, a scientist who helped make Chile central to international astronomy.

Header: High-energy visionary

Professor Hernán Quintana Godoy has a way of taking the long view, peering back into the past through distant stars while looking ahead to the future of astronomy in his home, Chile. 

For three decades, Quintana has helped shape the landscape of astronomy in Chile, host to some of the largest ground-based observatories in the world.

In January he became the first recipient of the Education Prize of the American Astronomical Society from a country other than the United States or Canada.     

“Training the next generation of astronomers should not be limited to just a few countries,” says Keely Finkelstein, former chair of the AAS Education Prize Committee. “[Quintana] has been a tireless advocate for establishing excellent education and research programs in Chile.” 

Quintana earned his doctorate from the University of Cambridge in the United Kingdom in 1973. The same year, a military junta headed by General Augusto Pinochet took power in a coup d’état. 

Quintana came home and secured a teaching position at the University of Chile. At the time, Chilean researchers mainly focused on the fundamentals of astronomy—measuring the radiation from stars and calculating the coordinates of celestial objects. By contrast, Quintana’s dissertation on high-energy phenomena seemed downright radical. 

A year and a half after taking his new job, Quintana was granted a leave of absence to complete a post-doc abroad. Writing from the United States, Quintana published an article encouraging Chile to take better advantage of its existing international observatories. He urged the government to provide more funding and to create an environment that would encourage foreign-educated astronomers to return home to Chile after their postgraduate studies. The article did not go over well with the administration at his university.

“I wrote it for a magazine that was clearly against Pinochet,” Quintana says. “The magazine cover was a black page with a big ‘NO’ in red” related to an upcoming referendum.

The University of Chile dissolved Quintana’s teaching position. 

Quintana became a wandering postdoc and research associate in Europe, the US and Canada. It wasn’t until 1981 that Quintana returned to teach at the Physics Institute at the Pontifical Catholic University of Chile (PUC). 

He continued to push the envelope at PUC. He created elective courses on general astronomy, extragalactic astrophysics and cluster dynamics. He revived and directed a small astronomy group. He encouraged students to expand their horizons by hiring both Chilean and foreign teachers and sending students to study abroad.

“Because of him I took advantage of most of the big observatories in Chile and had an international perspective of research from the very beginning of my career,” says Amelia Ramirez, who studied with Quintana in 1983. A specialist in interacting elliptical galaxies, she is now head of Research and Development in University of La Serena.

In the mid-1980s Quintana became the scriptwriter for a set of distance-learning astronomy classes produced by the educational division of his university’s public TV channel, TELEDUC. He challenged his viewers to take on advanced topics—and they responded.

 

Inline 1: High-energy visionary
Illustration by Corinne Mucha

“I even introduced two episodes on relativity theory,” Quintana says. “This shocked them. The reception was so good that I wrote a whole book on the subject.” 

The station partnered with universities and institutions across Chile to provide viewers the opportunity to earn a diploma by taking a written test based on the televised material. More than 5000 people enrolled during the four-year broadcasting period. 

“What stands out [about Quintana] is his strategic vision and his creativity to materialize projects,” says Alejandro Clocchiatti, a professor at PUC who worked with Quintana for 20 years. “All he does is with dedication and enthusiasm, even if things don’t go according to plan. He’s got an unbeatable optimism.” 

Over the years, Quintana has had a hand in planning the locations of multiple new telescopes in Chile. In 1994 he guided an expedition to identify the location of the Atacama Large Millimeter Array, a collection of 66 high-precision antennae.

In 1998, PUC finally responded to decades of advocating by Quintana and his colleagues and opened a new major in astronomy. Gradually more universities followed suit. 

Quintana retired three years ago. He is optimistic about the future of Chilean astronomy. It has grown from a collection of 25 professors and their students in the late ’90s to a community of more than 800 students, teachers and researchers.

He says he is looking forward to the discoveries that forthcoming instruments will bring. The European Extremely Large Telescope, under construction on Cerro Armazones in the Atacama Desert of northern Chile, is expected to produce images 16 times sharper than Hubble’s. The southern facilities of the Cherenkov Telescope Array, a planned collection of 99 telescopes in Chile, will complement a northern array to complete the world’s most sensitive high-energy gamma-ray observatory. Both arrangements will peer into super-massive black holes, the atmospheres of extra-solar planets, and the origin of relativistic cosmic particles. 

“Everything in our universe is constantly changing,” Quintana says. “We are all heirs of that structural evolution.”

by Oscar Miyamoto at March 21, 2017 02:17 PM

Clifford V. Johnson - Asymptotia

News from the Front, XIII: Holographic Heat Engines for Fun and Profit

I put a set of new results out on to the arxiv recently. They were fun to work out. They represent some of my continued fascination with holographic heat engines, those things I came up with back in 2014 that I think I've written about here before (here and here). For various reasons (that I've explained in various papers) I like to think of them as an answer waiting for the right question, and I've been refining my understanding of them in various projects, trying to get clues to what the question or questions might be.

As I've said elsewhere, I seem to have got into the habit of using 21st Century techniques to tackle problems of a 19th Century flavour! The title of the paper is "Approaching the Carnot limit at finite power: An exact solution". As you may know, the Carnot engine, whose efficiency is the best a heat engine can do (for specified temperatures of exchange with the hot and cold reservoirs), is itself not a useful practical engine. It is a perfectly reversible engine and as such takes infinite time to run a cycle. A zero power engine is not much practical use. So you might wonder how close a real engine can come to the Carnot efficiency... the answer should be that it can come arbitrarily close, but most engines don't, and so people who care about this sort of thing spend a lot of time thinking about how to design special engines that can come close. And there are various arguments you can make for how to do it in various special systems and so forth. It's all very interesting and there's been some important work done.

What I realized recently is that my old friends the holographic heat engines are a very good tool for tackling this problem. Part of the reason is that the underlying working substance that I've been using is a black hole (or, if you prefer, is defined by a black hole), and such things are often captured as exact [...]

The post News from the Front, XIII: Holographic Heat Engines for Fun and Profit appeared first on Asymptotia.

by Clifford at March 21, 2017 05:19 AM

March 19, 2017

Jaques Distler - Musings

Responsibility

Many years ago, when I was an assistant professor at Princeton, there was a cocktail party at Curt Callan’s house to mark the beginning of the semester. There, I found myself in the kitchen, chatting with Sacha Polyakov. I asked him what he was going to be teaching that semester, and he replied that he was very nervous because — for the first time in his life — he would be teaching an undergraduate course. After my initial surprise that he had gotten this far in life without ever having taught an undergraduate course, I asked which course it was. He said it was the advanced undergraduate Mechanics course (chaos, etc.) and we agreed that would be a fun subject to teach. We chatted some more, and then he said that, on reflection, he probably shouldn’t be quite so worried. After all, it wasn’t as if he was going to teach Quantum Field Theory, “That’s a subject I’d feel responsible for.”

This remark stuck with me, but it never seemed quite so poignant until this semester, when I find myself teaching the undergraduate particle physics course.

The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) makes no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air).

This drives me up the #@%^ing wall. It is precisely wrong.

There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions. In Special Relativity, there is no interaction-at-a-distance; all forces are necessarily mediated by fields. Those fields fluctuate and, when you want to study the quantum theory, you end up having to quantize them.

But the free particle is just fine. Of course it has to be: free field theory is just the theory of an (indefinite number of) free particles. So it better be true that the quantum theory of a single relativistic free particle makes sense.

So what is that theory?

  1. It has a Hilbert space, \(\mathcal{H}\), of states. To make the action of Lorentz transformations as simple as possible, it behoves us to use a Lorentz-invariant inner product on that Hilbert space. This is most easily done in the momentum representation \[ \langle\chi|\phi\rangle = \int \frac{d^3\vec{k}}{(2\pi)^3\, 2\sqrt{\vec{k}^2+m^2}}\, \chi(\vec{k})^* \phi(\vec{k}) \]
  2. As usual, the time-evolution is given by a Schrödinger equation
(1)\[ i\partial_t |\psi\rangle = H_0 |\psi\rangle \]

where \(H_0 = \sqrt{\vec{p}^2+m^2}\). Now, you might object that it is hard to make sense of a pseudo-differential operator like \(H_0\). Perhaps. But it’s not any harder than making sense of \(U(t)= e^{-i \vec{p}^2 t/2m}\), which we routinely pretend to do in elementary quantum. In both cases, we use the fact that, in the momentum representation, the operator \(\vec{p}\) is represented as multiplication by \(\vec{k}\).

I could go on, but let me leave the rest of the development of the theory as a series of questions.

  1. The self-adjoint operator, \(\vec{x}\), satisfies \[ [x^i,p_j] = i \delta^{i}_j \] Thus it can be written in the form \[ x^i = i\left(\frac{\partial}{\partial k_i} + f_i(\vec{k})\right) \] for some real function \(f_i\). What is \(f_i(\vec{k})\)?
  2. Define \(J^0(\vec{r})\) to be the probability density. That is, when the particle is in state \(|\phi\rangle\), the probability for finding it in some Borel subset \(S\subset\mathbb{R}^3\) is given by \[ \text{Prob}(S) = \int_S d^3\vec{r}\, J^0(\vec{r}) \] Obviously, \(J^0(\vec{r})\) must take the form \[ J^0(\vec{r}) = \int\frac{d^3\vec{k}\, d^3\vec{k}'}{(2\pi)^6\, 4\sqrt{\vec{k}^2+m^2}\sqrt{\vec{k}'^2+m^2}}\, g(\vec{k},\vec{k}')\, e^{i(\vec{k}-\vec{k}')\cdot\vec{r}}\, \phi(\vec{k})\, \phi(\vec{k}')^* \] Find \(g(\vec{k},\vec{k}')\). (Hint: you need to diagonalize the operator \(\vec{x}\) that you found in problem 1.)
  3. The conservation of probability says \[ 0=\partial_t J^0 + \partial_i J^i \] Use the Schrödinger equation (1) to find \(J^i(\vec{r})\).
  4. Under Lorentz transformations, \(H_0\) and \(\vec{p}\) transform as the components of a 4-vector. For a boost in the \(z\)-direction, of rapidity \(\lambda\), we should have \[ \begin{split} U_\lambda \sqrt{\vec{p}^2+m^2}\, U_\lambda^{-1} &= \cosh(\lambda) \sqrt{\vec{p}^2+m^2} + \sinh(\lambda)\, p_3\\ U_\lambda p_1 U_\lambda^{-1} &= p_1\\ U_\lambda p_2 U_\lambda^{-1} &= p_2\\ U_\lambda p_3 U_\lambda^{-1} &= \sinh(\lambda) \sqrt{\vec{p}^2+m^2} + \cosh(\lambda)\, p_3 \end{split} \] and we should be able to write \(U_\lambda = e^{i\lambda B}\) for some self-adjoint operator, \(B\). What is \(B\)? (N.B.: by contrast the \(x^i\), introduced above, do not transform in a simple way under Lorentz transformations.)

The Hilbert space of a free scalar field is now \(\bigoplus_{n=0}^\infty \text{Sym}^n\mathcal{H}\). That’s perhaps not the easiest way to get there. But it is a way …

Update:

Yike! Well, that went south pretty fast. For the first time (ever, I think) I’m closing comments on this one, and calling it a day. To summarize, for those who still care,

  1. There is a decomposition of the Hilbert space of a Free Scalar field as \[ \mathcal{H}_\phi = \bigoplus_{n=0}^\infty \mathcal{H}_n \] where \(\mathcal{H}_n = \text{Sym}^n \mathcal{H}\) and \(\mathcal{H}\) is the 1-particle Hilbert space described above (also known as the spin-\(0\), mass-\(m\), irreducible unitary representation of Poincaré).
  2. The Hamiltonian of the Free Scalar field is the direct sum of the induced Hamiltonians on \(\mathcal{H}_n\), induced from the Hamiltonian, \(H=\sqrt{\vec{p}^2+m^2}\), on \(\mathcal{H}\). In particular, it (along with the other Poincaré generators) is block-diagonal with respect to this decomposition.
  3. There are other interesting observables which are also block-diagonal with respect to this decomposition (i.e., don’t change the particle number) and hence we can discuss their restriction to \(\mathcal{H}_n\).

Gotta keep reminding myself why I decided to foreswear blogging…

by distler (distler@golem.ph.utexas.edu) at March 19, 2017 07:48 AM

March 17, 2017

Symmetrybreaking - Fermilab/SLAC

Q&A: Dark matter next door?

Astrophysicists Eric Charles and Mattia Di Mauro discuss the surprising glow of our neighbor galaxy. 

Image of the gamma-ray glow in Andromeda captured by the Fermi satellite

Astronomers recently discovered a stronger-than-expected glow of gamma rays at the center of the Andromeda galaxy, the nearest major galaxy to the Milky Way. The signal has fueled hopes that scientists are zeroing in on a sign of dark matter, which is five times more prevalent than normal matter but has never been detected directly. 

Researchers believe that gamma rays—a very energetic form of light—could be produced when hypothetical dark matter particles decay or collide and destroy each other. However, dark matter isn’t the only possible source of the gamma rays. A number of other cosmic processes are known to produce them. 

So what do Andromeda’s gamma rays really tell us about dark matter? To find out, Symmetry’s Manuel Gnida talked with Eric Charles and Mattia Di Mauro, two members of the Fermi-LAT collaboration—an international team of researchers that found the Andromeda gamma-ray signal using the Large Area Telescope, a sensitive “eye” for gamma rays on NASA’s Fermi Gamma-ray Space Telescope. 

Both researchers are based at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. The LAT was conceived of and assembled at SLAC, which also hosts its operations center.

KIPAC researchers Eric Charles and Mattia Di Mauro

Dawn Harmer, SLAC National Accelerator Laboratory

Have you discovered dark matter?

MD:

No, we haven’t. In the study, the LAT team looked at the gamma-ray emissions of the Andromeda galaxy and found something unexpected, something we don’t fully understand yet. But there are other potential astrophysical explanations than dark matter.

It’s also not the first time that the LAT collaboration has studied Andromeda with Fermi, but in the old data the galaxy only looked like a big blob. With more data and improved data processing, we have now obtained a much clearer picture of the galaxy’s gamma-ray glow and how it’s distributed.

What’s so unusual about the results?

EC:

As a spiral galaxy, Andromeda is similar to the Milky Way. Therefore, we expected the emissions of both galaxies to look similar. What we discovered is that they are, in fact, quite different. 

In our galaxy, gamma rays come from all kinds of locations—from the center and the spiral arms in the outer regions. For Andromeda, on the other hand, the signal is concentrated at the center.

Why do galaxies glow in gamma rays?

EC:

The answer depends on the type of galaxy. There are active galaxies called blazars. They emit gamma rays when matter in close orbit around supermassive black holes generates jets of plasma. And then there are “normal” galaxies like Andromeda and the Milky Way that produce gamma rays in other ways.

When we look at the emissions of the Milky Way, the galaxy appears like a bright disk, with the somewhat brighter galactic center at the center of the disk. Most of this glow is diffuse and comes from the gas between the stars that lights up when it’s hit by cosmic rays—energetic particles spit out by star explosions or supernovae. 

Other gamma-ray sources are the remnants of such supernovae and pulsars—extremely dense, magnetized, rapidly rotating neutron stars. These sources show up as bright dots in the gamma-ray map of the Milky Way, except at the center where the density of gamma-ray sources is high and the diffuse glow of the Milky Way is brightest, which prevents the LAT from detecting individual sources.

Andromeda is too far away for us to see individual gamma-ray sources, so it only has a diffuse glow in our images. But we expected most of the emission to come from the disk as well. Its absence suggests that there is less interaction between gas and cosmic rays in our neighbor galaxy. Since this interaction is tied to the formation of stars, this also suggests that Andromeda had a different history of star formation than the Milky Way.

The sky in gamma rays with energies greater than 1 gigaelectronvolt, based on eight years of data from the LAT on NASA’s Fermi Gamma-ray Space Telescope.

NASA/DOE/Fermi LAT Collaboration

What does all this have to do with dark matter?

MD:

When we carefully analyze the gamma-ray emissions of the Milky Way and model all the gas and point-like sources to the best of our knowledge, then we’re left with an excess of gamma rays at the galactic center. Some people have argued this excess could be a telltale sign of dark matter particles. 

We know that the concentration of dark matter is largest at the galactic center, so if there were a dark matter signal, we would expect it to come from there. The localization of gamma-ray emissions at Andromeda’s center seems to have renewed the interest in the dark matter interpretation in the media.

Is dark matter the most likely interpretation?

EC:

No, there are other explanations. There are so many gamma-ray sources at the galactic center that we can’t really see them individually. This means that their light merges into an extended, diffuse glow.

In fact, two recent studies from the US and the Netherlands have suggested that this glow in the Milky Way could be due to unresolved point sources such as pulsars. The same interpretation could also be true for Andromeda’s signal.

What would it take to know for certain?

MD:

To identify a dark matter signal, we would need to exclude all other possibilities. This is very difficult for a complex region like the galactic center, for which we don’t even know all the astrophysical processes. Of course, this also means that, for the same reason, we can’t completely rule out the dark matter interpretation.

But what’s really important is that we would want to see the same signal in a few different places. However, we haven’t detected any gamma-ray excesses in other galaxies that are consistent with the ones in the Milky Way and Andromeda. 

This is particularly striking for dwarf galaxies, small companion galaxies of the Milky Way that only have few stars. These objects are only held together because they are dominated by dark matter. If the gamma-ray excess at the galactic center were due to dark matter, then we should have already seen similar signatures in the dwarf galaxies. But we don’t.

by Manuel Gnida at March 17, 2017 04:59 PM

March 14, 2017

Symmetrybreaking - Fermilab/SLAC

The life of an accelerator

As it evolves, the SLAC linear accelerator illustrates some important technologies from the history of accelerator science.

Header: The life of an accelerator

Tens of thousands of accelerators exist around the world, producing powerful particle beams for the benefit of medical diagnostics, cancer therapy, industrial manufacturing, material analysis, national security, and nuclear as well as fundamental particle physics. Particle beams can also be used to produce powerful beams of X-rays. 

Many of these particle accelerators rely on artfully crafted components called cavities. 

The world’s longest linear accelerator (also known as a linac) sits at the Department of Energy’s SLAC National Accelerator Laboratory. It stretches two miles and accelerates bunches of electrons to very high energies. 

The SLAC linac has undergone changes in its 50 years of operation that illustrate the evolution of the science of accelerator cavities. That evolution continues and will determine what the linac does next.

Inline_1_Cavities
Illustration by Corinne Mucha

Robust copper

An accelerator cavity is a mostly closed, hollow chamber with an opening on each side for particles to pass through. As a particle moves through the cavity, it picks up energy from an electromagnetic field stored inside. Many cavities can be lined up like beads on a string to generate higher and higher particle energies. 

When SLAC’s linac first started operations, each of its cavities was made exclusively from copper. Each tube-like cavity consisted of a 1-inch-long, 4-inch-wide cylinder with disks on either side. Technicians brazed together more than 80,000 cavities to form a straight particle racetrack.  

Scientists generate radiofrequency waves in an apparatus called a klystron that distributes them to the cavities. Each SLAC klystron serves a 10-foot section of the beam line. The arrival of the electron bunch inside the cavity is timed to match the peak in the accelerating electric field. When a particle arrives inside the cavity at the same time as the peak in the electric field, then that bunch is optimally accelerated. 

“Particles only gain energy if the variable electric field precisely matches the particle motion along the length of the accelerator,” says Sami Tantawi, an accelerator physicist at Stanford University and SLAC. “The copper must be very clean and the shape and size of each cavity must be machined very carefully for this to happen.”

In its original form, SLAC’s linac boosted electrons and their antimatter siblings, positrons, to an energy of 50 billion electronvolts. Researchers used these beams of accelerated particles to study the inner structure of the proton, which led to the discovery of fundamental particles known as quarks.
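
A back-of-the-envelope check using only the figures in this article (a two-mile machine reaching 50 billion electronvolts) gives the average accelerating gradient; treat it as a rough sketch rather than an official specification:

# Average accelerating gradient of the original SLAC linac, estimated from
# the numbers quoted in this article: 50 GeV gained over a two-mile machine.
linac_length_m = 2 * 1609.34      # two miles, in metres
energy_gain_eV = 50e9             # 50 billion electronvolts

gradient = energy_gain_eV / linac_length_m   # electronvolts per metre
print(f"average gradient ~ {gradient / 1e6:.0f} MV/m")   # roughly 16 MV/m,
# i.e. "tens of millions of electronvolts per meter"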

Today almost all accelerators in the world—including smaller systems for medical and industrial applications—are made of copper. Copper is a good electric conductor, which is important because the radiofrequency waves build up an accelerating field by creating electric currents in the cavity walls. Copper can be machined very smoothly and is cheaper than other options, such as silver.  

“Copper accelerators are very robust systems that produce high acceleration gradients of tens of millions of electronvolts per meter, which makes them very attractive for many applications,” says SLAC accelerator scientist Chris Adolphsen. 

Today, one-third of SLAC’s original copper linac is used to accelerate electrons for the Linac Coherent Light Source, a facility that turns energy from the electron beam into what is currently the world’s brightest X-ray laser light.

Researchers continue to push the technology to higher and higher gradients—that is, larger and larger amounts of acceleration over a given distance. 

“Using sophisticated computer programs on powerful supercomputers, we were able to develop new cavity geometries that support almost 10 times larger gradients,” Tantawi says. “Mixing small amounts of silver into the copper further pushes the technology toward its natural limits.” Cooling the copper to very low temperatures helps as well. Tests at 45 Kelvin—about minus 379 degrees Fahrenheit—have been shown to increase acceleration gradients 20-fold compared to SLAC’s old linac. 

Copper accelerators have their limitations, though. SLAC’s historic linac produces 120 bunches of particles per second, and recent developments have led to copper structures capable of firing 80 times faster. But for applications that need much higher rates, Adolphsen says, “copper cavities don’t work because they would melt.”

Inline_2_Cavities
Illustration by Corinne Mucha

Chill niobium

For this reason, crews at SLAC are in the process of replacing one-third of the original copper linac with cavities made of niobium. 

Niobium can support very large bunch rates, as long as it is cooled. At very low temperatures, it is what’s known as a superconductor.

“Below the critical temperature of 9.2 Kelvin, the cavity walls conduct electricity without losses, and electromagnetic waves can travel up and down the cavity many, many times, like a pendulum that goes on swinging for a very long time,” says Anna Grassellino, an accelerator scientist at Fermi National Accelerator Laboratory. “That’s why niobium cavities can store electromagnetic energy very efficiently and can operate continuously.” 
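
To put a number on the "pendulum that goes on swinging" (illustrative figures only; the frequency and quality factors below are typical of such cavities, not values taken from this article), the stored-energy decay time is roughly the quality factor Q divided by the angular frequency:

# How long the electromagnetic energy "rings" in a cavity: tau ~ Q / omega.
# Assumed, typical values: a 1.3 GHz superconducting niobium cavity with
# Q ~ 1e10 versus a normal-conducting copper cavity with Q ~ 1e4.
import math

frequency_hz = 1.3e9
omega = 2 * math.pi * frequency_hz

for label, Q in (("superconducting niobium", 1e10), ("normal-conducting copper", 1e4)):
    tau = Q / omega
    print(f"{label:25s}: Q = {Q:.0e}  ->  energy decay time ~ {tau:.1e} s")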

You can find superconducting niobium cavities in modern particle accelerators such as the Large Hadron Collider at CERN and the CEBAF accelerator at Thomas Jefferson National Accelerator Facility. The European X-ray Free-Electron Laser in Germany, the European Spallation Source in Sweden, and the Facility for Rare Isotope Beams at Michigan State University are all being built using niobium technology. Niobium cavities also appear in designs for the next-generation International Linear Collider. 

At SLAC, the niobium cavities will support LCLS-II, an X-ray laser that will produce up to a million ultrabright light flashes per second. The accelerator will have 280 cavities, each about three feet long with a 3-inch opening for the electron beam to fly through. Sets of eight cavities will be strung together into cryomodules that keep the cavities at a chilly 2 Kelvin, which is colder than interstellar space.

Each niobium cavity is made by fusing together two halves stamped from a sheet of pure metal. The cavities are then cleaned very thoroughly because even the tiniest impurities would degrade their performance.

The shape of the cavities is reminiscent of a stack of shiny donuts. This is to maximize the cavity volume for energy storage and to minimize its surface area to cut down on energy dissipation. The exact size and shape also depends on the type of accelerated particle.

“We’ve come a long way since the first development of superconducting cavities decades ago,” Grassellino says. “Today’s niobium cavities produce acceleration gradients of up to about 50 million electronvolts per meter, and R&D work at Fermilab and elsewhere is further pushing the limits.”

Illustration by Corinne Mucha

Hot plasma

Over the past few years, SLAC accelerator scientists have been working on a way to push the limits of particle acceleration even further: accelerating particles using bubbles of ionized gas called plasma. 

Plasma wakefield acceleration is capable of creating acceleration gradients that are up to 1000 times larger than those of copper and niobium cavities, promising to drastically shrink the size of particle accelerators and make them much more powerful.

“These plasma bubbles have certain properties that are very similar to conventional metal cavities,” says SLAC accelerator physicist Mark Hogan. “But because they don’t have a solid surface, they can support extremely high acceleration gradients without breaking down.”

Hogan’s team at SLAC and collaborators from the University of California, Los Angeles, have been developing their plasma acceleration method at the Facility for Advanced Accelerator Experimental Tests, using an oven of hot lithium gas for the plasma and an electron beam from SLAC’s copper linac.

Researchers create bubbles by sending either intense laser light or a high-energy beam of charged particles through plasma. They then send beams of particles through the bubbles to be accelerated.

When, for example, an electron bunch enters a plasma, its negative charge expels plasma electrons from its flight path, creating a football-shaped cavity filled with positively charged lithium ions. The expelled electrons form a negatively charged sheath around the cavity.

This plasma bubble, which is only a few hundred microns in size, travels at nearly the speed of light and is very short-lived. On the inside, it has an extremely strong electric field. A second electron bunch enters that field and experiences a tremendous energy gain. Recent data show possible energy boosts of billions of electronvolts in a plasma column of just a little over a meter.
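
As a back-of-the-envelope check on those figures, divide the energy gain by the length of plasma to get a gradient and compare it with the cavity numbers quoted earlier. The values below are illustrative round numbers, not a specific published measurement.

# Rough comparison of acceleration gradients (illustrative round numbers).
plasma_gain_ev = 4e9        # assumed energy gain of a few billion electronvolts
plasma_length_m = 1.3       # assumed plasma column a little over a meter long

plasma_gradient = plasma_gain_ev / plasma_length_m   # eV per meter
niobium_gradient = 50e6     # ~50 million eV/m, the niobium figure quoted above
copper_gradient = 20e6      # tens of millions of eV/m, the copper figure quoted above

print(f"plasma    : {plasma_gradient / 1e9:.1f} billion eV per meter")
print(f"vs niobium: about {plasma_gradient / niobium_gradient:.0f} times larger")
print(f"vs copper : about {plasma_gradient / copper_gradient:.0f} times larger")

Even this conservative example lands around a hundred times above the metal cavities; the most aggressive plasma demonstrations have reached tens of billions of electronvolts per meter, which is what pushes the advantage toward the factor of 1000 mentioned above.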

“In addition to much higher acceleration gradients, the plasma technique has another advantage,” says UCLA researcher Chris Clayton. “Copper and niobium cavities don’t keep particle beams tightly bundled and require the use of focusing magnets along the accelerator. Plasma cavities, on the other hand, also focus the beam.”

Much more R&D work is needed before plasma wakefield accelerator technology can be turned into real applications. But it could represent the future of particle acceleration at SLAC and of accelerator science as a whole.

by Manuel Gnida at March 14, 2017 02:34 PM

March 10, 2017

Symmetrybreaking - Fermilab/SLAC

A strength test for the strong force

New research could tell us about particle interactions in the early universe and even hint at new physics.

Illustration of a carnival strength test

Much of the matter in the universe is made up of tiny particles called quarks. Normally it’s impossible to see a quark on its own, because quarks are always bound tightly together in groups. They separate only under extreme conditions, such as immediately after the Big Bang, in the centers of stars, or during the high-energy particle collisions generated in particle colliders.

Scientists at Louisiana Tech University are working on a study of quarks and the force that binds them by analyzing data from the ATLAS experiment at the LHC. Their measurements could tell us more about the conditions of the early universe and could even hint at new, undiscovered principles of physics.

The particles that stick quarks together are aptly named “gluons.” Gluons carry the strong force, one of four fundamental forces in the universe that govern how particles interact and behave. The strong force binds quarks into particles such as protons, neutrons and atomic nuclei.

As its name suggests, the strong force is the strongest—it’s 100 times stronger than the electromagnetic force (which binds electrons into atoms), 10,000 times stronger than the weak force (which governs radioactive decay), and a thousand million million million million million million (10^39) times stronger than gravity (which attracts you to the Earth and the Earth to the sun).

But this ratio shifts when the particles are pumped full of energy. Just as real glue loses its stickiness when overheated, the strong force carried by gluons becomes weaker at higher energies.

“Particles play by an evolving set of rules,” says Markus Wobisch from Louisiana Tech University. “The strength of the forces and their influence within the subatomic world changes as the particles’ energies increase. This is a fundamental parameter in our understanding of matter, yet has not been fully investigated by scientists at high energies.”

Characterizing the cohesiveness of the strong force is one of the key ingredients to understanding the formation of particles after the Big Bang and could even provide hints of new physics, such as hidden extra dimensions.

“Extra dimensions could help explain why the fundamental forces vary dramatically in strength,” says Lee Sawyer, a professor at Louisiana Tech University. “For instance, some of the fundamental forces could only appear weak because they live in hidden extra dimensions and we can’t measure their full strength. If the strong force is weaker or stronger than expected at high energies, this tells us that there’s something missing from our basic model of the universe.”

By studying the high-energy collisions produced by the LHC, the research team at Louisiana Tech University is characterizing how the strong force pulls energetic quarks into encumbered particles. The challenge they face is that quarks are rambunctious and caper around inside the particle detectors. This subatomic soirée involves hundreds of particles, often arising from about 20 proton-proton collisions happening simultaneously. It leaves a messy signal, which scientists must then reconstruct and categorize.

Wobisch and his colleagues developed a new method to study these rowdy groups of quarks, known as jets. By measuring the angles and orientations of the jets, he and his colleagues are learning important new information about what transpired during the collisions—more than they could deduce by simply counting the jets.

The average number of jets produced by proton-proton collisions directly corresponds to the strength of the strong force in the LHC’s energetic environment.

“If the strong force is stronger than predicted, then we should see an increase in the number of proton-proton collisions that generate three jets. But if the strong force is actually weaker than predicted, then we’d expect to see relatively more collisions that produce only two jets. The ratio between these two possible outcomes is the key to understanding the strong force.”
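
In practice this boils down to an event-counting exercise. The toy sketch below uses made-up jet multiplicities rather than ATLAS data, but it shows the shape of the measurement: count events with at least three jets, count events with at least two, and take the ratio.

# Toy version of the jet-ratio idea (made-up numbers, not ATLAS data).
# Each entry is the number of jets reconstructed above some momentum
# threshold in one simulated proton-proton collision.
jet_counts = [2, 3, 2, 2, 4, 2, 3, 2, 2, 3, 2, 2, 5, 2, 3, 2, 2, 2, 3, 2]

n_two_or_more = sum(1 for n in jet_counts if n >= 2)
n_three_or_more = sum(1 for n in jet_counts if n >= 3)

# A stronger strong force means extra jets are radiated more often,
# which pushes this ratio up; a weaker force pushes it down.
ratio = n_three_or_more / n_two_or_more
print(f"R_3/2 = {n_three_or_more}/{n_two_or_more} = {ratio:.2f}")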

Turning on the LHC doubled scientists’ energy reach, and they have now determined the strength of the strong force up to 1.5 trillion electronvolts, which is roughly the average energy of every particle in the universe just after the Big Bang. Wobisch and his team are hoping to double this number again with more data.
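
The phrase “strength of the strong force” refers to the strong coupling constant, usually written alpha_s, whose value changes with the energy of the interaction. A minimal sketch of that running, using the standard one-loop formula with an assumed reference value alpha_s of about 0.118 at the Z-boson mass and five active quark flavors:

import math

# One-loop running of the strong coupling alpha_s (standard textbook formula):
#   alpha_s(Q) = alpha_s(M_Z) / (1 + alpha_s(M_Z) * b0 * ln(Q^2 / M_Z^2))
# Assumed inputs: alpha_s(M_Z) ~ 0.118, M_Z ~ 91.2 GeV, n_f = 5 quark flavors.
def alpha_s(q_gev, alpha_mz=0.118, m_z=91.2, n_f=5):
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(q_gev**2 / m_z**2))

for q in (91.2, 500.0, 1500.0):   # 1500 GeV = 1.5 trillion electronvolts
    print(f"alpha_s({q:6.1f} GeV) ~ {alpha_s(q):.3f}")

On these assumptions the coupling drops from about 0.12 at the Z mass to roughly 0.08 at 1.5 trillion electronvolts, which is the “weakening glue” the measurements are designed to test.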

“So far, all our measurements confirm our predictions,” Wobisch says. “More data will help us look at the strong force at even higher energies, giving us a glimpse as to how the first particles formed and the microscopic structure of space-time.”

by Sarah Charley at March 10, 2017 11:11 PM

March 09, 2017

Marco Frasca - The Gauge Connection

Quote of the day

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”

John Stuart Mill


Filed under: Quote

by mfrasca at March 09, 2017 08:29 PM

March 07, 2017

Symmetrybreaking - Fermilab/SLAC

Researchers face engineering puzzle

How do you transport 70,000 tons of liquid argon nearly a mile underground?


Nearly a mile below the surface of Lead, South Dakota, scientists are preparing for a physics experiment that will probe one of the deepest questions of the universe: Why is there more matter than antimatter?

To search for that answer, the Deep Underground Neutrino Experiment, or DUNE, will look at minuscule particles called neutrinos. A beam of neutrinos will travel 800 miles through the Earth from Fermi National Accelerator Laboratory to the Sanford Underground Research Facility, headed for massive underground detectors that can record traces of the elusive particles.

Because neutrinos interact with matter so rarely and so weakly, DUNE scientists need a lot of material to create a big enough target for the particles to run into. The most widely available (and cost effective) inert substance that can do the job is argon, a colorless, odorless element that makes up about 1 percent of the atmosphere.

The researchers also need to place the detector full of argon far below Earth’s surface, where it will be protected from cosmic rays and other interference.

“We have to transfer almost 70,000 tons of liquid argon underground,” says David Montanari, a Fermilab engineer in charge of the experiment’s cryogenics. “And at this point we have two options: We can either transfer it as a liquid or we can transfer it as a gas.”

Either way, this move will be easier said than done.

Liquid or gas?

The argon will arrive at the lab in liquid form, carried inside of 20-ton tanker trucks. Montanari says the collaboration initially assumed that it would be easier to transport the argon down in its liquid form—until they ran into several speed bumps. 

Transporting liquid vertically is very different from transporting it horizontally for one important reason: pressure. The bottom of a mile-tall pipe full of liquid argon would have a pressure of about 3000 pounds per square inch—equivalent to 200 times the pressure at sea level. According to Montanari, to keep these dangerous pressures from occurring, multiple de-pressurizing stations would have to be installed throughout the pipe. 
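
That figure follows from basic hydrostatics: pressure equals density times gravitational acceleration times height. A quick check with round numbers (the liquid argon density of about 1400 kilograms per cubic meter is an assumed approximate value):

# Hydrostatic pressure at the bottom of a mile-tall column of liquid argon.
rho = 1400.0      # kg/m^3, approximate density of liquid argon (assumed)
g = 9.81          # m/s^2
h = 1609.0        # m, about one mile

p_pa = rho * g * h
p_psi = p_pa / 6894.76        # pascals per pound-per-square-inch
p_atm = p_pa / 101325.0       # pascals per standard atmosphere

print(f"pressure: about {p_psi:.0f} psi, or {p_atm:.0f} atmospheres")

That comes out near 3200 pounds per square inch and a bit over 200 atmospheres, consistent with the figures above.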

Even with these depressurizing stations, safety would still be a concern. While argon is non-toxic, if released into the air, it could displace the oxygen. In the event of a leak, pressurized liquid argon would spill out and could potentially break its vacuum-sealed pipe, expanding rapidly to fill the mine as a gas. One liter of liquid argon would become about 800 liters of argon gas, or four bathtubs’ worth. 
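
The 800-fold expansion is simply the ratio of the liquid and gas densities. A rough check with round numbers (both densities are approximate):

# Expansion factor when liquid argon boils off into room-temperature gas.
rho_liquid = 1400.0   # grams per liter of liquid argon (approximate)
rho_gas = 1.78        # grams per liter of argon gas near room conditions (approximate)

expansion = rho_liquid / rho_gas
print(f"1 liter of liquid -> about {expansion:.0f} liters of gas")
# roughly 800 liters, consistent with the figure quoted above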

Even without a leak, perhaps the most important challenge in transporting liquid argon is preventing it from evaporating into a gas along the way, according to Montanari. 

To remain a liquid, argon is kept below a brisk temperature of minus 180 degrees Celsius (minus 300 degrees Fahrenheit).

“You need a vacuum-insulated pipe that is a mile long inside a mine shaft,” Montanari says. “Not exactly the most comfortable place to install a vacuum-insulated pipe.”

To avoid these problems, the cryogenics team made the decision to send the argon down as gas instead. 

Routing the pipes containing liquid argon through a large bath of water will warm the argon enough to turn it into gas, which can then travel down through a standard pipe. Re-condensers located underground, acting as massive air conditioners, will cool the gas until it becomes a liquid again.

“The big advantage is we no longer have vacuum-insulated pipe,” Montanari says. “It is just a straight piece of pipe.”

Argon gas poses much less of a safety hazard because it is about 1000 times less dense than liquid argon. High pressures would be unlikely to build up and necessitate depressurizing stations, and if a leak occurred, it would not expand as much and cause the same kind of oxygen deficiency. 

The process of filling the detectors with argon will take place in four stages that will take almost two years, Montanari says. This is due to the amount of available cooling power for re-condensing the argon underground. There is also a limit to the amount of argon produced in the US every year, of which only so much can be acquired by the collaboration and transported to the site at a time.
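
Those constraints translate into a striking logistics problem. A rough estimate, using only the round numbers already quoted in this article (the real delivery schedule will differ):

# Rough logistics estimate for filling the DUNE detectors (illustrative only).
total_argon_tons = 70000        # total liquid argon quoted above
truck_capacity_tons = 20        # tanker-truck capacity quoted above
fill_time_days = 2 * 365        # roughly two years, as quoted above

deliveries = total_argon_tons / truck_capacity_tons
per_day = deliveries / fill_time_days
print(f"about {deliveries:.0f} tanker deliveries, averaging {per_day:.1f} per day")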

 


Illustration by Ana Kova

Argon for answers

Once filled, the liquid argon detectors will pick up light and electrons produced by neutrino interactions.

Part of what makes neutrinos so fascinating to physicists is their habit of oscillating from one flavor—electron, muon or tau—to another. The parameters that govern this “flavor change” are tied directly to some of the most fundamental questions in physics, including why there is more matter than antimatter. With careful observation of neutrino oscillations, scientists in the DUNE collaboration hope to unravel these mysteries in the coming years.  

“At the time of the Big Bang, in theory, there should have been equal amounts of matter and antimatter in the universe,” says Eric James, DUNE’s technical coordinator. That matter and antimatter should have annihilated, leaving behind an empty universe. “But we became a matter-dominated universe.” 

James and other DUNE scientists will be looking to neutrinos for the mechanism behind this matter favoritism. Although the fruits of this labor won’t appear for several years, scientists are looking forward to being able to make use of the massive detectors, which are hundreds of times larger than current detectors that hold only a few hundred tons of liquid argon. 

Currently, DUNE scientists and engineers are working at CERN to construct Proto-DUNE, a miniature replica of the DUNE detector filled with only 300 tons of liquid argon that can be used to test the design and components. 

“Size is really important here,” James says. “A lot of what we’re doing now is figuring out how to take those original technologies which have already been developed... and taking them to this next level with bigger and bigger detectors.”

by Daniel Garisto at March 07, 2017 05:27 PM
