Particle Physics Planet


August 30, 2016

Christian P. Robert - xi'an's og

Florid’AISTATS

The next AISTATS conference is taking place in Fort Lauderdale, Florida, on April 20-22. (The website keeps the same address one conference after another, which means all my links to the AISTATS 2016 conference in Cadiz are no longer valid. And that the above sunset from Florida is named… cadiz.jpg!) The deadline for paper submission is October 13 and there are two novel features:

  1. Fast-track for Electronic Journal of Statistics: Authors of a small number of accepted papers will be invited to submit an extended version for fast-track publication in a special issue of the Electronic Journal of Statistics (EJS) after the AISTATS decisions are out. Details on how to prepare such extended journal paper submission will be announced after the AISTATS decisions.
  2. Review-sharing with NIPS: Papers previously submitted to NIPS 2016 are required to declare their previous NIPS paper ID, and optionally supply a one-page letter of revision (similar to a revision letter to journal editors; anonymized) in supplemental materials. AISTATS reviewers will have access to the previous anonymous NIPS reviews. Other than this, all submissions will be treated equally.

I find both initiatives worth applauding and replicating in other machine-learning conferences. Particularly in regard to the recent debate we had at the Annals of Statistics.


Filed under: pictures, R, Statistics, Travel, University life Tagged: AISTATS 2016, AISTATS 2017, Annals of Statistics, Cadiz, Electronic Journal of Statistics, Florida, machine learning, NIPS 2017, proceedings, refereeing

by xi'an at August 30, 2016 10:16 PM

Clifford V. Johnson - Asymptotia

Some Action

I think that the Apple Pencil is one of the best things the company has produced in a very long time. It's good for both writing and sketching, and so is especially useful in all aspects of my work. I got one back in the Spring when the regular-sized iPad Pro came out, and it has been a joy to work with. I thought I'd share a stroke-by-stroke video log of a quick sketch I did with it this morning on the subway on the way to work. The sketch itself is above and the video showing how I made it is embedded below. Yes, it's another version of the people you saw here.

This is typically how I produce some of the rough work for the book, by the way. (I still use good old-fashioned pen and paper too.) Since some of you have asked, the answer to whether I [...] Click to continue reading this post

The post Some Action appeared first on Asymptotia.

by Clifford at August 30, 2016 09:38 PM

Emily Lakdawalla - The Planetary Society Blog

Let’s be careful about this “SETI” signal
Several readers have contacted me recently about reports that a group of international astronomers have detected a strong signal coming from a distant star that could be a sign of a high-technology civilization. Here’s my reaction: it’s interesting, but it’s definitely not the sign of an alien civilization—at least not yet.

August 30, 2016 05:19 PM

Emily Lakdawalla - The Planetary Society Blog

Will Juno’s Instruments Observe the Moons of Jupiter?
It is not easy to observe Jupiter’s moons as more than points of light with Juno, because Juno will never get very close to any of the moons, but as its orbit shifts there will be opportunities to collect data on some of the moons.

August 30, 2016 03:38 PM

astrobites - astro-ph reader's digest

WISE-ing up to broken planets

Title: A subtle IR excess associated with a young White Dwarf in the Edinburgh-Cape Blue Object Survey

Authors: E. Dennihy, John H. Debes, B. H. Dunlap et al.

First Author’s Institution: Physics and Astronomy Department, University of North Carolina at Chapel Hill

Status: Accepted for publication in the Astrophysical Journal

How do planets meet their ends? For many of the smallest worlds, it may be as debris discs strewn around the tiny white dwarfs that are all that is left of their stars. The faint infrared glow from nearly forty such discs has been discovered, their rocky origins given away by the chemical composition of the material falling onto the parent white dwarf. Today’s paper adds another disc to the sample, although not without difficulty.

At temperatures of a few hundred to a thousand Kelvin, discs around white dwarfs emit infrared light. This aids in their detection: as the central white dwarf gives off mostly blue and ultraviolet light, the light from the disc is not washed out. However, the downside is that the Earth’s atmosphere absorbs infrared light at the wavelengths the disc emits, so such detections have to be made from space.
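As a rough sanity check on those temperatures, Wien's displacement law puts the peak emission of a disc at a few hundred to a thousand kelvin squarely in the infrared, far from the white dwarf's blue and ultraviolet peak. A minimal sketch (the 20,000 K white dwarf temperature is an illustrative value, not a number from the paper):

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2898 micron.K.
WIEN_B = 2898.0  # micron * kelvin

for label, temperature in [("cool disc (300 K)", 300.0),
                           ("warm disc (1000 K)", 1000.0),
                           ("hot white dwarf (20,000 K, illustrative)", 20000.0)]:
    peak = WIEN_B / temperature  # wavelength of peak emission, in microns
    print(f"{label:40s} peaks near {peak:6.2f} micron")

# The disc peaks at roughly 3-10 micron, right in WISE's 3.4-22 micron bands,
# while the white dwarf peaks far into the ultraviolet (~0.14 micron).
```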

The authors use data from the Wide-field Infrared Survey Explorer spacecraft, or WISE. As the name suggests, WISE was a survey mission, sweeping the whole sky looking for sources of infrared light. Taking a list of white dwarfs from the ground-based Edinburgh-Cape Blue Object Survey, the authors cross-matched their positions with the infrared sources spotted by WISE. They found that the position of the white dwarf EC 05365 had a strong WISE signal, giving off much more infrared light than expected. Could this be a planetary debris disc?


Figure 1:  The left panel shows an image from the VISTA survey, with the white dwarf in the centre along with two other sources. The right panel shows the much lower-resolution WISE data, which, whilst roughly centred on the white dwarf, could be coming from the object to its left (Source A). The lines show the strength of the WISE signal building up towards the centre (Figure 4 from the paper).

Unfortunately it wasn’t quite that simple. The resolution of WISE is low in comparison with many telescopes, such that it can be difficult to tell exactly which of several close-by objects the infrared light is coming from. Figure 1 shows the WISE data on the right, and an image of the same spot from the VISTA survey on the left. EC 05365 is just off the centre of the WISE data, so is the most likely candidate for the infrared light. However, two other sources appear on the VISTA image. The top right object is too faint to matter, but the closer object to the left of the white dwarf, designated “Source A”, could be contributing a portion of the WISE signal. Was it light from the second object, rather than a debris disc, that WISE was picking up?

To tease apart the two possible infrared sources, the authors took two approaches. The first was to precisely measure the strength of the WISE signal at each point. The red lines on Figure 1 show lines of equal strength of the WISE signal, building up towards the centre in a similar fashion to contour lines on a map. This technique shows the WISE signal to be roughly four times as strong at the position of the white dwarf as at Source A.

Secondly, the authors used a technique called “forced photometry”, taking what they did know, such as the positions of the objects, the distribution of the WISE signal, and the background noise, to simulate the relative signals of the two sources. They again found that Source A contributed much less to the infrared signal than the white dwarf. With the two techniques agreeing, the authors are confident that they have indeed detected a debris disc around EC 05365.
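For readers curious what “forced photometry” looks like in practice, here is a minimal sketch of the idea: hold the source positions fixed at their known coordinates, model each source as the instrument's blurry point spread function, and fit only the two fluxes. The Gaussian PSF, pixel grid and flux values below are invented for illustration and are not the authors' actual pipeline.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares for the two fluxes

def gaussian_psf(shape, x0, y0, fwhm):
    """Unit-flux circular Gaussian PSF centred at (x0, y0), with FWHM in pixels."""
    sigma = fwhm / 2.355
    y, x = np.indices(shape)
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

shape, fwhm = (41, 41), 6.0                   # toy low-resolution cutout and beam size
pos_wd, pos_a = (20.0, 20.0), (26.0, 21.0)    # known positions: white dwarf, Source A

# Fake observation: the white dwarf four times brighter than Source A, plus noise.
rng = np.random.default_rng(0)
image = (400.0 * gaussian_psf(shape, *pos_wd, fwhm)
         + 100.0 * gaussian_psf(shape, *pos_a, fwhm)
         + rng.normal(0.0, 0.5, shape))

# Design matrix: one column per source, each a unit-flux PSF at its fixed position.
A = np.column_stack([gaussian_psf(shape, *pos_wd, fwhm).ravel(),
                     gaussian_psf(shape, *pos_a, fwhm).ravel()])
fluxes, _ = nnls(A, image.ravel())
print("recovered fluxes (white dwarf, Source A):", fluxes)
```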


Figure 2: The blue points show measurements of the light received from EC 05365 at different wavelengths, going from ultraviolet and blue light on the left to infrared on the right. The VISTA measurements of Source A are shown in red. The grey line shows the predicted signal from the white dwarf at each point. The green WISE points are clearly much higher than predicted, suggesting the presence of a debris disc. (Figure 3 from the paper).

The detection is shown more clearly in Figure 2, which shows the amount of light detected from EC 05365 at different wavelengths, with the extra infrared light from the disc easily visible. Our sample of ruined planetary systems grows again. The authors go on to try to model the shape of the disc, as well as probe the chemical composition of the debris. They finish by looking forward to the launch of the James Webb Space Telescope, which, with its powerful infrared vision, could revolutionise our knowledge of these planetary graveyards.

Astrobiter’s note: In the interests of brevity I’ve focused on just one area of the paper here, which I hope provides an insight into the level of work behind even outwardly simple discoveries. Many more aspects of the EC 05365 system are discussed, so if you want to know more I invite you to read the paper, and I can answer questions in the comments.  

by David Wilson at August 30, 2016 03:34 PM

Symmetrybreaking - Fermilab/SLAC

Our galactic neighborhood

What can our cosmic neighbors tell us about dark matter and the early universe?

Imagine a mansion.

Now picture that mansion at the heart of a neighborhood that stretches irregularly around it, featuring other houses of different sizes—but all considerably smaller. Cloak the neighborhood in darkness, and the houses appear as clusters of lights. Many of the clusters are bright and easy to see from the mansion, but some can just barely be distinguished from the darkness. 

This is our galactic neighborhood. The mansion is the Milky Way, our 100,000-light-years-across home in the universe. Stretching roughly a million light years from the center of the Milky Way, our galactic neighborhood is composed of galaxies, star clusters and large roving gas clouds that are gravitationally bound to us.

The largest satellite galaxy, the Large Magellanic Cloud, is also one of the closest. It is visible to the naked eye from areas clear of light pollution in the Southern Hemisphere. If the Large Magellanic Cloud were around the size of the average American home—about 2,500 square feet—then by a conservative estimate the Milky Way mansion would occupy more than a full city block. On that scale, our most diminutive neighbors would occupy the same amount of space as a toaster.

Our cosmic neighbors promise answers to questions about hidden matter and the ancient universe. Scientists are setting out to find them.

What makes a neighbor

If we are the mansion, the neighboring houses are dwarf galaxies. Scientists have identified about 50 possible galaxies orbiting the Milky Way and have confirmed the identities of roughly 30 of them. These galaxies range in size from several billion stars to only a few hundred. For perspective, the Milky Way contains somewhere between 100 billion and a trillion stars. 

Dwarf galaxies are the most dark-matter-dense objects known in the universe. In fact, they have far more dark matter than regular matter. Segue 1, our smallest confirmed neighbor, is made of 99.97 percent dark matter.

Dark matter is key to galaxy formation. A galaxy forms when enough regular matter is attracted to a single area by the gravitational pull of a clump of dark matter.

Projects such as the Dark Energy Survey, or DES, find these galaxies by snapping images of a segment of the sky with a powerful telescope camera. Scientists analyze the resulting images, looking for the pattern of color and brightness characteristic of galaxies. 

Scientists can find dark matter clumps by measuring the motion and chemical composition of stars. If a smaller galaxy seems to be behaving like a more massive galaxy, observers can conclude a considerable amount of dark matter must anchor the galaxy.
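To make that logic concrete, a crude dynamical-mass estimate scales as M ~ σ²R/G: if the stars move faster than their combined light can explain, unseen mass must anchor them. The numbers in this sketch are hypothetical, chosen only to show the order-of-magnitude argument:

```python
G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 per solar mass

def dynamical_mass(sigma_kms, radius_kpc):
    """Order-of-magnitude enclosed mass from a velocity dispersion and a size."""
    return sigma_kms ** 2 * radius_kpc / G

# Hypothetical dwarf: roughly 1000 solar-mass stars' worth of light, but a
# 4 km/s velocity dispersion spread over ~0.03 kpc (illustrative numbers only).
m_dyn = dynamical_mass(sigma_kms=4.0, radius_kpc=0.03)
m_stars = 1000.0  # crude stellar mass in solar masses

print(f"dynamical mass ~ {m_dyn:.1e} Msun")
print(f"stellar mass   ~ {m_stars:.1e} Msun")
print(f"ratio          ~ {m_dyn / m_stars:.0f}")  # far more mass than the stars supply
```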

“Essentially, they are nearby clouds of dark matter with just enough stars to detect them,” says Keith Bechtol, a postdoctoral researcher at the University of Wisconsin-Madison and a member of the Dark Energy Survey.

Through these methods of identification (and thanks to the new capabilities of digital cameras), the Sloan Digital Sky Survey kicked off the modern hunt for dwarf galaxies in the early 2000s. The survey, which looked at the northern part of the sky, more than doubled the number of known satellite dwarf galaxies from 11 to 26 galaxies between 2005 and 2010. Now DES, along with some other surveys, is leading the search. In the last few years DES and its Dark Energy Camera, which maps the southern part of the sky, brought the total to 50 probable galaxies. 

Dark matter mysteries

Dwarf galaxies serve as ideal tools for studying dark matter. While scientists haven’t yet directly discovered dark matter, in studying dwarf galaxies they’ve been able to draw more and more conclusions about how it behaves and, therefore, what it could be. 

“Dwarf galaxies tell us about the small-scale structure of how dark matter clumps,” says Alex Drlica-Wagner of Fermi National Accelerator Laboratory, one of the leaders of the DES analysis. “They are excellent probes for cosmology at the smallest scales.”

Dwarf galaxies also present useful targets for gamma-ray telescopes, which could tell us more about how dark matter particles behave. Some models posit that dark matter is its own antiparticle. If that were so, it could annihilate when it meets other dark matter particles, releasing gamma rays. Scientists are looking for those gamma rays. 

But while studying these neighbors provides clues about the nature of dark matter, it also raises more and more questions. The prevailing cosmological theory of dark matter has accurately described much of what scientists observe in the universe. But when scientists looked to our neighbors, some of the predictions didn’t hold up.

The number of galaxies appears to be lower than expected from calculations, for example, and those that are around seem to be too small. While some of the solutions to these problems may lie in the capabilities of the telescopes or the simulations themselves, we may also need to reconsider the way we think dark matter interacts. 

The elements of the neighborhood

Dwarf galaxies don’t just tell us about dark matter: They also present a window into the ancient past. Most dwarf galaxies’ stars formed more than 10 billion years ago, not long after the Big Bang. Our current understanding of galaxy formation, according to Bechtol, is that after small galaxies formed, some of them merged over billions of years into larger galaxies. 

If we didn’t have these ancient neighbors, we’d have to peer all the way across the universe to see far enough back in time to glimpse galaxies that formed soon after the big bang. While the Milky Way and other large galaxies bustle with activity and new star formation, the satellite galaxies remain mostly static—snapshots of galaxies soon after their birth. 

“They’ve mostly been sitting there, waiting for us to study them,” says Josh Simon, an astronomer at the Carnegie Institution for Science.

The abundance of certain elements in stars in dwarf galaxies can tell scientists about the conditions and mechanisms that produce them. Scientists can also look to the elements to learn about even older stars. 

The first generation of stars are thought to have looked very different than those formed afterward. When they exploded as supernovae, they released new elements that would later appear in stars of the next generation, some of which are found in our neighboring galaxies.

“They do give us the most direct fingerprint we can get as to what those first stars might have been like,” Simon says.

Scientists have learned a lot about our satellites in just the past few years, but there’s always more to learn. DES will begin its fourth year of data collection in August. Several other surveys are also underway. And the Large Synoptic Survey Telescope, an ambitious international project currently under construction in Chile, will begin operating fully in 2022. LSST will create a more detailed map than all of the previous surveys combined. 

[The original article includes an interactive graphic of our neighboring galaxies. For each galaxy it lists the size in parsecs (a unit equal to about 3.26 light-years, or 19 trillion miles), the distance from the Milky Way in kiloparsecs (1000 parsecs), the luminosity L⊙ in terms of how much energy the galaxy emits compared to our sun, and the right ascension and declination, the astronomical coordinates that specify the galaxy's location as viewed from Earth.]

by Molly Olmstead at August 30, 2016 01:00 PM

August 29, 2016

Christian P. Robert - xi'an's og

parallel adaptive importance sampling

Following Paul Russell’s talk at MCqMC 2016, I took a look at his recently arXived paper. In the plane to Sydney. The pseudo-code representation of the method is identical to our population Monte Carlo algorithm, as is the suggestion to approximate the posterior by a mixture, but one novel aspect is to use Reich’s ensemble transportation at the resampling stage, in order to maximise the correlation between the original and the resampled versions of the particle systems. (As in our later versions of PMC, the authors also use as importance denominator the entire mixture rather than conditioning on the selected last-step particle.)

“The output of the resampling algorithm gives us a set of evenly weighted samples that we believe represents the target distribution well”

I disagree with this statement: Reweighting does not improve the quality of the posterior approximation, since it introduces more variability. If the original sample is found wanting in its adequacy to the target, so is the resampled one. Worse, by producing a sample with equal weights, this step may give a false impression of adequate representation…

Another unclear point in the paper relates to tuning the parameters of the mixture importance sampler. The paper discusses tuning these parameters during a burn-in stage, referring to “due to the constraints on adaptive MCMC algorithms”, which indeed is only pertinent for MCMC algorithms, since importance sampling can be constantly modified while remaining valid. This was a major point for advocating PMC. I am thus unsure what the authors mean by a burn-in period in such a context. Actually, I am also unsure of how they use the effective sample size to select the new value of the importance parameter, e.g., the variance β in a random walk mixture: the effective sample size involves this variance implicitly through the realised sample, hence changing β means changing the realised sample… This seems too costly to contemplate, so I wonder at the way Figure 4.2 is produced.
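For readers unfamiliar with the criterion, the effective sample size of a weighted sample is most commonly computed as one over the sum of the squared normalised importance weights. A minimal sketch of that generic formula (not the paper's specific adaptation rule):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = 1 / sum(w_i^2) for normalised importance weights, computed stably in log scale."""
    lw = np.asarray(log_weights, dtype=float)
    w = np.exp(lw - lw.max())   # subtract the max to avoid overflow
    w /= w.sum()                # normalise the weights
    return 1.0 / np.sum(w ** 2)

# Example: well-matched proposal (small weight spread) versus a degenerate one.
rng = np.random.default_rng(1)
print(effective_sample_size(rng.normal(0.0, 0.1, 1000)))  # close to 1000
print(effective_sample_size(rng.normal(0.0, 3.0, 1000)))  # much smaller
```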

“A popular approach for adaptive MCMC algorithms is to view the scaling parameter as a random variable which we can sample during the course of the MCMC iterations.”

While this is indeed an attractive notion [that I played with in the early days of adaptive MCMC, with the short-lived notion of cyber-parameters], I do not think it is of much help in optimising an MCMC algorithm, since the scaling parameter needs to be optimised, resulting in a time-inhomogeneous target. A more appropriate tool is thus stochastic optimisation à la Robbins-Monro, as exemplified in Andrieu and Moulines (2006). The paper however remains unclear as to how the scales are updated (see e.g. Section 4.2).

“Ideally, we would like to use a resampling algorithm which is not prohibitively costly for moderately or large sized ensembles, which preserves the mean of the samples, and which makes it much harder for the new samples to forget a significant region in the density.”

The paper also misses the developments of the early 2000s on more sophisticated resampling steps, especially Paul Fearnhead’s contributions (see also Nicolas Chopin’s thesis). There exist valid resampling methods that require a single uniform (0,1) variate to be drawn, rather than m of them. The proposed method has a flavour similar to systematic resampling, but I wonder at the validity of returning values that are averages of earlier simulations, since this modifies their distribution into ones with slimmer tails. (And it is parameterisation dependent.) Producing x_i with probability p_i is not the same as returning the average of the p_i x_i’s.
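For reference, here is a minimal sketch of classical systematic resampling, which indeed requires a single uniform (0,1) draw rather than m of them; this is the textbook scheme, not the transport-based resampler proposed in the paper.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Indices resampled from normalised weights using a single uniform(0,1) draw."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    m = len(w)
    positions = (rng.uniform() + np.arange(m)) / m   # one shifted grid of m points in [0,1)
    return np.searchsorted(np.cumsum(w), positions)  # index i picked roughly m*w_i times

rng = np.random.default_rng(42)
print(systematic_resample([0.1, 0.2, 0.3, 0.4], rng))
```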


Filed under: Statistics Tagged: adaptive importance sampling, MCqMC 2016, Monte Carlo simulation and resampling methods for social science, optimal transport, population Monte Carlo, systematic resampling

by xi'an at August 29, 2016 10:16 PM

Christian P. Robert - xi'an's og

astrobites - astro-ph reader's digest

The Darkest Galaxies

Title: A High Stellar Velocity Dispersion and ~ 100 Globular Clusters for the Ultra Diffuse Galaxy Dragonfly 44

Authors: Pieter van Dokkum, Roberto Abraham, et al.

First Author’s Institution: Yale University

Status: Published in ApJ Letters

 


Figure 1: Also Figure 1 from the paper, showing Dragonfly 44 and its surroundings. It was made by combining deep g and i images from Gemini. The galaxy is spheroidal with low surface brightness.

If you ever find yourself in an area with little light pollution, go outside on a clear night and you’ll be able to see some of the hundreds of billions of stars that make up the Milky Way. You’ve probably heard by now that they account for a scant 5% of the mass of our galaxy—the rest is made up of dark matter. This might seem like a high ratio of dark matter to visible matter, but it is nothing compared to the amount of dark matter astronomers think could make up some of our ‘darkest’ galaxies. The focus of today’s paper is an ultra diffuse galaxy called Dragonfly 44, located approximately 100 Mpc away. It is roughly the same size as the Milky Way, with a similar estimated dark matter mass, but has only 1% as many stars. 

 

Astronomers have known about the existence of ultra diffuse galaxies (UDGs) for decades, but it was only recently that we discovered that they were much more common than we thought. More than 850 of them have been discovered in the Coma cluster alone, and many more have been located in other dense environments as well. Unlike our own barred spiral galaxy, UDGs are usually round, red, and featureless, with low surface brightness, as can be seen in the two images of Dragonfly 44 in Figure 1. They have been noted to resemble dwarf spheroidal galaxies but are an order of magnitude larger, similar in size to our own galaxy.

 

Despite finding many more UDGs, we still know very little about them. How did they form? Could they just be bloated dwarf spheroidal galaxies? Or could they be more similar to bigger galaxies like the Milky Way? How massive are they? We often assume that more light indicates more mass at a given location, but that’s not always true when dark matter is involved, since it doesn’t interact with light. UDGs found in galaxy clusters have been hypothesized to have very high mass-to-light ratios, because with only the amount of mass their low surface brightness suggests, they could easily be torn apart by tidal forces. The authors of today’s paper use their observations of Dragonfly 44 to answer these questions. The galaxy gets its name from the Dragonfly telescope (made up of an array of commercially available lenses) used to discover it. 


Figure 2: This is Figure 3 from the paper. The left plot shows the mass-to-light ratio plotted against the mass (both derived from measurements of the stellar velocities in the galaxies) for two UDGs, VCC 1287 and Dragonfly 44, and for other galaxies from previous papers. The two UDGs don’t fall in with the other galaxies. On the right, the number of globular clusters is plotted against the dynamical mass. Dragonfly 44’s result is in black and falls within the values expected from other galaxies of the same mass. This demonstrates its similarities to other galaxies of comparable mass when luminosity is not considered.

 

Dragonfly 44 is the second largest of the 47 UDGs the authors had discovered in the Coma cluster in a previous paper. Since it is the only UDG spectroscopically confirmed to be in the Coma cluster (the redshift of the galaxy matches the redshift of the cluster), it made an ideal target for study. Using measurements of the velocities of stars in the galaxy along with its estimated half-light radius, the authors were able to obtain estimates for both the mass of the galaxy and the mass-to-light ratio contained within the half-light radius. This results in a dark matter fraction of approximately 98%—enough mass to explain why the galaxy has survived being so close to the center of the Coma cluster. It also allowed them to confirm that Dragonfly 44 does indeed have a similar mass to the Milky Way, rather than being an extended dwarf galaxy.
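To get a feel for the arithmetic, here is a hedged sketch using a dispersion-based estimator of the Wolf et al. (2010) form, M(<R_e) ≈ 4σ²R_e/G, with round illustrative numbers of the right order rather than the paper's quoted measurements and error bars:

```python
G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 per solar mass

# Illustrative inputs: velocity dispersion ~45 km/s, half-light radius ~4.5 kpc,
# and ~1.5e8 solar luminosities of light inside that radius (assumed values).
sigma, r_e, lum_half = 45.0, 4.5, 1.5e8

m_half = 4.0 * sigma ** 2 * r_e / G   # dynamical mass within the half-light radius
m_stars = 1.0 * lum_half              # stellar mass, assuming a stellar M/L of ~1

print(f"M(<R_e)       ~ {m_half:.1e} Msun")
print(f"M/L(<R_e)     ~ {m_half / lum_half:.0f} Msun/Lsun")
print(f"dark fraction ~ {1.0 - m_stars / m_half:.2f}")  # roughly 0.98, i.e. ~98% dark matter
```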

 

In addition to calculating the high mass of Dragonfly 44, the authors also found 35 globular clusters within the half-number radius for globular clusters (their distribution tends to be more spread out than that of the stars), and they estimate that there are approximately 100 globular clusters in the entire galaxy. A “half-number” radius is similar to the concept of a half-light radius, except that it is the radius expected to contain around half of the galaxy's globular clusters rather than half of its light. This number of globular clusters is an order of magnitude larger than what would be expected from galaxies of similar luminosity to Dragonfly 44, but is similar to what would be expected from galaxies of similar mass to Dragonfly 44. These results can be seen in Figure 2. 

 

The large mass and number of globular clusters indicate that Dragonfly 44 (and other UDGs) are likely to be more closely related to galaxies like our own than to the dwarf spheroidal galaxies that they resemble. Instead, the authors suggest that they are really “failed” versions of galaxies like our Milky Way—galaxies that normally would be expected to have much more star formation.

 

Far from closing the book on the mysteries of UDGs, however, these findings leave us with many more questions to answer. We still don’t know what physical processes are behind the lack of star formation in UDGs. In addition, the authors estimate that the halo mass of Dragonfly 44 puts it right where the ratio of stellar mass to halo mass for a galaxy should peak, meaning that it is 100 times less bright than we would previously have expected. It’s worth noting, however, that the authors extrapolate the mass they derived within the half-light radius by about two orders of magnitude in order to arrive at an estimate of the halo mass. Only more research will be able to tell us whether this really is in conflict with our current understanding of the relationship between halo mass and star formation.

by Caroline Huang at August 29, 2016 05:13 PM

Emily Lakdawalla - The Planetary Society Blog

Selecting the Next New Frontiers Mission
NASA’s managers have begun the process for a competition to select a new planetary mission to launch in the mid-2020s that will address one of the most important questions in planetary science.

August 29, 2016 01:09 PM

Lubos Motl - string vacua and pheno

The delirium over beryllium
Guest blog by Prof Flip Tanedo, a co-author of the first highlighted paper and the editor-in-chief of Particle Bites
Article: Particle Physics Models for the \(17\MeV\) Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)

Also featuring the results from:
  • Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of detector; 1504.00489; NIM)
  • Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson” (experimental result; 1504.01527; PRL version; note PRL version differs from arXiv)
  • Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)
Recently there has been some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.




A crash course on nuclear physics

Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons, which you may know from the triple alpha process. There are three nuclear states to be aware of: the ground state, the \(18.15\MeV\) excited state, and the \(17.64\MeV\) excited state.

Beryllium-8 excited nuclear states. The \(18.15\MeV\) state (red) exhibits an anomaly. Both the \(18.15\MeV\) and \(17.64\MeV\) states decay to the ground state through a magnetic, \(p\)-wave transition. Image adapted from Savage et al. (1987).

Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron-positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.




The beryllium-8 anomaly

Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding a lithium-7 nucleus with protons.

Beryllium-8 excited states are prepared by bombarding lithium-7 with protons.

The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around \(1.03\MeV\) of kinetic energy, they excite lithium into the \(18.15\MeV\) beryllium state. This has two important features:
  1. Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
  2. Because the \(18.15\MeV\) beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.
What one expects is that most of the electron-positron pairs have a small opening angle, with the number of pairs falling smoothly at larger opening angles.

Expected distribution of opening angles for ordinary internal pair creation events. Each line corresponds to a nuclear transition that is electric (E) or magnetic (M) with a given orbital quantum number, \(\ell\). The beryllium transitions that we’re interested in are mostly M1. Adapted from Gulyás et al. (1504.00489).

Instead, the Atomki team found an excess of events with large electron-positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.

Number of events (\({\rm d}N/{\rm d}\theta\)) for different electron-positron opening angles, plotted for different excitation energies (\(E_p\)). For \(E_p=1.10\MeV\), there is a pronounced bump at 140 degrees which does not appear to be explainable by ordinary internal pair conversion. This may be suggestive of a new particle. Adapted from Krasznahorkay et al., PRL 116, 042501.

Here’s why a bump is particularly interesting:
  1. The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
  2. Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.
Schematically, the new particle interpretation looks like this:

Schematic of the Atomki experiment and new particle (\(X\)) interpretation of the anomalous events. In summary: protons of a specific energy bombard stationary lithium-7 nuclei and excite them to the \(18.15\MeV\) beryllium-8 state. These decay into the beryllium-8 ground state. Some of these decays are mediated by the new \(X\) particle, which then decays into electron-positron pairs of a certain opening angle that are detected in the Atomki pair spectrometer detector. Image from 1608.03591.

As an exercise for those with a background in special relativity, one can use the relation \((p_{e^+}+p_{e^-})^2 = m_X^2\) to prove the result:\[

m_X^2 = \zav{ 1 - \zav{\frac{E_{e^+}\!-\!E_{e^-}}{E_{e^+}\!+\!E_{e^-}}}^{\!2} } (E_{e^+}\!+\!E_{e^-})^2 \sin^2 \frac\theta 2

\] This relates the mass of the proposed new particle, \(X\), to the opening angle \(\theta\) and the energies \(E\) of the electron and positron. The opening angle bump would then be interpreted as a new particle with mass of roughly \(17\MeV\). To match the observed number of anomalous events, the rate at which the excited beryllium decays via the \(X\) boson must be \(6\times 10^{-6}\) times the rate at which it goes into a γ-ray.
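Plugging rough numbers into that relation shows why a bump near 140 degrees points to a mass around \(17\MeV\). The sketch below assumes the pair shares the full transition energy, and neglects the electron masses and the nuclear recoil:

```python
import math

def m_x(e_plus, e_minus, theta_deg):
    """Invariant mass of an e+e- pair, electron masses neglected (same units as the energies)."""
    theta = math.radians(theta_deg)
    asym = (e_plus - e_minus) / (e_plus + e_minus)
    m_squared = (1.0 - asym ** 2) * (e_plus + e_minus) ** 2 * math.sin(theta / 2.0) ** 2
    return math.sqrt(m_squared)

# ~18.15 MeV shared equally between electron and positron, opening angle 140 degrees:
print(m_x(9.07, 9.07, 140.0))   # ~17 MeV, the right ballpark for the proposed X boson
# A more asymmetric energy split pulls the reconstructed mass down:
print(m_x(13.0, 5.0, 140.0))
```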

The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the \(750\GeV\) diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the \(130\GeV\) Fermi γ-ray line.

The beryllium that cried wolf?

Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer as early as 1996 and all the way to 2001. de Boer made strong claims about evidence for a new \(10\MeV\) particle in the internal pair creation decays of the \(17.64\MeV\) beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.

The proposed evidence for “de Boeron” is shown below:

The de Boer claim for a \(10\MeV\) new particle. Left: distribution of opening angles for internal pair creation events in an E1 transition of carbon-12. This transition has similar energy splitting to the beryllium-8 \(17.64\MeV\) transition and shows good agreement with the expectations; as shown by the flat “signal – background” on the bottom panel. Right: the same analysis for the M1 internal pair creation events from the \(17.64\MeV\) beryllium-8 states. The “signal – background” now shows a broad excess across all opening angles. Adapted from de Boer et al. PLB 368, 235 (1996).

When the Atomki group studied the same \(17.64\MeV\) transition, they found that a key background component—subdominant E1 decays from nearby excited states—dramatically improved the fit and were not included in the original de Boer analysis. This is the last nail in the coffin for the proposed \(10\MeV\) “de Boeron.”

However, the Atomki group also highlight how their new anomaly in the \(18.15\MeV\) state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way in which additional internal pair creation backgrounds can add a bump in the opening angle distribution; as noted above, all of these distributions are smoothly falling.

The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and has a non-zero (\(17\MeV\)?) mass.

Theory part 1: Not a dark photon

With the Atomki result published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, University of Kentucky, and UC Riverside found that actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:

It turns out that the minimal model of a dark photon cannot explain the Atomki beryllium-8 anomaly without running afoul of other experimental constraints. Image adapted from this ParticleBite.

The main reason for this is that a dark photon with mass and interaction strength to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (\(\pi^0\)). These pions typically decay into two photons, but if there’s a \(17\MeV\) dark photon around, some fraction of those decays would go into dark-photon — ordinary-photon pairs. The non-observation of these unique decays rules out the dark photon interpretation.

The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, \(X\), could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, if the new force were protophobic—then a theory might hold together.

Schematic description of how model-builders “hacked” the dark photon theory to fit the beryllium anomaly while being consistent with other experiments. This hack isn’t pretty—and indeed, comes at the cost of potentially invalidating the mathematical consistency of the theory—but the exercise demonstrates the target for how a complete theory might have to behave. Image adapted from this ParticleBite.

Theory appendix: pion-phobia is protophobia

Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.

How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle diagrams of the axial anomaly. The decay into photon–dark-photon pairs proceeds through similar diagrams. The goal is then to make sure that these diagrams cancel.

A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, \(X\), doesn’t talk to protons, then there’s no diagram for the pion to decay into \(\gamma X\). This would be great if the story weren’t completely wrong.

Avoiding NA48/2 bounds requires that the new particle, \(X\), is pion-phobic. It turns out that this is equivalent to \(X\) being protophobic. The correct way to see this is on the left, making sure that the contribution of up-quark loops cancels the contribution from down-quark loops. A slick (but naively completely wrong) calculation is on the right, arguing that effectively only protons run in the loop.

The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the \(X\) charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.

The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.

Theory part 2: Not a spin-0 particle

The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be consistent with spin-0 particles. For a parity-even, spin-0 particle, one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity violating effects are negligible at these energies.)

Parity and angular momentum conservation prohibit a “dark Higgs” (parity even scalar) from mediating the anomaly.

For a parity-odd pseudoscalar, the bounds on axion-like particles at \(20\MeV\) suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse \({\rm GeV}\)), this interaction is ruled out down to the inverse Planck scale.

Bounds on axion-like particles exclude a \(20\MeV\) pseudoscalar with couplings to photons stronger than the inverse Planck scale. Adapted from 1205.2671 and 1512.03069.

Additional possibilities include:
  • Dark Z-bosons, cousins of the dark photon with spin-1 but indeterminate parity. This is very constrained by atomic parity violation.
  • Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)
Theory part 3: Nuclear input

The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the \(18.15\MeV\) beryllium-8 state that exhibits the anomaly and the \(17.64\MeV\) state which does not are actually closely related.

Recall (e.g. from the first figure at the top) that the \(18.15\MeV\) and \(17.64\MeV\) states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the \(17.64\MeV\) state carries isospin charge, while the \(18.15\MeV\) state and ground state do not.

Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:
If the new particle has mass around \(17\MeV\), why do we see its effects in the \(18.15\MeV\) state but not the \(17.64\MeV\) state?
Naively, if the new particle emitted, \(X\), carries no isospin charge, then isospin conservation prohibits the decay of the \(17.64\MeV\) state through emission of an \(X\) boson. However, the Pastore et al. result tells us that actually, the isospin-neutral and isospin-charged states mix quantum mechanically, so that the observed \(18.15\) and \(17.64\MeV\) states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is rather large, with a mixing angle of around 10 degrees!

The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the \(17.64\MeV\) state. In fact, the only way to avoid this is to assume that the mass of the \(X\) particle is on the heavier side of the experimentally allowed range. The rate for \(X\) emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of \(X\) emission from the lighter state by a lot.
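A quick back-of-the-envelope illustration of that last point, taking the emitted \(X\) to carry essentially the full transition energy so that its 3-momentum is \(\sqrt{\Delta E^2 - m_X^2}\); the two trial masses below are illustrative values within the allowed range, not fitted numbers:

```python
import math

def x_momentum(delta_e, m_x):
    """3-momentum of an X of mass m_x carrying (approximately) the full transition energy."""
    return math.sqrt(delta_e ** 2 - m_x ** 2)

def rate_suppression(delta_e, m_heavy, m_light):
    """Suppression of the emission rate (proportional to p^3) when the X mass is pushed up."""
    return (x_momentum(delta_e, m_heavy) / x_momentum(delta_e, m_light)) ** 3

# Pushing m_X from 16.7 to 17.3 MeV hits the 17.64 MeV state much harder
# than the 18.15 MeV state, because it sits closer to threshold.
print("17.64 MeV state:", rate_suppression(17.64, 17.3, 16.7))  # ~0.2
print("18.15 MeV state:", rate_suppression(18.15, 17.3, 16.7))  # ~0.5
```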

The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the \(18.15\MeV\) and \(17.64\MeV\) electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:

The rate of \(X\) emission (colored contours) as a function of the \(X\) particle’s couplings to protons (horizontal axis) versus neutrons (vertical axis). The best fit for a \(16.7\MeV\) new particle is the dashed line in the teal region. The vertical band is the region allowed by the NA48/2 experiment. Solid lines show the dark photon and protophobic limits. Left: the case for perfect (unrealistic) isospin. Right: the case when isospin mixing and explicit violation are included. Observe that incorporating realistic isospin happens to have only a modest effect in the protophobic region. Figure from 1608.03591.

The results of the nuclear analysis are thus that:
  1. An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier \(X\) mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling nor does it report the difficult-to-quantify systematic errors associated with the fit. This information is important for understanding the extent to which the \(X\) mass can be pushed to be heavier.)
  2. The effects of isospin mixing and violation are important to include; especially as one drifts away from the purely protophobic limit.
Theory part 4: towards a complete theory

The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.

Indeed, a few challenges appear:
  • The isospin mixing discussed above means the \(X\) mass must be pushed to the heavier values allowed by the Atomki observation.
  • The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
  • Atomic parity violation constraints require that the \(X\) couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that \(X\) must also talk to neutrinos: these open up new experimental constraints.
The Irvine/Kentucky/Riverside collaboration first note the need for a careful experimental analysis of the actual mass ranges allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.

Next, they observe that protophobic couplings can be relatively natural. Indeed: the Standard Model Z-boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing where two force particles mix into one another. A tuned admixture of electric charge and baryon number, (\(Q-B\)), is protophobic.

Baryon number, however, is an anomalous global symmetry—this means that one has to work hard to make a baryon-number boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is if the photon kinetically mixes not with baryon number, but with the anomaly-free combination of “baryon-minus-lepton number,” \(Q-(B-L)\). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.
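The protophobia of these charge combinations is just arithmetic over Standard Model quantum numbers; a quick check (the overall normalisation is arbitrary, and realistic models tune the admixture only approximately, so small residual couplings survive):

```python
# (electric charge Q, baryon number B, lepton number L) for the relevant states
particles = {
    "proton":   (1.0, 1.0, 0.0),
    "neutron":  (0.0, 1.0, 0.0),
    "electron": (-1.0, 0.0, 1.0),
    "neutrino": (0.0, 0.0, 1.0),
}

for name, (q, b, l) in particles.items():
    q_minus_b = q - b          # tuned admixture of electric charge and baryon number
    q_minus_bl = q - (b - l)   # photon mixed with the anomaly-free B-L current
    print(f"{name:9s}  Q-B = {q_minus_b:+.0f}   Q-(B-L) = {q_minus_bl:+.0f}")

# Both combinations give the proton exactly zero charge (protophobic) while keeping the
# neutron charged; the Q-(B-L) option also charges the neutrino, which is the extra
# experimental constraint mentioned above.
```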

In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:

A complete theory is mathematically self-consistent and satisfies existing constraints. The additional bells and whistles required for consistency make additional predictions for experimental searches. Pieces of the theory can sometimes be used to address other anomalies.

The theory collaboration presented examples of the two cases, and pointed out how the additional ‘bells and whistles’ required may tie to additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.

What’s next?

We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features with the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.

While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or under construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:

Other experiments that can probe the new particle interpretation of the Atomki anomaly. The horizontal axis is the new particle mass, the vertical axis is its coupling to electrons (normalized to the electric charge). The dark blue band is the target region for the Atomki anomaly. Figure from 1608.03591; assuming 100% branching ratio to electrons.

We highlight one particularly interesting search: recently a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of LHCb’s ability to pinpoint the production position of charged particle pairs and the copious amounts of D-mesons produced in Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.

Implications

So where we stand is this:
  • There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
  • The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is if another experiment can verify these results.
  • Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.
What if it’s nothing?

This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.

Meanwhile, there are some important take-aways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, for this particular anomaly, there are some neat take-aways such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman, Amazon) and how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and how progress can be made in particle–nuclear collaborations—one of the Irvine group authors (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.

What if it’s real?

This is a big “what if.” On the other hand, a 6.8σ effect is not a statistical fluctuation and there is no known nuclear physics to produce a new-particle-like bump given the analysis presented by the Atomki experimentalists.

The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.

Could it be related to other anomalies?

The Atomki anomaly isn’t the first particle physics curiosity to show up at the \({\rm MeV}\) scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the \({\rm MeV}\) scale may still have surprises in store for us.
  • The KTeV anomaly: The rate at which neutral pions decay into electron-positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit to a new particle with axial couplings. If one fixes the mass of the proposed particle to be \(20\MeV\), the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.

The KTeV anomaly interpreted as a new particle, \(U\). From 0712.0007.
  • The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, \((g-2)_\mu\), has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the \(10\)s of \({\rm MeV}\) range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the \(1\)-\(20\MeV\) range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).
  • The Proton Radius Problem: the charge radius of the proton appears to be smaller than expected when measured using the Lamb shift of muonic hydrogen versus electron scattering experiments. See this ParticleBite summary, and this recent review. Some attempts to explain this discrepancy have involved \({\rm MeV}\)-scale new particles, though the endeavor is difficult. There’s been some renewed popular interest after a new result using deuterium confirmed the discrepancy. However, there was a report that a result at the proton radius problem conference in Trento suggests that the 2S-4P determination of the Rydberg constant may solve the puzzle (though discrepant with other Rydberg measurements). [Those slides do not appear to be public.]
Could it be related to dark matter?

A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change the way that we expect to find dark matter in sometimes surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A \(20\MeV\) dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.

Should I be excited?

Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further—as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, then being excited about a new possibility is part of what makes our job fun.

For the general public, the tentative hopes of new physics that pop up—whether it’s the Atomki anomaly, or the \(750\GeV\) diphoton bump, a \({\rm GeV}\) bump from the galactic center, γ-ray lines at \(3.5\keV\) and \(130\GeV\), or penguins at LHCb—these are the signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away, often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.

Further Reading

Here is some of the popular-level press coverage of the Atomki result. See the references at the top of this ParticleBite for the primary literature.

by Luboš Motl (noreply@blogger.com) at August 29, 2016 06:31 AM

August 28, 2016

Christian P. Robert - xi'an's og

winning entry at MCqMC’16

The nice logo of MCqMC 2016 was a collection of eight series of QMC dots on the unit (?) cube. The organisers set a competition to identify the principles behind those quasi-random sets and as I had no idea for most of them I entered very random sets unconnected with algorithmia, for which I got an honourable mention and a CD prize (if not the conference staff tee-shirt I was coveting!) Art Owen sent me back my entry, posted below and hopefully (or not!) readable.


Filed under: Books, Kids, pictures, Statistics, Travel, University life Tagged: California, MCqMC 2016, qMC, quasi-random sequences, scientific computing, Stanford University, tee-shirt, uniformity

by xi'an at August 28, 2016 10:16 PM

John Baez - Azimuth

Topological Crystals (Part 4)


k4_crystal

Okay, let’s look at some examples of topological crystals. These are what got me excited in the first place. We’ll get some highly symmetrical crystals, often in higher-dimensional Euclidean spaces. The ‘triamond’, above, is a 3d example.

Review

First let me remind you how it works. We start with a connected graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges.

We choose a vertex in X. Each path \gamma in X starting at this vertex determines a 1-chain c_\gamma, namely the sum of its edges. These 1-chains give some points in C_1(X,\mathbb{R}). These points are the vertices of a graph \overline{X} called the maximal abelian cover of X. The maximal abelian cover has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. We can think of this edge as a straight line segment from c_\gamma to c_{\gamma'}.

So, we get a graph \overline{X} sitting inside C_1(X,\mathbb{R}). But this is a high-dimensional space. To get something nicer we’ll project down to a lower-dimensional space.

There’s boundary operator

\partial : C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

sending any edge to the difference of its two endpoints. The kernel of this operator is the space of 1-cycles, Z_1(X,\mathbb{R}). There’s an inner product on the space of 1-chains such that edges form an orthonormal basis, so we get an orthogonal projection

\pi : C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

We can use this to take the maximal abelian cover \overline{X} and project it down to the space of 1-cycles. The hard part is checking that \pi is one-to-one on \overline{X}. But that’s what I explained last time! It’s true whenever our original graph X has no bridges: that is, edges whose removal would disconnect our graph, like this:

So, when X is a bridgeless graph, we get a copy of the maximal abelian cover embedded in Z_1(X,\mathbb{R}). This is our topological crystal.
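Since the whole construction is just linear algebra on the boundary matrix, it is easy to experiment with numerically. Here is a minimal Python sketch (the graph encoding and the helper name are mine, not from the post) that builds the boundary operator from an edge list and returns the orthogonal projection onto the space of 1-cycles, tried out on the complete graph on 4 vertices, i.e. the tetrahedron graph behind the triamond below.

    import numpy as np

    def cycle_projection(num_vertices, edges):
        """Orthogonal projection from 1-chains onto 1-cycles, as a matrix."""
        E = len(edges)
        boundary = np.zeros((num_vertices, E))
        for j, (tail, head) in enumerate(edges):
            boundary[tail, j] -= 1.0   # the boundary of an edge is head - tail
            boundary[head, j] += 1.0
        # Z_1 = ker(boundary); extract an orthonormal basis from the SVD.
        _, s, vt = np.linalg.svd(boundary)
        cycle_basis = vt[np.sum(s > 1e-10):]
        return cycle_basis.T @ cycle_basis

    # Tetrahedron graph: 4 vertices, 6 edges, so a 3d space of 1-cycles.
    tetrahedron = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pi = cycle_projection(4, tetrahedron)
    print(np.linalg.matrix_rank(pi))   # 3: the dimension of Z_1(X, R)

Projecting the 1-chains c_\gamma of paths through this matrix is exactly the construction described above.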

Let’s do some examples.

Graphene

I showed you this one before, but it’s a good place to start. Let X be this graph:

Since this graph has 3 edges, its space of 1-chains is 3-dimensional. Since this graph has 2 holes, its 1-cycles form a plane in that 3d space. If we take paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we get this:

Here the 1-chains c_\gamma are the white and red dots. They’re the vertices of the maximal abelian cover \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we get our topological crystal:

This is the pattern of graphene, a miraculous 2-dimensional form of carbon. The more familiar 3d crystal called graphite is made of parallel layers of graphene connected with some other bonds.
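One can check numerically that the three edges of this graph project to three vectors at 120° to each other in the plane of 1-cycles, which is exactly the hexagonal geometry of graphene. A small self-contained sketch (my encoding of the pictured graph as two vertices joined by three parallel edges, consistent with its 3 edges and 2 holes):

    import numpy as np

    # Boundary matrix: rows are the two vertices, columns the three edges.
    boundary = np.array([[-1.0, -1.0, -1.0],
                         [ 1.0,  1.0,  1.0]])
    _, s, vt = np.linalg.svd(boundary)
    cycle_basis = vt[np.sum(s > 1e-10):]   # orthonormal basis of the plane Z_1

    # Column j of cycle_basis gives the coordinates of the projected edge pi(e_j).
    for i in range(3):
        for j in range(i + 1, 3):
            u, v = cycle_basis[:, i], cycle_basis[:, j]
            cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            print(i, j, round(float(np.degrees(np.arccos(cos))), 1))   # 120.0 each time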

Puzzle 1. Classify bridgeless connected graphs with 2 holes (or more precisely, a 2-dimensional space of 1-cycles). What are the resulting 2d topological crystals?

Diamond

Now let’s try this graph:

Since it has 3 holes, it gives a 3d crystal:

This crystal structure is famous! It’s the pattern used by a diamond. Up to translation it has two kinds of atoms, corresponding to the two vertices of the original graph.

Triamond

Now let’s try this graph:

Since it has 3 holes, it gives another 3d crystal:

This is also famous: it’s sometimes called a ‘triamond’. If you’re a bug crawling around on this crystal, locally you experience the same topology as if you were crawling around on a wire-frame model of a tetrahedron. But you’re actually on the maximal abelian cover!

Up to translation the triamond has 4 kinds of atoms, corresponding to the 4 vertices of the tetrahedron. Each atom has 3 equally distant neighbors lying in a plane at 120° angles from each other. These planes lie in 4 families, each parallel to one face of a regular tetrahedron. This structure was discovered by the crystallographer Laves, and it was dubbed the Laves graph by Coxeter. Later Sunada called it the ‘\mathrm{K}_4 lattice’ and studied its energy minimization properties. Theoretically it seems to be a stable form of carbon. Crystals in this pattern have not yet been seen, but this pattern plays a role in the structure of certain butterfly wings.

Puzzle 2. Classify bridgeless connected graphs with 3 holes (or more precisely, a 3d space of 1-cycles). What are the resulting 3d topological crystals?

Lonsdaleite and hyperquartz

There’s a crystal form of carbon called lonsdaleite that looks like this:

It forms in meteor impacts. It does not arise as a 3-dimensional topological crystal.

Puzzle 3. Show that this graph gives a 5-dimensional topological crystal which can be projected down to give lonsdaleite in 3d space:

Puzzle 4. Classify bridgeless connected graphs with 4 holes (or more precisely, a 4d space of 1-cycles). What are the resulting 4d topological crystals? A crystallographer with the wonderful name of Eon calls this one hyperquartz, because it’s a 4-dimensional analogue of quartz:

All these classification problems are quite manageable if you notice there are certain ‘boring’, easily understood ways to get new bridgeless connected graphs with n holes from old ones.

Platonic crystals

For any connected graph X, there is a covering map

q : \overline{X} \to X

The vertices of \overline{X} come in different kinds, or ‘colors’, depending on which vertex of X they map to. It’s interesting to look at the group of ‘covering symmetries’, \mathrm{Cov}(X), consisting of all symmetries of \overline{X} that map vertices of the same color to vertices of the same color. Greg Egan and I showed that if X has no bridges, \mathrm{Cov}(X) also acts as symmetries of the topological crystal associated to X. This group fits into a short exact sequence:

1 \longrightarrow H_1(X,\mathbb{Z}) \longrightarrow \mathrm{Cov}(X) \longrightarrow \mathrm{Aut}(X) \longrightarrow 1

where \mathrm{Aut}(X) is the group of all symmetries of X. Thus, every symmetry of X is covered by some symmetry of its topological crystal, while H_1(X,\mathbb{Z}) acts as translations of the crystal, in a way that preserves the color of every vertex.

For example consider the triamond, which comes from the tetrahedron. The symmetry group of the tetrahedron is this Coxeter group:

\mathrm{A}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^3 = s_1^2 = s_2^2 = s_3^2 = 1\rangle

Thus, the group of covering symmetries of the triamond is an extension of \mathrm{A}_3 by \mathbb{Z}^3. Beware the notation here: this is not the alternating group on the 3 letters. In fact it’s the permutation group on 4 letters, namely the vertices of the tetrahedron!

We can also look at other ‘Platonic crystals’. The symmetry group of the cube and octahedron is the Coxeter group

\mathrm{B}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^4 = s_1^2 = s_2^2 = s_3^2 = 1\rangle

Since the cube has 6 faces, the graph formed by its vertices and edges has a 5d space of 1-cycles. The corresponding topological crystal is thus 5-dimensional, and its group of covering symmetries is an extension of \mathrm{B}_3 by \mathbb{Z}^5. Similarly, the octahedron gives a 7-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{B}_3 by \mathbb{Z}^7.

The symmetry group of the dodecahedron and icosahedron is

\mathrm{H}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^5= s_1^2 = s_2^2 = s_3^2 = 1\rangle

and these solids give crystals of dimensions 11 and 19. If you’re a bug crawling around on the second of these, locally you experience the same topology as if you were crawling around on a wire-frame model of an icosahedron. But you’re actually in 19-dimensional space, crawling around on the maximal abelian cover!

There is also an infinite family of degenerate Platonic solids called ‘hosohedra’ with two vertices, n edges and n faces. These faces cannot be made flat, since each face has just 2 edges, but that is not relevant to our construction: the vertices and edges still give a graph. For example, when n = 6, we have the ‘hexagonal hosohedron’:

The corresponding crystal has dimension n-1, and its group of covering symmetries is an extension of \mathrm{S}_n \times \mathbb{Z}/2 by \mathbb{Z}^{n-1}. The case n = 3 gives the graphene crystal, while n = 4 gives the diamond.

Exotic crystals

We can also get crystals from more exotic highly symmetrical graphs. For example, take the Petersen graph:

Its symmetry group is the symmetric group \mathrm{S}_5. It has 10 vertices and 15 edges, so its Euler characteristic is -5, which implies that its space of 1-cycles is 6-dimensional. It thus gives a 6-dimensional crystal whose group of covering symmetries is an extension of \mathrm{S}_5 by \mathbb{Z}^6.

Two more nice examples come from Klein’s quartic curve, a Riemann surface of genus three on which the 336-element group \mathrm{PGL}(2,\mathbb{F}_7) acts as isometries. These isometries preserve a tiling of Klein’s quartic curve by 56 triangles, with 7 meeting at each vertex. This picture is topologically correct, though not geometrically:

From this tiling we obtain a graph X embedded in Klein’s quartic curve. This graph has 56 \times 3 / 2 = 84 edges and 56 \times 3 / 7 = 24 vertices, so it has Euler characteristic -60. It thus gives a 61-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{PGL}(2,\mathbb{F}_7) by \mathbb{Z}^{61}.

There is also a dual tiling of Klein’s curve by 24 heptagons, 3 meeting at each vertex. This gives a graph with 84 edges and 56 vertices, hence Euler characteristic -28. From this we obtain a 29-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{PGL}(2,\mathbb{F}_7) by \mathbb{Z}^{29}.
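All the dimensions quoted in this post come from the same little count: for a connected graph, the space of 1-cycles has dimension E - V + 1, which is 1 minus the Euler characteristic. A quick sketch tallying the examples above (the edge and vertex counts of the Platonic solids are standard values, not all spelled out in the text):

    examples = {
        "tetrahedron (triamond)":   (6, 4),
        "cube":                     (12, 8),
        "octahedron":               (12, 6),
        "dodecahedron":             (30, 20),
        "icosahedron":              (30, 12),
        "Petersen graph":           (15, 10),
        "Klein quartic, triangles": (84, 24),
        "Klein quartic, heptagons": (84, 56),
    }
    for name, (E, V) in examples.items():
        print(name, E - V + 1)
    # 3, 5, 7, 11, 19, 6, 61 and 29: the crystal dimensions listed above.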

The packing fraction

Now that we’ve got a supply of highly symmetrical crystals in higher dimensions, we can try to study their structure. We’ve only made a bit of progress on this.

One interesting property of a topological crystal is its ‘packing fraction’. I like to call the vertices of a topological crystal atoms, for the obvious reason. The set A of atoms is periodic. It’s usually not a lattice. But it’s contained in the lattice L obtained by projecting the integral 1-chains down to the space of 1-cycles:

L = \{   \pi(c) : \; c \in C_1(X,\mathbb{Z})  \}

We can ask what fraction of the points in this lattice are actually atoms. Let’s call this the packing fraction. Since Z_1(X,\mathbb{Z}) acts as translations on both A and L, we can define it to be

\displaystyle{     \frac{|A/Z_1(X,\mathbb{Z})|}{|L/Z_1(X,\mathbb{Z})|} }

For example, suppose X is the graph that gives graphene:

Then the packing fraction is 2/3, as can be seen here:

For any bridgeless connected graph X, it turns out that the packing fraction equals

\displaystyle{    \frac{|V|}{|T|} }

where V is the set of vertices and T is the set of spanning trees. The main tool used to prove this is Bacher, de la Harpe and Nagnibeda’s work on integral cycles and integral cuts, which in turn relies on Kirchhoff’s matrix tree theorem.
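Kirchhoff’s matrix tree theorem makes this formula easy to evaluate in small cases: the number of spanning trees equals any cofactor of the graph Laplacian. A minimal sketch (my own encoding, not Egan’s Mathematica code), checked on the graphene graph, where the packing fraction should come out to 2/3:

    import numpy as np
    from fractions import Fraction

    def packing_fraction(num_vertices, edges):
        """|V| divided by the number of spanning trees (matrix tree theorem)."""
        L = np.zeros((num_vertices, num_vertices))
        for tail, head in edges:
            L[tail, tail] += 1
            L[head, head] += 1
            L[tail, head] -= 1
            L[head, tail] -= 1
        trees = int(round(np.linalg.det(L[1:, 1:])))   # any cofactor of the Laplacian
        return Fraction(num_vertices, trees)

    # Graphene graph: two vertices joined by three parallel edges.
    print(packing_fraction(2, [(0, 1), (0, 1), (0, 1)]))   # 2/3

The determinant route is fine for small graphs; for the bigger examples below one would want exact integer arithmetic.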

Greg Egan used Mathematica to count the spanning trees in the examples discussed above, and this let us work out their packing fractions. They tend to be very low! For example, the maximal abelian cover of the dodecahedron gives an 11-dimensional crystal with packing fraction 1/27,648, while the heptagonal tiling of Klein’s quartic gives a 29-dimensional crystal with packing fraction 1/688,257,368,064,000,000.

So, we have some very delicate, wispy crystals in high-dimensional spaces, built from two simple ideas in topology: the maximal abelian cover of a graph, and the natural inner product on 1-chains. They have intriguing connections to tropical geometry, but they are just beginning to be understood in detail. Have fun with them!

For more, see:

• John Baez, Topological crystals.


by John Baez at August 28, 2016 07:12 AM

Emily Lakdawalla - The Planetary Society Blog

Juno's first Jupiter close approach successful; best JunoCam images yet to come
NASA announced this afternoon that Juno successfully passed through its first perijove since entering orbit, with science instruments operating all the way. This is a huge relief, given all the unknowns about the effects of Jupiter's nasty radiation environment on its brand-new orbiter.

August 28, 2016 01:07 AM

August 27, 2016

The n-Category Cafe

Jobs at Heriot-Watt

We at the mathematics department at the University of Edinburgh are doing more and more things in conjunction with our sisters and brothers at Heriot–Watt University, also in Edinburgh. For instance, our graduate students take classes together, and about a dozen of them are members of both departments simultaneously. We’re planning to strengthen those links in the years to come.

The news is that Heriot–Watt are hiring.

They’re looking for one or more “pure” mathematicians. These are permanent jobs at any level, from most junior to most senior. There’s significant interest in category theory there, in contexts such as mathematical physics and semigroup theory — e.g. when I taught an introductory category theory course last year, there was a good bunch of participants from Heriot–Watt.

In case you were wondering, Heriot was goldsmith to the royal courts of Scotland and Denmark in the 16th century. Gold ↦ money ↦ university, apparently. Watt is the Scottish engineer James Watt, as in “60-watt lightbulb”.

by leinster (Tom.Leinster@ed.ac.uk) at August 27, 2016 04:11 AM

August 26, 2016

Symmetrybreaking - Fermilab/SLAC

Winners declared in SUSY bet

Physicists exchanged cognac in Copenhagen at the conclusion of a bet about supersymmetry and the LHC.

As a general rule, theorist Nima Arkani-Hamed does not get involved in physics bets.

“Theoretical physicists like to take bets on all kinds of things,” he says. “I’ve always taken the moral high ground… Nature decides. We’re all in pursuit of the truth. We’re all on the same side.”

But sometimes you’re in Copenhagen for a conference, and you’re sitting in a delightfully unusual restaurant—one that sort of reminds you of a cave—and a fellow physicist gives you the opportunity to get in on a decade-old wager about supersymmetry and the Large Hadron Collider. Sometimes then, you decide to bend your rule. “It was just such a jovial atmosphere, I figured, why not?”

That’s how Arkani-Hamed found himself back in Copenhagen this week, passing a 1000-Krone bottle of cognac to one of the winners of the bet, Director of the Niels Bohr International Academy Poul Damgaard.

Arkani-Hamed had wagered that experiments at the LHC would find evidence of supersymmetry by the arbitrary date of June 16, 2016. Supersymmetry, SUSY for short, is a theory that predicts the existence of partner particles for the members of the Standard Model of particle physics.

The deadline was not met. But in a talk at the Niels Bohr Institute, Arkani-Hamed pointed out that the end of the gamble does not equal the end of the theory.

“I was not a good student in school,” Arkani-Hamed explained. “One of my big problems was not getting homework done on time. It was a constant battle with my teachers… Just give me another week! It’s kind of like the bet.”

He pointed out that so far the LHC has gathered just 1 percent of the total amount of data it aims to collect.

With that data, scientists can indeed rule out the most vanilla form of supersymmetry. But that’s not the version of supersymmetry Arkani-Hamed would expect the LHC to find anyway, he said.

It is still possible LHC experiments will find evidence of other SUSY models—including the one Arkani-Hamed prefers, called split SUSY, which adds superpartners to just half of the Standard Model’s particles. And if LHC scientists don’t find evidence of SUSY, Arkani-Hamed pointed out, the theoretical problems it aimed to solve will remain an exciting challenge for the next generation of theorists to figure out.

“I think Winston Churchill said that in victory you should be magnanimous,” Damgaard said after Arkani-Hamed’s talk. “I know also he said that in defeat you should be defiant. And that’s certainly Nima.”

Arkani-Hamed shrugged. But it turned out he was not the only optimist in the room. Panelist Yonit Hochberg of the University of California, Berkeley conducted an informal poll of attendees. She found that the majority still think that in the next 20 years, as data continues to accumulate, experiments at the LHC will discover something new.


by Kathryn Jepsen at August 26, 2016 09:48 PM

astrobites - astro-ph reader's digest

Earth’s New Neighbor: Proxima b

Title:  A terrestrial planet candidate in a temperate orbit around Proxima Centauri
Authors: Guillem Anglada-Escudé, Pedro J. Amado, John Barnes, et al.
First Author’s Institution: Queen Mary University of London
Status: Published in Nature

Earth just got a new neighbor.  Proxima Centauri, the closest star to Earth at just 4.22 light years away, appears to host a planet, but not just any planet.  The planet, Proxima b, is potentially Earth-mass and lies within the habitable zone where liquid water might exist on its surface. The quest for a truly Earth-like planet is the Holy Grail of exoplanet research.  The fact that we might have to go only one star over to find another habitable planet indicates that Earth-like planets are either very common or that we got very lucky.  Whatever the case, this makes Proxima b one of the most interesting planets in the search for extraterrestrial life.


Figure 1:  This artist’s impression shows the planet Proxima b orbiting the red dwarf star Proxima Centauri, the closest star to the Solar System. The double star Alpha Centauri AB also appears in the image between the planet and Proxima itself. Proxima b is a little more massive than the Earth and orbits in the habitable zone around Proxima Centauri, where the temperature is suitable for liquid water to exist on its surface.  Image and caption credit: ESO. 

The star: Proxima Centauri

Proxima Centauri is tiny.  It’s barely big enough to be called a star.  At just 12% the mass of the Sun, it is what is called a red dwarf (or M-dwarf).  Red dwarf stars, due to their low mass, are much cooler than more massive stars. Whereas our Sun is a toasty 5780 K, Proxima Centauri is a “chilly” 3050 K.  Its temperature, along with its small size (just 14.1% the radius of the Sun), means that Proxima is very faint.  The luminosity of Proxima Centauri is only 0.155% that of the Sun’s.  This means that the habitable zone is very close to the star, just 0.042-0.082 AU from the star (1 AU = distance from Earth to Sun).  For comparison, the habitable zone for the Sun is 0.95-1.68 AU.

How the discovery was made

Proxima b was discovered by measuring its gravitational effect on its star using the radial velocity method (also known as the Doppler or wobble method). Just like how the star pulls on the planet, the planet pulls on the star. Usually, Earth-mass planets are too small to have a measurable effect on their stars.  Earth, for example, only causes an 8 cm/s shift in the Sun’s velocity over the course of a year, which is currently beyond our technological limits to measure.  However, because Proxima Centauri is so small and the planet is so close, Proxima b caused a much more detectable 1.38 m/s shift over its much shorter (and therefore more detectable) orbital period of 11 days.
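To get a feel for why such a small star makes the detection possible, one can plug the quoted numbers into the standard radial-velocity semi-amplitude formula. The back-of-the-envelope sketch below assumes a circular, edge-on orbit and the minimum mass; it reproduces the scale of the measured signal, not the paper’s actual fit.

    import numpy as np

    G       = 6.674e-11     # m^3 kg^-1 s^-2
    M_sun   = 1.989e30      # kg
    M_earth = 5.972e24      # kg

    def rv_semi_amplitude(m_planet, m_star, period_days):
        """Stellar reflex velocity K (m/s) for a circular, edge-on orbit."""
        P = period_days * 86400.0
        return (2 * np.pi * G / P) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)

    # Proxima b (minimum mass 1.27 Earth masses) around a 0.12 solar-mass star:
    print(rv_semi_amplitude(1.27 * M_earth, 0.12 * M_sun, 11.2))    # ~1.5 m/s
    # Earth around the Sun, for comparison:
    print(rv_semi_amplitude(1.0 * M_earth, 1.0 * M_sun, 365.25))    # ~0.09 m/s

The first number lands in the same ballpark as the measured 1.38 m/s (the exact value depends on the adopted stellar mass and eccentricity), while the second recovers the few-centimeters-per-second Earth–Sun signal quoted above.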

These small shifts in the star’s velocity caused by the planet’s gravity are measured using spectroscopy. Stars emit light across the electromagnetic spectrum. However, elements and molecules in the upper atmosphere of the star absorb very precise regions of the spectrum called spectral lines.  Because of the Doppler effect, the location of these lines shift depending on the velocity of the star.  Therefore, astronomers can measure the velocity of a star as a function of time by measuring how these spectral lines shift.  If the star’s velocity moves up and down periodically, this indicates the existence of a body orbiting the star, with the magnitude of the shift depending on the body’s mass and distance.  The velocity measurements for Proxima Centauri are shown below.


Figure 2:  Radial velocity measurements of Proxima Centauri using UVES data and two periods of HARPS data.  The data is phase-folded, meaning that the measurements of each 11.2 day period are plotted on top of one another.  The sinusoidal nature of the measurements indicates the existence of a planet orbiting the star.

Things are never quite as easy as they sound though.  Spectra are very complicated things.  First, you need a very stable instrument to measure the velocities of stars down to just 1 m/s over the course of months or years.  The instruments used to discover Proxima b, HARPS and UVES, are two of the most stable and precise spectrometers in the world, each being stable to about 1-2 m/s over a long timespan.  Other factors, such as turbulence in the air, can cause fake velocity shifts in the data that can be extremely difficult to correct for.
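To appreciate how demanding that 1 m/s stability requirement is, one can translate the measured velocity amplitude into a wavelength shift with the non-relativistic Doppler formula, Δλ/λ = v/c. A quick sketch (the 500 nm reference wavelength is just a representative optical line, my choice, not a number from the paper):

    c = 2.998e8            # speed of light, m/s
    v = 1.38               # measured velocity semi-amplitude, m/s
    wavelength = 500e-9    # a representative optical spectral line, m

    shift = wavelength * v / c
    print(shift)           # ~2.3e-15 m: the line moves by only a few femtometers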

The star’s innate variability and magnetic activity can also mimic the signal shown by planets.  Improper correction of stellar activity led to the disproving of the suspected Earth-mass planet around Proxima Centauri’s neighbor, Alpha Centauri B.  Due to this major confounding factor, the authors were very careful to separate any signals of activity from the planetary signal.  Much of the activity works on the same timescale as the star’s rotation rate.  The 88 day rotation period of the star doesn’t match the 11.2 day period of the planetary signal, nor is it a multiple of the planet’s period, which gives credibility to the planetary hypothesis.  Magnetic activity can also change the shape of the spectral lines, which can induce a planet-like radial velocity signature in the data.  Several spectral lines that are especially sensitive to these changes were carefully examined, and the shifts in these lines did not correlate with the planetary signal.  All other explanations for the 1.38 m/s shift over 11.2 days having been disproved, this confirms the signal as being caused by a planet.

The planet: Proxima b

Very little can be said about Proxima b.  The radial velocity technique only measures the minimum mass of the planet, which in this case is 1.27 times Earth’s mass.  The exact mass depends on whether we’re seeing the planetary system edge-on, in which case it would be 1.27 Earth-masses, or whether we’re seeing it at some angle, in which case it would be larger (although it’s likely to be mostly rocky in almost all orientations).

Proxima b even lies in the habitable zone, even when conservatively defined.  However, there’s always a catch.  The habitability of red dwarfs has been heavily debated for years.  Two commonly cited problems with the habitability of red dwarf planets are tidal locking and stellar activity.  Tidal locking, where a planet shows the same face to its star all year long, leads to extreme temperature variations, which likely impede life.  For example, water vapor from the hot, sunny side might move to the cold, dark side and freeze out of the atmosphere, forever locking it up as ice.  X-ray and UV radiation from stellar flares, more common on red dwarfs, could also sterilize the surface.  On the other hand, there are some astronomers who say that these two issues are overblown and that these are not fatal problems for the formation and evolution of life on these planets.

Future research on Proxima b will undoubtedly look for a potential planetary transit of Proxima Centauri.  A transit allows for the measurement of the planet’s radius, which when combined with the radial velocity measurements, leads to the planet’s mass and density as well, which could prove or disprove Proxima b as a rocky planet.  However, the geometric probability of it transiting is only 1.5%, so it’s unlikely to occur.  But if it doesn’t transit, nothing much can be done to characterize the planet until the next generation of telescopes comes online with the (potential) ability to directly image Proxima b and actually measure its temperature and composition to more definitely determine its habitability.
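The quoted transit probability is essentially the geometric ratio of the stellar radius to the orbital distance, with the orbital distance following from Kepler’s third law and the stellar parameters given above. A rough check (circular orbit assumed; the published figure also folds in the planet’s size and possible eccentricity):

    import numpy as np

    G     = 6.674e-11      # m^3 kg^-1 s^-2
    M_sun = 1.989e30       # kg
    R_sun = 6.957e8        # m
    AU    = 1.496e11       # m

    M_star = 0.12 * M_sun
    R_star = 0.141 * R_sun
    P      = 11.2 * 86400.0          # orbital period in seconds

    a = (G * M_star * P**2 / (4 * np.pi**2)) ** (1 / 3)   # Kepler's third law
    print(a / AU)          # ~0.048 AU, inside the 0.042-0.082 AU habitable zone
    print(R_star / a)      # ~0.014, i.e. roughly the quoted ~1.5 percent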

Whatever future research may hold, for now, Proxima b is the closest potentially habitable exoplanet to the Earth.  In Galactic terms, it’s literally right next door.

by Joseph Schmitt at August 26, 2016 02:06 PM

Quantum Diaries

The Delirium over Beryllium

This post is cross-posted from ParticleBites.

Article: Particle Physics Models for the 17 MeV Anomaly in Beryllium Nuclear Decays
Authors: J.L. Feng, B. Fornal, I. Galon, S. Gardner, J. Smolinsky, T. M. P. Tait, F. Tanedo
Reference: arXiv:1608.03591 (Submitted to Phys. Rev. D)
Also featuring the results from:
— Gulyás et al., “A pair spectrometer for measuring multipolarities of energetic nuclear transitions” (description of detector; 1504.00489; NIM)
— Krasznahorkay et al., “Observation of Anomalous Internal Pair Creation in 8Be: A Possible Indication of a Light, Neutral Boson”  (experimental result; 1504.01527; PRL version; note the PRL version differs from the arXiv version)
— Feng et al., “Protophobic Fifth-Force Interpretation of the Observed Anomaly in 8Be Nuclear Transitions” (phenomenology; 1604.07411; PRL)

Editor’s note: the author is a co-author of the paper being highlighted. 

Recently there’s some press (see links below) regarding early hints of a new particle observed in a nuclear physics experiment. In this bite, we’ll summarize the result that has raised the eyebrows of some physicists, and the hackles of others.

A crash course on nuclear physics

Nuclei are bound states of protons and neutrons. They can have excited states analogous to the excited states of atoms, which are bound states of nuclei and electrons. The particular nucleus of interest is beryllium-8, which has four neutrons and four protons, which you may know from the triple alpha process. There are three nuclear states to be aware of: the ground state, the 18.15 MeV excited state, and the 17.64 MeV excited state.

Beryllium-8 excited nuclear states. The 18.15 MeV state (red) exhibits an anomaly. Both the 18.15 MeV and 17.64 MeV states decay to the ground state through a magnetic, p-wave transition. Image adapted from Savage et al. (1987).

Most of the time the excited states fall apart into a lithium-7 nucleus and a proton. But sometimes, these excited states decay into the beryllium-8 ground state by emitting a photon (γ-ray). Even more rarely, these states can decay to the ground state by emitting an electron–positron pair from a virtual photon: this is called internal pair creation and it is these events that exhibit an anomaly.

The beryllium-8 anomaly

Physicists at the Atomki nuclear physics institute in Hungary were studying the nuclear decays of excited beryllium-8 nuclei. The team, led by Attila J. Krasznahorkay, produced beryllium excited states by bombarding a lithium-7 nucleus with protons.


Beryllium-8 excited states are prepared by bombarding lithium-7 with protons.

The proton beam is tuned to very specific energies so that one can ‘tickle’ specific beryllium excited states. When the protons have around 1.03 MeV of kinetic energy, they excite lithium into the 18.15 MeV beryllium state. This has two important features:

  1. Picking the proton energy allows one to only produce a specific excited state so one doesn’t have to worry about contamination from decays of other excited states.
  2. Because the 18.15 MeV beryllium nucleus is produced at resonance, one has a very high yield of these excited states. This is very good when looking for very rare decay processes like internal pair creation.
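As a quick cross-check of the resonance condition quoted above: the excitation energy reached in the collision is the 7Li(p,γ)8Be Q-value plus the proton’s kinetic energy weighted by the lithium mass fraction. Taking Q ≈ 17.25 MeV from standard nuclear mass tables (a number not quoted in the text) and approximating the mass ratio by 7/8,

E_x \;\approx\; Q + \frac{M_{\mathrm{Li}}}{M_{\mathrm{Li}}+M_p}\,E_p \;\approx\; 17.25\ \mathrm{MeV} + \tfrac{7}{8}\times 1.03\ \mathrm{MeV} \;\approx\; 18.15\ \mathrm{MeV},

which is exactly the excited state being targeted.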

What one expects is that most of the electron–positron pairs have a small opening angle, with the number of events falling smoothly at larger opening angles.


Expected distribution of opening angles for ordinary internal pair creation events. Each line corresponds to a nuclear transition that is electric (E) or magnetic (M) with a given orbital quantum number, l. The beryllium transitions that we’re interested in are mostly M1. Adapted from Gulyás et al. (1504.00489).

Instead, the Atomki team found an excess of events with large electron–positron opening angle. In fact, even more intriguing: the excess occurs around a particular opening angle (140 degrees) and forms a bump.


Number of events (dN/dθ) for different electron–positron opening angles, plotted for different excitation energies (Ep). For Ep=1.10 MeV, there is a pronounced bump at 140 degrees which does not appear to be explainable by ordinary internal pair conversion. This may be suggestive of a new particle. Adapted from Krasznahorkay et al., PRL 116, 042501.

Here’s why a bump is particularly interesting:

  1. The distribution of ordinary internal pair creation events is smoothly decreasing and so this is very unlikely to produce a bump.
  2. Bumps can be signs of new particles: if there is a new, light particle that can facilitate the decay, one would expect a bump at an opening angle that depends on the new particle mass.

Schematically, the new particle interpretation looks like this:


Schematic of the Atomki experiment and new particle (X) interpretation of the anomalous events. In summary: protons of a specific energy bombard stationary lithium-7 nuclei and excite them to the 18.15 MeV beryllium-8 state. These decay into the beryllium-8 ground state. Some of these decays are mediated by the new X particle, which then decays in to electron–positron pairs of a certain opening angle that are detected in the Atomki pair spectrometer detector. Image from 1608.03591.

As an exercise for those with a background in special relativity, one can use the relation \((p_{e^+} + p_{e^-})^2 = m_X^2\) to prove the result:

m_X^2 \;=\; 2m_e^2 + 2E_{e^+}E_{e^-} - 2\,|\vec p_{e^+}|\,|\vec p_{e^-}|\cos\theta \;\approx\; 2E_{e^+}E_{e^-}\left(1-\cos\theta\right)

This relates the mass of the proposed new particle, X, to the opening angle θ and the energies E of the electron and positron. The opening angle bump would then be interpreted as a new particle with mass of roughly 17 MeV. To match the observed number of anomalous events, the rate at which the excited beryllium decays via the X boson must be \(6\times 10^{-6}\) times the rate at which it goes into a γ-ray.
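Plugging in rough numbers makes the 17 MeV figure plausible: the electron–positron pair shares essentially the full ~18 MeV transition energy, and the bump sits near 140 degrees. A hedged numerical sketch (taking the two lepton energies to be equal, at about 9 MeV each, is my simplifying assumption):

    import numpy as np

    m_e = 0.511                  # electron mass, MeV
    E_plus = E_minus = 9.0       # MeV each, roughly half the transition energy
    theta = np.radians(140.0)    # opening angle at the bump

    p_plus  = np.sqrt(E_plus**2 - m_e**2)
    p_minus = np.sqrt(E_minus**2 - m_e**2)
    m_X = np.sqrt(2 * m_e**2 + 2 * E_plus * E_minus
                  - 2 * p_plus * p_minus * np.cos(theta))
    print(m_X)                   # ~17 MeV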

The anomaly has a significance of 6.8σ. This means that it’s highly unlikely to be a statistical fluctuation, as the 750 GeV diphoton bump appears to have been. Indeed, the conservative bet would be some not-understood systematic effect, akin to the 130 GeV Fermi γ-ray line.

The beryllium that cried wolf?

Some physicists are concerned that beryllium may be the ‘boy that cried wolf,’ and point to papers by the late Fokke de Boer as early as 1996 and all the way to 2001. de Boer made strong claims about evidence for a new 10 MeV particle in the internal pair creation decays of the 17.64 MeV beryllium-8 excited state. These claims didn’t pan out, and in fact the instrumentation paper by the Atomki experiment rules out that original anomaly.

The proposed evidence for “de Boeron” is shown below:


The de Boer claim for a 10 MeV new particle. Left: distribution of opening angles for internal pair creation events in an E1 transition of carbon-12. This transition has similar energy splitting to the beryllium-8 17.64 MeV transition and shows good agreement with the expectations; as shown by the flat “signal – background” on the bottom panel. Right: the same analysis for the M1 internal pair creation events from the 17.64 MeV beryllium-8 states. The “signal – background” now shows a broad excess across all opening angles. Adapted from de Boer et al. PLB 368, 235 (1996).

When the Atomki group studied the same 17.64 MeV transition, they found that a key background component—subdominant E1 decays from nearby excited states—had not been included in the original de Boer analysis, and that including it dramatically improved the fit. This is the last nail in the coffin for the proposed 10 MeV “de Boeron.”

However, the Atomki group also highlight how their new anomaly in the 18.15 MeV state behaves differently. Unlike the broad excess in the de Boer result, the new excess is concentrated in a bump. There is no known way in which additional internal pair creation backgrounds can add a bump to the opening angle distribution; as noted above, all of these distributions are smoothly falling.

The Atomki group goes on to suggest that the new particle appears to fit the bill for a dark photon, a reasonably well-motivated copy of the ordinary photon that differs in its overall interaction strength and in having a non-zero (17 MeV?) mass.

Theory part 1: Not a dark photon

With the Atomki result published and peer reviewed in Physical Review Letters, the game was afoot for theorists to understand how it would fit into a theoretical framework like the dark photon. A group from UC Irvine, University of Kentucky, and UC Riverside found that actually, dark photons have a hard time fitting the anomaly simultaneously with other experimental constraints. In the visual language of this recent ParticleBite, the situation was this:


It turns out that the minimal model of a dark photon cannot simultaneously explain the Atomki beryllium-8 anomaly without running afoul of other experimental constraints. Image adapted from this ParticleBite.

The main reason for this is that a dark photon with mass and interaction strength to fit the beryllium anomaly would necessarily have been seen by the NA48/2 experiment. This experiment looks for dark photons in the decay of neutral pions (π0). These pions typically decay into two photons, but if there’s a 17 MeV dark photon around, some fraction of those decays would go into dark photon–ordinary photon pairs. The non-observation of these unique decays rules out the dark photon interpretation.

The theorists then decided to “break” the dark photon theory in order to try to make it fit. They generalized the types of interactions that a new photon-like particle, X, could have, allowing protons, for example, to have completely different charges than electrons rather than having exactly opposite charges. Doing this does gross violence to the theoretical consistency of a theory—but the goal was just to see what a new particle interpretation would have to look like. They found that if a new photon-like particle talked to neutrons but not protons—that is, the new force were protophobic—then a theory might hold together.

Schematic description of how model-builders “hacked” the dark photon theory to fit the beryllium anomaly while remaining consistent with other experiments. This hack isn’t pretty—and indeed, comes at the cost of potentially invalidating the mathematical consistency of the theory—but the exercise demonstrates how a complete theory might have to behave. Image adapted from this ParticleBite.

Theory appendix: pion-phobia is protophobia

Editor’s note: what follows is for readers with some physics background interested in a technical detail; others may skip this section.

How does a new particle that is allergic to protons avoid the neutral pion decay bounds from NA48/2? Pions decay into pairs of photons through the well-known triangle diagrams of the axial anomaly. The decay into photon–dark-photon pairs proceeds through similar diagrams. The goal is then to make sure that these diagrams cancel.

A cute way to look at this is to assume that at low energies, the relevant particles running in the loop aren’t quarks, but rather nucleons (protons  and neutrons). In fact, since only the proton can talk to the photon, one only needs to consider proton loops. Thus if the new photon-like particle, X, doesn’t talk to protons, then there’s no diagram for the pion to decay into γX. This would be great if the story weren’t completely wrong.


Avoiding NA48/2 bounds requires that the new particle, X, is pion-phobic. It turns out that this is equivalent to X being protophobic. The correct way to see this is on the left, making sure that the contribution of up-quark loops cancels the contribution from down-quark loops. A slick (but naively completely wrong) calculation is on the right, arguing that effectively only protons run in the loop.

The correct way of seeing this is to treat the pion as a quantum superposition of an up–anti-up and down–anti-down bound state, and then make sure that the X charges are such that the contributions of the two states cancel. The resulting charges turn out to be protophobic.
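Schematically, the cancellation condition can be written out (a sketch in my own notation, not the papers': ε_u and ε_d are the X charges of the up and down quarks, Q_u = +2/3 and Q_d = −1/3 their electric charges, and N_c the number of colors):

$$ \pi^0 \sim \tfrac{1}{\sqrt{2}}\left(u\bar{u} - d\bar{d}\right), \qquad \mathcal{A}(\pi^0 \to \gamma X) \;\propto\; N_c\left(Q_u\,\varepsilon_u - Q_d\,\varepsilon_d\right) \;=\; \frac{N_c}{3}\left(2\varepsilon_u + \varepsilon_d\right) \;=\; \frac{N_c}{3}\,\varepsilon_p , $$

so demanding that the up- and down-quark loops cancel is the same as demanding that the proton’s X charge, ε_p = 2ε_u + ε_d, vanish: pion-phobia and protophobia are one and the same condition.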

The fact that the “proton-in-the-loop” picture gives the correct charges, however, is no coincidence. Indeed, this was precisely how Jack Steinberger calculated the correct pion decay rate. The key here is whether one treats the quarks/nucleons linearly or non-linearly in chiral perturbation theory. The relation to the Wess-Zumino-Witten term—which is what really encodes the low-energy interaction—is carefully explained in chapter 6a.2 of Georgi’s revised Weak Interactions.

Theory part 2: Not a spin-0 particle

The above considerations focus on a new particle with the same spin and parity as a photon (spin-1, parity odd). Another result of the UCI study was a systematic exploration of other possibilities. They found that the beryllium anomaly could not be consistent with spin-0 particles. For a parity-even, spin-0 particle (a scalar, like a “dark Higgs”), one cannot simultaneously conserve angular momentum and parity in the decay of the excited beryllium-8 state. (Parity-violating effects are negligible at these energies.)


Parity and angular momentum conservation prohibit a “dark Higgs” (parity even scalar) from mediating the anomaly.
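A quick way to see this (a sketch using the quantum numbers quoted in the nuclear-input section below: the relevant excited states are 1⁺ and the ground state is 0⁺): angular momentum conservation forces a spin-0 X to be emitted in an L = 1 partial wave, and parity conservation then requires

$$ {}^{8}\mathrm{Be}^{*}(1^{+}) \;\to\; {}^{8}\mathrm{Be}(0^{+}) + X : \qquad (+1) \;\overset{!}{=}\; (+1)\cdot P_X \cdot (-1)^{L=1} \;\;\Longrightarrow\;\; P_X = -1 , $$

so a parity-even scalar is forbidden outright, while a pseudoscalar passes this test only to run into the axion-like-particle bounds described next.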

For a parity-odd pseudoscalar, the bounds on axion-like particles at 20 MeV suffocate any reasonable coupling. Measured in terms of the pseudoscalar–photon–photon coupling (which has dimensions of inverse GeV), this interaction is ruled out down to the inverse Planck scale.


Bounds on axion-like particles exclude a 20 MeV pseudoscalar with couplings to photons stronger than the inverse Planck scale. Adapted from 1205.2671 and 1512.03069.

Additional possibilities include:

  • Dark Z bosons, cousins of the dark photon with spin-1 but indeterminate parity. This is very constrained by atomic parity violation.
  • Axial vectors, spin-1 bosons with positive parity. These remain a theoretical possibility, though their unknown nuclear matrix elements make it difficult to write a predictive model. (See section II.D of 1608.03591.)

Theory part 3: Nuclear input

The plot thickens when one also includes results from nuclear theory. Recent results from Saori Pastore, Bob Wiringa, and collaborators point out a very important fact: the 18.15 MeV beryllium-8 state that exhibits the anomaly and the 17.64 MeV state which does not are actually closely related.

Recall (e.g. from the first figure at the top) that the 18.15 MeV and 17.64 MeV states are both spin-1 and parity-even. They differ in mass and in one other key aspect: the 17.64 MeV state carries isospin charge, while the 18.15 MeV state and the ground state do not.

Isospin is the nuclear symmetry that relates protons to neutrons and is tied to electroweak symmetry in the full Standard Model. At nuclear energies, isospin charge is approximately conserved. This brings us to the following puzzle:

If the new particle has mass around 17 MeV, why do we see its effects in the 18.15 MeV state but not the 17.64 MeV state?

Naively, if the emitted new particle, X, carries no isospin charge, then isospin conservation prohibits the decay of the 17.64 MeV state through emission of an X boson. However, the Pastore et al. result tells us that actually, the isospin-neutral and isospin-charged states mix quantum mechanically, so that the observed 18.15 and 17.64 MeV states are mixtures of iso-neutral and iso-charged states. In fact, this mixing is rather large, with a mixing angle of around 10 degrees!

The result of this is that one cannot invoke isospin conservation to explain the non-observation of an anomaly in the 17.64 MeV state. In fact, the only way to avoid this is to assume that the mass of the X particle is on the heavier side of the experimentally allowed range. The rate for emission goes like the 3-momentum cubed (see section II.E of 1608.03591), so a small increase in the mass can suppress the rate of emission from the lighter state by a lot.
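As a rough illustration of how sharp this suppression is (my own back-of-the-envelope numbers, not the paper's: the X momentum is approximated as p ≈ √(ΔE² − m_X²), with ΔE the excitation energy and the beryllium recoil neglected):

def p_cubed(delta_E_MeV, m_X_MeV):
    """Rough phase-space factor p^3 for emitting an X of mass m_X from a
    state sitting delta_E above the ground state (recoil neglected)."""
    if m_X_MeV >= delta_E_MeV:
        return 0.0
    return (delta_E_MeV ** 2 - m_X_MeV ** 2) ** 1.5

for m_X in (16.7, 17.0, 17.3, 17.5):
    ratio = p_cubed(17.64, m_X) / p_cubed(18.15, m_X)
    print(f"m_X = {m_X} MeV: 17.64 MeV emission suppressed to ~{ratio:.2f} of the 18.15 MeV rate")

Nudging m_X from 16.7 MeV toward 17.5 MeV drops that ratio from about one half to below one tenth in this crude estimate, which is why a slightly heavier X makes the 17.64 MeV non-observation easier to live with.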

The UCI collaboration of theorists went further and extended the Pastore et al. analysis to include a phenomenological parameterization of explicit isospin violation. Independent of the Atomki anomaly, they found that including isospin violation improved the fit for the 18.15 MeV and 17.64 MeV electromagnetic decay widths within the Pastore et al. formalism. The results of including all of the isospin effects end up changing the particle physics story of the Atomki anomaly significantly:


The rate of X emission (colored contours) as a function of the X particle’s couplings to protons (horizontal axis) versus neutrons (vertical axis). The best fit for a 16.7 MeV new particle is the dashed line in the teal region. The vertical band is the region allowed by the NA48/2 experiment. Solid lines show the dark photon and protophobic limits. Left: the case for perfect (unrealistic) isospin. Right: the case when isospin mixing and explicit violation are included. Observe that incorporating realistic isospin happens to have only a modest effect in the protophobic region. Figure from 1608.03591.

The results of the nuclear analysis are thus that:

  1. An interpretation of the Atomki anomaly in terms of a new particle tends to push for a slightly heavier X mass than the reported best fit. (Remark: the Atomki paper does not do a combined fit for the mass and coupling nor does it report the difficult-to-quantify systematic errors  associated with the fit. This information is important for understanding the extent to which the X mass can be pushed to be heavier.)
  2. The effects of isospin mixing and violation are important to include; especially as one drifts away from the purely protophobic limit.

Theory part 4: towards a complete theory

The theoretical structure presented above gives a framework to do phenomenology: fitting the observed anomaly to a particle physics model and then comparing that model to other experiments. This, however, doesn’t guarantee that a nice—or even self-consistent—theory exists that can stretch over the scaffolding.

Indeed, a few challenges appear:

  • The isospin mixing discussed above means the X mass must be pushed to the heavier values allowed by the Atomki observation.
  • The “protophobic” limit is not obviously anomaly-free: simply asserting that known particles have arbitrary charges does not generically produce a mathematically self-consistent theory.
  • Atomic parity violation constraints require that the X couple in the same way to left-handed and right-handed matter. The left-handed coupling implies that X must also talk to neutrinos: these open up new experimental constraints.

The Irvine/Kentucky/Riverside collaboration first note the need for a careful experimental analysis of the actual mass ranges allowed by the Atomki observation, treating the new particle mass and coupling as simultaneously free parameters in the fit.

Next, they observe that protophobic couplings can be relatively natural. Indeed: the Standard Model Z boson is approximately protophobic at low energies—a fact well known to those hunting for dark matter with direct detection experiments. For exotic new physics, one can engineer protophobia through a phenomenon called kinetic mixing where two force particles mix into one another. A tuned admixture of electric charge and baryon number, (Q-B), is protophobic.

Baryon number, however, is an anomalous global symmetry—this means that one has to work hard to make a baryon-boson that mixes with the photon (see 1304.0576 and 1409.8165 for examples). Another alternative is for the photon to kinetically mix not with baryon number but with the anomaly-free combination “baryon minus lepton number,” so that the X charges end up close to Q-(B-L). This then forces one to apply additional model-building modules to deal with the neutrino interactions that come along with this scenario.
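A quick bit of bookkeeping shows why these particular combinations keep appearing (a toy check of my own, not the full model of the paper; quark-level subtleties are ignored):

# Toy check: electric charge Q, baryon number B, lepton number L
# for the particles that matter here.
particles = {
    #             Q     B    L
    "proton":   (+1.0, 1.0, 0.0),
    "neutron":  ( 0.0, 1.0, 0.0),
    "electron": (-1.0, 0.0, 1.0),
    "neutrino": ( 0.0, 0.0, 1.0),
}

print(f"{'particle':10s} {'Q-B':>6s} {'Q-(B-L)':>9s}")
for name, (Q, B, L) in particles.items():
    print(f"{name:10s} {Q - B:6.1f} {Q - (B - L):9.1f}")

Both columns give the proton zero X charge (protophobia) while leaving the neutron charged; the Q-(B-L) column additionally has the electron decoupling at leading order and the neutrino picking up a charge, which is exactly the extra neutrino-related model building mentioned above.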

In the language of the ‘model building blocks’ above, the result of this process looks schematically like this:


A complete theory is mathematically self-consistent and satisfies existing constraints. The additional bells and whistles required for consistency make additional predictions for experimental searches. Pieces of the theory can sometimes be used to address other anomalies.

The theory collaboration presented examples of the two cases and pointed out how the additional ‘bells and whistles’ required may tie to additional experimental handles to test these hypotheses. These are simple existence proofs for how complete models may be constructed.

What’s next?

We have delved rather deeply into the theoretical considerations of the Atomki anomaly. The analysis revealed some unexpected features of the types of new particles that could explain the anomaly (dark photon-like, but not exactly a dark photon), the role of nuclear effects (isospin mixing and breaking), and the kinds of features a complete theory needs to have to fit everything (be careful with anomalies and neutrinos). The single most important next step, however, is and has always been experimental verification of the result.

While the Atomki experiment continues to run with an upgraded detector, what’s really exciting is that a swath of experiments that are either ongoing or under construction will be able to probe the exact interactions required by the new particle interpretation of the anomaly. This means that the result can be independently verified or excluded within a few years. A selection of upcoming experiments is highlighted in section IX of 1608.03591:


Other experiments that can probe the new particle interpretation of the Atomki anomaly. The horizontal axis is the new particle mass, the vertical axis is its coupling to electrons (normalized to the electric charge). The dark blue band is the target region for the Atomki anomaly. Figure from 1608.03591; assuming 100% branching ratio to electrons.

We highlight one particularly interesting search: recently a joint team of theorists and experimentalists at MIT proposed a way for the LHCb experiment to search for dark photon-like particles with masses and interaction strengths that were previously unexplored. The proposal makes use of LHCb’s ability to pinpoint the production position of charged particle pairs and the copious amounts of D mesons produced at Run 3 of the LHC. As seen in the figure above, the LHCb reach with this search thoroughly covers the Atomki anomaly region.

Implications

So where we stand is this:

  • There is an unexpected result in a nuclear experiment that may be interpreted as a sign for new physics.
  • The next steps in this story are independent experimental cross-checks; the threshold for a ‘discovery’ is whether another experiment can verify these results.
  • Meanwhile, a theoretical framework for understanding the results in terms of a new particle has been built and is ready-and-waiting. Some of the results of this analysis are important for faithful interpretation of the experimental results.

What if it’s nothing?

This is the conservative take—and indeed, we may well find that in a few years, the possibility that Atomki was observing a new particle will be completely dead. Or perhaps a source of systematic error will be identified and the bump will go away. That’s part of doing science.

Meanwhile, there are some important takeaways in this scenario. First is the reminder that the search for light, weakly coupled particles is an important frontier in particle physics. Second, for this particular anomaly, there are some neat takeaways such as a demonstration of how effective field theory can be applied to nuclear physics (see e.g. chapter 3.1.2 of the new book by Petrov and Blechman) and how tweaking our models of new particles can avoid troublesome experimental bounds. Finally, it’s a nice example of how particle physics and nuclear physics are not-too-distant cousins and how progress can be made in particle–nuclear collaborations—one of the Irvine group authors (Susan Gardner) is a bona fide nuclear theorist who was on sabbatical from the University of Kentucky.

What if it’s real?

This is a big “what if.” On the other hand, a 6.8σ effect is exceedingly unlikely to be a mere statistical fluctuation, and there is no known nuclear physics that produces a new-particle-like bump given the analysis presented by the Atomki experimentalists.

The threshold for “real” is independent verification. If other experiments can confirm the anomaly, then this could be a huge step in our quest to go beyond the Standard Model. While this type of particle is unlikely to help with the Hierarchy problem of the Higgs mass, it could be a sign for other kinds of new physics. One example is the grand unification of the electroweak and strong forces; some of the ways in which these forces unify imply the existence of an additional force particle that may be light and may even have the types of couplings suggested by the anomaly.

Could it be related to other anomalies?

The Atomki anomaly isn’t the first particle physics curiosity to show up at the MeV scale. While none of these other anomalies are necessarily related to the type of particle required for the Atomki result (they may not even be compatible!), it is helpful to remember that the MeV scale may still have surprises in store for us.

  • The KTeV anomaly: The rate at which neutral pions decay into electron–positron pairs appears to be off from the expectations based on chiral perturbation theory. In 0712.0007, a group of theorists found that this discrepancy could be fit to a new particle with axial couplings. If one fixes the mass of the proposed particle to be 20 MeV, the resulting couplings happen to be in the same ballpark as those required for the Atomki anomaly. The important caveat here is that parameters for an axial vector to fit the Atomki anomaly are unknown, and mixed vector–axial states are severely constrained by atomic parity violation.

The KTeV anomaly interpreted as a new particle, U. From 0712.0007.

  • The anomalous magnetic moment of the muon and the cosmic lithium problem: much of the progress in the field of light, weakly coupled forces comes from Maxim Pospelov. The anomalous magnetic moment of the muon, (g-2)μ, has a long-standing discrepancy from the Standard Model (see e.g. this blog post). While this may come from an error in the very, very intricate calculation and the subtle ways in which experimental data feed into it, Pospelov (and also Fayet) noted that the shift may come from a light (in the 10s of MeV range!), weakly coupled new particle like a dark photon. Similarly, Pospelov and collaborators showed that a new light particle in the 1-20 MeV range may help explain another longstanding mystery: the surprising lack of lithium in the universe (APS Physics synopsis).

Could it be related to dark matter?

A lot of recent progress in dark matter has revolved around the possibility that in addition to dark matter, there may be additional light particles that mediate interactions between dark matter and the Standard Model. If these particles are light enough, they can change the way that we expect to find dark matter in sometimes surprising ways. One interesting avenue is called self-interacting dark matter and is based on the observation that these light force carriers can deform the dark matter distribution in galaxies in ways that seem to fit astronomical observations. A 20 MeV dark photon-like particle even fits the profile of what’s required by the self-interacting dark matter paradigm, though it is very difficult to make such a particle consistent with both the Atomki anomaly and the constraints from direct detection.

Should I be excited?

Given all of the caveats listed above, some feel that it is too early to be in “drop everything, this is new physics” mode. Others may take this as a hint that’s worth exploring further—as has been done for many anomalies in the recent past. For researchers, it is prudent to be cautious, and it is paramount to be careful; but so long as one does both, then being excited about a new possibility is part of what makes our job fun.

For the general public, the tentative hopes of new physics that pop up—whether it’s the Atomki anomaly, the 750 GeV diphoton bump, a GeV bump from the galactic center, γ-ray lines at 3.5 keV and 130 GeV, or penguins at LHCb—these are the signs that we’re making use of all of the data available to search for new physics. Sometimes these hopes fizzle away, often they leave behind useful lessons about physics and directions forward. Maybe one of these days an anomaly will stick and show us the way forward.

Further Reading

Here is some of the popular-level press coverage of the Atomki result. See the references at the top of this ParticleBite for the primary literature.

UC Riverside Press Release
UC Irvine Press Release
Nature News
Quanta Magazine
Quanta Magazine: Abstractions
Symmetry Magazine
Los Angeles Times

by Flip Tanedo at August 26, 2016 01:52 AM

The n-Category Cafe

Monoidal Categories with Projections

Monoidal categories are often introduced as an abstraction of categories with products. Instead of having the categorical product $\times$, we have some other product $\otimes$, and it’s required to behave in a somewhat product-like way.

But you could try to abstract more of the structure of a category with products than monoidal categories do. After all, when a category has products, it also comes with special maps $X \times Y \to X$ and $X \times Y \to Y$ for every $X$ and $Y$ (the projections). Abstracting this leads to the notion of “monoidal category with projections”.

I’m writing this because over at this thread on magnitude homology, we’re making heavy use of semicartesian monoidal categories. These are simply monoidal categories whose unit object is terminal. But the word “semicartesian” is repellently technical, and you’d be forgiven for believing that any mathematics using “semicartesian” anythings is bound to be going about things the wrong way. Name aside, you might simply think it’s rather ad hoc; the nLab article says it initially sounds like centipede mathematics.

I don’t know whether semicartesian monoidal categories are truly necessary to the development of magnitude homology. But I do know that they’re a more reasonable and less ad hoc concept than they might seem, because:

Theorem   A semicartesian monoidal category is the same thing as a monoidal category with projections.

So if you believe that “monoidal category with projections” is a reasonable or natural concept, you’re forced to believe the same about semicartesian monoidal categories.

I’m going to keep this post light and sketchy. A monoidal category with projections is a monoidal category $V = (V, \otimes, I)$ together with a distinguished pair of maps

$$\pi^1_{X, Y} \colon X \otimes Y \to X, \qquad \pi^2_{X, Y} \colon X \otimes Y \to Y$$

for each pair of objects $X$ and $Y$. We might call these “projections”. The projections are required to satisfy whatever equations they satisfy when $\otimes$ is categorical product $\times$ and the unit object $I$ is terminal. For instance, if you have three objects $X$, $Y$ and $Z$, then I can think of two ways to build a “projection” map $X \otimes Y \otimes Z \to X$:

  • think of $X \otimes Y \otimes Z$ as $X \otimes (Y \otimes Z)$ and take $\pi^1_{X, Y \otimes Z}$; or

  • think of $X \otimes Y \otimes Z$ as $(X \otimes Y) \otimes Z$, use $\pi^1_{X \otimes Y, Z}$ to project down to $X \otimes Y$, then use $\pi^1_{X, Y}$ to project from there to $X$.

One of the axioms for a monoidal category with projections is that these two maps are equal. You can guess the others.
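For concreteness, one way to write that axiom (a sketch on my part, making explicit the associator $\alpha_{X,Y,Z} \colon (X \otimes Y) \otimes Z \to X \otimes (Y \otimes Z)$ which the bullet points above gloss over) is

$$\pi^1_{X,\, Y \otimes Z} \circ \alpha_{X,Y,Z} \;=\; \pi^1_{X,Y} \circ \pi^1_{X \otimes Y,\, Z} \;\colon\; (X \otimes Y) \otimes Z \to X,$$

together with its mirror image for $\pi^2$.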

A monoidal category is said to be cartesian if its monoidal structure is given by the categorical (“cartesian”) product. So, any cartesian monoidal category becomes a monoidal category with projections in an obvious way: take the projections $\pi^i_{X, Y}$ to be the usual product-projections.

That’s the motivating example of a monoidal category with projections, but there are others. For instance, take the ordered set $(\mathbb{N}, \geq)$, and view it as a category in the usual way but with a reversal of direction: there’s one object for each natural number $n$, and there’s a map $n \to m$ iff $n \geq m$. It’s monoidal under addition, with $0$ as the unit. Since $m + n \geq m$ and $m + n \geq n$ for all $m$ and $n$, we have maps $m + n \to m$ and $m + n \to n$.

In this way, $(\mathbb{N}, \geq)$ is a monoidal category with projections. But it’s not cartesian, since the categorical product of $m$ and $n$ in $(\mathbb{N}, \geq)$ is $\max\{m, n\}$, not $m + n$.

Now, a monoidal category $(V, \otimes, I)$ is semicartesian if the unit object $I$ is terminal. Again, any cartesian monoidal category gives an example, but this isn’t the only kind of example. And again, the ordered set $(\mathbb{N}, \geq)$ demonstrates this: with the monoidal structure just described, $0$ is the unit object, and it’s terminal.

The point of this post is:

Theorem   A semicartesian monoidal category is the same thing as a monoidal category with projections.

I’ll state it no more precisely than that. I don’t know who this result is due to; the nLab page on semicartesian monoidal categories suggests it might be Eilenberg and Kelly, but I learned it from a Part III problem sheet of Peter Johnstone.

The proof goes roughly like this.

Start with a semicartesian monoidal category $V$. To build a monoidal category with projections, we have to define, for each $X$ and $Y$, a projection map $X \otimes Y \to X$ (and similarly for $Y$). Now, since $I$ is terminal, we have a unique map $Y \to I$. Tensoring with $X$ gives a map $X \otimes Y \to X \otimes I$. But $X \otimes I \cong X$, so we’re done. That is, $\pi^1_{X, Y}$ is the composite

$$X \otimes Y \stackrel{X \otimes !}{\longrightarrow} X \otimes I \cong X.$$

After a few checks, we see that this makes $V$ into a monoidal category with projections.

In the other direction, start with a monoidal category $V$ with projections. We need to show that $V$ is semicartesian. In other words, we have to prove that for each object $X$, there is exactly one map $X \to I$. There’s at least one, because we have

$$X \cong X \otimes I \stackrel{\pi^2_{X, I}}{\longrightarrow} I.$$

I’ll skip the proof that there’s at most one, but it uses the axiom that the projections are natural transformations. (I didn’t mention that axiom, but of course it’s there.)

So we now have a way of turning a semicartesian monoidal category into a monoidal category with projections and vice versa. To finish the proof of the theorem, we have to show that these two processes are mutually inverse. That’s straightforward.

Here’s something funny about all this. A monoidal category with projections appears to be a monoidal category with extra structure, whereas a semicartesian monoidal category is a monoidal category with a certain property. The theorem tells us that in fact, there’s at most one possible way to equip a monoidal category with projections (and there is a way if and only if $I$ is terminal). So having projections turns out to be a property, not structure.

And that is my defence of semicartesian monoidal categories.

by leinster (Tom.Leinster@ed.ac.uk) at August 26, 2016 01:41 AM

August 25, 2016

Tommaso Dorigo - Scientificblogging

A Great Blitz Game
As an old-time chess player who's stopped competing in tournaments, I often entertain myself with the odd blitz game on some internet chess server. And more often than not, I play rather crappy chess. So nothing to report there... However fluctuations do occur.
I just played a combinative-style game which I wish to share, although I have not yet had the time (and I think I won't have time in the near future) to check the moves with a computer program. So my moves might well be flawed. Regardless, I enjoyed playing the game, so that's enough motivation to report it here.

read more

by Tommaso Dorigo at August 25, 2016 02:11 PM

astrobites - astro-ph reader's digest

Through the Lenses of Black Holes

Title: A Search for Stellar-Mass Black Holes via Astrometric Microlensing
Authors: J. R. Lu, E. Sinukoff, E. O. Ofek, A. Udalski, S. Kozlowski
First Author’s Institution: Institute for Astronomy, University of Hawai’i
Status: Accepted for publication in ApJ

When high-mass (>= 8 Msun) stars end their lives in blinding explosions known as core-collapse supernovae, they can rip through the fabric of space-time and create black holes with similar masses, known as stellar-mass black holes. These vermin black holes are dwarfed by their big brothers, supermassive black holes that typically have masses of 10⁶-10⁹ Msun. However, as vermin usually do, they massively outnumber supermassive black holes. It is estimated that 10⁸-10⁹ stellar mass black holes are crawling around our own Milky Way, but we’ve only caught sight of a few dozen of them.

As black holes don’t emit light, we can only infer their presence from their effects on nearby objects. All stellar mass black holes detected so far reside in binary systems, where they actively accrete from their companions. Radiation is emitted as matter from the companion falls onto the accretion disk of the black hole. Isolated black holes don’t have any companions, so they can only accrete from the surrounding diffuse interstellar medium, producing a very weak signal. That is why isolated black holes, which make up the bulk of the black hole population, have long escaped our discovery. Perhaps, until now.

The authors of today’s paper turned the intense gravity of black holes against themselves. While isolated black holes do not produce detectable emission, their gravity can bend and focus light from background objects. This bending and focusing of light through gravity is known as gravitational lensing. Astronomers categorize gravitational lensing based on the nature of the lens and the degree of lensing: strong lensing (lensing by a galaxy or a galaxy cluster producing giant arcs or multiple images), weak lensing (lensing by a galaxy or a galaxy cluster where signals are weaker and detected statistically), and microlensing (lensing by a star or planet). During microlensing, as the lens approaches the star, the star will brighten momentarily as more and more light is being focused, up until maximum magnification at closest approach, after which the star gradually fades as the lens leaves. This effect is known as photometric microlensing (see this astrobite). Check out this microlensing simulation, courtesy of Professor Scott Gaudi at The Ohio State University: the star (orange) is located at the origin, the lens (open red circle) is moving to the right, the gray regions trace out the lensed images (blue) as the lens passes by the star, while the green circle is the Einstein radius. The Einstein radius is the radius of the annular image when the observer, the lens, and the star are perfectly aligned.

Something more subtle can also happen during microlensing, and that is the shifting of the center of light (on a telescope’s detector) relative to the true position of the source — astrometric microlensing. While photometric microlensing has been widely used to search for exoplanets and MACHOs (massive astrophysical compact halo objects), for instance by OGLE (Optical Gravitational Lensing Experiment), astrometric microlensing has not been put to good use as it requires extremely precise measurements. Typical astrometric shifts caused by stellar mass black holes are sub-milliarcsecond (sub-mas), whereas the best astrometric precision we can achieve from the ground is typically ~1 mas or more. Figure 1 shows the signal evolution of photometric and astrometric microlensing and the astrometric shifts caused by different masses.

 

Fig. 1 – Left panel shows an example of photometric magnification (dashed line) and astrometric shift (solid line) as a function of time since the closest approach between the lens and the star. Note that the peak of the astrometric shift occurs after the peak of the photometric magnification. Right panel shows the astrometric shift as a function of the projected separation between the lens and the star, in units of the Einstein radius, for different lens masses. [Figure 1 in paper]
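For a sense of the scales involved (rough numbers of my own, not from the paper): the angular Einstein radius of a point lens is θ_E = √(4GM/c² × (D_S − D_L)/(D_L D_S)), and for a dark, unblended lens the centroid shift peaks at θ_E/√8, reached at a lens–star separation of √2 θ_E.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m / s
M_SUN = 1.989e30     # kg
PC = 3.086e16        # m
MAS = math.radians(1.0 / 3.6e6)   # one milliarcsecond in radians

def theta_E_mas(M_msun, D_L_kpc, D_S_kpc):
    """Angular Einstein radius (in mas) of a point lens of mass M_msun at
    distance D_L_kpc, lensing a source at distance D_S_kpc."""
    M = M_msun * M_SUN
    D_L, D_S = D_L_kpc * 1e3 * PC, D_S_kpc * 1e3 * PC
    theta = math.sqrt(4 * G * M / C**2 * (D_S - D_L) / (D_L * D_S))
    return theta / MAS

# A ~10 Msun black hole at 6 kpc lensing a bulge star at ~8 kpc
# (a made-up but plausible geometry):
tE = theta_E_mas(10, 6, 8)
print(f"theta_E ~ {tE:.1f} mas, peak astrometric shift ~ {tE / math.sqrt(8):.2f} mas")

For this made-up geometry the peak shift comes out to a bit over half a milliarcsecond, which is why sub-mas, adaptive-optics-grade astrometry is needed to see it at all.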

In this paper, the authors used adaptive optics on the Keck telescope to detect astrometric microlensing signals from stellar mass black holes. Over a period of 1-2 years, they monitored three microlensing events detected by the OGLE survey. As the astrometric shift reaches its maximum after the peak of the photometric microlensing (see Figure 1), astrometric follow-up was started post-peak for each event. The authors fit an astrometric lensing model to their data, not all of which were taken under good observing conditions. Figure 2 shows the results of the fit: all three targets are consistent with linear motion within the uncertainties of their measurements, i.e. no astrometric microlensing. Nonetheless, as photometric microlensing is still present, the authors used their astrometric model combined with a photometric lensing model to back out various lensing parameters, the most important being the lens masses. They found one lens to have a mass comparable to that of a stellar-mass black hole, although verification would require future observations.

 

Fig. 2 – Results of fitting an astrometric model (blue dashed lines) to the proper motions of the three microlensing targets, where xE and xN are the observed positions in the East and North directions in milliarcseconds. The results do not show any signs of astrometric microlensing. [Figure 11 in paper]

Despite not detecting astrometric microlensing signals, the authors demonstrated that they achieved the precision needed in a few epochs; had the weather goddess been on their side during some critical observing periods, some signals could have been seen. This study is also the first to combine both photometric and astrometric measurements to constrain lensing event parameters, ~20 years after this technique was first conceived. For now, we’ll give stellar-mass black holes a break, but it won’t be long until we catch up.

 

 

by Suk Sien Tie at August 25, 2016 01:00 PM

Lubos Motl - string vacua and pheno

Experimental physicists shouldn't be trained as amateur theorists
Theoretical work is up to theorists who must do it with some standards

Tommaso Dorigo of CMS is an experimental particle physicist and in his two recent blog posts, he shares two problems that students at their institutions are tortured with. The daily problema numero uno wants them to calculate a probability from an inelastic cross section which is boring but more or less comprehensible and well-defined.



Dorigo just returned from a trip to Malta.

The problema numero due is less clear. I won't change anything about the spirit if I simplify the problem like this:
A collider produces two high-energy electrons, above \(50\GeV\). Think in all possible ways and tell us all possible explanations related to accelerators, detectors, as well as physical theories what's going on.
Cool. I have highly mixed feelings about such a vague, overarching task. On one hand, I do think that a very good experimenter such as Enrico Fermi is capable of producing answers of this broad type – and very good answers. And the problem isn't "too dramatically" different from the rather practical, "know everything" problems I was solving in the PhD qualifying exams at Rutgers – and I am too modest to hide that I got great results in the oral modern exam, good (A) results in the oral classical exam, and the best historical score in the written exam. ;-)

On the other hand, I don't think that there are too many Enrico Fermis in Italy these days – and even outside Italy – and the idea that a big part of the Italian HEP students are Enrico Fermis looks even more implausible to me. The problem described by Dorigo is simply more vague and speculative than the problems that look appropriate.




Unsurprisingly, this "problema numero due" was sufficiently vague and unfocused that it attracted no comments. What should one talk about if he's told to discuss "everything he knows or thinks about particle physics, with a special focus on high-energy electrons"? Well, there have been no comments except for mine. Let me repost my conversation with Dorigo.

LM: Sorry, Tommaso, but the kind of "problems"
...Consider all thinkable physics- and detector-related sources of the detected signature and discuss their chance to be the true source of the observation. Make all the credible assumptions you need about the system you deem appropriate.
are very poor from a pedagogic viewpoint. It's like "be a renaissance man who knows everything, just like the great Tommaso Dorigo". It's not possible to consider "all thinkable sources" in any kind of rigor – some of them are understood well, others are speculative or due to personal prejudices. So it would be impossible to grade such a thing. And at the end, the implicit assumption that Tommaso Dorigo himself is a renaissance man is sort of silly, especially because you basically never include "new physics" among the thinkable sources of observations and even if you did, you just have no idea which new physics is more likely and why.




TD: Hi Lumo, sorry but you should think a bit more about the task that the selection committee has in front of them. They have to select experimental physicists, not theorists. An experimental physicist should be able to produce an approximate answer, by considering the things worth mentioning, and omit the inessential ones.

Prejudices are good – they betray the background of the person that is being examined. This question is not too different in style to the 42 questions that were asked at the last big INFN exam, 11 years ago.



LM: Dear Tommaso, your order to "consider all thinkable things" is even more wrong and *especially* wrong if the student is supposed to be an experimenter because an experimenter simply shouldn't be solving the task of "thinking in all possible ways". He is not qualified (and usually talented) for such things – and that's also why it's not the kind of an activity that he will be doing in his job.

An experimenter is doing a complex but largely mechanical job "within a box" – at most, as a clever engineer, he is inventing ways how to test things assuming that one is within a box. He isn't supposed to understand why the box is what it is (derivation or justification of current or new theories) and his work doesn't depend on having opinions whether the box is a good one or a bad one and whether it should be replaced by another one and how likely it is.

High-energy electrons at a collider are *obviously* a chance that new physics has been discovered. That's why the colliders are being built. They *want* to and *should* find all the cracks in the existing theories of physics that are accessible within a given budget and a kind of a machine. What the possible theories explaining new phenomena with high-energy electrons are isn't something that an experimenter should answer, it's a difficult eclectic question for a theorist. And theorists wouldn't agree "what kind of a theory" should explain new high-energy electron events. They have their priorities and ways to read the available evidence. But they're not hired or rated or graded for their opinions. Theorists are hired and rated for the ability to mentally create, derive, calculate, and construct arguments. You want to change that – you want an experimenter to be an amateur theorist who is graded for his opinions, not for actual theoretical work that he isn't producing.

A task like that could only be about the knowledge of the "box" – about all the reasons why the Standard Model could produce such a thing, including all the phenomena that appear in the dirty real world of the experiment. But [because the work on many distinct parts of an experiment is usually left to the specialists,] it's questionable whether it's helpful to formulate tasks that ask the student to consider *all* these dirty things at the same moment. It's almost like a question "tell us everything you know about experimental particle physics". There are lots of things and what are the priorities is unavoidably largely subjective and dependent on the context. It's bad to grade such things because the examiner is still imposing his personal opinions onto the students.

You know, what I am generally afraid of in this pedagogic style of yours is that you want to upgrade yourself to a kind of a duce – and grade students for the degree to which they agree with you. But that's exactly what science is *not* – and you wouldn't get an A if you were graded by real experts in these parts of thinking yourself. Science works independently of duces, including duces like you. It is building on the objective evidence and by telling the students to show their thinking about almost everything and the balance between all these ideas, you are clearly shifting science away from the cold and unquestionable realm of objective evidence to the realm of personal opinions – and you (and every experimental physicist) is simply too small a man for such an ambitious position.



LM: added for TRF: Note that I was respectful towards Dorigo's nation – for example, I have used the original term "duce" instead of the derivative term "Führer". ;-)

But back to the issue. All of our disagreement is basically about "one idea" in some way. But we express it in various ways. For example, I haven't commented on Dorigo's comment that INFN has been using similar problems in many other exams (well, that's even worse if there's something fundamentally wrong with the problems) and especially his quote:
Prejudices are good – they betray the background of the person that is being examined.
Prejudices – that indeed betray the background – aren't "good" in science. Prejudices are unavoidable and the approach of people (theorists and experimenters) unavoidably depends on their background and prejudices to one extent or another. And some backgrounds may perhaps be more helpful in achieving a certain advance in science which is why the diversity of backgrounds could sometimes help – although I am generally skeptical about similar claims. But a key insight of Dorigo's countrymate Galileo Galilei that Tommaso Dorigo clearly misunderstands is that the whole scientific method is a clever and careful way to make the conclusions as independent of the prejudices as possible. And that's why rational people think of science (and Galileo) so highly! The processes included in the scientific method simply "dilute" the effect of the prejudices to homeopathically low levels at the end – the final conclusions are almost all about the cold hard evidence that is independent of the prejudices.

So prejudices may always exist to some extent (and I often like the fun of the interactions of the backgrounds, as my repeated references to Dorigo's Italian nationality in this very blog post also show) but science, if it is done well, is treating them as annoying stinky pests. Feminists and reverse racists may celebrate the influence of backgrounds and their diversity for their own sake (and so do all other racists, nationalists, staunch religious believers, and all other herd animals) but a person who thinks as a scientist never does. Science maximally minimizes the concentration and influence of these arbitrary cultural and psychological pests – and that's why science is more successful than all the previous philosophies, religions, nationalisms and group thinks combined. Dorigo worships the opposite situation and he worships the dependence on the background, so he is simply not approaching these fundamental issues as a scientist.

Backgrounds and prejudices may be fun but if a talk or an argumentation depends on them too much, this fun is not good science. I am sure that everyone whom I have ever met and who was considered a good scientist – at all institutions in the U.S. and my homeland – agrees with that basic assertion. Too bad Dorigo doesn't.

As I have explained, the question "what kind of new physics produces high-energy electrons" is a typical question that should be answered by a high-energy physics phenomenologist, not an experimenter. Similarly, the detector part of Dorigo's question should be answered by experimenters – but it's OK when they become specialized (someone understands calorimeters, others are great at the pile-up effect). And the phenomenologists will disagree about the most likely type of new physics that may lead to some observation! The answer hasn't been scientifically established and people have different guesses which concepts are natural or likely to change physics in the future etc. Their being good or the right men to be hired isn't determined by their agreement with each other or with some universal authorities. Phenomenologists and theorists are people who are good at calculating, organizing derivations, arguments, inventing new schemes and ideas etc. It doesn't matter whether they agree with some previous authorities.

Obviously, the Italian student facing Dorigo's question can't write a sensible scientific paper that would consider all possible schemes of new physics that produce high-energy electrons at the level of standards and rigor that is expected in particle physics – he would have to cover almost all ideas in high-energy physics phenomenology. This student is basically ordered to offer just the final answers – his opinions and his prejudices, without the corresponding argumentation or calculation. And that's what the student must be unavoidably graded for.

That's just bad, bad, bad. In practice, the student will be graded for the similarity of his prejudices and Dorigo's prejudices. And that's simply not how the grading works in the scientific method or any meritocracy. If a question hasn't been settled by convincing scientific evidence, the student shouldn't be graded for his opinions on that question. If he is, then this process can't systematically select good students, especially because Tommaso Dorigo – the ultimate benchmark in this pedagogic scheme – would get a grade rather distant from an A from some true experts if he were asked about any questions that depend on theory, especially theory and phenomenology beyond the Standard Model. For the Italian institutions, to produce clones (and often even worse, imperfect clones) of such a Dorigo is just counterproductive.

Dorigo's "pedagogic style" is ultimately rather similar to the postmodern education systems that don't teach hard facts, methods, algorithms, and well-defined and justifiable principles but that indoctrinate children and students and make them parrot some ideological and political dogmas (usually kitschy politically correct lies). Instead of learning how to integrate functions, children spend more time by writing "creative" (identical to each other) essays how the European Union is wonderful and how they dream about visiting Jean-Claude Juncker's rectum on a sunny day (outside) in the future.

That's not a good way to train experts in anything. And Dorigo's method is sadly very similar. Needless to say, my main specific worry is that people like Dorigo are ultimately allowed by the system to impose their absolutely idiotic opinions about the theory – something they know almost nothing about – on the students. It seems plausible that an Italian student who is much brighter than Dorigo – to make things extreme, imagine someone like me – will be removed by the Italian system because that has been hijacked by duces such as Dorigo who have completely misunderstood the lessons by giants such as Galileo. In this way, the corrupt Italian system may be terminating lots of Galileos, Majoranas, and Venezianos, among others, while it is producing lots of small marginal appendices to Dorigo who is ultimately just a little Šmoit himself.

By the way, after the blog post above was published, a new comment was posted by another user nicknamed "Please Respect Anonymity" on Dorigo's blog (the post about the "problema numero due"):
Based on credible assumptions about the system, a possible theory is that some not-very-good physicist expert of leptons, photons and missing energy must win the "concorso". So the commissars choose a undefined question with a subjective answer about leptons, photons and missing energy.

PS: the question would be very good in a different system.
Exactly, amen to that. Perhaps at a school with 500 clones of Enrico Fermi (among commissars and students) who were made up-to-date, the question would be great. But in the realistic system we know, with the names we know, it's pretty likely that the question brings at most noise to the grades etc. Or something worse.

Dorigo's reply is that the question is great and the right answer is that there had to be a Z-boson or two W-bosons with either one or two photons, or a spurious signal in the calorimeter. I think that only a small fraction of the true experts would answer the question exactly in this way. Moreover, the answer completely omits any new physics possibility, as does all of Dorigo's thinking. It's extremely problematic for experimenters not to have an idea what a discovery of new physics would actually look like – or to encourage them to believe that it can't happen – because that's the main achievement that the lucky ones among them will make.

Experimenters simply must be able to look for signals of new proposed theories, at least some of them – whether they "like" the new theories (as a group or individually) or not. Whether they like them or not should be an irrelevant curiosity because they are simply not experts in these matters so this expert work shouldn't be left to them. Experimenters' opinions about particular new theories' "a priori value" should be as irrelevant as the opinion of the U.S. president's toilet janitor about the Israeli-Palestinian conflict. She can have an opinion but unless the system is broken, the opinion won't affect the U.S. policies much. If she believes that there can't be an excrement at a wrong place that needs to be cleaned after the visit by a Palestinian official, and that's why she doesn't clean it, well, she must be fired and replaced by another one (be sure that that may happen). The same is true for a CMS experimenter who is hired to look for new physics but is unable to look for SUSY because of some psychological problems.

by Luboš Motl (noreply@blogger.com) at August 25, 2016 04:31 AM

August 24, 2016

astrobites - astro-ph reader's digest

The Great Wall (of Galaxies, in Sloan)

Title:  Sloan Great Wall as a complex of superclusters with collapsing cores
Authors:  M. Einasto, H. Lietzen, M. Gramann, E. Tempel, E. Saar, L. J. Liivamägi, P. Heinämäki, P. Nurmi, J. Einasto
First Author’s Institution:  Tartu Observatory, Tõravere, Estonia
Status:  Accepted for publication in Astronomy and Astrophysics

The sheer scale of the universe is overwhelming.  Much of the universe is filled with a complex web of matter—what cosmologists like to call the “structure” of the universe.  Most everything we know and love—the wispy cloud in the sky, the bright nebulas that pierce the natal darkness of giant molecular clouds, the faint and distant galaxies—is gravitationally bound to something.  One can continually zoom out to larger and larger scales, and you’ll see that objects that looked lonely on one scale are often surrounded with similar objects: our Sun is but one of the 200 billion stars in the Milky Way Galaxy, which in turn is but one of about 50 galaxies in the Local Group, which in turn is one of 300-500 galaxy groups and clusters in the Laniakea Supercluster.  But this is where the cosmic, fractal structure of hierarchically gravitationally bound objects ends.  The largest structures we see in the universe—superclusters—enter into territory where gravity no longer reigns, and all collections of matter at larger scales are subject to the expansion of the universe.

These vast and massive superclusters are the objects of study of the authors of today’s paper.  Clues to uncover the universe’s remaining secrets—such as the cosmological model and the processes that formed the present-day web—are encoded in the structures of superclusters.  Superclusters are also the birthplaces of clusters, and the resultant structures therein.  Thus the authors seek to study in detail such properties of superclusters.  They turn to the closest collection of superclusters bursting with galaxies discovered in the Sloan Digital Sky Survey (SDSS), the eponym for the Sloan Great Wall (SGW) of galaxies, which contains galaxies spanning a redshift z of 0.04 to 0.12.


Figure 1. Galaxy groups (circles) in the Sloan Great Wall.  The groups have been color coded by the supercluster they were identified to be in.  The size of the circles denotes the spatial extent of the group as we would see them in the sky.  Note the elongated morphologies of the superclusters.  Figure taken from today’s paper.

The authors uncovered a rich hierarchy of structure in the Great Wall.  They find five superclusters with a luminosity density cutoff, all massive—accounting for invisible gas and galaxies too faint to detect, they estimate that these superclusters range in mass from about 10¹⁵ M☉ to a few 10¹⁶ M☉—one thousand to ten thousand times the mass of the Milky Way.  Two of the superclusters are visibly “rich” and contain 2000-3000 galaxies each (superclusters 1 and 2 in Figure 1), while three are “poor,” containing just a few hundred visible galaxies.
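To put those masses in context (a rough comparison of my own, taking the Milky Way to weigh about 10¹² M☉ with its dark matter halo included):

MW_MASS = 1e12                 # assumed Milky Way mass in solar masses (halo included)
for m_sc in (1e15, 1e16):      # supercluster mass range quoted above
    print(f"{m_sc:.0e} Msun is roughly {m_sc / MW_MASS:,.0f} Milky Ways")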

Using a novel method to identify how the galaxies cluster, they find that each supercluster in turn contains highly dense “cores” of galaxies.  The rich superclusters contain several cores, each with tens to hundreds of galaxy groups and a mass ranging from 10¹⁴ M☉ to a few 10¹⁵ M☉.  These cores in turn contain one or more galaxy clusters.  Within these cores are what astronomers would consider the first “true” structures—extremely dense regions which no longer grow with the expansion of the universe but instead collapse into bound objects.  The authors derive density profiles for the cores in the rich superclusters and find that the inner 8 h⁻¹ Mpc (about 2000 times larger than the Milky Way; h⁻¹ denotes the normalized Hubble constant) of each core is or will soon be collapsing.

The superclusters are lush with mysterious order beyond their spatial hierarchy.  One of the rich superclusters (#1 in Figure 1) appears to have a filamentary shape and contains many red, old galaxies.  The other rich supercluster (#2) is more spidery, a conglomeration of chains and clusters of galaxies, all connected, and contains more blue, young galaxies.  These differences in shape and color could indicate that the superclusters have different dynamical histories.  The largest objects in our universe remain to be further explored!

by Stacy Kim at August 24, 2016 07:01 PM

Emily Lakdawalla - The Planetary Society Blog

Proxima Centauri b: Have we just found Earth’s cousin right on our doorstep?
What began as a tantalizing rumor has just become an astonishing fact. Today a group of thirty-one scientists announced the discovery of a terrestrial exoplanet orbiting Proxima Centauri. The discovery of this planet, Proxima Centauri b, is a huge breakthrough not just for astronomers but for all of us. Here’s why.

August 24, 2016 05:01 PM

Tommaso Dorigo - Scientificblogging

Anomaly!: Book News And A Clip
The book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" is going to press as we speak, and its distribution in bookstores is foreseen for the beginning of November. In the meantime, I am getting ready to present it in several laboratories and institutes. I am posting here the coordinates of events which are already scheduled, in case anybody lives nearby and/or has an interest in attending.
- On November 29th at 4PM there will be a presentation at CERN (more details will follow).

read more

by Tommaso Dorigo at August 24, 2016 12:52 PM

Peter Coles - In the Dark

Interlude

Rather later than originally planned, I’ve finally got the nod to be a guest of the National Health Service for a while. I’ll therefore be taking a break from blogging until they’re done with me. Normal services will be resumed as soon as possible, probably, but for the time being there will now follow a short intermission.

 


by telescoper at August 24, 2016 07:10 AM

August 23, 2016

Peter Coles - In the Dark

Glamorgan versus Sussex

Another of life’s little coincidences came my way today in the form of a County Championship match between Glamorgan and Sussex in Cardiff. Naturally, being on holiday, and the SWALEC Stadium being very close to my house, I took the opportunity to see the first day’s play.

image

Sussex used the uncontested toss to put Glamorgan in to bat. It was a warm sunny day with light cloud and no wind. One would have imagined conditions would have been good for batting, but the Sussex skipper may have seen something in the pitch or, perhaps more likely, knew about Glamorgan’s batting frailties…

As it turned out, there didn’t seem to be much pace in the pitch, but there was definitely some swing and movement for the Sussex bowlers from the start. Glamorgan’s batsmen struggled early on, losing a wicket in the very first over, and slumped to 54 for 5 at one stage, recovering only slightly to 87 for 5 at lunch.

After the interval the recovery continued, largely because of Wagg (who eventually fell for an excellent 57) and Morgan who was unbeaten at the close. Glamorgan finished on 252 all out on the stroke of the tea interval. Not a great score, but a lot better than looked likely at 54 for 5.

During the tea interval I wandered onto the field and looked at the pitch, which had quite a bit of green in it:

image

Perhaps that’s why Sussex put Glamorgan in?

Anyway, when Sussex came out to bat it was a different story. Openers Joyce and Nash put on 111 for the first wicket, but Nelson did the trick for Glamorgan and Joyce was out just before stumps bringing in a nightwatchman (Briggs) to face the last couple of overs.

A full day’s cricket of 95 overs in the sunshine yielded 363 runs for the loss of 12 wickets. Not bad at all! It’s just a pity there were only a few hundred people in the crowd!

Sussex are obviously in a strong position but the weather forecast for the later part of this week is not good so they should push on tomorrow and try to force a result!


by telescoper at August 23, 2016 09:13 PM

Clifford V. Johnson - Asymptotia

Sometimes a Sharpie…

Sometimes a sharpie and a bit of bristol are the best defense against getting lost in the digital world*... (Click for larger view.)


(Throwing down some additional faces for a story in the book. Just wasn't feeling it in [...] Click to continue reading this post

The post Sometimes a Sharpie… appeared first on Asymptotia.

by Clifford at August 23, 2016 08:31 PM

Symmetrybreaking - Fermilab/SLAC

Five facts about the Big Bang

It’s the cornerstone of cosmology, but what is it all about?

Astronomers Edwin Hubble and Milton Humason in the early 20th century discovered that galaxies are moving away from the Milky Way. More to the point: Every galaxy is moving away from every other galaxy on average, which means the whole universe is expanding. In the past, then, the whole cosmos must have been much smaller, hotter and denser. 

That description, known as the Big Bang model, has stood up against new discoveries and competing theories for the better part of a century. So what is this “Big Bang” thing all about?

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang happened everywhere at once. 

The universe has no center or edge, and every part of the cosmos is expanding. That means if we run the clock backward, we can figure out exactly when everything was packed together—13.8 billion years ago. Because every place we can map in the universe today occupied the same place 13.8 billion years ago, there wasn't a location for the Big Bang: Instead, it happened everywhere simultaneously.
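A hedged, back-of-the-envelope version of “running the clock backward”: ignoring how the expansion rate has changed over time, the age comes out to roughly \(1/H_0\). The Hubble constant below is an assumed round value, not a number quoted in the article.

```python
# Rough "Hubble time" estimate: age ~ 1/H0, ignoring deceleration/acceleration.
KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

H0 = 70.0                        # km/s/Mpc (assumed round value)
H0_per_second = H0 / KM_PER_MPC  # convert to 1/s
age_gyr = (1.0 / H0_per_second) / SECONDS_PER_GYR

print(f"1/H0 ≈ {age_gyr:.1f} billion years")  # ≈ 14, close to the measured 13.8
```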

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang may not describe the actual beginning of everything. 

“Big Bang” broadly refers to the theory of cosmic expansion and the hot early universe. However, sometimes even scientists will use the term to describe a moment in time—when everything was packed into a single point. The problem is that we don’t have either observations or theory that describes that moment, which is properly (if clumsily) called the “initial singularity.” 

The initial singularity is the starting point for the universe we observe, but there might have been something that came before. 

The difficulty is that the very hot early cosmos and the rapid expansion called “inflation” that likely happened right after the singularity wiped out most—if not all—of the information about any history that preceded the Big Bang. Physicists keep thinking of new ways to check for signs of an earlier universe, and though we haven’t seen any of them so far, we can’t rule it out yet.

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang theory explains where all the hydrogen and helium in the universe came from. 

In the 1940s, Ralph Alpher and George Gamow calculated that the early universe was hot and dense enough to make virtually all the helium, lithium and deuterium (hydrogen with a neutron attached) present in the cosmos today; later research showed where the primordial hydrogen came from. This is known as “Big Bang nucleosynthesis,” and it stands as one of the most successful predictions of the theory. The heavier elements (such as oxygen, iron and uranium) were formed in stars and supernova explosions.

The best evidence for the Big Bang is in the form of microwaves. Early on, the whole universe was dense enough to be completely opaque. But at a time roughly 380,000 years after the Big Bang, expansion spread everything out enough to make the universe transparent. 

The light released from this transition, known as the cosmic microwave background (CMB), still exists. It was first observed in the 1960s by Arno Penzias and Robert Wilson. That discovery cemented the Big Bang theory as the best description of the universe; since then, observatories such as WMAP and Planck have used the CMB to tell us a lot about the total structure and content of the cosmos.
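To see roughly how expansion turned that released light into microwaves (using a present-day CMB temperature of about 2.7 K and a recombination redshift of roughly 1100, standard values that the article itself does not quote): the radiation temperature scales with redshift as

\[ T(z) = T_0\,(1+z) \approx 2.7\,\mathrm{K} \times 1100 \approx 3000\,\mathrm{K}, \]

so light emitted by a roughly 3000 K plasma has since been stretched by a factor of about a thousand into today’s microwave background.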

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

One of the first people to think scientifically about the origin of the universe was a Catholic priest. 

In addition to his religious training and work, Georges Lemaître was a physicist who studied the general theory of relativity and worked out some of the conditions of the early cosmos in the 1920s and ’30s. His preferred metaphors for the origin of the universe were “cosmic egg” and “primeval atom,” but they never caught on, which is too bad, because …

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

It seems nobody likes the name "Big Bang." 

Until the 1960s, the idea of a universe with a beginning was controversial among physicists. The name “Big Bang” was actually coined by astronomer Fred Hoyle, who was the leading proponent of an alternative theory, in which the universe continues forever without a beginning.

His shorthand for the theory caught on, and now we’re kind of stuck with it. Calvin and Hobbes’ attempt to get us to adopt “horrendous space kablooie” has failed so far.

 

The Big Bang is the cornerstone of cosmology, but it’s not the whole story. Scientists keep refining the theory of the universe, motivated by our observation of all the weird stuff out there. Dark matter (which holds galaxies together) and dark energy (which makes the expansion of the universe accelerate) are the biggest mysteries that aren't described by the Big Bang theory by itself. 

Our view of the universe, like the cosmos itself, keeps evolving as we discover more and more new things. But rather than fading away, our best explanation for why things are the way they are has remained—the fire at the beginning of the universe.

by Matthew R. Francis at August 23, 2016 01:00 PM

August 22, 2016

Sean Carroll - Preposterous Universe

Maybe We Do Not Live in a Simulation: The Resolution Conundrum

Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)


One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.

At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.

This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
  4. Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
  5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
  6. A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
  7. Therefore, we probably live in a simulation.

Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:

  • We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.

Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.
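A minimal toy sketch of how typicality concentrates observers at the bottom of the stack. The numbers N and D are purely illustrative assumptions, not anything from the argument above.

```python
# Toy count of civilizations per simulation level.
# Level 0 is the top-level civilization; levels 1..D are nested simulations.
# Assumption (illustrative only): every civilization capable of simulating
# runs N simulations, and the capability runs out after D levels of nesting.
N = 10  # simulations per capable civilization (assumed)
D = 5   # depth at which simulations stop being possible (assumed)

counts = [N**level for level in range(D + 1)]
total = sum(counts)

for level, count in enumerate(counts):
    share = 100 * count / total
    print(f"level {level}: {count:>7} civilizations ({share:.1f}% of the ensemble)")

# The deepest level dominates the count, yet by construction it is the one
# level that cannot run simulations of its own -- the conundrum above.
```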

This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, which might not look like we look at all. In multiverse cosmology this shows up as the “youngness paradox.”

Personally I think that premise 1. (it’s easy to perform simulations) is a bit questionable, and premise 5. (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility — by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?

I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.

by Sean Carroll at August 22, 2016 04:45 PM

Peter Coles - In the Dark

Poll – Do you Listen to Music while you Study?

A propos de nothing in particular, the other day I posted a little poll on Twitter inquiring whether or not people like to have music playing while they work. The responses surprised me, so I thought I’d try the same question on here (although I won’t spill the beans on here immediately). I’ve made the question quite general in the hope that as wide a range of people as possible (e.g. students, researchers and faculty) will feel able to respond. By “study” I mean anything that needs you to concentrate, including practical work, coding, data analysis, reading papers, writing papers, etc. It doesn’t mean any mindless activity, such as bureaucracy.

Please fill the poll in before reading my personal response, which comes after the “read more” tag.

Oh, and if you pick “Depends” then please let me know what it depends on through the comments box (e.g. type of music, type of study..)

Take Our Poll: http://polldaddy.com/poll/9502852

My response was definitely “no”. I often listen to music while preparing to work, but I find it too hard to concentrate if there’s music playing, especially if I’m trying to do calculations.

 


by telescoper at August 22, 2016 02:52 PM

Tommaso Dorigo - Scientificblogging

Post-Doctoral Positions In Experimental Physics For Foreigners
The Italian National Institute for Nuclear Physics offers 20 post-doctoral positions in experimental physics to foreigners with a PhD obtained no earlier than November 2008. 
So if you have a PhD (in Physics, but I guess other disciplines are also valid as long as your CV conforms), you like Italy, or if you would like to come and work with me on the search and study of the Higgs boson with the CMS experiment (or even if you would like to do something very different, in another town, with another experiment), you might consider applying!

The economic conditions are not extraordinary in an absolute sense, but you would still end up getting a salary more or less like mine, which in Italy sort of allows one to live a decent life.

read more

by Tommaso Dorigo at August 22, 2016 01:11 PM

CERN Bulletin

Administrative Circular No. 22B (Rev. 2) - Compensation for hours of long-term shift work

Administrative Circular No. 22B (Rev. 2) entitled "Compensation for hours of long-term shift work",  approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208538.

 

This revised circular cancels and replaces Administrative Circular No. 22B (Rev. 1) also entitled "Compensation for hours of long-term shift work" of March 2011.

This document contains minor changes to reflect the new career structure.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 31 (Rev. 2) - International indemnity and non-resident allowance

Administrative Circular No. 31 (Rev. 2) entitled "International indemnity and non-resident allowance", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 23 June 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208547.

 

This revised circular cancels and replaces Administrative Circular No. 31 (Rev. 1) also entitled "International indemnity and non-resident allowance" of October 2007.

The main changes reflect the decision taken in the framework of the five-yearly review to extend eligibility for international indemnity to all staff members, as well as to introduce a distinction between current staff members and those recruited as from 1 September 2016. For the latter, the international indemnity will be calculated as a percentage of the minimum salary of the grade into which they are recruited; the amount granted to the former will not change, and is now expressed as a percentage of the midpoint salary of the grade corresponding to their career path at the time of recruitment.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Staff Rules and Regulations - Modification No. 11 to the 11th edition

The following modifications to the Staff Rules and Regulations have been implemented:

 

  • In the framework of the Five-Yearly Review 2015, in accordance with the decisions taken by the Council in December 2015 (CERN/3213), relating to the new CERN career structure;
     
  • In accordance with the decisions taken by the Council in June 2016 (CERN/3247), relating to the status of apprentices and the remaining technical adjustments.

 

The modifications relating to the status of apprentices have entered into force on 1 August 2016 and those relating to the new CERN career structure and the technical adjustments will enter into force on 1 September 2016.

  • Preliminary Note, Contents - amendment of page iv.
     
  • Chapter I, General Provisions
    • Section 2 (Categories of members of the personnel) - amendment of pages 2 and 3.
       
  • Chapter II, Conditions of Employment and Association
    • Section 1 (Employment and association) - amendment of pages 11, 12, 13, 14 and 15.
    • Section 2 (Classification and merit recognition) – amendment of pages 16, 17 and 18.
    • Section 3 (Learning and development) - amendment of pages 19 and 20.
    • Section 4 (Leave) - amendment of pages 21, 22, 23, 25 and 26.
    • Section 5 (Termination of contract) - amendment of page 29.
       
  • Chapter III, Working Conditions
    • Section 1 (Working hours) – amendment of pages 30, 31 and 32.
       
  • Chapter IV, Social Conditions
  • Section 1 (Family and family benefits) - amendment of pages 37 and 38.
  • Section 2 (Social insurance cover) - amendment of pages 39 and 40.
     
  • Chapter V, Financial conditions
    • Section 1 (Financial benefits) – amendment of pages 41, 42, 43, 45, 46 and 47.
       
  • Chapter VI, Settlement of Disputes and Discipline
    • Section 1 (Settlement of disputes) - amendment of page 50.
    • Section 2 (Discipline) – amendment of pages 55, 56, 57 and 58.
       
  • Annex A1 (Periodic review of the financial and social conditions of members of the personnel) – amendment of page 62.
  • Annex RA1 (General definition of career paths) – page 66 is deleted.
  • Annex RA2 (Financial awards) - amendment of page 67.
  • Annex RA5 (Monthly basic salaries of staff members) - amendment of page 71.
  • Annex RA8 (International indemnity) – amendment of page 74.
  • Annex RA9 (Installation indemnity) – amendment of page 75.
  • Annex RA10 (Reinstallation indemnity) – amendment of page 76.



The complete updated electronic version of the Staff Rules and Regulations will be accessible via CDS on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 23 (Rev. 4) - Special working hours

Administrative Circular No. 23 (Rev. 4) entitled "Special working hours", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208539.

 

This revised circular cancels and replaces Administrative Circular No. 23 (Rev. 3) also entitled "Special working hours" of January 2013.

This document contains modifications to reflect the new career structure and to ensure, consistent with practice, that compensation or remuneration of special working hours performed remotely is possible only in cases of emergency.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 13 (Rev. 4) - Guarantees for representatives of the personnel

Administrative Circular No. 13 (Rev. 4) entitled "Guarantees for representatives of the personnel", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208527.

 

This revised circular cancels and replaces Administrative Circular No. 13 (Rev. 3) also entitled "Guarantees for representatives of the personnel" of January 2014.

This document contains a single change to reflect the terminology under the new career structure: the term "career path" is replaced by "grade".

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

August 21, 2016

Peter Coles - In the Dark

Collector’s Item

I read in today’s Observer an interesting opinion piece by Martin Jacques, who was editor of a magazine called Marxism Today until it folded at the end of 1991. I was a subscriber, in fact, and for some reason I have kept my copy of the final edition all this time. Here’s the front cover:

image

I note that it says “Collector’s Item” on the front, though I’m not at all sure it’s worth any more now than the £1.80 I paid nearly 25 years ago!


by telescoper at August 21, 2016 01:32 PM

Peter Coles - In the Dark

An American doctor experiences the NHS. Again.

Remember that story a couple of years ago by an American doctor about her experiences of the NHS? Well, here’s a sequel…

Dr. Jen Gunter

With my cousin

Two years ago I wrote about my experience in a London emergency department with my son, Victor. That post has since been viewed > 450,000 times. There are over 800 comments with no trolls (a feat unto itself) and almost all of them express love for the NHS.

I was in England again this week. And yes, I was back in an emergency department, but this time with my cousin (who is English).

This is what happened.

My cousin loves high heels. As a former model she makes walking in the highest of heels look easy. However, cobblestone streets have challenges not found on catwalks and so she twisted her ankle very badly. Despite ice and elevation there was significant swelling and bruising and she couldn’t put any weight on her foot. I suggested we call her doctor and explain the situation. I was worried about a…



by telescoper at August 21, 2016 11:15 AM

August 20, 2016

ZapperZ - Physics and Physicists

Brain Region Responsible For Understanding Physics?
A group of researchers seem to think that they have found the region of the brain responsible for "understanding physics".

With both sets of experiments, the researchers found that when the subjects tried predicting physical outcomes, activity was most responsive in the premotor cortex and supplementary motor region of the brain: an area described as the brain’s action-planning region.

“Our findings suggest that physical intuition and action planning are intimately linked in the brain,” said Fischer. “We believe this might be because infants learn physics models of the world as they hone their motor skills, handling objects to learn how they behave. Also, to reach out and grab something in the right place with the right amount of force, we need real-time physical understanding.”

But is this really "understanding physics", though?

Zz.

by ZapperZ (noreply@blogger.com) at August 20, 2016 03:19 PM

ZapperZ - Physics and Physicists

Who Will Host The Next LHC?
Nature has an interesting article on the issues surrounding the politics, funding, and physics in building the next giant particle collider beyond the LHC.

The Japanese are the front-runners to host the ILC, but the Chinese have their own plans for a circular electron-positron collider that can be upgraded to a future proton-proton collider.

And of course, all of these will require quite a bit of chump change to fund, and will be an international collaboration.

The climate in the US continues to be very sour when it comes to building anything like this.

Zz.

by ZapperZ (noreply@blogger.com) at August 20, 2016 02:49 PM

August 19, 2016

Symmetrybreaking - Fermilab/SLAC

The $100 muon detector

A doctoral student and his adviser designed a tabletop particle detector they hope to make accessible to budding young engineering physicists.

When Spencer Axani was an undergraduate physics student, his background in engineering led him to a creative pipe dream: a pocket-sized device that could count short-lived particles called muons all day.

Muons, heavier versions of electrons, are around us all the time, a byproduct of the cosmic rays that shoot out from supernovae and other high-energy events in space. When particles from those rays hit Earth’s atmosphere, they often decay into muons.

Muons are abundant on the surface of the Earth, but in Axani’s University of Alberta underground office, shielded by the floors above, they might be few and far between. A pocket detector would be the perfect gadget for measuring the difference.

Now a doctoral student at Massachusetts Institute of Technology, Axani has nearly made this device a reality. Along with an undergraduate student and Axani’s adviser, Janet Conrad, he’s developed a detector that sits on a desk and tallies the muons that pass by. The best part? The whole system can be built by students for under $100.

“Compared to most detectors, it’s by far the cheapest and smallest I’ve found,” Axani says. “If you make 100,000 of these, it starts becoming a very large detector. Instrumenting airplanes and ships would let you start measuring cosmic ray rates around the world.”

Particle physicists deal with cosmic rays all of the time, says Conrad, a physics professor at MIT. “Sometimes we love them, and sometimes we hate them. We love them if we can use them for calibration of our detectors, and we hate them if they provide a background for what it is that we are trying to do.”

Conrad used small muon detectors similar to the one Axani dreamed about when leading a neutrino experiment at Fermi National Accelerator Laboratory called MiniBooNE. When a professor at the University of Alberta proposed adding mini-muon detectors to another neutrino experiment, Axani was ready to pitch in.

The idea was to create muon detectors to add to IceCube, a neutrino detector built into the ice in Antarctica. They would be inserted into IceCube’s proposed low-energy upgrade, known as PINGU (Precision IceCube Next Generation Upgrade).

First, they needed a prototype. Axani got to work and quickly devised a rough detector housed in PVC pipe. “It looked pretty lab,” Axani said. It also gave off a terrible smell, the result of using a liquid called toluene as a scintillator, a material that gives off light when hit by a charged particle.

Over the next few months, Axani refined the device, switching to an odorless plastic scintillator and employing silicon photomultipliers (SiPM), which amplify the light from the scintillator into a signal that can be read. Adding some electronics allowed him to build a readout screen that ticks off the amount of energy from muon interactions and registers the time of the event.

Sitting in Axani’s office, the counter shows a rate of one muon every few seconds, which is what they expected from the size of the detector. Though it’s fairly constant, even minor changes like increased humidity or heavy rain can alter it.
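A hedged estimate of why “one muon every few seconds” is about right for a desk-sized scintillator, using the textbook sea-level flux of roughly one muon per square centimetre per minute; the detector area below is an illustrative guess, not a number from the article.

```python
# Back-of-the-envelope muon rate for a small scintillator paddle.
FLUX = 1.0 / 60.0     # muons per cm^2 per second (~1 per cm^2 per minute at sea level)
area_cm2 = 5.0 * 5.0  # assumed 5 cm x 5 cm scintillator face (illustrative)

rate_hz = FLUX * area_cm2
print(f"expected rate ≈ {rate_hz:.2f} muons/s, i.e. one every {1.0 / rate_hz:.1f} s")
# ≈ 0.42 muons/s, or roughly one count every 2-3 seconds.
```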

Conrad and Axani have taken the detector down into the Boston subway, using the changes in the muon count to calculate the depth of the train tunnels. They’ve also brought it into the caverns of Fermilab’s neutrino experiments to measure the muon flux more than 300 feet underground.

Axani wants to take it to higher elevations—say, in an airplane at 30,000 feet above sea level—where muon counts should be higher, since the particles have had less time to decay after their creation in the atmosphere.
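The altitude dependence comes down to special relativity. A muon at rest lives about \(\tau \approx 2.2\,\mu\mathrm{s}\); for a representative sea-level muon energy of a few GeV (an assumed value, chosen only to set the scale), the lab-frame decay length is

\[ L \approx \gamma\, c\tau \approx \frac{E}{m_\mu c^2}\, c\tau \approx \frac{4\,\mathrm{GeV}}{0.106\,\mathrm{GeV}} \times 660\,\mathrm{m} \approx 25\,\mathrm{km}, \]

comparable to the depth of atmosphere the muon has to cross, so a detector at 30,000 feet sits closer to the production altitude and should register more muons that have not yet decayed.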

Fermilab physicist Herman White suggested taking one of the tiny detectors on a ship to study muon counts at sea. Mapping out the muon rate around the globe at sea has never been achieved. Liquid scintillator can be harmful to marine life, and the high voltage and power consumption of the large devices present a safety hazard.

While awaiting review of the PINGU upgrade, both Conrad and Axani see value in their project as an educational tool. With a low cost and simple instructions, the muon counter they created can be assembled by undergraduates and high school students, who would learn about machining, circuits, and particle physics along the way—no previous experience required.

“The idea was, students building the detectors would develop skills typically taught in undergraduate lab classes,” Spencer says. “In return, they would end up with a device useful for all sorts of physics measurements.”

Conrad has first-hand knowledge of how hands-on experience like this can teach students new skills. As an undergraduate at Swarthmore College, she took a course that taught all the basic abilities needed for a career in experimental physics: using a machine shop, soldering, building circuits. As a final project, she constructed a statue that she’s held on to ever since.

Creating the statue helped Conrad cement the lessons she learned in the class, but the product was abstract, not a functioning tool that could be used to do real science.

“We built a bunch of things that were fun, but they weren’t actually useful in any way,” Conrad says. “This [muon detector] takes you through all of the exercises that we did and more, and then produces something at the end that you would then do physics with.”

Axani and Conrad published instructions for building the detector on the open-source physics publishing site arXiv, and have been reworking the project with the aim of making it accessible to high-school students. No math more advanced than division and multiplication is needed, Axani says. And the parts don’t need to be new, meaning students could potentially take advantage of leftovers from experiments at places like Fermilab.

“This should be for students to build,” Axani says. “It’s a good project for creative people who want to make their own measurements.”

by Laura Dattaro at August 19, 2016 03:57 PM

The n-Category Cafe

Compact Closed Bicategories

I’m happy to announce that this paper has been published:

Abstract. A compact closed bicategory is a symmetric monoidal bicategory where every object is equipped with a weak dual. The unit and counit satisfy the usual ‘zig-zag’ identities of a compact closed category only up to natural isomorphism, and the isomorphism is subject to a coherence law. We give several examples of compact closed bicategories, then review previous work. In particular, Day and Street defined compact closed bicategories indirectly via Gray monoids and then appealed to a coherence theorem to extend the concept to bicategories; we restate the definition directly.

We prove that given a 2-category $C$ with finite products and weak pullbacks, the bicategory of objects of $C$, spans, and isomorphism classes of maps of spans is compact closed. As corollaries, the bicategory of spans of sets and certain bicategories of ‘resistor networks’ are compact closed.

This paper is dear to my heart because it forms part of Mike Stay’s thesis, for which I served as co-advisor. And it’s especially so because his proof that objects, spans, and maps-of-spans in a suitable 2-category form a compact symmetric monoidal bicategory turned out to be much harder than either of us were prepared for!

A problem worthy of attack
Proves its worth by fighting back.

In a compact closed category every object comes with morphisms called the ‘cap’ and ‘cup’, obeying the ‘zig-zag identities’. For example, in the category where morphisms are 2d cobordisms, the zig-zag identities say this:
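In symbols, under one common convention (an added gloss, not taken from the paper): for an object $A$ with dual $A^*$, the cup and cap are maps $\eta\colon I \to A \otimes A^*$ and $\varepsilon\colon A^* \otimes A \to I$, and the zig-zag identities read

\[ (\mathrm{id}_A \otimes \varepsilon)\circ(\eta \otimes \mathrm{id}_A) = \mathrm{id}_A, \qquad (\varepsilon \otimes \mathrm{id}_{A^*})\circ(\mathrm{id}_{A^*} \otimes \eta) = \mathrm{id}_{A^*}. \]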

But in a compact closed bicategory the zig-zag identities hold only up to 2-morphisms, which in turn must obey equations of their own: the ‘swallowtail identities’. As the name hints, these are connected to the swallowtail singularity, which is part of René Thom’s classification of catastrophes. This in turn is part of a deep and not yet fully worked out connection between singularity theory and coherence laws for ‘$n$-categories with duals’.

But never mind that: my point is that proving the swallowtail identities for a bicategory of spans in a 2-category turned out to be much harder than expected. Luckily Mike rose to the challenge, as you’ll see in this paper!

This paper is also gaining a bit of popularity for its beautiful depictions of the coherence laws for a symmetric monoidal bicategory. And symmetric monoidal bicategories are starting to acquire interesting applications.

The most developed of these are in mathematical physics — for example, 3d topological quantum field theory! To understand 3d TQFTs, we need to understand the symmetric monoidal bicategory where objects are collections of circles, morphisms are 2d cobordisms, and 2-morphisms are 3d cobordisms-between-cobordisms. The whole business of ‘modular tensor categories’ is immensely clarified by this approach. And that’s what this series of papers, still underway, is all about:

Mike Stay, on the other hand, is working on applications to computer science. That’s always been his focus — indeed, his Ph.D. was not in math but computer science. You can get a little taste here:

But there’s a lot more coming soon from him and Greg Meredith.

As for me, I’ve been working on applied math lately, like bicategories where the morphisms are electrical circuits, or Markov processes, or chemical reaction networks. These are, in fact, also compact closed symmetric monoidal bicategories, and my student Kenny Courser is exploring that aspect.

Basically, whenever you have diagrams that you can stick together to form new diagrams, and processes that turn one diagram into another, there’s a good chance you’re dealing with a symmetric monoidal bicategory! And if you’re also allowed to ‘bend wires’ in your diagrams to turn inputs into outputs and vice versa, it’s probably compact closed. So these are fundamental structures — and it’s great that Mike’s paper on them is finally published.

by john (baez@math.ucr.edu) at August 19, 2016 03:17 PM

August 18, 2016

Quantum Diaries

What is “Model Building”?

Hi everyone! It’s been a while since I’ve posted on Quantum Diaries. This post is cross-posted from ParticleBites.

One thing that makes physics, and especially particle physics, unique in the sciences is the split between theory and experiment. The role of experimentalists is clear: they build and conduct experiments, take data and analyze it using mathematical, statistical, and numerical techniques to separate signal from background. In short, they seem to do all of the real science!

So what is it that theorists do, besides sipping espresso and scribbling on chalk boards? In this post we describe one type of theoretical work called model building. This usually falls under the umbrella of phenomenology, which in physics refers to making connections between mathematically defined theories (or models) of nature and actual experimental observations of nature.

One common scenario is that one experiment observes something unusual: an anomaly. Two things immediately happen:

  1. Other experiments find ways to cross-check to see if they can confirm the anomaly.
  2. Theorists start to figure out the broader implications if the anomaly is real.

#1 is the key step in the scientific method, but in this post we’ll illuminate what #2 actually entails. The scenario looks a little like this:

An unusual experimental result (anomaly) is observed. One thing we would like to know is whether it is consistent with other experimental observations, but these other observations may not be simply related to the anomaly.

Theorists, who have spent plenty of time mulling over the open questions in physics, are ready to apply their favorite models of new physics to see if they fit. These are the models that they know lead to elegant mathematical results, like grand unification or a solution to the Hierarchy problem. Sometimes theorists are more utilitarian, and start with “do it all” Swiss army knife theories called effective theories (or simplified models) and see if they can explain the anomaly in the context of existing constraints.
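As a concrete, hedged illustration of what a “simplified model” can look like in the diphoton example pictured just below: one popular template adds a single new scalar \(S\) that couples to photons and gluons only through higher-dimensional operators, schematically

\[ \mathcal{L} \supset \tfrac{1}{2}\,\partial_\mu S\,\partial^\mu S - \tfrac{1}{2} m_S^2 S^2 + \frac{c_{\gamma\gamma}}{\Lambda}\, S\, F_{\mu\nu}F^{\mu\nu} + \frac{c_{gg}}{\Lambda}\, S\, G^a_{\mu\nu}G^{a\,\mu\nu}, \]

with the mass \(m_S\), the couplings \(c_{\gamma\gamma}, c_{gg}\) and the scale \(\Lambda\) left free to be fitted to the anomaly and then checked against all other constraints. The notation here is generic and not taken from the original post.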

Here’s what usually happens:

Usually the nicest models of new physics don’t fit! In the explicit example, the minimal supersymmetric Standard Model doesn’t include a good candidate to explain the 750 GeV diphoton bump.

Indeed, usually one needs to get creative and modify the nice-and-elegant theory to make sure it can explain the anomaly while avoiding other experimental constraints. This makes the theory a little less elegant, but sometimes nature isn’t elegant.

Candidate theory extended with a module (in this case, an additional particle). This additional module is “bolted on” to the theory to make it fit the experimental observations.

Now we’re feeling pretty good about ourselves. It can take quite a bit of work to hack the well-motivated original theory in a way that both explains the anomaly and avoids all other known experimental observations. A good theory can do a couple of other things:

  1. It points the way to future experiments that can test it.
  2. It can use the additional structure to explain other anomalies.

The picture for #2 is as follows:

A good hack to a theory can explain multiple anomalies. Sometimes that makes the hack a little more cumbersome. Physicists often develop their own sense of ‘taste’ for when a module is elegant enough.

Even at this stage, there can be a lot of really neat physics to be learned. Model-builders can develop a reputation for particularly clever, minimal, or inspired modules. If a module is really successful, then people will start to think about it as part of a pre-packaged deal:

A really successful hack may eventually be thought of as its own variant of the original theory.

Model-smithing is a craft that blends together a lot of the fun of understanding how physics works—which bits of common wisdom can be bent or broken to accommodate an unexpected experimental result? Is it possible to find a simpler theory that can explain more observations? Are the observations pointing to an even deeper guiding principle?

Of course—we should also say that sometimes, while theorists are having fun developing their favorite models, other experimentalists have gone on to refute the original anomaly.


Sometimes anomalies go away and the models built to explain them don’t hold together.

 

But here’s the mark of a really, really good model: even if the anomaly goes away and the particular model falls out of favor, a good model will have taught other physicists something really neat about what can be done within a given theoretical framework. Physicists get a feel for the kinds of modules that are out in the market (like an app store) and they develop a library of tricks to attack future anomalies. And if one is really fortunate, these insights can point the way to even bigger connections between physical principles.

I cannot help but end this post without one of my favorite physics jokes, courtesy of T. Tait:

 A theorist and an experimentalist are having coffee. The theorist is really excited; she tells the experimentalist, “I’ve got it—it’s a model that’s elegant, explains everything, and it’s completely predictive.” The experimentalist listens to her colleague’s idea and realizes how to test those predictions. She writes several grant applications, hires a team of postdocs and graduate students, trains them, and builds the new experiment. After years of design, labor, and testing, the machine is ready to take data. They run for several months, and the experimentalist pores over the results.

The experimentalist knocks on the theorist’s door the next day and says, “I’m sorry—the experiment doesn’t find what you were predicting. The theory is dead.”

The theorist frowns a bit: “What a shame. Did you know I spent three whole weeks of my life writing that paper?”

by Flip Tanedo at August 18, 2016 10:53 PM

Clifford V. Johnson - Asymptotia

Stranger Stuff…

Ok all you Stranger Things fans. You were expecting a physicist to say a few things about the show weren't you? Over at Screen Junkies, they've launched the first episode of a focus on TV Science (a companion to the Movie Science series you already know about)... and with the incomparable host Hal Rudnick, I talked about Stranger Things. There are spoilers. Enjoy.


(Embed and link after the fold:)
[...] Click to continue reading this post

The post Stranger Stuff… appeared first on Asymptotia.

by Clifford at August 18, 2016 06:35 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 13, 14
While I do not believe that this series of posts can be really useful to my younger colleagues, who will in a month have to participate in a tough selection for INFN researchers in Rome, I think there is some value in continuing what I started last month. 
After all, as physicists we are problem solvers, and some exercise is good for all of us. Plus, the laypersons who occasionally visit this blog may actually enjoy fiddling with the questions. For them, though, I thought it would be useful to also get to see the answers to the questions, or at least _some_ answer.

read more

by Tommaso Dorigo at August 18, 2016 03:35 PM

ZapperZ - Physics and Physicists

Could You Pass A-Level Physics Now?
This won't tell you whether you would pass, since A-Level Physics consists of several papers, including essay questions. But it is still an interesting test, and you might make a careless mistake if you don't read the question carefully.

And yes, I did go through the test, and I got 13 out of 13 correct even though I guessed at one of them (I wasn't sure what "specific charge" meant and was too lazy to look it up). The quiz at the end asked if I was an actual physicist! :)

You're probably an actual physicist, aren't you?

Check it out. This is what those A-level kids had to contend with.

Zz.

by ZapperZ (noreply@blogger.com) at August 18, 2016 03:23 PM

August 17, 2016

Clifford V. Johnson - Asymptotia

New Style…


Style change. For a story-within-a-story in the book, I'm changing styles, going to a looser, more cartoony style, which sort of fits tonally with the subject matter in the story. The other day on the subway I designed the characters in that style, and I share them with you here. It's lots of fun to draw in this looser [...] Click to continue reading this post

The post New Style… appeared first on Asymptotia.

by Clifford at August 17, 2016 09:00 PM

August 16, 2016

Lubos Motl - string vacua and pheno

Cold summer, mushrooms, Czech wine, \(17\MeV\) boson
Central Europe is experiencing one of the coldest Augusts in recent memory. (For example, Czechia's top and oldest weather station in Prague-Klementinum broke the record cold high for August 11th, previously from 1869.) But it's been great for mushroom pickers.



When you spend just an hour in the forest, you may find dozens of mushrooms similar to this one-foot-in-diameter bay bolete (Czech: hřib hnědý [brown], Pilsner Czech: podubák). I don't claim that we broke the Czech record today.

Also, the New York Times ran a story on the Moravian (Southeastern Czechia's) wine featuring an entrepreneur who came from Australia to improve the business. He reminds me of Josef Groll, the cheeky Bavarian brewmaster who was hired by the dissatisfied dignified citizens of Pilsen in 1842 and improved the beer in the city by 4 categories. Well, the difference is that the Moravian wine has never really sucked, unlike Pilsen's beer, except for the Moravian beer served to the tourists from Prague, as NYT also explains.

Hat tip: the U.S. ambassador.




More seriously, the April 2016 UC Irvine paper that showed much more confidence in the bold 6.8-sigma claims about the evidence for a new \(17\MeV\) gauge boson was published in PRL, probably the most prestigious place where such papers may be published. UC Irvine released a press release to celebrate this publication.




Again, it would be wonderful if this new boson existed. But people around the Hungarian team have the record of having made many similar claims in the past – I don't want to go into details which people, how they overlapped, and how many claims here (see e.g. this discussion) – and \(17\MeV\) is a typical energy from the regime of nuclear physics which is a very messy emergent discipline of science. So there's a big risk that numerous observable effects in this regime may be misattributed and misinterpreted.

The right "X boson" to explain the observation has to be "protophobic" – its interactions with electrons and neutrons must be nonzero but its interactions with protons have to vanish. This may look extremely unnatural but maybe it's not so bad.

by Luboš Motl (noreply@blogger.com) at August 16, 2016 05:42 PM

Symmetrybreaking - Fermilab/SLAC

The physics photographer

Fermilab’s house photographer of almost 30 years, Reidar Hahn, shares four of his most iconic shots.

Science can produce astounding images. Arcs of electricity. Microbial diseases, blown up in full color. The bones of long-dead beasts. The earth, a hazy blue marble in the distance.

But scientific progress is not always so visually dramatic. In laboratories in certain fields, such as high-energy particle physics, the stuff that excites the scientists might be hidden within the innards of machinery or encrypted as data.

Those labs need visual translators to show to the outside world the beauty and significance of their experiments. 

Reidar Hahn specializes in bringing physics to life. As Fermilab’s house photographer, he has been responsible for documenting most of what goes on in and around the lab for the past almost 30 years. His photos reveal the inner workings of complicated machinery. They show the grand scale of astronomical studies. 

Hahn took up amateur photography in his youth, gaining experience during trips to the mountains out West. He attended Michigan Technological University to earn a degree in forestry and in technical communications. The editor of the school newspaper noticed Hahn’s work and recruited him; he eventually became the principal photographer. 

After graduating, Hahn landed a job with a group of newspapers in the suburbs of Chicago. He became interested in Fermilab after covering the opening of the Tevatron, Fermilab’s now-decommissioned landmark accelerator. He began popping in to the lab to look for things to photograph. Eventually, they asked him to stay.

Reidar says he was surprised by what he found at the lab. “I had this misconception that when I came here, there would be all these cool, pristine cleanrooms with guys in white suits and rubber gloves. And there are those things here. But a lot of it is concrete buildings with duct tape and cable ties on the floor. Sometimes, the best thing you can do for a photo is sweep the floor before you shoot.”

Hahn says he has a responsibility, when taking photos for the public, to show the drama of high-energy physics, to impart a sense of excitement for the state of modern science.

Below, he shares the techniques he used to make some of his iconic images for Fermilab.

 

Tevatron

Photo by Reidar Hahn, Fermilab

The Tevatron

“I knew they were going to be shutting down the Tevatron—our large accelerator—and I wanted to get a striking or different view of it. It was 2011, and it would be big news when the lab shut it down. 

“This was composed of seven different photos. You can’t keep the shutter open on a digital camera very long, so I would do a two-minute exposure, then do another two-minute exposure, then another. This shot was done in the dead of winter on a very cold day; it was around zero [degrees]. I was up on the roof probably a good hour.

“It took a little time to prepare and think out. I could have shot it in the daylight, but it wouldn’t have had as much drama. So I had fire trucks and security vehicles and my wife driving around in circles with all their lights on for about half an hour. The more lights the better. I was on the 16th floor roof of the high-rise [Fermilab’s Wilson Hall]. I had some travelling in other directions, because if they were all going counter-clockwise, you’d just see headlights on the left and taillights on the other end. They were slowly driving around—10, 15 miles an hour—and painting a circle [of light] with their headlights and taillights. 

“This image shows a sense of physics on a big scale. And it got a lot of play. It got a full double spread in Scientific American. It was in a lot of other publications.

“I think particle physics has some unique opportunities for photography because of these scale differences. We’re looking for the smallest constituents of matter using the biggest machines in the world to do it.”

 

SRF Cavities

Photo by Reidar Hahn, Fermilab

SRF cavities

“This was an early prototype superconducting [radio-frequency] cavity, which is used to accelerate particles. Every one of those donuts there forces a particle to go faster and faster. In 2005, these cavities were just becoming a point of interest here at Fermilab. 

“This was sitting in a well-lit room with a lot of junk around it. They didn’t want it moved. So I had to think how I could make this interesting. How could I give it some motion, some feel that there’s some energy here?

“So I [turned] all the room lights out. This whole photo was done with a flashlight. You leave the shutter open, and you move the light over the subject and paint the light onto the subject. It’s a way to selectively light certain things. This is about four exposures combined in Photoshop. I had a flashlight with different color gels on it, and I just walked back and forth. 

“I wanted something dynamic to the photo. It’s an accelerator cavity; it should look like something that provides movement. So in the end, I took the gels off, and I dragged the flashlight through the scene [to create the streak of light above the cavity]. It could represent a [particle] beam, but it just provides some drama. 

“A good photo can help communicate that excitement we all have here about science. Scientists may not use [this photo] as often for technical things, but we’re also trying to make science exciting for the non-scientists. And people can learn that some of these things are beautiful objects. They can see some kind of elegance to the equipment that scientists develop and build for the tools of discovery.”

 

Scintillating material

Photo by Reidar Hahn, Fermilab

Scintillating material

“This was taken back in ’93. It was done on film—we bought our first digital camera in 1998. 

“This is a chemist here at the lab, and she's worked a lot on different kinds of scintillating compounds. A scintillator is something that takes light in the invisible spectrum and turns it to the visible spectrum. A lot of physics detectors use scintillating material to image particles that you can't normally see.

“[These] are some test samples she had. She needed the photo to illustrate various types of wave-shifting scintillator. I wanted to add her to the photo because—it all goes back to my newspaper days—people make news, not things. But the challenge gets tougher when you have to add a person to the picture. You can’t have someone sit still for three minutes while making an exposure.

“There’s a chemical in this plastic that wave-shifts some type of particle from UV to visible light. So I painted the scintillating plastic with the UV light in the dark and then had Anna come over and sit down at the stool. I had two flashes set up to light her. [The samples] all light internally. That’s the beauty of scintillator materials. 

“But it goes to show you how we have to solve a lot of problems to actually make our experiments work.”

 

Cerro Tololo observatory

Photo by Reidar Hahn, Fermilab

Cerro Tololo Observatory

“This is the Cerro Tololo [Inter-American] Observatory in Chile, taken in October 2012. We have a lot of involvement in the Dark Energy Survey, [a collaboration to map galaxies and supernovae and to study dark energy and the expansion of the universe]. Sometimes we get to go to places to document things the lab’s involved in.

“This one is hundreds of photos stacked together. If you look close, you can see it’s a series of dots. A 30-second exposure followed by a second for the shutter to reset and then another 30-second exposure.  

“The Earth spins. When you point a camera around the night sky and happen to get the North Star or Southern Cross—this is the Southern Cross—in the shot, you can see how the Earth rotates: This is what people refer to as star-trails. It’s a good reminder that we live in a vast universe and we’re spinning through it.

“We picked a time when there’s no moon because it’s hard to do this kind of shot when the moon comes up. Up on the top of the mountain, they don’t want a lot of light. We walked around with little squeeze lights or no lights at all because we didn’t want to have anything affect the telescopes. But every once in awhile I would notice a car go down from the top, and as it would go around the corner, they’d tap the brake lights. We learned to use the brake lights to light the building. It gives some drama to the dome.

“You’ve got to improvise. You have to work with some very tight parameters and still come back with the shot.”

by Molly Olmstead at August 16, 2016 01:00 PM

The n-Category Cafe

Two Miracles of Algebraic Geometry

In real analysis you get just what you pay for. If you want a function to be seven times differentiable you have to say so, and there’s no reason to think it’ll be eight times differentiable.

But in complex analysis, a function that’s differentiable is infinitely differentiable, and its Taylor series converges, at least locally. Often this lets you extrapolate the value of a function at some faraway location from its value in a tiny region! For example, if you know its value on some circle, you can figure out its value inside. It’s like a fantasy world.

Algebraic geometry has similar miraculous properties. I recently learned about two.

Suppose I told you:

  1. Every group is abelian.
  2. Every function between groups that preserves the identity is a homomorphism.

You’d rightly say I’m nuts. But all this is happening in the category of sets. Suppose we go to the category of connected projective algebraic varieties. Then a miracle occurs, and the analogous facts are true:

  1. Every connected projective algebraic group is abelian. These are called abelian varieties.
  2. If \(A\) and \(B\) are abelian varieties and \(f : A \to B\) is a map of varieties with \(f(1) = 1\), then \(f\) is a homomorphism.

The connectedness is crucial here. So, as Qiaochu Yuan pointed out in our discussion of these issues on MathOverflow, the magic is not all about algebraic geometry: you can see signs of it in topology. As a topological group, an abelian variety is just a torus. Every continuous basepoint-preserving map between tori is homotopic to a homomorphism. But the rigidity of algebraic geometry takes us further, letting us replace ‘homotopic’ by ‘equal’.

This gives some interesting things. From now on, when I say ‘variety’ I’ll mean ‘connected projective complex algebraic variety’. Let \(Var_*\) be the category of varieties equipped with a basepoint, and basepoint-preserving maps. Let \(AbVar\) be the category of abelian varieties, and maps that preserve the group operation. There’s a forgetful functor

\[ U: AbVar \to Var_* \]

sending any abelian variety to its underlying pointed variety. \(U\) is obviously faithful, but Miracle 2 says that it is a full functor.

Taken together, these mean that \(U\) is only forgetting a property, not a structure. So, shockingly, being abelian is a mere property of a variety.

Less miraculously, the functor \(U\) has a left adjoint! I’ll call this

\[ Alb: Var_* \to AbVar \]

because it sends any variety \(X\) with basepoint to something called its Albanese variety.

In case you don’t thrill to adjoint functors, let me say what this means in ‘plain English’ — or at least what some mathematicians might consider plain English.

Given any variety \(X\) with a chosen basepoint, there’s an abelian variety \(Alb(X)\) that deserves to be called the ‘free abelian variety on \(X\)’. Why? Because it has the following universal property: there’s a basepoint-preserving map called the Albanese map

\[ i_X \colon X \to Alb(X) \]

such that any basepoint-preserving map \(f: X \to A\) where \(A\) happens to be abelian factors uniquely as \(i_X\) followed by a map

\[ \overline{f} \colon Alb(X) \to A \]

that is also a group homomorphism. That is:

\[ f = \overline{f} \circ i_X \]
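
Said more compactly, this is just the factorization above rewritten as the usual hom-set bijection for an adjunction:

\[ \mathrm{Hom}_{AbVar}(Alb(X), A) \;\cong\; \mathrm{Hom}_{Var_*}(X, U(A)), \]

naturally in \(X\) and \(A\).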

Okay, enough ‘plain English’. Back to category theory.

As usual, the adjoint functors

\[ U: AbVar \to Var_* , \qquad Alb: Var_* \to AbVar \]

define a monad

\[ T = U \circ Alb : Var_* \to Var_* \]

The unit of this monad is the Albanese map. Moreover \(U\) is monadic, meaning that abelian varieties are just algebras of the monad \(T\).

All this is very nice, because it means the category theorist in me now understands the point of Albanese varieties. At a formal level, the Albanese variety of a pointed variety is a lot like the free abelian group on a pointed set!

But then comes a fact connected to Miracle 2: a way in which the Albanese variety is not like the free abelian group! \(T\) is an idempotent monad:

\[ T^2 \cong T \]

Since the right adjoint \(U\) is only forgetting a property, the left adjoint \(Alb\) is only ‘forcing that property to hold’, and forcing it to hold again doesn’t do anything more for you!

In other words: the Albanese variety of the Albanese variety is just the Albanese variety.

(I am leaving some forgetful functors unspoken in this snappy statement: I really mean “the underlying pointed variety of the Albanese variety of the underlying pointed variety of \(X\) is isomorphic to the Albanese variety of \(X\)”. But forgetful functors often go unspoken in ordinary mathematical English: they’re not just forgetful, they’re forgotten.)

Four puzzles:

Puzzle 1. Where does Miracle 1 fit into this story?

Puzzle 2. Where does the Picard variety fit into this story? (There’s a kind of duality for abelian varieties, whose categorical significance I haven’t figured out, and the dual of the Albanese variety of \(X\) is called the Picard variety of \(X\).)

Puzzle 3. Back to complex analysis. Suppose that instead of working with connected projective algebraic varieties we used connected compact complex manifolds. Would we still get a version of Miracles 1 and 2?

Puzzle 4. How should we pronounce ‘Albanese’?

(I don’t think it rhymes with ‘Viennese’. I believe Giacomo Albanese was one of those ‘Italian algebraic geometers’ who always get scolded for their lack of rigor. If he’d just said it was a bloody monad…)

by john (baez@math.ucr.edu) at August 16, 2016 03:13 AM

August 15, 2016

Sean Carroll - Preposterous Universe

You Should Love (or at least respect) the Schrödinger Equation

Over at the twitter dot com website, there has been a briefly-trending topic #fav7films, discussing your favorite seven films. Part of the purpose of being on twitter is to one-up the competition, so I instead listed my #fav7equations. Slightly cleaned up, the equations I chose as my seven favorites are:

  1. {\bf F} = m{\bf a}
  2. \partial L/\partial {\bf x} = \partial_t ({\partial L}/{\partial {\dot {\bf x}}})
  3. {\mathrm d}*F = J
  4. S = k \log W
  5. ds^2 = -{\mathrm d}t^2 + {\mathrm d}{\bf x}^2
  6. G_{ab} = 8\pi G T_{ab}
  7. \hat{H}|\psi\rangle = i\partial_t |\psi\rangle

In order: Newton’s Second Law of motion, the Euler-Lagrange equation, Maxwell’s equations in terms of differential forms, Boltzmann’s definition of entropy, the metric for Minkowski spacetime (special relativity), Einstein’s equation for spacetime curvature (general relativity), and the Schrödinger equation of quantum mechanics. Feel free to Google them for more info, even if equations aren’t your thing. They represent a series of extraordinary insights in the development of physics, from the 1600’s to the present day.

Of course people chimed in with their own favorites, which is all in the spirit of the thing. But one misconception came up that is probably worth correcting: people don’t appreciate how important and all-encompassing the Schrödinger equation is.

I blame society. Or, more accurately, I blame how we teach quantum mechanics. Not that the standard treatment of the Schrödinger equation is fundamentally wrong (as other aspects of how we teach QM are), but that it’s incomplete. And sometimes people get brief introductions to things like the Dirac equation or the Klein-Gordon equation, and come away with the impression that they are somehow relativistic replacements for the Schrödinger equation, which they certainly are not. Dirac et al. may have originally wondered whether they were, but these days we certainly know better.

As I remarked in my post about emergent space, we human beings tend to do quantum mechanics by starting with some classical model, and then “quantizing” it. Nature doesn’t work that way, but we’re not as smart as Nature is. By a “classical model” we mean something that obeys the basic Newtonian paradigm: there is some kind of generalized “position” variable, and also a corresponding “momentum” variable (how fast the position variable is changing), which together obey some deterministic equations of motion that can be solved once we are given initial data. Those equations can be derived from a function called the Hamiltonian, which is basically the energy of the system as a function of positions and momenta; the results are Hamilton’s equations, which are essentially a slick version of Newton’s original {\bf F} = m{\bf a}.
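
For reference, Hamilton’s equations in their standard textbook form are

\dot{x} = \partial H/\partial p, \qquad \dot{p} = -\partial H/\partial x,

and for H = p^2/2m + V(x) they indeed reduce to {\bf F} = m{\bf a}.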

There are various ways of taking such a setup and “quantizing” it, but one way is to take the position variable and consider all possible (normalized, complex-valued) functions of that variable. So instead of, for example, a single position coordinate x and its momentum p, quantum mechanics deals with wave functions ψ(x). That’s the thing that you square to get the probability of observing the system to be at the position x. (We can also transform the wave function to “momentum space,” and calculate the probabilities of observing the system to be at momentum p.) Just as positions and momenta obey Hamilton’s equations, the wave function obeys the Schrödinger equation,

\hat{H}|\psi\rangle = i\partial_t |\psi\rangle.

Indeed, the \hat{H} that appears in the Schrödinger equation is just the quantum version of the Hamiltonian.

The problem is that, when we are first taught about the Schrödinger equation, it is usually in the context of a specific, very simple model: a single non-relativistic particle moving in a potential. In other words, we choose a particular kind of wave function, and a particular Hamiltonian. The corresponding version of the Schrödinger equation is

\displaystyle{\left[-\frac{1}{2\mu}\frac{\partial^2}{\partial x^2} + V(x)\right]|\psi\rangle = i\partial_t |\psi\rangle}.

If you don’t dig much deeper into the essence of quantum mechanics, you could come away with the impression that this is “the” Schrödinger equation, rather than just “the non-relativistic Schrödinger equation for a single particle.” Which would be a shame.
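
Just to make the single-particle case tangible, here is a minimal numerical sketch (my illustration, not anything from the post; units with ħ = m = 1, and a harmonic potential picked arbitrarily) of feeding a state and a Hamiltonian into the Schrödinger equation and letting it evolve:

import numpy as np
from scipy.linalg import expm

n, box = 400, 20.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# Finite-difference Laplacian for the kinetic term -1/2 d^2/dx^2.
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2
V = 0.5 * x**2                               # harmonic potential (arbitrary choice)
H = -0.5 * lap + np.diag(V)                  # the Hamiltonian matrix

psi = np.exp(-(x - 2.0)**2)                  # a displaced Gaussian as the initial state
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

U_dt = expm(-1j * H * 0.05)                  # one-step evolution operator exp(-i H dt)
for _ in range(100):                         # evolve: psi(t + dt) = U_dt psi(t)
    psi = U_dt @ psi
prob = np.abs(psi)**2                        # |psi|^2: probability density at the end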

What happens if we go beyond the world of non-relativistic quantum mechanics? Is the poor little Schrödinger equation still up to the task? Sure! All you need is the right set of wave functions and the right Hamiltonian. Every quantum system obeys a version of the Schrödinger equation; it’s completely general. In particular, there’s no problem talking about relativistic systems or field theories — just don’t use the non-relativistic version of the equation, obviously.

What about the Klein-Gordon and Dirac equations? These were, indeed, originally developed as “relativistic versions of the non-relativistic Schrödinger equation,” but that’s not what they ended up being useful for. (The story is told that Schrödinger himself invented the Klein-Gordon equation even before his non-relativistic version, but discarded it because it didn’t do the job for describing the hydrogen atom. As my old professor Sidney Coleman put it, “Schrödinger was no dummy. He knew about relativity.”)

The Klein-Gordon and Dirac equations are actually not quantum at all — they are classical field equations, just like Maxwell’s equations are for electromagnetism and Einstein’s equation is for the metric tensor of gravity. They aren’t usually taught that way, in part because (unlike E&M and gravity) there aren’t any macroscopic classical fields in Nature that obey those equations. The KG equation governs relativistic scalar fields like the Higgs boson, while the Dirac equation governs spinor fields (spin-1/2 fermions) like the electron and neutrinos and quarks. In Nature, spinor fields are a little subtle, because they are anticommuting Grassmann variables rather than ordinary functions. But make no mistake; the Dirac equation fits perfectly comfortably into the standard Newtonian physical paradigm.

For fields like this, the role of “position” that for a single particle was played by the variable x is now played by an entire configuration of the field throughout space. For a scalar Klein-Gordon field, for example, that might be the values of the field φ(x) at every spatial location x. But otherwise the same story goes through as before. We construct a wave function by attaching a complex number to every possible value of the position variable; to emphasize that it’s a function of functions, we sometimes call it a “wave functional” and write it as a capital letter,

\Psi[\phi(x)].

The absolute-value-squared of this wave functional tells you the probability that you will observe the field to have the value φ(x) at each point x in space. The functional obeys — you guessed it — a version of the Schrödinger equation, with the Hamiltonian being that of a relativistic scalar field. There are likewise versions of the Schrödinger equation for the electromagnetic field, for Dirac fields, for the whole Core Theory, and what have you.
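
For a free scalar field of mass m, for instance, that equation can be written out explicitly (a standard form, with ħ = 1 and the field momentum π(x) represented as -i δ/δφ(x)):

i\partial_t \Psi[\phi] = \int d^3x \left[ -\frac{1}{2}\frac{\delta^2}{\delta\phi(x)^2} + \frac{1}{2}\left(\nabla\phi\right)^2 + \frac{1}{2}m^2\phi(x)^2 \right] \Psi[\phi].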

So the Schrödinger equation is not simply a relic of the early days of quantum mechanics, when we didn’t know how to deal with much more than non-relativistic particles orbiting atomic nuclei. It is the foundational equation of quantum dynamics, and applies to every quantum system there is. (There are other ways of formulating quantum mechanics, of course, like the Heisenberg picture and the path-integral formulation, but they’re all basically equivalent.) You tell me what the quantum state of your system is, and what its Hamiltonian is, and I will plug into the Schrödinger equation to see how that state will evolve with time. And as far as we know, quantum mechanics is how the universe works. Which makes the Schrödinger equation arguably the most important equation in all of physics.

While we’re at it, people complained that the cosmological constant Λ didn’t appear in Einstein’s equation (6). Of course it does — it’s part of the energy-momentum tensor on the right-hand side. Again, Einstein didn’t necessarily think of it that way, but these days we know better. The whole thing that is great about physics is that we keep learning things; we don’t need to remain stuck with the first ideas that were handed down by the great minds of the past.

by Sean Carroll at August 15, 2016 10:28 PM

Tommaso Dorigo - Scientificblogging

The 2016 Perseids, And The Atmosphere As A Detector
As a long-time meteor observer, I never miss an occasion to watch the peak of a good shower. The problem is that such occasions have become less frequent in recent times, due to a busier agenda.
In the past few days, however, I was at CERN and could afford to go out and observe the night sky, so it made sense to spend at least a couple of hours checking on the peak activity of the Perseids, which this year was predicted to be stronger than usual.

read more

by Tommaso Dorigo at August 15, 2016 12:16 PM

August 14, 2016

Clifford V. Johnson - Asymptotia

A Known Fact…

“The fact that certain bodies, after being rubbed, appear to attract other bodies, was known to the ancients.” Thus begins, rather awesomely, the preface to Maxwell’s massively important “Treatise on Electricity and Magnetism” (1873). -cvj

The post A Known Fact… appeared first on Asymptotia.

by Clifford at August 14, 2016 07:29 PM

August 13, 2016

Geraint Lewis - Cosmic Horizons

For the love of Spherical Harmonics
I hate starting every blog post with an apology for having been too busy to write, but I have been. Teaching Electromagnetism to our first year class, computational physics using MatLab, and wrangling six smart, talented students takes up a lot of time.

But I continue to try and learn a new thing every day! And so here's a short summary of what I've been doing recently.

There's no secret I love maths. I'm not skilled enough to be a mathematician, but I am an avid user. One of the things I love about maths is its shock value. What, I hear you say, shock? Yes, shock.

I remember discovering that trigonometric functions can be written as infinite series, and finding that you can calculate these series numerically on a computer by adding the terms together, getting more and more accurate as you include higher terms.

And then there is Fourier Series! The fact that you can add these trigonometric functions together, appropriately weighted, to make other functions, functions that look nothing like sines and cosines. And again, calculating these on a computer.

This is my favourite, the fact that you can add waves together to make a square wave.
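
In formulas, this is the standard Fourier series of an ideal square wave of unit amplitude,

f(x) = \frac{4}{\pi} \sum_{k=1}^{\infty} \frac{\sin\big((2k-1)x\big)}{2k-1},

and keeping just the first handful of terms already gives a recognisably square shape.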
But we can go one step higher. We can think of waves on a sphere. These are special waves called Spherical Harmonics.

Those familiar with Schrodinger's equation know that these appear in the solution for the hydrogen atom, describing the wave function, telling us about the properties of the electron.

But spherical harmonics on a sphere are like the sines and cosines above, and we can describe any function over a sphere by summing up the appropriately weighted harmonics. What function, you might be thinking? How about the heights of the land and the depths of the oceans over the surface of the Earth?

This cool website has done this, and provides the coefficients that you need to use to describe the surface of the Earth in terms of spherical harmonics. The coefficients are complex numbers as they describe not only how much of a harmonic you need to add, but also how much you need to rotate it.

So, I made a movie.
What you are seeing is the surface of the Earth. At the start, we have only the zeroth "mode", which is just a constant value across the surface. Then we add the first mode, which is a "dipole", which is negative on one side of the Earth and positive on the other, but appropriately rotated. And then we keep adding higher and higher modes, which adds more and more detail. And I think it looks very cool!
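
To make the "appropriately weighted sum of harmonics" step concrete, here is a minimal Python sketch (my illustration, not the author's actual script; the dictionary toy_coeffs is invented for the example) that evaluates a partial sum of spherical harmonics on a latitude-longitude grid:

import numpy as np
from scipy.special import sph_harm

def partial_sum(coeffs, l_max, azimuth, colatitude):
    """Sum c_lm * Y_lm over all (l, m) with l <= l_max, at the given angles."""
    total = np.zeros_like(azimuth, dtype=complex)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            c = coeffs.get((l, m), 0.0)
            if c:
                # scipy's convention: sph_harm(m, l, azimuthal angle, polar angle)
                total += c * sph_harm(m, l, azimuth, colatitude)
    return total.real   # for a real field (like topography) the imaginary parts cancel

# Invented coefficients, chosen so the summed field is real; real data would come
# from the coefficient tables mentioned above.
toy_coeffs = {(0, 0): 1.0, (1, 0): 0.5, (2, 1): 0.3 + 0.2j, (2, -1): -0.3 + 0.2j}

colat, azim = np.meshgrid(np.linspace(0.0, np.pi, 91),
                          np.linspace(0.0, 2.0 * np.pi, 181), indexing="ij")
heights = partial_sum(toy_coeffs, l_max=2, azimuth=azim, colatitude=colat)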

Why are you doing this, I hear you cry. Why, because to make this work, I had to beef up my knowledge of python and povray, learn how to fully wrangle healpy to deal with functions on a sphere, a bit of scripting, a bit of ffmpeg, and remember just what spherical harmonics are. And as I've written about before, I think it is an important thing for a researcher to grow these skills.

When will I need these skills? Dunno, but they're now in my bag of tricks and ready to use.

by Cusp (noreply@blogger.com) at August 13, 2016 03:47 AM

August 12, 2016

ZapperZ - Physics and Physicists

Proton Radius Problem
John Timmer on Ars Technica has written a wonderful article on the "proton radius problem". The article gives a brief background on an earlier discovery, and then moves on to a new result on a deuterium atom.

This area is definitely a work-in-progress, and almost as exciting as the neutrino mass deficiency mystery from many years ago.

Zz.

by ZapperZ (noreply@blogger.com) at August 12, 2016 03:34 PM

ZapperZ - Physics and Physicists

The Science of Sports
With the Olympics in full swing right now, the Perimeter Institute has released a series that discusses the physics behind various sports at the Games. Called The Physics of the Olympics, it covers a wide range of events.

Zz.

by ZapperZ (noreply@blogger.com) at August 12, 2016 03:20 PM

August 11, 2016

Lubos Motl - string vacua and pheno

Modern obsession with permanent revolutions in physics
Francisco Villatoro joined me and a few others in pointing out that it's silly to talk about crises in physics. The LHC just gave us a big package of data at unprecedented energies to confirm a theory that's been around for a mere four decades. It's really wonderful.

People want new revolutions and they want them quickly. This attitude to physics is associated with the modern era. It's not new as I will argue but on the other hand, I believe that it was unknown in the era of Newton or even Maxwell. Modern physics – kickstarted by the discovery of relativity and quantum mechanics – has shown that something can be seriously wrong with the previous picture of physics. So people naturally apply "mathematical induction" and assume that a similar revolution will occur infinitely many times.

Well, I don't share this expectation. The framework of quantum mechanics is very likely to be with us forever. And even when it comes to the choice of the dynamics, string theory will probably be the most accurate theory forever – and quantum field theory will forever be the essential framework for effective theories.




David Gross' 1994 discussion of the 1938 conference in Warsaw shows that the desire to "abandon all the existing knowledge" is in no way new. It was surely common among physicists before the war.




Let me remind you that Gross has argued that all the heroes of physics were basically wrong and deluded about various rather elementary things – except for Oskar Klein who presented his "almost Standard Model" back in the late 1930s. Klein was building on the experience with the Kaluza-Klein theory of extra dimensions and his candidate for a theory of everything
  • appreciated the relationship between particles and fields, both for fermions and bosons that he treated analogously
  • used elementary particles analogous to those in the Standard Model – gauge bosons as well as elementary fermions
  • it had doublets of fermions (electron, neutrino; proton, neutron) and some massless and massive bosons mediating the forces
  • just a little bit was missing for him to construct the full Yang-Mills Lagrangian etc.
If Klein had gotten 10 brilliant graduate students, he might well have discovered the correct Standard Model before Hitler's invasion of Poland a year later. Well, they would probably have had to do some quick research on the hadrons, to learn about the colorful \(SU(3)\) and other things, but they still had a chance. Three decades of extra work looks excessive from the contemporary viewpoint. But maybe we're even sillier these days.

Gross says that there were some borderline insane people at the conference – like Eddington – and even the sane giants were confused about the applicability of quantum fields for photons, among other things. And Werner Heisenberg, the main father of quantum mechanics, was among those who expected all of quantum mechanics to break down imminently. Gross recalls:
Heisenberg concluded from the existence both of ultraviolet divergences and multi-particle production that there had to be a fundamental length of order the classical radius of the electron, below which the concept of length loses its significance and quantum mechanics breaks down. The classical electron radius, \(e^2/mc^2\) is clearly associated with the divergent electron self-energy, but also happens to be the range of nuclear forces, so it has something to do with the second problem. Quantum mechanics itself, he said, should break down at these lengths. I have always been amazed at how willing the great inventors of quantum mechanics were to give it up all at the drop of a divergence or a new experimental discovery.
The electron Compton wavelength, \(10^{-13}\) meters, was spiritually their "Planck scale". Everything – probably including the general rules of quantum mechanics – was supposed to break down over there. We know that quantum mechanics (in its quantum field theory incarnation) works very well at distances around \(10^{-20}\) meters, seven orders of magnitude shorter than the "limit" considered by Heisenberg.

This is just an accumulated piece of evidence supporting the statement that the belief in the "premature collapse of the status quo theories" has been a disease that physicists have suffered from for a century or so.

You know, if you want to localize the electron at shorter distances than the Compton wavelength, the particle pair production becomes impossible to neglect. Also, the loop diagrams produce integrals whose "majority" is the ultraviolet divergence, suggesting that you're in the regime where the theory breaks down. In some sense, it was reasonable to expect that a "completely new theory" would have to take over.

In reality, we know that the divergences may be removed by renormalization and the theory – quantum field theory – has a much greater range of validity. In some sense, the "renormalized QED" may be viewed as the new theory that Heisenberg et al. had in mind. Except that by its defining equations, it's still the same theory as the QED written down around 1930. One simply adds rules how to subtract the infinities to get the finite experimental predictions.

I want to argue that these two historical stories could be analogous:
Heisenberg believed that around \(e^2/mc^2 \sim 10^{-13}\,{\rm meters}\), all hell breaks loose because of the particle production and UV divergences.

Many phenomenologists have believed that around \(1/m_{\rm Higgs}\sim 10^{-19}\,{\rm meters}\), all hell breaks loose in order to make the light Higgs mass natural.
In both cases, there is no "inevitable" reason why the theory should break down. The UV divergences are there and dominate above the momenta \(|p|\sim m_e\). But they don't imply an inconsistency because the renormalization may deal with them.

In the case of the naturalness, everyone knows that there is not even a potential for an inconsistency. The Standard Model is clearly a consistent effective field theory up to much higher energies. It just seems that it's fine-tuned, correspondingly unnatural, and therefore "unlikely" assuming that the parameters take some "rather random values" from the allowed parameter space, using some plausible measure on the parameter space.

In the end, Heisenberg was wrong that QED had to break down beneath the Compton wavelength. However, he was morally right about a broader point – that theories may break down and be replaced by others because of divergences. Fermi's four-fermion theory produces divergences that cannot be cured by renormalization and that's the reason why W-bosons, Z-bosons, and the rest of the electroweak theory have to be added at the electroweak scale. A similar enhancement of the whole quantum field theory framework to string theory is needed near the string/Planck scale or earlier, thanks to the analogous non-renormalizability of Einstein's GR.

So something about the general philosophy believed by Heisenberg was right but the details just couldn't have been trusted as mechanically as the folks in the 1930s tended to do. Whether QED was consistent at length scales shorter than the Compton wavelength was a subtle question and the answer was ultimately Yes, it's consistent. So there was no reason why the theory "had to" break down and it didn't break down at that point.

Similarly, the reasons why the Standard Model should break down already at the electroweak scale are analogously vague and fuzzy. As I wrote a year ago, naturalness is fuzzy, subjective, model-dependent, and uncertain. You simply can't promote it to something that will reliably inform you about the next discovery in physics and the precise timing.

But naturalness is still a kind of an argument that broadly works, much like Heisenberg's argument was right whenever applied more carefully in different, luckier contexts. One simply needs to be more relaxed about the validity of naturalness. There may be many reasons why things look unnatural even though they are actually natural. Just compare the situation with that of Heisenberg. Before the renormalization era, it may have been sensible to consider UV divergences as a "proof" that the whole theory had to be superseded by a different one. But it wasn't true for subtle reasons.

The relaxed usage of naturalness should include some "tolerance towards a hybrid thinking of naturalness and the anthropic selection". Naturalness and the anthropic reasoning are very different ways of thinking. But that doesn't mean that they're irreconcilable. Future physicists may very well be forced to take both of them into account. Let me offer you a futuristic, relaxed, Lumoesque interpretation why supersymmetry or superpartner masses close to the electroweak scale are preferred.

Are the statements made by the supporters of the "anthropic principle" universally wrong? Not at all. Some of them are true – in fact, tautologically true. For example, the laws of physics and the parameters etc. are such that they allow the existence of stars and life (and everything else we see around, too). You know, the subtle anthropic issue is that the anthropic people also want to okay other laws of physics that admit "some other forms of intelligent life" but clearly disagree with other features of our Universe. They look at some "trans-cosmic democracy" in which all intelligent beings, regardless of their race, sex, nationality, and string vacuum surrounding them, are allowed to vote in some Multiverse United Nations. ;-)

OK, my being an "opponent of the anthropic principle as a way to discover new physics" means that I don't believe in this multiverse multiculturalism. It's impossible to find rules that would separate objects in different vacua to those who can be considered our peers and those who can't. For example, even though the PC people are upset, I don't consider e.g. Muslims who just mindlessly worship Allah to be my peers, to be the "same kind of observers as I am". So you may guess what I could think about some even stupider bound states of some particles in a completely different vacuum of string theory. Is that bigotry or racism not to consider some creatures from a different heterotic compactification a subhuman being? ;-)

So I tend to think that the only way to use the anthropic reasoning rationally is simply to allow the selection of the vacua according to everything we have already measured. I have measured that there exists intelligent life in the Universe surrounding me. But I have also measured the value of the electron's electric charge (as an undergrad, and I hated to write the report that almost no one was reading LOL). So I have collapsed the wave function into the space of the possible string vacua that are compatible with these – and all other – facts.

If all vacua were non-supersymmetric but if they were numerous, I would agree with the anthropic people that it's enough to have one in which the Higgs mass is much lower than the Planck scale if you want to have life – with long-lived stars etc. So the anthropic selection is legitimate. It's totally OK to assume that the vacua that admit life are the focus of the physics research, that there is an extra "filter" that picks the viable vacua and doesn't need further explanations.

However, what fanatical champions of the anthropic principle miss – and that may be an important point of mine – is that even if I allow this "life exists" selection of the vacua as a legitimate filter or a factor in the probability distributions for the vacua, I may still justifiably prefer the natural vacua with a rather low-energy supersymmetry breaking scale. Why?

Well, simply because these vacua are much more likely to produce life than the non-supersymmetric or high-SUSY-breaking-scale vacua! In those non-SUSY vacua, the Higgs is likely to be too heavy and the probability that one gets a light Higgs (needed for life) is tiny. On the other hand, there may be a comparable number of vacua that have a low-energy SUSY and a mechanism that generates an exponentially low SUSY breaking scale by some mechanism (an instanton, gluino condensate, something). And in this "comparably large" set of vacua, a much higher percentage will include a light Higgs boson and other things that are helpful or required for life.

So even if one reduces the "probability of some kind of a vacuum" to the "counting of vacua of various types", the usual bias equivalent to the conclusions of naturalness considerations may still emerge!

You know, some anthropic fanatics – and yes, I do think that even e.g. Nima has belonged to this set – often loved or love to say that once we appreciate the anthropic reasoning, it follows that we must abandon the requirement that the parameters are natural. Instead, the anthropic principle takes care of them. But this extreme "switch to the anthropic principle" is obviously wrong. It basically means that all of remaining physics arguments get "turned off". But it isn't possible to turn off physics. The naturalness-style arguments are bound to re-emerge even in a consistent scheme that takes the anthropic filters into account.

Take F-theory on a Calabi-Yau four-fold of a certain topology. It produces some number of non-SUSY (or high-energy SUSY) vacua, and some number of SUSY (low-energy SUSY) vacua. These two numbers may differ by a few orders of magnitude. But the probability to get a light Higgs may be some \(10^{30}\) times higher in the SUSY vacua. So the total number of viable SUSY vacua will be higher than the total number of non-SUSY vacua. We shouldn't think that this is some high-precision science because the pre-anthropic ratio of the number of vacua could have differed from one by an order of magnitude or two. But it's those thirty orders of magnitude (or twenty-nine) that make us prefer the low-energy SUSY vacua.

On the other hand, there's no reliable argument that would imply that "new particles as light as the Higgs boson" have to exist. The argument sketched in the previous paragraph only works up to an order of magnitude or two (or a few).

You know, it's also possible that superpartners that are too light also kill life for some reason; or there is no stringy vacuum in which the superpartners are too light relatively to the Higgs boson. In that case, well, it's not the end of the world. The actual parameters allowed by string theory (and life) beat whatever distribution you could believe otherwise (by their superior credibility). If the string vacuum with the lightest gluino that is compatible with the existing LHC observations has a \(3\TeV\) gluino, then the gluino simply can't be lighter. You can protest against it but that's the only thing you can do against a fact of Nature. The actual constraints resulting from full-fledged string theory or a careful requirement of "the existence of life" always beat some vague distributions derived from the notion of naturalness.

So when I was listing the adjectives that naturalness deserves, another one could be "modest" i.e. "always prepared to be superseded by a more rigorous or quantitative argument or distribution". Naturalness is a belief that some parameters take values of order one – but we only need to talk about the values in this vague way up to the moment when we find a better or more precise or more provable way to determine or constrain the value of the parameter.

Again, both the champions of the anthropic principle and the warriors for naturalness often build on exaggerated, fanatical, oversimplified, or naive theses. Everyone should think more carefully about the aspects of these two "philosophies" – their favorite one as well as the "opposite" one – and realize that there are lots of statements and principles in these "philosophies" that are obviously right and also lots of statements made by the fanatical supporters that are obviously wrong. Even more importantly, "naturalness" and "anthropic arguments" are just the most philosophically flavored types of arguments in physics – but aside from them, there still exist lots of normal, "technical" physics arguments. I am sure that the latter will be a majority of physics in the future just like they were a majority of physics in the past.

At the end, I want to say that people could have talked about the scales in ways that resemble the modern treatment of the scale sometime in the 1930s, too. The first cutoff where theories were said to break down was the electron mass, below an \({\rm MeV}\). Quantum field theory was basically known in the 1930s. Experiments went from \(1\keV\) to \(1\MeV\) and \(1\GeV\) to \(13\TeV\) – it was many, many orders of magnitude – but the framework of quantum field theory as the right effective theory survived. All the changes have been relatively minor since the 1930s. Despite the talk about some glorious decades in the past, people have been just adjusting technical details of quantum field theory since the 1930s.

And the theory was often ahead of experiments. In particular, new quarks (at least charm and top) were predicted before they were observed. The latest example of this gap was the discovery of the Higgs boson that took place some 48 years after it was theoretically proposed. If string theory were experimentally proven 48 years after its first formula was written down, we would see a proof in 2016. But you know, the number 48 isn't a high-precision law of physics. ;-)

Both experimental discoveries and theoretical discoveries are still taking place. Theories are being constructed and refined every year – even in recent years. And the experiments are finding particles previously unknown to the experiments – most recently, the Higgs boson in 2012. It's the "separate schedules" of the theory and experiment that confuses lots of people. But if you realize that it's normal and it's been a fact for many decades, you will see that there's nothing "unusually slow or frustrating" about the current era. Just try to fairly assess how many big experimental discoveries confirming big theories were done in the 1930s or 1940s or 1950s or 1980s etc.

The talk about frustration, nightmares, walls, and dead ends can't be justified by the evidence. It's mostly driven by certain people's anti-physics agenda.

by Luboš Motl (noreply@blogger.com) at August 11, 2016 06:03 PM

August 10, 2016

Symmetrybreaking - Fermilab/SLAC

#AskSymmetry Twitter chat with Risa Wechsler

See cosmologist Risa Wechsler's answers to readers' questions about dark matter and dark energy.

[View the story "#AskSymmetry Twitter Chat with Risa Wechsler - Aug. 9, 2016" on Storify: http://storify.com/Symmetry/asksymmetry-twitter-chat-with-cosmologist-risa-wec]

August 10, 2016 07:06 PM

Lubos Motl - string vacua and pheno

Naturalness, a null hypothesis, hasn't been superseded
Quanta Magazine's Natalie Wolchover has interviewed some real physicists to learn
What No New Particles Means for Physics
so you can't be surprised that the spirit of the article is close to my take on the same question published three days ago. Maria Spiropulu says that experimenters like her know no religion so her null results are a discovery, too. I agree with that. I am just obliged to add that if she is surprised that she isn't getting some big prizes for the discovery of the Standard Model at \(\sqrt{s}=13\TeV\), it's because her discovery is too similar to the discovery of the Standard Model at \(\sqrt{s}=1.96\TeV\), \(\sqrt{s}=7\TeV\), and \(\sqrt{s}=8\TeV\), among others. ;-) And the previous similar discoveries were already done by others.

She and others at the LHC are doing a wonderful job and tell us the truth but the opposite answer – new physics – would still be more interesting for the theorists – or any "client" of the experimenters. I believe that this point is obvious and it makes no sense to try to hide it.

Nima Arkani-Hamed says lots of things I appreciate, too, although his assertions are exaggerated, as I will discuss. It's crazy to talk about a disappointment, he tells us. Experimenters have worked hard and well. Those who whine that some new pet model hasn't been confirmed are spoiled brats who scream because they didn't get their favorite lollipop and they should be spanked.




Yup. But when you look under the surface, you will see that there are actually many different opinions about naturalness and the state of physics expressed by different physicists. If you're strict enough, many of these opinions almost strictly contradict each other.




Nathaniel Craig, whom I know as a brilliant student at Harvard, says that the absence of new physics will have to be taken into account and addressed, but he implicitly makes it clear that he will keep on thinking about theories such as his "neutral naturalness". Some kind of naturalness will almost certainly be believed and elaborated upon by people like him in the future, anyway. I think that Nathaniel and other bright folks like that should grow balls and say some of these things more clearly – even if they contradict some more senior colleagues.

Aside from saying that the diphoton could have been groundbreaking (yup), Raman Sundrum said:
Naturalness is so well-motivated that its actual absence is a major discovery.
Well, it is well-motivated but it hasn't been shown not to exist. This claim of mine contradicting Sundrum's assertion above was used in the title of this blog post.

What does it mean that you show that naturalness doesn't exist? Well, naturalness is a hypothesis and you want to exclude it. Except that naturalness – while very "conceptual" – is a classic example of a null hypothesis. If you want to exclude it, you should exclude it at least by a five-sigma deviation! You need to find a phenomenon whose probability is predicted to be smaller than 1 in 1,000,000 according to the null hypothesis.
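
Just to spell out the dictionary between sigmas and probabilities used here (a quick illustrative check with scipy, nothing specific to this argument):

from scipy.stats import norm

for n_sigma in (1.5, 2.0, 3.0, 5.0):
    p = norm.sf(n_sigma)                 # one-sided Gaussian tail probability
    print(f"{n_sigma} sigma -> p = {p:.2e} (about 1 in {1.0 / p:,.0f})")
# 3 sigma is roughly 1 in 700; 5 sigma is roughly 1 in 3.5 million,
# i.e. below the 1-in-1,000,000 threshold mentioned above.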

We are routinely used to require just a 2-sigma (95%) exclusion for particular non-null hypotheses that add some new particles of effects. But naturalness is clearly not one of those. Naturalness is the null hypothesis in these discussions. So you need to exclude it by the five-sigma evidence. Has it taken place?

Naturalness isn't a sharply defined Yes/No adjective. As parameters become (gradually) much smaller than one, the theory becomes (gradually) less natural. When some fundamental parameter in the Lagrangian is fine-tuned to 1 part in 300, we say that \(\Delta=300\) and the probability that the parameter is this close to the special value (typically zero) or closer is \(p=1/300\).

(The precise formula to define \(\Delta\) in MSSM or a general model is a seemingly technical but also controversial matter. There are many ways to do so. Also, there are lots of look-elsewhere effects that could be added as factors in \(\Delta\) or removed from it. For these reasons, I believe that you should only care about the order of magnitude of \(\Delta\), not some precise changes of values.)
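
One common choice (just one of the many alluded to above) is the Barbieri-Giudice sensitivity measure,

\[ \Delta \;=\; \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|, \]

which quantifies how violently the Z-boson mass, i.e. the electroweak scale, reacts to small fractional changes of the underlying parameters \(p_i\).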

The simplest supersymmetric models have been shown to be unnatural at \(\Delta \gt 300\) or something like that. That means that some parameters look special or fine-tuned. The probability of this degree of fine-tuning is \(p=1/300\) or so. Does it rule out naturalness? No because we require a five-sigma falsification of the null hypothesis e.g. \(p=1/1,000,000\) or so. We're very far from it. Superpartners at masses comparable to \(10\TeV\) will still allow naturalness to survive.

Twenty years ago, I wasn't fond of using this X-sigma terminology but my conclusions were basically the same. If some parameters are comparable to \(0.01\), they may still be said to be of order one. We know such parameters. The fine-structure constant is \(\alpha\approx 1/137.036\). We usually don't say that it's terribly unnatural. The value may be rather naturally calculated from the \(SU(2)\) and \(U(1)_Y\) electroweak coupling constants and those become more natural, and so on. But numbers of order \(0.01\) only differ from "numbers of order one" by some 2.5 sigma.
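
For the record, the relation alluded to above is the standard electroweak one (in natural units),

\[ e = g\,\sin\theta_W = \frac{g g'}{\sqrt{g^2 + g'^2}}, \qquad \alpha = \frac{e^2}{4\pi}, \]

so the smallness of \(\alpha \approx 1/137\) traces back to couplings \(g\) and \(g'\) that look more like numbers of order one.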

My taste simply tells me that \(1/137.036\) is a number of order one. When you need to distinguish it from one, you really need a precise calculation. For me, there's pretty much "no qualitative realm in between" \(1\) and \(1/137.036\). Numbers like \(0.01\) have to be allowed in Nature because we surely know that there are dimensionless ratios (like \(m_{\rm Planck}/m_{\rm proton}\)) that are vastly different from one and they have to come from somewhere. Even if SUSY or something else stabilizes the weak scale, it must still be explained why the scale – and the QCD scale (it's easier) – is so much lower than the Planck scale. The idea that everything is "really" of the same order is surely silly at the end.

OK, assuming that \(\Delta\gt 300\) has been established, Sundrum's claim that it disproves naturalness is logically equivalent to the claim that any 3-sigma deviation seen anywhere falsifies the null hypothesis, and therefore proves some new physics. Well, we know it isn't the case. We had a 4-sigma and 3-sigma diphoton excess. Despite the fact that the Pythagorean combination is exactly 5 (with the rounded numbers I chose), we know that it was a fluke.

Now, the question whether naturalness is true is probably (even) more fundamental than the question whether the diphoton bump came from a real particle. But the degree of certainty hiding in 3-sigma or \(p=1/300\)-probable propositions is exactly the same. If you (Dr Sundrum) think that the observation that some \(\Delta \gt 300\) disproves naturalness, then you're acting exactly as sloppily as if you consider any 3-sigma bump to be a discovery!

A physicist respecting that particle physics is a hard science simply shouldn't do so. Naturalness is alive and well. 3-sigma deviations such as the observation that \(\Delta \gt 300\) in some models simply do sometimes occur. We can't assume that they are impossible. And we consider naturalness to be the "basic story to discuss the values of parameters" because this story looks much more natural or "null" than known competitors. If and when formidable competitors were born, one could start to distinguish them and naturalness could lose the status of "the null hypothesis". But no such a convincing competitor exists now.

As David Gross likes to say, naturalness isn't a real law of physics. It's a strategy. Some people have used this strategy too fanatically. They wanted to think that even \(\Delta \gt 10\) was too unnatural and picked other theories. But this is logically equivalent to the decision to follow research directions according to 1.5-sigma deviations. Whenever there's a 1.5-sigma bump somewhere, such a physicist would immediately focus on it. That's simply not how a solid physicist behaves in the case of specific channels at the LHC. So it's not how he should behave when it comes to fundamental conceptual questions such as naturalness, either.

Naturalness is almost certainly a valid principle but when you overuse it – in a way that is equivalent to the assumption that more than 3-sigma or 2-sigma or even 1.5-sigma deviations can't exist – you're pretty much guaranteed to be sometimes proven wrong by Mother Nature because statistics happens. If you look carefully, you should be able to find better guides for your research than 1.5-sigma bumps in the data. And the question whether \(\Delta\lt 10\) or \(\Delta \gt 10\) in some model is on par with any other 1.5-sigma bump. You just shouldn't care about those much.

While I share much of the spirit of Nima's comments, they're questionable at the same moment, too. For example, Nima said:
It’s striking that we’ve thought about these things for 30 years and we have not made one correct prediction that they have seen.
Who hasn't made the correct predictions? Surely many people have talked about the observation of "just the Standard Model" (Higgs and nothing else that is new) at the LHC. I've surely talked about it for decades. And we had discussions about it with virtually all high-energy physicists who have ever discussed anything about broader physics. I think that the Standard Model was by far the single most likely particular theory expected from the first serious LHC run at energies close to \(\sqrt{s}=14\TeV\).

The actual adjective describing this scenario wasn't "considered unlikely" but rather "considered uninteresting". It was simply not too interesting for theorists to spend hours on the possibility that the LHC sees just the Standard Model. And it isn't too interesting for theorists now, either. A hard-working theorist would hardly write a paper in 2010 about the "Standard Model at \(\sqrt{s}=13\TeV\)". There's simply nothing new and interesting to say and no truly new calculation to be made. But the previous sentence – and the absence of such papers – doesn't mean that physicists have generally considered this possibility unlikely.

I am a big SUSY champion but even in April 2007, before the LHC was running, I wrote that the probability was 50% that the LHC would observe SUSY. Most of the remaining 50% is "just the Standard Model" because I considered – and I still consider – the discovery of all forms of new physics unrelated to SUSY before SUSY to be significantly less likely than SUSY.

So I consider the statement that "we haven't made any correct prediction" to be an ill-defined piece of sloppy social science. The truth value depends on who is allowed to be counted as "we" and how one quantifies these voters' support for various possible answers. When it came to a sensible ensemble of particle physicists who carefully talk about probabilities rather than hype or composition of their papers and who are not (de facto or de iure) obliged to paint things in more optimistic terms, I am confident that the average probability they would quote for "just the Standard Model at the LHC" was comparable to 50%.

I want to discuss one more quote from Nima:
There are many theorists, myself included, who feel that we’re in a totally unique time, where the questions on the table are the really huge, structural ones, not the details of the next particle. We’re very lucky to get to live in a period like this — even if there may not be major, verified progress in our lifetimes.
You know, Nima is great at many things including these two:
  1. A stellar physicist
  2. A very good motivational speaker
I believe that the quote above proves the second item, not the first. Are we in a totally unique time? When did the unique time start? Nima has already been talking about it for quite some time. ;-) If the young would-be physicists believe it, the quote may surely increase their excitement. But aren't they being fooled? Is there some actual evidence or a rational reason to think that "physics is going through a totally unique time when structural paradigm shifts are around the corner"?

I think that the opposite statement is much closer to being a rational conclusion from the available evidence. Take the statements by Michelson or Kelvin from over 100 years ago. Physics is almost over. All that remains is to measure the values of the parameters with better precision.

Well, I think that the evidence is rather strong that this statement would actually be extremely appropriate for the present situation of physics and the near future! This surely looks like a period in which no truly deep paradigm shifts are taking place and none are expected in coming months or years. I think that Nima's revolutionary proposition reflects his being a motivational speaker rather than a top physicist impartially evaluating the evidence.

We should really divide the question of paradigm shifts into those that demand experimental validation and those that may proceed without it. These two "realms of physics" have become increasingly disconnected – and this evolution has always been unavoidable. And it's primarily the latter, the purely theoretical branch, where truly new things are happening. When you only care about theories that explain the actual doable experiments, the situation is well-described by the Michelson-Kelvin quote about the physics of decimals (assuming that Michelson or Kelvin became big defenders of the Standard Model).

Sometimes motivational speeches are great and needed. But at the end, I think that physicists should also be grownups who actually know what they're doing even if they're shaping their opinions about big conceptual questions.

For years, Nima liked to talk about the unique situation and the crossroad where it's being decided whether physics will pursue the path of naturalness or the completely different path of the anthropic reasoning. Well, maybe and Nima sounds persuasive but I have always had problems with these seemingly oversimplified assertions.

First, the natural-vs-anthropic split is just a description of two extreme philosophies that may be well-defined for the theories we already know but may become inadequate for the discussion of the future theories in physics. In particular, it seems very plausible to me that the types of physics theories in the future will not "clearly fall" into one of the camps (natural or anthropic). They may be hybrids, they may be completely different, they may show that the two roads discussed by Nima don't actually contradict each other. At any rate, I am convinced that when new frameworks to discuss the vacuum selection and other things become persuasive, they will be rather quantitative again: they will have nothing to do with the vagueness and arbitrariness of the anthropic principle as we know it today. Also, they will be careful in the sense that they will avoid some of the naive strategies by the "extreme fans of naturalness" who think that 2-sigma deviations or \(\Delta \gt 20\) are too big and can't occur. Future theories of physics will be theories studied by grownups – physicists who will avoid the naiveté and vagueness of both the extreme naturalness cultists as well as the anthropic metaphysical babblers.

Even if some new physics were – or had been – discovered at the LHC, including supersymmetry (not sure about the extra dimensions, those would be really deep), I would still tend to think that the paradigm shift in physics would probably be somewhat less deep than the discovery of quantum mechanics 90 years ago.

So while I think that it's silly to talk about some collapse of particle or fundamental physics and similar things, I also think that the talk about the exceptionally exciting situation in physics etc. has become silly. It's surely not my obsession to be a golden boy in the middle. But I am in the middle when it comes to the question whether the future years in physics are going to be interesting. We don't know, and the answer is probably gonna be a lukewarm one, too, although neither colder nor hotter answers can be excluded. And I am actually confident that the silent majority of physicists agrees that contemporary physics is, and fundamental physics in the foreseeable future will be, medium-interesting. ;-)

by Luboš Motl (noreply@blogger.com) at August 10, 2016 06:49 PM

Symmetrybreaking - Fermilab/SLAC

Dark matter hopes dwindle with X-ray signal

A previously detected, anomalously large X-ray signal is absent in new Hitomi satellite data, setting tighter limits for a dark matter interpretation.  

In the final data sent by the Hitomi spacecraft, a surprisingly large X-ray signal previously seen emanating from the Perseus galaxy cluster did not appear. This casts a shadow over previous speculation that the anomalously bright signal might have come from dark matter. 

“We would have been able to see this signal much clearer with Hitomi than with other satellites,” says Norbert Werner from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory.

“However, there is no unidentified X-ray line at the high flux level found in earlier studies.”

Werner and his colleagues from the Hitomi collaboration report their findings in a paper submitted to The Astrophysical Journal Letters.      

The mysterious signal was first discovered with lower flux in 2014 when researchers looked at the superposition of X-ray emissions from 73 galaxy clusters recorded with the European XMM-Newton satellite. These stacked data increase the sensitivity to signals that are too weak to be detected in individual clusters.   
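To see why stacking helps, here is a toy sketch in Python (made-up numbers and a deliberately crude significance estimate of my own, not the actual XMM-Newton analysis): averaging many Poisson-noisy cluster spectra beats the noise down roughly as the square root of the number of clusters, so a weak emission line can stand out in the stack even though it is invisible in any single spectrum.

import numpy as np

rng = np.random.default_rng(0)
energies = np.linspace(3.0, 4.0, 200)                        # keV, hypothetical binning
line = 0.2 * np.exp(-0.5 * ((energies - 3.5) / 0.02) ** 2)   # a weak emission line
continuum = 10.0 * energies ** -1.5                          # smooth background model

def mock_spectrum():
    """One Poisson-noisy cluster spectrum: continuum plus the weak line."""
    return rng.poisson(continuum + line).astype(float)

n_clusters = 73
single = mock_spectrum()
stack = sum(mock_spectrum() for _ in range(n_clusters)) / n_clusters

window = slice(94, 106)                                      # bins around the line energy
b = continuum[window].sum()                                  # expected background in the window
for name, spec, var_scale in [("single", single, 1.0), ("stacked", stack, 1.0 / n_clusters)]:
    excess = spec[window].sum() - b
    print(f"{name:>7}: excess / noise ~ {excess / np.sqrt(b * var_scale):.1f}")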

The scientists found an unexplained X-ray line at an energy of about 3500 electronvolts (3.5 keV), says Esra Bulbul from the MIT Kavli Institute for Astrophysics and Space Research, the lead author of the 2014 study and a co-author of the Hitomi paper.

“After careful analysis we concluded that it wasn’t caused by the instrument itself and that it was unlikely to be caused by any known astrophysical processes,” she says. “So we asked ourselves ‘What else could its origin be?’”

One interpretation of the so-called 3.5-keV line was that it could be caused by hypothetical dark matter particles called sterile neutrinos decaying in space.

Yet, there was something bizarre about the 3.5-keV line. Bulbul and her colleagues found it again in data taken with NASA’s Chandra X-ray Observatory from just the Perseus cluster. But in the Chandra data, the individual signal was inexplicably strong—about 30 times stronger than it should have been according to the stacked data.

Adding to the controversy was the fact that some groups saw the X-ray line in Perseus and other objects using XMM-Newton, Chandra and the Japanese Suzaku satellite, while others using the same instruments reported no detection.

Astrophysicists highly anticipated the launch of the Hitomi satellite, which carried an instrument—the soft X-ray spectrometer (SXS)—with a spectral resolution 20 times better than the ones aboard previous missions. The SXS would be able to record much sharper signals that would be easier to identify.

Hitomi recorded the X-ray spectrum of the Perseus galaxy cluster with the protective filter still attached to its soft X-ray spectrometer.

Hitomi collaboration

The new data were collected during Hitomi’s first month in space, just before the satellite was lost due to a series of malfunctions. Unfortunately during that time, the SXS was still covered with a protective filter, which absorbed most of the X-ray photons with energies below 5 keV.

“This limited our ability to take enough data of the 3.5-keV line,” Werner says. “The signal might very well still exist at the much lower flux level observed in the stacked data.”

Hitomi’s final data at least make it clear that, if the 3.5-keV line exists, its X-ray signal is not anomalously strong. A signal 30 times stronger than expected would have made it through the filter.

The Hitomi results rule out that the anomalously bright signal in the Perseus cluster was a telltale sign of decaying dark matter particles. But they leave unanswered the question of what exactly scientists detected in the past.

“It’s really unfortunate that we lost Hitomi,” Bulbul says. “We’ll continue our observations with the other X-ray satellites, but it looks like we won’t be able to solve this issue until another mission goes up.”

Chances are this might happen in a few years. According to a recent report, the Japan Aerospace Exploration Agency and NASA have begun talks about launching a replacement satellite.

by Manuel Gnida at August 10, 2016 03:36 PM

August 09, 2016

Symmetrybreaking - Fermilab/SLAC

The contents of the universe

How do scientists know what percentages of the universe are made up of dark matter and dark energy?

Cosmologist Risa Wechsler of the Kavli Institute for Particle Astrophysics and Cosmology explains.

Video of d6S4PyJ01IA

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Risa Wechsler's answers to readers' questions about dark matter and dark energy on Twitter here.

by Amanda Solliday at August 09, 2016 05:23 PM

Symmetrybreaking - Fermilab/SLAC

Sterile neutrinos in trouble

The IceCube experiment reports ruling out to a high degree of certainty the existence of a theoretical low-mass sterile neutrino.

This week scientists on the world’s largest neutrino experiment, IceCube, dealt a heavy blow to theories predicting a new type of particle—and left a mystery behind.

More than two decades ago, the LSND neutrino experiment at Los Alamos National Laboratory produced a result that challenged what scientists knew about neutrinos. The most popular theory is that the LSND anomaly was caused by the hidden influence of a new type of particle, a sterile neutrino.

A sterile neutrino would not interact with other matter through any of the forces of the Standard Model of particle physics, save perhaps gravity.

With their new result, IceCube scientists are fairly certain the most popular explanation for the anomaly is incorrect. In a paper published in Physical Review Letters, they report that after searching for the predicted form of the stealthy particle, they excluded its existence at approximately the 99 percent confidence level.

“The sterile neutrino would’ve been a profound discovery,” says physicist Ben Jones of the University of Texas, Arlington, who worked on the IceCube analysis. “It would really have been the first particle discovered beyond the Standard Model of particle physics.”

It’s surprising that such a result would come from IceCube. The detector, buried in about a cubic kilometer of Antarctic ice, was constructed to study very different neutrinos: high-energy ones propelled toward Earth by violent events in space. But by an accident of nature, IceCube happens to be in just the right position to study low-mass sterile neutrinos as well.

There are three known types of neutrinos: electron neutrinos, muon neutrinos and tau neutrinos. Scientists have caught all three types, but they have never built a detector that could catch a sterile neutrino.

Neutrinos are shape-shifters; as they travel, they change from one type to another. The likelihood that a neutrino has shifted to a new type at any given point depends on its mass and the distance it has traveled.

It also depends on what the neutrino has traveled through. Neutrinos very rarely interact with other matter, so they can travel through the entire Earth without hitting any obstacles. But they are affected by all of the electrons in the Earth’s atoms along the way.

“The Earth acts like an amplifier,” says physicist Carlos Argüelles of MIT, who worked on the IceCube analysis.

Traveling through that density of electrons raises the likelihood that a neutrino will change into the predicted sterile neutrino quite significantly—to almost 100 percent, Argüelles says. At a specific energy, the scientists on IceCube should have noticed a mass disappearance of neutrinos as they shifted identities into particles they could not see.

“The position of the dip [in the number of neutrinos found] depends on the mass of sterile neutrinos,” says theorist Joachim Kopp of the Johannes Gutenberg University Mainz. “If they were heavier, the dip would move to a higher energy, a disadvantage for IceCube. At a lower mass, it would move to a lower energy, at which IceCube cannot see neutrinos anymore. IceCube happens to be in a sweet spot.”
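For orientation, the energy dependence behind this dip is easy to sketch with the vacuum two-flavour oscillation formula (a Python sketch with illustrative parameter values of my own; the Earth-matter enhancement described above is deliberately left out):

import numpy as np

# A vacuum two-flavour sketch:
#   P(nu_mu -> nu_mu) = 1 - sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
def survival_probability(E_GeV, dm2_eV2=1.0, sin2_2theta=0.1, L_km=12742.0):
    """Muon-neutrino survival probability over a baseline L_km (Earth diameter by default)."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * np.sin(phase) ** 2

# The first vacuum oscillation maximum (the "dip" in the survival rate) sits where the
# phase reaches pi/2, so its energy scales linearly with dm2, as in the quote above.
E_dip = 1.27 * 1.0 * 12742.0 / (np.pi / 2.0)
print(f"vacuum dip near E ~ {E_dip / 1e3:.0f} TeV for dm2 = 1 eV^2")

for E in (2e3, 5e3, 1e4, 2e4, 5e4):          # GeV
    print(f"E = {E:7.0f} GeV   P(survival) = {survival_probability(E):.3f}")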

And yet, the scientists found no such dip.

This doesn’t mean they can completely rule out the existence of low-mass sterile neutrinos, Jones says. “But it’s also true to say that the likelihood that a sterile neutrino exists is now the lowest it has ever been before.”

The search for the sterile neutrino continues. Kopp says the planned Short Baseline Neutrino program at Fermilab will be perfectly calibrated to investigate the remaining mass region most likely to hold low-mass sterile neutrinos, if they do exist.

The IceCube analysis was based on data taken over the course of a year starting in 2011. The IceCube experiment has since collected five times as much data, and scientists are already working to update their search.

In the end, if these experiments throw cold water on the low-mass sterile neutrino theory, they will still have another question to answer: If sterile neutrinos did not cause the anomaly at Los Alamos, what did?

by Kathryn Jepsen at August 09, 2016 02:38 PM

Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at the rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next days.

So who's leading this race?  What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric.  For this contest, I will consider two different metrics: the King Beyond the Wall that counts the number of papers on the topic, and the Iron Throne that counts how many times these papers have been cited.

In the first category,  the contest is much more fierce than one might expect: it takes 8 papers to be the leader, and 7 papers may not be enough to even get on the podium!  Among the 3 authors with 7 papers the final classification is decided not by trial by combat but by the citation count.  The result is (drums):

Citations, tja...   The social dynamics of our community encourage referencing all previous work on the topic rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Update 09.08.2016. Now that the 750 GeV excess is officially dead, one can give the final classification. The race for the iron throne was tight till the end, but there could only be one winner:

As you can see, in this race the long-term strategy and persistence proved to be more important than pulling off a few early victories.  In the other category there have also been changes in the final stretch: the winner added 3 papers in the period between the unofficial and official announcement of the demise of the 750 GeV resonance. The final standings are:


Congratulations to all the winners.  To all the rest, I wish you more luck and persistence in the next edition, provided it takes place.


by Jester (noreply@blogger.com) at August 09, 2016 12:56 PM

August 06, 2016

John Baez - Azimuth

Topological Crystals (Part 3)


k4_crystal

Last time I explained how to create the ‘maximal abelian cover’ of a connected graph. Now I’ll say more about a systematic procedure for embedding this into a vector space. That will give us a topological crystal, like the one above.

Some remarkably symmetrical patterns arise this way! For example, starting from this graph:

we get this:

Nature uses this pattern for crystals of graphene.

Starting from this graph:

we get this:

Nature uses this for crystals of diamond! Since the construction depends only on the topology of the graph we start with, we call this embedded copy of its maximal abelian cover a topological crystal.

Today I’ll remind you how this construction works. I’ll also outline a proof that it gives an embedding of the maximal abelian cover if and only if the graph has no bridges: that is, edges that disconnect the graph when removed. I’ll skip all the hard steps of the proof, but they can be found here:

• John Baez, Topological crystals.

The homology of graphs

I’ll start with some standard stuff that’s good to know. Let X be a graph. Remember from last time that we’re working in a setup where every edge e goes from a vertex called its source s(e) to a vertex called its target t(e). We write e: x \to y to indicate that e is going from x to y. You can think of the edge as having an arrow on it, and if you turn the arrow around you get the inverse edge, e^{-1}: y \to x. Also, e^{-1} \ne e.

The group of integral 0-chains on X, C_0(X,\mathbb{Z}), is the free abelian group on the set of vertices of X. The group of integral 1-chains on X, C_1(X,\mathbb{Z}), is the quotient of the free abelian group on the set of edges of X by relations e^{-1} = -e for every edge e. The boundary map is the homomorphism

\partial : C_1(X,\mathbb{Z}) \to C_0(X,\mathbb{Z})

such that

\partial e = t(e) - s(e)

for each edge e, and

Z_1(X,\mathbb{Z}) =  \ker \partial

is the group of integral 1-cycles on X.

Remember, a path in a graph is a sequence of edges, the target of each one being the source of the next. Any path \gamma = e_1 \cdots e_n in X determines an integral 1-chain:

c_\gamma = e_1 + \cdots + e_n

For any path \gamma we have

c_{\gamma^{-1}} = -c_{\gamma},

and if \gamma and \delta are composable then

c_{\gamma \delta} = c_\gamma + c_\delta

Last time I explained what it means for two paths to be ‘homologous’. Here’s the quick way to say it. There’s a groupoid called the fundamental groupoid of X, where the objects are the vertices of X and the morphisms are freely generated by the edges except for relations saying that the inverse of e: x \to y really is e^{-1}: y \to x. We can abelianize the fundamental groupoid by imposing relations saying that \gamma \delta = \delta \gamma whenever this equation makes sense. Each path \gamma : x \to y gives a morphism which I’ll call [[\gamma]] : x \to y in the abelianized fundamental groupoid. We say two paths \gamma, \gamma' : x \to y are homologous if [[\gamma]] = [[\gamma']].

Here’s a nice thing:

Lemma A. Let X be a graph. Two paths \gamma, \delta : x \to y in X are homologous if and only if they give the same 1-chain: c_\gamma = c_\delta.

Proof. See the paper. You could say they give ‘homologous’ 1-chains, too, but for graphs that’s the same as being equal.   █

We define vector spaces of 0-chains and 1-chains by

C_0(X,\mathbb{R}) = C_0(X,\mathbb{Z}) \otimes \mathbb{R}, \qquad C_1(X,\mathbb{R}) = C_1(X,\mathbb{Z}) \otimes \mathbb{R},

respectively. We extend the boundary map to a linear map

\partial :  C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

We let Z_1(X,\mathbb{R}) be the kernel of this linear map, or equivalently,

Z_1(X,\mathbb{R}) = Z_1(X,\mathbb{Z}) \otimes \mathbb{R}  ,

and we call elements of this vector space 1-cycles. Since Z_1(X,\mathbb{Z}) is a free abelian group, it forms a lattice in the space of 1-cycles. Any edge of X can be seen as a 1-chain, and there is a unique inner product on C_1(X,\mathbb{R}) such that edges form an orthonormal basis (with each edge e^{-1} counting as the negative of e.) There is thus an orthogonal projection

\pi : C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R}) .

This is the key to building topological crystals!
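As a concrete illustration, here is a minimal numerical sketch in Python (my own conventions and variable names, not code from the paper): build the boundary matrix of a small bridgeless graph, take an orthonormal basis of its null space as the cycle space Z_1(X,\mathbb{R}), and apply the orthogonal projection \pi to a few 1-chains. The example graph is the complete graph K_4.

import numpy as np
from itertools import combinations

# Boundary matrix of K_4, an orthonormal basis of its null space (the cycle space),
# and the orthogonal projection pi applied to a few 1-chains.
vertices = range(4)
edges = list(combinations(vertices, 2))      # one chosen orientation per edge

# Boundary map  d : C_1 -> C_0  with  d(e) = t(e) - s(e), written in the edge basis.
D = np.zeros((len(vertices), len(edges)))
for j, (s, t) in enumerate(edges):
    D[s, j] = -1.0
    D[t, j] = +1.0

# Z_1(X,R) = ker d ; the rows of Vt past the rank give an orthonormal basis of it.
_, sing, Vt = np.linalg.svd(D)
rank = int(np.sum(sing > 1e-10))
cycle_basis = Vt[rank:]                      # dimension |E| - |V| + 1 = 3 for K_4

def project(chain):
    """Orthogonal projection pi of a 1-chain (a vector in the edge basis) onto Z_1(X,R)."""
    return cycle_basis.T @ (cycle_basis @ chain)

# Project the 1-chains of the single-edge paths out of the basepoint 0; these are the
# points where the neighbouring 'atoms' of the next section land.  K_4 has no bridges,
# so each projection is nonzero (compare Theorem A below).
for j, (s, t) in enumerate(edges):
    if s == 0:
        chain = np.zeros(len(edges))
        chain[j] = 1.0
        print(f"atom over the path 0 -> {t}:", np.round(project(chain), 3))

The SVD here is just a convenient way to obtain an orthonormal basis of \ker \partial; any basis of the cycle space followed by Gram-Schmidt would do the same job.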

The embedding of atoms

We now come to the main construction, first introduced by Kotani and Sunada. To build a topological crystal, we start with a connected graph X with a chosen basepoint x_0. We define an atom to be a homology class of paths starting at the basepoint, like

[[\alpha]] : x_0 \to x

Last time I showed that these atoms are the vertices of the maximal abelian cover of X. Now let’s embed these atoms in a vector space!

Definition. Let X be a connected graph with a chosen basepoint. Let A be its set of atoms. Define the map

i : A \to Z_1(X,\mathbb{R})

by

i([[ \alpha ]]) = \pi(c_\alpha) .

That i is well-defined follows from Lemma A. The interesting part is this:

Theorem A. The following are equivalent:

(1) The graph X has no bridges.

(2) The map i : A \to Z_1(X,\mathbb{R}) is one-to-one.

Proof. The map i is one-to-one if and only if for any atoms [[ \alpha ]] and [[ \beta ]], i([[ \alpha ]])  = i([[ \beta ]]) implies [[ \alpha ]]= [[ \beta ]]. Note that \gamma = \beta^{-1} \alpha is a path in X with c_\gamma = c_{\alpha} - c_\beta, so

\pi(c_\gamma) = \pi(c_{\alpha} - c_\beta) =   i([[ \alpha ]]) - i([[ \beta ]])

Since \pi(c_\gamma) vanishes if and only if c_\gamma is orthogonal to every 1-cycle, we have

c_{\gamma} \textrm{ is orthogonal to every 1-cycle}   \; \iff \;   i([[ \alpha ]])  = i([[ \beta ]])

On the other hand, Lemma A says

c_\gamma = 0 \; \iff \; [[ \alpha ]]= [[ \beta ]].

Thus, to prove (1)\iff(2), it suffices to show that X has no bridges if and only if every 1-chain c_\gamma orthogonal to every 1-cycle has c_\gamma =0. This is Lemma D below.   █

The following lemmas are the key to the theorem above — and also a deeper one saying that if X has no bridges, we can extend i : A \to Z_1(X,\mathbb{R}) to an embedding of the whole maximal abelian cover of X.

For now, we just need to show that any nonzero 1-chain coming from a path in a bridgeless graph has nonzero inner product with some 1-cycle. The following lemmas, inspired by an idea of Ilya Bogdanov, yield an algorithm for actually constructing such a 1-cycle. This 1-cycle also has other desirable properties, which will come in handy later.

To state these, let a simple path be one in which each vertex appears at most once. Let a simple loop be a loop \gamma : x \to x in which each vertex except x appears at most once, while x appears exactly twice, as the starting point and ending point. Let the support of a 1-chain c, denoted \mathrm{supp}(c), be the set of edges e such that \langle c, e\rangle> 0. This excludes edges with \langle c, e \rangle= 0 , but also those with \langle c , e \rangle < 0, which are inverses of edges in the support. Note that

c = \sum_{e \in \mathrm{supp}(c)} \langle c, e \rangle e  .

Thus, \mathrm{supp}(c) is the smallest set of edges such that c can be written as a positive linear combination of edges in this set.

Okay, here are the lemmas!

Lemma B. Let X be any graph and let c be an integral 1-cycle on X. Then for some n we can write

c = c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \sigma_i are simple loops with \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c).

Proof. See the paper. The proof is an algorithm that builds a simple loop \sigma_1 with \mathrm{supp}(c_{\sigma_1}) \subseteq \mathrm{supp}(c). We subtract this from c, and if the result isn’t zero we repeat the algorithm, continuing to subtract off 1-cycles c_{\sigma_i} until there’s nothing left.   █
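For concreteness, here is a hedged Python sketch of the loop-peeling strategy just described (my own data representation and function name, not the paper's code): encode an integral 1-cycle as positive coefficients on directed edges, walk forward along its support until a vertex repeats to peel off a simple loop, subtract it, and repeat until nothing is left.

def simple_loop_decomposition(chain):
    """chain: a dict sending a directed edge (u, v) to a positive integer coefficient,
    using the orientation that makes every coefficient positive, with zero boundary
    (total in-coefficient equals total out-coefficient at every vertex).
    Returns a list of simple loops (lists of directed edges) whose 1-chains add up
    to the input, each supported inside supp(chain), as in Lemma B."""
    c = dict(chain)
    loops = []
    while c:
        # outgoing support edges at each vertex of the current remainder
        out = {}
        for (u, v) in c:
            out.setdefault(u, []).append((u, v))
        start = next(iter(c))[0]
        x, path, first_visit = start, [], {start: 0}
        while True:
            e = out[x][0]              # exists: zero boundary keeps in- and out-support balanced
            path.append(e)
            x = e[1]
            if x in first_visit:       # a vertex repeats, so the tail of the walk is a simple loop
                loop = path[first_visit[x]:]
                break
            first_visit[x] = len(path)
        loops.append(loop)
        for e in loop:                 # subtract the loop's 1-chain from the remainder
            c[e] -= 1
            if not c[e]:
                del c[e]
    return loops

# Example: the sum of two triangles sharing the vertex 0 splits back into them.
cycle = {(0, 1): 1, (1, 2): 1, (2, 0): 1, (0, 3): 1, (3, 4): 1, (4, 0): 1}
print(simple_loop_decomposition(cycle))

The walk can never get stuck because the zero-boundary condition guarantees that every vertex it reaches has an outgoing edge in the remaining support.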

Lemma C. Let \gamma: x \to y be a path in a graph X. Then for some n \ge 0 we can write

c_\gamma = c_\delta + c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \delta: x \to y is a simple path and \sigma_i are simple loops with \mathrm{supp}(c_\delta), \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c_\gamma).

Proof. This relies on the previous lemma, and the proof is similar — but when we can’t subtract off any more c_{\sigma_i}’s we show what’s left is c_\delta for a simple path \delta: x \to y.   █

Lemma D. Let X be a graph. Then the following are equivalent:

(1) X has no bridges.

(2) For any path \gamma in X, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0.

Proof. It’s easy to show a bridge e gives a nonzero 1-chain c_e that’s orthogonal to all 1-cycles, so the hard part is showing that for a bridgeless graph, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0. The idea is to start with a path for which c_\gamma \ne 0. We hit this path with Lemma C, which lets us replace \gamma by a simple path \delta. The point is that a simple path is a lot easier to deal with than a general path: a general path could wind around crazily, passing over every edge of our graph multiple times.

Then, assuming X has no bridges, we use Ilya Bogdanov’s idea to build a 1-cycle that’s not orthogonal to c_\delta. The basic idea is to take the path \delta : x \to y and write it out as \delta = e_1 \cdots e_n. Since the last edge e_n is not a bridge, there must be a path from y back to x that does not use the edge e_n or its inverse. Combining this path with \delta we can construct a loop, which gives a cycle having nonzero inner product with c_\delta and thus with c_\gamma.

I’m deliberately glossing over some difficulties that can arise, so see the paper for details!   █

Embedding the whole crystal

Okay: so far, we’ve taken a connected bridgeless graph X and embedded its atoms into the space of 1-cycles via a map

i : A \to Z_1(X,\mathbb{R})  .

These atoms are the vertices of the maximal abelian cover \overline{X}. Now we’ll extend i to an embedding of the whole graph \overline{X} — or to be precise, its geometric realization |\overline{X}|. Remember, for us a graph is an abstract combinatorial gadget; its geometric realization is a topological space where the edges become closed intervals.

The idea is that just as i maps each atom to a point in the vector space Z_1(X,\mathbb{R}), j maps each edge of |\overline{X}| to a straight line segment between such points. These line segments serve as the ‘bonds’ of a topological crystal. The only challenge is to show that these bonds do not cross each other.

Theorem B. If X is a connected graph with basepoint, the map i : A \to Z_1(X,\mathbb{R}) extends to a continuous map

j : |\overline{X}| \to Z_1(X,\mathbb{R})

sending each edge of |\overline{X}| to a straight line segment in Z_1(X,\mathbb{R}). If X has no bridges, then j is one-to-one.

Proof. The first part is easy; the second part takes real work! The problem is to show the edges don’t cross. Greg Egan and I couldn’t do it using just Lemma D above. However, there’s a nice argument that goes back and uses Lemma C — read the paper for details.

As usual, history is different than what you read in math papers: David Speyer gave us a nice proof of Lemma D, and that was good enough to prove that atoms are mapped into the space of 1-cycles in a one-to-one way, but we only came up with Lemma C after weeks of struggling to prove the edges don’t cross.   █

Connections to tropical geometry

Tropical geometry sets up a nice analogy between Riemann surfaces and graphs. The Abel–Jacobi map embeds any Riemann surface \Sigma in its Jacobian, which is the torus H_1(\Sigma,\mathbb{R})/H_1(\Sigma,\mathbb{Z}). We can similarly define the Jacobian of a graph X to be H_1(X,\mathbb{R})/H_1(X,\mathbb{Z}). Theorem B yields a way to embed a graph, or more precisely its geometric realization |X|, into its Jacobian. This is the analogue, for graphs, of the Abel–Jacobi map.

After I put this paper on the arXiv, I got an email from Matt Baker saying that he had already proved Theorem A — or to be precise, something that’s clearly equivalent. It’s Theorem 1.8 here:

• Matthew Baker and Serguei Norine, Riemann–Roch and Abel–Jacobi theory on a finite graph.

This says that the vertices of a bridgeless graph X are embedded in its Jacobian by means of the graph-theoretic analogue of the Abel–Jacobi map.

What I really want to know is whether someone’s written up a proof that this map embeds the whole graph, not just its vertices, into its Jacobian in a one-to-one way. That would imply Theorem B. For more on this, try my conversation with David Speyer.

Anyway, there’s a nice connection between topological crystallography and tropical geometry, and not enough communication between the two communities. Once I figure out what the tropical folks have proved, I will revise my paper to take that into account.

Next time I’ll talk about more examples of topological crystals!


by John Baez at August 06, 2016 10:12 AM

Marco Frasca - The Gauge Connection

In the aftermath of ICHEP 2016

ICHEP2016

ATLAS and CMS nuked our illusions about that bump. More than 500 papers were written on it and some of them went through Physical Review Letters. Now we are contemplating the ruins of that house of cards. This says a lot about the situation in hep these days. It should be emphasized that people at CERN warned that those data were not enough to draw a conclusion, and if they fix the threshold at 5\sigma there must be a reason. But careless acts are common today if you are a theorist and no input from experiment has arrived for a long time.

It should be said that the possibility that the LHC confirms the Standard Model and nothing else is a real one. We should hope that a larger accelerator can be built after the LHC is decommissioned, as there is a long way to go to the Planck energy, which we do not yet know how to probe.

What remains? I think there is a lot yet. My analysis of the Higgs sector is still there to be checked, as I will explain in a moment, but this is just another way to treat the equations of the Standard Model, not to go beyond it. Besides, by the end of the year they will reach 30\ fb^{-1}, almost tripling the current integrated luminosity, and something interesting could still pop out. There are a lot of years of results ahead and there is no need to despair. Just wait. This is one of the most important activities of a theorist. Impatience does not work in physics, and least of all in hep.

About the signal strength, things still seem too far from being settled. I hope to see better figures by the end of the year. ATLAS is off the mark, going well beyond unity for WW, as happened before. CMS claimed 0.3\pm 0.5 for WW decay, worsening their excellent measurement of 0.72^{+0.20}_{-0.18} reached in Run I. CMS agrees fairly well with my computations, but I should warn that the error bar is still too large, and now it is even worse. Recall that the signal strength is obtained as the ratio of the measured cross section to the one obtained from the Standard Model. The fact that it is smaller does not necessarily mean that we are beyond the Standard Model, but only that we are solving the Higgs sector in a different way than standard perturbation theory. This solution entails higher excitations of the Higgs field, but they are strongly suppressed and very difficult to observe at present. The only mark could be the signal strength of the observed Higgs particle. Finally, the ZZ channel is significantly less sensitive and its error bars are so large that one can still accommodate whatever one likes. The overproduction seen by ATLAS is just a fluctuation that will go away in the future.
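As a side note on how such numbers get averaged (a naive sketch of my own in Python, not the collaborations' combination machinery, which handles correlations and asymmetric errors properly), independent signal-strength values mu_i with errors sigma_i can be combined with inverse-variance weights:

import numpy as np

def combine(mus, errs):
    """Inverse-variance weighted average of independent measurements with symmetric errors."""
    mus, w = np.asarray(mus, dtype=float), 1.0 / np.asarray(errs, dtype=float) ** 2
    return float(np.sum(w * mus) / np.sum(w)), float(1.0 / np.sqrt(np.sum(w)))

mu, err = combine([0.72, 0.3], [0.19, 0.5])   # the two WW values quoted above, errors symmetrised
print(f"naive combination: mu = {mu:.2f} +/- {err:.2f}")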

The final sentence of this post is what we have heard over and over these days: the Standard Model rules.


Filed under: Particle Physics, Physics Tagged: 750 GeV, ATLAS, CERN, CMS, Higgs particle, ICHEP 2016, LHC

by mfrasca at August 06, 2016 09:31 AM

August 05, 2016

Matt Strassler - Of Particular Significance

The 2016 Data Kills The Two-Photon Bump

Results for the bump seen in December have been updated, and indeed, with the new 2016 data — four times as much as was obtained in 2015 — neither ATLAS nor CMS [the two general purpose detectors at the Large Hadron Collider] sees an excess where the bump appeared in 2015. Not even a hint, as we already learned inadvertently from CMS yesterday.

All indications so far are that the bump was a garden-variety statistical fluke, probably (my personal guess! there’s no evidence!) enhanced slightly by minor imperfections in the 2015 measurements. Should we be surprised? No. If you look back at the history of the 1970s and 1980s, or at the recent past, you’ll see that it’s quite common for hints — even strong hints — of new phenomena to disappear with more data. This is especially true for hints based on small amounts of data (and there were not many two photon events in the bump — just a couple of dozen).  There’s a reason why particle physicists have very high standards for statistical significance before they believe they’ve seen something real.  (Many other fields, notably medical research, have much lower standards.  Think about that for a while.)  History has useful lessons, if you’re willing to learn them.

Back in December 2011, a lot of physicists were persuaded that the data shown by ATLAS and CMS was convincing evidence that the Higgs particle had been discovered. It turned out the data was indeed showing the first hint of the Higgs. But their confidence in what the data was telling them at the time — what was called “firm evidence” by some — was dead wrong. I took a lot of flack for viewing that evidence as a 50-50 proposition (70-30 by March 2012, after more evidence was presented). Yet the December 2015 (March 2016) evidence for the bump at 750 GeV was comparable to what we had in December 2011 for the Higgs. Where’d it go?  Clearly such a level of evidence is not so firm as people claimed. I, at least, would not have been surprised if that original Higgs hint had vanished, just as I am not surprised now… though disappointed of course.

Was this all much ado about nothing? I don’t think so. There’s a reason to have fire drills, to run live-fire exercises, to test out emergency management procedures. A lot of new ideas, both in terms of new theories of nature and new approaches to making experimental measurements, were generated by thinking about this bump in the night. The hope for a quick 2016 discovery may be gone, but what we learned will stick around, and make us better at what we do.


Filed under: History of Science, LHC News Tagged: #LHC #Higgs #ATLAS #CMS #diphoton

by Matt Strassler at August 05, 2016 05:18 PM

Quantum Diaries

Several small steps but no giant leap forward

Great breakthroughs are rare in physics. Research is instead marked by countless small advances, and that is what will come out of the International Conference on High Energy Physics (ICHEP), which opened yesterday in Chicago. A giant step forward had been hoped for, but today the CMS and ATLAS experiments both reported that the promising effect observed at 750 GeV in the 2015 data has vanished. Admittedly, this kind of thing is not rare in particle physics, given the statistical nature of all the phenomena we observe.

CMS-2016-750GeV

On each plot, the vertical axis gives the number of events found containing a pair of photons whose combined mass appears on the horizontal axis in units of GeV. (Left) The black points represent the experimental data collected and analysed so far by the CMS Collaboration, namely 12.9 fb-1, to be compared with the 2.7 fb-1 available in 2015. The vertical bar attached to each point represents the experimental error margin. Taking these errors into account, the data are compatible with what is expected for the background, as indicated by the green curve. (Right) A new particle would have shown up as a peak like the red one if it had had the same properties as those hinted at in the 2015 data at 750 GeV. Clearly, the experimental data (black points) simply reproduce the background. One must therefore conclude that what had been glimpsed in the 2015 data was nothing but the result of a statistical fluctuation.

But in this case it was particularly compelling, because the same effect had been observed independently by two teams that work without consulting each other and use different analysis methods and detectors. This had triggered a great deal of activity and optimism: to date, 540 scientific papers have been written about this hypothetical particle that never existed, so profound would the implications of its existence have been.

But theorists were not the only ones to harbour such hope. Many experimentalists believed in it and bet on its existence, one of my colleagues going as far as staking a case of excellent wine on it.

While many physicists were hopeful or even convinced of the presence of a new particle, the two experiments nevertheless showed the utmost caution. In the absence of irrefutable proof of its presence, neither of the two collaborations, ATLAS and CMS, claimed anything. This is characteristic of scientists: one speaks of a discovery only when no doubt whatsoever remains.

But many physicists, myself included, set aside some of their reservations, not only because the chances that this effect would vanish seemed so slim, but also because it would have been a much bigger discovery than that of the Higgs boson, hence generating a great deal of enthusiasm. Everyone suspects that other particles must exist beyond those already known and described by the Standard Model of particle physics. But despite years spent searching for them, we still have nothing to chew on.

Since CERN's Large Hadron Collider (LHC) started operating at higher energy, having gone from 8 TeV to 13 TeV in 2015, the chances of a major discovery have been stronger than ever. Having more energy at hand gives access to territories never explored before.

So far, the 2015 data have not revealed the presence of any new particles or phenomena, but the amount of data collected was really limited. This year, on the contrary, the LHC is outdoing itself, having already produced five times more data than last year. The hope is that these data will eventually reveal the first signs of a revolutionary effect. Dozens of new analyses based on these recent data will be presented at the ICHEP conference until August 10, and I will come back to them shortly.

It took 48 years to discover the Higgs boson after it had been theoretically postulated, and in that case we knew what we wanted to find. Today, we do not even know what we are looking for. So it could still take a little more time. There is something else out there, everyone knows it. But when we will find it is another story.

Pauline Gagnon

To find out more about particle physics and what is at stake at the LHC, check out my book: « Qu’est-ce que le boson de Higgs mange en hiver et autres détails essentiels ».

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this distribution list.

by Pauline Gagnon at August 05, 2016 03:50 PM

Quantum Diaries

Many small steps but no giant leap

Giant leaps are rare in physics. Scientific research is rather a long process made of countless small steps and this is what will be presented throughout the week at the International Conference on High Energy Physics (ICHEP) in Chicago. While many hoped for a major breakthrough, today, both the CMS and ATLAS experiments reported that the promising effect observed at 750 GeV in last year’s data has vanished. True, this is not uncommon in particle physics given the statistical nature of all phenomena we observe.

CMS-2016-750GeV

On both plots, the vertical axis gives the number of events found containing a pair of photons with a combined mass given in units of GeV (horizontal axis). (Left) The black dots represent all data collected in 2016 and analysed so far by the CMS Collaboration, namely 12.9 fb-1, compared to the 2.7 fb-1 available in 2015. The vertical line associated with each data point represents the experimental error margin. Taking these errors into account, the data are compatible with what is expected from various backgrounds, as indicated by the green curve. (Right) A new particle would have manifested itself as a peak as big as the red one shown here if it had the same features as what had been seen in the 2015 data around 750 GeV. Clearly, the black data points pretty much reproduce the background. Hence, we must conclude that what was seen in the 2015 data was simply due to a statistical fluctuation.

What was particularly compelling in this case was that the very same effect had been observed by two independent teams, who worked without consulting each other and used different detectors and analysis methods. This triggered frantic activity and much expectation: to date, 540 scientific theory papers have been written on a hypothetical particle that never was, so profound the implications of the existence of such a new particle would be.

But theorists were not the only ones to be so hopeful. Many experimentalists had taken strong bets, one of my colleagues going as far as putting a case of very expensive wine on it.

If many physicists were hopeful or even convinced of the presence of a new particle, both experiments nevertheless had been very cautious. Without unambiguous signs of its presence, neither the ATLAS nor the CMS Collaborations had made claims. This is very typical of scientists: one should not claim anything until it has been established beyond any conceivable doubt.

But many theorists and experimentalists, including myself, threw some of our caution to the wind, not only because the chances it would vanish were so small but also because it would have been a much bigger discovery than that of the Higgs boson, generating much enthusiasm. As it stands, we all suspect that there are other particles out there, beyond the known ones, those described by the Standard Model of particle physics. But despite years spent looking for them, we still have nothing to chew on. In 2015, the Large Hadron Collider at CERN raised its operating energy, going from 8 TeV to the current 13 TeV, making the odds for a discovery stronger than ever since higher energy means access to territories never explored before.

So far, the 2015 data has not revealed any new particle or phenomena but the amount of data collected was really small. On the contrary, this year, the LHC is outperforming itself, having already delivered five times more data than last year. The hope is that these data will eventually reveal the first signs of something revolutionary. Dozens of new analyses based on the recent data will be presented until August 10 at the ICHEP conference and I’ll present some of them later on.

It took 48 years to discover the Higgs boson after it was first theoretically predicted, and in that case we knew what to expect. This time, we don’t even know what we are looking for. So it could still take a little longer. There is more to be found, we all know it. But when we will find it is another story.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN ».

To be notified of new blogs, follow me on Twitter : @GagnonPauline or sign up on this distribution list

 

by Pauline Gagnon at August 05, 2016 03:49 PM

Jester - Resonaances

After the hangover
The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as  to understand what went wrong.

In recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP.  Much like the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated, or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just due to a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who have some explaining to do.

Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was 1 in 10000. Combining available run-1 and run-2 diphoton data in ATLAS and CMS, the local significance was increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, thus they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike for the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in a combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.
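To make these numbers concrete, here is a back-of-the-envelope sketch in Python (a toy estimate of my own, not the collaborations' statistical treatment); the two-sided convention is used, which is what reproduces the quoted 1-in-10000 and 1-in-100000 figures, and the number of effective search windows is an invented illustrative value, not an ATLAS or CMS number.

from scipy.stats import norm

def local_p(sigma):
    return 2.0 * norm.sf(sigma)                  # two-sided tail probability

def naive_global_p(sigma, n_eff):
    """Chance of an equally large fluke anywhere, assuming n_eff independent
    mass windows (n_eff is an illustrative assumption)."""
    return 1.0 - (1.0 - local_p(sigma)) ** n_eff

print(f"3.9 sigma local: p ~ 1 in {1.0 / local_p(3.9):,.0f}")
print(f"4.4 sigma local: p ~ 1 in {1.0 / local_p(4.4):,.0f}")
p = naive_global_p(3.9, n_eff=40)
print(f"with ~40 assumed windows: global p ~ {p:.3f} (roughly {norm.isf(p):.1f} sigma)")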

Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward.  Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism.  But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.
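For readers wondering what is actually plotted in such a spectrum, the quantity is just the invariant mass of the photon pair; a generic kinematics sketch in Python follows, with toy numbers of my own rather than any real ATLAS or CMS event.

import math

# For massless photons,  m^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2)).
def diphoton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of a photon pair, in the same units as the transverse momenta."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# Two central photons of 375 GeV transverse momentum, back to back in phi:
print(f"m_gammagamma ~ {diphoton_mass(375.0, 0.0, 0.0, 375.0, 0.0, math.pi):.0f} GeV")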

Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.

So we got a huge signal of a new particle in a clean channel with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone but I don't think that the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable one had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.

There remains a question whether we really needed 500 papers...   Well, of course not: many of  them fill an important gap.  Yet many are an interesting read, and I personally learned a lot of exciting physics from them.  Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics.  On a more sociological side, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means to evaluate the research impact.  But that is a wider issue which I don't know how to address at the moment.

Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data.  Although the rumor is that there is no new exciting  anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.

In the comments section you're welcome to lash out at the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :) 

by Jester (noreply@blogger.com) at August 05, 2016 03:14 PM

Symmetrybreaking - Fermilab/SLAC

LHC bump fades with more data

Possible signs of new particle seem to have washed out in an influx of new data.

A curious anomaly seen by two Large Hadron Collider experiments is now looking like a statistical fluctuation.

The anomaly—an unanticipated excess of photon pairs with a combined mass of 750 billion electronvolts—was first reported by both the ATLAS and CMS experiments in December 2015.

Such a bump in the data could indicate the existence of a new particle. The Higgs boson, for instance, materialized in the LHC data as an excess of photon pairs with a combined mass of 125 billion electronvolts. However, with only a handful of data points, the two experiments could not discern whether that was the case or whether it was the result of normal statistical variance.

After quintupling their 13-TeV dataset between April and July this year, both experiments report that the bump has greatly diminished and, in some analyses, completely disappeared.

What made this particular bump interesting is that both experiments saw the same anomaly in completely independent data sets, says Wade Fisher, a physicist at Michigan State University.

“It’s like finding your car parked next to an identical copy,” he says. “That’s a very rare experience, but it doesn’t mean that you’ve discovered something new about the world. You’d have to keep track of every time it happened and compare what you observe to what you’d expect to see if your observation means anything.”

Theorists predicted that a particle of that size could have been a heavier cousin of the Higgs boson or a graviton, the theoretical particle responsible for gravity. While data from more than 1000 trillion collisions have smoothed out this bump, scientists on the ATLAS experiment still cannot completely rule out its existence.

“There’s up fluxes and down fluxes in statistics,” Fisher says. “Up fluctuations can sometimes look like the early signs of a new particle, and down fluctuations can sometimes make the signatures of a particle disappear. We’ll need the full 2016 data set to be more confident about what we’re seeing.”
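To put rough numbers on that remark, here is a toy Monte Carlo (an illustration of my own in Python, with invented bin counts and a crude Gaussian significance estimate, not either experiment's analysis): background-only spectra fairly often throw up a 3-sigma-looking bin somewhere, yet such a bump rarely repeats in an independent dataset five times larger.

import numpy as np

rng = np.random.default_rng(1)
n_bins, b_small = 40, 5.0        # expected background counts per bin in the small dataset
scale = 5                        # the follow-up dataset has roughly 5x the data
n_trials = 20000
seen = persisted = 0

for _ in range(n_trials):
    obs = rng.poisson(b_small, n_bins)
    z = (obs - b_small) / np.sqrt(b_small)          # crude per-bin significance
    i = int(np.argmax(z))
    if z[i] >= 3.0:                                 # a "bump" somewhere in the small dataset
        seen += 1
        new = rng.poisson(scale * b_small)          # independent data in that same bin
        if (new - scale * b_small) / np.sqrt(scale * b_small) >= 3.0:
            persisted += 1

print(f"background-only spectra with a >= 3 sigma bin somewhere: {seen / n_trials:.1%}")
print(f"of those, still >= 3 sigma in the 5x larger independent data: {persisted / max(seen, 1):.1%}")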

Scientists on both experiments are currently scrutinizing the huge influx of data to both better understand predicted processes and look for new physics and phenomena.

"New physics can manifest itself in many different ways—we learn more if it surprises us rather than coming in one of the many many theories we're already probing," says Steve Nahn, a CMS researcher working at Fermilab. "So far the canonical Standard Model is holding up quite well, and we haven't seen any surprises, but there's much more data coming from the LHC, so there's much more territory to explore."

by Sarah Charley at August 05, 2016 02:46 PM

Jon Butterworth - Life and Physics

For the record…

Recently I was involved in discussions that led to a meeting of climate-change skeptics moving from UCL premises. As a result, a couple of articles appeared about me on a climate website and on the right-wing “Breitbart” site.

In the more measured article by Christopher Monckton (who has more reason to be ticked off, since he was one of the meeting organisers) I’m called a “useless bureaucrat” and “forgettable”. The other article uses “cockwomble” (a favourite insult of mine), “bullying” and “climate prat”. Obviously it is difficult to respond to such disarming eloquence, but I do want to set the record straight on one thing.

My involvement was by means of an email to an honorary – ie unpaid – research fellow in the department of Physics & Astronomy, of which I am currently head, based on concerns expressed to me by UCL colleagues on the nature of the meeting and the way it was being advertised. The letter I sent is quoted partially, and reproducing the full text might give a better idea of the interaction. It’s below. The portions quoted by Monckton et al are in italics.


Dear Prof [redacted]

Although you have been an honorary research associate with the department since before I became head, I don’t believe we have ever met, which is a shame. I understand you have made contributions to outreach at the observatory on occasion, for which thanks.

It has been brought to my attention that you have booked a room at UCL for an external conference in September for a rather fringe group discussing aspects of climate science. This is apparently an area well beyond your expertise as an astronomer, and this group is also one with which many scientists at UCL have had negative interactions. The publicity gives the impression that you are a professor of astronomy at UCL, which is inaccurate, and some of the publicity could be interpreted as implying that UCL, and the department of Physics & Astronomy in particular, are hosting the event, rather than it being an external event booked by you in a personal capacity.

If this event were to go ahead at UCL, it would generate a great deal of strong feeling, indeed it already has, as members of the UCL community are expressing concern to me that we are giving a platform to speakers who deny anthropogenic climate change while flying in the face of accepted scientific methods. I am sure you have no desire to bring UCL into disrepute, or to cause dissension in the UCL community, and I would encourage you to think about moving the event to a different venue, not on UCL premises. If it is going to proceed as planned I must insist that the website and other publicity is amended to make clear that the event has no connection to UCL or this department in particular, and that you are not a UCL Professor.

Best wishes,
Jon


 

After receiving this, the person concerned expressed frustration at the impression given by the meeting publicity, and decided to cancel the room booking. I understand the meeting was successfully rebooked at Conway Hall, which seems like a decent solution to me. As you can see in the full letter, the meeting wasn’t in any sense banned.

Free speech and debate are good things, though the quality that I’ve experienced during this episode hasn’t much impressed me. As far as I’m concerned, people are welcome to have meetings like this to their hearts’ content, so long as they don’t appropriate spurious endorsement from the place in which they have booked a room.

I have since met the honorary research fellow concerned, and while mildly embarrassed by the whole episode (which is why I haven’t mentioned his name here, though it’s easy to find it if you really care), he did not seem at all upset or intimidated, and we had a friendly and interesting discussion about his scientific work and other matters.

Bit of a storm in a teacup, really, I think, though I’m sure James Delingpole was glad of the opportunity to deploy the Eng. Lit. skills of which he seems so proud.

 

 


Filed under: Climate Change, Politics, Science, Science Policy Tagged: UCL

by Jon Butterworth at August 05, 2016 06:15 AM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
August 30, 2016 10:36 PM
All times are UTC.

Suggest a blog:
planet@teilchen.at