Particle Physics Planet

November 27, 2015

astrobites - astro-ph reader's digest

We’ll be counting galaxies…

Title: The galaxy luminosity function at z ~ 6 and evidence for rapid evolution in the bright end from z  ~ 7 to 5
Authors: R. A. A. Bowler et al.
First Author Institution: SUPA, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK
Status: Published in MNRAS, 451, 1817

Much of the time astrophysicists use fancy computing and equations to figure out things about the Universe, but sometimes simply counting the galaxies you can see can tell you just as much. You can turn the number of galaxies you see in a certain area of sky into a number density, and then look at how this value changes with the stellar mass (or luminosity) of a galaxy (i.e. how bright a galaxy is, and therefore how much mass it has in stars). Astronomers call this the luminosity function (LF), and it’s really important if you want to get a handle on how galaxies have been built up over time.
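For reference, the galaxy LF is commonly parametrized with the Schechter function (a standard form in the field, given here only to illustrate what "number density as a function of luminosity" means, not necessarily the exact fit adopted in this paper):

\[ \phi(L)\,\mathrm{d}L \;=\; \phi^{*} \left(\frac{L}{L^{*}}\right)^{\alpha} e^{-L/L^{*}}\, \frac{\mathrm{d}L}{L^{*}}, \]

where \(\phi^{*}\) sets the overall number density, \(L^{*}\) marks the characteristic luminosity where the bright-end drop-off kicks in, and \(\alpha\) is the faint-end slope.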

We can also get an estimate of the LF from simulations that try to emulate the Universe. Comparing the observed and simulated galaxy number densities tells us how well our models of what shapes the Universe perform. Our most widely accepted model of the Universe is Λ Cold Dark Matter (ΛCDM), in which the Universe contains a cosmological constant, Λ (i.e. dark energy), and cold dark matter ("cold" because the particles move slowly, i.e. have minimal kinetic energy). However, ΛCDM does not give us the same number density of galaxies as observed at all galaxy stellar masses: it overestimates the numbers of galaxies at both the extremely low mass and extremely high mass ends of the LF. People have tried to explain this discrepancy in many different ways. At the low mass end, for example, it’s thought to be what astronomers call a selection effect – the galaxies should be there, but they’re so faint that there’s no way we can currently detect them. We don’t have that excuse at the high mass end, though, as these galaxies are really bright, so they’d be hard to miss. But one thing that isn’t included in the simulations is feedback of energy from AGN (active galactic nuclei, i.e. active black holes in the centres of galaxies), which can throw out or heat the gas needed for star formation in a galaxy. This feedback can stop a galaxy from growing overly large in mass, so people have argued that including it in the simulations removes the discrepancy with observations.

Studying this problem locally, with galaxies that are nearby, is relatively easy compared to measuring the LF for galaxies at high redshift, because even the brightest galaxies at huge distances are hard to detect. Getting accurate estimates of their number counts is therefore really difficult, especially if we want to study how the LF changes with increasing redshift (i.e. increasing distance, and therefore looking further back into the Universe’s past). The authors of this paper have been tackling the problem of getting accurate number counts for galaxies at z > 6 (in the first billion years of the Universe’s life) by using two ultra-deep infra-red surveys of the sky: COSMOS and UDS. They found 266 candidate galaxies (after removing possible contaminant sources that mimic galaxies at large distances, such as brown dwarf stars in our own galaxy), all within the redshift range 5.5 < z < 6.5. Using these galaxies and results from previous publications, they looked at how the LF changes from z = 5 to z = 7 (over 500 million years of the Universe’s history), which is shown below in Figure 1.



Figure 1: The y-axis shows the number density of galaxies at a given infra-red magnitude on the x-axis. Bright, massive galaxies are on the left of the plot and faint, small galaxies are on the right of the plot. The different coloured lines show how the LF changes with increasing redshift.


What’s cool about Figure 1 is that it shows that, at the faint end (\(M_{1500} \sim -18\)), the LFs at the three different redshifts are extremely similar, whereas at the bright end (\(M_{1500} \sim -22\)) the three differ quite significantly. The authors argue that this could be because the AGN feedback that’s thought to affect the bright end of the LF in the local Universe has less of an effect with increasing redshift (i.e. decreasing age of the Universe). At such early times in the Universe’s life (z = 7 is only ~700 million years after the Big Bang), black holes haven’t yet grown and accreted enough material to output enough energy to feed back on a galaxy’s star formation. As the Universe ages, black holes keep growing and eventually become active, so that AGN feedback becomes a big influence on the numbers of supermassive galaxies at the bright end of the LF. This is a really intriguing idea, one that needs to be looked at further with the advent of deeper and wider surveys of the sky in the future, such as those using ALMA and the JWST. So watch this space!

by Becky Smethurst at November 27, 2015 03:02 PM

Tommaso Dorigo - Scientificblogging

Single Top Production At The LHC
As an editor of the new Elsevier journal "Reviews in Physics" I am quite proud to see that the first submissions of review articles are reaching publication stage. Four such articles are going to be published in the course of the next couple of months, and more are due shortly thereafter. 


by Tommaso Dorigo at November 27, 2015 03:01 PM

November 26, 2015

Christian P. Robert - xi'an's og

on the origin of the Bayes factor

Alexander Etz and Eric-Jan Wagenmakers from the Department of Psychology of the University of Amsterdam just arXived a paper on the invention of the Bayes factor. In particular, they highlight the role of John Burdon Sanderson (J.B.S.) Haldane in the use of the central tool for Bayesian comparison of hypotheses. In short, Haldane used a Bayes factor before Jeffreys did!

“The idea of a significance test, I suppose, putting half the probability into a constant being 0, and distributing the other half over a range of possible values.” H. Jeffreys

The authors analyse Jeffreys’ 1935 paper on significance tests, which appears to be the very first occurrence of a Bayes factor in his bibliography, testing whether or not two probabilities are equal. They also show the roots of this derivation in earlier papers by Dorothy Wrinch and Harold Jeffreys, going back as early as 1919. [As an “aside”, the early contributions of Dorothy Wrinch to the foundations of 20th Century Bayesian statistics are hardly acknowledged. A shame, when considering that they constitute the basis and more of Jeffreys’ 1931 Scientific Inference, Jeffreys who wrote in her obituary “I should like to put on record my appreciation of the substantial contribution she made to [our joint] work, which is the basis of all my later work on scientific inference.” In retrospect, Dorothy Wrinch should have been co-author of this book…] These early papers by Wrinch and Jeffreys are foundational in that they elaborate a construction of prior distributions that would eventually see the Jeffreys non-informative prior as its final solution [Jeffreys priors that should be called Lhoste’s priors according to Steve Fienberg, although I think Ernest Lhoste only considered a limited number of transformations in his invariance rule]. The 1921 paper contains de facto the Bayes factor, but it does not appear to be advocated as a tool per se for conducting significance tests.

“The historical records suggest that Haldane calculated the first Bayes factor, perhaps almost by accident, before Jeffreys did.” A. Etz and E.J. Wagenmakers

As another interesting aside, the historical account points out that Jeffreys came up in 1931 with what is now called Haldane’s prior for a Binomial proportion, which Haldane himself proposed in 1931 (when his paper was read) and 1932 (when it was published in the Mathematical Proceedings of the Cambridge Philosophical Society). The problem tackled by Haldane is again a significance test on a Binomial probability. Contrary to the authors, I find the original (quoted) text quite clear, with a prior split between a uniform on [0,½] and a point mass at ½. Haldane uses a posterior odds [of 34.7] to compare both hypotheses but… I see no trace in the quoted material that he ends up using the Bayes factor as such, that is, as his decision rule. (I acknowledge that “decision rule” is anachronistic in this setting.) On the side, Haldane also implements model averaging. Hence my reading of this reading of the 1930s literature is that it remains unclear that Haldane perceived the Bayes factor as a Bayesian [another anachronism] inference tool, upon which [and only which] significance tests could be conducted. That Haldane had a remarkably modern view of splitting the prior according to two orthogonal measures, and of correctly deriving the posterior odds, is quite clear, with the very neat trick of removing the infinite integral at p=0, an issue Jeffreys was fighting with at the same time. In conclusion, I would thus rephrase the major finding of this paper as: Haldane should get priority for deriving the Bayesian significance test for point null hypotheses, rather than for deriving the Bayes factor. But this may be my biased view of Bayes factors speaking there…

Another amazing fact I gathered from the historical work of Etz and Wagenmakers is that Haldane and Jeffreys were geographically very close while working on the same problem and hence should have known and referenced their respective works. Which did not happen.

Filed under: Books, Statistics Tagged: Bayes factors, full Bayesian significance test, Haldane's prior, Harold Jeffreys, Jack Haldane, Jeffreys priors, non-informative priors, scientific inference

by xi'an at November 26, 2015 11:15 PM

Sean Carroll - Preposterous Universe


This year we give thanks for an area of mathematics that has become completely indispensable to modern theoretical physics: Riemannian Geometry. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, and the Fourier Transform. Ten years of giving thanks!)

Now, the thing everyone has been giving thanks for over the last few days is Albert Einstein’s general theory of relativity, which by some measures was introduced to the world exactly one hundred years ago yesterday. But we don’t want to be everybody, and besides we’re a day late. So it makes sense to honor the epochal advance in mathematics that directly enabled Einstein’s epochal advance in our understanding of spacetime.

Highly popularized accounts of the history of non-Euclidean geometry often give short shrift to Riemann, for reasons I don’t quite understand. You know the basic story: Euclid showed that geometry could be axiomatized on the basis of a few simple postulates, but one of them (the infamous Fifth Postulate) seemed just a bit less natural than the others. That’s the parallel postulate, which has been employed by generations of high-school geometry teachers to torture their students by challenging them to “prove” it. (Mine did, anyway.)

It can’t be proved, and indeed it’s not even necessarily true. In the ordinary flat geometry of a tabletop, initially parallel lines remain parallel forever, and Euclidean geometry is the name of the game. But we can imagine surfaces on which initially parallel lines diverge, such as a saddle, or ones on which they begin to come together, such as a sphere. In those contexts it is appropriate to replace the parallel postulate with something else, and we end up with non-Euclidean geometry.


Historically, this was first carried out by the Hungarian mathematician János Bolyai and the Russian mathematician Nikolai Lobachevsky, both of whom developed the hyperbolic (saddle-shaped) form of the alternative theory. Actually, while Bolyai and Lobachevsky were the first to publish, much of the theory had previously been worked out by the great Carl Friedrich Gauss, who was an incredibly influential mathematician but not very good about getting his results into print.

The new geometry developed by Bolyai and Lobachevsky described what we would now call “spaces of constant negative curvature.” Such a space is curved, but in precisely the same way at every point; there is no difference between what’s happening at one point in the space and what’s happening anywhere else, just as had been the case for Euclid’s tabletop geometry.

Real geometries, as takes only a moment to visualize, can be a lot more complicated than that. Surfaces or solids can twist and turn in all sorts of ways. Gauss thought about how to deal with this problem, and came up with some techniques that could characterize a two-dimensional curved surface embedded in a three-dimensional Euclidean space. Which is pretty great, but falls far short of the full generality that mathematicians are known to crave.

Fortunately Gauss had a brilliant and accomplished apprentice: his student Bernhard Riemann. (Riemann was supposed to be studying theology, but he became entranced by one of Gauss’s lectures, and never looked back.) In 1853, Riemann was coming up for Habilitation, a German degree that is even higher than the Ph.D. He suggested a number of possible dissertation topics to his advisor Gauss, who (so the story goes) chose the one that Riemann thought was the most boring: the foundations of geometry. The next year, he presented his paper, “On the hypotheses which underlie geometry,” which laid out what we now call Riemannian geometry.

With this one paper on a subject he professed not to be all that interested in, Riemann (who also made incredible contributions to analysis and number theory) provided everything you need to understand the geometry of a space of an arbitrary number of dimensions, with an arbitrary amount of curvature at any point in the space. It was as if Bolyai and Lobachevsky had invented the abacus, Gauss came up with the pocket calculator, and Riemann had turned around and built a powerful supercomputer.

Like many great works of mathematics, a lot of new superstructure had to be built up along the way. A subtle but brilliant part of Riemann’s work is that he didn’t start with a larger space (like the three-dimensional almost-Euclidean world around us) and imagine smaller spaces embedded within it. Rather, he considered the intrinsic geometry of a space, or how it would look “from the inside,” whether or not there was any larger space at all.

Next, Riemann needed a tool to handle a simple but frustrating fact of life: “curvature” is not a single number, but a way of characterizing many questions one could possibly ask about the geometry of a space. What you need, really, are tensors, which gather a set of numbers together in one elegant mathematical package. Tensor analysis as such didn’t really exist at the time, not being fully developed until 1890, but Riemann was able to use some bits and pieces of the theory that had been developed by Gauss.

Finally and most importantly, Riemann grasped that all the facts about the geometry of a space could be encoded in a simple quantity: the distance along any curve we might want to draw through the space. He showed how that distance could be written in terms of a special tensor, called the metric. You give me a segment along a curve inside the space you’re interested in, and the metric lets me calculate how long it is. This simple object, Riemann showed, could ultimately be used to answer any query you might have about the shape of a space — the length of curves, of course, but also the area of surfaces and volume of regions, the shortest-distance path between two fixed points, where you go if you keep marching “forward” in the space, the sum of the angles inside a triangle, and so on.
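Concretely, the metric tensor \(g_{\mu\nu}\) assigns a length to each infinitesimal step \(dx^{\mu}\) you take, and integrating along a curve gives its total length (the standard textbook expressions, written in index notation):

\[ ds^{2} = g_{\mu\nu}\, dx^{\mu} dx^{\nu}, \qquad L = \int \sqrt{\, g_{\mu\nu}\, \frac{dx^{\mu}}{d\lambda}\frac{dx^{\nu}}{d\lambda} \,}\; d\lambda . \]

Flat Euclidean space is the special case \(g_{\mu\nu} = \delta_{\mu\nu}\), which reduces this to the Pythagorean theorem.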

Unfortunately, the geometric information implied by the metric is only revealed when you follow how the metric changes along a curve or on some surface. What Riemann wanted was a single tensor that would tell you everything you needed to know about the curvature at each point in its own right, without having to consider curves or surfaces. So he showed how that could be done, by taking appropriate derivatives of the metric, giving us what we now call the Riemann curvature tensor. Here is the formula for it:
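In the standard index notation, with the Christoffel symbols \(\Gamma^{\rho}_{\mu\nu}\) built from first derivatives of the metric, it reads

\[ R^{\rho}{}_{\sigma\mu\nu} = \partial_{\mu}\Gamma^{\rho}_{\nu\sigma} - \partial_{\nu}\Gamma^{\rho}_{\mu\sigma} + \Gamma^{\rho}_{\mu\lambda}\Gamma^{\lambda}_{\nu\sigma} - \Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\sigma}, \qquad \Gamma^{\rho}_{\mu\nu} = \tfrac{1}{2}\, g^{\rho\lambda}\left(\partial_{\mu} g_{\nu\lambda} + \partial_{\nu} g_{\mu\lambda} - \partial_{\lambda} g_{\mu\nu}\right). \]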


This isn’t the place to explain the whole thing, but I can recommend some spiffy lecture notes, including a very short version, or the longer and sexier textbook. From this he deduced several interesting features about curvature. For example, the intrinsic curvature of a one-dimensional space (a line or curve) is always precisely zero. Its extrinsic curvature — how it is embedded in some larger space — can be complicated, but to a tiny one-dimensional being, all spaces have the same geometry. For two-dimensional spaces there is a single function that characterizes the curvature at each point; in three dimensions you need six numbers, in four you need twenty, and it goes up from there.
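Those counts follow from the general formula for the number of algebraically independent components of the curvature tensor in \(n\) dimensions,

\[ \frac{n^{2}(n^{2}-1)}{12}, \]

which gives 1 for \(n = 2\), 6 for \(n = 3\), and 20 for \(n = 4\).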

There were more developments in store for Riemannian geometry, of course, associated with names that are attached to various tensors and related symbols: Christoffel, Ricci, Levi-Civita, Cartan. But to a remarkable degree, when Albert Einstein needed the right mathematics to describe his new idea of dynamical spacetime, Riemann had bequeathed it to him in a plug-and-play form. Add the word “time” everywhere we’ve said “space,” introduce some annoying minus signs because time and space really aren’t precisely equivalent, and otherwise the geometry that Riemann invented is the same we use today to describe how the universe works.

Riemann died of tuberculosis before he reached the age of forty. He didn’t do bad for such a young guy; you know you’ve made it when you not only have a Wikipedia page for yourself, but a separate (long) Wikipedia page for the list of things named after you. We can all be thankful that Riemann’s genius allowed him to grasp the tricky geometry of curved spaces several decades before Einstein would put it to use in the most beautiful physical theory ever invented.

by Sean Carroll at November 26, 2015 05:24 PM

Peter Coles - In the Dark

Why is General Relativity so difficult?

Just a brief post following yesterday’s centenary of General Relativity, after which somebody asked me what is so difficult about the theory. I had two answers to that, one mathematical and one conceptual.
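
In the form in which they are most often quoted (with the cosmological constant term omitted), Einstein's field equations read

\[ G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu} . \]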


The Field Equations of General Relativity are written above. In the notation used they don’t look all that scary, but they are more complicated than they look. For a start it looks like there is only one equation, but the subscripts μ and ν can each take four values (usually 0, 1, 2 or 3), each value standing for one of the dimensions of four-dimensional space-time. It therefore looks like there are actually 16 equations. However, the equations are unchanged if you swap μ and ν around, which means that there are “only” ten independent equations. The terms on the left hand side are the components of the Einstein Tensor, which expresses the effect of gravity through the curvature of space-time, while the right hand side describes the energy and momentum of “stuff”, prefaced by some familiar constants.
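That counting is just the number of independent components of a symmetric 4×4 matrix,

\[ \frac{n(n+1)}{2} = \frac{4 \times 5}{2} = 10 . \]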

The Einstein Tensor is made up of lots of partial derivatives of another tensor called the metric tensor (which describes the geometry of space-time), which relates, through the Field Equations, to how matter and energy are distributed and how these components move and interact. The ten equations that need to be solved simultaneously are second-order non-linear partial differential equations. This is to be compared with the case of Newtonian gravity, in which only ordinary differential equations are involved.

Problems in Newtonian mechanics can be difficult enough to solve but the much greater mathematical complexity in General Relativity means that problems in GR can only be solved in cases of very special symmetry, in which the number of independent equations can be reduced dramatically.

So that’s why it’s difficult mathematically. As for the conceptual problem it’s that most people (I think) consider “space” to be “what’s in between the matter” which seems like it must be “nothing”. But how can “nothing” possess an attribute like curvature? This leads you to conclude that space is much more than nothing. But it’s not a form of matter. So what is it? This chain of thought often leads people to think of space as being like the Ether, but that’s not right either. Hmm.

I tend to avoid this problem by not trying to think about space or space-time at all, and instead think only in terms of particle trajectories or light rays and how matter and energy affect them. But that’s because I’m lazy and only have a small brain…



by telescoper at November 26, 2015 03:07 PM

Christian P. Robert - xi'an's og

The Richard Price Society

As an item of news coming to me via ISBA News, I learned of the Richard Price Society and of its endeavour to lobby the Welsh government to purchase Richard Price‘s birthplace as an historical landmark. As discussed in a previous post, Price contributed so much to Bayes’ paper that one may wonder who made the major contribution. While I am not much inclined to turn old buildings into museums, feel free to contact the Richard Price Society to support this action! Or to sign the petition there. Which I cannot resist reproducing in Welsh:

Datblygwch Fferm Tynton yn Ganolfan Ymwelwyr a Gwybodaeth

Rydym yn galw ar Lywodraeth Cymru i gydnabod cyfraniad pwysig Dr Richard Price nid yn unig i’r Oes Oleuedig yn y ddeunawfed ganrif, ond hefyd i’r broses o greu’r byd modern yr ydym yn byw ynddo heddiw, a datblygu ei fan geni a chartref ei blentyndod yn ganolfan wybodaeth i ymwelwyr lle gall pobl o bob cenedl ac oed ddarganfod sut mae ei gyfraniadau sylweddol i ddiwinyddiaeth, mathemateg ac athroniaeth wedi dylanwadu ar y byd modern.

[In English: Develop Tynton Farm into a Visitor and Information Centre. We call on the Welsh Government to recognise the important contribution of Dr Richard Price, not only to the Enlightenment of the eighteenth century but also to the creation of the modern world we live in today, and to develop his birthplace and childhood home into a visitor information centre where people of every nation and age can discover how his substantial contributions to theology, mathematics and philosophy have influenced the modern world.]

Filed under: Books, pictures, Statistics, Travel, University life Tagged: ISBA, Richard Price, Richard Price Society, Thomas Bayes, Wales, Welsh

by xi'an at November 26, 2015 01:18 PM

arXiv blog

Why Ball Tracking Works for Tennis and Cricket but Not Soccer or Basketball

Following the examples of tennis and cricket, a new generation of ball-tracking algorithms is attempting to revolutionize the analysis and refereeing of soccer, volleyball, and basketball.

When it comes to ball sports, machine-vision techniques have begun to revolutionize the way analysts study the game and how umpires and referees make decisions. In cricket and tennis, for example, these systems routinely record ball movement in three dimensions and then generate a virtual replay that shows exactly where a ball hit the ground and even predicts its future trajectory (to determine whether it would have hit the wicket, for example).

November 26, 2015 05:30 AM

Clifford V. Johnson - Asymptotia

Happy Centennial, General Relativity!

(Click for larger view.) Well, I've already mentioned why today is such an important day in the history of human thought - One Hundred Years of Certitude was the title of the post I used, in talking about the 100th Anniversary (today) of Einstein completing the final equations of General Relativity - and our celebration of it back last Friday went very well indeed. Today on NPR Adam Frank did an excellent job expanding on things a bit, so have a listen here if you like.

As you might recall me saying, I was keen to note and celebrate not just what GR means for science, but for the broader culture too, and two of the highlights of the day were examples of that. The photo above is of Kip Thorne talking about the science (solid General Relativity coupled with some speculative ideas rooted in General Relativity) of the film Interstellar, which as you know [...] Click to continue reading this post

The post Happy Centennial, General Relativity! appeared first on Asymptotia.

by Clifford at November 26, 2015 03:38 AM

November 25, 2015

Emily Lakdawalla - The Planetary Society Blog

How Can We Write About Science When People Are Dying?
Stories about exploration and wonder can be powerful antidotes to seemingly endless suffering and destruction.

November 25, 2015 11:42 PM

Christian P. Robert - xi'an's og

pseudo slice sampling

The workshop in Warwick last week made me aware of (yet) another arXiv posting I had missed: Pseudo-marginal slice sampling by Iain Murray and Matthew Graham. The idea is to mix the pseudo-marginal approach of Andrieu and Roberts (2009) with a noisy slice sampling scheme à la Neal (2003). The auxiliary random variable u, with distribution q(u), that enters the (pseudo-marginal) unbiased estimator Î(θ,u) of the target I(θ), is merged with the random variable of interest so that the joint is
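(reconstructing the standard pseudo-marginal joint, under the usual assumption that \(\mathbb{E}_{q}[\hat{I}(\theta,u)] = I(\theta)\))

\[ \tilde{\pi}(\theta, u) \;\propto\; \hat{I}(\theta, u)\, q(u), \]

whose marginal in θ is proportional to the target I(θ).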


and a Metropolis-Hastings proposal on that target simulating from k(θ,θ’)q(u’) [meaning the auxiliary is simulated independently] recovers the pseudo-marginal Metropolis-Hastings ratio
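(again written in the standard form, with the joint target above and proposal \(k(\theta,\theta')\,q(u')\))

\[ \frac{\hat{I}(\theta', u')\, q(u')\; k(\theta', \theta)\, q(u)}{\hat{I}(\theta, u)\, q(u)\; k(\theta, \theta')\, q(u')} \;=\; \frac{\hat{I}(\theta', u')\, k(\theta', \theta)}{\hat{I}(\theta, u)\, k(\theta, \theta')}, \]

in which the auxiliary densities q cancel and only the unbiased estimates remain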


(which is a nice alternative proof that the method works!). The novel idea in the paper is that the proposal on the auxiliary u can be of a different form, while remaining manageable. For instance, as a two-block Gibbs sampler. Or an elliptical slice sampler for the u component. The argument being that an independent update of u may lead the joint chain to get stuck. Among the illustrations in the paper, an Ising model (with no phase transition issue?) and a Gaussian process applied to the Pima Indian data set (despite a recent prohibition!). From the final discussion, I gather that the modification should be applicable to every (?) case when a pseudo-marginal approach is available, since the auxiliary distribution q(u) is treated as a black box. Quite an interesting read and proposal!

Filed under: Books, Statistics, University life Tagged: Alan Turing Institute, auxiliary variable, doubly intractable problems, pseudo-marginal MCMC, slice sampling, University of Warwick

by xi'an at November 25, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

In Pictures: LightSail Cameras Prepped for Flight
LightSail's flight cameras are being prepped for installation after receiving a software upgrade and checkout from their manufacturer.

November 25, 2015 07:13 PM

astrobites - astro-ph reader's digest

KIC 8462852 – What’s the Fuss?

Kepler light curve of the star

Four years of monitoring this star reveals erratic events when more than 20% of the light, or flux, is missing. The small numbers at the top of the figure correspond to the 17 quarters of Kepler‘s primary operations. This light curve graph shows the fraction of this star’s maximum brightness over time, measured in days.

You’ve probably heard of the star in today’s paper. The “WTF star” (WTF stands for “Where’s the flux?” of course), also informally known as “Tabby’s star,” for the paper’s first author, has been in the media since its discovery and two followup papers hit astro-ph. Today, a group of astrobiters pool our expertise to bring you a comprehensive look at KIC 8462852 and what new observations may reveal.

An otherwise normal star

By nearly all accounts, KIC 8462852 is a normal star. It is one of over 150,000 stars observed by the Kepler space telescope during its initial four-year mission and looks like a run-of-the-mill F-type star, a little more massive than our Sun. It has no companion star yanking it around and no out-of-the-ordinary rotation or magnetic activity. It was passed over by algorithms that search for transiting exoplanets. The only reason this star stood out is thanks to Planet Hunters, a citizen science project that harnesses humans’ pattern recognition skills. Trained volunteers pored over data from Kepler and noted that KIC 8462852 dimmed significantly about two years into Kepler‘s mission, as shown above. They kept an eye on it until a huge fraction of light suddenly went missing again, nearly two years later, but differently this time. The huge dimming and irregular pattern made this star noteworthy.

What could be blocking the flux?

So what could be causing these unusual dips in flux? First, the authors did a careful analysis of their dataset and ruled out any glitches due to things like cosmic ray events and electronic errors within the instrument, concluding that these dips are astrophysically “real.” With glitches ruled out, another possibility is inherent stellar variability, but the shape of the light curve and other characteristics of this star rule out any known type of variable star.

A more likely possibility is that the star is orbited by clumps of dust, which are spread out in an area larger than the size of a planet, and can therefore block more light. But where would this dust come from? There would have to be enough dust to block up to 20% of the star’s visible light, yet not enough dust to produce a telltale infrared glow. The authors suggest that dust near the star could have been produced in a collision between planets, or it might be orbiting large planetesimals, which in turn orbit the star. However, these scenarios both predict a bright infrared signal, which was not detected when WISE and Spitzer observed the system in 2010 and 2015, respectively.

Finally, the authors suggest that the dips may be caused by chunks of some kind of giant comet, which is breaking up as it approaches the star. This would provide an explanation for the dips in brightness without the system being bright in the infrared. Though the comet scenario seems to fit the data best, it is still not perfect, and more observations and modeling are needed to show that a comet breakup could produce the light curve of KIC 8462852.

How will we ever know?

Now that we have some ideas of what may be causing the anomalous signal, the next task is to eliminate or verify hypotheses with follow-up observations. Probably the most important piece will be long-term monitoring to look for more dips in brightness. This will answer a multitude of questions to help determine the true cause of the signal: Are the dips in brightness periodic? How much does the depth of the dips vary? Do they change in shape or duration? Do they disappear entirely?

Disintegration of a comet in our Solar System caught by the eye of the Hubble Space Telescope. This comet, named 73P/Schwassmann-Wachmann 3, fragmented off many pieces as it plummeted toward the Sun in 2006. As the radiation from a star heats a comet, the ices that hold it together sublimate, releasing large chunks of rock into space. Something similar may be happening near KIC 8462852.


Discovering that the dips are periodic would add credence to the dust cloud scenario, though the lack of infrared light would still be a problem. If we measure color information of future dips, that could constrain the size of any dust in the vicinity. On the other hand, if the comet scenario is correct, we would expect to find weaker dips or no future dips as chunks of the fragmented comet spread out, no longer eclipsing the star. There is a small star about 1000 AU from KIC 8462852 which may have provoked a barrage of comets, so measuring the motion of this nearby star could provide insights into the timings of “comet showers” near KIC 8462852.

If future observations manage to rule out all of these hypotheses, the mystery of the “WTF star” will grow stranger still.

The elephant alien in the room

Of course, much of the interest in this star has to do with a follow-up paper by Wright et al. They suggest a more esoteric reason for the huge drops in flux. Over the past few decades, some astronomers have speculated that advanced civilizations could build structures so large that they would block some of the light from their star. The most extreme of these is the Dyson sphere, a vast globe that could theoretically surround a star and harvest its light as a power source. But explaining KIC 8462852’s flux dips in this way doesn’t need something quite as dramatic. Instead, Wright et al. propose a swarm of alien-built objects sequentially passing in front of the star, with variously sized structures causing different dips in the light curve. These so-called megastructures would need to be enormous—up to half the size of the star.

Although this explanation is extremely speculative (starting, as it does, with “suppose an alien civilization exists”), it is consistent with the observations, so Wright et al. suggest searching for artificial radio signals coming from the system. An initial survey has drawn a blank, although only for very powerful signals. So what will we do next? Though further observations will surely take place, for now we need to wait; KIC 8462852 has “moved” into Earth’s daytime sky, making most follow-up observations impossible for several months.

The last word

Here at astrobites, the consensus is that the “WTF” light curve is almost certainly a natural phenomenon. Frequent readers will recognize a common astrobite narrative: The authors of this paper observed something new and unusual! None of our theoretical models explain it very well, so we’re going to get more observations and keep working on simulations!

That said, finding clear signs of an extraterrestrial civilization would be one of the most important discoveries of all time. According to some random guy on twitter, the WTF light curve could clearly be the Milky Way’s own Death Star:


Aliens or not, KIC 8462852 is certainly worth a closer look.


This post was written by Erika Nesvold, Meredith Rawls, David Wilson, and Michael Zevin.

by Astrobites at November 25, 2015 06:40 PM

David Berenstein, Moshe Rozali - Shores of the Dirac Sea

GR turns 100

For those of you who have free time to read on the history of gravitation, here is a good link to many famous papers on the subject:

Happy anniversary GR!


Filed under: gravity, relativity

by dberenstein at November 25, 2015 06:24 PM

John Baez - Azimuth

Regime Shift?

There’s no reason that the climate needs to change gradually. Recently scientists have become interested in regime shifts, which are abrupt, substantial and lasting changes in the state of a complex system.

Rasha Kamel of the Azimuth Project pointed us to a report in Science Daily which says:

Planet Earth experienced a global climate shift in the late 1980s on an unprecedented scale, fueled by anthropogenic warming and a volcanic eruption, according to new research. Scientists say that a major step change, or ‘regime shift,’ in Earth’s biophysical systems, from the upper atmosphere to the depths of the ocean and from the Arctic to Antarctica, was centered around 1987, and was sparked by the El Chichón volcanic eruption in Mexico five years earlier.

As always, it’s good to drill down through the science reporters’ summaries to the actual papers.  So I read this one:

• Philip C. Reid et al, Global impacts of the 1980s regime shift on the Earth’s climate and systems, Global Change Biology, 2015.

The authors of this paper analyzed 72 time series of climate and ecological data to search for such a regime shift, and found one around 1987. If such a thing really happened, this could be very important.

Here are some of the data they looked at:

Click to enlarge them—they’re pretty interesting! Vertical lines denote regime shift years, colored in different ways: 1984 blue, 1985 green, 1986 orange, 1987 red, 1988 brown, 1989 purple and so on. You can see that lots are red.

The paper has a lot of interesting and informed speculations about the cause of this shift—so give it a look.  For now I just want to tackle an important question of a more technical nature: how did they search for regime shifts?

They used the ‘STARS’ method, which stands for Sequential t-Test Analysis of Regime Shifts. They explain:

The STARS method (Rodionov, 2004; Rodionov & Overland, 2005) tests whether the end of one period (regime) of a certain length is different from a subsequent period (new regime). The cumulative sum of normalized deviations from the hypothetical mean level of the new regime is calculated, and then compared with the mean level of the preceding regime. A shift year is detected if the difference in the mean levels is statistically significant according to a Student’s t-test.

In his third paper, Rodionov (2006) shows how autocorrelation can be accounted for. From each year of the time series (except edge years), the rules are applied backwards and forwards to test that year as a potential shift year. The method is, therefore, a running procedure applied on sequences of years within the time series. The multiple STARS method used here repeats the procedure for 20 test-period lengths ranging from 6 to 25 years that are, for simplicity (after testing many variations), of the same length on either side of the regime shift.
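To make the idea concrete, here is a rough sketch in Python (very much simplified, and not the actual STARS software) of the core step: slide along the series and t-test whether the mean after each candidate year differs from the mean before it. The real method adds a cumulative-sum confirmation of the new regime, Huber weighting of outliers, and the autocorrelation correction mentioned above, none of which is included here.

import numpy as np
from scipy import stats

def candidate_shift_years(series, years, window=10, alpha=0.05):
    # Crude illustration only: for each interior year, compare the mean of
    # the `window` values after it with the `window` values before it using
    # Welch's t-test, and flag the year if the difference is significant.
    shifts = []
    for i in range(window, len(series) - window):
        before = series[i - window:i]
        after = series[i:i + window]
        t, p = stats.ttest_ind(after, before, equal_var=False)
        if p < alpha:
            shifts.append((int(years[i]), float(p)))
    return shifts

# Toy example: a step change in 1987 buried in noise.
rng = np.random.default_rng(0)
years = np.arange(1960, 2011)
data = np.where(years < 1987, 0.0, 1.0) + rng.normal(0.0, 0.5, years.size)
print(candidate_shift_years(data, years))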

Elsewhere I read that the STARS method is ‘too sensitive’. Could it be due to limitations of the ‘statistical significance’ idea involved in Student’s t-test?

You can download software that implements the STARS method here. The method is explained in the papers by Rodionov.

Do you know about this stuff?  If so, I’d like to hear your views on this paper and the STARS method.

by John Baez at November 25, 2015 05:13 PM

Peter Coles - In the Dark

Autumn Statement – Summary for Science

I’ve been in meetings all afternoon so far so I missed the live broadcast of the Chancellor’s Autumn Statement.

Now that I’ve caught up a little it seems that there’s much to be relieved about. Yet again it seems the Government has deployed the tactic of allowing scare stories of dire cuts to spread in order that the actual announcement  appears much better than people feared, even if it is mediocre.

You can find the overall key results of the spending review and autumn statement here, but along with many colleagues who work in research and higher education I went straight to the outcome for the Department of Business, Innovation and Skills (BIS) which you can find here.

The main results for me – from the narrow perspective of a scientist working in a university –  are:

  1. The overall budget for BIS will be cut by 17% in cash terms between now and 2020.
  2. Most of the above cut will happen from 2018 onwards by, among other things, “asking universities to take more responsibility for student access”.
  3. In more detail (quoted from here) “In this context, the government will reduce the teaching grant by £120 million in cash terms by 2019 to 2020, but allow funding for high cost subjects to be protected in real terms. The government will work with the Director of Fair Access to ensure universities take more responsibility for widening access and social mobility, and ask the Higher Education Funding Council for England to retarget and reduce by up to half the student opportunity fund, focusing funding on institutions with the most effective outcomes. The government will also make savings in other areas of the teaching grant.”
  4. My current employer, the University of Sussex, has done extremely well on widening participation so this is good news locally. Many big universities have achieved nothing in this area so, frankly, deserve this funding to be withdrawn.
  5. It is also to be welcomed that the premium for high cost subjects (i.e. STEM disciplines) is to be protected in real terms, although it still does not cover the actual cost of teaching these subjects.
  6. Contrary to many expectations it seems that HEFCE will not be scrapped immediately. That is significant in itself.
  7. The level of science funding will increase from £4.6 billion to £4.7 billion next year, and will thereafter be protected in real terms over the Parliament.
  8. The real terms protection sounds good but of course we currently have a very low rate of inflation, so this is basically five more years of almost flat cash.
  9. There is supposed to be an additional £500m by 2020 which George Osborne didn’t mention in his speech. I don’t know whether this is extra money or just the cash increase estimated by inflation-proofing the £4.7bn.
  10. The above two points sound like good news….
  11. …but the total budget  will include a £1.5 billion new “Global Challenges Fund” which will build up over this period. This suggests that there may be a significant transfer of funds into this from existing programmes. There could be big losers in this process, as it amounts to a sizeable fraction of the total research expenditure.
  12. In any event the fraction of GDP the UK spends on science is not going to increase, leaving us well behind our main economic competitors.
  13. The Government is committed to implementing the Nurse Review, which will give it more direct leverage to reprioritise science spending.
  14. It isn’t clear to me how  “pure” science research will fare as a result of all this. We will have to wait and see….

The Autumn Statement includes only a very high level summary of allocations so we don’t know anything much about how these decisions will filter down to specific programmes at this stage. The Devil is indeed in the Detail. Having said that, the overall settlement for HE and Research looks much better than many of us had feared so I’d give it a cautious welcome. For now.

If anyone has spotted anything I’ve missed or wishes to comment in any other way please use the box below!


by telescoper at November 25, 2015 04:18 PM

ZapperZ - Physics and Physicists

Hot Cocoa Physics
Just in time for the cold weather, at least here in the upper northern hemisphere, APS Physics Central has a nice little experiment that you can do at home with your friends and family. Using just a regular mug, hot water/milk, cocoa mix, and a spoon, you can do a demo that might elicit a few questions and answers.

For those celebrating Thanksgiving this week, I wish you all a happy and safe celebration.


by ZapperZ at November 25, 2015 04:01 PM

Symmetrybreaking - Fermilab/SLAC

Revamped LHC goes heavy metal

Physicists will collide lead ions to replicate and study the embryonic universe.

“In the beginning there was nothing, which exploded.”

~ Terry Pratchett, author

For the next three weeks physicists at the Large Hadron Collider will cook up the oldest form of matter in the universe by switching their subatomic fodder from protons to lead ions.

Lead ions consist of 82 protons and 126 neutrons clumped into tight atomic nuclei. When smashed together at extremely high energies, lead ions transform into the universe’s most perfect super-fluid: the quark gluon plasma. Quark gluon plasma is the oldest form of matter in the universe; it is thought to have formed within microseconds of the big bang.

“The LHC can bring us back to that time,” says Rene Bellwied, a professor of physics at the University of Houston and a researcher on the ALICE experiment. “We can produce a tiny sample of the nascent universe and study how it cooled and coalesced to make everything we see today.”

Scientists first observed this prehistoric plasma after colliding gold ions in the Relativistic Heavy Ion Collider, a nuclear physics research facility located at the US Department of Energy’s Brookhaven National Laboratory.

“We expected to create matter that would behave like a gas, but it actually has properties that make it more like a liquid,” says Brookhaven physicist Peter Steinberg, who works on both RHIC and the ATLAS heavy ion program at the LHC. “And it’s not just any liquid; it’s a near perfect liquid, with a very uniform flow and almost no internal resistance."

The LHC is famous for accelerating and colliding protons at the highest energies on Earth, but once a year physicists tweak its magnets and optimize its parameters for lead-lead or lead-proton collisions.

The lead ions are accelerated until each proton and neutron inside the nucleus has about 2.51 trillion electronvolts of energy. This might seem small compared to the 6.5 TeV protons that zoomed around the LHC ring during the summer. But because lead ions are so massive, they get a lot more bang for their buck.
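A rough back-of-the-envelope estimate, simply multiplying the per-nucleon energy by the 208 nucleons counted above, shows why:

\[ E_{\mathrm{Pb}} \approx 208 \times 2.51\ \mathrm{TeV} \approx 520\ \mathrm{TeV} \]

per lead nucleus in each beam, compared with 6.5 TeV for a single proton.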

“If protons were bowling balls, lead ions would be wrecking balls,” says Peter Jacobs, a scientist at Lawrence Berkeley National Laboratory working on the ALICE experiment. “When we collide them inside the LHC, the total energy generated is huge, reaching temperatures around 100,000 times hotter than the center of the sun. This is a state of matter we cannot make by just colliding two protons.”

Compared to the last round of LHC lead-lead collisions at the end of Run I, these collisions are nearly twice as energetic. New additions to the ALICE detector will also give scientists a more encompassing picture of the nascent universe’s behavior and personality.

“The system will be hotter, so the quark gluon plasma will live longer and expand more,” Bellwied says. “This increases our chances of producing new types of matter and will enable us to study the plasma’s properties more in depth.”

The Department of Energy Office of Science and the National Science Foundation support this research and sponsor the US-led upgrades to the LHC detectors.

Bellwied and his team are particularly interested in studying a heavy and metastable form of matter called strange matter. Strange matter is made up of clumps of quarks, much like the original colliding lead ions, but it contains at least one particularly heavy quark, called the strange quark.

“There are six quarks that exist in nature, but everything that is stable is made only out of the two lightest ones,” he says. “We want to see what other types of matter are possible. We know that matter containing strange quarks can exist, but how strange can we make it?”

Examining the composition, mass and stability of ‘strange’ matter could help illuminate how the early universe evolved and what role (if any) heavy quarks and metastable forms of matter played during its development.

by Sarah Charley at November 25, 2015 02:00 PM

Lubos Motl - string vacua and pheno

Does dark matter clump to radial filaments?
Earth's dark matter hair?

Lots of media including The Washington Post, Popular Science, Space Daily, Christian Science Monitor, Russia Today, and Fox News bring us the happy news that Nude Socialist already hyped in August.

The Earth is sprouting hair – radial filaments of dark matter.

This claim is taken from the July 2015 paper by Gary Prézeau, an experimenter at JPL NASA in Pasadena and a member of Planck,
Dense Dark Matter Hairs Spreading Out from Earth, Jupiter and Other Compact Bodies (arXiv)
which has just appeared in the Astrophysical Journal (which produced the new wave of interest). He claims that ordinary cold dark matter (CDM) organizes itself in such a way that compact objects, including the Earth and other planets, develop sufficiently thick radial filaments of dark matter – the hair.

Does it make any sense? I spent about 5 minutes trying to understand why such an anthropomorphic structure, completely different from the usual distributions, would develop. After some failed attempts to understand what this guy is talking about, I looked at the citation count and it remains at zero at this point. So I am surely not the only one who has problems.

Prézeau claims to have used some computer models but one shouldn't need computer models to explain the qualitative character of the "shape of ordinary dark matter", should he? After 10 minutes, I finally began to understand why he would believe such a thing. It's an interesting point but I still don't believe the conclusion.

A priori, you could think that for the dark matter to organize itself into well-defined filaments like that, it would need to be rather strongly interacting. After all, powerful molecular forces boiling down to electromagnetism have to act within a human hair for the hair to remain compact. There are no strong forces like that in CDM. So what is responsible for the clumping?

But as I understood, his reasoning is exactly the opposite one. The dark matter is supposed to be clumped into these filaments because it has almost no interactions. And interactions are needed for thermalization etc. So Prézeau claims that at the last scattering surface, the dark matter particles only live at/near a 3-dimensional submanifold of the 6-dimensional phase space.

And the subsequent evolution preserves the regularity and peaky character of the distribution. Only if the dark matter manages to orbit the galaxy several times do the position and momentum become "chaotic" – more or less Maxwell-Boltzmann-distributed – as the particles are perturbed by various local gravitational fields. But the WIMPs are only flying at 220 kilometers per second. With a circumference over \(10^{18}\) kilometers, it may take some \(10^{16}\) seconds or 0.3 billion years to orbit the galaxy. Those numbers say some 50 orbits since the Big Bang, which seems enough to randomize, but maybe it is not.

So he claims – while referring to the authority of some computers – that because of the concentrated character at the last scattering surface and the "too short and simple" evolution of the phase space in the subsequent 14 billion years, there will be easy-to-detect clumps. And because of the Earth's or other planets' gravitational fields, there will be hair that starts at a "root" with a very high density and extends outwards.

I seem to have problems with too many statements in the paper. First, I don't really see why the dark matter particle should start at a 3-dimensional manifold in the phase space only. It was spatially everywhere and only the magnitude of the momentum could have been constrained, approximately, right? And the kinetic energy was nonzero so it's still a 5-dimensional space.

Also, he talks about the general relativistic metric. I don't see why he would need general relativity to discuss the hypothetical clumping of matter particles in the Earth's weak gravitational field. Also, he admits that the focal points are \(2\times 10^{15}\) meters, some 10 AU, away from the Earth for the typical dark matter speed. But why doesn't he agree that this huge distance means that the Earth's gravity is way too weak to modify the distribution of the dark matter at nearby distances – thousands or tens of thousands of kilometers from our planet?

And where does the hypothetical clumping to "preferred angular locations" of the hair come from? The thickness of these filaments is supposed to be vastly smaller than the Earth's radius. Where would such a hugely accurate localization come from? He even proposes these "soon-to-be-discovered" filaments as probes to study geological layers of the Earth!

Also, even his claims about the Kepler problem seem to be wrong to me. When an "unbound" particle moves in the Earth's gravitational field, the trajectory is a hyperbola. At infinity, the hyperbola approaches two lines – in a plane that crosses the center of the Earth. But he seems to claim that the lines themselves go through the Earth's center, which they don't. Well, the asymptotic lines do become "close" to lines through the center visually, in the spherical coordinates, but the distance remains nonzero (and much greater than the Earth's radius) in the absolute sense. Prézeau seems to use his wrong idea about the asymptotics to claim that there is some focusing that doesn't actually exist.

And so on and so on. The paper offers lots of technically sounding claims and even elegant equations but it does seem to do almost nothing to explain the extraordinary claim about the shape of the dark matter. At this point, the paper seems to make almost no sense to me. Obviously, this detail doesn't prevent the journalists from selling this 0-citation paper as a scientific fact. For example, Forbes used the title "Strange But True: Dark Matter Grows Hair Around Stars And Planets".

Oh really? Wow, this text is actually by Ethan Siegel.

Does someone understand the paper better than I do, so that she could make me think it is less nonsensical than I thought?

by Luboš Motl at November 25, 2015 09:14 AM

Peter Coles - In the Dark

100 Years of General Relativity

Many people have been celebrating the centenary of the birth of Einstein’s Theory of General Relativity this year, but it’s not obvious precisely which date to select. I’ve decided to go for today, partly because the News on BBC Radio 3 did when I woke up this morning, but also because there is a well-known publication that mentions that date:


The 25th November 1915 was the date on which Einstein presented the “final” form of his theory to the Prussian Academy of Sciences. You can find a full translation of the paper “The Field Equations of Gravitation” here. You will see that he refers to a couple of earlier papers in that work, but I think this one is the first presentation of the full theory. It fascinated me, when I was looking at the history of GR for the textbook I was working on about 20 years ago, that the main results (e.g. on cosmology, the bending of light and the perihelion of Mercury) are spread over a large number of rather short papers rather than all being in one big one. I guess that was the style of the times!

So there you are, General Relativity has been around for 100 years. At least according to one particular reference frame…


Oh, and here’s a cute little video – funded by the Science and Technology Facilities Council – celebrating the centenary:


by telescoper at November 25, 2015 08:59 AM

Clifford V. Johnson - Asymptotia

The New Improved Scooby-Gang? (Part 1)

This is a group shot from an excellent event I mentioned on here only briefly:


(Click for larger view. Photo from album linked below.) It was on Back to the Future Day... the date (October 21st 2015) that Marty McFly came forward in time to, in the second of the BTTF movies... where we found hover boards and so forth, if you recall. The Science and Entertainment Exchange hosted a packed event at the Great Company (in downtown LA) which had several wonderful things and people, including some of the props from the films, the designer of lots of the props from the films, a ballroom done up like the high school prom of the first film, the actor who played George McFly (in the second two films), an actual DeLorean, and so much more. Oh! Also four experts who talked a bit about aspects of the science and other technical matters in the movies, such as [...] Click to continue reading this post

The post The New Improved Scooby-Gang? (Part 1) appeared first on Asymptotia.

by Clifford at November 25, 2015 01:56 AM

astrobites - astro-ph reader's digest

Zooming in on Betelgeuse
  • Title: The close circumstellar environment of Betelgeuse – III. SPHERE/ZIMPOL imaging polarimetry in the visible
  • Authors: P. Kervella, E. Lagadec, M. Montargès, S. T. Ridgway, A. Chiavassa, X. Haubois, H.-M. Schmid, M. Langlois, A. Gallenne, G. Perrin
  • First Author’s Institution: Unidad Mixta Internacional Franco-Chilena de Astronomía, CNRS/INSU, France & Departamento de Astronomía, Universidad de Chile and LESIA, Observatoire de Paris, PSL, CNRS, UPMC, Univ. Paris-Diderot
  • Paper Status: In press at Astronomy & Astrophysics

Have you ever wondered how to tell the difference between a bright star and a planet in the night sky? Astronomers have a trick: see if the Earth’s atmosphere makes it twinkle. Planets don’t twinkle because, even with a small telescope, they appear as little circles in the sky. It takes a lot of atmospheric turbulence to distort the image of a circular disk. But if you point that same telescope at a star and zoom all the way in, you’ll never zoom far enough to turn that dot into a disk. A star-dot is a “point source” in astro-speak, and dots twinkle.

In recent years, however, advances that pair adaptive optics with powerful telescopes have started resolving real images of the closest, biggest stars. Adaptive optics essentially cancels out the turbulence of twinkling, allowing a telescope to see the disks of stars.

Today’s paper uses this technique to study Betelgeuse. You know Betelgeuse—it’s the bright red one in Orion’s shoulder that’s bound to explode “soon” (so, maybe in the next 100,000 years). It’s an immense red supergiant, the type of star that regularly pushes the envelope of stellar physics. The first paper in this series used images in near-infrared wavelengths to partially resolve Betelgeuse’s photosphere, or visible surface, and begin to characterize the asymmetric material surrounding it. A second paper discovered that Betelgeuse’s circumstellar material is clumpy in the infrared, likely due to dust formation, and extends to tens of solar radii. (That sounds huge, but keep in mind Betelgeuse itself is nearly 1000 times larger than the Sun!)

As it turns out, adaptive optics and polarized visible light are a great way to probe Betelgeuse’s secrets, and that is the focus of today’s paper. This light has lots to tell us about interactions among the star’s surface, the closest and most-recently-ejected clumps of gas, and brand new polarized dust.


Asymmetric Betelgeuse and its environment imaged in visible light (top) and polarized visible light (bottom). Each column is a different filter. The red dashed circle indicates Betelgeuse’s infrared photospheric radius. The light dashed circle is three times this.

You have to squint really hard to make Betelgeuse spherical

As you can see in the images above, Betelgeuse is not symmetric, and neither is its circumstellar material. The top row shows brightness in different visible-light filters while the bottom row shows degree of polarization (light colors are more polarized than dark).

Most of the imaged polarized light is far from the star’s photosphere, and is probably polarized due to dust scattering. However, bits of this dust are close to the star, too! It’s well known that red supergiants like Betelgeuse lose significant amounts of mass. Mass loss seems to be connected to the huge convective cells inside supergiants, because they too are not spherically symmetric, but we don’t know precisely how. We do know the lost mass forms a circumstellar envelope around the star and provides the material from which dust can form. It follows that if the dust were all far away or all close-in, that would tell us something about how it got there. Instead, at any single distance away from the star, we find different amounts of dust and gas in a range of different temperatures and densities.


Left: A map of hydrogen emission (red) and absorption (blue) in the vicinity of Betelgeuse, with the same dashed lines as before for reference. Right: Color composite of three of the filters from the first figure (the narrow hydrogen alpha filter is excluded).

Hydrogen clues

Two of the filters used to image Betelgeuse are sensitive to the familiar red hydrogen alpha spectral feature. Because one filter is broader than the other, subtracting the light in the narrow filter from the light seen with the broad filter yields a map of where hydrogen gas is emitting or absorbing light. It also turns out to be highly asymmetric. Most of the hydrogen emission is confined within a distance of three times Betelgeuse’s near-infrared radius. It’s a similar distance from the star as most of the polarized dust, but the spatial distributions are different.
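
A minimal sketch of that filter-differencing idea, with placeholder arrays and an illustrative normalization rather than the authors’ actual SPHERE/ZIMPOL pipeline:

    import numpy as np

    # Stand-ins for the calibrated broad and narrow H-alpha filter frames.
    broad_img = np.ones((256, 256))    # broad filter: H-alpha line plus nearby continuum
    narrow_img = np.ones((256, 256))   # narrow filter: mostly the H-alpha line itself

    # Scale one frame so the pair matches away from the line before differencing;
    # the median ratio here is only an illustrative normalization choice.
    scale = np.median(broad_img) / np.median(narrow_img)
    halpha_map = broad_img - scale * narrow_img   # positive vs. negative values then
                                                  # trace hydrogen emission vs. absorption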

The main result of the paper is that Betelgeuse’s asymmetries persist in both dust and gas, with a major interface between the two located around three times the near-infrared stellar radius. These asymmetries agree with different types of past observations and also strongly point toward a connection between supergiant mass loss and vigorous convection.

Unlike many cosmic blobs, we should be able to witness Betelgeuse change shape. The authors close by suggesting we study how the inner edge of Betelgeuse’s circumstellar envelope evolves with time. So far we only have static images, but if there’s one thing astronomers like more than pictures, it’s movies. Besides, who knows—maybe we’ll catch a supernova in the making!

by Meredith Rawls at November 25, 2015 12:18 AM

November 24, 2015

Lubos Motl - string vacua and pheno

Point-like QFTs in the bulk can't be a consistent theory of QG
Dixon's research is impressive applied science using deep insights by others, mainly string theorists

Lance Dixon is a prominent particle theorist at SLAC. A few days ago, he gave an interview about quantum gravity.
Q&A: SLAC Theorist Lance Dixon Explains Quantum Gravity
He's been most tightly associated with multiloop calculations in quantum field theory (including some calculations at four loops, for example) and various tricks to climb over the seemingly "insurmountably difficult" technical obstacles that proliferate as you are adding loops to the Feynman diagrams. However, as a Princeton graduate student in the 1980s, he's done important research in string theory as well. Most famously, he is one of the co-fathers of the technique of the "orbifolds".

Also, most of his claims in the interview are just fine. But some of his understanding of the big picture is so totally wrong that you could easily post it at one of the crackpots' forums on the Internet.

To answer the question "what is quantum gravity?", he begins as follows:
With the exception of gravity, we can describe nature’s fundamental forces using the concepts of quantum mechanics.
Well, one needs to be more specific about the meaning of "can". In this form, the sentence pretty much says that as we know it, gravity is inconsistent with quantum mechanics. But this isn't right. Frank Wilczek's view is the opposite extreme. Frank says that gravity is so compatible with the quantum field theory (The Standard Model) that he already clumps them into one theory he calls "The Core Theory".

The truth is somewhere in between. Gravity as a set of phenomena is demonstrably consistent with quantum mechanics – we observe both of them in Nature while the gravitational (and other) phenomena simply can't escape the quantum logic of the Universe. And in fact, even our two old-fashioned theories are "basically consistent" for all practical and many of the impractical purposes. We can derive the existence of the Hawking radiation, gravitons, and even their approximate cross sections at any reasonable accuracy from the quantized version of general relativity. Using a straightforward combination of GR and QFT, we may even calculate the primordial gravitational fluctuations that have grown into galaxies and patterns in the CMB.

The "only" problem is that those theories can't be fully compatible or the predictions can't be arbitrarily precise, at least if we want to avoid the complete loss of predictivity (the need to measure infinitely many continuous parameters before any calculation of a prediction may be completed).

OK, Dixon says lots of sane things about the similarities and differences between electromagnetism and gravity, the character of difficulties we encounter when we apply the QFT methods to gravity, and some new hard gravitational phenomena such as the Hawking radiation. But things become very strange when he is asked:
Why is it so difficult to find a quantum theory of gravity?

One version of quantum gravity is provided by string theory, but we’re looking for other possibilities.
You may also look for X-Men in New York. You may spend lots of time with this search which doesn't mean that you will have a reasonable chance to find them. There are no X-Men! The case of "other possible" theories of quantum gravity aside from string theory is fully analogous.

Moreover, I don't really think that any substantial part of Dixon's own work could be described as this meaningless "search for other theories of quantum gravity". Whenever gravity enters his papers at all, he is researching well-known i.e. old approximate theories of quantum gravity – such as various supergravity theories.

Dixon says that gravitons' spin is two, the force is weak, and universally attractive. But the next question is:
How does this affect the calculations?

It makes the mathematical treatment much more difficult.

We generally calculate quantum effects by starting with a dominant mathematical term to which we then add a number of increasingly smaller terms.
He's describing "perturbative calculations" – almost all of his work may be said to be about "perturbative calculations". However, it is simply not true that this is the right way to do research of quantum mechanics "in general". Perturbation theory is just an important method.

It is true that if we talk about "quantum effects", in the sense of corrections, we must start with a "non-quantum effect" i.e. the classical approximation and calculate the more accurate result by adding the "quantum corrections". But it is simply not always the case that a chosen "classical result" is the dominant contribution. Sometimes, physics is so intrinsically quantum that one must try to make the full-fledged quantum calculations right away.

Even more importantly, he tries to obscure the fact that the perturbative – power law – corrections are not the only effects of quantum mechanics. When he does these power-law perturbative calculations, and his papers arguably never do anything else, he is not getting the exact result. There almost always exist infinitely many non-perturbative corrections, instantons etc. The existence of the non-perturbative effects is actually related to the divergence of the perturbative series as a whole.

To summarize, he is just vastly overstating the importance of the perturbative – and especially multiloop – calculations, the kind of calculations his work has focused on. You know, these multiloop terms are only important relative to the classical term if the quantum effects are sufficiently strong. But if they are strong enough to contribute \(O(100)\) percent of the result, then the non-perturbative terms neglected by Dixon will contribute \(O(100)\) percent, too. In other words, the multiloop terms are "in between" two other types of contributions, classical and nonperturbative, which is why they generally aren't the key terms.
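
Schematically, and only to illustrate the counting (generic coupling \(g\), placeholder coefficients \(c_i\), \(d_i\)):

\[ A(g) \;\sim\; \underbrace{c_0}_{\text{classical}} \;+\; \underbrace{c_1 g^2 + c_2 g^4 + \dots}_{\text{loop corrections}} \;+\; \underbrace{d_1\, e^{-c/g^2} + \dots}_{\text{non-perturbative}}, \]

so once \(g\) is large enough for the loop terms to shift the answer by \(O(100)\) percent, the exponentially suppressed terms are no longer negligible either.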

In practice, Dixon's work has been about the question "up to how many loops do all the divergences cancel" in a given supersymmetric theory. Does \(d=11\) supergravity cancel all divergences even at seven loops? True experts have to care about this question but ultimately, it is a technical detail. Supersymmetry allows the theory to "get rather far" but at the end, this theory and its toroidal compactifications can't be consistent and have to be completed to the full string/M-theory for consistency.

If you click the link in the previous sentence, you may remind yourself that nonperturbatively, the \(\mathcal{N}=8\) \(d=4\) SUGRA theory simply isn't OK. It wouldn't be OK even if the whole perturbative expansion of SUGRA were convergent (which I am not self-confidently excluding at all even though I do tend to believe those who say that there are divergences at 7 loops). This is why all the hard technical work in Dixon's multiloop papers consists of irrelevant technical details that simply don't affect the answers to the truly important questions. You don't need to know anything about Dixon's papers but you may still comprehend and verify the arguments in the following paragraph.

The theory has the noncompact continuous symmetry but the symmetry has to be broken because the spectrum of charged black hole microstates has to be discrete thanks to the Dirac quantization rule (the minimum electric and the minimum magnetic charge are "inverse" to one another if the Dirac string is invisible). That's why the \(E_{k(k)}(\mathbb{R})\) symmetry is unavoidably broken to a discrete subgroup of it, \(E_{k(k)}(\mathbb{Z})\), the subgroup that preserves the lattice of the charges, just like in string/M-theory, and all the other "purely stringy phenomena" that go beyond SUGRA (starting with the existence of low-tension/light strings in a weakly coupled, stringy limit of the moduli space we just identified) may then be proven to be there, too.
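
For reference, the Dirac quantization condition invoked above reads, in one common convention (factors of two differ between conventions),

\[ q_e\, q_m \;=\; 2\pi\hbar\, n, \qquad n\in\mathbb{Z}, \]

so the allowed electric and magnetic charges form a discrete lattice, and only the subgroup of the continuous symmetry that maps this lattice to itself can survive.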

Also, the \(\mathcal{N}=8\) SUGRA is too constrained because it's too supersymmetric. To get more realistic spectra, you need to reduce the SUSY and then the divergences unavoidably appear at a small number of loops. So effective gravitational QFTs are either realistic or relatively convergent at the multiloop level but not both. There is a trade-off here. Again, string/M-theory is the only way to make the theories realistic while preserving the convergence properties. In some sense, all the SUSY breaking in string theory may be said to be spontaneous (the compactification on a complicated manifold is a spontaneous symmetry breaking of symmetries that would be present for other manifolds).

SUGRA-like quantum field theories are wrong for other, perhaps more qualitative reasons. They can't really agree with holography or, more immediately, with the high-mass spectrum of the excitations. High mass excitations must be dominated by black hole microstates with the entropy scaling like the area. But the high energy density behavior of a QFT in a pre-existing background always sees the entropy scale like the volume. The real problem is that the background just can't be assumed to be fixed in any sense if we get to huge (Planckian) energy or entropy densities. It follows that the causal structure is heavily non-classical in the quantum gravity regime as well, and this is what makes the bulk QFT inapplicable as a framework.
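
The clash of scalings can be summarized with two standard formulas (in units with \(k_B = c = 1\)): a thermal gas of massless fields on a fixed background carries entropy

\[ S_{\rm QFT} \;\propto\; V\, T^3, \]

growing with the volume \(V\), while the Bekenstein–Hawking entropy of the black holes that dominate the high-mass spectrum is

\[ S_{\rm BH} \;=\; \frac{A}{4 G \hbar}, \]

growing only with the horizon area \(A\).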

This was an example but I want to stress a very general point that makes Dixon's argumentation totally weird:
Dixon uses all the self-evidently pro-string arguments as if they were arguments in favor of "another theory".
This paradox manifests itself almost in every aspect of Dixon's story. Let me be more specific. There are several paragraphs saying things like
We’ve succeeded in using this discovery to calculate quantum effects to increasingly higher order, which helps us better understand when divergences occur.
And these comments are implicitly supposed to substantiate Dixon's previous claim that "he is looking for other theories of quantum gravity". Except that virtually all the good surprises he has encountered exist thanks to insights discovered in string theory!

First of all, the cancellations of divergences in his SUGRA papers depend on supersymmetry – plus other structures but SUSY is really needed. (All known cancellations of divergences in \(\mathcal{N}=8\) SUGRA may be fully derived from SUSY and the non-compact \(E_{7(7)}(\mathbb{R})\) symmetry!) In the West, SUSY was discovered when people were trying to find a better string theory than the old \(d=26\) bosonic string theory. The world sheet supersymmetry was found to be necessary to incorporate fermions. And the spacetime supersymmetry emerged and seemed necessary to eliminate the tachyon in the spacetime. The ability of SUSY to cancel lots of (mostly divergent) terms was quickly revealed and became established. It was clear that SUSY is capable of cancelling the divergences; the only remaining questions were "which ones" and "how accurately".

You know, this kind of "silence" about the importance of SUSY for the cancellation of divergences, and about SUSY's role within string theory, is unavoidably inviting some insane interpretations. In the past, the notorious "Not Even Wrong" crackpot forum has often promoted the ludicrous story – implicitly encouraged by Dixon's comments – that maybe we don't need string theory because field theories might cancel the divergences.

The following blog post on that website would attack supersymmetry.

The irony is that the good news in the first story is primarily thanks to supersymmetry, which is trashed in the second story. So the two criticisms of string-theory-related physics directly contradict one another! You may either say that SUSY should be nearly banned in the search for better theories of Nature; or you may celebrate results that depend on SUSY. But you surely shouldn't do both at the same moment, should you?

But it's not just supersymmetry and the reasons behind the cancellation of divergences where Dixon's story sounds ludicrously self-contradictory. What about the relationship between gravitons and gluons?
What have you learned about quantum gravity so far?

Over the past decades, researchers in the field have made a lot of progress in better understanding how to do calculations in quantum gravity. For example, it was empirically found that in certain theories and to certain orders, we can replace the complicated mathematical expression for the interaction of gravitons with the square of the interaction of gluons – a simpler expression that we already know how to calculate.
So the insight that gravitons behave like squared gluons is also supposed to be an achievement of the "search for other, non-stringy theories of quantum gravity"? Surely you're joking, Mr Dixon. You know, this "gravitons are squared gluons" relationship is known as the KLT (Kawai-Lewellen-Tye) relationship. Their 1986 paper was called A Relation Between Tree Amplitudes of Closed and Open Strings. Do you see any strings in that paper? ;-) It is all about string theory – and the characteristic stringy properties of the particle spectrum and interactions (including the detailed analysis of the topologies of different strings).

The point is that an open string – a string with two endpoints – has the \(n\)-th standing wave and the corresponding modes to be excited, \(\alpha_n^\mu\). A closed string – topologically a circle – has the \(n\)-th left-moving wave and \(n\)-th right-moving wave. The operators capable of exciting the closed string come from left-movers and right-movers, \(\alpha_n^\mu\) and \(\tilde \alpha_n^\mu\). So the closed string has twice as many operators that may excite it – it looks like a pair of open strings living together (its Hilbert space is close to a tensor product of two open string Hilbert spaces). Similarly, the amplitudes for closed strings look like (combinations of) products of analogous amplitudes from two copies of open strings. That's the basic reason behind all these KLT relationships. And now, in 2015, Dixon indirectly suggests that this relationship is an achievement of the search for non-stringy theories of quantum gravity?

This relationship was found purely within string theory and it only remains valid and non-vacuous to the extent to which you preserve a significant portion of the string dynamics. The relationship tells you lots about the dynamics of the massless states as well. But you can't really find any good quantitative explanation why the relationship works in so many detailed situations that would be non-stringy. It's only in string theory where the graviton \(\alpha^{\mu}_{-1}\tilde\alpha^\nu_{-1}|0\rangle\) is "made of" two gluons – because it has these two creation operators which are analogous to the one creation operator in the gluon open string state \(\alpha^{\mu}_{-1}|0\rangle\). The point-like graviton has the spin two, as two times the spin of a gluon, but you can't "see" the two gluons inside the graviton because all the particles are infinitely small.
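
In formulas, and only schematically: in the field-theory (\(\alpha'\to 0\)) limit the four-point KLT relation takes the form, up to convention-dependent couplings and factors,

\[ M_4(1,2,3,4) \;\propto\; s_{12}\, A_4(1,2,3,4)\, \tilde{A}_4(1,2,4,3), \qquad s_{12} = (p_1+p_2)^2, \]

with \(M_4\) a graviton amplitude and \(A_4\), \(\tilde{A}_4\) color-ordered gauge amplitudes; the full string-level relation dresses this with trigonometric functions of \(\alpha'\) times the Mandelstam invariants.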

And this kind of irony goes on and on and on. He has used SUSY and KLT relationships as evidence for a "non-stringy" theory of quantum gravity. Is there something else that he can use against strings? Sure, dualities! ;-)
We were also involved in a recent study in which we looked at the theory of two gravitons bouncing off each other. It was shown over 30 years ago that divergences occurring on the second order of these calculations can change under so-called duality transformations that replace one description of the gravitational field with a different but equivalent one. These changes were a surprise because they could mean that the descriptions are not equivalent on the quantum level. However, we’ve now demonstrated that these differences actually don’t change the underlying physics.
This is about equally amazing. You know, this whole way of "duality" reasoning – looking for and finding theories whose physics is the same although superficially, there seem to be serious technical differences between two theories or vacua – has spread purely because of the research done by string theorists in the early and mid 1990s. The first paper that Dixon et al. cite is a 1980 SUGRA paper by Duff and Nieuwenhuizen and the duality is meant to be "just" an electromagnetic duality for the \(p\)-forms. But before string theory, people indeed believed that such dualities weren't exact symmetries of the theories. Only within string-theory-based research were many such dualities shown to be surprisingly exact. They are just claiming a similar phenomenon in a simpler theory. They would probably never dare to propose such a conjecture if there were no stringy precedents for this remarkably exact relationship. The previous sentence may be a speculation but what is not a speculation is that they're far from the first ones who have brought evidence for the general phenomenon, a previously disbelieved exact equivalence (duality). Tons of examples of this phenomenon have previously been found by string theorists.

Most of the examples of dualities arose in the context of string theory but even the cases of dualities that apply to field theories, like insights about the Seiberg-Witten \(\mathcal{N}=2\) gauge theories etc., were found when the authors were thinking about the full stringy understanding of the physical effects. They may have tried to hide their reasoning in their paper to make the paper more influential even among the non-stringy researchers but you can't hide the truth forever. Most experts doing this stuff today are thinking in terms of the embeddings to string/M-theory anyway because those embeddings are extremely natural if not paramount.

So what Dixon was doing was just trying to apply a powerful tool discovered in the string theory research to a situation that is less rich than the situations dealt with in string theory.

Near the end, Dixon joined the irrational people who don't like that string theory has many solutions:
However, over the years, researchers have found more and more ways of making string theories that look right. I began to be concerned that there may be actually too many options for string theory to ever be predictive, when I studied the subject as a graduate student at Princeton in the mid-1980s. About 10 years ago, the number of possible solutions was already on the order of \(10^{500}\). For comparison, there are less than \(10^{10}\) people on Earth and less than \(10^{12}\) stars in the Milky Way. So how will we ever find the theory that accurately describes our universe?

For quantum gravity, the situation is somewhat the opposite, making the approach potentially more predictive than string theory, in principle. There are probably not too many theories that would allow us to properly handle divergences in quantum gravity – we haven’t actually found a single one yet.
I had to laugh out loud. So Dixon wants one particular theory. He has zero of them so he's equally far from a theory of everything as string theorists who have a theory with many solutions. Is that meant seriously? Zero is nothing! In Czech, when you have zero of something, we say that you have "a šit". Nothing is just not too much.

Moreover, Dixon's comment about "making string theories" has been known to be totally wrong since the mid 1990s. There is only one string theory which has many solutions – just like the equation for the hydrogen energy eigenstates has many solutions. There are not "many string theories". This fact wasn't clear before the mid 1990s but it became totally clear afterwards. When Dixon continues to talk about "many string theories", it's just like someone who talks about evolution but insists that someone created many species at the same moment. The whole point of evolution is that this isn't the case. Even though the species look different, they ultimately arose from the same ancestors.

To talk about very many ancestors of known species means to seriously misunderstand or distort the very basics of biology and Dixon is doing exactly the same thing with the different string vacua. What he's saying is as wrong as creationism. A professional theoretical physicist simply shouldn't embarrass himself in this brutal way in 2015.

Dixon wants to say that we want "one right theory" and he has "zero" while string theorists have "\(10^{500}\)", which is also far from the number he wants, one. But even if this "distance" were measuring the progress, the whole line of reasoning would be totally irrational because the number "one" is pure prejudice with zero empirical or rational support. You may fool yourself by saying that a theory of nuclei predicting 1 or 50 possible nuclei (or a theory of biology predicting that there should be 1 or 50 mammal species) is "more predictive" and therefore "better" but this rhetorical sleight-of-hand won't make the number 1 or 50 right. The right number of nuclei or mammal species or vacuum-like solutions to the underlying equations is much higher than 1 or 50. Emotional claims about a "better predictivity" can never beat or replace the truth! It's too bad that Dixon basically places himself among the dimwits who don't understand this simple point.

What we observe is that there exists at least one kind of a Universe, or one string vacuum if you describe physics by string theory. There is no empirical evidence whatever that the number isn't greater than one or much greater than one. Instead, there is a growing body of theoretical evidence that the right number almost certainly exceeds one dramatically.

At some moment, Lance Dixon decided to study the heavily technical multiloop questions and similar stuff. It's a totally serious subset of work in theoretical physics but it simply lacks the "wow" factor. Maybe he wants to fool others as well as himself into thinking that the "wow" factor is there. But it isn't there. A cancellation of 4-loop divergences in a process described by a theory is simply a technicality. A physicist who can calculate such things is surely impressively technically powerful and that is what will impress fellow physicists. But the result itself is unlikely to be a game-changer. Most of such results are entries in a long telephone directory of comparable technical results and non-renormalization theorems.

The true game-changers of the last 40 years were concepts like supersymmetry, duality, KLT relations, holography and AdS/CFT, Matrix theory, ER-EPR or entanglement-glue correspondence, and perhaps things like the Yangian, recursive organization of amplitudes sorted by the helicities etc. Many of them have been used in Dixon's technically impressive research. But this research has been an application of conceptually profound discoveries made by others, not a real source of new universally important ideas, and it's just very bad if Dixon tries to pretend something else.

And that's the memo.

by Luboš Motl at November 24, 2015 06:54 PM

Emily Lakdawalla - The Planetary Society Blog

Blue Origin Lands Spent Suborbital Rocket Stage in Texas
Secretive spaceflight company Blue Origin flew its New Shepard launch vehicle to the edge of space, deployed a suborbital spacecraft and returned the spent booster rocket to Earth for an upright landing.

November 24, 2015 04:24 PM

Emily Lakdawalla - The Planetary Society Blog

2015 Reviews of children's books about space
Continuing an annual tradition, Emily Lakdawalla reviews children's books about space -- what's out there, how we explore, and why. Many of the books on this list aren't just for kids!

November 24, 2015 04:13 PM

Peter Coles - In the Dark

Chancellor’s Autumn Statement Poll

There’s a not inconsiderable amount of anxiety around as tomorrow’s Autumn Statement approaches. The likelihood is that we will see drastic cuts to everything, including science and education, and huge job losses and cuts to public services around the country.

In order to gauge public opinion, ahead of the announcement of the end of British Civil Society I have decided to conduct a poll.


And in case it’s all too depressing to think about, Dorothy has knitted a Soup Dragon to cheer you up.



by telescoper at November 24, 2015 03:17 PM

Symmetrybreaking - Fermilab/SLAC

Charge-parity violation

Matter and antimatter behave differently. Scientists hope that investigating how might someday explain why we exist.

One of the great puzzles for scientists is why there is more matter than antimatter in the universe—the reason we exist.

It turns out that the answer to this question is deeply connected to the breaking of fundamental conservation laws of particle physics. The discovery of these violations has a rich history, dating back to 1956.

Parity violation

It all began with a study led by scientist Chien-Shiung Wu of Columbia University. She and her team were studying the decay of cobalt-60, an unstable isotope of the element cobalt. Cobalt-60 decays into another isotope, nickel-60, and in the process, it emits an electron and an electron antineutrino. The nickel-60 isotope then emits a pair of photons. 

The conservation law being tested was parity conservation, which states that the laws of physics shouldn’t change when all the signs of a particle’s spatial coordinates are flipped. The experiment observed the decay of cobalt-60 in two arrangements that mirrored one another. 

The release of photons in the decay is an electromagnetic process, and electromagnetic processes had been shown to conserve parity. But the release of the electron and electron antineutrino is a radioactive decay process, mediated by the weak force. Such processes had not been tested in this way before.

Parity conservation dictated that, in this experiment, the electrons should be emitted in the same direction and in the same proportion as the photons. 

But Wu and her team found just the opposite to be true. This meant that nature was playing favorites. Parity, or P symmetry, had been violated.

Two theorists, Tsung Dao Lee and Chen Ning Yang, who had suggested testing parity in this way, shared the 1957 Nobel Prize in physics for the discovery.

Charge-parity violation

Many scientists were flummoxed by the discovery of parity violation, says Ulrich Nierste, a theoretical physicist at the Karlsruhe Institute of Technology in Germany. 

“Physicists then began to think that they may have been looking at the wrong symmetry all along,” he says. 

The finding had ripple effects. For one, scientists learned that another symmetry they thought was fundamental—charge conjugation, or C symmetry—must be violated as well. 

Charge conjugation is a symmetry between particles and their antiparticles. When applied to particles with a property called spin, like quarks and electrons, the C and P transformations are in conflict with each other.

This means that neither can be a good symmetry if one of them is violated. But, scientists thought, the combination of the two—called CP symmetry—might still be conserved. If that were the case, there would at least be a symmetry between the behavior of particles and their oppositely charged antimatter partners. 

Alas, this also was not meant to be. In 1964, a research group led by James Cronin and Val Fitch discovered in an experiment at Brookhaven National Laboratory that CP is violated, too. 

The team studied the decay of neutral kaons into pions; both are composite particles made of a quark and antiquark. Neutral kaons come in two versions that have different lifetimes: a short-lived one that primarily decays into two pions and a long-lived relative that prefers to leave three pions behind.  

However, Cronin, Fitch and their colleagues found that, rarely, long-lived kaons also decayed into two instead of three pions, which required CP symmetry to be broken. 

The discovery of CP violation was recognized with the 1980 Nobel Prize in physics. And it led to even more discoveries. 

It prompted theorists Makoto Kobayashi and Toshihide Maskawa to predict in 1973 the existence of a new generation of elementary particles. At the time, only two generations were known. Within a few years, experiments at SLAC National Accelerator Laboratory found the tau particle—the third generation of a group including electrons and muons. Scientists at Fermi National Accelerator Laboratory later discovered a third generation of quarks—bottom and top quarks.

Digging further into CP violation

In the late 1990s, scientists at Fermilab and European laboratory CERN found more evidence of CP violation in decays of neutral kaons. And starting in 1999, the BaBar experiment at SLAC and the Belle experiment at KEK in Japan began to look into CP violation in decays of composite particles called B mesons.

By analyzing dozens of different types of B meson decays, scientists on BaBar and Belle revealed small differences in the way B mesons and their antiparticles fall apart. The results matched the predictions of Kobayashi and Maskawa, and in 2008 their work was recognized with one half of the physics Nobel Prize.

“But checking if the experimental data agree with the theory was only one of our goals,” says BaBar spokesperson Michael Roney of the University of Victoria in Canada. “We also wanted to find out if there is more to CP violation than we know.”

This is because these experiments are seeking to answer a big question: Why are we here?

When the universe formed in the big bang 14 billion years ago, it should have generated matter and antimatter in equal amounts. If nature treated both exactly the same way, matter and antimatter would have annihilated each other, leaving nothing behind but energy. 

And yet, our matter-dominated universe exists.

CP violation is essential to explain this imbalance. However, the amount of CP violation observed in particle physics experiments so far is a million to a billion times too small.

Current and future studies

Recently, BaBar and Belle combined their data treasure troves in a joint analysis (“First observation of CP violation in B0->D(*)CP h0 decays by a combined time-dependent analysis of BaBar and Belle data”). It revealed for the first time CP violation in a class of B meson decays that neither experiment could have analyzed alone due to limited statistics.

This and all other studies to date are in full agreement with the standard theory. But researchers are far from giving up hope on finding unexpected behaviors in processes governed by CP violation.

The future Belle II, currently under construction at KEK, will produce B mesons at a much higher rate than its predecessor, enabling future CP violation studies with higher precision.

And the LHCb experiment at CERN’s Large Hadron Collider is continuing studies of B mesons, including heavier ones that were only rarely produced in the BaBar and Belle experiments. The experiment will be upgraded in the future to collect data at 10 times the current rate.

To date, CP violation has been observed only in particles like these, which are made of quarks.

“We know that the types of CP violation already seen using some quark decays cannot explain matter’s dominance in the universe,” says LHCb collaboration member Sheldon Stone of Syracuse University. “So the question is: Where else could we possibly find CP violation?”

One place for it to hide could be in the decay of the Higgs boson. Another place to look for CP violation is in the behavior of elementary leptons—electrons, muons, taus and their associated neutrinos. It could also appear in different kinds of quark decays. 

“To explain the evolution of the universe, we would need a large amount of extra CP violation,” Nierste says. “It’s possible that this mechanism involves unknown particles so heavy that we’ll never be able to create them on Earth.”

Such heavyweights would have been produced only in the very early universe and could be related to the lack of antimatter in the universe today. Researchers search for CP violation in much lighter neutrinos, which could give us a glimpse of a possible large violation at high masses.

The search continues.

by Manuel Gnida and Kathryn Jepsen at November 24, 2015 02:00 PM

arXiv blog

Data Mining Reveals How Smiling Evolved During a Century of Yearbook Photos

By mining a vast database of high-school yearbook photos, a machine-vision algorithm reveals the change in hairstyles, clothing, and even smiles over the last century.

Data mining has changed the way we think about information. Machine-learning algorithms now routinely chomp their way through data sets of Twitter conversations, travel patterns, phone calls, and health records, to name just a few. And the insights this brings are dramatically improving our understanding of communication, travel, health, and so on.

November 24, 2015 11:27 AM

November 23, 2015

Emily Lakdawalla - The Planetary Society Blog

NASA Orders First Official SpaceX Crew Flight to ISS
NASA placed its first official order for a SpaceX Crew Dragon to carry astronauts to the International Space Station, the agency announced Friday.

November 23, 2015 11:13 PM

Peter Coles - In the Dark

Want to use the Open Journal of Astrophysics? Get an Orcid ID!

We’re getting ready to launch the Open Journal of Astrophysics site so, for all the folks out there who are busy preparing to submit papers, let me just give you advance warning of how it works. The website is currently being tested with real submissions, but these have so far been canvassed from the Editorial Board for testing purposes: the journal is not yet available for general submission, and the site is not yet public. Once we’re sure everything is fully functional we will open up.

Anyway, in order to submit a paper you will need to obtain an ORCID ID. In a nutshell this is a unique identifier that makes it much easier to keep track of researchers than via names, email addresses or whatever. It can be used for many things other than the Open Journal project so it’s a good thing to do in itself.

You can register for an ID here. It only takes seconds to do it, so do it now! You can find out more about ORCID here. When you have your ORCID ID you can log into our Open Journal website to submit a paper.

The Open Journal is built on top of the arXiv which means that all papers submitted to the Open Journal must be submitted to the arXiv first. This in turn means that you must also be registered as a “trustworthy” person to submit there. You can read about how to do that here. When you have succeeded in submitting your paper to the arXiv you can proceed to submit it to the Open Journal.

As an aside, we do have a Latex template for The Open Journal, but you can for the time being submit papers in any style as long as the resulting PDF file is readable.

To submit a paper to be refereed by The Open Journal all you need to do is type in its arXiv ID and the paper will be imported into the Open Journal. The refereeing process is very interactive – you’ll like it a lot – and when it’s completed the paper will be published, assigned a Digital Object Identifier (DOI) and will be entered into the CrossRef system for the purpose of gathering citations and other bibliometric data.

We will be issuing a general call for submissions very soon, at which point we will also be publishing general guidance in the form of an FAQ, which includes information about copyright etc. In the meantime, all you need to do is get your ORCID ID and get your papers on the arXiv!

by telescoper at November 23, 2015 04:25 PM

astrobites - astro-ph reader's digest

A 1500 Year Old Explosion (maybe)

Title: Discovery of an eclipsing dwarf nova in the ancient nova shell Te 11

Authors: Brent Miszalski, P. A. Woudt, S. P. Littlefair et al.

First author’s institution: South African Astronomical Observatory

Status: Accepted for publication in MNRAS


The nebula Te 11. Observations of the interacting star at its core suggest it may have been formed by a giant explosion, matching one spotted by astronomers over 1500 years ago.

On the 16th of November in 483 CE, astronomers in China recorded the appearance of “a guest star east of Shen, as large as a peck measure, and like a fuzzy star”. The new celestial light shone brightly for just under a month, then faded to nothing. Over 1500 years later, the authors of today’s paper may have found the source.

The suspect is a nebula known as Te 11, a cloud of expanding, glowing gas around half a light-year across at its widest point. Te 11 was originally thought to be a planetary nebula. These are, confusingly, nothing to do with planets, but are instead made out of material thrown off a red giant star as it shrinks into a white dwarf.

But although visually Te 11 looks like a planetary nebula, many of its characteristics don’t quite fit. It’s moving too slowly, and has much less mass than other, confirmed examples.

To search for alternative ways in which the nebula could have formed, the authors obtained a light curve, shown in the figure below, and spectroscopy of the object lurking in Te 11’s centre. They found a white dwarf, just as the planetary nebula hypothesis predicted. But it wasn’t alone.


Light curve of the dwarf nova at the core of Te 11, showing an eclipse of the white dwarf by its companion star. The purple, red, yellow and green lines show the contribution from the white dwarf, disc, a bright spot on the disc, and elliptical shape of the system, respectively, adding up to make the blue line.

The white dwarf is accompanied by an M dwarf star, so close together that they orbit around their centre of mass in less than three hours. At such close proximity, the gravity of the white dwarf draws material off its companion, forming a ring of gas known as an accretion disc. The material in the disc then gradually spirals down onto the white dwarf.


Artist’s impression of a dwarf nova. The gravity of the white dwarf draws material off the companion star, forming an accretion disc. Image: NOAA

In a number of these systems, the disc becomes unstable every few years, probably due to a change in the viscosity of the gas caused by a rise in temperature (no one is exactly sure how it works). The material falling onto the white dwarf briefly turns from a gentle trickle into a raging torrent, releasing huge amounts of gravitational energy as light. The regular mini-explosions give the systems their name: Dwarf novae, after the larger cosmic explosions called novae and supernovae.

The authors’ observations of Te 11 had been prompted by five nova-like events in the last ten years, spotted by the Catalina Real-Time Transient Survey. The new observations both confirmed that the system was a dwarf nova, and provided precise measurements of some of the characteristics of the two stars, such as their masses and radii.

Te 11 hosts an unusually massive white dwarf, 1.18 times the mass of the Sun (a typical white dwarf is around 0.6). This meant that, as well as dwarf novae, bigger classical novae could also occur. Classical novae take place when the mass building up on the white dwarf becomes so dense that the hydrogen begins to fuse, releasing huge amounts of energy and blowing apart the (newly added) outer layers of the star.

Such a high-mass white dwarf means that a nova could reasonably have occurred recently, within a timescale of hundreds of years. The material from the nova would have slammed into the unusually dense interstellar medium in the area, creating the Te 11 nebula. The authors postulate that this huge explosion was the source of the “fuzzy star” spotted in 483 CE.

Miszalski et al. finish by suggesting that more novae could have occurred since then, and high-resolution imaging might reveal shells of material nestled inside the nebula. Observing these would give unprecedented insight into the physics of novae and the structures they leave behind.


by David Wilson at November 23, 2015 03:27 PM

CERN Bulletin

CERN Bulletin Issue No. 47-48/2015
Link to e-Bulletin Issue No. 47-48/2015

November 23, 2015 11:53 AM

The Great Beyond - Nature blog

Archived newsblog

Nature’s news team is no longer updating this newsblog: all articles below are archived and extend up to the end of 2014.

Please go to for the latest breaking news from Nature.

by Richard Van Noorden at November 23, 2015 10:20 AM

Tommaso Dorigo - Scientificblogging

Supersymmetry Is About To Be Discovered, Kane Says
While in the process of fact-checking information that is contained in the book I am finalizing, I had the pleasure to have a short discussion with Gordon Kane during the weekend. A Victor Weisskopf distinguished professor at the University of Michigan as well as a director emeritus of the Michigan Center for Theoretical Physics, Gordon is one of the fathers of Supersymmetry, and has devoted the last three decades to its study.

read more

by Tommaso Dorigo at November 23, 2015 09:50 AM

November 21, 2015

Sean Carroll - Preposterous Universe

Long-Term Forecast

This xkcd cartoon is undeniably awesome as-is, but the cosmologist in me couldn’t resist adding one more row at the bottom.


Looks like the forecast calls for Boltzmann Brains! I guess Hilbert space is finite-dimensional after all.

by Sean Carroll at November 21, 2015 02:00 AM

November 20, 2015

Symmetrybreaking - Fermilab/SLAC

Physicists get a supercomputing boost

Scientists have made the first-ever calculation of a prediction involving the decay of certain matter and antimatter particles.

Sometimes the tiniest difference between a prediction and a result can tell scientists that a theory isn’t quite right and it’s time to head back to the drawing board.

One way to find such a difference is to refine your experimental methods to get more and more precise results. Another way to do it: refine the prediction instead. Scientists recently showed the value of taking this tack using some of the world’s most powerful supercomputers.

An international team of scientists has made the first ever calculation of an effect called direct charge-parity violation—or CP violation—a difference between the decay of matter particles and of their antimatter counterparts.

They made their calculation using the Blue Gene/Q supercomputers at the RIKEN BNL Research Center at Brookhaven National Laboratory, at the Argonne Leadership Class Computing Facility at Argonne National Laboratory, and at the DiRAC facility at the University of Edinburgh.

Their work took more than 200 million supercomputer core processing hours—roughly the equivalent of 2000 years on a standard laptop. The project was funded by the US Department of Energy’s Office of Science, the RIKEN Laboratory of Japan and the UK Science and Technology Facilities Council.

The scientists compared their calculated prediction to experimental results established in 2000 at European physics laboratory CERN and Fermi National Accelerator Laboratory.

Scientists first discovered evidence of indirect CP violation in a Nobel-Prize-winning experiment at Brookhaven Lab in 1964. It took them another 36 years to find evidence of direct CP violation.

“This so-called ‘direct’ symmetry violation is a tiny effect, showing up in just a few particle decays in a million,” says Brookhaven physicist Taku Izubuchi, a member of the team that performed the calculation.

Physicists look to CP violation to explain the preponderance of matter in the universe. After the big bang, there should have been equal amounts of matter and antimatter, which should have annihilated with one another. A difference between the behavior of matter and antimatter could explain why that didn’t happen.

Scientists have found evidence of some CP violation—but not enough to explain why our matter-dominated universe exists.

The supercomputer calculations, published in Physical Review Letters (“Standard Model prediction for direct CP violation in K→ππ decay”), so far show no statistically significant difference between prediction and experimental result in direct CP violation.

But scientists expect to double the accuracy of their calculated prediction within two years, says Peter Boyle of the University of Edinburgh. “This leaves open the possibility that evidence for new phenomena… may yet be uncovered.”

by Kathryn Jepsen at November 20, 2015 07:09 PM

Tommaso Dorigo - Scientificblogging

Anomaly! - A Different Particle Physics Book
I was very happy today to sign a contract with an international publisher that will publish a book I have written. The book, titled "Anomaly! - Scientific Discoveries and the Quest for the Unknown", focuses on the CDF experiment, a particle detector that operated at the Tevatron collider for 30 years. 
The Tevatron was the highest-energy collider until the turn-on of the LHC. The CDF and DZERO experiments there discovered the sixth quark, the top, and produced a large number of world-class results in particle physics. 

read more

by Tommaso Dorigo at November 20, 2015 02:02 PM

Matt Strassler - Of Particular Significance

An Overdue Update

A number of people have asked why the blog has been quiet. To make a long story short, my two-year Harvard visit came to an end, and my grant proposals were turned down. No other options showed up except for a six-week fellowship at the Galileo Institute (thanks to the Simons Foundation), which ended last month.  So I am now employed outside of science, although I maintain a loose affiliation with Harvard as an “Associate of the Physics Department” (thanks to Professor Matt Schwartz and his theorist colleagues).

Context: U.S. government cuts to theoretical high-energy physics groups have been 25% to 50% in the last couple of years. (Despite news articles suggesting otherwise, billionaires have not made up for the cuts; and most donations have gone to string theory, not particle physics.) Spare resources are almost impossible to find. The situation is much better in certain other countries, but personal considerations keep me in this one.

News from the Large Hadron Collider (LHC) this year, meanwhile, is optimistic though not without worries. The collider itself operated well despite some hiccups, and things look very good for next year, when the increased energy and high collision rate will make the opportunities for discoveries the greatest since 2011. However, success depends upon the CMS experimenters and their CERN lab support fixing some significant technical problems afflicting the CMS detector and causing it to misbehave some fraction of the time. The ATLAS detector is working more or less fine (as is LHCb, as far as I know), but the LHC can’t run at all while any one of the experimental detectors is open for repairs. Let’s hope these problems can be solved quickly and the 2016 run won’t be much delayed.

There’s a lot more to say about other areas of the field (gravitational waves, neutrinos, etc.) but other bloggers will have to tell those tales. I’ll keep the website on-line, and will probably write some posts if something big happens. And meanwhile I am slowly writing a book about particle physics for non-experts. I might post some draft sections on this website as they are written, and I hope you’ll see the book in print sometime in the next few years.

Filed under: Housekeeping, Uncategorized

by Matt Strassler at November 20, 2015 05:02 AM

November 19, 2015

astrobites - astro-ph reader's digest

Ancient Stars from the Cosmic Dawn

Title: Extremely metal-poor stars in the cosmic dawn in the bulge of the Milky Way

Authors: L.M. Howes et al.

First author’s institution: Research School of Astronomy & Astrophysics, Australian National University

Status: Published in Nature journal

Every way you look in the night sky, you see twinkles of ancient light streaming in from the forgotten past. Our observable Universe is teeming with a mind-bogglingly large number of stars, on the order of ~10²². It is incredible to think that roughly 13 billion years ago, there were none. When the largest dark matter overdensities collapsed, about one-hundredth of the Universe’s current age (roughly 200 million years) after the big bang, they eventually grew to become the inner regions (or “bulges”) of galaxies by attracting matter around them. Within these murky nebulous regions, the very first stars burst gloriously into existence.

Pristine Stars

What did the first stars look like? We have never seen one before, but the early Universe left a smattering of clues for us. We think the first stars should be metal-free, because the primordial Universe contained only the simple elements hydrogen and helium. Call us astronomers failed chemists — in the astronomy world, metals are any elements heavier than helium, such as lithium, iron, and carbon. Metals are produced in stellar cores through fusion reactions and are then dispersed into galaxies through supernovae. New stars that form later will then contain some of these recycled metals. As a galaxy ages, its environment becomes more and more chemically enriched by supernova events. Therefore, the more metal a star contains, the later it formed and the younger it is. The amount of metal in a star is known as its “metallicity”: the logarithm of the ratio of the number of metal atoms to the number of hydrogen atoms, measured relative to the same ratio in the Sun. It is commonly written as [Z/H], where Z refers to a certain species of metal; most of the time, Z is iron (symbol: Fe). Young stars rich in metals have positive [Fe/H] while old metal-poor stars typically have negative [Fe/H].
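
Written out, this bracket notation is

\[ [\mathrm{Fe/H}] \;=\; \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star} \;-\; \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot}, \]

so the Sun sits at \([\mathrm{Fe/H}] = 0\), and a star with \([\mathrm{Fe/H}] = -2.3\) has roughly \(10^{-2.3} \approx 1/200\) of the solar iron-to-hydrogen ratio.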

Senior Citizens in the Milky Way Bulge…

The Milky Way bulge is thought to be swarming with immediate descendants of the first stars. However, it is very difficult to detect them due to dust and crowding from other stars. Using the SkyMapper telescope in Australia, the authors successfully detected 500 metal-poor bulge stars, making this survey the first of its kind.

Among the metal-poor bulge stars, the authors re-observed the 23 most metal-poor ones to determine their chemical composition. These 23 stars have [Fe/H] ≤ -2.3, roughly similar to most halo stars. Despite similar metallicities, these bulge stars formed much, much earlier than their halo counterparts. Skeptics might be suspicious of the bulge membership of these stars, since halo stars can sometimes pass by the bulge as interlopers. The authors measured the orbits and velocities of their stars and established that half of them have tightly bound orbits around the bulge (including their most metal-poor star), confirming that at least half of their 23 stars are real members of the bulge.

…with a distaste for Carbon

The authors compared the chemical composition of their stars with halo stars at the same metallicities to look for possible differences. For most elements, the metal-poor bulge stars are very similar to their halo counterparts. Carbon, however, begs to differ. Many metal-poor stars have abnormally high abundances of carbon compared to iron ([C/Fe] > 1) and the number of stars that are carbon-enhanced increases with decreasing metallicity. Don’t ask why yet — astronomers are still scratching their heads (and working hard on possible explanations). These stars are appropriately named “Carbon-Enhanced Metal-Poor stars” or CEMP stars. The metal-poor stars in the Milky Way halo are also carbon-enhanced; the metal-poor bulge stars aren’t. This is a double surprise because theory also predicts that we should find more carbon-enhanced stars towards the bulge. Figure 1 shows the carbon abundance vs. metallicity of these bulge stars.


Fig. 1 – Abundance ratio of carbon to iron, [C/Fe], as function of metallicity [Fe/H] of the 23 metal-poor bulge stars in this paper (red dots). Metal-poor halo stars (black dots) and metal-rich bulge stars (blue triangles) are plotted for comparison. The dotted lines refer to Solar values. [Figure 3a from the paper]

Among the 23 stars, the authors uncovered a star with an unprecedentedly low metallicity, one that is at least ten times more iron-poor than the other metal-poor bulge stars. This star sets a new record as the most metal-poor and oldest bulge star we know today. The authors also did not detect any carbon in the star’s spectrum, achieving only an upper limit (the red arrow in Figure 1). By comparing the chemical composition of the star with synthetic yields from different types of supernovae, as shown in Figure 2, the authors suggested that it could have arisen from a hypernova, a supernova that is ten times more powerful.


Fig. 2 – The chemical abundance [X/Fe] of the most metal-poor bulge star in this paper for elements X as listed at the top. Measured abundances are marked by the open stars. The various lines are synthetic supernova yields for a 40 Msun hypernova (HN), a 15 Msun core-collapse supernova (SN), and a supermassive pair-instability supernova (PISN). Compared with the other mechanisms, the chemical yield from a 40 Msun hypernova is the most consistent with the observed points. [Figure 3b from the paper]

There is still a lot of room for discovery as the authors only covered one-third of the Milky Way bulge. With a larger and deeper survey, we should be able to find more metal-poor stars in the Milky Way bulge to help propel our currently very limited understanding of these stars and the first stars.


by Suk Sien Tie at November 19, 2015 03:58 PM

Symmetrybreaking - Fermilab/SLAC

Shrinking the accelerator

Scientists plan to use a newly awarded grant to develop a shoebox-sized particle accelerator in five years.

The Gordon and Betty Moore Foundation has awarded $13.5 million to Stanford University for an international effort, including key contributions from the Department of Energy’s SLAC National Accelerator Laboratory, to build a working particle accelerator the size of a shoebox. It’s based on an innovative technology known as “accelerator on a chip.”

This novel technique, which uses laser light to propel electrons through a series of artfully crafted glass chips, has the potential to revolutionize science, medicine and other fields by dramatically shrinking the size and cost of particle accelerators.

“Can we do for particle accelerators what the microchip industry did for computers?” says SLAC physicist Joel England, an investigator with the 5-year project. “Making them much smaller and cheaper would democratize accelerators, potentially making them available to millions of people. We can’t even imagine the creative applications they would find for this technology.”

Robert L. Byer, a Stanford professor of applied physics and co-principal investigator for the project who has been working on the idea for 40 years, says, “Based on our proposed revolutionary design, this prototype could set the stage for a new generation of ‘tabletop’ accelerators, with unanticipated discoveries in biology and materials science and potential applications in security scanning, medical therapy and X-ray imaging.”

The chip that launched an international quest

The international effort to make a working prototype of the little accelerator was inspired by experiments led by scientists at SLAC and Stanford and, independently, at Friedrich-Alexander University Erlangen-Nuremberg (FAU) in Germany. Both teams demonstrated the potential for accelerating particles with lasers in papers published on the same day in 2013.

In the SLAC/Stanford experiments, published in Nature as “Demonstration of electron acceleration in a laser-driven dielectric microstructure,” electrons were first accelerated to nearly light speed in a SLAC accelerator test facility. At this point they were going about as fast as they could go, and any additional acceleration would boost their energy, not their speed.

The speeding electrons then entered a chip made of silica and traveled through a microscopic tunnel that had tiny ridges carved into its walls. Laser light shining on the chip interacted with those ridges and produced an electrical field that boosted the energy of the passing electrons.

In the experiments, the chip achieved an acceleration gradient, or energy boost over a given distance, roughly 10 times higher than the existing 2-mile-long SLAC linear accelerator can provide. At full potential, this means the SLAC accelerator could be replaced with a series of accelerator chips 100 meters long, roughly the length of a football field. 

In a parallel approach, experiments led by Peter Hommelhoff of FAU and published in Physical Review Letters as “Laser-based acceleration of nonrelativistic electrons at a dielectric structure” demonstrated that a laser could also be used to accelerate lower-energy electrons that had not first been boosted to nearly light speed. Taken together, both results open the door to a compact particle accelerator.

A tough, high-payoff challenge

For the past 75 years, particle accelerators have been an essential tool for physics, chemistry, biology and medicine, leading to multiple Nobel prize-winning discoveries. They are used to collide particles at high energies for studies of fundamental physics, and also to generate intense X-ray beams for a wide range of experiments in materials, biology, chemistry and other fields. This new technology could lead to progress in these fields by reducing the cost and size of high-energy accelerators. 

The challenges of building the prototype accelerator are substantial. Demonstrating that a single chip works was an important step; now scientists must work out the optimal chip design and the best way to generate and steer electrons, distribute laser power among multiple chips and make electron beams that are 1000 times smaller in diameter to go through the microscopic chip tunnels, among a host of other technical details.

“The chip is the most crucial ingredient, but a working accelerator is way more than just this component,” says Hommelhoff, a professor of physics and co-principal investigator of the project. “We know what the main challenges will be, and we don’t know how to solve them yet. But as scientists we thrive on this type of challenge. It requires a very diverse set of expertise, and we have brought a great crowd of people together to tackle it.”

The Stanford-led collaboration includes world-renowned experts in accelerator physics, laser physics, nanophotonics and nanofabrication. SLAC and two other national laboratories, Deutsches Elektronen-Synchrotron (DESY) in Germany and Paul Scherrer Institute in Switzerland, will contribute expertise and make their facilities available for experiments. In addition to FAU, five other universities are involved in the effort: University of California, Los Angeles, Purdue University, University of Hamburg, the Swiss Federal Institute of Technology in Lausanne (EPFL) and Technical University of Darmstadt.

“The accelerator-on-a-chip project has terrific scientists pursuing a great idea. We’ll know they’ve succeeded when they advance from the proof of concept to a working prototype,” says Robert Kirshner, chief program officer of science at the Gordon and Betty Moore Foundation. “This research is risky, but the Moore Foundation is not afraid of risk when a novel approach holds the potential for a big advance in science. Making things small to produce immense returns is what Gordon Moore did for microelectronics.”

November 19, 2015 02:00 PM

arXiv blog

The Machine-Vision Algorithm for Analyzing Children’s Drawings

Psychologists believe that drawing is an important part of children’s cognitive development. So an objective analysis of these drawings could provide an important insight into this process, a task that machine vision is ideally suited to.

Studying the psychology of children and the way it changes as they develop is a difficult business. One idea is that drawing plays a crucial role in children’s cognitive development, so drawings ought to provide a window into these changes.

November 19, 2015 01:05 PM

Jester - Resonaances

Leptoquarks strike back
Leptoquarks are hypothetical scalar particles that carry both color and electroweak charges. Nothing like that exists in the Standard Model, where the only scalar is the Higgs, which is a color singlet. In the particle community, leptoquarks enjoy a status similar to Nickelback's in music: everybody's heard of them, but no one likes them. It is not completely clear why... maybe they are confused with leprechauns, maybe because they sometimes lead to proton decay, or maybe because they rarely arise in cherished models of new physics. Recently, however, there has been some renewed interest in leptoquarks. The reason is that these particles seem well equipped to address the hottest topic of this year - the B meson anomalies.

There are at least 3 distinct B-meson anomalies that are currently intriguing:
  1.  A few sigma (2 to 4, depending on whom you ask) deviation in the differential distributions of B → K*μμ decays, 
  2.  2.6 sigma violation of  lepton flavor universality in  B → Kμμ vs B → Kee decays, 
  3.  3.5 sigma violation of lepton flavor universality, but this time in  B → Dτν vs B → Dμν decays. 
Now, leptoquarks with masses in the TeV ballpark can explain any of these anomalies. How? In analogy with the Higgs, leptoquarks may interact with the Standard Model fermions via Yukawa couplings. Which interactions are possible is determined by their color and electroweak charges. For example, this paper proposed a leptoquark transforming as (3,2,1/6) under the Standard Model gauge symmetry (a color SU(3) triplet like the quarks, a weak SU(2) doublet like the Higgs, hypercharge 1/6). Such a particle can have the following Yukawa couplings with b- and s-quarks and muons:
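In a commonly used notation for the (3,2,1/6) leptoquark (often denoted R̃2) – a sketch that need not match the paper's exact conventions – these couplings take the form\[

\mathcal{L} \supset \lambda_b\, \bar{b}_R\, \tilde{R}_2^{\,a}\, \epsilon_{ab}\, L_\mu^{\,b} + \lambda_s\, \bar{s}_R\, \tilde{R}_2^{\,a}\, \epsilon_{ab}\, L_\mu^{\,b} + {\rm h.c.},

\] where \(L_\mu = (\nu_\mu, \mu)_L\) is the muon doublet and \(a,b\) are SU(2) indices.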
If both λb and λs are non-zero, then a tree-level leptoquark exchange can mediate the b-quark decay b → s μ μ. This contribution adds to the Standard Model amplitudes mediated by loops of W bosons, and thus affects the B-meson observables. It turns out that the first two anomalies listed above can be fit if the leptoquark mass is in the 1-50 TeV range, depending on the magnitude of λb and λs.

The third anomaly above can also be easily explained by leptoquarks. One example from this paper is a leptoquark transforming as (3,1,-1/3) and coupling to matter as
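Schematically – with flavor indices and charge conjugations kept loose, so treat this as the generic structure rather than the paper's exact Lagrangian – a scalar \(S\) with these quantum numbers can couple as\[

\mathcal{L} \supset y^L_{ij}\, \overline{Q^c_{Li}}\, \epsilon\, L_{Lj}\, S^\dagger + y^R_{ij}\, \overline{u^c_{Ri}}\, e_{Rj}\, S^\dagger + {\rm h.c.}

\]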

This particle contributes to b → c τ ν, adding to the tree-level W boson contribution, and is capable of explaining the apparent excess of semi-leptonic B meson decays into D mesons and tau leptons observed by the BaBar, Belle, and LHCb experiments. The difference from the previous case is that this leptoquark has to be less massive, closer to the TeV scale, because it has to compete with the tree-level contribution in the Standard Model.

There are more kinds of leptoquarks with different charges that allow for Yukawa couplings to matter. Some of them could also explain the 3-sigma discrepancy between the experimentally measured muon anomalous magnetic moment and the Standard Model prediction. Actually, a recent paper says that the (3,1,-1/3) leptoquark discussed above can explain all the B-meson and muon g-2 anomalies simultaneously, through a combination of tree-level and loop effects. In any case, this is something to look out for in this year's and next year's data. If a leptoquark is indeed the culprit for the B → Dτν excess, it should be within reach of the 13 TeV run (for the first two anomalies it may well be too heavy to produce at the LHC). The current reach for leptoquarks extends up to about 1 TeV in mass (strongly depending on model details); see e.g. the recent ATLAS and CMS analyses. So far these searches have provoked little public interest, but that may change soon...

by Jester ( at November 19, 2015 12:58 AM

November 18, 2015

Lubos Motl - string vacua and pheno

First-quantized formulation of string theory is healthy
...and enough to see strings' superiority...

As Kirill reminded us, two weeks ago a notorious website attracting unpleasant and unintelligent people who just never want to learn string theory published an incoherent rant, supplemented by misguided comments, assaulting Witten's essay
What every physicist should know about string theory
in Physics Today. Witten presented the approach to string theory that is common in the contemporary textbooks on the subject, the first-quantized approach, and showed why strings eliminate the short-distance (ultraviolet) problems, automatically lead to gravity in spacetime, and have other virtues.

Witten's office as seen under the influence of drugs

This introduction is simple enough and I certainly agree that every physicist should know at least these basic things about string theory – even though, at the end of the day, I think most of them don't. Here I want to clarify certain misunderstandings about the basics of string theory as sketched by Witten, and about their relationships, similarities, and differences with respect to the quantum mechanics of point-like particles and quantum field theory.

First, let's begin with some light statements that everyone will understand.
These are elephants in the room which are not being addressed.
This latest version takes ignoring the elephants in the room to an extreme, saying absolutely nothing about the problems...
Another huge elephant in the room ignored by Witten’s story motivating string theory as a natural two-dimensional generalization of one-dimensional theories is that the one-dimensional theories he discusses are known to be a bad starting point...
Given the thirty years of heavily oversold publicity for string theory, it is this and the other elephants in the room that every physicist should know about.
So, it can’t have much to do with the real world that we actually live in. These are elephants in the room which are not being addressed.
This makes almost the same argument as the new one, but does also explain one of the elephants in the room (lack of a non-perturbative string theory).
Warren, I think there’s a difference between elephants in the room (we don’t know how to connect string theory to known 4d physics, with or without going to a string field theory) and something much smaller (mice? cockroaches?)...
I kid you not: there are at least 7 colorful sentences in that rant that claim the existence of the elephants in the room. And I didn't count the title and its copies. He must believe in a reduced version of the slogan of his grandfather's close friend (both are co-responsible for the Holocaust), "A lie repeated 7 times becomes the truth."

Sorry but there are no elephants in the room – in Witten's office, in this case. I've seen the office and I know many people who have spent quite some time there and all of them have observed the number of elephants in that room to be zero. It's enough for me to say this fact once.

If that annoying blogger sees the elephants everywhere, he should either reduce the consumption of drugs, increase the dosage of anti-hallucination pills, or both. If his employers had some decency and compassion, they would have already insisted that this particular stuttering computer assistant deal with his psychiatric problems in some way before he can continue with his work.

Fine. Now we can get to the physical issues – to make everyone sure that every single "elephant" is a hallucination.

Before strings were born but after the birth of quantum mechanics, people described reality with theories that are basically derived from point-like particles and "their" fields. We're talking about the normal models of
  1. quantum mechanics of one or many non-relativistic point-like particles
  2. attempts to make these theories compatible with special relativity
  3. quantum field theory.
You may say that each of these three classes of theories is newer, more correct, and more universal than the previous one. String theory is even newer and more complete, so you could think that "it should start" from the category (3) of theories, quantum field theories in the spacetime.

But that's not how it works, at least not in the most natural beginner's way to learn string theory. String theory is not a special kind of the quantum field theories. That's why the construction of string theory has to branch off the hierarchy above earlier than that. String theory already starts with replacing particles with strings in the steps (1) and (2) above and it develops itself analogously to the steps from (1) to (2) to (3) above – but not "quite" identical steps – into a theory that looks like a quantum field theory at long distances but is fundamentally richer and more consistent than that, especially if you care about the behavior at very short distances.

For point-like particles, the Hamiltonians like\[

H = \sum_k \frac{p_k^2}{2m_k} + \sum_{k\lt \ell} V(r_k-r_\ell)

\] work nicely to describe the physics of atoms but they are not compatible with special relativity. The simplest nice enough generalization that is relativistic looks like the Klein-Gordon equation\[

(-\square-m^2) \Phi = 0

\] where we imagine that \(\Phi\) is a "wave function of one particle". In quantum mechanics, the wave function ultimately has to be complex because a state of definite energy \(E\) must depend on time as \(\exp(-iEt)\). We may consider \(\Phi\) above to be complex and rewrite the second-order equation in time as a set of first-order equations for \(\Phi\) and \(\partial\Phi/ \partial t\).

When we do so, we find out that the probability density – the time component of the natural Klein-Gordon current \(j^\mu\) – isn't positive definite. It would be a catastrophe if \(j^0\) could have both signs: probabilities would sometimes be positive and sometimes negative. But probabilities cannot be negative. (The "wrong" states have both a negative norm and negative energy so that the ratio is positive but the negative sign of the energy and especially the probability is a problem, anyway.) That's why the "class" of point-like theories (2) is inconsistent.
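For concreteness, with one common normalization, the conserved Klein-Gordon current and its time component are\[

j^\mu = \frac{i}{2m}\left(\Phi^*\partial^\mu\Phi - \Phi\,\partial^\mu\Phi^*\right),\qquad
j^0 = \frac{i}{2m}\left(\Phi^*\dot\Phi - \Phi\,\dot\Phi^*\right),

\] so a positive-frequency mode \(\Phi\propto e^{-iEt}\) gives \(j^0 = (E/m)|\Phi|^2 \gt 0\) while a negative-frequency mode \(\Phi\propto e^{+iEt}\) gives a negative \(j^0\).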

The disease is automatically solved once we second-quantize \(\Phi\) and interpret it as a quantum field – a set of operators (operator distributions if you are picky) – that act on the Hilbert space (of wave functions) and that are able to create and annihilate particles. We get\[

\Phi(x,y,z,t) = \sum_{k} \zav{ c_k\cdot e^{ik\cdot x} + c_k^\dagger \cdot e^{-ik\cdot x} }

\] Add the vector sign for \(\vec k\), hats everywhere, correct the signs, and add the normalization factors or integrals instead of sums. None of those issues matter in our conceptual discussion. What matters is that \(c_k,c^\dagger_k\) are operators and so is \(\Phi\). Therefore, \(\Phi\) no longer has to be "complex". It may be real – because it's an operator, we actually require it is Hermitian. And I have assumed that \(\Phi\) is Hermitian in the expansion above.

There must exist the ground state – by which we always mean the energy eigenstate \(\ket 0\) corresponding to the lowest eigenvalue of the Hamiltonian \(H\) – and one may prove that\[

c_k \ket 0 = 0.

\] The annihilation operators annihilate the vacuum completely. For this reason, the only one-particle states are the linear superpositions of various vectors \(c^\dagger_k \ket 0\). This is a linear subspace of the full, multi-particle Fock space produced by the quantum field theory. But both the Fock space and this one-particle subspace are positive-definite Hilbert spaces. The probabilities are never negative.

You may say that the "dangerous" states that have led to the negative probabilities in the "bad theories of the type (2)" are the states of the type \(c_k\ket 0\) which may have been naively expected to be nonzero vectors in the theories (2) but are automatically seen to be zero in quantum field theory. Quantum field theory pretty much erases the negative-probability states automatically, with no need to remove them by hand.
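A one-line check, using the usual normalization \([c_k, c^\dagger_{k'}] = \delta_{kk'}\):\[

\langle 0 | c_{k'}\, c^\dagger_k | 0 \rangle = \langle 0 | [c_{k'}, c^\dagger_k] | 0 \rangle = \delta_{k'k} \geq 0, \qquad c_k\ket 0 = 0 \;\Rightarrow\; \|\,c_k\ket 0\,\|^2 = 0.

\] The one-particle states built from \(c^\dagger_k\ket 0\) have non-negative norms while the would-be dangerous states are simply absent.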

Now, if you take bosonic strings in \(D=26\), for the sake of simplicity, and ban any internal excitations of the string, the physics of this string will reduce to that of the tachyonic particle. The tachyonic mass is a problem (tachyons disappear once you study the \(D=10\) superstring instead of bosonic string theory; but since 1999, we know that tachyons are not "quite" a hopelessly incurable inconsistency, just signs of some different, unstable physical evolution).

But otherwise the string's physics becomes identical to that of the spinless Klein-Gordon particle. In quantum field theory, the negative-probability polarizations of the spinless particle "disappear" from the spectrum because \(c_k \ket 0=0\) and be sure that exactly the same elimination takes place in string theory.

The correctly treated string theory, much like quantum field theory, simply picks the positive-definite part of the one-particle Hilbert space only. At the end, much like quantum field theory, string theory allows the multi-particle Fock space with arbitrary combinations of arbitrary numbers of arbitrarily excited strings in the spacetime. And this Hilbert space is positive definite.

First-quantized approach to QFT is just fine

Many unpleasant people at that blog believe that, for the negative-probability states to disappear, we must mindlessly write down the exact rules and slogans that are taught in quantum field theory courses and that no other treatment is possible. They believe that quantum field theory is the only framework that eliminates the wrong states.

But that's obviously wrong. We don't need to talk about quantum fields at all. At the end, we are doing science – at least string theorists are doing science – so what matters are the physical predictions such as cross sections or, let's say, scattering amplitudes.

Even in quantum field theory, we may avoid lots of the talking and mindless formalism if we just want the results – the physical predictions. We may write down the Feynman rules and draw the Feynman diagrams needed for a given process or question directly. We don't need to repeat all the history clarifying how Feynman derived the Feynman rules for QED; we can consider these rules as "the candidate laws of physics". When we calculate all these amplitudes, we may check that they obey all the consistency rules. In fact, they match the observations, too. And that's enough for science to be victorious.

The fun is that even in the ordinary physics of point-like particles, the Feynman diagrams – which may be derived from "quantum fields" – may be interpreted in the first-quantized language, too. The propagators represent the path integral over all histories i.e. trajectories of one point-like particle from one spacetime point to another. The particle literally tries all histories – paths – and we sum over them. When we do so, the relevant amplitude is the propagator \(G(x^\mu,y^\mu)\).

However, the Feynman diagrams have many propagators that are meeting at the vertices – singular places of the diagrams. These may be interpreted as "special moments of the histories". The point-like particles are literally allowed to split and join. The prefactors that the Feynman rules force you to add for every vertex represent the probability amplitudes for the splitting/joining event, something that may depend on the internal spin/color or other quantum numbers of all the particles at the vertex.

The stringy Feynman diagrams may be interpreted totally analogously, in the one-string or first-quantized way. Strings may propagate from one place or another – this propagation of one string also includes the general evolution of its internal shape (a history is an embedding of the world sheet into the spacetime) – and they may split and join, too (the world sheet may have branches and loops etc.). In this way, we may imagine that we're Feynman summing over possible histories of oscillating, splitting, and joining strings. The sum may be converted to a formula according to Feynman-like rules relevant for string theory. And the result may be checked to obey all the consistency rules and agree with an effective quantum field theory at long distances.

And because the effective quantum field theories that string theory agrees with may be (for some solutions/compactifications of string theory) those extending the Standard Model (by the addition of SUSY and/or some extra nice new physics) and this is known to be compatible with all the observations, string theory is as compatible with all the observations as quantum field theory. You don't really need anything else in science.

Strings' superiority in comparison with particles

So all the calculations of the scattering amplitudes etc. may be interpreted in the first-quantized language, both in the case of point-like particles and strings. For strings, however, the whole formalism automatically brings us several amazing surprises, by which I mean advantages over the case of point-like particles, including
  1. the automatic appearance of "spin" of the particles from the internal motion of the strings
  2. unification of all particle species into different vibrations of the string
  3. the automatic inclusion of interactions; no special rules for "Feynman vertices" need to be supplemented
  4. automatic removal of short-distance (ultraviolet) divergences
  5. unavoidable inclusion of strings with oscillation eigenstates that are able to perturb the spacetime geometry: Einstein's gravity inevitably follows from string theory
It's a matter of pedagogy that I have identified five advantages. Some people could include others, perhaps more technical ones, or unify some of the entries above into bigger entries, and so on. But I think that this "list of five advantages" is rather wisely chosen.

I am going to discuss the advantages one-by-one. Before I do so, however, I want to emphasize that too many people are obsessed with a particular formalism but that's not what the scientific method – and string theorists are those who most staunchly insist on this method – demands. The scientific method is about the predictions, like the calculation of the amplitudes for all the scattering processes. And string theory has well-defined rules for those. Once you have these universal rules, you don't need to repeat all the details "how you found them" – this justification or motivation or history may be said to be "unnecessary excess baggage" or "knowledge to be studied by historians and social pseudoscientists".

Someone could protest that this method only generalizes Feynman's approach to quantum field theory. However, this protest is silly for two reasons: it isn't true; and even if it were true, it would be irrelevant. It isn't true because the dynamics of string theory may be described in various types of the operator formalism (quantum field theory on the world sheet with different approaches to the gauge symmetries; string field theory; AdS/CFT; matrix theory, and so on). It's simply not true that the "integrals over histories" become "absolutely" unavoidable in string theory. Second, even if the path integrals were the only way to make physical predictions, there would be nothing wrong about it.

Fine. Let me now discuss the advantages of the strings relatively to particles, one by one.

The spin is included

One of the objections by the "Not Even Wrong" community, if I use a euphemism for that dirty scum, is:
Another huge elephant in the room ignored by Witten’s story motivating string theory as a natural two-dimensional generalization of one-dimensional theories is that the one-dimensional theories he discusses are known to be a bad starting point, for reasons that go far beyond UV problems. A much better starting point is provided by quantized gauge fields and spinor fields coupled to them, which have a very different fundamental structure than that of the terms of a perturbation series of a scalar field theory.
It's probably the elephant with the blue hat. Needless to say, these comments are totally wrong. It is simply not true that the point-like particles and their trajectories described in the quantum formalism – with the Feynman sum or the operators \(\hat x,\hat p\) – are a "bad starting point". They're a perfectly fine starting point. They're how quantum mechanics of electrons and other particles actually started in 1925.

Quantum field theory is one of the later points of the evolution of these theories, not a starting point, and it is not the final word in physics, either.

For point-like particles, the first-quantized approach building on the motion of one particle may look like a formalism restricted to the spinless, scalar, Klein-Gordon particles. But again, this objection is no good for two reasons: it is false; and even if it were true, it would be completely irrelevant for the status of string theory.

The comment that one may only get spinless particles in the first-quantized treatment of point-like particles is wrong e.g. because one can study the propagation of point-like particles in the superspace, a spacetime with additional fermionic spinorial coordinates. And the dynamics of particles in such spaces is equivalent to the dynamics of a superfield which is a conglomerate of fields with different spins. One gets the whole multiplet. More generally and less prettily, one could describe particles with arbitrary spins by adding discrete quantum numbers to the world lines of the Klein-Gordon particles.
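Just to illustrate the packaging of different spins into one object, recall the schematic expansion of a chiral superfield in 4D \(\mathcal{N}=1\) superspace (the stringy realization below is what actually matters here):\[

\Phi(x,\theta) = \phi(x) + \sqrt{2}\,\theta\psi(x) + \theta\theta\, F(x),

\] i.e. a single superfield contains a spin-0 field \(\phi\), a spin-1/2 field \(\psi\), and an auxiliary field \(F\), all tied together by supersymmetry.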

But the second problem with the objection is that it is irrelevant because
the stringy generalization of the Klein-Gordon particle is actually enough to describe elementary particles of all allowed values of the spin.
Why? You know why, right? It's because the string has internal dynamics. It may be decomposed into creation and annihilation operators of waves along the string, \(\alpha_{\pm k}^\mu\). The spectrum of the operator \(m^2\) is discrete and the convention is that negative subscripts are creation operators; positive ones are annihilation operators. The ground state of the bosonic string \(\ket 0\) is a tachyon that carries the center-of-mass degrees of freedom remembering the location or the total momentum of the string behaving as a particle. And the internal degrees of freedom, thanks to the \(\mu\) superscript (which tells you which scalar field on the string was excited), add spin to the string.

Bosonic strings only carry the spin with \(j\in \ZZ^{0,+}\). If you study the superstring, basically the physics of strings propagating in a superspace, you will find out that all \(j\in \ZZ^{0,+}/2\) appear in the spectrum.
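For the open bosonic string – a sketch in the standard conventions; the closed string works similarly up to factors of two – the mass and the maximal spin at each level are\[

\alpha' m^2 = N - 1, \qquad N = \sum_{k=1}^\infty \alpha_{-k}\cdot\alpha_k, \qquad j_{\rm max} = N,

\] so the leading Regge trajectory is \(j = \alpha' m^2 + 1\) and arbitrarily high integer spins appear as you climb the tower of excited states.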

It should be obvious but once again, the conclusion is that
the first-quantized Klein-Gordon particle physics is actually a totally sufficient starting point because once we replace the particles with strings propagating in a superspace, we get particles (and corresponding fields) of all the required spins in the spectrum.
The claim that there's something wrong with this "starting point" or strategy is just wrong. It's pure crackpottery. If you asked the author of that incorrect statement what's wrong about this starting point, he could only mumble some incoherent rubbish that would ultimately reduce to the fact that the first lecture in a low-brow quantum field theory course is the only thing he's been capable of learning and he just doesn't want to learn anything beyond that because his brain is too small and fucked-up for that.

But that doesn't mean that there's something wrong with other ways to construct physical theories. Some of the other ways actually get us much further.

Unification of all particle species into one string

One string can vibrate in different ways. Different energy or mass eigenstates of the internal string oscillations correspond to different particle species such as the graviton, photon, electron, muon, \(u\)-quark, and so on. This is of course a characteristic example of the string theory's unification power.

This idea wasn't quite new in string theory, however. Already in the 1930s, people realized that the proton and the neutron naturally combine into a doublet (decades later, the same logic was applied to the \(u\)-quark and the \(d\)-quark, and to the left-handed electron and the electron neutrino). They may be considered a single particle species, the nucleon (if I continue with the proton-neutron case only), which may be found in two quantum states. These two states are analogous to the spin-up and spin-down states. The \(SU(2)\) mathematics is really isomorphic, which is why the quantum number distinguishing the proton and the neutron was called the "isospin".

What distinguishes the proton and the neutron is some "detailed information (a qubit) inside this nucleon". It's still the same nucleon that can be switched to be a proton, or a neutron. And the same is true for string theory. What is switched are the vibrations of the internal waves propagating along the string. And there are many more ways to switch them. In fact, we can get all the known particle species – plus infinitely many new, too heavy particle species – by playing with these switches, with the stringy oscillations.

Interactions are automatically included

In the Feynman diagrams for point-like particles, you have to define the rules for the internal lines, the propagators, plus the rules for the vertices where the propagators meet. These are choices that are "almost" independent from each other.

Recall that from a quantum field theory Lagrangian, the propagators are derived from the "free", quadratic terms in the Lagrangian. The vertices are derived from the cubic and higher-order, "interaction" terms in the Lagrangian. Even when the free theory is simple or unique, there may be many choices and parameters to be made when you decide what the allowed vertices should be and do.

The situation is different in string theory. When you replace the interaction vertex by the splitting string, by the pants diagram, it becomes much smoother, as we discuss later. But one huge advantage of the smoother shape is that locally, the pants always look the same. Every square micron (I mean square micro-Planck-length) of the clothes looks the same.

So once you decide what is the "field theory Lagrangian" per area of the pants – the world sheet – you will have rules for the interactions, too. There is no special "interaction vertex" where the rules for the single free string break down. Once you allow the topology of the world sheet to be arbitrary, interactions of the strings are automatically allowed. You produce the stringy counterparts of all Feynman diagrams you get in quantum field theory.

In the simplest cases, one finds out that there is still a topological invariant, the genus \(h\) of the world sheet, and the amplitude from a given diagram is weighted by \(g_s^{2h}\) relative to the leading genus-zero term – a power of the string coupling. But it may be seen that \(g_s\) isn't really a parameter labeling different theories. Instead, its value is related to the expectation value of a string field that results from a particular vibration of the closed string, the dilaton (greetings to Dilaton). This "modulus" may get stabilized – a dynamically generated potential will pick the right minimum, the dilaton vev, and therefore the "relevant value" of the coupling constant, too.
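Schematically, for an amplitude with \(n\) external closed-string states,\[

A_n \sim \sum_{h=0}^{\infty} g_s^{\,2h-2+n}\, F_{h,n}, \qquad g_s = e^{\langle \phi \rangle},

\] where \(F_{h,n}\) stands for the genus-\(h\) world sheet integral and \(\phi\) is the dilaton – so adding a handle costs a factor of \(g_s^2\) and the "coupling constant" is really the vacuum value of a dynamical field rather than an adjustable dial.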

So the choice of the precise "free dynamics of a string" already determines all the interactions in a unique way. This is a way to see why string theory ends up being much more robust and undeformable than quantum field theories.

Ultraviolet divergences are always gone

One old well-known big reason why string theory is so attractive is that the ultraviolet i.e. short-distance divergences in the spacetime don't arise, not even at intermediate stages of the calculation. That's why we don't even need any renormalization that is otherwise a part of the calculations in quantum field theory.

I must point out that it doesn't mean that there's never any renormalization in string theory. If we describe strings using a (conformal) quantum field theory on the two-dimensional world sheet, this quantum field theory requires analogous steps to the quantum field theories that old particle physicists used for the spacetime. There are UV divergences and renormalization etc.

But in string theory, no such divergences may be tied to short distances in the spacetime. And the renormalization on the world sheet works smoothly – the conformal field theory on the world sheet is consistent and, whenever the calculational procedure makes this adjective meaningful, renormalizable. (Conformal theories are scale-invariant so they obviously can't have short-distance problems; the scale invariance means that a problem at one length scale is the same problem at any other length scale.)

This UV health of string theory may be seen in many ways. For example, if you compactify string theory on a circle of radius \(R\), too short a value of \(R\) doesn't produce a potentially problematic dynamics with short-distance problems because the compactification on radius \(R\) is exactly equivalent to a compactification on the radius \(\alpha' / R\), basically \(1/R\) in the "string units", because of the so-called T-duality.
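In the standard conventions for a closed bosonic string on a circle (a sketch; normalizations vary between textbooks), the mass spectrum\[

\alpha' m^2 = \frac{\alpha' n^2}{R^2} + \frac{w^2 R^2}{\alpha'} + 2\left(N + \tilde N - 2\right)

\] is manifestly invariant under \(R\to \alpha'/R\) with the momentum number \(n\) and the winding number \(w\) exchanged – which is the T-duality just mentioned.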

Also, if you consider one-loop diagrams, the simplest diagrams where UV divergences normally occur in quantum field theories, you will find out that the relevant integral in string theory is over a complex \(\tau\) whose region is\[

{\rm Im}(\tau)\gt 0, \,\, |\tau| \gt 1, \,\, |{\rm Re}(\tau)| \lt \frac 12.

\] The most stringy condition defining this "fundamental domain" is \(|\tau|\gt 1\) which eliminates the region of a very small \({\rm Im}(\tau)\). But this is precisely the region where ultraviolet divergences would arise if we integrated over it. In quantum field theory, we would have to integrate over a corresponding region. In string theory, however, we don't because these small values of \({\rm Im}(\tau)\) correspond to "copies" of the same torus that we already described by a much higher value of \(\tau\).

In the Feynman sum over histories, we only sum each shape of the torus once so including the "small \(\tau\) points aside from the fundamental region" would mean to double-count (or to count infinitely many times) and that's not what Feynman tells you to do.

For this reason, if there are any divergences, they may always be interpreted as infrared divergences. Every such divergence in string theory can be identified with "the same" infrared divergence that would arise in the effective field theory approximating your string theory vacuum, and no additional divergences occur. In this sense, any kind of string theory – even bosonic string theory – explicitly removes all potential ultraviolet divergences. And it does so without breaking the gauge symmetries or doing similar nasty things that would be unavoidable if you imposed a similar cutoff in quantum field theory.

String theory is extremely clever in the way how it eliminates the UV divergences.

The crackpot-in-chief on the anti-string blog wrote:
From the talks of his I’ve seen, Witten likes to claim that in string perturbation theory the only problems are infrared problems, not UV problems. That’s never seemed completely convincing, since conformal invariance can swap UV and IR. My attempts to understand exactly what the situation is by asking experts have just left me thinking, “it’s complicated”.
I am pretty sure that they meant "it's complicated for an imbecile like you". There is nothing complicated about it from the viewpoint of an intelligent person and string theory grad students understand these things when they study the first or second chapter of the string textbooks. Indeed, modular transformations swap the UV and IR regions and that's exactly why the would-be UV divergences may always be seen as something that we have already counted as IR divergences and we shouldn't count them again.

Grad students understand why there are no UV divergences in string theory but smart 9-year-old girls may already explain to their fellow kids why string theory is right and how compactifications work. According to soon-to-be Prof Peo Webster, who's "personally on Team String Theory", the case of \({\mathcal T}^* S^3\) requires some extra work relatively to an \(S^5\). She explains non-renormalizability and other basic issues that Dr Tim Blais has only sketched.

If you first identify all divergences that may be attributed to long-distance dynamics, i.e. identified as IR divergences, there will be no other divergences left in the string-theoretical integral. Isn't this statement really simple? It's surely too complicated for the crackpot but I hope it won't be too complicated for the readers of this blog.

Now, you may ask about the IR divergences. Aren't they a problem?

Well, infrared divergences are a problem but they are never an inconsistency of the theory. Instead, they may always be eliminated if you ask a more physically meaningful question. When you ask about a scattering process, you may get an IR-divergent cross section. But that's because you neglected the fact that experimentally, you won't be able to distinguish a given idealized process from the processes where some super-low-energy photons or gravitons were emitted along with the final particles you did notice. If you compute the inclusive cross section where the soft particles under a detection threshold \(E_{\rm min}\) – which may be as low as you want but nonzero – are allowed and included, the infrared divergences in the simple processes (without soft photons) exactly cancel against those coming from the more complicated processes with the extra soft photons.
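Schematically – this is just the textbook soft-photon bookkeeping of the effective field theory, nothing string-specific – the observable inclusive rate behaves like\[

\sigma_{\rm obs}(E_{\rm min}) = \sigma_{\rm virtual} + \sigma_{\rm soft}(E \lt E_{\rm min})
= \sigma_0 \left[ 1 + \frac{\alpha}{\pi}\, c\, \ln\frac{E_{\rm min}}{Q} + \dots \right],

\] where the coefficient \(c\) and the scale \(Q\) depend on the process; the two separately IR-divergent pieces cancel against each other and only a finite dependence on the detection threshold \(E_{\rm min}\) survives.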

This wisdom isn't tied to quantum field theory per se. The same wisdom operates in any theory that agrees with quantum field theory at long distances – and string theory does. So even in string theory, it's true that IR divergences are not an inconsistency. If you ask a better, more realistically "inclusive" question, the divergences cancel.

In practice, bosonic string theory has infrared divergences that are exponentially harsh and connected with the instability of the vacuum – any vacuum – that allows tachyonic excitations. Tachyons are filtered out of the spectrum in superstring theory but massless particles such as the dilaton may be – and generically, are – sources of "power law" IR divergences, too. However, in type II string theory etc., all the infrared divergences that arise from the massless excitations cancel due to supersymmetry. So ten-dimensional superstring theories avoid both UV (string theory always does) and IR (thanks to SUSY) divergences.

But one must emphasize that in some more complicated compactifications, some IR divergences will refuse to cancel – we know that they don't cancel in the Standard Model and string theory will produce an identical structure of IR divergences because it agrees with a QFT at long distances – but that isn't an inconsistency. It isn't an inconsistency in QFT; and it isn't an inconsistency in string theory – for the same reason. It is a subtlety forcing you to ask questions and calculate answers more carefully. When you do everything carefully, you get perfectly consistent and finite answers to all questions that are actually experimentally testable.

Again, let me emphasize that while the interpretation of infrared divergences is the same in QFT and ST, because those agree at long distances, it isn't the case for UV divergences. At very short (stringy and substringy) distances, string theory is inequivalent to a quantum field theory – any quantum field theory – which is why it is capable of "eliminating the divergences altogether", even without any renormalization, which wouldn't be possible in any QFT.

Also, I want to point out that this ability of string theory to remove the ultraviolet divergences is special for the \(D=2\) world sheets. Higher-dimensional elementary objects could also unify "different particle species" and automatically "produce interactions from the free particles" because the world volume would be locally the same everywhere.

However, membranes and other higher-dimensional fundamental branes (beyond strings) would generate new fatal UV divergences in the world volume. The 2D world sheet is a theory of quantum gravity because the parameterization of the world sheet embedded in the spacetime mustn't matter. A funny thing is that the three components of the 2D metric tensor on the world sheet,\[

h_{11},\,\, h_{12}=h_{21},\,\, h_{22},

\] may be totally eliminated – set to some standard value such as \(h_{\alpha\beta}=\delta_{\alpha\beta}\) – by gauge transformations that are given by three parameters at each point,\[

\delta \sigma^1, \,\, \delta \sigma^2,\,\, \eta,

\] which parameterize the 2D diffeomorphisms and the Weyl rescaling of the metric. So the world sheet gravity may be locally eliminated away totally. That's why no sick "nonrenormalizable gravity" problems arise on the 2D world sheet. But they would arise on a 3D world volume of a membrane where the metric tensor has 6 components but you would have at most 3+1=4 parameters labeling the world volume diffeomorphisms plus the Weyl symmetry. So some polarizations of the graviton would survive, along with the nonrenormalizable UV divergences in the world volume.

In effect, if you tried to cure the quantized Einstein gravity's problems in the spacetime by using membranes, you would solve them indeed but equally serious problems and inconsistencies would re-emerge in the world volume of the membranes. The situation gets even more hopeless if you increase the dimension of the objects; \(h_{\alpha\beta}\) has about \(D^2/2\) components while the diffeomorphisms plus Weyl only depend on \(D+1\) parameters and the growth of the latter expression is slower.
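The counting that singles out two dimensions, spelled out for a \(D\)-dimensional world volume:\[

\#\,\text{metric components} = \frac{D(D+1)}{2}, \qquad \#\,\text{gauge parameters (diffeos + Weyl)} = D+1,

\] so \(D=2\) gives \(3=3\) and everything may be gauged away, while \(D=3\) gives \(6 \gt 4\) and \(D=4\) gives \(10 \gt 5\): some world-volume gravity survives, together with its nonrenormalizable problems.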

Strings are the only simple fundamental finite-dimensional objects for which both the spacetime and world volume (world sheet) problems are eliminated. That doesn't mean that higher-dimensional objects never occur in physics – they do in string theory (D-branes and other branes) – but what it does mean is that you can't expect as simple and as consistent description of the higher-dimensional objects' internal dynamics as we know in the case of the strings. For example, the dynamics of D-branes may be described by fundamental open strings attached to these D-branes by both end points; you need some "new string theory" even for the world volume that naive old physicists would describe by an effective quantum field theory.

Gravity (dynamical spacetime geometry) is automatically implied by string theory

You may consider the first-quantized equations for a single particle propagating on a curved spacetime. However, the spacetime arena is fixed. The particle is affected by it but cannot dynamically curve it and play with the spacetime around it.

It's very different in string theory. String theory predicts gravity. Gravity was observed by the monkeys (and bacteria) well before they understood string theory which is a pity and a historical accident that psychologically prevents some people from realizing how incredible this prediction made by string theory has been. But logically, string theory is certainly making this prediction – or post-diction, if you wish – and it surely increases the probability that it's right in the eyes of an intelligent beholder.

Why does string theory automatically allow the spacetime to dynamically twist and oscillate and wiggle? Why is the spacetime gravity an unavoidable consequence of string theory?
It may look technical but it's not so bad. The reason is that any infinitesimal change of the spacetime geometry on which a string propagates is physically indistinguishable from the addition of coherent states of closed strings in certain particular vibration patterns – strings oscillating as gravitons – to all processes you may compute.
To sketch how it works in the case of the \(D=26\) bosonic string – the case of the superstring has many more indices and technical complications that don't change anything about the main message – try to realize that when you integrate over all the histories of oscillating, splitting, and joining strings, via Feynman's path integral, you are basically using some world sheet action that looks something like\[

S_{2D} = \int d^2 \sigma \, \sqrt{h}h^{\alpha\beta} \partial_\alpha X^\mu \partial_\beta X^\nu \cdot g_{\mu\nu}^{\rm spacetime}(X^\kappa).

\] Here, \(X^\mu\) and \(h_{\alpha\beta}\) are world sheet fields i.e. functions of two coordinates \(\sigma^\alpha\). I suppressed their \(\sigma\)-dependence (and the overall coefficient) to make the equation more readable. At any rate, \(h\) is the world sheet metric and \(g\) is the spacetime metric which is a predetermined function of \(X^\mu\), the spacetime coordinates. But to calculate the world sheet action in string theory, you substitute the value of the world sheet field \(X^\kappa(\sigma^\alpha)\) as arguments into the function \(g_{\mu\nu}(X^\kappa)\).

What happens if you infinitesimally change the spacetime metric \(g\)? Differentiate with respect to this \(g\). You will effectively produce a term in the scattering amplitude that contains the extra prefactor of the type \(h^{\alpha\beta}\partial_\alpha X^\mu \partial_\beta X^\nu\) with some coefficients \(\delta g_{\mu\nu}\) to contract the indices \(\mu,\nu\).

But the addition of similar prefactors inside the path integral is exactly the string-theoretical rule to add external particles to a process. If you allow me some jargon, it's because external particles attached as "cylinders" to a world sheet may be conformally mapped to local operators (while the infinitely long thin cylinder is amputated) and there's a one-to-one map between the states of an oscillating closed string and the operators you may insert in the bulk of the world sheet. This map is the so-called "state-operator correspondence", a technical insight in any conformal field theory that you probably need to grasp before you fully comprehend why string theory predicts gravity.

And the structure of this prefactor, the so-called "vertex operator", in this case \(\partial X\cdot \partial X\) (the product of two world sheet derivatives of the scalar fields representing the spacetime coordinates), is exactly the vertex operator for a "graviton", a particular massless excitation of a closed string. It's a marginal operator – one whose addition keeps the world sheet action scale-invariant – and this "marginality" is demonstrably the right condition on both sides (consistent deformation of the spacetime background; or the vertex operator of an allowed string excitation in the spectrum).
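Written a bit more explicitly – a sketch with normalizations suppressed – the closed-string graviton vertex operator is\[

V_{\rm graviton} \sim \epsilon_{\mu\nu}\, \partial X^\mu\, \bar\partial X^\nu\, e^{ik\cdot X},

\] and demanding that it be a marginal, weight-\((1,1)\) primary forces \(k^2 = 0\) and \(k^\mu \epsilon_{\mu\nu} = 0\), i.e. the linearized Einstein equations for a wave on the flat background.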

We proved it for the graviton but it holds in complete generality:
Any consistent/allowed infinitesimal deformation of "the string theory" – the background and properties of the theory governing the propagation of a string – is in one-to-one correspondence with the addition of a string in an oscillation state that is predicted by the unperturbed background.
So the spectrum unavoidably includes the closed string states (graviton states) that exactly correspond to the infinitesimal deformation of the spacetime geometry, and so on. (Only the deformations of the spacetime geometry that obey the relevant equations – basically Einstein's equations – lead to a precisely conformal and therefore consistent string theory so only the deformations and gravitons obeying the Einstein's equations are allowed.) Similarly, if you want to change the string coupling or the dilaton, as we discussed previously, you will find a string state that is predicted in the spectrum whose effect is exactly like that. Gauge fields, Wilson lines, other scalar fields etc. work in the same way. All of them may be viewed either as "allowed deformations of the pre-existing background" or "excitations predicted by the original background".

That's why the undeformed and (infinitesimally but consistently) deformed string theory are always exactly physically equivalent. That is why there don't exist any inequivalent deformations of string theory. String theory is completely unique and because you may define consistent dynamics of a string on an arbitrary Ricci-flat etc. background, such string theory always predicts dynamical gravity, too.

Witten has tried to explain the same point so if I failed to convey this important observation, you should try to get the same message from his review.


To summarize, the most obvious first pedagogical approach to learning how to define string theory and do calculations in it is the first-quantized formalism, a generalization of the "one-dimensional world lines of particles" to the "two-dimensional stringy world sheet". It's easy to see that the analogous rules produce physical predictions that don't merely share the same qualitative virtues and consistency as those in the point-like-particle case; the stringy version of this formalism is actually more consistent and has other advantages.

The stringy version of this computational framework is demonstrably superior from all known viewpoints. And the evidence is overwhelming that there exist particular non-perturbatively exact answers to all the physically meaningful questions and at least in many backgrounds (e.g. those envisioned in matrix models as well as the AdS/CFT correspondence, any version of them), we actually know how to compute them in principle and in the case of a large number of interesting observables, in practice.

Everyone who tries to dispute these claims is either incompetent or a liar or – and it's the case of the system manager – both.

by Luboš Motl ( at November 18, 2015 04:55 PM

arXiv blog

Forget Graphene and Carbon Nanotubes, Get Ready for Diamond Nanothread

The discovery of a stable form of one-dimensional diamond has scientists racing to understand its properties. The first signs are that diamond nanothread will be more versatile than anyone expected.

Hardly a week goes by without somebody proclaiming a new application for graphene, the form of carbon that occurs in single sheets with chicken wire-like structure (see “Research Hints at Graphene’s Photovoltaic Potential”). Roll a graphene sheet into a tube and it forms a carbon nanotube, another wonder material with numerous applications. And wrap it further into a ball and, with a small rearrangement of bonds, it forms buckyballs.

November 18, 2015 10:51 AM

Lubos Motl - string vacua and pheno

FQ Hall effect: has Vafa superseded Laughlin?
A stringy description of a flagship condensed matter effect could be superior

Harvard's top string theorist Cumrun Vafa has proposed a new treatment of the fractional quantum Hall effect that is – if correct – more stringy and therefore potentially more unifying and meaningful than the descriptions used by condensed matter physicists, including the famous Laughlin wave function:
Fractional Quantum Hall Effect and M-Theory (cond-mat.mes-hall, arXiv)
Laughlin's theoretical contributions to the understanding of this effect are admired by string theorists and physicists who like "simply clever" ideas. But the classification of the effects and possibilities seemed to be a bit contrived and people could have thought that a more conceptual description requiring fewer parameters could exist.

Let me start at the very beginning of Hall-related physics, with the classical Hall effect. In the 19th century, they failed to learn quantum mechanics (the excuse was that it didn't exist yet) so they called it simply "the Hall effect".

At any rate, Edwin Hall took a conductor in 1879, a "wire" going in the \(z\) direction, and applied a transverse magnetic field (orthogonal to the wire) pointing in the \(x\) direction. What he was able to measure was a voltage not just in the \(z\) direction, as dictated by Ohm's law, but also in the third \(y\) direction (the cross product of the current and the magnetic field – a direction perpendicular both to the current as well as the magnetic field).

This Hall voltage was proportional to the magnetic field and its origin is easy to understand. The charge carriers that are parts of the electric current are subject to the Lorentz force \[

\vec F = q\cdot \vec v \times \vec B

\] and because the electrons in my conventions are pushed in the \(y\) direction by the Lorentz force, there will be a voltage in that direction, too.

That was simple. At some moment, quantum mechanics was born. Many quantities in quantum mechanics have a discrete spectrum. It turned out that given a fixed current, the Hall voltage has a discrete spectrum, too. It only looks continuous for "regular" magnetic fields that were used in the classical Hall effect. But if you apply really strong magnetic fields, you start to observe that the "Hall conductance" (which is a conductance only by the units; the current and the voltage in the ratio are going in different directions)\[

\sigma = \frac{I_{\rm channel}}{V_{\rm Hall}} = \nu \frac{e^2}{h}

\] only allows discrete values. \(e\) and \(h\) are the elementary charge and Planck's constant (perhaps surprisingly, \(h/e^2\) has units of ohms; it's called the von Klitzing constant, about \(25812.8\) ohms) but the truly funny thing is that the allowed values of \(\nu\) (pronounce: "nu"), the so-called filling factor, have to be rather simple rational numbers. (The classical Hall effect is the usual classical limit of the quantum Hall effect prescribed by Bohr's 1920 "correspondence principle"; the integers specifying the eigenstate are so large that they look continuous.)

Experimentally, \(\nu\) is either an integer or a non-integral rational number. In the latter case, we may call \(\nu\) the "filling fraction" instead. The case of \(\nu\in\ZZ\), the "integer quantum Hall effect", is easy to explain. You must know the mathematics of "Landau levels". The Hamiltonian (=energy) of a free charged particle in the magnetic field is given by\[

\hat H = \frac{m|\hat{\vec v}|^2}{2} = \frac{1}{2m} \zav{ \hat{\vec p} - q\hat{\vec A}/c }^2

\] Note that in the presence of a magnetic field, it's still true that the kinetic energy is \(mv^2/2\). However, \(mv\) is no longer \(p\). Instead, they differ by \(qA/c\), the vector potential. This shift is needed for the \(U(1)\) gauge symmetry. If you locally change the phase of the charged particle's wave function by a gauge transformation, the kinetic energy or speed can't change, and that's why the kinetic energy has to subtract the vector potential which also changes under the gauge transformation.

At any rate, for a uniform magnetic field, \(\hat{\vec A}\) is a linear function of \(\hat{\vec r}\) and you may check that the Hamiltonian above is still a bilinear function in \(\hat{\vec x}\) and \(\hat{\vec p}\). For this reason, it's a Hamiltonian fully analogous to that of a quantum harmonic oscillator. It has an equally-spaced discrete spectrum, too (aside from some continuous momenta that decouple). The excitation (Landau) level ends up being correlated with the filling factor. The mathematics needed to explain it is as simple as the mathematics of a harmonic oscillator applied to each charge carrier separately.
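For completeness, here are the standard textbook relations behind that statement (written in the same conventions with the explicit factors of \(c\) as above; they are not specific to any paper discussed here):\[

E_n = \hbar\omega_c\left(n+\frac{1}{2}\right),\qquad \omega_c=\frac{|q|B}{mc},\qquad \nu=\frac{n_e}{B/\Phi_0},\qquad \Phi_0=\frac{hc}{|q|},

\] i.e. the Landau levels are equally spaced like those of an oscillator, each level holds one state per flux quantum per unit area, and the filling factor simply counts how many levels the electrons of areal density \(n_e\) fill.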

However, experimentally, \(\nu\) may be a non-integer rational number, too. That's the truly nontrivial case of the fractional quantum Hall effect (FQHE).\[

\nu = \frac 13, \frac 25, \frac 37, \frac 23, \frac 35, \frac 15, \frac 29, \frac{3}{13}, \frac{5}{2}, \frac{12}{5}, \dots

\] How is it possible? The harmonic oscillators clearly don't allow levels "in between the normal ones". It seems almost obvious that the interactions between the electrons are actually critical and that they conspire to produce a result that looks simple; theoretical condensed matter physics is full of cute phenomenological explanations for such a conspiracy – and of various quasiparticles etc.
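As a hedged aside that is standard FQHE bookkeeping rather than anything from Vafa's paper: many, though not all, of the fractions listed above fit the composite-fermion (Jain) sequences \(\nu = n/(2pn\pm 1)\), which a few lines of code can check:

    # Standard composite-fermion (Jain) bookkeeping: check which of the quoted
    # fractions are of the form n/(2*p*n +/- 1). Purely illustrative.
    from fractions import Fraction

    observed = [Fraction(1, 3), Fraction(2, 5), Fraction(3, 7), Fraction(2, 3),
                Fraction(3, 5), Fraction(1, 5), Fraction(2, 9), Fraction(3, 13),
                Fraction(5, 2), Fraction(12, 5)]

    jain = set()
    for p in (1, 2, 3):
        for n in range(1, 10):
            jain.add(Fraction(n, 2 * p * n + 1))
            jain.add(Fraction(n, 2 * p * n - 1))

    for nu in observed:
        tag = "in a Jain sequence" if nu in jain else "outside the simple sequences"
        print(f"nu = {nu}: {tag}")

The leftovers – most famously \(\nu=5/2\) – are exactly the cases where the more exotic, non-Abelian proposals tend to be invoked.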

In condensed matter physics – with many interacting charged particles – it's possible for the electron's charge to seemingly become fractional (FQHE); for the charge and spin to separate much like when your soul escapes from your body (spinons, chargons), and so on. These phenomena may be viewed as clever phenomenological ideas designed to describe some confusing observations by experimenters. However, closely related and sometimes the same phenomena appear in string theory when string theorists are explaining various transitions and dualities in their much more mathematically well-defined theory of fundamental physics (I am talking about fractional D-branes, the Myers effect, and tons of other things).

The importance of thinking in terms of quasiparticles for string theory – and the post-duality-revolution string theory's ability to blur the difference between fundamental and "composite" particles – is another reason to say that string theory is actually much closer to fields like condensed matter physics – disciplines where the theorists and experimenters interact on a daily basis – than "older" parts of the fundamental high-energy physics.

At any rate, I've mentioned the Landau-level-based explanation of the integer quantum Hall effect. Because we're dealing with harmonic oscillators of a sort, there are some Gaussian-based wave functions for the electrons. Robert Laughlin decided to find a dirty yet clever explanation for the fractional quantum Hall effect, too. His wave function contains the "Landau" Gaussian factor as well, aside from a polynomial prefactor:\[

\psi(z_i,\zeta_a) = \prod_{i,a} (z_i-\zeta_a)\,\prod_{i\lt j}(z_i-z_j)^{1/\nu}\,\exp\left(-B\sum_i |z_i|^2\right)

\] Here, \(z_i\) and \(\zeta_a\) are the complex positions of the electrons and quasi-holes, respectively. This basic wave function explained the FQHE with the filling fraction \(\nu = 1/m\) and earned Laughlin a share of the 1998 Nobel prize. Note that for \(\nu=1/m\), the exponent \(1/\nu\) is actually an integer (an odd one, as required by the antisymmetry of the electrons), which makes the wave function single-valued.
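To make the structure of the formula concrete, here is a toy numerical evaluation of the Laughlin wave function for three electrons and no quasi-holes, with the magnetic length set to one, the conventional \(1/4\) in the Gaussian, and the normalization dropped – a sketch of the structure, not a serious FQHE computation:

    # Toy evaluation of the Laughlin wave function at nu = 1/m (no quasi-holes),
    #   psi({z_i}) = prod_{i<j} (z_i - z_j)^m * exp(-sum_i |z_i|^2 / 4),
    # with the magnetic length set to 1 and the overall normalization dropped.
    import numpy as np

    def laughlin(zs, m=3):
        zs = np.asarray(zs, dtype=complex)
        jastrow = 1.0 + 0j
        for i in range(len(zs)):
            for j in range(i + 1, len(zs)):
                jastrow *= (zs[i] - zs[j]) ** m     # m-th order zero at coincidence
        return jastrow * np.exp(-np.sum(np.abs(zs) ** 2) / 4.0)

    # The amplitude is strongly suppressed when two electrons approach each other:
    close = laughlin([0.0, 0.1 + 0.0j, 1.0 + 1.0j])
    far   = laughlin([0.0, 2.0 + 0.0j, -2.0 + 2.0j])
    print(abs(close), abs(far))

The high-order zero whenever two electrons come close is what keeps the interaction energy of the state low.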

We're dealing with wave functions in 1 complex dimension i.e. 2 real dimensions so it looks like the setup is similar to the research of conformal field theories in 2 real dimensions (or 1 complex dimension: we want the Euclidean spacetime signature), the kind of mathematics that is omnipresent in the research of perturbative string theory (and its compactifications on string-scale manifolds, e.g. in Gepner models). Indeed, the classification of similar wave functions and dynamical behaviors has been mapped to RCFTs (rational conformal field theories), basically things like the "minimal models".

Also, the polynomial prefactors may remind you of the prefactors (especially the Vandermonde determinant) that convert the integration over matrices in old matrix models to the "fermionic statistics of the eigenvalues".

Cumrun Vafa enters the scene

Cumrun is the father of F-theory (in the same sense in which Witten is the mother of M-theory). He's written lots of impressive papers about topological string theory; and cooperated with Strominger in the first microscopic calculation of the Bekenstein-Hawking (black hole) entropy using D-branes in string theory. Also, he's behind the swampland paradigm (including our Weak Gravity Conjecture) etc.

In this new condensed-matter paper, Cumrun has shown evidence that a more unified description of the FQHE may exist, one that may be called the "holographic description". First of all, he has employed his knowledge of 2D CFTs to describe the dynamics of the FQHE by the minimal models. The minimal models that are needed are representations of either the Virasoro algebra that we already know in bosonic string theory; or the super-Virasoro algebra that only becomes essential in the case of the supersymmetric string theory.

If that part of Vafa's paper is right, it's already amusing because I believe that Robert Laughlin, as a hater of reductionism and a critic of string theory, must dislike supersymmetry as well. If super-Virasoro minimal models are needed for the conceptually unified description of the effect he is famous for, that's pretty juicy. If I roughly get it, Cumrun may de facto unify several classes of "variations of FQHE" into a broader family with one parameter only.

Laughlin with a king in 1998. Should Cumrun have been there instead?

But Cumrun goes further, to three dimensions. 2D CFTs are generally dual to quantum gravity in 3 dimensions. Why shouldn't it be true in this case? The question is what the right 3D theory of gravity is. He identifies Chern-Simons theory with the \(SL(2,\mathbb{C})\) gauge group as a viable candidate.

Note that Chern-Simons theory, while a theory with a gauge field and no dynamical metric, seems at least approximately equivalent to gravity in 3 dimensions. One may perform a field redefinition – that was sometimes called a 3D toy model of Ashtekar's "new variables" field redefinition in 4D. In 3 dimensions, things are less inconsistent because pure 3D gravity has no local excitations. The Ricci tensor and the Einstein tensor have 6 independent components each; but so does the Riemann tensor. So Einstein's equations in the vacuum – Ricci-flatness – are actually equivalent to the complete (Riemann) flatness. No gravitational waves can propagate in the 3D vacuum.
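The counting behind that claim is standard differential-geometry bookkeeping (nothing specific to this paper): in \(d\) dimensions, the Riemann tensor has \(d^2(d^2-1)/12\) independent components while the Ricci tensor has \(d(d+1)/2\), so\[

d=3:\quad \frac{d^2(d^2-1)}{12}=\frac{9\cdot 8}{12}=6=\frac{d(d+1)}{2},\qquad d=4:\quad 20 \text{ vs. } 10,

\] which is why \(R_{\mu\nu}=0\) wipes out the whole curvature in three dimensions, while in four dimensions the ten leftover (Weyl) components are exactly where gravitational waves live.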

And the gravitational sources in 3D only create "deficit angles". The space around them is basically a cone, something that you may create by cutting a wedge from a flat sheet of paper with scissors and gluing the edges back together. Not much is happening "locally" in the 3D spacetime of quantum gravity, which is also why the room for inconsistencies is reduced.

The gauge group \(SL(2,\CC)\) basically corresponds to a negative value of the cosmological constant. In this sense, the 3D gravitating spacetime may be an anti de Sitter space and Cumrun's proposed duality is a low-dimensional example of AdS/CFT. This is a somewhat more vacuous statement because there are no local bulk excitations in Chern-Simons theory – you can't determine the precise geometry – so I think that the assignment "AdS" is only topological in character. Moreover, the big strength of Chern-Simons and topological field theories is that you may put them on spaces of diverse topologies so even the weaker topological claim about the AdS space can't be considered an absolute constraint for the theory.

Also, Chern-Simons theory may actually deviate from a consistent theory of quantum gravity once you go beyond the effective field-theory description – e.g. to black hole microstates. But maybe you should go beyond Chern-Simons theory in that case. Cumrun proposes that the black holes that should exist in the 3D theory of quantum gravity should be identified with Laughlin's quasi-holes. If true, it's funny. Couldn't people have checked earlier that the two kinds of holes may actually be physically equivalent in the given context?

At any rate, if the 3D theory has boundaries, there is FQHE-like dynamics on the boundary and he may make some predictions about this dynamics. In particular, he claims that some excitations exist and obey exotic statistics. In 4 dimensions and higher, we can only have bosons and fermions. In 2+1 dimensions, the trip of one particle around another is topologically different from no trip at all (the world lines may get braided, which is why knot theory exists in 3D), so there may be generic interpolations between bosons and fermions, the anyons (picking up a generic phase). And you may also think about non-Abelian statistics, and Cumrun actually claims that it has to be realized if his model is correct.
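In formulas – and this part is the standard lore about statistics in \(2+1\) dimensions rather than anything specific to Vafa's model – the exchange of two identical particles multiplies the wave function by a phase that is not restricted to \(\pm 1\):\[

\psi(z_1,z_2)\to e^{i\theta}\,\psi(z_2,z_1),\qquad \theta=0\,\text{(bosons)},\quad \theta=\pi\,\text{(fermions)},\quad \theta\,\text{generic (anyons)}.

\] In the non-Abelian case, the phase \(e^{i\theta}\) is replaced by a unitary matrix acting on a degenerate multiplet of states, so the order in which the braids are performed matters.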

Many non-string theorists have played with topological field (not string) theory and related things and you could think that Vafa's paper is just a "paper by a string theorist", not a "paper using string theory per se". (Strominger semi-jokingly said that he defined string theory as anything studied by his friends. I am sort of annoyed by that semi-joke because that's how you would describe an ill-defined business ruled by nepotism which string theory is certainly not.) But you would be wrong. At the end, Vafa constructs his full candidate description of the FQHE in terms of a compactification of M-theory. Well, he picks the dynamics of M5-branes in M-theory, the \((2,0)\) superconformal field theory in \(d=6\). Many of us have played with this exotic beast in many papers.

These six-dimensional theories with a self-dual three-form field strength at low energies are classified by the ADE classification. Cumrun compactifies such theories on genus \(g\) Riemann surfaces \(\Sigma\) and claims that the possibilities correspond to different forms of the FQHE. Lots of very particular technical constructions from the research of string/M-theory are actually used. Many facts are known about these compactifications, which is why Cumrun can make numerous predictions for actual lab experiments.

I can't reliably verify that Vafa's claims are right. It looks OK at the resolution I have, but to be sure about the final verdict, one has to be familiar with lots of the existing theoretical and experimental knowledge of the FQHE as well, not to mention the theoretical knowledge about the compactifications of the \((2,0)\) theory etc., and I am not fully familiar with everything that is needed.

However, I am certain that if the paper is right and the observed FQHE behavior may be mapped to compactifications of an important limit of string/M-theory, then condensed matter theoretical physicists at good schools should be trained in string/M-theory. A string course – perhaps a special "Strings for CMT" optimized course – should be mandatory. Subir Sachdev-like AdS/viscosity duality has been important for quite some time but in some sense, this kind of description of the FQHE by Vafa – and perhaps related compactifications that describe other quantized, classifiable behaviors in condensed matter physics – could make string/M-theory even more fundamental for a sensible understanding of the experiments that condensed matter physicists actually love to study in their labs.

It seems that we may be getting closer to the "full knowledge" of all similar interesting emergent behaviors exactly because the ideas we encounter start to repeat themselves – and sometimes in contexts so seemingly distant as Laughlin's labs and calculations of (for Laughlin esoteric) compactifications of limits of M-theory.


Aside from a nicety, Cumrun added in an e-mail:
I will only add that the experiments can settle this one perhaps even in a year, as I am told: Anomalies in neutral upstream currents that have already been observed, if confirmed for \(\nu=n/(2n+1)\) filling fraction, will be against the current paradigm and in line with my model.
Good luck to science and Cumrun in particular. ;-)

by Luboš Motl at November 18, 2015 08:26 AM

November 17, 2015

Symmetrybreaking - Fermilab/SLAC

Cleanroom is a verb

It’s not easy being clean.

Although they might be invisible to the naked eye, contaminants less than a micron in size can ruin very sensitive experiments in particle physics.

Flakes of skin, insect parts and other air-surfing particles—collectively known as dust—force scientists to construct or conduct certain experiments in cleanrooms, special places with regulated contaminant levels. There, scientists use a variety of tactics to keep their experiments dust-free.

The enemy within

Cleanrooms are classified by how many particles are found in a cubic foot of space. The fewer the particles, the cleaner the cleanroom.

To prevent contaminating particles from getting in, everything that enters cleanrooms must be covered or cleaned, including the people. Scratch that: especially the people.

“People are the dirtiest things in a cleanroom,” says Lisa Kaufman, assistant professor of nuclear physics at Indiana University. “We have to protect experiment detectors from ourselves.”

Humans are walking landfills as far as a cleanroom is concerned. We shed hair and skin incessantly, both of which form dust. Our body and clothes also carry dust and dirt. Even our fingerprints can be adversaries.

“Your fingers are terrible. They’re oily and filled with contaminants,” says Aaron Roodman, professor of particle physics and astrophysics at SLAC National Accelerator Laboratory.

In an experiment detector susceptible to radioactivity, the potassium in one fingerprint can create a flurry of false signals, which could cloud the real signals the experiment seeks.

As a cleanroom’s greatest enemy, humans must cover up completely to go inside: A zip-up coverall, known as a bunny suit, sequesters shed skin. (Although its name alludes otherwise, the bunny suit lacks floppy ears and a fluffy tail.) Shower-cap-like headgear holds in hair. Booties cover soiled shoes. Gloves are a must-have. In particularly clean cleanrooms, or for scientists sporting burly beards, facemasks may be necessary as well.

These items keep the number of particles brought into a cleanroom at a minimum.

“In a normal place, if you have some surface that’s unattended, that you don’t dust, after a few days you’ll see lots and lots of stuff,” Roodman says. “In a cleanroom, you don’t see anything.”

Getting to nothing, however, can take a lot more work than just covering up.

Washing up at SNOLAB

“This one undergrad who worked here put it, ‘Cleanroom is a verb, not a noun.’ Because the way you get a cleanroom clean is by constantly cleaning,” says research scientist Chris Jillings.

Jillings works at SNOLAB, an underground laboratory studying neutrinos and dark matter. The lab is housed in an active Canadian mine.

It seems an unlikely place for a cleanroom. And yet the entire 50,000-square-foot lab is considered a class-2000 cleanroom, meaning there are fewer than 2000 particles per cubic foot. Your average indoor space may have as many as 1 million particles per cubic foot.

SNOLAB’s biggest concern is mine dust, because it contains uranium and thorium. These radioactive elements can upset sensitive detectors in SNOLAB experiments, such as DEAP-3600, which is searching for the faint whisper of dark matter. Uranium and thorium could leave signals in its detector that look like evidence of dark matter.

“People are the dirtiest things in a cleanroom... We have to protect experiment detectors from ourselves.”

Most workplaces can’t guarantee that all of their employees shower before work, but SNOLAB can. Everyone entering SNOLAB must shower on their way in and re-dress in a set of freshly laundered clothes.

“We’ve sort of made it normal. It doesn’t seem strange to us,” says Jillings, who works on DEAP-3600. “It saves you a few minutes in the morning because you don’t have to shower at home.” More importantly, showering removes mine dust.

SNOLAB also regularly wipes down every surface and constantly filters the air.

Clearing the air for EXO

Endless air filtration is a mainstay of all modern cleanrooms. Willis Whitfield, former physicist at Sandia National Laboratories, invented the modern cleanroom in 1962 by introducing this continuous filtered airflow to flush out particles.

The filtered, pressurized, dehumidified air can make those who work in cleanrooms thirsty and contact-wearers uncomfortable.

“You get used to it over time,” says Kaufman, who works in a cleanroom for the SLAC-headed Enriched Xenon Observatory experiment, EXO-200.

EXO-200 is another testament to particle physicists’ affinity for mines. The experiment hunts for extremely rare double beta decay events at WIPP, a salt mine in New Mexico, in its own class-1000 cleanroom—even cleaner than SNOLAB.

As with SNOLAB experiments, anything emitting even the faintest amount of radiation is foe to EXO-200. Though those entering EXO-200’s cleanroom don’t have to shower, they do have to wash their arms, ears, face, neck and hands before covering up.

Ditching the dust for LSST

SLAC recently finished building another class-1000 cleanroom, specifically for assembly of the camera for the Large Synoptic Survey Telescope. The LSST camera will take over four years to build and will be the largest digital camera ever constructed.

While SNOLAB and the EXO-200 cleanroom are mostly concerned with the radioactivity in particles containing uranium, thorium or potassium, LSST is wary of even the physical presence of particles.

“If you’ve got parts that have to fit together really precisely, even a little dust particle can cause problems,” Roodman says. Dust can block or absorb light in various parts of the LSST camera.

LSST’s parts are also vulnerable to static electricity. Built-up static electricity can wreck camera parts in a sudden zap known as an electrostatic discharge event.

To reduce the chance of a zap, the LSST cleanroom features static-dissipating floors and all of its benches and tables are grounded. Once again, humans prove to be the worst offenders.

“Most electrostatic discharge events are generated from humans,” says Jeff Tice, LSST cleanroom manager. “Your body is a capacitor and able to store a charge.”

Scientists assembling the camera will wear static-reducing garments as well as antistatic wrist straps that ground them to the floor and prevent the buildup of static electricity.

From static electricity to mine dust to fingerprints, every cleanroom is threatened by its own set of unseen enemies. But they all have one visible enemy in common: us.

by Chris Patrick at November 17, 2015 02:00 PM

November 16, 2015

ZapperZ - Physics and Physicists

Symmetry And Higgs Physics Via Economic Analogy?
Juan Maldacena is trying to do the impossible: explain the symmetry principles and the Higgs mechanism using analogies that one would find in economics.

I'm not making this up! :)

If you follow the link above, you will get the actual paper, which is an Open Access article. Read for yourself! :)

I am not sure if non-physicists will be able to understand it. If you are a non-physicist, and you went through the entire paper, let me know! I'm curious.


by ZapperZ at November 16, 2015 07:19 PM

Tommaso Dorigo - Scientificblogging

The Mysterious Z Boson Width Measurement - CDF 1989

As I am revising the book I am writing on the history of the CDF experiment, I have bits and pieces of text that I decided to remove, but which retain some interest for some reason. Below I offer a clip which discusses the measurement of the natural width of the Z boson produced by CDF with Run 0 data in 1989. The natural width of a particle is a measure of how undetermined its rest mass is, due to its very fast decay. The Z boson is in fact the shortest-lived particle we know, and its width is about 2.5 GeV.

read more

by Tommaso Dorigo at November 16, 2015 03:29 PM

November 13, 2015

Lubos Motl - string vacua and pheno

ATLAS dijet events: mass up to \(8.8\TeV\)
The 2015 proton run at the LHC is over.

(Some \(5\TeV\) playground proton collisions are gonna be made soon which will train the collider for the new lead-lead collisions in December.)

At the center-of-mass energy of \(13\TeV\), ATLAS has performed 4.32 inverse femtobarns of collisions out of which 4.00 inverse femtobarns were recorded. (Maybe the number is round by accident, maybe the two zeroes are a bump suggesting some intelligent design.) CMS has collided 4.11/fb of proton-proton pairs. My estimate is that 3.8/fb of that was recorded but it's plausible that up to 1/2 of these CMS collisions occurred without the CMS magnet which would make this 1/2 of the data much less valuable in most channels.

The outcome of 2015 could have been better for the LHC engineers but it could have been worse, too.

The 2012 \(8\TeV\) run has recorded and evaluated (not in all channels so far) 20/fb of data (at ATLAS plus the same for CMS). So the integrated luminosity in 2015 was 5 times lower than in 2012. However, at the higher collision energy, some of the interesting new phenomena become much more visible.

The higher the masses of new particles you consider, the more likely it is that they could be discovered in 2015 even though they were invisible in 2012. This is just a rough rule of thumb, of course. In May, Adam Falkowski gave us a few numerical examples comparing the sensitivity in 2012 and 2015.

If you convert the 4/fb of the \(13\TeV\) data to inverse femtobarns of "equivalent" \(8\TeV\) data and compare the result with 20/fb, you will see that the 2015 run was \(K\) times more sensitive (or less sensitive if \(K\lt 1\)) than the 2012 dataset to these selected processes (my \(K\) is \(1/5\) times Jester's "ratio"):
  • \(K=0.46\) for the Higgs boson. The 2015 data will tell us slightly less about the Higgs than the 2012 data.
  • \(K=0.8\) for \(tth\). For some special energy-demanding processes involving the Higgs, the two runs are comparable.
  • \(K=0.54\) for a new \(300\GeV\) Higgs. We will probably learn nothing of the sort because we didn't in 2012.
  • \(K=2\) for \(800\GeV\) stops. We have a somewhat higher chance to see top squarks near this old exclusion limit.
  • \(K=3.6\) for a \(3\TeV\) Z'-boson. For heavier new possible particles (note that there are some signs of that in the smaller data), the advantages of the 2015 run become clear. If there is such a Z'-boson, the LHC has already discovered it in 2015 and we're just waiting for an announcement!
  • \(K=6\) for a \(1.4\TeV\) gluino. This is about the mass limit in some simplest supersymmetric scenarios. You can see that these hypothetical beasts became vastly more visible in 2015 and again, a normal \(1.4\TeV\) gluino has already been discovered if it exists!
That's his list of examples. If you have some neural networks inside your skull, you may roughly estimate \(K\) for other reasonable processes of new physics; a small numerical sketch of this arithmetic is shown below. Right now, I actually think that the gluino is around \(1\TeV\) and lots of the superpartners are clustered around \(600\GeV\). In that case, the 2015 run is stronger than the 2012 one, but not by much, I think.
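To make the conversion explicit, here is a minimal sketch of the arithmetic behind the \(K\) values. The cross-section gain factors are simply back-solved from the \(K\) values quoted above (i.e. they equal \(K\) divided by \(4/20\)) and merely stand in for the actual parton-luminosity ratios Falkowski used:

    # Rough arithmetic behind the K values quoted above:
    #   K = (sigma_13TeV / sigma_8TeV) * (L_2015 / L_2012).
    # The cross-section gain factors below are back-solved from the quoted K's
    # (gain = K / 0.2); they stand in for the actual parton-luminosity ratios.
    L_2015, L_2012 = 4.0, 20.0        # recorded integrated luminosities [1/fb]

    xsec_gain = {
        "125 GeV Higgs":   2.3,
        "3 TeV Z'":       18.0,
        "1.4 TeV gluino": 30.0,
    }

    for name, gain in xsec_gain.items():
        K = gain * (L_2015 / L_2012)
        verdict = "2015 wins" if K > 1 else "2012 still stronger"
        print(f"{name}: K = {K:.2f}  ({verdict})")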

Adam Falkowski estimates that the first rumors about the 2015 signals will start to spread in early December 2015.

But those comments had nothing to do with the title.

Let me finally return to the main topic: we already know about some cool collisions that ATLAS has recorded in recent months, namely in September. You may want to bookmark
Displays of selected ATLAS collisions
Many of the interesting events in the list are dijet events. A jet is a high-energy shower of strongly interacting particles, such as pions, kaons, and protons, that fly away from the collision point in similar directions inside a "tight enough" cone. That's a sign that there was one colored (it means strongly interacting, not African American) elementary particle (such as a quark, gluon, or something new that is similar) with a certain 4-momentum (basically the sum of the 4-momenta of the particles in the shower) but this colored elementary particle was confined so it couldn't escape in isolation. Instead, it had to dress itself in lots of colored clothes, create particle-antiparticle pairs etc., so that what leaves in that direction is basically color-neutral. So "jets" are how "high-energy colored elementary particles" look to an experimenter.

Now, I have tried to pretend to have explained the "physics of jets" to you but it's likely that if you didn't understand it before, you don't understand it now, either. ;-)

ATLAS Experiment © 2014 CERN

Fine. How many dijets (events with two jets going in different directions) did the ATLAS detector see in 2012? They are sorted into bins according to the invariant mass \(m\)\[

m^2 = P^\mu P_\mu, \quad P^\mu = P^\mu_{{\rm jet}\,1}+P^\mu_{{\rm jet}\,2}.

\] If the two jets are associated with two colored elementary particles that were created in a decay of a new (or old) elementary particle \(Z'\), you may calculate the mass of \(Z'\) in this way by looking at the momenta and energies of the jets in the collisions.
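If you want to see how such an invariant mass is computed from the measured jets in practice, here is a minimal sketch in the massless-jet approximation; the jet kinematics below are made up for illustration and have nothing to do with the actual ATLAS events:

    # Dijet invariant mass m^2 = (P1+P2)^mu (P1+P2)_mu from two jets given as
    # (pT, eta, phi) in the massless-jet approximation. The numbers are made up.
    import math

    def four_momentum(pt, eta, phi):
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        return pt * math.cosh(eta), px, py, pz       # E = |p| for a massless jet

    def invariant_mass(p1, p2):
        E, px, py, pz = (a + b for a, b in zip(p1, p2))
        return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

    jet1 = four_momentum(2100.0,  0.9, 0.3)              # GeV
    jet2 = four_momentum(2050.0, -1.1, 0.3 + math.pi)    # roughly back-to-back
    print(f"m_jj = {invariant_mass(jet1, jet2) / 1000.0:.2f} TeV")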

On the graph, you may see that 20/fb of the data were only enough to produce dijets up to an invariant mass of \(3.3\TeV\) or so. (If there were some ATLAS dijet events with energies much higher than that, not shown in the histogram, then I apologize because the following paragraph or two is inaccurate. CMS had dijets up to \(5.15\TeV\), as we know.) Now, we've only had 4/fb of collisions at \(13\TeV\). It's a higher energy but a lower integrated luminosity. What's the highest invariant mass of a dijet event that ATLAS may offer us now? It's comparable to \(3.3\TeV\), we might guess. You may also guess that the "record" after 2015 would be about \(3.3\times 13/8\approx 5.4\TeV\). The odds decrease almost exponentially, so you shouldn't expect to get much further.

Well, rubbish.

ATLAS actually boasts an \(8.8\TeV\) invariant mass dijet event.

It's higher than the 2012 center-of-mass energy \(8\TeV\) (and the required energy in the lab frame is never smaller than the invariant mass) so you can be strictly certain that such an event couldn't have emerged at the LHC in 2012 (or at the Tevatron ever, for that matter). A whopping 68% of the \(13\TeV\) energy (which is normally shared by many quarks and gluons inside the two protons that usually don't want to team up and build a joint project) went to the creation of the rest mass of an apparent or real new particle.

Here is the colorful picture of this event. I had to link to it because secret agents are silently shooting down all the bloggers who embed this picture and who are not members of the ATLAS Collaboration.

ATLAS Experiment © 2015 CERN

Oops, I actually found a way to save my life. ;-) OK, let us not get too distracted. The highest invariant mass of a dijet – seen already on September 18th (when the total luminosity was less than 1 inverse femtobarn) – is eight point eight damn teraelectronvolts. That's a huge energy – enough for some insect to move its wings, or something like that LOL. (When you Google search for \(8.8\TeV\), you will annoyingly get lots of spurious hits because this energy also happens to be a nucleon-nucleon center-of-mass energy for the proton-lead mixed collisions, analogous to \(14\TeV\) and \(5.5\TeV\) in the non-mixed \(p\)-\(p\) and lead-lead collisions.)

If you search for the invariant mass on the "event displays" ATLAS page, you will find the following invariant masses:\[

8.8\TeV, \,\, 6.9\TeV, \,\, 5.2\TeV,\,\, 3.25\TeV, \,\, 3.12\TeV

\] The last one is a dimuon, not a dijet. Some low-mass dijets close to the Z-boson or top-quark mass are mentioned on that page, too. The \(5.2\TeV\) event is the same one that I have discussed in previous blog posts.

That's a substantial number of very high-energy events. There may be many more events like that which haven't been posted on that page yet. But maybe it was a near-complete list at some point. To believe that there are some narrow enough resonances (new particles with well-defined masses), I would first have to see "approximate repetitions" in the measured energy. The list above doesn't seem to point to specific values of the masses too strongly.

But it is plausible that ATLAS has already seen lots of such events. Three events near \(8.8\TeV\), for example, and similarly for some other energies. In that case, when the rumors get out, we could enter a completely new era in which a whole zoo of multi-\({\rm TeV}\) particles starts to be uncovered. For the first time, we would have the clear feeling that we're probing a higher energy scale than CERN did in the early 1980s when it already had no trouble discovering the \(0.09\TeV\) Z-boson.

Unlike the Higgs case, we would be genuinely uncertain about the character of these new particles for quite some time. Are they new gauge bosons? Scalar superpartners of known particles? Signs of extra dimensions of one type or another? Microstates of tiny black holes? The activity dedicated to the identification of the most sensible model would probably be intense.

We're not there yet. But the rumors – and soon after, press releases and papers – may start to be generated soon.

The 2015 data may already be a game-changer but the 2016 data should be. There are lots of particles or effects that may be invisible in 2015 but visible in 2016. But I tend to think that after 2016, the probability per year of seeing something new will drop significantly. In other words, if the LHC doesn't see new physics up to the end of 2016 (or earlier), it will probably not see it for 5 more years afterwards, either.

TBBT: Sheldon's fan

There was a hilarious plot in the latest The Big Bang Theory episode yesterday, one of the best episodes for some time.

You know, Amy terminated her relationship with Sheldon some time ago. Rajesh and Howard created a website where girls solve a hard problem to get a chance to date Sheldon. Meanwhile, Amy is dating a tall British Dave. During the date, it turns out that Dave is an absolutely obsessed fan of... Sheldon. ;-) Meanwhile, Sheldon receives a pretty and supersmart winner of his contest – Analeigh Tipton (who has appeared on TBBT in the past, with Samantha Potter next to her) – a few seconds after the deadline, so he tells her "bad luck".

by Luboš Motl at November 13, 2015 04:33 PM

CERN Bulletin

CERN Bulletin

On Thursday, 29th October, the Staff Council examined the Management proposals, the results of the 2015 Five-Yearly Review. In a Resolution, the Staff Council wrote that it considered that the proposed "package" of measures was not balanced enough and made some additional requests, especially in the field of the new career system.

At the meeting of the Standing Concertation Committee (SCC) of Monday, 9th November, representatives of the Management and the Staff Association reviewed in detail the document with the Management proposals. This document will be presented to the Member States at the next meeting of TREF on 26th November. Based on the concerns expressed in our Resolution of 29th October, six points, where consensus could not be reached at the meeting, remained pending at the end of the discussion in the SCC. It was decided that these six points would be submitted to the Director-General for arbitration, after a discussion in the Staff Council.

Thus, on Thursday, 12th November, the Staff Council met to determine its position relative to the package of measures proposed by the Management. The staff delegates first listened to a summary of the SCC meeting of 9th November given by staff representatives who attended that meeting. Then the six points were presented:
  • Elimination of “tracks”, which provide grouping of grades.
  • Extension of the transition measures, with an improved "phasing out", by one year, until the next five-yearly review.
  • Inclusion of the effect of the new career system in future actuarial studies for the CERN health insurance scheme (CHIS).
  • Possibility for a female fellow on maternity leave to obtain an extension of her contract, in well-defined and specific cases, beyond the current maximum of three years.
  • Guarantees on the implementation of the talent management tools necessary for the operation of the new career system, including detailed monitoring using milestones defined and agreed to by the SCC.
  • A fixed advancement budget for at least three years.

Going around the table, all delegates present were able to express their view on the proposed approach of not opposing the package of proposals if the Director-General accepts the six points submitted for arbitration. The session concluded with a vote in which 29 delegates participated: 20 did not oppose the package of measures under the condition that the Director-General’s arbitration had a positive outcome, five were opposed, and four abstained.

A memorandum with our demands for the arbitration was addressed to the Director-General in the afternoon of 12th November. In the response, received on Friday 13th November, the Director-General replied positively to the six points mentioned. As announced in our October meetings, all staff will have the opportunity to give their opinion on the position taken by the Staff Council, through a consultation which will be organized shortly.

by Staff Association at November 13, 2015 01:22 PM

CERN Bulletin

The GAC organizes monthly sessions with individual meetings. The next session will take place on: Tuesday, 1 December, from 1:30 p.m. to 4:00 p.m., in the Staff Association meeting room. The sessions of the Pensioners' Group (GAC) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We strongly invite the latter to join our group by obtaining the necessary documents from the Staff Association. Information: e-mail:

by GAC-EPA at November 13, 2015 12:40 PM

Clifford V. Johnson - Asymptotia

One Hundred Years of Certitude

Since the early Summer I've been working (with the help of several people at USC*) toward a big event next Friday: A celebration of 100 years since Einstein formulated the field equations of General Relativity, a theory which is one of the top one or few (depending upon who you argue with over beers about this) scientific achievements in the history of human thought. The event is a collaboration between the USC Harman Academy of Polymathic Study and the LAIH, which I co-direct. I chose the title of this post since (putting aside the obvious desire to resonate with a certain great work of literature) this remarkable scientific framework has proven to be a remarkably robust and accurate model of how our universe's gravity actually works in every area in which it has been tested with experiment and observation**. That is despite being all about bizarre things like warped spacetime, the slowing down of time, and so forth, which most people think have to do only with science fiction. (And yes, you probably test it every day through your [...] Click to continue reading this post

The post One Hundred Years of Certitude appeared first on Asymptotia.

by Clifford at November 13, 2015 03:09 AM

November 12, 2015

Symmetrybreaking - Fermilab/SLAC

Giving physics students options

Many physics degree programs excel at preparing students for an academic career, but more than half of those who complete the programs head to industry instead.

“I was drawn to physics because I thought it was amazing,” says Crystal Bailey, recalling the beginnings of her graduate work in the early 2000s. “There’s a sense of wonder that we’re really understanding something fundamental and elegant about the universe.”

But when she decided that an academic career path wasn’t right for her, she left her degree program. Bailey assumed, like many physics students, that the purpose of earning a physics degree is to remain in academia. In fact, statistics describe a different reality.

The American Institute of Physics states that roughly half of those who enter the workforce with a degree in physics—either a bachelor’s, master’s or doctorate—work in the private sector. 

In an AIP survey of PhD recipients who had earned their degrees the previous year, 64 percent of respondents who identified their jobs as potentially permanent positions were working in industry.

Institutions in the United States currently grant around 1700 physics PhDs each year, though only about 350 academic faculty positions become available in that time, according to the AIP.

Most university physics programs are rooted in academic tradition, and some members of the physics community have expressed concern that not enough emphasis is placed on preparing students for potential jobs in industry. Among these members are the professors and students in three physics programs that are bucking this trend, taking a decidedly different approach to prepare aspiring physicists for what awaits beyond graduation.

Scientists at work

By the time Nicholas Sovis graduates in 2016 with his bachelor’s degree in applied physics, he’ll have two and a half years of work experience analyzing fuel injector movements at Argonne National Laboratory. Like the rest of his colleagues at Kettering University in Flint, Michigan, Sovis arranged an industry co-op through his school starting his freshman year. He alternates every three months between full-time school and full-time work. He’ll graduate in four years by taking an accelerated schedule of courses during academic terms.

“There are a lot of people who work in industry who are at—and really need to be at—the PhD level.”

Co-ops and internships are certainly not unique to this program, but the unparalleled emphasis on co-op experience is part of what Kathryn Svinarich, department head of physics at Kettering, calls their “industry-friendly” culture. 

The university operated for many years training automotive engineers under the name General Motors Institute. Although the school is now an independent institution, Kettering still produces some of the most industry-oriented physicists in the country.

“We’re really turning heads in the [American Physical Society],” says Svinarich. “We’re the only fully co-op school with strong programs in physics.”

The program’s basic purpose is to provide students marketable skills while offering participating companies access to a talent pipeline. The tandem training at both Kettering and a private company or government institution lets students experience academic and industry life and connect with mentors in each realm. 

Sovis says the combination of mentors has broadened his perspective. “I have learned how incredibly diverse the field of physics truly is,” he says. 

As he weighs his options for the future, he adds that he is considering working for an agency such as NASA while remaining open to opportunities to do research at academic institutions.

Fueling innovation

In the 1990s, the graduate program in physics at Case Western Reserve University in Cleveland, Ohio, was getting mixed reviews from its alumni. On one hand, many former students were finding success leading innovative start-ups. On the other, they were struggling, finding themselves unprepared to handle the logistics of running a business.

In response, the university formed the Physics Entrepreneurship Program. This terminal master's degree program aims to provide its students with skills in market analysis, financing strategies and leadership, while also connecting them to mentors, funding and talent. Students couple courses in physics with courses in business and law.

“Innovation is not speculative,” says Ed Caner, director of science and technology entrepreneurship programs. “You cannot simply write a business plan and get investors on board.”

Nathan Swift, a second-year student in the program, found this lesson valuable. For his thesis, he’s starting his own company. “We're developing a biomimetic [nature-imitating] impact material that could be integrated into helmets in place of conventional foam,” he says.

His business partners are biologists—PhD candidates at the University of Akron. Without Swift, the students didn’t have the business savvy or mechanical background to develop the idea, which they originally sketched out for a class. The team is currently fundraising and testing early prototypes. 

Swift says that, though he is excited by the opportunity, participating in the Case Western program isn’t about definitively choosing one career path over another. “I'm doing it to gain the necessary skills so that I can be dangerous with both a technical and business fluency—in whatever I choose to pursue.”

Lessons in leadership

James Freericks, professor and director of graduate studies in physics at Georgetown University, says that 20 years ago, professional organizations were telling him that the universities were overproducing PhDs.

Freericks looked deeper and found that an imbalance had existed for decades. The supply of physics doctorates has far outpaced their academic demand as far back as the 1960s.

“To say the only reason you’re producing PhDs is for academics is a very narrow point of view,” Freericks says. “There are a lot of people who work in industry who are at—and really need to be at—the PhD level.”

Freericks now directs Georgetown’s Industrial Leadership in Physics program, organized in 2001. The program is expressly designed to train physics students to secure advanced positions in industry.

“You have to do a certain amount of problem-solving, a certain amount of head-scratching—and hitting your head against the wall...”

As with a traditional program, students engage in rigorous coursework and original research. But the program also blends in elements similar to those at Case Western and Kettering, such as courses in business and patent law and a yearlong apprenticeship in industry. An advisory committee of scientific leaders representing Lockheed Martin, IBM, BASF Corporation and other companies guides the program and provides mentorship.

The lengthy internships give students time to become fully immersed in the research and methods of a company. Ultimately, such apprenticeships prepare students to manage sophisticated scientific projects—including their significant budgets and groups of other scientists.

Academically industrial or industrially academic

What, then, is the right balance in physics education? Barbara Jones, a theoretical physicist at IBM and an ILP advisory committee member, advocates broad training that prepares students for work in industry as well as at a college or a national lab. She points out that the traditional classroom is not a complete failure in this regard.

“To get a PhD in physics, you have to do a certain amount of problem-solving, a certain amount of head-scratching—and hitting your head against the wall—that’s independent of any job,” Jones says. These skills translate, which is why classically trained physicists have been successfully obtaining and thriving in industry jobs for a long time, she says. 

But “students even at more traditional programs can take a pointer from these programs and see about arranging for industrial internships for themselves.”

Perhaps the greatest value of programs such as these, Jones suggests, is giving students options.

Bailey agrees. She eventually completed a PhD in nuclear physics and now serves as the careers program manager at the American Physical Society. She organizes resources to help students navigate the many paths of being a physicist, including a new program now in its pilot stage, APS Industry Mentoring for Physicists (IMPact). The program connects early-career physicists with other physicists working in industry. 

Bailey’s job also frequently involves giving talks on pursuing physics as a career. She tells students, “Your career will not always take you where you expect. But you can always find a way to do the things you love.”

by Troy Rummler at November 12, 2015 03:36 PM

Jester - Resonaances

A year at 13 TeV
A week ago the LHC finished the 2015 run of 13 TeV proton collisions. The counter in ATLAS stopped exactly at 4 inverse femtobarns. CMS reports just 10% less; however, it is not clear what fraction of these data was collected with their magnet on (probably about a half). Anyway, it should have been better, it could have been worse... 4 fb-1 is one fifth of what ATLAS and CMS each collected in the glorious year 2012. On the other hand, the higher collision energy in 2015 translates to larger production cross sections, even for particles within the kinematic reach of the 8 TeV collisions. How this trade-off works in practice depends on the studied process. A few examples are shown in the plot below.
We see that, for processes initiated by collisions of a quark inside one proton with an antiquark inside the other proton, the cross section gain is the least favorable. Still, for hypothetical resonances heavier than ~1.7 TeV, more signal events were produced in the 2015 run than in the previous one. For example, for a 2 TeV W-prime resonance, possibly observed by ATLAS in the 8 TeV data, the net gain is 50%, corresponding to roughly 15 events predicted in the 13 TeV data. However, the plot does not tell the whole story, because the backgrounds have increased as well.  Moreover, when the main background originates from gluon-gluon collisions (as is the case for the W-prime search in the hadronic channel),  it grows faster than the signal.  Thus, if the 2 TeV W' is really there, the significance of the signal in the 13 TeV data should be comparable to that in the 8 TeV data in spite of the larger event rate. That will not be enough to fully clarify the situation, but the new data may make the story much more exciting if the excess reappears;  or much less exciting if it does not... When backgrounds are not an issue (for example, for high-mass dilepton resonances) the improvement in this year's data should be more spectacular.

We also see that, for new physics processes initiated by collisions of a gluon in one proton with a gluon in the other proton, the 13 TeV run is superior everywhere above the TeV scale, and the signal enhancement is more spectacular. For example, at 2 TeV one gains a factor of 3 in signal rate. Therefore, models where the ATLAS diboson excess is explained via a Higgs-like scalar resonance will be tested very soon. The reach will also be extended for other hypothetical particles pair-produced in gluon collisions, such as gluinos in the minimal supersymmetric model. The current lower limit on the gluino mass obtained by the 8 TeV run is m≳1.4 TeV (for decoupled squarks and a massless neutralino). For this mass, the signal gain in the 2015 run is roughly a factor of 6. Hence we can expect the gluino mass limits will be pushed upwards soon, by about 200 GeV or so.

Summarizing, we have a right to expect some interesting results during this winter break. The chances for a discovery in this year's data are non-zero, and the chances for tantalizing hints of new physics (whether a real thing or a background fluctuation) are considerable. Limits on certain imaginary particles will be somewhat improved. However, contrary to my hopes/fears, this year is not yet the decisive one for particle physics. The next one will be.

by Jester at November 12, 2015 03:05 PM

November 11, 2015

The n-Category Cafe

Burritos for Category Theorists

You’ve probably heard of Lawvere’s Hegelian taco. Now here is a paper that introduces the burrito to category theorists:

The source of its versatility and popularity is revealed:

To wit, a burrito is just a strong monad in the symmetric monoidal category of food.

Frankly, having seen plenty of attempts to explain monads to computer scientists, I thought this should have been marketed as ‘monads for chefs’. But Mike Stay, who pointed me to this article, explained its subtext:

Haskell uses monads all over the place, and programmers who are not used to functional programming often find them confusing. This is a quote from a widely-shared article on the proliferation of “monad tutorials”:

After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.

The article is this:

by john at November 11, 2015 11:41 PM

Tommaso Dorigo - Scientificblogging

A 2 Meter Piece Of Junk Falling From Space
In a bit less than two days, an orbiting object known by the peculiar name WT1190F - but I'd like to rename it WTF 1190 for obvious reasons - is expected to fall to Earth. The object was discovered by the Catalina Sky Survey in 2013 and little is known about its origin. It has a low density and makes a very eccentric revolution around us every three weeks.
Below is a picture taken by the University of Hawaii 2.2 meter telescope in October. The object is observed as a bright spot that shows a relative motion with respect to fixed stars.

read more

by Tommaso Dorigo at November 11, 2015 07:29 PM

ZapperZ - Physics and Physicists

What Is Computational Physics?
Rhett Allain has published his take on what "computational physics" is.

Many of us practicing physicists do work in computational physics. Some very often, some now and then. At some point, many of us have to either analyze data, do numerical modeling, or solve intractable equations. We either use pre-made codes, modify some other computer codes, write our own code, or use commercial software.

But certainly, this is less involved than what someone who specializes in computational physics does. Still, many of us do need to know how to do some of these things as part of our job. People who have to simulate particle beam dynamics, and those who design accelerating structures, are often accelerator physicists rather than computational physicists.

Hum... now I seem to be rambling on and can't remember the point I was trying to make. Ugh! Old age sucks!


by ZapperZ at November 11, 2015 05:49 PM

Symmetrybreaking - Fermilab/SLAC

Dark matter’s newest pursuer

Scientists have inaugurated the new XENON1T experiment at Gran Sasso National Laboratory in Italy.

Researchers at a laboratory deep underneath the tallest mountain in central Italy have inaugurated XENON1T, the world’s largest and most sensitive device designed to detect a popular dark matter candidate.

“We will be the biggest game in town,” says Columbia University physicist Elena Aprile, spokesperson for the XENON collaboration, which has over the past decade designed, built and operated a succession of ever-larger experiments that use liquid xenon to look for evidence of weakly interacting massive dark matter particles, or WIMPs, at the Gran Sasso National Laboratory.

Interactions with these dark matter particles are expected to be rare: Just one a year for every 1000 kilograms of xenon. As a result, larger experiments have a better chance of intercepting a WIMP as it passes through the Earth.

XENON1T’s predecessors—XENON10 (2006 to 2009) and XENON100 (2010 to the present)—held 25 and 160 kilograms of xenon, respectively. The new XENON1T experiment’s detector measures 1 meter high and 1 meter in diameter and contains 3500 kilograms of liquid xenon, nearly 10 times as much as the next-biggest xenon-filled dark matter experiment, the Large Underground Xenon experiment.
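Taking the benchmark rate quoted above – roughly one interaction per year per 1000 kilograms of xenon – at face value, the scaling with target mass is simple arithmetic. The naive sketch below ignores detection efficiency, fiducial cuts and the dependence on the assumed WIMP mass and cross section:

    # Naive expected-event count from the benchmark rate quoted above:
    # ~1 interaction per year per 1000 kg of xenon. Ignores detection efficiency,
    # fiducial-volume cuts and the dependence on the assumed WIMP parameters.
    rate = 1.0 / 1000.0        # events per kilogram per year

    for name, mass_kg in [("XENON100", 160), ("XENON1T", 3500), ("upgrade", 7000)]:
        print(f"{name}: ~{rate * mass_kg:.1f} expected events per year")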

Looking for WIMPs

Should a WIMP collide with a xenon atom, kicking its nucleus or knocking out one of its electrons, the result is a prompt burst of ultraviolet light and a bunch of freed electrons. Scientists apply a strong electric field in the XENON1T detector to drift these freed electrons to the top of the chamber, where they create a second burst of light. The relative timing and brightness of the two flashes will help the scientists determine the type of particle that created them.

“Since our detectors can detect even a single electron or photon, XENON1T will be sensitive to even the most feeble particle interactions,” says Rafael Lang, a Purdue University physicist on the XENON collaboration.

Scientists cool the xenon to minus 163 degrees Fahrenheit to turn it into a liquid three times denser than water. One oddity of xenon is that its boiling temperature is only 7 degrees Fahrenheit above its melting temperature. So “we have to control our temperature and pressure precisely,” Aprile says.

The experiment is shielded from other particles such as cosmic rays by separate layers of water, lead, polyethylene and copper—not to mention 1400 meters of Apennine rock that lie above the Gran Sasso lab’s underground tunnels.

Keeping the xenon free of contaminants is essential to the detector’s sensitivity. Oxygen, for example, can trap electrons. And the decay of some radioactive krypton isotopes, which are difficult to separate from xenon, can obscure a WIMP signal. The XENON collaboration’s solution is to continuously circulate and filter 100 liters of xenon gas every minute from the top of the detector through a filtering system before chilling it and returning it to service.

A matter of scale

XENON researchers hope that their new experiment will finally be the one to see definitive evidence of WIMPs. But just in case, XENON1T was designed to accommodate a swift upgrade to 7000 kilograms of xenon in its next iteration. (At the same time, the LUX and UK-based Zeplin groups joined forces to design a similar-scale xenon detector, LZ.)

“If we see nothing with XENON1T, it will still be worth it to move up to the 7000-kilogram device, since it will be relatively easy to do that,” Aprile says. “If we do see a few events with XENON1T—and we’re sure they are from the dark matter particle—then the best way to prove that it’s real is to confirm that result with a larger, more sensitive experiment.

“In any case,” Aprile says, “we should know whether the WIMP is real or not before 2020.”

by Mike Ross at November 11, 2015 04:31 PM

The n-Category Cafe

Weil, Venting

From the introduction to André Weil’s Basic Number Theory:

It will be pointed out to me that many important facts and valuable results about local fields can be proved in a fully algebraic context, without any use being made of local compacity, and can thus be shown to preserve their validity under far more general conditions. May I be allowed to suggest that I am not unaware of this circumstance, nor of the possibility of similarly extending the scope of even such global results as the theorem of Riemann–Roch? We are dealing here with mathematics, not theology. Some mathematicians may think they can gain full insight into God’s own way of viewing their favorite topic; to me, this has always seemed a fruitless and a frivolous approach. My intentions in this book are more modest. I have tried to show that, from the point of view which I have adopted, one could give a coherent treatment, logically and aesthetically satisfying, of the topics I was dealing with. I shall be amply rewarded if I am found to have been even moderately successful in this attempt.

I was young when I discovered by harsh experience that even mathematicians with crashingly comprehensive establishment credentials can be as defensive and prickly as anyone. I was older when (and I only speak of my personal tastes) I got bored of tales of Grothendieck-era mathematical Paris.

Nonetheless, I find the second half of Weil’s paragraph challenging. Is there a tendency, in category theory, to imagine that there’s such a thing as “God’s own way of viewing” a topic? I don’t think that approach is fruitless. Is it frivolous?

by leinster at November 11, 2015 12:33 AM

November 09, 2015

ZapperZ - Physics and Physicists

100 Years Of General Relativity
General Relativity turns 100 years this month. The Universe was never the same again after that! :)

APS Physics has a collection of articles on GR-related papers published across its family of journals. Check them out. Many are fairly understandable to non-experts.


by ZapperZ at November 09, 2015 07:03 PM

Symmetrybreaking - Fermilab/SLAC

Neutrino experiments win big again

The Fundamental Physics Prize recognized five collaborations studying neutrino oscillations.

Hot on the heels of their Nobel Prize recognition, neutrino oscillations have another accolade to add to their list. On November 8, representatives from five different neutrino experiments accepted a joint award for the 2016 Breakthrough Prize in Fundamental Physics.

The Breakthrough Prizes, also given for life sciences and mathematics, celebrate both science itself and the work of scientists. The award was founded by Sergey Brin, Anne Wojcicki, Jack Ma, Cathy Zhang, Yuri and Julia Milner, Mark Zuckerberg and Priscilla Chan with the goal of inspiring more people to pursue scientific endeavors.

This year’s $3 million prize for physics will be shared evenly among five teams: the Daya Bay Reactor Neutrino Experiment based in China, the KamLAND collaboration in Japan, the K2K (KEK to Kamioka) and T2K (Tokai to Kamioka) long-baseline neutrino oscillation experiments in Japan, Sudbury Neutrino Observatory (SNO) in Canada, and the Super-Kamiokande collaboration in Japan. These experiments explored the nature of the ghostly particles that are the most abundant massive particle in the universe, and how they change among three types as they travel.

Almost 1400 people contributed to these experiments that discovered and unraveled neutrino oscillations, “revealing a new frontier beyond, and possibly far beyond, the standard model of particle physics,” according to the Breakthrough Prize’s press release.

This year’s physics Nobel laureates Takaaki Kajita (Super-K) and Arthur B. McDonald (SNO) appeared onstage to accept the Breakthrough Prize along with Yifang Wang, Kam-Biu Luk, Atsuto Suzuki, Koichiro Nishikawa and Yoichiro Suzuki.

“The quest for the secrets of neutrinos is not finished yet, and many more mysteries are yet to be discovered,” Wang said during the ceremony at Mountain View, California. There are many questions left to answer about neutrinos, including how much mass they have, whether there are more than three types, and whether neutrinos and antineutrinos behave differently.

A broad slate of oscillation experiments is currently studying neutrinos, with more planned for the future. Daya Bay, Super-K, T2K and KamLAND continue to research the particles, as does an upgraded version of SNO, SNO+. The US-based MINOS+ and NOvA are currently taking long-baseline neutrino oscillation data. The Jiangmen Underground Neutrino Observatory is under construction in China, and the international Deep Underground Neutrino Experiment is progressing quickly through the planning phase. Many others dot the neutrino experiment landscape, using everything from nuclear reactors to giant chunks of Antarctic ice to learn more about the hard-to-catch particles. With so much left to discover, it seems like there are plenty of prizes left in neutrino research.

by Lauren Biron at November 09, 2015 06:40 PM

Symmetrybreaking - Fermilab/SLAC

Physics Photowalk voting begins

Pick your favorites from among 24 photos taken during the Global Physics Photowalk.

Twenty-four top photos have been selected to enter the next stage of the Global Physics Photowalk competition.

In September, eight world-leading research laboratories invited photographers to take a look behind the scenes at their facilities and share the beauty behind physics. More than 200 photographers participated in the international photowalk, collectively submitting thousands of photos to local competitions. After careful deliberation, each laboratory selected three winning photos from its local submissions to enter into the global competition.

In the next stage of the global competition, the top 24 photos will be judged in two categories: a jury competition, facilitated through a panel of international judges, and a people’s choice competition, conducted via an online popular vote. Starting today, the public is invited to view and choose their favorite photos on the Interactions Collaboration website. Voting closes November 30.

While voting for the people’s choice selection is underway, an international jury comprising artists, photographers and scientists will convene to scrutinize the photos and crown the global winners.

Those winners will be announced in December and will have the opportunity to be featured in Symmetry magazine, the CERN Courier, and as part of a traveling exhibit across laboratories in Australia, Asia, Europe and North America.

Visit the Interactions Collaboration website to view additional photographs from each laboratory’s local event.

by Interactions at November 09, 2015 02:00 PM

November 08, 2015

John Baez - Azimuth

Tale of a Doomed Galaxy

Part 1

About 3 billion years ago, if there was intelligent life in the galaxy we call PG 1302-102, it should have known it was in serious trouble.

Our galaxy has a supermassive black hole in the middle. But that galaxy had two. One was about ten times as big as the other. Taken together, they weighed a billion times as much as our Sun.

They gradually spiraled in towards each other… and then, suddenly, one fine morning, they collided. The resulting explosion was 10 million times more powerful than a supernova—more powerful than anything astronomers here on Earth have ever seen! It was probably enough to wipe out all life in that galaxy.

We haven’t actually seen this yet. The light and gravitational waves from the disaster are still speeding towards us. They should reach us in roughly 100,000 years. We’re not sure when.

Right now, we see the smaller black hole still orbiting the big one, once every 5 years. In fact it’s orbiting once every 4 years! But thanks to the expansion of the universe, PG 1302-102 is moving away from us so fast that time on that distant galaxy looks significantly slowed down to us.

Orbiting once every 4 years: that doesn’t sound so fast. But the smaller black hole is about 2000 times more distant from its more massive companion than Pluto is from our Sun! So in fact it’s moving at very high speed – about 1% of the speed of light. We can actually see it getting redshifted and then blueshifted as it zips around. And it will continue to speed up as it spirals in.

What exactly will happen when these black holes collide? It’s too bad we won’t live to see it. We’re far enough that it will be perfectly safe to watch from here! But the human race knows enough about physics to say quite a lot about what it will be like. And we’ve built some amazing machines to detect the gravitational waves created by collisions like this—so as time goes on, we’ll know even more.

Part 2

Even before the black holes at the heart of PG 1302-102 collided, life in that galaxy would have had a quasar to contend with!

This is a picture of Centaurus A, a much closer galaxy with a quasar in it. A quasar is a huge black hole in the middle of a galaxy—a black hole that’s eating lots of stars, which rip apart and form a disk of hot gas as they spiral in. ‘Hot’ is an understatement, since this gas moves near the speed of light. It gets so hot that it pumps out intense jets of particles from its north and south poles. Some of these particles even make it to Earth.

Any solar system in Centaurus A that gets in the way of those jets is toast.

And these jets create lots of radiation, from radio waves to X-rays. That’s how we can see quasars from billions of light years away. Quasars are the brightest objects in the universe, except for short-lived catastrophic events like the black hole collisions and gamma-ray bursts from huge dying stars.

It’s hard to grasp the size and power of such things, but let’s try. You can’t see the black hole in the middle of this picture, but it weighs 55 million times as much as our Sun. The blue glow of the jets in this picture is actually X rays. The jet at upper left is 13,000 light years long, made of particles moving at half the speed of light.

A typical quasar puts out a power of roughly 10^40 watts. They vary a lot, but let’s pick this number as our ‘standard quasar’.

But what does 10^40 watts actually mean? For comparison, the Sun puts out 4 x 10^26 watts. So, we’re talking 30 trillion Suns. But even that’s too big a number to comprehend!

Maybe it would help to say that the whole Milky Way puts out 5 x 10^36 watts. So a single quasar, at the center of one galaxy, can have the power of 2000 galaxies like ours.

Or, we can work out how much energy would be produced if the entire mass of the Moon were converted into energy. I’m getting 6 x 10^39 joules. That’s a lot! But our standard quasar is putting out a bit more power than if it were converting one Moon into energy each second.

But you can’t just turn matter completely into energy: you need an equal amount of antimatter, and there’s not that much around. A quasar gets its power the old-fashioned way: by letting things fall down. In this case, fall down into a black hole.

To power our standard quasar, 10 stars need to fall into the black hole every year. The biggest quasars eat 1000 stars a year. The black hole in our galaxy gets very little to eat, so we don’t have a quasar.
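
These comparisons are easy to check yourself. Below is a back-of-envelope Python sketch: the quasar, solar and Milky Way luminosities are the figures quoted above, the physical constants are rounded, and the 10% accretion efficiency in the last step is my own assumption rather than anything stated in the post.

    # Back-of-envelope check of the quasar numbers quoted above.
    L_quasar   = 1e40        # W, the post's "standard quasar"
    L_sun      = 4e26        # W, solar luminosity (rounded)
    L_milkyway = 5e36        # W, Milky Way luminosity (as quoted)

    c      = 3e8             # m/s, speed of light (rounded)
    m_moon = 7.3e22          # kg, mass of the Moon
    m_sun  = 2e30            # kg, mass of the Sun
    year   = 3.15e7          # s, one year

    print(L_quasar / L_sun)        # ~2.5e13: tens of trillions of Suns
    print(L_quasar / L_milkyway)   # ~2000 Milky Ways
    print(m_moon * c**2)           # ~6.6e39 J: rest energy of one Moon, so
                                   # 1e40 W is a bit more than a Moon per second

    # How many Sun-like stars per year must fall in? Assume a fraction eps of
    # the infalling rest energy is radiated; eps ~ 0.1 is a typical accretion
    # figure (my assumption, not the post's).
    eps = 0.1
    stars_per_year = L_quasar * year / (eps * m_sun * c**2)
    print(stars_per_year)          # ~18, the same ballpark as "10 stars a year"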

There are short-lived events much more powerful than a quasar. For example, a gamma-ray burst, formed as a hypergiant star collapses into a black hole. A powerful gamma-ray burst can put out 10^44 watts for a few seconds. That’s equal to 10,000 quasars! But quasars last a long, long time.

So this was life in PG 1302-102 before things got really intense – before its two black holes spiraled into each other and collided. What was that collision like? I’ll talk about that next time.

The above picture of Centaurus A was actually made from images taken by three separate telescopes. The orange glow is submillimeter radiation – between infrared and microwaves—detected by the Atacama Pathfinder Experiment (APEX) telescope in Chile. The blue glow is X-rays seen by the Chandra X-ray Observatory. The rest is a photo taken in visible light by the Wide Field Imager on the Max-Planck/ESO 2.2 meter telescope, also located in Chile. This shows the dust lanes in the galaxy and background stars.

Part 3

What happened at the instant the supermassive black holes in the galaxy PG 1302-102 finally collided?

We’re not sure yet, because the light and gravitational waves will take time to get here. But physicists are using computers to figure out what happens when black holes collide!

Here you see some results. The red blobs are the event horizons of two black holes.

First the black holes orbit each other, closer and closer, as they lose energy by emitting gravitational radiation. This is called the ‘inspiral’ phase.

Then comes the ‘plunge’ and ‘merger’. They plunge towards each other. A thin bridge forms between them, which you see here. Then they completely merge.

Finally you get a single black hole, which oscillates and then calms down. This is called the ‘ringdown’, because it’s like a bell ringing, loudly at first and then more quietly. But instead of emitting sound, it’s emitting gravitational waves—ripples in the shape of space!

In the top picture, the black holes have the same mass: one looks smaller, but that’s because it’s farther away. In the bottom picture, the black hole at left is twice as massive.

Here’s one cool discovery. An earlier paper had argued there could be two bridges, except in very symmetrical situations. If that were true, a black hole could have the topology of a torus for a little while. But these calculations showed that – at least in the cases they looked at—there’s just one bridge.

So, you can’t have black hole doughnuts. At least not yet.

These calculations were done using free software called SpEC. But before you try to run it at home: the team that puts out this software says:

Because of the steep learning curve and complexity of SpEC, new users are typically introduced to SpEC through a collaboration with experienced SpEC users.

It probably requires a lot of computer power, too. These calculations are very hard. We know the equations; they’re just tough to solve. The first complete simulation of an inspiral, merger and ringdown was done in 2005.

The reason people want to simulate colliding black holes is not mainly to create pretty pictures, or even understand what happens to the event horizon. It’s to understand the gravitational waves they will produce! People are building better and better gravitational wave detectors—more on that later—but we still haven’t seen gravitational waves. This is not surprising: they’re very weak. To find them, we need to filter out noise. So, we need to know what to look for.
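
That last point, knowing what to look for, is worth a small illustration. The toy Python sketch below slides a known chirp-like template across noisy fake data and looks for the offset where the correlation peaks. This is matched filtering only in spirit, not LIGO's actual analysis pipeline, and every number in it is made up.

    # Toy version of "we need to know what to look for": a weak chirp is
    # buried in much louder noise, and sliding a copy of the expected
    # waveform across the data picks out where it is hiding.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 4000)           # 1 second at 4000 samples/s

    # A toy "chirp" whose frequency sweeps upward, loosely like an inspiral.
    chirp = np.sin(2 * np.pi * (20 * t + 40 * t**2))
    tpl = chirp[:2000]                        # template: the first half second

    # Bury a copy of the template in Gaussian noise, starting at sample 1500.
    data = rng.normal(scale=2.0, size=t.size)
    data[1500:1500 + tpl.size] += tpl

    # Correlate the template against the data at every possible offset.
    corr = np.correlate(data, tpl, mode="valid")
    print("best match at sample", int(np.argmax(np.abs(corr))))   # ~1500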

The pictures are from here:

• Michael I. Cohen, Jeffrey D. Kaplan and Mark A. Scheel, On toroidal horizons in binary black hole inspirals, Phys. Rev. D 85 (2012), 024031.

Part 4

Let’s imagine an old, advanced civilization in the doomed galaxy PG 1302-102.

Long ago they had mastered space travel. Thus, they were able to survive when their galaxy collided with another—just as ours will collide with Andromeda four billion years from now. They had a lot of warning—and so do we. The picture here shows what Andromeda will look like 250 million years before it hits.

They knew everything we do about astronomy—and more. So they knew that when galaxies collide, almost all stars sail past each other unharmed. A few planets get knocked out of orbit. Colliding clouds of gas and dust form new stars, often blue giants that live short, dramatic lives, going supernova after just 10 million years.

All this could be handled by not being in the wrong place at the wrong time. They knew the real danger came from the sleeping monsters at the heart of the colliding galaxies.

Namely, the supermassive black holes!

Almost every galaxy has a huge black hole at its center. This black hole is quiet when not being fed. But when galaxies collide, lots of gas and dust and even stars get caught by the gravity and pulled in. This material forms a huge flat disk as it spirals down and heats up. The result is an active galactic nucleus.

In the worst case, the central black holes can eat thousands of stars a year. Then we get a quasar, which easily pumps out the power of 2000 ordinary galaxies.

Much of this power comes out in huge jets of X-rays. These jets keep growing, eventually stretching for hundreds of thousands of light years. The whole galaxy becomes bathed in X-rays—killing all life that’s not prepared.

Let’s imagine a civilization that was prepared. Natural selection has ways of weeding out civilizations that are bad at long-term planning. If you’re prepared, and you have the right technology, a quasar could actually be a good source of power.

But the quasar was just the start of the problem. The combined galaxy had two black holes at its center. The big one was at least 400 million times the mass of our Sun. The smaller one was about a tenth as big—but still huge.

They eventually met and started to orbit each other. By flinging stars out of the way, they gradually came closer. It was slow at first, but the closer they got, the faster they circled each other, and the more gravitational waves they pumped out. This carried away more energy—so they moved closer, and circled even faster, in a dance with an insane, deadly climax.

Right now—here on Earth, where it takes a long time for the news to reach us—we see that in 100,000 years the two black holes will spiral down completely, collide and merge. When this happens, a huge pulse of gravitational waves, electromagnetic radiation, matter and even antimatter will blast through the galaxy called PG 1302-102.

I don’t know exactly what this will be like. I haven’t found papers describing this kind of event in detail.

One expert told the New York Times that the energy of this explosion will equal 100 million supernovae. I don’t think he was exaggerating. A supernova is a giant star whose core collapses as it runs out of fuel, easily turning several Earth masses of hydrogen into iron before you can say "Jack Robinson". When it does this, it can easily pump out 10^44 joules of energy. So, 100 million supernovae is 10^52 joules. By contrast, if we could convert all the mass of the black holes in PG 1302-102 into energy, we’d get about 10^56 joules. So, our expert was just saying that their merger will turn 0.01% of their combined mass into energy. That seems quite reasonable to me.
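
The arithmetic in that paragraph is quick to verify. A minimal Python check, using only numbers quoted in this series (a per-supernova energy of 10^44 joules and a combined black hole mass of a billion Suns) plus rounded constants:

    # Rough check of the merger-energy comparison above.
    c     = 3e8                     # m/s
    m_sun = 2e30                    # kg

    E_explosion = 1e8 * 1e44        # "100 million supernovae" -> 1e52 J
    E_rest = 1e9 * m_sun * c**2     # rest energy of a billion solar masses

    print(E_explosion)              # 1e52 J
    print(E_rest)                   # ~1.8e56 J, i.e. about 10^56 J as quoted
    print(E_explosion / E_rest)     # ~6e-5, the same order as the post's 0.01%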

But I want to know what happens then! What will the explosion do to the galaxy? Most of the energy comes out as gravitational radiation. Gravitational waves don’t interact very strongly with matter. But when they’re this strong, who knows? And of course there will be plenty of ordinary radiation, as the accretion disk gets shredded and sucked into the new combined black hole.

The civilization I’m imagining was smart enough not to stick around. They decided to simply leave the galaxy.

After all, they could tell the disaster was coming, at least a million years in advance. Some may have decided to stay and rough it out, or die a noble death. But most left.

And then what?

It takes a long time to reach another galaxy. Right now, travelling at 1% the speed of light, it would take 250 million years to reach Andromeda from here.

But they wouldn’t have to go to another galaxy. They could just back off, wait for the fireworks to die down, and move back in.

So don’t feel bad for them. I imagine they’re doing fine.

By the way, the expert I mentioned is S. George Djorgovski of Caltech, mentioned here:

• Dennis Overbye, Black holes inch ahead to violent cosmic union, New York Times, 7 January 2015.

Part 5

When distant black holes collide, they emit a burst of gravitational radiation: a ripple in the shape of space, spreading out at the speed of light. Can we detect that here on Earth? We haven’t yet. But with luck we will soon, thanks to LIGO.

LIGO stands for Laser Interferometer Gravitational Wave Observatory. The idea is simple. You shine a laser beam down two very long tubes and let it bounce back and forth between mirrors at the ends. You use this to compare the lengths of the two tubes. When a gravitational wave comes by, it stretches space in one direction and squashes it in another direction. So, we can detect it.

Sounds easy, eh? Not when you run the numbers! We’re trying to see gravitational waves that stretch space just a tiny bit: about one part in 10^23. At LIGO, the tubes are 4 kilometers long. So, we need to see their length change by an absurdly small amount: one-thousandth the diameter of a proton!

It’s amazing to me that people can even contemplate doing this, much less succeed. They use lots of tricks:

• They bounce the light back and forth many times, effectively increasing the length of the tubes to 1800 kilometers.

• There’s no air in the tubes—just a very good vacuum.

• They hang the mirrors on quartz fibers, making each mirror part of a pendulum with very little friction. This means it vibrates very well at one particular frequency, and very badly at frequencies far from that. This damps out the shaking of the ground, which is a real problem.

• This pendulum is hung on another pendulum.

• That pendulum is hung on a third pendulum.

• That pendulum is hung on a fourth pendulum.

• The whole chain of pendulums is sitting on a device that detects vibrations and moves in a way to counteract them, sort of like noise-cancelling headphones.

• There are 2 of these facilities, one in Livingston, Louisiana and another in Hanford, Washington. Only if both detect a gravitational wave do we get excited.
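
On that last point, the coincidence requirement is easy to quantify. A real gravitational wave travels at the speed of light, so the two sites should register it within the light-travel time between them; the rough 3000-kilometer site separation below is my own figure, not something stated in the post.

    # Maximum arrival-time difference between the two LIGO sites for a
    # signal travelling at the speed of light.
    c = 3e8               # m/s
    d = 3.0e6             # m, rough Livingston-Hanford separation (my figure)

    print(d / c * 1e3, "ms")   # ~10 ms: coincidences outside a window like
                               # this can be rejected as local noise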

I visited the LIGO facility in Louisiana in 2006. It was really cool! Back then, the sensitivity was good enough to see collisions of black holes and neutron stars up to 50 million light years away.

Here I’m not talking about supermassive black holes like the ones in the doomed galaxy of my story here! I’m talking about the much more common black holes and neutron stars that form when stars go supernova. Sometimes a pair of stars orbiting each other will both blow up, and form two black holes—or two neutron stars, or a black hole and neutron star. And eventually these will spiral into each other and emit lots of gravitational waves right before they collide.

50 million light years is big enough that LIGO could see about half the galaxies in the Virgo Cluster. Unfortunately, with that many galaxies, we only expect to see one neutron star collision every 50 years or so.

They never saw anything. So they kept improving the machines, and now we’ve got Advanced LIGO! This should now be able to see collisions up to 225 million light years away… and after a while, three times further.

They turned it on September 18th. Soon we should see more than one gravitational wave burst each year.

In fact, there’s a rumor that they’ve already seen one! But they’re still testing the device, and there’s a team whose job is to inject fake signals, just to see if they’re detected. Davide Castelvecchi writes:

LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,” says Laura Cadonati, a physicist at the Georgia Institute of Technology in Atlanta who leads the Advanced LIGO’s data-analysis team.

Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. Harry Collins, a sociologist of science at Cardiff University, UK, was there to document them (and has written books about it). He says that the exercises can be valuable for rehearsing the analysis techniques that will be needed when a real event occurs. But the practice can also be a drain on the team’s energies. “Analysing one of these events can be enormously time consuming,” he says. “At some point, it damages their home life.”

The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.

Aargh! The disappointment would be crushing.

But with luck, Advanced LIGO will soon detect real gravitational waves. And I hope life here in the Milky Way thrives for a long time – so that when the gravitational waves from the doomed galaxy PG 1302-102 reach us, hundreds of thousands of years in the future, we can study them in exquisite detail.

For Castelvecchi’s whole story, see:

• Davide Castelvecchi, Has giant LIGO experiment seen gravitational waves?, Nature, 30 September 2015.

For pictures of my visit to LIGO, see:

• John Baez, This week’s finds in mathematical physics (week 241), 20 November 2006.

For how Advanced LIGO works, see:

• The LIGO Scientific Collaboration, Advanced LIGO, 17 November 2014.


To see where the pictures are from, click on them. For more, try this:

• Ravi Mandalia, Black hole binary entangled by gravity progressing towards deadly merge.

The picture of Andromeda in the nighttime sky 3.75 billion years from now was made by NASA. You can see a whole series of these pictures here:

• NASA, NASA’s Hubble shows Milky Way is destined for head-on collision, 31 March 2012.

Let’s get ready! For starters, let’s deal with global warming.

by John Baez at November 08, 2015 01:13 AM

November 06, 2015

Clifford V. Johnson - Asymptotia

Anthropological Friends

As promised, on the right is the companion figure to the one I shared earlier (on the left). Click for a larger view. These were two jolly fellows I found in glass cases at Mexico City's Museo Nacional de Antropologia, and I sort of had to sketch them.

-cvj

The post Anthropological Friends appeared first on Asymptotia.

by Clifford at November 06, 2015 05:40 PM

Symmetrybreaking - Fermilab/SLAC

The light side of dark matter

New technology and new thinking are pushing the dark matter hunt to lower and lower masses.

It’s a seemingly paradoxical but important question in particle physics: Can dark matter be light?

Light in this case refers to the mass of the as-yet undiscovered particle or group of particles that may make up dark matter, the unseen stuff that accounts for about 85 percent of all matter in the universe.

Ever-more-sensitive particle detectors, experimental hints and evolving theories about the makeup of dark matter are driving this expanding search for lighter and lighter particles—even below the mass of a single proton—with several experiments giving chase.

An alternative to WIMPs?

Theorized weakly interacting massive particles, or WIMPs, are counted among the leading candidates for dark matter particles. They most tidily fit some of the leading models.

Many scientists expected WIMPs might have a mass of around 100 billion electronvolts—about 100 times the mass of a proton. The fact that they haven’t definitively shown up in searches covering a range from about 10 billion electronvolts to 1 trillion electronvolts has cracked the door to alternative theories about WIMPs and other candidate dark matter particles.

Possible low-energy signals measured at underground dark matter experiments CoGeNT in Minnesota and DAMA/LIBRA in Italy, along with earlier hints of dark matter particles in space observations of our galaxy’s center by the Fermi Gamma-ray Space Telescope, excited interest in a mass range below about 11 billion electronvolts—roughly 11 times the mass of a proton.

Such low-energy particles could be thought of as lighter, “wimpier” WIMPs, or they could be a different kind of particle: light dark matter.

SuperCDMS, a WIMP-hunting experiment in the Soudan Underground Laboratory in Minnesota, created a special search mode, called CDMSlite, to make its detectors sensitive to particles with masses below 5 billion electronvolts. With planned upgrades, CDMSlite should eventually be able to stretch down to detect particles with a mass about 50 times less than this.

In September, the CDMS collaboration released results that narrow the parameters used to search for light WIMPs in a mass range of 1.6 billion to 5.5 billion electronvolts.

Also in September, collaborators with the CRESST experiment (pictured above) at Gran Sasso laboratory in Italy released results that explored for the first time masses down to 0.5 billion electronvolts.
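
To translate these electronvolt figures for yourself, here is a small Python sketch that converts the masses named in this article into proton masses and kilograms; the conversion constants are rounded.

    # Convert the quoted dark matter candidate masses into proton masses and kg.
    eV_to_kg    = 1.8e-36         # kg per eV/c^2 (1 eV/c^2 ~ 1.78e-36 kg)
    m_proton_eV = 0.94e9          # proton mass, ~0.94 billion electronvolts

    candidates = [("classic WIMP guess",          100e9),
                  ("CoGeNT/DAMA-motivated range",  11e9),
                  ("CDMSlite range, lower edge",   1.6e9),
                  ("CRESST result, lower edge",    0.5e9)]

    for name, m_eV in candidates:
        print(f"{name}: {m_eV / m_proton_eV:.1f} proton masses, "
              f"{m_eV * eV_to_kg:.2e} kg")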

Other underground experiments, such as LUX at the Sanford Underground Research Facility in South Dakota, EDELWEISS at Modane Underground Laboratory in France, and DAMIC at SNOLAB in Canada, are also working to detect light dark matter particles. Many more experiments, including Earth- and space-based telescopes and CERN’s Large Hadron Collider, are playing a role in the dark matter hunt as well.

This hunt has broadened in many directions, says David Kaplan, a physics professor at Johns Hopkins University.

“Incredible progress has been made—scientists literally gained over 10 orders of magnitude in sensitivity from the beginning of really dedicated WIMP experiments until now,” he says. “In a sense, the WIMP is the most boring possibility. And if the WIMP is ruled out, it’s an extremely interesting time.”

Peter Graham, an assistant professor of physics at Stanford University, says the light dark matter search is especially intriguing because any discovery in the light dark matter range would fly in the face of classical physics theories. “If we find it, it won’t be in the Standard Model,” he says.

Coming attractions

The experiments searching for light dark matter are working together to see through the background particles that can obscure their searches, says Dan Bauer, spokesman for the SuperCDMS collaboration and group leader for the effort at Fermilab.

“In this whole field, it’s competitive but it’s also collaborative,” he says. “We all share information.”

The next few months will bring new results from the CDMSlite experiment and from CRESST.

An upgrade, now in progress, will push the lower limits of the CRESST detectors to about 0.1 billion to 0.2 billion electronvolts, says Federica Petricca, a researcher at the Max Planck Institute for Physics and spokesperson for the CRESST experiment.

“The community has learned to be a bit more open and not to focus on a specific region of the mass range of the dark matter particle,” Petricca says. “I think this is interesting simply because there are motivated theories behind this, and there is no reason to limit the search to some specific model.”

Researchers are also looking out for future results from an experiment called DAMIC. DAMIC searches for signs of dark matter using an array of specialized charge-coupled devices, similar to the light-sensitive sensors found in today’s smartphone cameras.

DAMIC already can search for particles with a mass below 6 billion electronvolts. The experiment’s next iteration, known as DAMIC100, should be able to take measurements below 0.3 billion electronvolts after it starts up in 2016, says DAMIC spokesperson Juan Estrada of Fermilab.

“I think it is very valuable to have several experiments that are looking in the same region,” Estrada says, “because it doesn’t look like any single experiment will be able to confirm a dark matter signal—we will need to have many experiments.

“There is still a lot of room for innovation.”

by Glenn Roberts Jr. at November 06, 2015 04:11 PM

November 05, 2015

ZapperZ - Physics and Physicists

The Physics Of Sports That "Defy Physics"
I love this article, and it is about time someone wrote something like this.

Chad Orzel has a nice article explaining why the often-claimed event in sports that "defy physics" actually happened BECAUSE of physics.

Of course, as several physicists grumbled on Twitter this morning, “defied physics” is a silly way to describe these plays. These aren’t happening in defiance of physics, they’re happening because of physics. Physics is absolute and universal, and never defied– the challenge and the fun of these plays is to explain why and how these seemingly impossible shots are consistent with known physics.

What is being "defied" is one's understanding and expectations of what would happen, and what seemed impossible to happen. This is DIFFERENT from discovering something that "defies physics", and that is what many people, especially sports writers and TV talking heads, do not seem to understand. The fact that these people often lack any deep understanding of basic physics, yet somehow claim to know when something they don't understand well is being "defied", appears to be lost in all of this. It is like me, having never visited France and knowing little about the French people, claiming that something isn't consistent with that country or its people simply based on what I have seen on TV.

I wish they would stop using the phrase "defy physics" in situations like this, the same way I wish reporters would stop using the phrase "rate of speed" when they actually just mean "speed"!


by ZapperZ at November 05, 2015 03:28 PM

November 04, 2015

The n-Category Cafe

Cakes, Custard, Categories and Colbert

As you probably know, Eugenia Cheng has written a book called Cakes, Custard and Category Theory: Easy Recipes for Understanding Complex Maths, which has gotten a lot of publicity. In the US it appeared under the title How to Bake Pi: An Edible Exploration of the Mathematics of Mathematics, presumably because Americans are less familiar with category theory and custard (not to mention the peculiar British concept of “pudding”).

Tomorrow, Wednesday November 4th, Eugenia will appear on The Late Show with Stephen Colbert. There will also be another lesser-known guest who looks like this:

Apparently his name is Daniel Craig and he works on logic—he proved something called the Craig interpolation theorem. I hear he and Eugenia will have a duel from thirty paces to settle the question of the correct foundations of mathematics.

Anyway, it should be fun! If you think I’m making this all up, go here. She’s really going to be on that show.

I’m looking forward to this because while Eugenia learned category theory as a grad student at Cambridge, mainly from her advisor Martin Hyland and Peter Johnstone, Hyland became interested in n-categories and Eugenia wound up doing her thesis on the opetopic approach to n-categories which James Dolan and I had dreamt up. I visited Cambridge for the first time around then, and we wound up becoming friends. It’s nice to see she’s bringing math and even category theory to a large audience.

Here’s my review of Eugenia’s book for the London Mathematical Society Newsletter:

Eugenia Cheng has written a delightfully clear and down-to-earth explanation of the spirit of mathematics, and in particular category theory, based on their similarities to cooking. Sometimes people complain about a math textbook that it’s “just a cookbook”, offering recipes but no insight. Cheng shows the flip side of this analogy, providing plenty of insight into mathematics by exploring its resemblance to the culinary arts. Her book has recipes, but it’s no mere cookbook.

Among all forms of cooking, it seems Cheng’s favorite is the baking of desserts—and among all forms of mathematics, category theory. This is no coincidence: like category theory, the art of the pastry chef is one of the most exacting, but also one of the most delightful, thanks to the elegance of its results. Cheng gives an example: “Making puff pastry is a long and precise process, involving repeated steps of chilling, rolling and folding to create the deliciously delicate and buttery layers that makes puff pastry different from other kinds of pastry.”

However, she does not scorn the humbler branches of mathematics and cooking, and there’s nothing effete or snobby about this book. No special background is needed to follow it, so if you’re a mathematician who wants your relatives and friends to understand what you are doing and why you love it, this is the perfect gift to inflict on them.

On the other hand, experts may be disappointed unless they pay close attention. There is a fashionable sort of book that lauds the achievements of mathematical geniuses, explaining them in just enough detail to give the reader a sense of awe: typical titles are A Beautiful Mind and The Man Who Knew Infinity. Cheng avoids this sort of hagiography, which may intimidate as often as it inspires. Instead, her book uses examples to show that mathematics is close to everyday experience, not to be feared.

While the book is written in bite-sized pieces suitable for the hasty pace of modern life, it has a coherent architecture and tells an overall story. It does this so winningly and divertingly that one might not even notice. The book’s first part tackles the question “what is mathematics?” The second asks “what is category theory?” Unlike timid people who raise big questions, play with them a while, and move on, Cheng actually proposes answers! I will not attempt to explain them, but the short version is that mathematics exists to make difficult things easy, and category theory exists to make difficult mathematics easy. Thus, what mathematics does for the rest of life, category theory does for mathematics.

Of course, mathematics only succeeds in making a tiny part of life easy, and Cheng admits this freely, saying quite a bit about the limitations of mathematics, and rationality in general. Similarly, category theory only succeeds in making small portions of mathematics easy—but those portions lie close to the glowing core of the subject, the part that illuminates the rest.

And as Cheng explains, illumination is what we most need today. Mere information, once hard to come by, is now cheap as water, pouring through the pipes of the internet in an unrelenting torrent. Your cell phone is probably better at taking square roots or listing finite simple groups than you will ever be. But there is much more to mathematics than that—just as cooking is much more than merely following a cookbook.

by john at November 04, 2015 03:57 PM

Teilchen blog

gimp 2.9 improves astrophysical FITS imaging

The GIMP development version has gained an improved FITS plugin for importing astrophysical data at high precision: see the beautiful images in the article "GIMP team revitalizes astrophotography tools".

GIMP Homepage
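
If you want to poke at a FITS file programmatically rather than in GIMP, a minimal sketch using the astropy library works too; astropy is a separate tool of my own choosing, and the filename here is hypothetical.

    # Peek at a FITS image outside GIMP, using astropy ('m31.fits' is a
    # hypothetical filename).
    from astropy.io import fits

    with fits.open("m31.fits") as hdul:
        hdul.info()                    # list the HDUs in the file
        data = hdul[0].data            # primary image as a NumPy array
        header = hdul[0].header
        print(header.get("BITPIX"))    # bit depth, e.g. -32 for 32-bit floats
        print(data.shape, data.min(), data.max())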

November 04, 2015 09:43 AM

Clifford V. Johnson - Asymptotia

A Mexico Meeting

Continuing in the "tradition" of sharing a drawing from a visit to a city South of the border, let me introduce you to this character I met (to my delight) on the recent Mexico trip. I did a quick visit to the wonderful Museo Nacional de Antropologia, and there he/she was. I neglected to get [...]

The post A Mexico Meeting appeared first on Asymptotia.

by Clifford at November 04, 2015 12:21 AM



[RSS 2.0 Feed] [Atom Feed]

Last updated:
November 27, 2015 05:06 PM
All times are UTC.
