Particle Physics Planet

May 24, 2016

Emily Lakdawalla - The Planetary Society Blog

Smooth sailing in San Luis Obispo: LightSail 2 completes day-in-the-life test
The Planetary Society's LightSail 2 spacecraft breezed through a major systems test today, demonstrating that the CubeSat can successfully deploy its antenna and solar panels, communicate with the ground, and unfurl its 32-square-meter solar sails in space.

May 24, 2016 07:41 AM

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx shipped to Florida for September launch
OSIRIS-REx's long journey to an asteroid has begun. The spacecraft departed Colorado on Friday, May 20, travelling aboard an Air Force C-17 to the Payload Hazardous Servicing Facility at Kennedy Space Center.

May 24, 2016 12:53 AM

May 23, 2016

Christian P. Robert - xi'an's og

likelihood inflating sampling algorithm

My friends from Toronto, Radu Craiu and Jeff Rosenthal, have arXived a paper with Reihaneh Entezari on MCMC scaling for large datasets, in the spirit of Scott et al.’s (2013) consensus Monte Carlo. They devised a likelihood-inflating algorithm that brings a novel perspective to the problem of large datasets. The question relates to earlier approaches like consensus Monte Carlo, but also to kernel and Weierstrass subsampling, already discussed on this blog, as well as to current research I am conducting with my PhD student Changye Wu. The approach of Entezari et al. is similar to consensus Monte Carlo and the other solutions in that they consider an inflated likelihood (i.e., one raised to an appropriate power) based on a subsample, with the full sample being recovered by importance sampling. Somewhat unsurprisingly, this approach leads to a less dispersed estimator than consensus Monte Carlo (Theorem 1). The paper only draws a comparison with that sub-sampling method, rather than covering other approaches to the problem, maybe because this is the most natural connection, one approach being the k-th power of the other.

“…we will show that [importance sampling] is unnecessary in many instances…” (p.6)

An obvious question stemming from the approach is the call for importance sampling, since the numerator of the importance sampler involves the full likelihood, which is unavailable in most instances where sub-sampled MCMC is required. I may have missed the part of the paper where the above statement is discussed, but the only realistic example discussed therein is the Bayesian regression tree (BART) of Chipman et al. (1998). This indeed constitutes a challenging, if one-dimensional, example, but also one that requires delicate tuning, leading to cancelling importance weights, and that may prove hard to extrapolate to other models.
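To make the mechanics concrete, here is a minimal sketch of the inflated-likelihood-plus-importance-sampling idea, for a toy Gaussian mean with a flat prior. This is my own illustration, not the authors' code; the model, sample sizes, and tuning constants are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 2_000                     # full-data and subsample sizes
data = rng.normal(1.0, 1.0, size=n)      # toy model: x_i ~ N(theta, 1), flat prior on theta
sub = rng.choice(data, size=m, replace=False)

def log_lik(theta, x):                   # Gaussian log-likelihood with unit variance
    return -0.5 * np.sum((x - theta) ** 2)

# random-walk Metropolis targeting the inflated (power n/m) subsample posterior
theta, chain = sub.mean(), []
for _ in range(5000):
    prop = theta + 0.02 * rng.normal()
    if np.log(rng.uniform()) < (n / m) * (log_lik(prop, sub) - log_lik(theta, sub)):
        theta = prop
    chain.append(theta)
chain = np.array(chain[1000:])           # drop burn-in

# importance weights: full likelihood over inflated subsample likelihood (log scale)
log_w = np.array([log_lik(t, data) - (n / m) * log_lik(t, sub) for t in chain])
w = np.exp(log_w - log_w.max())
post_mean = np.sum(w * chain) / np.sum(w)   # reweighted estimate of E[theta | full data]
```

With a Gaussian likelihood the reweighting recenters the inflated subsample posterior on the full-data posterior; in less friendly models the weights can degenerate, which is where the question of whether importance sampling is really needed becomes pressing.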

Filed under: Books, Statistics, University life Tagged: BART, Canada, consensus Monte Carlo, importance sampling, likelihood function, Monte Carlo Statistical Methods, scaling, subsampling, University of Toronto

by xi'an at May 23, 2016 10:16 PM

astrobites - astro-ph reader's digest

Exploring the law of star-formation through a spatially resolved study of two spiral galaxies

Title: The super-linear slope of the spatially resolved star formation law in NGC 3521 and NGC 5194 (M51a)

Authors: Guilin Liu, Jin Koda, Daniela Calzetti, Masayuki Fukuhara, and Reiko Momose

First Author’s Institution: Astronomy Department, University of Massachusetts, Amherst, MA, USA

Paper status: Published in ApJ


Nimisha Kumari

This is a guest post by Nimisha Kumari, a graduate student at the Institute of Astronomy, Cambridge (UK). Her current research involves the spatially-resolved studies of nearby spiral and blue compact dwarf galaxies. She received her bachelor’s degree from the University of Delhi (India) and master’s degree from Ecole Polytechnique (France).



The Schmidt Law, formulated by Maarten Schmidt in 1959, relates the volume or surface density of the star-formation rate (SFR) to that of the gas as a power law. Because volume densities are difficult to measure, the law is generally expressed in terms of the more easily observable surface densities: Σ_SFR = A Σ_gas^γ, where Σ_SFR and Σ_gas are the surface densities of the SFR and the gas (atomic and molecular) respectively, A is the average global star-formation efficiency of the system studied (e.g. galaxies, galactic disks, star-forming regions), and γ is the power-law index.

Robert Kennicutt empirically found γ = 1.4 ± 0.1 from data on normal spirals and starbursts. This established the disk-averaged star-formation law, called the Schmidt-Kennicutt (S–K) law. This law can explain plausible scenarios of galaxy formation and evolution and hence is used as an essential prescription in various models and simulations. However, a disk-averaged star-formation law implies smoothing over the enormous local variations in the stellar population (dependent on age and the initial mass function) and gas/dust geometry. This means the currently established law might not be a fundamental physical relationship. Thus, in order to understand the physics of star-formation, we need a spatially-resolved study of the star-formation law.
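Operationally, γ is measured as the slope of a straight-line fit in log-log space. Here is a minimal illustration (my own, with synthetic noiseless data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, gamma_true = 1e-4, 1.4
sigma_gas = 10 ** rng.uniform(0.0, 2.0, size=200)   # gas surface densities (arbitrary units)
sigma_sfr = A_true * sigma_gas ** gamma_true        # Sigma_SFR = A * Sigma_gas^gamma

# the power law is linear in log-log space: log S_SFR = gamma * log S_gas + log A
gamma_fit, logA_fit = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
```

Real measurements, of course, add scatter, background contamination, and low-S/N cuts, which is exactly what Liu et al. must contend with below.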


In their paper, Liu et al. focus on addressing two questions using a sub-kiloparsec study of two nearby spiral galaxies, NGC 5194 and NGC 3521. The first concerns how the data are processed before any analysis is conducted. The second looks at how the size of the star-forming regions studied affects the star-formation law.

The data used are images of the two galaxies in the Hα, far-ultraviolet (FUV) and mid-infrared (24 μm) wavebands to trace SFR, and in CO and H I to trace the gas present in the galaxies. The observed star-light (traced by Hα and FUV) and dust emission (traced by the mid-infrared) in a star-forming region contain contributions not only from the star-forming region itself, with its young stars and related dust, but also from an underlying diffuse component of stellar/dust emission unassociated with current star-formation. The question is: is it necessary to remove this diffuse component? Astronomers have investigated this question before and do not yet know the answer. Liu et al. subtract the diffuse component from their data (Hα, FUV and mid-infrared) statistically using the astronomical software HIIphot, and study the S–K law for both subtracted and unsubtracted data.

Tracing SFR: To understand how the above-described data can be used to trace SFR, let us look at a typical star-forming region. It contains young, hot, massive stars emitting mostly in the FUV, at photon energies above the ionisation energy of the neutral hydrogen found in the interstellar medium. FUV radiation from the hot stars ionizes the gas around them, producing H II regions, which are traced by Hα emission. The stellar environment also contains a huge amount of dust, which absorbs nearly half of the UV and optical radiation and re-emits it at longer infrared wavelengths. Therefore, the Hα or FUV luminosity is combined with the infrared luminosity to account for the absorption by dust and then converted to an SFR.


Figure 1: Results for M51a studied at 750 pc resolution. For the left panel in each pair, the diffuse backgrounds in the Hα, 24μm, and FUV images are not removed (denoted by “BG+”), but are removed in the right panel (“BG−”). Solid black dots indicate data points with sufficient signal-to-noise (S/N) ratio used for analysis, while light gray dots are the points with low S/N. The fitted slopes are indicated at the bottom right of each panel. (a) The molecular-only S-K law with SFR derived from Hα+24μm; (c) the correlation between SFRs derived from Hα+24μm and FUV+24μm, respectively; (d) the relation of Hα+24μm SFR vs. HI surface density; (e) the total hydrogen S-K law with SFR derived from Hα+24μm. (Caption adapted from Figure 4 of Liu et al. 2011)



Figure 1 details the answer to the first question of this study (for NGC 5194): whether the subtraction of the diffuse background is necessary for studying the S–K law. The two SFR indicators (Hα and FUV) can be used interchangeably only when the diffuse background unrelated to current star-formation is subtracted. This result is consistent with our knowledge that FUV traces older star-formation (10–100 Myr) than Hα, which traces recent star-formation (< 5 Myr). Subtraction of the diffuse background leads to a super-linear slope (i.e. γ > 1) of the S–K law, for molecular gas as well as for total gas. However, no apparent correlation is found between the SFR and the atomic gas. These results hold true for NGC 3521 as well. Liu et al. hence conclude that the diffuse background matters considerably for studies of star-formation: the S–K law comes out super-linear if it is subtracted and linear if it is not.

Figure 2 shows the influence of spatial scale on the slope of the S–K law (γ_H) for the background-subtracted data. Because of its higher inclination angle, NGC 3521 has a much larger projected physical scale than M51a; the smallest aperture studied in NGC 3521 is therefore set to 700 pc, corresponding to a deprojected physical scale of ∼2 kpc. Figure 2 (right) shows negligible variation of γ with spatial scale in NGC 3521, with large error bars attributed to measurement uncertainties caused by the high inclination angle. For M51a (Figure 2, left), γ decreases with increasing spatial scale. Both galaxies, however, show a super-linear slope (γ > 1) of the S–K law. Liu et al. emphasize that their result at the smallest spatial scales in M51a is consistent with Galactic studies. This hints at an intrinsically super-linear S-K law for spiral galaxies.


Figure 2: Effect of spatial scale δ (in kiloparsec) on power-law index γH in M51a (left) and NGC 3521 (right).

by Guest at May 23, 2016 02:08 PM

Peter Coles - In the Dark

The Dream of Gerontius

Just a quick lunchtime post to mention that I took yesterday (Sunday) evening off to attend a concert at the Brighton Dome which was part of the annual Brighton Festival. The performance consisted of just one piece: The Dream of Gerontius by Sir Edward Elgar, performed by the City of Birmingham Symphony Orchestra (conducted by Edward Gardner) together with the Brighton Festival Chorus.

I happen to know a couple of people who sing with the Brighton Festival Chorus. Both were a bit nervous ahead of last night’s performance because it’s a challenging work and although they’ve been rehearsing the choral passages themselves, they only had a short time to practice together with the orchestra. Reading about the performance history of this work, their fears might have been justified: the first performance, in Birmingham in 1900, was a shambles, largely due to inadequate rehearsal time, and it took some time for it to become established in the repertoire. As it turned out, however, they had nothing to worry about. I thought the Chorus was magnificent, as was the Orchestra and indeed the three soloists: Alice Coote (Mezzo), Robert Murray (Tenor) and Matthew Rose (Bass). I particularly liked Matthew Rose’s performance. He cut an imposing figure on the platform, towering over the other musicians, and his sonorous bass tones projected wonderfully.

Although I began by saying that the concert was “just one piece”, The Dream of Gerontius is a very substantial work, lasting over 90 minutes (excluding the interval). It requires a large choir (well over a hundred voices last night) as well as large orchestral forces, including two harps and a big brass section. I’m sure it’s a handful to perform, but last night’s concert was well-controlled and at times simply beautiful.

It’s basically a setting of a long poem, describing the journey of a dying man towards death. It takes a very Roman-Catholic view of Paradise, Purgatory, and the Last Judgement and this may have contributed to its initial lack of popularity in (Protestant) England; it found greater favour in Germany in the years after its first performance.

I’m actually not the biggest fan of Elgar, generally speaking. He’s often very rhythmically unimaginative and predictable, as in the opening passage of Part 1 in last night’s performance which plodded along for a quite a while before getting going. However, there are some thrilling passages too. This work does sound surprisingly modern at times and at others is very reminiscent of Richard Strauss, at least to my ears.

Anyway, an excellent performance of a profound and challenging work. I’m glad to say that it attracted a full house too, though the majority of the audience were (like me) not in the first flush of youth.

P.S. I texted a friend that I was at The Dream of Gerontius, but autocorrect turned it into The Dream of Geronimo. As far as I know there’s no choral work with that title, but perhaps there should be!

by telescoper at May 23, 2016 01:03 PM

CERN Bulletin

CERN Bulletin Issue No. 20-21/2016
Link to e-Bulletin Issue No. 20-21/2016. Link to all articles in this issue.

May 23, 2016 11:39 AM

Peter Coles - In the Dark

R.I.P. John David Jackson (1925-2016)

Yet again I have to pass on some very sad news. Physicist John David Jackson, best known for his classic textbook Classical Electrodynamics, has passed away at the age of 91. I’m sure I speak for many physicists when I say that Classical Electrodynamics was not only an essential part of my physics education but also a constant companion throughout the rest of my career. I have consulted my copy regularly over the last thirty years. I was often frustrated that when I found the topic I was looking for in the index, it referred to a problem (usually a difficult one) rather than a solution, but there’s no question it made me a better physicist.


Rest in peace, John David Jackson (1925-2016).

by telescoper at May 23, 2016 08:40 AM

May 22, 2016

Christian P. Robert - xi'an's og

occupancy rules

While the last riddle on The Riddler was rather anticlimactic, namely to find the mean of the number Y of empty bins in a uniform multinomial with n bins and m draws, with solution

\mathbb{E}[Y]=n\left(1-\frac{1}{n}\right)^m

[which still has a link with e in that the fraction of empty bins converges to e⁻¹ when n=m], this led me to some more involved investigation on the distribution of Y. While it can be shown directly that the probability that k bins are non-empty is

{n \choose k}\sum_{i=1}^k (-1)^{k-i}{k \choose i}(i/n)^m

with an R representation by

prob <- rep(0, n)
for (k in 1:n)
  prob[k] <- choose(n, k) * sum((-1)^(k - 1:k) * choose(k, 1:k) * ((1:k)/n)^m)

I wanted to take advantage of the moments of Y, since it writes as a sum of n indicators, counting the number of empty cells. However, the higher moments of Y are not as straightforward as its expectation and I struggled with the representation until I came upon this formula

\mathbb{E}[Y^k]=\sum_{i=1}^k {n \choose i}\, i!\, S(k,i) \left( 1-\frac{i}{n}\right)^m

where S(k,i) denotes the Stirling number of the second kind… Or i!S(k,i) is the number of surjections from a set of size k onto a set of size i. This leads to the distribution of Y by inverting the moment equations, as in the following R code:

  library(copula)                      # for Stirling2(k, i), Stirling numbers of the second kind
  mom <- rep(0, n - 1)                 # mom[k] = E[Y^k]
  for (k in 1:(n-1)){
   for (i in 1:k)
     mom[k] <- mom[k] + choose(n, i) * factorial(i) * Stirling2(k, i) * (1 - i/n)^m}

that I still checked by raw simulations from the multinomial
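The exact distribution is easy to cross-check numerically. Here is a short Python transcription (mine, not from the post) of the inclusion-exclusion formula above, verified against the known mean n(1 − 1/n)^m:

```python
from math import comb

def p_nonempty(k, n, m):
    """P(exactly k of n bins are non-empty after m uniform draws), by inclusion-exclusion."""
    return comb(n, k) * sum((-1) ** (k - i) * comb(k, i) * (i / n) ** m
                            for i in range(1, k + 1))

def p_empty(y, n, m):
    """P(Y = y) for Y = number of empty bins: y empty bins means n - y non-empty ones."""
    return p_nonempty(n - y, n, m)

n = m = 5
dist = [p_empty(y, n, m) for y in range(n)]       # Y ranges over 0, ..., n-1
mean = sum(y * p for y, p in enumerate(dist))     # should equal n * (1 - 1/n)^m
```

The probabilities sum to one and the mean matches the expectation derived above, so the direct formula and the moment route can be checked against one another (and against raw multinomial simulations).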


Filed under: Kids, R, Statistics Tagged: moment derivation, moments, multinomial distribution, occupancy, R, Stack Exchange, Stirling number, surjection

by xi'an at May 22, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

Shuttle tank caps 41-day journey with trip through streets of Los Angeles
After a 41-day journey marked by stormy seas, a trip through the Panama Canal and a rescue off the Baja California coast, the last unflown space shuttle external fuel tank has arrived at its new home here in Los Angeles.

May 22, 2016 06:51 PM

Clifford V. Johnson - Asymptotia

A New Era

Many years ago, even before the ground was broken on phase one of the Expo line and arguments were continuing about whether it would ever happen, I started saying that I was looking forward to the days when I could put my pen down, step out of my office, get on the train a minute away, and take it all the way to the beach and finish my computation there. Well, Friday, the first such day arrived. Phase two of the Expo line is now complete and has opened to the public, with newly finished stations from Culver City through Santa Monica. It joins the already running (since April 2012) Expo phase one, which I've been using every day to get to campus after changing from the Red line (connecting downtown).

On Friday I happened to accidentally catch the first Expo Line train heading all the way out to Santa Monica! (I mean the first one for the plebs - there had been a celebratory one earlier with the mayor and so forth, I was told). I was not planning to do so and was just doing my routine trip to campus, thinking I'd try the new leg out later (as I did when phase one opened - see here). But there was a cheer when the train pulled up at Metro/7th downtown and the voice over the overhead speakers [...] Click to continue reading this post

The post A New Era appeared first on Asymptotia.

by Clifford at May 22, 2016 04:50 PM

Peter Coles - In the Dark

I did my research. Yes, I think academic publishers are greedy. (With notes on publishers’ rhetoric and creationism)

As promised…

Sauropod Vertebra Picture of the Week

Another day, another puff-piece from academic publishers about how awesome they are. This time, the Publisher’s Association somehow suckered the Guardian into giving them a credible-looking platform for their party political broadcast, Think academic publishers are greedy? Do your research. I have to give the PA credit for coming up with about the most patronising title possible.

Yes, I did my research. Guess what? Academic publishers are greedy.


(The article doesn’t say it’s by the Publishers Association, by the way. It’s credited to Stephen Lotinga, who LinkedIn tells us is Chief Executive of The Publishers Association, but the article doesn’t declare that.)

Oh boy do I get tired of constantly rebutting the same old bs. from publishers. And it really is the same bs. They’re not even taking the trouble to invent new bs., just churning out the same nonsense each time — for example, equating their massive profits with investment in…

View original post 665 more words

by telescoper at May 22, 2016 03:41 PM

Peter Coles - In the Dark

Yes, academic publishers are greedy (and dishonest)

I saw a blatant piece of propaganda in the Guardian the other day, written by the Chief Executive of the Publishers Association. The piece argues that the academic publishing industry benefits the academic community through “innovation and development” and by doing so “adds value” to the raw material supplied by researchers. This is nonsense. The academic publishing industry does not add any value to anything. It just adds cost. And by so doing generates huge profits for itself.

I was annoyed by several other things relating to this item:

  1. It’s written by a vested interest but is presented without a balancing opinion, which makes one wonder why the Guardian is allowing itself to be used as a mouthpiece by these profiteers;
  2. It has been tweeted and retweeted by the Publishers Association several times, as if it were a piece of reporting instead of what it actually is, essentially a commercial;
  3. Some of the claims made in the piece are so risible that they’re insulting.

However, the most annoying thing for me is that I’ve been too busy marking examinations to let off steam by writing a riposte.

I needn’t have worried, however, because scrolling down to the comments on the article you can easily find out what academics really think. Moreover, there’s an excellent rebuttal by Mike Taylor here, which I shall reblog.




by telescoper at May 22, 2016 03:40 PM

May 21, 2016

John Baez - Azimuth

The Busy Beaver Game

This month, a bunch of ‘logic hackers’ have been seeking to determine the precise boundary between the knowable and the unknowable. The challenge has been around for a long time. But only now have people taken it up with the kind of world-wide teamwork that the internet enables.

A Turing machine is a simple model of a computer. Imagine a machine that has some finite number of states, say N states. It’s attached to a tape, an infinitely long tape with lots of squares, with either a 0 or 1 written on each square. At each step the machine reads the number where it is. Then, based on its state and what it reads, it either halts, or it writes a number, changes to a new state, and moves either left or right.

The tape starts out with only 0’s on it. The machine starts in a particular ‘start’ state. It halts if it winds up in a special ‘halt’ state.

The Busy Beaver Game is to find the Turing machine with N states that runs as long as possible and then halts.

The number BB(N) is the number of steps that the winning machine takes before it halts.

In 1961, Tibor Radó introduced the Busy Beaver Game and proved that the sequence BB(N) is uncomputable. It grows faster than any computable function!

A few values of BB(N) can be computed, but there’s no way to figure out all of them.

As we increase N, the number of Turing machines we need to check increases faster than exponentially: it's

\displaystyle{ (4(N+1))^{2N} }

since each of the 2N state-symbol pairs can write a 0 or a 1, move left or right, and transition to any of the N states or the halt state.
Of course, many could be ruled out as potential winners by simple arguments. But the real problem is this: it becomes ever more complicated to determine which Turing machines with N states never halt, and which merely take a huge time to halt.

Indeed, no matter what axiom system you use for math, as long as it has finitely many axioms and is consistent, you can never use it to correctly determine BB(N) for more than some finite number of cases.

So what do people know about BB(N)?

For starters, BB(0) = 0. At this point I should admit that people don’t count the halt state as one of our N states. This is just a convention. So, when we consider BB(0), we’re considering machines that only have a halt state. They instantly halt.

Next, BB(1) = 1.

Next, BB(2) = 6.

Next, BB(3) = 21. This was proved in 1965 by Tibor Radó and Shen Lin.

Next, BB(4) = 107. This was proved in 1983 by Allan Brady.
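These small values can be verified directly. Below is a minimal Turing-machine simulator (my own sketch, not from the post), together with the standard 2-state champion, which halts after exactly 6 steps leaving four 1s on the tape:

```python
def run_turing(delta, max_steps=10**6):
    """Simulate a Turing machine started in state 'A' on an all-zero tape.
    delta maps (state, read_symbol) -> (write, move, next_state); state 'H' halts.
    Returns (steps_taken, number_of_1s_on_tape), or None if max_steps is exceeded."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H":
        if steps >= max_steps:
            return None
        write, move, state = delta[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return steps, sum(tape.values())

# the standard 2-state busy beaver champion
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}
```

Here `run_turing(bb2)` returns (6, 4): the machine realizes BB(2) = 6. The max_steps cutoff is of course the whole difficulty in general: no finite cutoff can distinguish all the non-halting machines from the slow halting ones.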

Next, BB(5). Nobody knows what BB(5) equals!

The current 5-state busy beaver champion was discovered by Heiner Marxen and Jürgen Buntrock in 1989. It takes 47,176,870 steps before it halts. So, we know

BB(5) ≥ 47,176,870.

People have looked at all the other 5-state Turing machines to see if any does better. But there are 43 machines that do very complicated things that nobody understands. It’s believed they never halt, but nobody has been able to prove this yet.

We may have hit the wall of ignorance here… but we don’t know.

That’s the spooky thing: the precise boundary between the knowable and the unknowable is unknown. It may even be unknowable… but I’m not sure we know that.

Next, BB(6). In 1996, Marxen and Buntrock showed it’s at least 8,690,333,381,690,951. In June 2010, Pavel Kropitz proved that

\displaystyle{ \mathrm{BB}(6) \ge 7.412 \cdot 10^{36,534} }

You may wonder how he proved this. Simple! He found a 6-state machine that runs for roughly

\displaystyle{ 7.412 \cdot 10^{36,534} }

steps and then halts!

Of course, I’m just kidding when I say this was simple. The machine is easy enough to describe, but proving it takes exactly this long to run takes real work! You can read about such proofs here:

• Pascal Michel, The Busy Beaver Competition: a historical survey.

I don’t understand them very well. All I can say at this point is that many of the record-holding machines known so far are similar to the famous Collatz conjecture. The idea there is that you can start with any positive integer and keep doing two things:

• if it’s even, divide it by 2;

• if it’s odd, triple it and add 1.

The conjecture is that this process will always eventually reach the number 1. Here’s a graph of how many steps it takes, as a function of the number you start with:

Nice pattern! But this image shows how it works for numbers up to 10 million, and you’ll see it doesn’t usually take very long for them to reach 1. Usually less than 600 steps is enough!
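The Collatz rule itself is only a couple of lines of code; here is a minimal step-counter (my own sketch):

```python
def collatz_steps(n):
    """Count halving / 3n+1 steps needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, 6 reaches 1 in 8 steps, while 27 is a famously slow starter, needing 111 steps.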

So, to get a Turing machine that takes a long time to halt, you have to take this kind of behavior and make it much more long and drawn-out. Conversely, to analyze one of the potential winners of the Busy Beaver Game, people must take that long and drawn-out behavior and figure out a way to predict much more quickly when it will halt.

Next, BB(7). In 2014, someone who goes by the name Wythagoras showed that

\displaystyle{ \textrm{BB}(7) > 10^{10^{10^{10^{10^7}}}} }

It’s fun to prove lower bounds on BB(N). For example, in 1964 Milton Green constructed a sequence of Turing machines that implies

\textrm{BB}(2N) \ge 3 \uparrow^{N-2} 3

Here I’m using Knuth’s up-arrow notation, which is a recursively defined generalization of exponentiation, so for example

\textrm{BB}(10) \ge 3 \uparrow^{3} 3 = 3 \uparrow^2 3^{3^3} = 3^{3^{3^{3^{\cdot^{\cdot^\cdot}}}}}

where there are 3^{3^3} threes in that tower.
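The up-arrow recursion can be transcribed directly, though of course only tiny cases are computable, since the values explode. A sketch of my own:

```python
def up(a, n, b):
    """Knuth's a ↑^n b: one arrow is plain exponentiation,
    and each extra arrow iterates the operation one level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1                 # standard base case: a ↑^n 0 = 1
    return up(a, n - 1, up(a, n, b - 1))
```

For instance, up(3, 2, 3) = 3 ↑↑ 3 = 3^27 = 7,625,597,484,987, while anything like 3 ↑↑ 3^{3^3} is hopelessly beyond evaluation, which is rather the point.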

But it’s also fun to seek the smallest N for which we can prove BB(N) is unknowable! And that’s what people are making lots of progress on right now.

Sometime in April 2016, Adam Yedidia and Scott Aaronson showed that BB(7910) cannot be determined using the widely accepted axioms for math called ZFC: that is, Zermelo–Fraenkel set theory together with the axiom of choice. It’s a great story, and you can read it here:

• Scott Aaronson, The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me, Shtetl-Optimized, 3 May 2016.

• Adam Yedidia and Scott Aaronson, A relatively small Turing machine whose behavior is independent of set theory, 13 May 2016.

Briefly, Yedidia created a new programming language, called Laconic, which lets you write programs that compile down to small Turing machines. They took an arithmetic statement created by Harvey Friedman that’s equivalent to the consistency of the usual axioms of ZFC together with a large cardinal axiom called the ‘stationary Ramsey property’, or SRP. And they created a Turing machine with 7910 states that seeks a proof of this arithmetic statement using the axioms of ZFC.

Since ZFC can’t prove its own consistency, much less its consistency when supplemented with SRP, their machine will only halt if ZFC+SRP is inconsistent.

Since most set theorists believe ZFC+SRP is consistent, this machine probably doesn’t halt. But we can’t prove this using ZFC.

In short: if the usual axioms of set theory are consistent, we can never use them to determine the value of BB(7910).

The basic idea is nothing new: what’s new is the explicit and rather low value of the number 7910. Poetically speaking, we know the unknowable starts here… if not sooner.

However, this discovery set off a wave of improvements! On the Metamath newsgroup, Mario Carneiro and others started ‘logic hacking’, looking for smaller and smaller Turing machines that would only halt if ZF—that is, Zermelo–Fraenkel set theory, without the axiom of choice—is inconsistent.

By just May 15th, Stefan O’Rear seems to have brought the number down to 1919. He found a Turing machine with just 1919 states that searches for an inconsistency in the ZF axioms. Interestingly, this turned out to work better than using Harvey Friedman’s clever trick.

Thus, if O’Rear’s work is correct, we can only determine BB(1919) if we can determine whether ZF set theory is consistent. However, we cannot do this using ZF set theory—unless we find an inconsistency in ZF set theory.

For details, see:

• Stefan O’Rear, A Turing machine Metamath verifier, 15 May 2016.

I haven’t checked his work, but it’s available on GitHub.

What’s the point of all this? At present, it’s mainly just a game. However, it should have some interesting implications. It should, for example, help us better locate the ‘complexity barrier’.

I explained that idea here:

• John Baez, The complexity barrier, Azimuth, 28 October 2011.

Briefly, while there’s no limit on how much information a string of bits—or any finite structure—can have, there’s a limit on how much information we can prove it has!

This amount of information is pretty low, perhaps a few kilobytes. And I believe the new work on logic hacking can be used to estimate it more accurately!

by John Baez at May 21, 2016 05:24 PM

Peter Coles - In the Dark

Why you SHOULD respond to student requests

I agree with this guy. Even though I doubt the educational value of teachers asking kids to send these things out, I always try to reply.

Write Science

by Shane L. Larson

To my colleagues in professional science:

There has been a tremendous and acerbic backlash over the last week against a current popular practice of K-12 students emailing professional scientists with a list of questions they would like the scientists to comment on. I too have received these emails, and I have to very clearly state (in case you haven’t already been in one of these debates with me) that I have an unpopular view on this issue: I vehemently reject the view that we cannot respond to these emails. It is part of our professional obligation to society to respond to these notes.

In the spirit of intellectual debate, which is the purported hallmark of our discipline, let me recount some of the many aspects of the arguments that have been swirling around.

The Scenario. Emails will sail into our inboxes from (usually) middle-school science students…

View original post 1,742 more words

by telescoper at May 21, 2016 04:20 PM

astrobites - astro-ph reader's digest

Detecting gravitational wave memory

Title: Detecting gravitational-wave memory with LIGO: implications of GW150914

Authors: Paul Lasky, Eric Thrane, Yuri Levin, Jonathan Blackman, and Yanbei Chen

First Author’s Institution: Monash Centre for Astrophysics, School of Physics and Astronomy, Monash University, Australia

If memory serves…

Unless you’ve been living under a rock (or in a black hole) for the past few months, you should recall that the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) recently announced the first direct detection of gravitational waves (GWs) by sensing ripples in the fabric of spacetime that emanated from the collision of two black holes, a signal dubbed GW150914. Following the announcement on February 11th, a profusion of papers have been published examining where such events originate, how the collision of two black holes can emit light, and so on. But today’s post covers a paper that you’ll be sure to never forget: the prospect of using aLIGO detections to uncover a strange effect called gravitational wave memory.

GWs have the effect of stretching and squeezing space, therefore altering the distance between objects. The amount of stretching and squeezing is quantified by strain, which is the change in distance between two points divided by the total (unperturbed) distance between two points. Strain scales with the strength of a GW, and to achieve the largest change in distance (which is essentially what LIGO measures), you want a large strain and a large distance between your two reference points. The main effect from a compact binary merger event (such as GW150914) is an oscillatory stretching and shrinking, with the magnitude of the strain increasing as the binary inspirals. The strain then goes back to zero after the GW passes through Earth (see the general shape of the figure below). However, if you look closely at the figure below, you may have noticed that the waveform shown in blue does not return exactly to zero. That is because this simulated waveform includes gravitational wave memory, a higher-order effect from general relativity that causes a permanent deformation between reference points.

Screen Shot 2016-05-20 at 11.33.51 AM

Simulated strain over time for a binary black hole inspiral with the parameters of GW150914, including the effect from gravitational wave memory. Though this includes the last 2 seconds of the binary’s evolution, GW150914 was only detectable by aLIGO for 0.2 seconds, or about 10 cycles. Adapted from figure 2 in today’s paper.

Similar to how light can have distinct polarizations (which is why polarizing sunglasses can reduce glare), GWs come in two different polarization flavors, called plus and cross polarizations. To get technical, memory reveals itself in higher-order modes when one expands the polarizations of the strain in terms of spin-weighted spherical harmonics (check out this article if you want to dig into the nitty-gritty). However, the important things to know are:

  • Memory induces a monotonically increasing (or decreasing) GW strain throughout the compact binary merger
  • The memory component of a merger is about an order of magnitude smaller than the total strain for a merger
  • Similar to the primary component of the GW strain, the memory component increases as the masses in the binary increase
  • The memory component of the strain depends on the inclination of the binary (how the orbital plane of the binary is tilted relative to Earth), though it has a different dependence on inclination than the primary GW strain does
  • To measure the sign of the memory (i.e. whether it caused a permanent stretch or shrink), one needs to accurately measure the polarization angle of a GW

The waiting game

Lasky et al. aim to predict whether aLIGO will be able to detect the effect of memory in GWs from compact binary mergers. For a single event, this effect would prove nearly impossible to detect. Taking an event like GW150914 as an example, at aLIGO’s design sensitivity the memory component would optimally provide a signal-to-noise ratio (S/N) of 0.42, whereas an S/N of about 3 would be the absolute minimum needed to possibly claim a detection. Could we instead wait a very long time, building up this linear change in strain with each event until it is detectable? Not quite: each event is equally likely to cause a permanent stretch or squeeze, so on average the effect cancels out.

The strategy that Lasky et al. take is to effectively “sum up” the low S/N contribution of many memory signals. With this approach it is necessary to know the sign of the memory for each individual detection (whether it caused a permanent stretch or shrink) so the effect of memory for each event will add coherently. As mentioned in the last bullet point above, this means that the building up of memory S/N requires GW events that have an accurately measured polarization angle.


The buildup of GW memory S/N with the number of GW150914-like events. The blue line and shaded region represent the mean and error if all events are included, and the red line and shaded region represent the mean and error if only events with a measurable polarization angle (and therefore sign of the memory term) are included. The dashed and solid lines represent S/N of 3 and 5, respectively. Adapted from figure 3 in today’s paper.


In the limit that all mergers have the same S/N (i.e. each event is exactly like GW150914), the authors find that the total S/N of the memory contribution scales as the square root of the number of events times the number of interferometers (currently, aLIGO consists of two interferometers, in Hanford, WA and Livingston, LA). The figure above shows the build-up of memory S/N with the number of detected events. Lasky et al. find that the cumulative memory of events will reach a detectable S/N of ~3 after the detection of ~35 GW150914-like events. Taking the aLIGO discovery paper at face value and assuming a GW150914-like event occurs every 16 days, this would indicate that GW memory would be detectable after only 1.5 years of aLIGO operating at design sensitivity!
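
The scaling quoted above is easy to play with numerically. The sketch below assumes the idealized limit where every event is exactly GW150914-like and contributes the optimal memory S/N of 0.42, so it lands a bit below the ~35-event figure from the paper (which accounts for realistic orientations):

```python
import math

# Total memory S/N = (single-event S/N) * sqrt(N_events * N_detectors),
# in the limit that all events are identical GW150914 copies.

def cumulative_memory_snr(snr_single, n_events, n_detectors=2):
    return snr_single * math.sqrt(n_events * n_detectors)

snr_single = 0.42  # optimal memory S/N of one GW150914-like event at design sensitivity

# How many events until the cumulative S/N crosses the threshold of 3?
n = 1
while cumulative_memory_snr(snr_single, n) < 3.0:
    n += 1
print(n)  # 26 under these idealized assumptions, same ballpark as the ~35 quoted
```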

I know what you’re thinking. Summer is around the corner, and many are still trying to shed that extra holiday weight and get their bodies in gear for the beach. Why don’t we just use gravitational wave memory to appear thinner without concerning ourselves with P90X and kale smoothies? Bad news: even if we orient ourselves in the correct way to align our stomach with the permanent shrinking of space from the memory of GW150914, our body would only shrink by about 10^-23 meters, about 100 million times smaller than the diameter of a proton. Plus, even if you had the best ruler in the universe, your ruler would shrink too. So now if you hear about rapid weight loss using gravitational wave memory on late-night television, you’ll know better.

by Michael Zevin at May 21, 2016 07:00 AM

The n-Category Cafe

Castles in the Air

The most recent issue of the Notices includes a review by Slava Gerovitch of a book by Amir Alexander called Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. As the reviewer presents it, one of the main points of the book is that science was advanced the most by the people who studied and worked with infinitesimals despite their apparent formal inconsistency. The following quote is from the end of the review:

If… maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems. Innovation often grows out of outlandish ideas, but to make them acceptable one needs a different cultural image of mathematics — not a perfectly polished pyramid of knowledge, but a freely growing tree with tangled branches.

The reviewer draws parallels to more recent situations such as quantum field theory and string theory, where the formal mathematical justification may be lacking but the physical theory is meaningful and fruitful, and has made correct predictions, even for pure mathematics. However, I couldn’t help thinking of recent examples entirely within pure mathematics as well, and particularly in some fields of interest around here.

Here are a few; feel free to suggest others in the comments (or to take issue with mine).

  • Informal arguments in higher category theory. For example, Lurie’s original paper On infinity topoi lacked a rigorous formal foundation, but contained many important insights. Because quasicategories had already been invented, he was able to make the ideas rigorous in reasonably short order; but I think it’s fair to say the price is a minefield of technical lemmas. Nowadays one finds people wanting to say “we work with (∞,1)-categories model-independently” to avoid all the technicalities, but it’s unclear whether this quite makes sense. (Although I have some hope now that a formal language closer to the informal one may come out of the Riehl-Verity theory of ∞-cosmoi.)

  • String diagrams for monoidal categories. Joyal and Street’s original paper “The geometry of tensor calculus” carefully defined string diagrams as topological graphs and proved that any labeled string diagram could be interpreted in a monoidal category. But since then, string diagrams have proven so useful that many people have invented variants of them that apply to many different kinds of monoidal categories, and in many (perhaps most) cases they proceed to use them without a similar justifying theorem. Kate and I proved the justifying theorem for our string diagrams for bicategories with shadows, but we didn’t even try it with our string diagrams for monoidal fibrations.

  • Combining higher category theory with string diagrams, we have the recent “graphical proof assistant” Globular, which formally works with a certain kind of semistrict n-category for n ≤ 4. It’s known that semistrict 3-categories (Gray-categories) suffice to model all weak 3-categories, but no such theorem is yet known for 4-categories. So officially, doing a proof about 4-categories in Globular tells you nothing more than that it’s true about semistrict 4-categories, and I suspect that few naturally-occurring 4-categories are naturally semistrict. However, such an argument clearly has meaning and applicability much more generally.

  • And, of course, there is homotopy type theory. Plenty of it is completely rigorous, of course (and even formally verified in a computer), but I’m thinking particularly of its conjectural higher-categorical semantics. Pretty much everyone agrees that HoTT should be an internal language for (∞,1)-topoi, but with present technology this depends on an initiality theorem for models of type theories in general that is universally believed to be true but is very fiddly to prove correctly and has only been written down carefully in one special case. Moreover, even granting the initiality theorem there are various slight mismatches between the formal theories in current use and what we can construct in higher toposes to model them, e.g. the universes are not strict enough and the HITs are too big. Nevertheless, this relationship has been very fruitful to both sides of the subject already (the type theory and the category theory).

The title of this post is a reference to a classic remark by Thoreau:

“If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them.”

by shulman at May 21, 2016 12:53 AM

May 20, 2016

Christian P. Robert - xi'an's og

the snow geese [book review]

Just as for the previous book, I found this travel book in a nice bookstore, Rue Mouffetard, after my talk at Agro, and bought it [in a French translation] in anticipation of my upcoming trip to Spain. And indeed read it while in Spain, finishing it a few minutes before touching ground in Paris.

“The hunters wolfed down chicken fried steaks or wolfed down cuds of Red Man, Beech-Nut, Levi Garrett, or Jackson’s Apple Jack”

The Snow Geese was written in 2002 by William Fiennes, a young Englishman recovering from a serious disease and embarking on a wild quest to overcome post-sickness depression. While the idea behind the trip is rather alluring, namely to follow Arctic geese from their wintering grounds in Texas to their summer nesting place on Baffin Island, the book itself is sort of a disaster. The prose of the author is very heavy, or even very very heavy, with an accumulation of descriptions that do not contribute to the story, a highly bizarre habit of mentioning brands by groups of three, and a taste for heavy-duty analogies, as in “we were travelling across the middle of a page, with whiteness and black markings all around us, and geese lifting off the snow like letters becoming unstuck”. The reflections about the recovery of the author from a bout of depression and the rise of homesickness and nostalgia are not in the least deep or challenging, while the trip of the geese does not get beyond the descriptive. Worse, the geese remain a mystery, a blur, and a collective, rather than being brought closer to the reader. If anything is worth mentioning there, it is instead the encounters of the author with rather unique characters, at every step of his road- and plane-trips. To the point of sounding too unique to be true… His hunting trip with a couple of Inuit hunters north of Iqaluit on Baffin Island is both a high and a low of the book, in that sharing a few days with them in the wild is exciting in a primeval sense, while witnessing them shoot down the very geese the author followed for 5000 kilometres sort of negates the entire purpose of the trip. It then makes perfect sense to close the story with a feeling of urgency, for there is nothing worth adding.

Filed under: Books, Kids, pictures, Travel Tagged: Antarctica, Baffin Island, Inuits, Rue Mouffetard, snow geese, William Fiennes

by xi'an at May 20, 2016 10:16 PM

Christian P. Robert - xi'an's og

Clifford V. Johnson - Asymptotia

Gut Feeling…

gut_feeling_sampleStill slowly getting back up to speed (literally) on page production. I've made some major tweaks in my desktop workflow (I mostly move back and forth between Photoshop and Illustrator at this stage), and finally have started keeping track of my colours in a more efficient way (using global process colours, etc), which will be useful if I have to do big colour changes later on. My workflow improvement also now includes [...] Click to continue reading this post

The post Gut Feeling… appeared first on Asymptotia.

by Clifford at May 20, 2016 04:38 PM

Tommaso Dorigo - Scientificblogging

Prescaled Jet Triggers: The Rationale Of Randomly Picking Events
In a chapter of the book I have written, "Anomaly! - Collider physics and the quest for new phenomena at Fermilab" (available from September this year), I made an effort to explain a rather counter-intuitive mechanism at the basis of data collection in hadron colliders: the trigger prescale. I would like to have a dry run of the text here, to know if it is really too hard to understand - I still have time to tweak it if needed. So let me know if you understand the description below!

The text below is maybe hard to read as it is taken out of context; however, let me at least spend one

read more

by Tommaso Dorigo at May 20, 2016 02:57 PM

Emily Lakdawalla - The Planetary Society Blog

On LightSail 1 launch anniversary, team prepares successor craft for day-in-the-life test
One year ago today, LightSail 1 rode an Atlas V rocket into space. Now, the program stands on the brink of another major milestone, as engineers prepare for a full systems test of LightSail 2, a successor CubeSat that will attempt the first controlled solar sail flight in low-Earth orbit.

May 20, 2016 11:32 AM

May 19, 2016

Emily Lakdawalla - The Planetary Society Blog

Akatsuki begins a productive science mission at Venus
Japan's Akatsuki Venus orbiter is well into its science mission, and has already produced surprising science results. The mission, originally planned to last two years, could last as many as five, monitoring Venus' atmosphere over the long term.

May 19, 2016 11:02 PM

Sean Carroll - Preposterous Universe

Give the People What They Want

And what they want, apparently, is 470-page treatises on the scientific and philosophical underpinnings of naturalism. To appear soon in the Newspaper of Record:


Happy also to see great science books like Lab Girl and Seven Brief Lessons on Physics make the NYT best-seller list. See? Science isn’t so scary at all.

by Sean Carroll at May 19, 2016 05:56 PM

Jester - Resonaances

A new boson at 750 GeV?
ATLAS and CMS presented today a summary of the first LHC results obtained from proton collisions with 13 TeV center-of-mass energy. The most exciting news was of course the 3.6 sigma bump at 750 GeV in the ATLAS diphoton spectrum, roughly coinciding with a 2.6 sigma excess in CMS. When there's an experimental hint of new physics signal there is always this set of questions we must ask:

0. WTF ?
0. Do we understand the background?
1. What is the statistical significance of  the signal?
2. Is the signal consistent with other data sets?
3. Is there a theoretical framework to describe it?
4. Does it fit in a bigger scheme of new physics?

Let us go through these questions one by one.

The background.  There are several boring ways to make photon pairs at the LHC, but they are all expected to produce a spectrum smoothly decreasing with the invariant mass of the pair. This expectation was borne out in run-1, where the 125 GeV Higgs resonance could be clearly seen on top of a nicely smooth background, with no breaks or big wiggles. So it is unlikely that any Standard Model process (other than a statistical fluctuation) could produce a bump such as the one seen by ATLAS.

The stats.  The local significance is 3.6 sigma in ATLAS and 2.6 sigma in CMS. Naively combining the two, we get a more than 4 sigma excess. It is a very large effect, but we have already seen fluctuations this large at the LHC vanish into thin air (remember the 145 GeV Higgs?). Next year's LHC data will be crucial to confirm or exclude the signal. In the meantime, we have a perfect right to be excited.

The consistency. For this discussion, the most important piece of information is the diphoton data collected in run-1 at 8 TeV center-of-mass energy. Both ATLAS and CMS have a small 1 sigma excess around 750 GeV in the run-1 data, but there is no clear bump there. If a new 750 GeV particle is produced in gluon-gluon collisions, then the gain in the signal cross section at 13 TeV compared to 8 TeV is roughly a factor of 5. On the other hand, ATLAS collected 6 times more data at 8 TeV (20 fb-1) than at 13 TeV (3.2 fb-1). This means that the number of signal events produced in ATLAS at 13 TeV should be about 75% of those at 8 TeV, and the ratio is even worse for CMS (who used only 2.6 fb-1). However, the background may grow less fast than the signal, so the power of the 13 TeV and 8 TeV data is comparable. All in all, there is some tension between the run-1 and run-2 data sets; however, a mild downward fluctuation of the signal at 8 TeV and/or a mild upward fluctuation at 13 TeV is enough to explain it. One can also try to explain the lack of signal in run-1 by the fact that the 750 GeV particle is a decay product of a heavier resonance (in which case the cross-section gain can be much larger). More careful study with next year's data will be needed to test for this possibility.
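
The back-of-the-envelope comparison above can be checked in a couple of lines, taking the cross-section gain and integrated luminosities quoted in the text:

```python
# Expected run-2 signal events relative to run-1, for gluon-gluon production:
# (13 TeV / 8 TeV cross-section gain) * (run-2 luminosity / run-1 luminosity).

xsec_gain_13_over_8 = 5.0   # cross-section gain from 8 to 13 TeV (gluon fusion)
lumi_8_fb = 20.0            # run-1 integrated luminosity (ATLAS), fb^-1
lumi_13_atlas_fb = 3.2      # run-2 (ATLAS), fb^-1
lumi_13_cms_fb = 2.6        # run-2 (CMS), fb^-1

ratio_atlas = xsec_gain_13_over_8 * lumi_13_atlas_fb / lumi_8_fb
ratio_cms = xsec_gain_13_over_8 * lumi_13_cms_fb / lumi_8_fb
print(f"ATLAS 13/8 TeV signal-event ratio: {ratio_atlas:.2f}")  # 0.80, i.e. ~75-80%
print(f"CMS   13/8 TeV signal-event ratio: {ratio_cms:.2f}")    # 0.65
```

A ratio below 1 means run-1 should have produced more signal events than run-2, which is the source of the tension discussed above.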

The model.  This is the easiest part :)  A resonance produced in gluon-gluon collisions and decaying to 2 photons?  We've seen that already... that's how the Higgs boson was first spotted.  So all we need to do is to borrow from the Standard Model. The simplest toy model for the resonance would be a new singlet scalar with mass of 750 GeV coupled to new heavy vector-like quarks that carry color and electric charges. Then quantum effects will produce, in analogy to what happens for the Higgs boson, an effective coupling of the new scalar to gluons and photons:

By a judicious choice of the effective couplings (which depend on masses, charges, and couplings of the vector-like quarks) one can easily fit the diphoton excess observed by ATLAS and CMS. This is shown as the green region in the plot.
If the vector-like quark is a T', that is to say, it has the same color and electric charge as the Standard Model top quark, then the effective couplings must lie along the blue line. The exclusion limits from the run-1 data (mesh) cut through the best-fit region, but do not disfavor the model completely. Variations of this minimal toy model will appear in a hundred papers this week.

The big picture.  Here the sky is the limit. The situation is completely different from 3 years ago, when there was one strongly preferred (and ultimately true) interpretation of the 125 GeV diphoton and 4-lepton signals as the Higgs boson of the Standard Model. On the other hand, scalars coupled to new quarks appear in countless models of new physics. We may be seeing the radial Higgs partner predicted by little Higgs or twin Higgs models, or the dilaton arising due to spontaneous conformal symmetry breaking, or a composite state bound by new strong interactions. It could be a part of an extended Higgs sector in many different contexts, e.g. the heavy scalar or pseudo-scalar in two Higgs doublet models. For more spaced-out possibilities, it could be the KK graviton of the Randall-Sundrum model, or it could fit some popular supersymmetric models such as the NMSSM. All these scenarios face some challenges. One is to explain why the branching ratio into two photons is large enough to be observed, and why the 750 GeV scalar is not seen in other decay channels, e.g. in decays to W boson pairs, which should be the dominant mode for a Higgs-like scalar. However, these challenges are nothing that an average theorist could not resolve by tomorrow morning. Most likely, this particle would just be a small part of a larger structure, possibly having something to do with electroweak symmetry breaking and the hierarchy problem of the Standard Model. If the signal is a real thing, then it may be the beginning of a new golden era in particle physics...

by Jester at May 19, 2016 03:44 PM

Jester - Resonaances

Higgs force awakens
The Higgs boson couples to the particles that constitute matter around us, such as electrons, protons, and neutrons. Its virtual quanta are constantly being exchanged between these particles. In other words, it gives rise to a force - the Higgs force. I'm surprised that this PR-cool aspect is not explored in our outreach efforts. Higgs bosons mediate the Higgs force in the same fashion as gravitons, gluons, photons, and W and Z bosons mediate the gravitational, strong, electromagnetic, and weak forces. Just like gravity, the Higgs force is always attractive, and its strength is proportional, in the first approximation, to a particle's mass. It is a force in the common sense; for example, if we bombarded a detector long enough with a beam of particles interacting only via the Higgs force, they would eventually knock atoms out of the detector.

There is of course a reason why the Higgs force is less discussed: it has never been detected directly. Indeed, in the absence of midi-chlorians it is extremely weak. First, it shares with the weak interactions the feature of being short-ranged: since the mediator is massive, the interaction strength is exponentially suppressed at distances larger than an attometer (10^-18 m), about 0.1% of the diameter of a proton. Moreover, for ordinary matter the weak force is more important, because of the tiny Higgs couplings to light quarks and electrons. For example, for the proton the Higgs force is a thousand times weaker than the weak force, and for the electron it is a hundred thousand times weaker. Finally, there are no known particles interacting only via the Higgs force and gravity (though dark matter in some hypothetical models has this property), so in practice the Higgs force is always a tiny correction to more powerful forces that shape the structure of atoms and nuclei. This is again in contrast to the weak force, which is particularly relevant for neutrinos, which are immune to the strong and electromagnetic forces.
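
The attometer range quoted above is just the reduced Compton wavelength of the 125 GeV Higgs boson, ħc / (m_H c²). A quick check:

```python
# Range of a Yukawa force mediated by a particle of mass m: roughly hbar*c / (m c^2).

hbar_c_mev_fm = 197.327   # hbar * c in MeV * fm
m_higgs_mev = 125.0e3     # Higgs mass, ~125 GeV

range_fm = hbar_c_mev_fm / m_higgs_mev  # in femtometers
range_m = range_fm * 1e-15
print(f"Higgs force range: {range_m:.1e} m")  # ~1.6e-18 m, about an attometer
```

Beyond this distance the Yukawa potential falls off as e^(-r/range)/r, which is why the force is utterly negligible at atomic scales except as a tiny level shift.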

Nevertheless, this new paper argues that the situation is not hopeless, and that the current experimental sensitivity is good enough to start probing the Higgs force. The authors propose to do it by means of atomic spectroscopy. Frequency measurements of atomic transitions have reached the stunning accuracy of order 10^-18. The Higgs force creates a Yukawa-type potential between the nucleus and the orbiting electrons, which leads to a shift of the atomic levels. The effect is tiny; in particular, it is always smaller than the analogous shift due to the weak force. This is a serious problem, because calculations of the leading effects may not be accurate enough to extract the subleading Higgs contribution. Fortunately, there may be tricks to reduce the uncertainties. One is to measure the isotope shifts of transition frequencies for several isotope pairs. The theory says that the leading atomic interactions should give rise to a universal linear relation (the so-called King's relation) between isotope shifts for different transitions. The Higgs and weak interactions should lead to a violation of King's relation. Given the many uncertainties plaguing calculations of atomic levels, it may still be difficult to ever claim a detection of the Higgs force. More realistically, one can try to set limits on the Higgs couplings to light fermions which will be better than the current collider limits.
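
The King-plot test described above can be illustrated with a toy calculation (all numbers below are made up for illustration, not taken from the paper): if the leading atomic interactions dominate, the isotope shifts of two different transitions fall on a straight line across isotope pairs, and any Higgs-force contribution would show up as a nonlinearity.

```python
# Toy King-plot linearity check: isotope shifts (arbitrary units) of two
# transitions, measured for three isotope pairs.

shifts_t1 = [1.00, 2.00, 3.00]
shifts_t2 = [0.50, 1.30, 2.10]  # here exactly linear in shifts_t1: 0.8*x - 0.3

# With three isotope pairs, draw a line through the first and last points;
# the middle point's residual measures the departure from King's relation.
slope = (shifts_t2[2] - shifts_t2[0]) / (shifts_t1[2] - shifts_t1[0])
intercept = shifts_t2[0] - slope * shifts_t1[0]
residual = shifts_t2[1] - (slope * shifts_t1[1] + intercept)
print(f"nonlinearity: {residual:.3f}")  # 0.000 -> consistent with King's relation
```

In the real proposal the measured nonlinearity (or its absence) is what constrains the Higgs couplings to light fermions.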

Atomic spectroscopy is way above my head, so I cannot judge if the proposal is realistic. There are a few practical issues to resolve before the Higgs force is mastered into a lightsaber. However, it is possible that a new front to study the Higgs boson will be opened in the near future. These studies will provide information about the Higgs couplings to light Standard Model fermions, which is complementary to the information obtained from collider searches.

by Jester at May 19, 2016 03:43 PM

Jester - Resonaances

750 ways to leave your lover
A new paper last week straightens out the story of the diphoton background in ATLAS. Some confusion was created because theorists misinterpreted the procedures described in the ATLAS conference note, which could lead to a different estimate of the significance of the 750 GeV excess. However, once the correct phenomenological and statistical approach is adopted, the significance quoted by ATLAS can be reproduced, up to small differences due to incomplete information available in public documents. Anyway, now that this is all behind, we can safely continue being excited at least until summer.  Today I want to discuss different interpretations of the diphoton bump observed by ATLAS. I will take a purely phenomenological point of view, leaving for the next time  the question of a bigger picture that the resonance may fit into.

Phenomenologically, the most straightforward interpretation is the so-called everyone's model: a 750 GeV singlet scalar particle produced in gluon fusion and decaying to photons via loops of new vector-like quarks. This simple construction perfectly explains all publicly available data, and can be easily embedded in more sophisticated models. Nevertheless, many more possibilities were pointed out in the 750 papers so far, and here I review a few that I find most interesting.

Spin Zero or More?  
For a particle decaying to two photons, there are not that many possibilities: the resonance has to be a boson and, according to young Landau's theorem, it cannot have spin 1. This leaves on the table spin 0, 2, or higher. Spin-2 is an interesting hypothesis, as this kind of excitation is predicted in popular models like the Randall-Sundrum one. Higher-than-two spins are disfavored theoretically. When more data is collected, the spin of the 750 GeV resonance can be tested by looking at the angular distribution of the photons. The rumor is that the data so far somewhat favor spin-2 over spin-0, although the statistics are certainly insufficient for any serious conclusions. Concerning the parity, it is practically impossible to determine it by studying the diphoton final state, and both the scalar and the pseudoscalar option are equally viable at present. Discrimination may be possible in the future, but only if multi-body decay modes of the resonance are discovered. If the true final state is more complicated than two photons (see below), then the 750 GeV resonance may have any spin, including spin-1 and spin-1/2.

Narrow or Wide? 
The total width is the inverse of the particle's lifetime (in our funny units). From the experimental point of view, a width larger than the detector's energy resolution will show up as a smearing of the resonance due to the uncertainty principle. Currently, the ATLAS run-2 data prefer a width 10 times larger than the experimental resolution (which is about 5 GeV in this energy ballpark), although the preference is not very strong in the statistical sense. On the other hand, from the theoretical point of view, it is much easier to construct models where the 750 GeV resonance is a narrow particle. Therefore, confirmation of the large width would have profound consequences, as it would significantly narrow down the scope of viable models. The most exciting interpretation would then be that the resonance is a portal to a dark sector containing new light particles very weakly coupled to ordinary matter.
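
Restoring the factors our funny units hide, width and lifetime are related by τ = ħ/Γ. A quick sketch, using the illustrative value of ~45 GeV (ten times the ~5 GeV resolution mentioned above):

```python
# Convert a resonance width in GeV to a lifetime in seconds: tau = hbar / Gamma.

HBAR_GEV_S = 6.582e-25  # hbar in GeV * s

def lifetime_s(width_gev):
    return HBAR_GEV_S / width_gev

# A width 10x the ~5 GeV experimental resolution, as the ATLAS data mildly prefer:
print(f"{lifetime_s(45.0):.1e} s")  # ~1.5e-26 s
```

So "wide" here means a lifetime of order 10^-26 seconds; a narrow resonance (width well below 5 GeV) would live an order of magnitude longer or more.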

How many resonances?  
One resonance is enough, but a family of resonances tightly packed around 750 GeV may also explain the data. As a bonus, this could explain the seemingly large width without opening new dangerous decay channels. It is quite natural for particles to come in multiplets with similar masses: our pion is an example, where the small mass splitting between π± and π0 arises due to electromagnetic quantum corrections. For Higgs-like multiplets the small splitting may naturally arise after electroweak symmetry breaking, and the familiar 2-Higgs-doublet model offers a simple realization. If the mass splitting of the multiplet is larger than the experimental resolution, this possibility can be tested by precisely measuring the profile of the resonance and searching for a departure from the Breit-Wigner shape. On the other side of the spectrum is the idea that there is no resonance at all at 750 GeV, but rather one at another mass, and the bump at 750 GeV appears due to some kinematical accident.
Who made it? 
The most plausible production process is definitely gluon-gluon fusion. Production in collisions of light quarks and antiquarks is also theoretically sound; however, it leads to a more acute tension between the run-2 and run-1 data. Indeed, even for gluon fusion, the production cross section of a 750 GeV resonance in 13 TeV proton collisions is only 5 times larger than at 8 TeV. Given the larger amount of data collected in run-1, we would expect a similar excess there, contrary to observations. For a resonance produced from u-ubar or d-dbar the analogous ratio is only 2.5 (see the table), leading to much more tension. The ratio climbs back to 5 if the initial state contains the heavier quarks: strange, charm, or bottom (which can also be found sometimes inside a proton); however, I haven't yet seen a neat model that makes use of that. Another possibility is to produce the resonance via photon-photon collisions. This way one could cook up a truly minimal and very predictive model where, of all the Standard Model particles, the resonance couples only to photons. However, in this case the ratio between the 13 and 8 TeV cross sections is very unfavorable, merely a factor of 2, and the run-1 vs run-2 tension comes back with more force. More options open up when associated production (e.g. with t-tbar, or in vector boson fusion) is considered. The problem with these ideas is that, according to what was revealed during the talk last December, there aren't any additional energetic particles in the diphoton events. Similar problems face models where the 750 GeV resonance appears as a decay product of a heavier resonance, although in this case some clever engineering or fine-tuning may help to hide the additional particles from experimentalists' eyes.

Two-body or more?
While a simple two-body decay of the resonance into two photons is a perfectly plausible explanation of all existing data, a number of interesting alternatives have been suggested. For example, the decay could be 3-body, with another soft visible or invisible  particle accompanying two photons. If the masses of all particles involved are chosen appropriately, the invariant mass spectrum of the diphoton remains sharply peaked. At the same time, a broadening of the diphoton energy due to the 3-body kinematics may explain why the resonance appears wide in ATLAS. Another possibility is a cascade decay into 4 photons. If the  intermediate particles are very light, then the pairs of photons from their decay are very collimated and may look like a single photon in the detector.
 ♬ The problem is all inside your head   and the possibilities are endless. The situation is completely different than during the process of discovering the  Higgs boson, where one strongly favored hypothesis was tested against more exotic ideas. Of course, the first and foremost question is whether the excess is really new physics, or just a nasty statistical fluctuation. But if that is confirmed, the next crucial task for experimentalists will be to establish the nature of the resonance and get model builders on the right track.  The answer is easy if you take it logically ♬ 

All ideas discussed above appeared in recent articles by various authors addressing the 750 GeV excess. If I were to include all references the post would be just one giant hyperlink, so you need to browse the literature yourself to find the original references.

by Jester at May 19, 2016 03:43 PM

Jester - Resonaances

April Fools' 16: Was LIGO a hack?

This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;) 

We have recently had a few scientific embarrassments, where a big discovery announced with great fanfare was subsequently overturned by new evidence. We still remember OPERA's faster-than-light neutrinos, which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: LIGO's recent discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster.

From the beginning, one reason to be skeptical about LIGO's discovery was that the signal  seemed too beautiful to be true. Indeed, the experimental curve looked as if taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal when the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a  textbook. This is at least what is strongly suggested by recent developments.

On EvilZone, a well-known hackers' forum, a hacker using the nickname Madhatter was boasting that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage of the same backdoor that allows selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to the attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the effect was to reproduce exactly the same signal in the LIGO apparatus as the one observed in September last year!

Even though the traces of a hack cannot be discovered, there is little doubt now that foul play was involved. It is not clear what the hacker's motive was: was it just a prank, or maybe an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker.

by Jester at May 19, 2016 03:42 PM

Jester - Resonaances

Diphoton update
Today at the Moriond conference ATLAS and CMS updated their diphoton resonance searches. There's been a rumor of an ATLAS analysis with looser cuts on the photons, where the significance of the 750 GeV excess grows to a whopping 4.7 sigma. The rumor had it that this analysis would be made public today, so expectations were high. However, the loose-cuts analysis was not approved in time by the collaboration, and the fireworks display was cancelled. In any case, there was some good news today, and some useful info for model builders was provided.

Let's start with ATLAS. For the 13 TeV results, they now have two analyses: one called spin-0 and one called spin-2. The cuts in the latter are optimized not for a spin-2 resonance but rather for a high-mass resonance (where there's currently no significant excess), so the spin-2 label should not be treated too seriously in this case. Both analyses show a similar excess at 750 GeV: 3.9 and 3.6 sigma, respectively, for a wide resonance. Moreover, ATLAS provides additional information about the diphoton events, such as the angular distribution of the photons, the number of accompanying jets, the amount of missing energy, etc. This may be very useful for theorists entertaining less trivial models, for example ones where the 750 GeV resonance is produced from a decay of a heavier parent particle. Finally, ATLAS shows a re-analysis of the diphoton events collected at the 8 TeV center-of-mass energy of the LHC. The former run-1 analysis was a bit sloppy in the interesting mass range; for example, no limits at all were given for a 750 GeV scalar hypothesis. Now the run-1 data have been cleaned up and analyzed using the same methods as in run-2. Excitingly, there's a 2 sigma excess in the spin-0 analysis in run-1, roughly compatible with what one would expect given the observed run-2 excess! No significant excess is seen in the spin-2 analysis, and the tension between the run-1 and run-2 data is quite severe in this case. Unfortunately, ATLAS does not quote the combined significance and the best-fit cross section for the 750 GeV resonance.

For CMS, the big news is that the amount of 13 TeV data at their disposal has increased by 20%. Using MacGyver skills, they managed to make sense of the chunk of data collected when the CMS magnet was off due to a technical problem. Apparently it was worth it, as new diphoton events have been found in the 750 GeV ballpark. Thanks to that, and to better calibration, the significance of the diphoton excess in run-2 actually increases to 2.9 sigma! Furthermore, much like ATLAS, CMS updated their run-1 diphoton analyses and combined them with the run-2 ones. Again, the combination increases the significance of the 750 GeV excess. The combined significance quoted by CMS is 3.4 sigma, similar for the spin-0 and spin-2 analyses. Unlike in ATLAS, the best fit is for a narrow resonance, which is the preferred option from the theoretical point of view.
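As a rough sanity check on a combination like this, independent Gaussian significances can be naively added in quadrature. This is emphatically not the experiments' actual likelihood-based procedure, just a rule of thumb:

```python
import math

def combine_sigmas(*zs):
    """Naive quadrature combination of independent Gaussian significances.
    Real experimental combinations use full likelihoods; this is only a
    back-of-the-envelope consistency check."""
    return math.sqrt(sum(z * z for z in zs))

# A run-2 excess near 2.9 sigma plus a run-1 excess near 2 sigma gives:
print(f"naive combination: {combine_sigmas(2.9, 2.0):.1f} sigma")
```

The naive answer, about 3.5 sigma, lands close to the 3.4 sigma quoted by CMS, which shows that the run-1 and run-2 excesses pull in the same direction.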

In summary, the diphoton excess survived the first test. After adding more data and improving the analysis techniques, the significance slightly increases rather than decreases, as expected for a real particle. The signal is now a bit more solid: both experiments have a similar amount of diphoton data and they both claim a similar significance for the 750 GeV bump. It may be a good moment to rename the ATLAS diphoton excess the LHC diphoton excess :) So far, the story of 2012 is repeating itself: the initial hints of a new resonance are solidifying into a consistent picture. Are we going to have another huge discovery this summer?

by Jester at May 19, 2016 03:41 PM

ZapperZ - Physics and Physicists

The Curse Of Being A Physicist
When do you speak up in a social setting and set someone straight?

I think I've mentioned a few times here about being in a social setting and then being found out to be a physicist. Most of the time, this was a good thing, because I would get curious questions about physics-related news (the LHC was a major story for months).

But what if you hear something that clearly isn't quite right? Do you speak up, even though it might cause embarrassment to the other person?

I attended the annual Members Night at the Adler Planetarium here in Chicago last night. It was a very enjoyable evening. Their new show about "Planet Nine", which is about to open, was very, VERY informative and entertaining. I highly recommend it. We got to be among the first to see it before it opens to the public.

Well, anyway, towards the end of the evening, before we left, we decided to walk around the back of the facility and visit the Doane Observatory. The telescope was pointed at Jupiter, which was prominent in the night sky last night. There was a line, so we waited for our turn.

As we moved up the line, my companions and I heard two gentlemen chatting away with the visitors, and then with each other, about their enthusiasm for astronomy and science. This is always good to see, especially at an event like this. As I got closer, it turned out that they were either volunteers or employees of the Adler Planetarium, because they were wearing name tags or something similar. One of them identified himself as an astronomer, which wasn't surprising considering the event and the location.

But then, things got a bit sour, at least for me. In trying to pump up the visitors' enthusiasm for astronomy and science, they started quoting Carl Sagan's famous phrase that we are all made of star stuff. This wasn't the bad part, but then they took it further by claiming that hydrogen is the "lego block" of the universe, and that everything can be thought of as being built out of hydrogen. One of them gave an example by saying that you take two hydrogens, put them together, and you get helium!

OK, by then, I was no longer amused by these two guys, and was tempted to say something. I wanted to say that hydrogen is not the "lego block" of our universe, not if the Standard Model of particle physics has anything to say about it. And secondly, you don't get helium when you put two hydrogen atoms together. After all, where would the extra two neutrons in helium come from?

But I stopped myself from saying anything. These people were working pretty hard for this event, they were trying to show their enthusiasm for the subject matter, and we were surrounded by other people, the general public, who were obviously also interested in the topic. Anything I said to correct these two men would not have looked good, or at least that was my assessment at the moment. It might easily have led to an awkward, embarrassing scene.

I get that when we try to talk to the public about science, we might overextend ourselves. I used to give tours and participate in outreach programs, so I've been in this type of situation before. While I tried to make sure everything I said was accurate, there was always the possibility that someone in the audience knew more about something I said and found certain aspects of it not entirely accurate. I get that.

So that was why I didn't say anything to these two gentlemen. I think what they told the people within earshot was wrong. Maybe their enthusiasm made them forget some basic facts. That might be forgivable. Still, I'm obviously still thinking about it the next morning, and second-guessing whether I should have told them quietly that what they said wasn't quite right. Maybe it would stop them from saying it out loud next time?

On the other hand, how many of these people who heard what was said actually (i) understood it and (ii) remembered it?


by ZapperZ at May 19, 2016 01:44 PM

ZapperZ - Physics and Physicists

Still No Sterile Neutrinos
IceCube has not found any indication of sterile neutrinos after looking for them for two years, at least not in the energy range where they were expected.

In the latest research, the IceCube collaboration performed independent analyses on two sets of data from the observatory, looking for sterile neutrinos in the energy range between approximately 320 GeV and 20 TeV. If present, light sterile neutrinos with a mass of around 1 eV/c² would cause a significant disappearance in the total number of muon neutrinos that are produced by cosmic-ray showers in the atmosphere above the northern hemisphere and then travel through the Earth to reach IceCube. The first set of data included more than 20,000 muon-neutrino events detected between 2011 and 2012, while the second covered almost 22,000 events observed between 2009 and 2010.
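The disappearance logic can be sketched with the standard two-flavor vacuum oscillation formula. The mixing angle and energy below are illustrative guesses on my part, and the real IceCube analysis relies on matter effects inside the Earth that this vacuum sketch ignores:

```python
import math

def survival_prob(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    """Two-flavor muon-neutrino survival probability in vacuum,
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (standard approximation)."""
    phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative numbers: Earth-diameter baseline, TeV-scale atmospheric
# neutrino, eV^2-scale sterile splitting, guessed mixing strength.
EARTH_DIAMETER_KM = 12742.0
p = survival_prob(sin2_2theta=0.1, dm2_ev2=1.0,
                  baseline_km=EARTH_DIAMETER_KM, energy_gev=1000.0)
print(f"muon-neutrino survival probability: {p:.3f}")
```

With no sterile mixing the survival probability stays at 1, so a deficit of up-going muon neutrinos at these energies is the smoking gun the analysis looks for.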

I think there are other facilities that are looking for them as well. But this result certainly excludes a large portion of the "search area".


by ZapperZ at May 19, 2016 01:17 PM

Symmetrybreaking - Fermilab/SLAC

The Planck scale

The Planck scale sets the universe's minimum limit, beyond which the laws of physics break down.

In the late 1890s, physicist Max Planck proposed a set of units to simplify the expression of physics laws. Using just five constants of nature (including the speed of light and the gravitational constant), you, me and even aliens from Alpha Centauri could arrive at these same Planck units.

The basic Planck units are length, mass, temperature, time and charge.

Let’s consider the unit of Planck length for a moment. The proton is about 100 million trillion times larger than the Planck length. To put this into perspective, if we scaled the proton up to the size of the observable universe, the Planck length would be a mere trip from Tokyo to Chicago. The 14-hour flight may seem long to you, but to the universe, it would go completely unnoticed.

The Planck units were invented as a set of universal units, so it was a shock when they also turned out to mark the limits where the known laws of physics apply. For example, a distance smaller than the Planck length just doesn't make sense—the physics breaks down.
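The Planck units follow directly from the constants. A quick sketch using rounded CODATA values reproduces the length, mass and time scales, and the proton-to-Planck-length ratio quoted above (I use an approximate proton charge radius for the comparison):

```python
import math

# Rounded CODATA-style constants, SI units.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c ** 3)   # ~1.6e-35 m
planck_mass = math.sqrt(hbar * c / G)          # ~2.2e-8 kg
planck_time = planck_length / c                # ~5.4e-44 s

proton_radius = 0.84e-15  # approximate proton charge radius, m
print(f"Planck length: {planck_length:.2e} m")
print(f"proton radius / Planck length: {proton_radius / planck_length:.1e}")
```

The ratio comes out at a few times 10^19, consistent with the "100 million trillion" (order 10^20) quoted above.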

Physicists don’t know what actually goes on at the Planck scale, but they can speculate. Some theoretical particle physicists predict all four fundamental forces—gravity, the weak force, electromagnetism and the strong force—finally merge into one force at this energy. Quantum gravity and superstrings are also possible phenomena that might dominate at the Planck energy scale.

The Planck scale is the universal limit, beyond which the currently known laws of physics break down. To comprehend anything beyond it, we need new, unbreakable physics.

by Rashmi Shivni at May 19, 2016 01:00 PM

Lubos Motl - string vacua and pheno

Particles are vibrations
The music analogy is much more accurate than most people want to believe

Tetragraviton is a postdoc at the Perimeter Institute who has written several papers on multiloop amplitudes in gauge theory. Even though none of these papers depends on string theory in any tangible way, I had thought that he's a guy close enough to string theory who could potentially work on it, which is why I was surprised by his blog post a week ago,
Particles Aren’t Vibrations (at Least, Not the Ones You Think)
which indicates that I was wrong. The first sentence tells you what kind of popularizers are supposed to be a target:
You’ve probably heard this story before, likely from Brian Greene.
I was imagining that there was something subtle. People may dislike the overabundant comments about "music and string theory" etc. But I didn't find anything too subtle in the blog post. While there's always some room for interpretation of what a somewhat vague sentence addressed to laymen could have meant, I think it's right to conclude that Tetragraviton is just flatly wrong.

Needless to say, the claim that (in weakly coupled string theory) different particle species are vibration modes of a string isn't just some fairy tale used by Brian Greene. It's a translation of an actual defining fact of string theory into plain English. Brian Greene in no way has a monopoly on such a statement. Pretty much everyone else who has talked about string theory agrees that this is the right summary of string theory's ingenious description of the diversity of particle species.

Clearly, you may add people like Michio Kaku:
In string theory, all particles are vibrations on a tiny rubber band; physics is the harmonies on the string; chemistry is the melodies we play on vibrating strings; the universe is a symphony of strings, and the 'Mind of God' is cosmic music resonating in 11-dimensional hyperspace.
Kaku and even Greene may sometimes be presented as "just some popularizers". But they have made highly nontrivial contributions to the field, too. And almost all other string theorists who talk about string theory use very similar formulations. I could give you dozens of examples. But because of his widely respected technical credentials, let me pick Edward Witten:
String theory is an attempt at a deeper description of nature by thinking of an elementary particle not as a little point but as a little loop of vibrating string. One of the basic things about a string is that it can vibrate in many different shapes or forms, which gives music its beauty. If we listen to a tuning fork, it sounds harsh to the human ear. And that's because you hear a pure tone rather than the higher overtones that you get from a piano or violin that give music its richness and beauty.

So in the case of one of these strings it can oscillate in many different forms—analogously to the overtones of a piano string. And those different forms of vibration are interpreted as different elementary particles: quarks, electrons, photons. All are different forms of vibration of the same basic string. Unity of the different forces and particles is achieved because they all come from different kinds of vibrations of the same basic string. In the case of string theory, with our present understanding, there would be nothing more basic than the string.
The fact that particle species are types of vibrations isn't just a truth. It's pretty much "the defining truth", the very reason why string theory is unifying forces and matter. If you allow me to quote Barton Zwiebach's undergraduate textbook, A First Course in String Theory:
Why is string theory a truly unified theory? The reason is simple and goes to the heart of the theory. In string theory, each particle is identified as a particular vibrational mode of an elementary microscopic string. A musical analogy is very apt. Just as a violin string can vibrate in different modes and each mode corresponds to a different sound, the modes of vibration of a fundamental string can be recognized as the different particles we know. One of the vibrational states of strings is the graviton, the quantum of the gravitational field. Since there is just one type of string, and all particles arise from string vibrations, all particles are naturally incorporated into a single theory. When we think in string theory of a decay process...
Everyone who understands string theory agrees with the essence of the statement that string theory explains particles as vibrations.

It's always amazing to see how many people like to pick an important truth, completely negate it, and claim that the result is a very important truth. It looks like they want to prove Niels Bohr's famous quote
The opposite of a correct statement is a false statement. But the opposite of a profound truth may well be another profound truth.
Well, he only says that the opposite of a profound truth may be another profound truth. It usually isn't.

OK, so how did Tetragraviton argue that particles aren't vibrations?

We were shown the higher harmonics on a string with a claim that this is not how string theory produces the list of particle species. Except that it is a totally valid sketch of how string theory does it.

In a flat spacetime background, a single string really has possible higher harmonics \(\alpha^\mu_{\pm n}\) along the string – the \(n\)-th Fourier component in the expansion of a combination of \(x^{\prime \mu}(\sigma)\) and \(p^\mu(\sigma)\) – and \(\alpha_n,\alpha_{-n}\) obey the algebra of annihilation and creation operators, respectively.

A general excited open string state is obtained by the action of these harmonics on the ground state (usually a tachyonic ground state) \(\ket 0\):\[

\dots (\alpha_{-3})^{N_3} (\alpha_{-2})^{N_2} (\alpha_{-1})^{N_1} \ket 0

\] where the exponents \(N_j\) are non-negative integers (only finitely many are nonzero). For each higher harmonic, the string may be excited by the corresponding vibration – an integer number of times because the string obeys the laws of quantum mechanics and the quantum harmonic oscillator has an equally spaced spectrum. Such an excited string behaves as a particle whose mass is proportional to\[

m^2 = m_0^2 (\dots + 3N_3 + 2N_2 + 1N_1 - 1)

\] The more excitations you include, the heavier a particle you get. The higher harmonics increase the string's mass more quickly. The term \(-1\) is a contribution from the zero-point energies of all these oscillators. You may derive this negative shift as a term proportional to the renormalized sum of integers\[

1+2+3+\dots \to -\frac{1}{12}

\] I've replaced \(=\) by \(\to\) just because I want to reduce the number of angry clueless critics by 70% but be sure that \(=\) would be more accurate.
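The counting behind the mass formula can be made concrete in a toy model with oscillators in a single transverse dimension: the number of states at level \(N=\sum_n n N_n\) is the number of integer partitions of \(N\), and \(m^2\propto N-1\). (A realistic superstring multiplies in the remaining bosonic and fermionic oscillator families; this sketch keeps only one.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """Number of integer partitions of n with parts <= max_part, i.e. the
    count of occupation patterns {N_k} with sum(k * N_k) == n for
    oscillators in a single transverse dimension (toy model)."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

# Level N, its degeneracy, and m^2 in units of m_0^2 (toy open string).
for N in range(6):
    print(f"level {N}: {partitions(N):3d} states, m^2 = {N - 1} m_0^2")
```

The level-0 state is the tachyon with \(m^2=-m_0^2\), level 1 is massless, and the degeneracies 1, 1, 2, 3, 5, 7, ... grow as the partition numbers, illustrating the equally spaced tower described above.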

The characteristic scale \(m\) may be close to the GUT scale if not the Planck scale. But there also exist low-string-scale models (brane worlds) where \(m\) is comparable to a few \({\rm TeV}\)s, the energies marginally accessible by the LHC. I was surprised that Tetragraviton didn't have a clue about the possibility of a low string scale.

In the formula above, I suppressed the \(\mu\) index so I was only adding vibrations in one transverse dimension. A realistic 10D superstring requires 8 copies of such oscillators, all of them may excite the string by the same amount, and there may also be similar fermionic oscillators living on the string. Their contributions to the masses are analogous – except that the corresponding operators "mostly anticommute" and the occupation numbers are therefore \(0\) or \(1\).

For closed strings, we have two sets of oscillators – left-moving and right-moving oscillators \(\alpha\) and \(\tilde\alpha\). Both of them may be added to excite the string. The total \(m^2\) calculated from the left-movers must agree with the total \(m^2\) calculated from the right-movers. The requirement is known as the level-matching condition, \(L_0=\tilde L_0\), and it is basically equivalent to the statement that the choice of the \(\sigma=0\) "origin" of a closed string must be unphysical (the total momentum along/around the closed string must vanish).
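Level matching can be illustrated in the same single-transverse-dimension toy model: left-moving and right-moving excitations are built independently, and only combinations with equal left and right levels survive the \(L_0=\tilde L_0\) condition.

```python
from itertools import product

def mode_multisets(level):
    """All multisets of excited mode numbers summing to `level` (the integer
    partitions of the level), for a single transverse dimension."""
    def build(remaining, min_mode):
        if remaining == 0:
            yield ()
            return
        for mode in range(min_mode, remaining + 1):
            for rest in build(remaining - mode, mode):
                yield (mode,) + rest
    return list(build(level, 1))

MAX_LEVEL = 3
# Each state is (level, excited modes); build left and right independently.
all_left = [(n, p) for n in range(MAX_LEVEL + 1) for p in mode_multisets(n)]
all_right = list(all_left)
# Impose level matching: keep only pairs with equal left and right levels.
matched = [(l, r) for l, r in product(all_left, all_right) if l[0] == r[0]]
print(f"{len(all_left)}x{len(all_right)} naive pairs, "
      f"{len(matched)} survive level matching")
```

For levels up to 3, only 15 of the 49 naive left-right combinations satisfy the level-matching condition, showing how the constraint thins the closed-string spectrum.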

Note that our formula calculated \(m^2\) and not \(m\) as the integer. This is due to a rather elementary kinematic technicality that boils down to relativity. In relativity, things simplify when the strings are highly boosted or described in the "light cone gauge". In that case, the component \(p^-\) of the energy-momentum vector – a light-cone gauge edition of "energy" – turns out to contain a term proportional to \(m^2\). (Explanations without the light-cone gauge are possible, too.)

You may have been afraid that in relativity, the energy formula would unavoidably contain lots of square roots from \(E=\sqrt{M^2+P^2}\) which would make all the oscillators anharmonic. But this trap may be avoided by a choice of coordinates on the world sheet. In particular, in the light-cone gauge (really, a conformal gauge is enough), the internal energy of the string is linked to \(m^2\) of the corresponding particle and the formula for \(m^2\) reduces to simple harmonic oscillators without square roots. None of these things may be clear to anyone "without any calculations" but students learn and verify the reasons before the 5th lecture of string theory. The result is that even though the string is a relativistic object (the vibration equations are Lorentz-covariant), the relevant Hamiltonians may be written via simple formulae involving harmonic oscillators and no square roots.

So the squared masses \(m^2\) of allowed vibrating strings are literally integers in certain units.

The squared masses of known particle species are not equally spaced in this way. It's mostly because
  1. the strings generally vibrate in a curved spacetime background
  2. the particles – vibrations of strings – interact with each other (because strings split and join) and this has a similar effect on the masses as field theory phenomena such as the Higgs mechanism; in fact, the Higgs mechanism and all similar things work in string theory "just like" in field theory
Tetragraviton says that the statement "particles are vibrations" is invalidated in some way because string theory also has extra dimensions and supersymmetry. But neither extra dimensions nor supersymmetry invalidate the picture above. In fact, neither extra dimensions nor supersymmetry imply that even the simple equally spaced spectrum based on the higher harmonics has to be generalized.

Extra dimensions may be flat (torus or its orbifolds) and supersymmetry may be expressed in terms of free fields (whose spectrum is exactly gotten by adding the energy of the higher harmonics).

Moreover, Tetragraviton's extra comments about extra dimensions and supersymmetry are absolutely demagogic given the fact that he claimed to show something inaccurate about Brian Greene's statements about string theory. Brian Greene has always discussed extra dimensions and supersymmetry in much more detail than Tetragraviton. For example, several full chapters are dedicated to these topics in The Elegant Universe.

I want to emphasize that there are actually semirealistic models of string theory – which basically produce the minimal supersymmetric standard model or something like that consistently coupled to quantum gravity – which still build the spectrum pretty much by the simple addition of the higher harmonics that I discussed above. In particular, I mean the orbifolds of tori and the heterotic models in the free fermionic formulation.

A novelty of such orbifolds is that some of the states come from twisted sectors. A twisted sector has some new boundary conditions. A round trip around the closed string doesn't return you to the same point in the space (or configuration space) but one related by a global symmetry (isometry of the compactification manifold or a generalization of an isometry). Consequently, the indices \(n\) of \(\alpha_{n}\) are no longer integers but they are shifted by a fractional shift such as \(1/2\) or \(1/4\) away from an integer. This isn't changing the story qualitatively. It's still true that the squared masses are integer multiples of a quantum. Also, the negative additive shift in the formula for \(m^2\) – the ground state energy – depends on which (twisted – or untwisted) sector you consider.

Let me discuss Tetragraviton's claims in more detail.
It’s a nice story. It’s even partly true. But it [the claim that increasingly heavy particle species are obtained from the addition of higher harmonics etc.] gives a completely wrong idea of where the particles we’re used to come from.
Sorry but it gives a completely correct qualitative idea where all particle species in string theory come from. All of them come from string vibrations and it's always the case that (as long as one ignores subleading corrections to the masses from field theory effects etc.) the more vibrations are added to a string, the heavier particle species we obtain.

Experimentally, we have only observed a few dozen particle species. But they come from the tower of vibrating strings, too. In some approximation, they usually come from states with \(m^2=0\). But that does not mean that the counting of the higher harmonics and their contributions to \(m^2\) may be avoided.


It's because of the negative shift in the \(m^2\). The ground state of a (closed or open) string is normally a tachyon with \(m^2\lt 0\). This state is projected out by the so-called GSO projection. At the end, the spacetime supersymmetry is a sufficient (but not necessary!) condition to get rid of all the tachyons. But there are always numerous massless states – in the approximation of free strings. And these states are massless because the negative ground state contribution to \(m^2\) is cancelled by the positive contributions from the oscillators. This cancellation may take a different numerical form for different states – and especially for states in different twisted sectors.

But again, the counting of the basic frequency's and higher harmonics' energy is unavoidable even if you want to understand the origin of the massless states. If all the non-constant modes along the string could be completely ignored and omitted, the whole added value of string theory would be "redundant garbage" and we could just work with the equally consistent massless truncation of string theory.

However, we just can't. In particular, one of the massless i.e. \(m^2=0\) states of the vibrating string is the graviton, the quantum of the gravitational wave (or field), the messenger of the gravitational force. Even in the simplest \(D=26\) bosonic string theory, the spin-two graviton states are obtained from a closed string by the action of two oscillators:\[

\alpha^\mu_{-1} \tilde \alpha^\nu_{-1} \ket{0}

\] Similarly, in the RNS \(D=10\) definition of superstring theory, it is\[

\alpha^\mu_{-1/2} \tilde \alpha^\nu_{-1/2} \ket{0}_{NS,NS}

\] The ground state is a tachyon (which survives in bosonic string theory, a source of infrared inconsistencies equivalent to an instability, but is removed by the GSO projections in superstring theory). But its negative \(m^2\) is exactly cancelled by one left-moving excitation of the "basic frequency wave" on the string, and one right-moving one (note that the level-matching condition holds). There is no way to get the same results without the non-constant "sinusoidal waves" on the string.

The point that 4gravitons is missing is that massless states (in the free-string approximation) coming from the quantized strings are massless "by accident". Most states have positive masses; some states happen to have zero masses when the terms are added. But the latter aren't separated from the rest in any a priori way. The massive excitations are in no way added artificially to some massless starting point. There is no massless starting point in string theory. String theory unavoidably generates the massless and massive states simultaneously, with no consistent way to divide them. It is not quite trivial to derive the massless spectrum in a general string compactification. It's about as hard as deriving the states at any massive level.

OK, back to the graviton state that had two (minimal nonzero frequency) wave excitations around the closed string.

Once you allow the "basic frequency" of the wave on the string, you automatically allow all of them because splitting and joining strings are capable of producing truncated sines on a shorter interval which may only be Fourier-expanded on the shorter string if you allow all the higher harmonics as well. And a consistent theory of quantum gravity may only be obtained if you incorporate all of them. There's just no way to consistently truncate the higher harmonics because even the "simple" graviton depends on the non-constant modes along the string.
Again, even for massless states, the careful counting of the energy from nontrivial sinusoidal excitations of the string is essential to get the correct mass.
Disappointingly, the only interpretation of Tetragraviton's claim that "the vibrating string picture with harmonics is a completely wrong explanation of the well-known particle species" is that he just doesn't have a clue how string theory explains the massless and light states.

But I believe that this is not the only problem with his views about string theory. Another paragraph says:
String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
I've discussed that; it's not necessarily the case. I do believe that the string scale is close to the Planck mass, but there do exist low-string-scale models where it's as low as a few \({\rm TeV}\)s. This is just a technical difference. The heavier excited string vibrations are equally real in both scenarios.

But it's primarily the following paragraph that I believe to be seriously flawed:
Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.
Electrons and particles of dark matter (if the latter is composed of particles) are excited strings as well, and even for those, the addition of energies from vibrations on the string is needed to get the correct mass (despite its being zero in the free-string approximation). There just doesn't exist any sense in which the statement that "the states in the infinite tower of arbitrarily excited strings don't describe the electron or dark matter" could be correct. It's just wrong, wrong, wrong.

But the generalization of this statement, "they [strings excited by the harmonics] are not how string theory explains the fundamental particles of nature", is surely the opposite of a deep truth. That is exactly how string theory explains the fundamental particles of Nature. Barton Zwiebach's quote above may be used as the best explanation in this blog post of why this observation is both right and essential.

There may also be some confusion about "what counts as a fundamental particle of Nature". Tetragraviton seems to count the electron but not some heavy states near the string scale. But both of them are fundamental particles of Nature. Moreover, both of them have masses whose essential contribution comes from the energy of the vibrations added to the string. There is no qualitative difference between the electron and the graviton on one side and the heavier string states on the other. We may have detected some particles and not others but all of them are equally real and equally fundamental.
So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen. I’ll describe a few.

The first and most important trick here is supersymmetry. ...
Again, it's simply not true that supersymmetry replaces or invalidates the fact that the main contribution to the mass of particles in string theory comes from the vibrations of a string. Supersymmetry is a special feature of a subset of the string vacua (and similarly quantum field theories). But the elements of this subset are constructed in the same way as elements outside this subset. In string theory, they are constructed by counting the energy that vibrations on a quantum relativistic string may carry. Supersymmetry almost always requires some fermionic degrees of freedom but they may be viewed as extra coordinates of the (super)space and they add vibrations and energy through (fermionic) harmonic oscillators just like their bosonic friends (well, they're not just Platonic friends, they're superpartners). They also have higher harmonics with \(n=2,3\) etc., only the occupation numbers are \(N_a=0,1\).
Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles.
Supersymmetry makes it more likely that there will be massless or light particles but it is not a necessary condition. There exist non-supersymmetric (yet tachyon-free) string vacua with the analogous massless portion of the spectrum (massless is meant at the level of the free string, the string scale). Despite the absence of supersymmetry, the number of massless bosonic and fermionic particle species – massless states of a vibrating string – is basically the same as in the similar supersymmetric models (I am talking about the tachyon-free non-SUSY heterotic strings). The states are just different, not less numerous or "worse".
Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.
Yup, except that Brian Greene and many others have explained all these things with extra dimensions etc. far more accurately and pedagogically than Tetragraviton. Incidentally, a compactification is always needed to obtain at least a semi-realistic string vacuum.
In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes.
Again, there is absolutely no "contradiction" between vibrations on one side and compactifications, SUSY, or branes on the other side. They're independent concepts. Strings vibrate even when they're placed in a compactified spacetime manifold, even when they're supersymmetric, even when there are D-branes around, and even if the strings are open strings attached to these D-branes. It's similar to the bass strings: they also vibrate even when they are surrounded by an instrument whose shape resembles Meghan Trainor, no treble. The shape of the instrument, like the shape of the compactification manifold, influences the sound (and spectrum) of the vibrations but it in no way invalidates or removes the vibrations.

But a point that he repeats all the time and that annoys me is the one about the "particles you're used to". String theory isn't primarily a theory meant to discuss "just the particles you're used to". String theory is a theory of everything – which includes all particles, including those that no one is used to because we haven't observed them (yet).

If someone isn't interested in the full list of particles in Nature, I find it obvious that he has no reason to be interested in string theory, either – because string theory is almost by definition a theory going well beyond the technical limitations of current experiments. If someone isn't interested in what is hiding beneath the surface, an effective field theory is the easier attitude for such a narrow-minded interest. The effective field theories are really defined to be the answer to questions that never try to go beyond a certain regime defined by practical limitations. But in that case, if someone is interested in these low-energy things only, I don't see why he would be reading blog posts or books about string theory at all.

It just makes no sense whatsoever. The person isn't interested in these questions, so he probably doesn't study them and hasn't studied them. He almost certainly knows nothing about the things he could have learned (about string theory) but hasn't, and he had better keep his mouth shut.
The higher harmonics are still important: there are theorems that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations allows string theory to exploit a key loophole.
Right. All the excited string modes are totally needed for the consistency of the quantum gravity, as I said. Also, as I discussed in a comment on the 4gravitons blog, when you gradually increase the value of the string coupling constant, the excited string states are gradually turning to black hole microstates. The exponential increase of the number of excited string states is a precursor or an approximation to the quasi-exponential increase of the number of black hole microstates we need in a consistent quantum theory of gravity.
They just don’t happen to be how string theory gets the particles of the Standard Model.
If the world is described by a weakly coupled string theory, string theory does derive all the particles of the Standard Model exactly by the same algorithm that 4gravitons irrationally denounces.
The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.
It is not a misconception, and Tetragraviton has only brought confusion and falsehoods to this topic.

Quite generally, a popularizer of science always runs the risk of being separated from the big shots who do the best research, and so on. People realize this (true) general fact and that's also why popularizers are sometimes attacked with similar words. However, in the case of string theory, almost all these attacks are just plain rubbish.

In particular, Brian Greene has been extremely careful what he was saying about string theory. His explanations of these topics correspond pretty much to the most accurate sketch that is accessible to a large enough subset of the lay public. And people who are criticizing some basic claims such as the deep insight that "in string theory, particles are vibrations" are simply full of šit. The identification of the particle species (all of them) and the vibration states of a string is a profound truth (of weakly coupled string theory).

by Luboš Motl at May 19, 2016 12:53 PM

The n-Category Cafe

The HoTT Effect

Martin-Löf type theory has been around for years, as have category theory, topos theory and homotopy theory. Bundle them all together within the package of homotopy type theory, and philosophy suddenly takes a lot more interest.

If you’re looking for places to go to hear about this new interest, you are spoilt for choice:

For an event which delves back also to pre-HoTT days, try my

CFA: Foundations of Mathematical Structuralism

12-14 October 2016, Munich Center for Mathematical Philosophy, LMU Munich

In the course of the last century, different general frameworks for the foundations of mathematics have been investigated. The orthodox approach to foundations interprets mathematics in the universe of sets. More recently, however, there have been other developments that call into question the whole method of set theory as a foundational discipline. Category-theoretic methods that focus on structural relationships and structure-preserving mappings between mathematical objects, rather than on the objects themselves, have been in play since the early 1960s. But in the last few years they have found clarification and expression through the development of homotopy type theory. This represents a fascinating development in the philosophy of mathematics, where category-theoretic structural methods are combined with type theory to produce a foundation that accounts for the structural aspects of mathematical practice. We are now at a point where the notion of mathematical structure can be elucidated more clearly and its role in the foundations of mathematics can be explored more fruitfully.

The main objective of the conference is to reevaluate the different perspectives on mathematical structuralism in the foundations of mathematics and in mathematical practice. To do this, the conference will explore the following research questions: Does mathematical structuralism offer a philosophically viable foundation for modern mathematics? What role do key notions such as structural abstraction, invariance, dependence, or structural identity play in the different theories of structuralism? To what degree does mathematical structuralism as a philosophical position describe actual mathematical practice? Does category theory or homotopy type theory provide a fully structural account for mathematics?

Confirmed Speakers:

  • Prof. Steve Awodey (Carnegie Mellon University)
  • Dr. Jessica Carter (University of Southern Denmark)
  • Prof. Gerhard Heinzmann (Université de Lorraine)
  • Prof. Geoffrey Hellman (University of Minnesota)
  • Prof. James Ladyman (University of Bristol)
  • Prof. Elaine Landry (UC Davis)
  • Prof. Hannes Leitgeb (LMU Munich)
  • Dr. Mary Leng (University of York)
  • Prof. Øystein Linnebo (University of Oslo)
  • Prof. Erich Reck (UC Riverside)

Call for Abstracts:

We invite the submission of abstracts on topics related to mathematical structuralism for presentation at the conference. Abstracts should include a title, a brief abstract (up to 100 words), and a full abstract (up to 1000 words), blinded for peer review. Authors should send their abstracts (in pdf format), together with their name, institutional affiliation and current position to We will select up to five submissions for presentation at the conference. The conference language is English.

Dates and Deadlines:

  • Submission deadline: 30 June, 2016
  • Notification of acceptance: 31 July, 2016
  • Registration deadline: 1 October, 2016
  • Conference: 12 - 14 October, 2016

by david at May 19, 2016 12:07 PM

astrobites - astro-ph reader's digest

Conserving Water on the TRAPPIST-1 planets

Title: Water Loss from Earth-sized planets in the Habitable Zones of Ultracool Dwarfs: Implications for the planets of TRAPPIST-1
Authors: Emeline Bolmont, Franck Selsis, James Owen, Ignasi Ribas, Sean Raymond, Jérémy Leconte, Michaël Gillon
First Author’s Institution: University of Namur
Status: Submitted to MNRAS

Have you ever wanted to visit a planet in another star system? While most of the known Kepler exoplanets are too far away to vacation at, communicate with, or observe atmospherically, a team led by Michaël Gillon just discovered 3 transiting Earth-sized planets (TRAPPIST-1b, c, and d) that are likely in the habitable zone of a very low mass M dwarf star just 40 light years away! That is only 15 light years further than the last group of aliens that contacted us.

This prompts Emeline Bolmont et al., the authors of today's featured paper, to investigate the likelihood that these planets – and other similar planets that orbit ultracool dwarfs – have any liquid water. Even though these planets are in the habitable zone today, they were not always. Since ultracool dwarfs cool down drastically early in their lives, the current locations of the TRAPPIST-1 planets were originally too hot for liquid water, meaning any water on these planets would have been in gas form in the atmosphere, where it would have been prone to escape into space.

Bolmont et al. want to know: Is this type of planet capable of retaining its supply of water vapor for long enough to reach the period of time when the planet is in the habitable zone and its water can condense into the liquid form that is responsible for life on Earth?

Ultracool isn’t cool enough!

While most stars spend the vast majority of their lives on the main sequence, where they get hotter and radiate more energy as they grow older, ultracool dwarfs spend a significant portion of their lives getting cooler and cooler. The first type of ultracool dwarf – late M dwarfs like TRAPPIST-1 – has so little mass (about 0.08 solar masses) that these stars can take up to 100 million years just to reach the main sequence, compared to 10 million years for a more massive star like the Sun. Meanwhile, the second type of ultracool dwarf – brown dwarfs – has so little mass (0.01 – 0.08 solar masses) that these objects never even reach the main sequence and, technically, never become stars.

When stars are not on the main sequence – where they fuse hydrogen – they lack a steady energy source and emit less and less radiation over time. As this happens, the region around the star where a planet can harbor liquid water and be considered habitable moves inward toward the star.

For brown dwarfs, this cooling slows down enough that after about 30 million years, planets at 0.01 AU will spend about 100 million years in their system's habitable zone. For late M dwarfs, meanwhile, the cooling stops when the star begins to fuse hydrogen, and planets at 0.01 AU stay in their habitable zone for much longer than the lifetime of our solar system. However, a planet at this distance would hardly be considered habitable if it had already lost too much of its water vapor before entering the habitable zone.
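The inward sweep of the habitable zone follows from simple flux scaling: a planet receives Earth-like insolation at \(d \propto \sqrt{L}\). A minimal sketch (the luminosities below are illustrative placeholders, not the paper's evolutionary tracks):

```python
import math

def habitable_zone_distance(luminosity_solar):
    """Distance (in AU) at which a planet receives Earth's insolation,
    for a star of the given luminosity (in solar units): d = sqrt(L)."""
    return math.sqrt(luminosity_solar)

# As a contracting ultracool dwarf dims by a factor of 100,
# the habitable zone sweeps inward by a factor of 10.
for L in (1e-2, 1e-3, 1e-4):
    print(f"L = {L:.0e} L_sun -> HZ around {habitable_zone_distance(L):.3f} AU")
```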


Habitable Zones around Ultracool Dwarfs over time. A planet at 0.01 AU around a late M dwarf enters its habitable zone (upper blue region) after about 200 Myr. A planet at 0.01 AU around a 0.01 solar mass brown dwarf enters its habitable zone (lower blue region) after about 3 Myr. The red region is too close to the star to have any surviving planets.

How to Lose Water Vapor:

(1) Too Much XUV Radiation and (2) Letting Hydrogen Abandon Oxygen

For an Earth-sized planet in an ultracool dwarf system, radiation from the ultracool dwarf is the main source of energy driving the planet to lose some of its atmosphere. Specifically, the X-ray and ultraviolet parts of the ultracool dwarf's spectrum (XUV, for short) are the only parts energetic enough to cause the hydrogen and oxygen needed for water to escape. Fortunately, ultracool dwarfs are so cool that they hardly emit any radiation in the XUV (just 10 millionths of their total luminosity, according to recent observational measurements). Even though the TRAPPIST-1 planets are 20 to 100 times closer to their star than Earth is to the Sun, they receive no more than 4 times the amount of XUV radiation that the Earth does, which is promising for the planets' prospects of retaining water.
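The flux comparison can be checked with inverse-square bookkeeping; the numbers below are rough, illustrative values (TRAPPIST-1's bolometric luminosity is of order \(5\times 10^{-4}\) solar, and the orbital distances are approximate), not the paper's detailed spectra:

```python
def flux_relative_to_earth(l_star_solar, d_au):
    """Bolometric flux at distance d_au (in AU) from a star of luminosity
    l_star_solar (in solar units), in units of Earth's insolation."""
    return l_star_solar / d_au**2

# Being 20-100x closer than Earth boosts the bolometric flux, but since
# only ~1e-5 of the dwarf's luminosity comes out as XUV, the XUV dose
# stays within a few times what Earth receives.
for d in (0.011, 0.015, 0.022):  # roughly the TRAPPIST-1b/c/d orbits
    print(f"d = {d} AU -> {flux_relative_to_earth(5e-4, d):.1f}x Earth insolation")
```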

The second factor affecting water loss is the ratio of escaping hydrogen to escaping oxygen. Ideally, hydrogen — which is freed when incident XUV radiation photo-dissociates water — would escape at a 2:1 ratio to oxygen, so that the same 2:1 ratio remains behind and can easily recombine into water later on. However, Bolmont et al. calculate that with this amount of XUV radiation, a slightly less favorable 4:1 ratio is more realistic (since hydrogen is much lighter than oxygen). They then use both the 2:1 and 4:1 ratios to calculate how many "Earth Oceans" of water vapor are lost from the atmospheres of 0.1, 1.0, and 5.0 Earth-mass planets around ultracool dwarfs of different masses.
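The stoichiometric bookkeeping behind the 2:1 and 4:1 ratios can be sketched with a toy atom count (a hypothetical helper, not the authors' energy-limited escape model):

```python
def water_budget(n_water, h_escaped, loss_ratio):
    """Toy accounting in 'Earth Ocean' units. Start with n_water oceans
    (2 H and 1 O per molecule). h_escaped hydrogen escapes, dragging
    h_escaped / loss_ratio oxygen along with it. Returns (water that can
    still recombine, stranded oxygen left without hydrogen partners)."""
    h_left = 2.0 * n_water - h_escaped
    o_left = n_water - h_escaped / loss_ratio
    water = max(0.0, min(h_left / 2.0, o_left))
    return water, max(0.0, o_left - water)

# At 2:1 the leftovers keep water stoichiometry; at 4:1 hydrogen runs
# out first and abiotic oxygen piles up in the atmosphere.
print(water_budget(1.0, 1.0, 2))  # (0.5, 0.0)
print(water_budget(1.0, 1.0, 4))  # (0.5, 0.25)
```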


Water loss for different mass planets (circles, squares, and triangles) around different mass brown dwarfs. The left panel uses a 2:1 H:O loss ratio. The right panel uses a less favorable 4:1 ratio. The red points assume the XUV radiation is 10 millionths of the total. The blue points assume a smaller constant value. As expected, lower mass planets around higher mass brown dwarfs lose more water.

Can any of the TRAPPIST-1 planets have water?

Bolmont et al. find that the two innermost planets – TRAPPIST-1b and 1c – likely would lose more than an “Earth Ocean” of water vapor in their early lives, making it unlikely for them to have any liquid water left today. On the other hand, TRAPPIST-1d is much more likely to retain a significant fraction of its water vapor, making it the best candidate of the three to have any water and an excellent target in the search for bio-signatures outside of our solar system.


The amount of water loss (in Earth Oceans) on each of the 3 planets over time. If a planet started with only 1 Earth Ocean of water, then once it loses 1 Earth Ocean it has no water left. The left dashed line at about 40 Myr is the time when TRAPPIST-1d enters the habitable zone. The right two dashed lines at 400 and 500 Myr are age estimates for the system. At its current age, only TRAPPIST-1d has lost less than 1 Earth Ocean of water.

The authors caution (positively!) that they are probably overestimating the amount of water loss for these types of planets. If the planets started out with more than an Earth Ocean of water, they could still retain some even after the large loss. Additionally, the amount of XUV radiation emitted by ultracool dwarfs is poorly constrained, and its undetectability in most systems suggests it is even lower than the measured values used in this study. Both of these factors, along with several others, would only improve the chances that these planets have water.

Bolmont et al. hope that the soon-to-be-launched James Webb Space Telescope (JWST) will be able to measure the atmospheric composition of the TRAPPIST-1 planets. They can then use these measurements to see if they are underestimating or overestimating the amount of water vapor in the atmosphere at the present time. If JWST can support these models, it would be a great sign for future studies of their habitability.

Featured Image Credit: ESO / M. Kornmesser

by Michael Hammer at May 19, 2016 10:13 AM

May 18, 2016

astrobites - astro-ph reader's digest

The gruntwork behind Kepler’s new batch of exoplanets

Article: False positive probabilities for all Kepler Objects of Interest: 1284 newly validated planets and 428 likely false positives

Authors: Timothy D. Morton, Stephen T. Bryson, Jeffrey L. Coughlin, Jason F. Rowe, Ganesh Ravichandran, Erik A. Petigura, Michael R. Haas, Natalie M. Batalha

First author’s institution: Department of Astrophysical Sciences, Princeton University

Status: Published in The Astrophysical Journal


Current tally of the nature of Kepler Objects of Interest (KOIs), as told by the results from the code vespa. Notice that almost half of them are false positives. FPP = false positive probability.

You probably already read or heard about it: the Kepler mission recently announced the discovery of 1284 new exoplanets, more than doubling its number of confirmed planets outside the Solar System. This is no small feat, and in fact, Kepler’s productivity poses a serious problem: because it discovers so many candidates, it’s difficult to perform follow-up observations on all targets to confirm the nature of the transits. This is where we pull ourselves up by the bootstraps and probabilistic validation comes into play.

Making a strong case for Kepler

Kepler looks for exoplanets using the transit method, which relies on these pesky objects crossing the line of sight between the Earth and the host star, causing a dip in the star's brightness. The problem is that not every dip is caused by a planet, so these signals can at first be considered only candidates. Normally, when someone discovers a transit candidate, other astronomers have to perform radial velocity follow-up observations to confirm or bust the existence of the exoplanet. But these observations are very expensive in telescope time, and generally require large telescopes because Kepler usually observes only dim stars. To circumvent this problem, the main approach has been to demonstrate that all other conceivable scenarios are much less likely than the planet-transit explanation, i.e., probabilistic validation.
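The geometry behind the dip is simple: the fractional loss of light equals the ratio of the disk areas, which is also why small M dwarfs are such attractive transit targets. A minimal sketch (limb darkening ignored; 109 Earth radii is the familiar solar radius):

```python
def transit_depth(r_planet, r_star):
    """Fractional brightness dip during a central transit: (Rp / Rs)^2."""
    return (r_planet / r_star) ** 2

# An Earth-sized planet in front of a Sun-like star (R_sun ~ 109 R_earth)
# versus the same planet in front of a late M dwarf (~0.12 R_sun).
print(f"Sun-like host: {transit_depth(1.0, 109.0):.1e}")
print(f"Late M dwarf:  {transit_depth(1.0, 109.0 * 0.12):.1e}")
```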

In the past, astronomers used to confirm Kepler transits with cumbersome, computing-heavy probabilistic methods, which were not optimized for large batch processing of many candidates. In order to address this limitation, the astronomer Dr. Timothy Morton, first author of today’s paper, wrote the open source Python module vespa (wasp in Italian and Portuguese).

The way vespa works is by assigning false positive probabilities (FPP) to the different hypotheses that could explain a transit-like signal, using fully automated steps. The higher the FPP, the less likely the signal is to be a genuine planetary transit. vespa's trick is that it creates realistic populations of astrophysical false positives and compares them to the transit signal, allowing the FPPs to be inferred in an objective fashion. These populations consist of various "simulations" of transits with different educated guesses for the parameters of the signal's shape and for the physical parameters of the host star.
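Schematically, probabilistic validation is Bayesian model comparison: each scenario gets a prior and a likelihood of reproducing the observed signal, and the FPP is the posterior weight of all non-planet scenarios. A hedged sketch with made-up numbers (this is not vespa's actual API or its priors):

```python
def false_positive_probability(scenarios):
    """scenarios maps name -> (prior, likelihood of the observed signal).
    Returns P(any non-planet scenario | signal)."""
    evidence = {name: prior * like for name, (prior, like) in scenarios.items()}
    total = sum(evidence.values())
    return 1.0 - evidence["planet"] / total

# Hypothetical candidate: planets are a priori rare, but the planet model
# fits the transit shape far better than any eclipsing-binary scenario.
fpp = false_positive_probability({
    "planet":            (0.01, 0.9),
    "eclipsing_binary":  (0.10, 2e-4),
    "background_binary": (0.89, 5e-5),
})
print(f"FPP = {fpp:.4f}")  # below the 1% validation threshold
```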

Innocent until proven guilty of transit

The authors of today's paper applied vespa to all 7470 Kepler Objects of Interest (KOIs) from quarters 1 to 17 in the NExScI database, and successfully calculated FPPs for 7056 of them, of which 2857 have reliable FPPs. In the end, the authors adopt, following precedent, a planet-validation threshold of a false positive probability below 1%. They find that 1935 KOIs have FPPs below this threshold, which means they are validated at the 99% level. Of these, 1284 are new validations, while the remaining ones were already confirmed (see Fig. 1 below). Moreover, 9 of the new validations are planets that may be in the optimistic habitable zones of their host stars (see Fig. 2 for the radii and orbital periods of these planets). The authors also identify 428 KOIs that were previously marked as candidates but are likely false positives, according to vespa's calculations.


Figure 1. The false positive probabilities (FPP) of all candidate or confirmed KOIs with reliable vespa results, as a function of planet radius. All those below 10^(-2) were validated by vespa, while those above are likely false positives. The red circles are median FPP values in equal-sized bins.

Figure 2 (adapted from original). Most of the exoplanets we detected with Kepler are in configurations that we do not see in our own Solar System. For reference, see the approximate position of the Earth in this plot.

The court is not adjourned yet

The main limitation of this method is that it is difficult to "validate the validations", because of the large amount of effort required for follow-up observations. But the ones that were performed by Spitzer (50 KOIs) and by radial velocity monitoring (129 KOIs) do support vespa's results.

The authors also point out that more detailed studies of small subsamples of KOIs would help us better understand the false positive probabilities of these candidates; indeed, one of the purposes of today's paper is exactly to kickstart the astronomical community into such studies. And, as always, we are looking to the future: the missions TESS and PLATO will definitely produce many transit candidates, so vespa is a huge step towards more effective data analyses.

by Leonardo dos Santos at May 18, 2016 07:11 PM

CERN Bulletin

Croquet club
The CERN Croquet season started Saturday 7 May with the annual opening tournament: a total of 14 very happy players in the spring sunshine. It was a lovely day in all senses – friendly competition, a lot of laughter and catching up with one another. Players are divided into PROs (low handicap) and AMs (high handicap), and all matches are played as doubles. The pairings are changed during the day and the individual points go towards determining the winner. Congratulations to Ian Sexton for winning the Pros and to Beryl Allardyce, who won the Ams. Many of the games were very close and Ian seemed to have some good challenges in his block!

Overall results:
Pros: 1st - Ian, 2nd - Brian, 3rd - Angelina, 4th - Jean
Ams: 1st - Beryl, 2nd - Frank, 3rd - Peter (+Margaret), 4th - Roberta (+Jenny)

Special thanks to the manager Danny Davids for making this tournament such a smooth and well-run affair. The CERN Croquet Club holds club tournaments and hosts Swiss Opens, Swiss Championships and international matches during the year. Anyone (beginner or confirmed player) interested in playing this intriguing game please contact either: Ian Sexton, , Norman Eatough, or Dave Underhill,

by Croquet club at May 18, 2016 11:47 AM

Lubos Motl - string vacua and pheno

Weak gravity conjecture linked to many fields of maths, physics: an essay
Ben Heidenreich, Matthew Reece, and Tom Rudelius (Harvard) have won 5th place in the 2016 Gravity Research Foundation Essay Contest (I will avoid rating this kind of essay contest in general):
Axion Experiments to Algebraic Geometry: Testing Quantum Gravity via the Weak Gravity Conjecture
They discuss a refinement of our conjecture that for any type of "charge" similar to electromagnetism, there must always exist sources for which the non-gravitational force donalds (i.e., trumps) the gravitational one.

The essay shows that the inequality has implications for inflation (naively excluding a long enough inflation and maybe forcing one to talk about specific types of inflation), for AdS/CFT (charged operators with low enough dimensions should exist), and for pure mathematics (because the inequality should hold for compactifications on complicated enough manifolds, and such an inequality therefore sometimes turns into a nontrivial geometric theorem about those).

They start with my #1 favorite motivation for the weak gravity conjecture – the absence of global Lie symmetries in quantum gravity. It's something I had been emphasizing long before our paper, though I vaguely remember that it wasn't new for some of my co-authors, either.

You know, the important pre-WGC lore – which I may have known from my adviser Tom Banks since the late 1990s – was that there are no global continuous symmetries in a consistent quantum theory of gravity. In general relativity, even translations are made "local" (diffeomorphism group) and things that are not "local" become unnatural.

However, gauge theories with tiny couplings \(g\to 0\) may seemingly emulate global symmetries as accurately as you want. That should better be impossible as well. If something (global symmetries) is forbidden, physical situations or vacua that are "infinitesimally close" to the forbidden thing should better be banned as well, right? Otherwise the ban would be operationally vacuous. There should exist a finite value of some quantity that tells you how far from the forbidden point you have to be.

And that's what the weak gravity conjecture does (and many types of evidence – from problems with extremal black hole remnants to lots of stringy examples – support the conjecture, at least in "some" form). A light charged particle with
\[
m \leq \sqrt{2}\, e\, q\, M_{\rm Planck}
\]
must exist. Heidenreich et al. promote their belief in a stronger "detailed" version of the weak gravity conjecture – one that we had considered but against which we ran into some counter-arguments. They call it the lattice weak gravity conjecture (LWGC): for every allowed vector \(\vec Q\) in the lattice of charges, there must exist an object that is lighter than 0.0001% of the mass of a black hole with the charge one million times \(\vec Q\).

(I have inserted the factor of one million and the millionth to make sure that you omit the corrections from the smallness of the black hole – you work with the semiclassical estimate of the mass.)
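In Planck units the basic inequality is a one-line check; here is a numerical sketch under the 4D convention used above (the \(\sqrt{2}\) and other normalization factors differ between papers, so treat them as convention-dependent):

```python
import math

def satisfies_wgc(mass, charge, coupling, m_planck=1.0):
    """Weak gravity bound: m <= sqrt(2) * e * q * M_Planck,
    with masses in units of m_planck."""
    return mass <= math.sqrt(2.0) * coupling * charge * m_planck

# The electron satisfies the bound by roughly 21 orders of magnitude:
# m_e / M_Planck ~ 4e-23, while its coupling times charge is ~0.3.
print(satisfies_wgc(mass=4.2e-23, charge=1.0, coupling=0.3))  # True
# A Planck-mass particle with a nearly decoupled gauge force violates it.
print(satisfies_wgc(mass=1.0, charge=1.0, coupling=1e-25))    # False
```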

This sounds too strong. I thought that for larger charges, the statement actually isn't true – only several "elementary", low-charge light particles are required by the WGC, I thought. If they replaced the word "particle" by "state" (which may be a collection of many separate particles whose charges and masses add up), I think that my doubts would go away.

It's difficult to decide whether the new light states should exist for every \(\vec Q\), every direction in the charge space, almost every direction, every direction in a basis of directions, every direction in a (near) orthogonal basis in some metric, or something else. There may exist some "very natural" specific version of the inequality that would be as provable as e.g. the Heisenberg uncertainty principle but I don't think that the "best, most accurate yet strong one" has been pinpointed yet.

The analogy with the Heisenberg uncertainty principle is meant to be an exaggeration. At least I still believe that the WGC is vastly less fundamental than the Heisenberg uncertainty principle. The uncertainty principle may be connected to many – in some sense "all" – situations in physics. WGC has been "connected" to many things. But I still don't see in what sense it could be considered a principle that "changes the rules of the game" in a way that is at least qualitatively analogous to the change of physics implied by the Heisenberg uncertainty principle.

There may be similarities between the inequalities but there are also differences. One of them is that the Heisenberg uncertainty principle strictly disagreed with the class of theories that had been considered before Heisenberg and pals revolutionized physics. On the other hand, WGC tells you to consider a subset of the theories of gravity-coupled-to-matter that were previously allowed.

The amount of activity dedicated to WGC is greater than what I used to assume a decade ago. (And I surely believe that e.g. matrix string theory is much more fundamentally important than WGC, for example.) On the other hand, I can imagine that this line of research on WGC will turn into something that will be self-evidently fundamental in its implications.

As we said at the beginning, WGC talks about some "minimal difference between two situations" – too decoupled new forces (with too weak couplings and/or too heavy charged particles) are forbidden etc. So this WGC-dictated "minimum distance" could be a consequence of some new kind of "orthogonality" that is indeed analogous to (if not a special case of) the orthogonality of mutually exclusive states in quantum mechanics – which is an assumption that may be used to derive the uncertainty principle.

By saying that the lightest charged particles have to be light enough, the WGC also quantifies the intuition that all the "engines" responsible for a force etc. can never be squeezed into a too small region of space. You need the rather long distances – the Compton wavelength of the light enough particle – for this force to arise. That's a way to say that WGC may be said to be "somewhat similar" to the holographic principle, too. All these things suggest that the information can't hide in too small volumes, or in too inaccessible physical phenomena.

If something seems to be nontrivially correct – it doesn't seem to be quite a coincidence that gravity is the weakest force – the reasons had better be understood well. So people's thinking about it is clearly desirable. On the other hand, no one is guaranteed that it will lead to a full-fledged revolution. If WGC really implied that no model of inflation may exist, I would personally not believe such a conclusion, anyway (except if someone gave me a truly convincing full definition of quantum gravity with all the proofs; or at least some viable alternative to inflation). Maybe it's a mistake of mine but I still happen to think that the "case for inflation" is still much stronger than the "case for any particular strong version of WGC applied to instantons". (Whether the inequality for 1-forms may really be applied to 0-forms seems disputable to me, too. Note that the energy-time "uncertainty principle" must be interpreted differently, if it is possible at all, than the momentum-position uncertainty principle, and one must often be very careful when generalizing things to "related situations".)

The subtitle "Testing Quantum Gravity via the Weak Gravity Conjecture" must be provocative for assorted Šmoits. Not only the essay dares to talk about the testing of quantum gravity. It's worse than that: quantum gravity and string theory are being tested according to a conjecture co-authored by a guy who insists that Šmoits and their apologists are just stinky piles of feces. ;-)

by Luboš Motl ( at May 18, 2016 08:17 AM

May 17, 2016

Clifford V. Johnson - Asymptotia

Excited!
Actually, I’m super-excited…! There is a New Hope coming. I’m daring to dream… ok just a little bit. (Sorry to be cryptic…More later.) -cvj

The post Excited! appeared first on Asymptotia.

by Clifford at May 17, 2016 08:47 PM

CERN Bulletin

Elections to the Mutual Aid Fund

Every two years, according to Article 6 of the Regulations of the Mutual Aid Fund, the Committee of the Mutual Aid Fund must renew one third of its membership. This year three members are outgoing. Of these three, two will stand again and one will not.


Candidates should be ready to give approximately two hours a month during working time to the Fund, whose aim is to assist colleagues in financial difficulties.

We invite CERN staff members who wish to stand for election as a member of the CERN Mutual Aid Fund to send in their application before 17 June 2016, by email to the Fund’s President, Connie Potter (

May 17, 2016 05:05 PM

astrobites - astro-ph reader's digest

Reminder: 2016 Reader Survey

Thanks to everyone who has already responded to our latest readership survey!

It’s important to us that we align our content with your interests, and reader surveys are your chance to make your voice heard. If you haven’t yet participated, please take a few minutes to do so now by clicking the link below.

Take Astrobites Reader Survey

— the Astrobites team

P.S. Don’t forget that we’re giving away free Astrobites t-shirts to randomly-drawn survey respondents!

by Astrobites at May 17, 2016 03:28 PM

Symmetrybreaking - Fermilab/SLAC

Why do objects feel solid?

The way you think about atoms may not be quite right.

A reader asks: "If atoms are mostly empty space, then why does anything feel solid?" James Beacham, a post-doctoral researcher with the ATLAS Experiment group of The Ohio State University, explains.

Video of bVrQw_Cdxyw

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

by Sarah Charley at May 17, 2016 01:00 PM

CERN Bulletin

Arts@CERN | ACCELERATE Austria | 19 May | IdeaSquare
Arts@CERN welcomes you to a talk by architects Sandra Manninger and Matias Del Campo, at IdeaSquare (Point 1) on May 19 at 6:00 p.m.   Sensible Bodies - architecture, data, and desire. Sandra and Matias are the winning architects for ACCELERATE Austria. Focusing on the notion of geometry, they are at CERN during the month of May as artists in residence. Their research highlights how to go beyond beautiful data to discover something that could be defined as voluptuous data. This coagulation of numbers, algorithms, procedures and programs uses the forces of thriving nature and, passing through the calculation of a multi-core processor, knits them with human desire. Read more. ACCELERATE Austria is supported by The Department of Arts of the Federal Chancellery of Austria. Thursday, May 19 at 6:00 p.m. at IdeaSquare.  See event on Indico.

May 17, 2016 08:35 AM

May 16, 2016

The n-Category Cafe

E8 as the Symmetries of a PDE

My friend Dennis The recently gave a new description of the Lie algebra of \(\mathrm{E}_8\) (as well as all the other complex simple Lie algebras, except \(\mathfrak{sl}(2,\mathbb{C})\)) as the symmetries of a system of partial differential equations. Even better, when he writes down his PDE explicitly, the exceptional Jordan algebra makes an appearance, as we will see.

This is a story with deep roots: it goes back to two very different models for the Lie algebra of \(\mathrm{G}_2\), one due to Cartan and one due to Engel, which were published back-to-back in 1893. Dennis figured out how these two results are connected, and then generalized the whole story to nearly every simple Lie algebra, including \(\mathrm{E}_8\).

Let’s begin with that model of \(\mathrm{G}_2\) due to Cartan: the Lie algebra \(\mathfrak{g}_2\) is formed by the infinitesimal symmetries of the system of PDE \[ u_{xx} = \frac{1}{3} (u_{yy})^3, \quad u_{xy} = \frac{1}{2} (u_{yy})^2 . \] What does it mean to be an infinitesimal symmetry of a PDE? To understand this, we need to see how PDE can be realized geometrically, using jet bundles.

A jet bundle over \(\mathbb{C}^2\) is a bundle whose sections are given by holomorphic functions \(u \colon \mathbb{C}^2 \to \mathbb{C}\) and their partials, up to some order. Since we have a 2nd order PDE, we need the 2nd jet bundle: \[ \begin{matrix} J^2(\mathbb{C}^2, \mathbb{C}) \\ \downarrow \\ \mathbb{C}^2 \end{matrix} \] This is actually the trivial bundle whose total space is \(\mathbb{C}^8\), but we label the coordinates suggestively: \[ J^2(\mathbb{C}^2, \mathbb{C}) = \left\{ (x,y,u,u_x,u_y, u_{xx}, u_{xy}, u_{yy}) \in \mathbb{C}^8 \right\} . \] The bundle projection just picks out \((x,y)\).

For the moment, \(u_x\), \(u_y\) and so on are just the names of some extra coordinates and have nothing to do with derivatives. To relate them, we choose some distinguished 1-forms on \(J^2\), called the contact 1-forms, spanned by holomorphic combinations of \[ \begin{array}{rcl} \theta_1 & = & d u - u_x \, d x - u_y \, d y, \\ \theta_2 & = & d u_x - u_{xx} \, d x - u_{xy} \, d y, \\ \theta_3 & = & d u_y - u_{xy} \, d x - u_{yy} \, d y . \end{array} \] These are chosen so that, if our suggestively named variables really were partials, these 1-forms would vanish.

For any holomorphic function \(u \colon \mathbb{C}^2 \to \mathbb{C}\) we get a section \(j^2 u\) of \(J^2\), called the prolongation of \(u\). It simply takes those variables that we named after the partial derivatives seriously, and gives us the actual partial derivatives of \(u\) in those slots: \[ (j^2 u)(x,y) = (x, y, u(x,y), u_x(x,y), u_y(x,y), u_{xx}(x,y), u_{xy}(x,y), u_{yy}(x,y)) . \] Conversely, an arbitrary section \(s\) of \(J^2\) is the prolongation of some \(u\) if and only if it annihilates the contact 1-forms. Since contact 1-forms are spanned by \(\theta_1\), \(\theta_2\) and \(\theta_3\), it suffices that: \[ s^\ast \theta_1 = 0, \quad s^\ast \theta_2 = 0, \quad s^\ast \theta_3 = 0 . \] Such sections are called holonomic. This correspondence between prolongations and holonomic sections is the key to thinking about jet bundles.
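The prolongation-versus-holonomic correspondence is easy to see numerically. The sketch below (plain Python over a real slice, with derivatives taken by central finite differences; the test function \(u\) is an arbitrary choice of mine) builds the prolongation of a function and checks that the pulled-back coefficients of \(\theta_1\) vanish for it, but not for a section whose \(u_x\) slot has been tampered with:

```python
import math

def u(x, y):
    # an arbitrary test function u: R^2 -> R (a real slice of the story above)
    return math.sin(x) * y**2 + x * y

def d(f, x, y, which, h=1e-5):
    # central finite difference in x or y
    if which == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def prolong(x, y):
    # the non-base slots (u_x, u_y, u_xx, u_xy, u_yy) of the 2-jet of u at (x, y)
    ux  = d(u, x, y, 'x')
    uy  = d(u, x, y, 'y')
    uxx = d(lambda a, b: d(u, a, b, 'x'), x, y, 'x')
    uxy = d(lambda a, b: d(u, a, b, 'x'), x, y, 'y')
    uyy = d(lambda a, b: d(u, a, b, 'y'), x, y, 'y')
    return ux, uy, uxx, uxy, uyy

def theta1_coeffs(section, x, y):
    # pullback of theta_1 = du - u_x dx - u_y dy along the section:
    # its dx coefficient is du/dx minus the u_x slot, similarly for dy
    jet = section(x, y)
    return (d(u, x, y, 'x') - jet[0], d(u, x, y, 'y') - jet[1])

x0, y0 = 0.7, -1.2
cx, cy = theta1_coeffs(prolong, x0, y0)
print(cx, cy)   # both ~ 0: the prolongation is holonomic

def bad_section(x, y):
    jet = list(prolong(x, y))
    jet[0] += 1.0          # tamper with the u_x slot
    return tuple(jet)

bx, by = theta1_coeffs(bad_section, x0, y0)
print(bx, by)   # the dx coefficient is now ~ -1: not holonomic
```

The same check with \(\theta_2\) and \(\theta_3\) would use the second-derivative slots.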

Our PDE \[ u_{xx} = \frac{1}{3} (u_{yy})^3, \quad u_{xy} = \frac{1}{2} (u_{yy})^2 \] carves out a submanifold \(S\) of \(J^2\). Solutions correspond to local holonomic sections that land in \(S\). In general, PDE give us submanifolds of jet spaces.
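As a quick sanity check on the PDE itself (an easy observation of mine, not something from Dennis’s paper), there is a family of quadratic solutions: for any constant \(c\), the function \(u = \frac{4c^3}{3}x^2 + 2c^2 x y + c y^2\) has \(u_{yy} = 2c\), \(u_{xx} = \frac{8c^3}{3} = \frac{1}{3}(u_{yy})^3\) and \(u_{xy} = 2c^2 = \frac{1}{2}(u_{yy})^2\), so its prolongation lands in \(S\). A short numerical verification, with second partials taken by central finite differences:

```python
def check_g2_pde(c, x=0.3, y=-0.8, h=1e-3):
    # u = (4c^3/3) x^2 + 2c^2 x y + c y^2, a candidate solution for each c
    u = lambda x, y: (4*c**3/3)*x**2 + 2*c**2*x*y + c*y**2
    # central second differences (exact for quadratics, up to rounding)
    uxx = (u(x+h, y) - 2*u(x, y) + u(x-h, y)) / h**2
    uyy = (u(x, y+h) - 2*u(x, y) + u(x, y-h)) / h**2
    uxy = (u(x+h, y+h) - u(x+h, y-h) - u(x-h, y+h) + u(x-h, y-h)) / (4*h**2)
    # residuals of the two equations of the G2 PDE
    return abs(uxx - uyy**3/3), abs(uxy - uyy**2/2)

for c in (-2.0, 0.5, 1.0, 3.0):
    r1, r2 = check_g2_pde(c)
    print(c, r1, r2)   # residuals ~ 0 for every c
```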

The external symmetries of our PDE are those diffeomorphisms of \(J^2\) that send contact 1-forms to contact 1-forms and send \(S\) to itself. The infinitesimal external symmetries are vector fields that preserve \(S\) and the contact 1-forms. There are also things called internal symmetries, but I won’t need them here.

So now we’re ready for:

Amazing theorem 1. The Lie algebra of infinitesimal external symmetries of our PDE is \(\mathfrak{g}_2\).

Like I said above, Dennis takes this amazing theorem of Cartan and connects it to an amazing theorem of Engel, and then generalizes the whole story to nearly all simple complex Lie algebras. Here’s Engel’s amazing theorem:

Amazing theorem 2. \(\mathfrak{g}_2\) is the Lie algebra of infinitesimal contact transformations on a 5-dimensional contact manifold preserving a field of twisted cubic varieties.

This theorem lies at the heart of the story, so let me explain what it’s saying. First, it requires us to become acquainted with contact geometry, the odd-dimensional cousin of symplectic geometry. A contact manifold \(M\) is a \((2n+1)\)-dimensional manifold with a contact distribution \(C\) on it. This is a smoothly-varying family of \(2n\)-dimensional subspaces \(C_m\) of each tangent space \(T_m M\), satisfying a certain nondegeneracy condition.

In Engel’s theorem, \(M\) is 5-dimensional, so each \(C_m\) is 4-dimensional. We can projectivize each \(C_m\) to get a 3-dimensional projective space \(\mathbb{P}(C_m)\) over each point. Our field of twisted cubic varieties is a curve in each of these projective spaces, the image of a cubic map: \[ \mathbb{C}\mathbb{P}^1 \to \mathbb{P}(C_m) . \] This gives us a curve \(\mathcal{V}_m\) in each \(\mathbb{P}(C_m)\), and taken together this is our field of twisted cubic varieties, \(\mathcal{V}\). Engel gave explicit formulas for a contact structure on \(\mathbb{C}^5\) with a twisted cubic field \(\mathcal{V}\) whose symmetries are \(\mathfrak{g}_2\), and you can find these formulas in Dennis’s paper.

How are these two theorems related? The secret is to go back to thinking about jet spaces, except this time, we’ll start with the 1st jet space: \[ J^1(\mathbb{C}^2, \mathbb{C}) = \left\{ (x, y, u, u_x, u_y) \in \mathbb{C}^5 \right\} . \] This comes equipped with a space of contact 1-forms, spanned by a single 1-form: \[ \theta = d u - u_x \, d x - u_y \, d y . \] And now we see where contact 1-forms get their name: this contact 1-form defines a contact structure on \(J^1\), given by \(C = \mathrm{ker}(\theta)\).

Many of you may know Darboux’s theorem in symplectic geometry, which says that any two symplectic manifolds of the same dimension look the same locally. In contact geometry, the analogue of Darboux’s theorem holds, and goes by the name of Pfaff’s theorem. By Pfaff’s theorem, there’s an open set in \(J^1\) which is contactomorphic to an open set in \(\mathbb{C}^5\) with Engel’s contact structure. And we can use this map to transfer our twisted cubic field \(\mathcal{V}\) to \(J^1\), or at least an open subset of it. This gives us a twisted cubic field on \(J^1\), one that continues to have \(\mathfrak{g}_2\) symmetry.

We are getting tantalizingly close to a PDE now. We have a jet space \(J^1\), with some structure on it. We just lack a submanifold of that jet space. Our twisted cubic field \(\mathcal{V}\) gives us a curve in each \(\mathbb{P}(C_m)\), not in \(J^1\) itself.

To these ingredients, add a bit of magic. Dennis found a natural construction that takes our twisted cubic field \(\mathcal{V}\) and gives us a submanifold of a space that, at least locally, looks like \(J^2(\mathbb{C}^2, \mathbb{C})\), and hence describes a PDE. This PDE is the \(\mathrm{G}_2\) PDE.

It works like this. Our contact 1-form \(\theta\) endows each \(C_m\) with a symplectic structure, \(d\theta_m\). Starting with our contact structure \(C\), this symplectic structure is only defined up to rescaling, because \(C\) determines \(\theta\) only up to rescaling. Nonetheless, it makes sense to look for subspaces of \(C_m\) that are Lagrangian: subspaces of maximal dimension on which \(d\theta_m\) vanishes. The space of all Lagrangian subspaces of \(C_m\) is called the Lagrangian-Grassmannian, \(\mathrm{LG}(C_m)\), and we can form a bundle \[ \begin{matrix} \mathrm{LG}(J^1) \\ \downarrow \\ J^1 \end{matrix} \] whose fiber over each point is \(\mathrm{LG}(C_m)\). It turns out \(\mathrm{LG}(J^1)\) is locally the same as \(J^2(\mathbb{C}^2, \mathbb{C})\), complete with the latter’s contact 1-forms.

Dennis’s construction takes \(\mathcal{V}\) and gives us a submanifold of \(\mathrm{LG}(J^1)\), as follows. Remember, each \(\mathcal{V}_m\) is a curve in \(\mathbb{P}(C_m)\). The tangent space to a point \(p \in \mathcal{V}_m\) is thus a line in the projective space \(\mathbb{P}(C_m)\), and this corresponds to a 2-dimensional subspace of the 4-dimensional contact space \(C_m\). This subspace turns out to be Lagrangian! Thus, points \(p\) of \(\mathcal{V}_m\) give us points of \(\mathrm{LG}(C_m)\), and letting \(m\) and \(p\) vary, we get a submanifold of \(\mathrm{LG}(J^1)\). Locally, this is our PDE.
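The Lagrangian property can be seen concretely for the model twisted cubic \(v(s) = (1, s, s^2, s^3)\), an affine chart of the cubic Veronese curve. The symplectic form below, \(\omega = e^0 \wedge e^3 - 3\, e^1 \wedge e^2\), is a normalization of my own choosing adapted to this curve (not a formula from the paper); with it, the pairing of \(v(s)\) with \(v'(s)\) vanishes, so the tangent 2-plane is isotropic, hence Lagrangian:

```python
def omega(a, b):
    # symplectic form omega = e0^e3 - 3 e1^e2 on R^4, weights chosen to fit the cubic
    return (a[0]*b[3] - a[3]*b[0]) - 3*(a[1]*b[2] - a[2]*b[1])

def v(s):
    # affine chart of the twisted cubic (cubic Veronese curve)
    return (1.0, s, s**2, s**3)

def vprime(s):
    # tangent vector to the curve
    return (0.0, 1.0, 2*s, 3*s**2)

for s in (-1.7, 0.0, 0.4, 2.5):
    print(s, omega(v(s), vprime(s)))   # 0 for every s: the tangent plane is isotropic
```

Since a 2-plane in a 4-dimensional symplectic space is Lagrangian exactly when it is isotropic, the single vanishing pairing suffices.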

Dennis then generalizes this story to all simple Lie algebras besides \(\mathfrak{sl}(2,\mathbb{C})\). For simple Lie groups other than those in the \(A\) and \(C\) series, there is a homogeneous space with a natural contact structure that has a field of twisted varieties living on it, called the field of “sub-adjoint varieties”. The same construction that worked for \(\mathrm{G}_2\) now gives PDE for these. The \(A\) and \(C\) cases take more care.

Better yet, Dennis builds on work of Landsberg and Manivel to get explicit descriptions of all these PDE in terms of cubic forms on Jordan algebras! Landsberg and Manivel describe the field of sub-adjoint varieties using these cubic forms. For \(\mathrm{G}_2\), the Jordan algebra in question is the complex numbers \(\mathbb{C}\) with the cubic form \[ \mathfrak{C}(t) = \frac{t^3}{3} . \]

Given any Jordan algebra \(W\) with a cubic form \(\mathfrak{C}\) on it, first polarize \(\mathfrak{C}\): \[ \mathfrak{C}(t) = \mathfrak{C}_{abc} t^a t^b t^c , \] and then cook up a PDE for a function \(u \colon \mathbb{C} \oplus W \to \mathbb{C}\) as follows: \[ u_{00} = \mathfrak{C}_{abc} t^a t^b t^c, \quad u_{0a} = \frac{3}{2} \mathfrak{C}_{abc} t^b t^c, \quad u_{ab} = 3 \mathfrak{C}_{abc} t^c , \] where \(t \in W\), and I’ve used the indices \(a\), \(b\) and \(c\) for coordinates in \(W\), and 0 for the coordinate in \(\mathbb{C}\). For \(\mathrm{G}_2\), this gives us the PDE \[ u_{00} = \frac{t^3}{3}, \quad u_{01} = \frac{t^2}{2}, \quad u_{11} = t , \] which is clearly equivalent to the PDE we wrote down earlier.
Note that this PDE is determined entirely by the cubic form \(\mathfrak{C}\): the product on our Jordan algebra plays no role.
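The recipe is mechanical enough to automate. Here is a small sketch of mine for the one-dimensional case \(W = \mathbb{C}\) (for the 27-dimensional exceptional Jordan algebra one would do the same with 27 \(t\)’s and the polarization of the determinant): since \(\mathfrak{C}(t) = t^3/3 = \mathfrak{C}_{111} t^3\), the single polarized coefficient is \(\mathfrak{C}_{111} = 1/3\), and the three formulas above should reproduce \(u_{00} = t^3/3\), \(u_{01} = t^2/2\), \(u_{11} = t\):

```python
from fractions import Fraction

# The cubic form C(t) = t^3/3 on the 1-dimensional Jordan algebra W = C.
# Polarizing, C(t) = C_111 t^3, so the single symmetric coefficient is 1/3.
C111 = Fraction(1, 3)

def pde_surface(t):
    # the parametrized surface in 2-jet space produced by the recipe above
    u00 = C111 * t**3
    u01 = Fraction(3, 2) * C111 * t**2
    u11 = 3 * C111 * t
    return u00, u01, u11

for t in (Fraction(-2), Fraction(1, 2), Fraction(5)):
    u00, u01, u11 = pde_surface(t)
    # matches u00 = t^3/3, u01 = t^2/2, u11 = t, i.e. the G2 PDE
    assert (u00, u01, u11) == (t**3 / 3, t**2 / 2, t)
print("G2 PDE recovered")
```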

Now we’re ready for Dennis’s amazing theorem.

Amazing theorem 3. Let \(W = \mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\), the exceptional Jordan algebra, and let \(\mathfrak{C}\) be the cubic form on \(W\) given by the determinant. Then the following PDE on \(\mathbb{C} \oplus W\) \[ u_{00} = \mathfrak{C}_{abc} t^a t^b t^c, \quad u_{0a} = \frac{3}{2} \mathfrak{C}_{abc} t^b t^c, \quad u_{ab} = 3 \mathfrak{C}_{abc} t^c \] has external symmetry algebra \(\mathfrak{e}_8\).


Thanks to Dennis The for explaining his work to me, and for his comments on drafts of this post.

by huerta ( at May 16, 2016 08:59 PM

Lubos Motl - string vacua and pheno

ATLAS: an amusing 2.1-sigma gluino-muon-multijet island excess
ATLAS released an interesting preprint
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at \(\sqrt{s} = 13\TeV\) with the ATLAS detector
in which gluino pairs were searched for in final states with MET, many jets, and a single lepton. There were six signal regions. The last, sixth one, showed a mild but interesting excess. \(2.5\pm 0.7\) events were expected with a muon (thanks, Bill), but eight events were observed.
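For orientation, the size of such an excess can be estimated with a toy calculation (my own back-of-the-envelope, not ATLAS's full profile-likelihood procedure): the Poisson probability of seeing 8 or more events when \(2.5\) are expected, and the same probability after smearing the background over its \(\pm 0.7\) uncertainty.

```python
import math

def poisson_tail(n, mu):
    # P(N >= n) for N ~ Poisson(mu)
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))

def z_from_p(p):
    # one-sided Gaussian significance: solve 1 - Phi(z) = p by bisection
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_naive = poisson_tail(8, 2.5)          # background fixed at exactly 2.5

# smear the background over a Gaussian of width 0.7 (truncated at zero)
bs = [2.5 + 0.7 * (k / 100.0) for k in range(-350, 351)]
ws = [math.exp(-0.5 * ((b - 2.5) / 0.7) ** 2) for b in bs if b > 0]
ps = [poisson_tail(8, b) for b in bs if b > 0]
p_smeared = sum(w * p for w, p in zip(ws, ps)) / sum(ws)

print(p_naive, z_from_p(p_naive))       # ~2.6 sigma if the background were exact
print(p_smeared, z_from_p(p_smeared))   # the uncertainty dilutes the significance
```

The dilution from the background uncertainty is what pulls a naive \(2.6\sigma\) down toward the \(2.1\sigma\) scale.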

The excess looks intriguing when it's visualized on Figure 6.

The plot has two parts:

The observed thick red exclusion line is generally similar to the dashed black expected exclusion line. But on the upper picture, there is a clear "downward tooth" of the red line around the gluino mass of \(1200-1300\GeV\) and the lightest neutralino mass around \(400-600\GeV\), potentially properties of particles that may exist according to this hint.

On the second diagram, the excess looks like an island with \(m_{\tilde g} \sim 1250\GeV\) and the ratio of two mass differences (lightest chargino minus first lightest neutralino OVER gluino minus lightest neutralino) equal to \(0.75\) or so. However, the plot isn't quite showing the neighborhood of the most interesting values indicated by the upper plot because in the lower plot, the lightest neutralino is assumed to weigh \(60\GeV\).

The island-like shape of the exclusion line on the lower picture is interesting, nevertheless. Note that this is what the exclusion lines look like when all the wrong values of the mass are excluded and the correct mass is discovered. In this sense, the lower picture could already be a sketch of a discovery paper.

At any rate, if you go through the LHC category or search for a gluino on this blog, I think you will agree that it's far from the first hint of a gluino close enough to \(1200\GeV\) and a lightest neutralino in the \(600\GeV\) category. I am extremely far from any form of certainty that the gluino has to be found near these masses but if you offer me 100-to-1 odds like Jester did, I will happily make the bet again (or increase the existing one, if you wish).

by Luboš Motl ( at May 16, 2016 05:09 PM

May 15, 2016

CERN Bulletin

Federico Antinori elected as the new ALICE Spokesperson

On 8 April 2016 the ALICE Collaboration Board elected Federico Antinori from INFN Padova (Italy) as the new ALICE Spokesperson.


During his three-year mandate, starting in January 2017, he will lead a collaboration of more than 1500 people from 154 physics institutes across the globe.

Antinori has been a member of the collaboration ever since it was created and he has already held many senior leadership positions. Currently he is the experiment’s Physics Coordinator and as such he has the responsibility to oversee the whole sector of physics analysis. During his mandate ALICE has produced many of its most prominent results. Before that he was the Coordinator of the Heavy Ion First Physics Task Force, charged with the analysis of the first Pb-Pb data samples. In 2007 and 2008 Federico served as ALICE Deputy Spokesperson. He was also the first ALICE Trigger Coordinator, having a central role in defining the experiment’s trigger menus from the first run in 2009 until the end of his mandate in 2011. He also played an important role in the commissioning of the experiment before the start of its operation.

Being entrusted by the Collaboration with its leadership makes Antinori feel honoured. “ALICE is a unique scientific instrument, built with years of dedication and labour of hundreds of colleagues. We have practically only begun to exploit its possibilities. As Spokesperson I can play a key role in making ALICE ever more efficient and successful and this is a truly exciting prospect for me.”

May 15, 2016 10:05 PM

ZapperZ - Physics and Physicists

Grandfather Paradox - Resolved?
This Minute Physics video claims to have "resolved" the infamous grandfather paradox. Well, OK, they don't actually say that, but they basically indicated why this might be a never-ending loop.

Still, let's think about it this way instead. During your grandfather's time, presumably, ALL the atoms or energy that will make you are already there; they are just not all together to form you. This only happens later on. But they are all there!

But here you come along from another time, popping into existence in your grandfather's time. Aren't you violating conservation of energy by adding MORE energy to the universe that is not accounted for? Now, unless there is a quid pro quo, where an equal amount of energy in your grandfather's time was siphoned to the future where you came from, this violation of conservation of energy is hard to explain away, especially if you invoke Noether's theorem.

I haven't come across a popular account of this issue.


by ZapperZ ( at May 15, 2016 01:58 PM

Geraint Lewis - Cosmic Horizons

How Far Can We Go? A long way, but not that far!
Obligatory "sorry it's been a long time since I posted" comment. Life, grants, students, etc. All of the usual excuses! But I plan (i.e. hope) to do more writing here in the future.

But what's the reason for today's post? Namely, this video posted on YouTube.
The conclusion is that humans are destined to explore the Local Group of galaxies, and that is it. And this video has received a bit of circulation on the inter-webs, promoted by a few sciency people.

The problem, however, is that it is wrong. The basic idea is that accelerating expansion due to the presence of dark energy means that the separation of objects will grow faster and faster, and so it will be a little like chasing after a bus: the distance between the two of you will continue to get bigger and bigger. This part is correct, and in the very distant future, there will be extremely isolated bunches of galaxies whose own gravitational pull overcomes the cosmic expansion. But the rest, namely just how much we can explore, is wrong.

Why? Because they seem to have forgotten something key. Once we are out there traveling in the "expanding universe", the expansion works to our advantage, increasing the distance not only between us and where we want to get to, but also between us and home. We effectively "ride" the expansion.

So, how far could we get? Well, time to call (again - sorry) on Tamara Davis's excellent cosmological work, in particular this paper on misconceptions about the Big Bang. I've spoken about this paper many times (and do read it, it is quite excellent), but for this post what we need to look at is the "conformal" picture of our universe. I don't have time to go into the details here, but the key thing is that you manipulate space and time so that light rays travel at 45 degrees in the picture. Here's our universe.

The entire (infinite!) history of the universe is in this picture, mapped onto "conformal time". We're in the middle, on the line marked "now". If we extend our past light cone into the future, we can see the volume of the universe accessible to us, given the continued accelerating expansion. We can see that it encompasses objects that are currently not far from 20 billion light years away from us. This means that light rays fired out today will get this far, much, much larger than the Local Group of galaxies.
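As a rough sanity check on those numbers, you can integrate the distance a light ray emitted today can ever reach (the cosmic event horizon). Here is a minimal numerical sketch, assuming a flat LambdaCDM universe with illustrative parameters of my own choosing (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7), not values taken from the post or the papers it discusses:

```python
# Comoving distance a light ray emitted today can ever reach:
#     D = (c/H0) * integral_1^infinity da / (a^2 * E(a)),
# where E(a) = sqrt(Omega_m / a^3 + Omega_Lambda) for flat LambdaCDM.
# Substituting x = 1/a turns this into a finite integral over [0, 1].
C_KM_S = 299792.458     # speed of light in km/s
H0 = 70.0               # Hubble constant in km/s/Mpc (assumed value)
OMEGA_M, OMEGA_L = 0.3, 0.7
MPC_TO_GLY = 3.2616e-3  # 1 Mpc in billions of light years

def integrand(x):
    # After the substitution x = 1/a, da / (a^2 E(a)) becomes dx / sqrt(Om x^3 + OL).
    return 1.0 / (OMEGA_M * x**3 + OMEGA_L) ** 0.5

# Midpoint-rule integration over [0, 1].
n = 20000
dx = 1.0 / n
integral = sum(integrand((i + 0.5) * dx) for i in range(n)) * dx

d_eh_gly = (C_KM_S / H0) * MPC_TO_GLY * integral
print(f"comoving reach of light emitted today: {d_eh_gly:.1f} billion light years")
```

This comes out in the same ballpark as the figure quoted above, and either way it is vastly larger than the few-million-light-year scale of the Local Group.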

But ha! you scoff, that's a light ray. Puny humans in rockets have no chance!

Again, wrong, as you need to care about relativity again. How do I know? I wrote a paper about this with two smart students, Juliana Kwan (who is now at the University of Pennsylvania) and Berian James (now at Square). The point is that if you accelerate off into the universe, even at a nice gentle acceleration similar to what we experience here on Earth, you still get to explore much of the universe accessible to light rays.
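To see why a gentle acceleration is enough, here is a sketch of the standard special-relativity result (ignoring expansion entirely, and with illustrative numbers that are my assumption, not taken from the paper): a rocket under constant proper acceleration g covers a launch-frame distance d(tau) = (c^2/g)(cosh(g tau / c) - 1) after proper (shipboard) time tau.

```python
import math

C = 2.99792458e8      # speed of light, m/s
G_ACCEL = 9.81        # a comfortable 1 g of acceleration, m/s^2
YEAR_S = 3.156e7      # seconds per year
M_PER_LY = 9.461e15   # metres per light year

def distance_ly(tau_years):
    """Launch-frame distance (light years) covered after tau_years
    of proper time at constant proper acceleration g."""
    tau = tau_years * YEAR_S
    return (C**2 / G_ACCEL) * (math.cosh(G_ACCEL * tau / C) - 1.0) / M_PER_LY

for tau in (1, 10, 20):
    print(f"after {tau:2d} ship-years at 1 g: {distance_ly(tau):.3g} light years")
```

The hyperbolic cosine grows exponentially, so within a couple of decades of shipboard time you are hundreds of millions of light years out; cosmological distances are reachable within a (shipboard) human lifetime, even before expansion "carries" you further.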

Here's our paper.
The key point is not just about how far you want to get, but whether or not you want to get home again. I am more than happy to acknowledge Jeremy Heyl's earlier work that inspired ours.

One tiny last point is the question of whether our (or maybe not our) descendants will realise that there is dark energy in the universe. Locked away in Milkomeda (how I hate that name), the view of the dark universe in the future might lead you to conclude that there is no more to the universe than ourselves, and that it is static and unchanging. But anything thrown "out there", such as rocket ships (as per above) or high velocity stars, would still reveal the presence of dark energy.

There's plenty of universe we could potentially explore!

by Cusp at May 15, 2016 08:10 AM

May 14, 2016

Clifford V. Johnson - Asymptotia


When you realize mid-sketch that your character is wearing a watch, and so that means you should probably go back and add it to all the previous pages and panels... (Click for larger view.)

-cvj Click to continue reading this post

The post Realization… appeared first on Asymptotia.

by Clifford at May 14, 2016 02:04 AM

May 13, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Six: Caring

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Six, Caring:

  • 45. Three Billion Heartbeats
  • 46. What Is and What Ought to Be
  • 47. Rules and Consequences
  • 48. Constructing Goodness
  • 49. Listening to the World
  • 50. Existential Therapy

In this final section of the book, we take a step back to look at the journey we’ve taken, and ask what it implies for how we should think about our lives. I intentionally kept it short, because I don’t think poetic naturalism has much prescriptive advice to give along these lines. Resisting the temptation to hand out a list of “Ten Naturalist Commandments,” I instead offer a list of “Ten Considerations,” things we can keep in mind while we decide for ourselves how we want to live.

A good poetic naturalist should resist the temptation to hand out commandments. “Give someone a fish,” the saying goes, “and you feed them for a day. Teach them to fish, and you feed them for a lifetime.” When it comes to how to lead our lives, poetic naturalism has no fish to give us. It doesn’t even really teach us how to fish. It’s more like poetic naturalism helps us figure out that there are things called “fish,” and perhaps investigate the various possible ways to go about catching them, if that were something we were inclined to do. It’s up to us what strategy we want to take, and what to do with our fish once we’ve caught them.

There are nevertheless some things worth saying, because there are a lot of untrue beliefs to which we all tend to cling from time to time. Many (most?) naturalists have trouble letting go of the existence of objective moral truths, even if they claim to accept the idea that the natural world is all that exists. But you can’t derive ought from is, so an honest naturalist will admit that our ethical principles are constructed rather than derived from nature. (In particular, I borrow the idea of “Humean constructivism” from philosopher Sharon Street.) Fortunately, we’re not blank slates, or computers happily idling away; we have aspirations, desires, preferences, and cares. More than enough raw material to construct workable notions of right and wrong, no less valuable for being ultimately subjective.

Of course there are also incorrect beliefs on the religious or non-naturalist side of the ledger, from the existence of divinely-approved ways of being to the promise of judgment and eternal reward for good behavior. Naturalists accept that life is going to come to an end — this life is not a dress rehearsal for something greater, it’s the only performance we get to give. The average person can expect a lifespan of about three billion heartbeats. That’s a goodly number, but far from limitless. We should make the most of each of our heartbeats.


The finitude of life doesn’t imply that it’s meaningless, any more than obeying the laws of physics implies that we can’t find purpose and joy within the natural world. The absence of a God to tell us why we’re here and hand down rules about what is and is not okay doesn’t leave us adrift — it puts the responsibility for constructing meaningful lives back where it always was, in our own hands.

Here’s a story one could imagine telling about the nature of the world. The universe is a miracle. It was created by God as a unique act of love. The splendor of the cosmos, spanning billions of years and countless stars, culminated in the appearance of human beings here on Earth — conscious, aware creatures, unions of soul and body, capable of appreciating and returning God’s love. Our mortal lives are part of a larger span of existence, in which we will continue to participate after our deaths.

It’s an attractive story. You can see why someone would believe it, and work to reconcile it with what science has taught us about the nature of reality. But the evidence points elsewhere.

Here’s a different story. The universe is not a miracle. It simply is, unguided and unsustained, manifesting the patterns of nature with scrupulous regularity. Over billions of years it has evolved naturally, from a state of low entropy toward increasing complexity, and it will eventually wind down to a featureless equilibrium condition. We are the miracle, we human beings. Not a break-the-laws-of-physics kind of miracle; a miracle in that it is wondrous and amazing how such complex, aware, creative, caring creatures could have arisen in perfect accordance with those laws. Our lives are finite, unpredictable, and immeasurably precious. Our emergence has brought meaning and mattering into the world.

That’s a pretty darn good story, too. Demanding in its own way, it may not give us everything we want, but it fits comfortably with everything science has taught us about nature. It bequeaths to us the responsibility and opportunity to make life into what we would have it be.

I do hope people enjoy the book. As I said earlier, I don’t presume to be offering many final answers here. I do think that the basic precepts of naturalism provide a framework for thinking about the world that, given our current state of knowledge, is overwhelmingly likely to be true. But the hard work of understanding the details of how that world works, and how we should shape our lives within it, is something we humans as a species have really only just begun to tackle in a serious way. May our journey of discovery be enlivened by frequent surprises!

by Sean Carroll at May 13, 2016 04:04 PM

Tommaso Dorigo - Scientificblogging

Catching The 750 GeV Boson With Roman Pots ?!
I am told by a TOTEM manager that this is public news and so it can be blogged about - so here I would like to explain a rather cunning plan that the TOTEM and CMS collaborations have put together to enhance the possibilities of a discovery, and a better characterization, of the particle that everybody hopes is real: the 750 GeV resonance seen in diphoton data collected by ATLAS and CMS in 2015.

read more

by Tommaso Dorigo at May 13, 2016 12:56 PM

Lubos Motl - string vacua and pheno

Cernette: a bound state of 12 top quarks?
Willmutt reminded me of a paper I saw in the morning,
Production and Decay of \(750\GeV\) state of 6 top and 6 anti top quarks
by two experienced physicists, Froggatt and (co-father of string theory) Nielsen, that proposes that the \(750\GeV\) cernette could be real – and it could be a part of the Standard Model. They've been talking about the bound state (now proposed to be the cernette) since 2003.

At that time, the particle was conjectured to be so heavily bound that it would be a tachyon, \(m^2\lt 0\). I actually think that composite tachyons can't exist in tachyon-free theories, can they? (You better believe that such a tachyonic particle is impossible because such a man-made Cosmos-eating tachyonic toplet would be even worse than an Earth-eating strangelet LOL.)

The zodiac, a similarly strange bound state of 12 particles.

Unlike my numerologically driven weakly bound states of new particles, they propose that the particle could be a heavily bound state of 12 top quarks in total.

More precisely, they say that there should be 6 top quarks and 6 top antiquarks in the beast. The number 6 is preferred because all \(2\times 3 = 6\) arrangements of the spin-and-color are represented – both for quarks and antiquarks. So this complete list could potentially make a particle that is as stable as the atom of helium; or the helium-4 nucleus (the alpha-particle). The whole low-lying "shell" is occupied in all these cases!

The binding energy could come from the exchange of the virtual Higgs quanta. Note that for the odd messenger spins, \(J=1,3,5,\dots\), i.e. for electromagnetism, the like charges repel. For the \(J=2\) gravity, the like charges (positive masses) attract. For \(J=0\), the like charges must attract, too. A closer analysis of the signs in the Dirac fermionic bilinears implies that the opposite sources of the Higgs field actually attract as well – so the "sign of the top quark" is ignored. An ironic side effect of this rule is that when a top quark-antiquark pair is created, the total field they produce jumps discontinuously. But unlike the electric charge, the "charge sourcing the Higgs field" isn't conserved, so this jump isn't contradicting anything.

Twelve top quarks have a total mass of \(12\times 173\GeV=2076\GeV\), so you need an interaction energy of \(-1326\GeV\) to get down to \(750\GeV\). There are \(12\times 11/2=66\) pairs of "tops" (or antitops) in the proposed bound state. If each of them contributes \(-20\GeV\) on average, you will be fine. But do they contribute \(-20\GeV\) in such bound states? Cannot someone just calculate these things, e.g. with some lattice QCD methods? Cannot one see this \(-20\GeV\) in the toponium?
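The back-of-the-envelope arithmetic quoted above is easy to check explicitly (using the same round numbers as the post):

```python
# Arithmetic for the proposed 12-quark bound state, as quoted in the post.
M_TOP_GEV = 173.0    # top quark mass in GeV
N_QUARKS = 12        # 6 tops + 6 antitops
TARGET_GEV = 750.0   # mass of the conjectured bound state

total_rest_mass = N_QUARKS * M_TOP_GEV           # 12 * 173 = 2076 GeV
binding_needed = total_rest_mass - TARGET_GEV    # 1326 GeV of binding energy
n_pairs = N_QUARKS * (N_QUARKS - 1) // 2         # 66 distinct quark pairs
per_pair = binding_needed / n_pairs              # binding per pair, GeV

print(total_rest_mass, binding_needed, n_pairs, round(per_pair, 1))
```

The per-pair figure works out to just over 20 GeV, which is where the \(-20\GeV\) estimate comes from.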

Both authors claim that \(pp\to SS\) where \(S\) is their 12-particle bound state has the cross section of 0.2 pb and 2 pb at \(8\TeV\) and \(13\TeV\), respectively, which seem good enough. The dominant decay modes should be (in this order) \(S\to t\bar t,gg,hh,W^+W^-,ZZ,\) and \(\gamma\gamma\). Given the low status of the diphoton, that doesn't look too good, does it? It is pretty hard to imagine how this complicated beast decays at all – twelve particles have to be liquidated almost simultaneously. That only occurs in some very high order, doesn't it? I am actually surprised by the high production cross section for the same reason.

But the simplicity makes the proposal attractive even if the absence of the Beyond the Standard Model physics could be disappointing at the end.

by Luboš Motl at May 13, 2016 12:34 PM

May 12, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Five: Thinking

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Five, Thinking:

  • 37. Crawling Into Consciousness
  • 38. The Babbling Brain
  • 39. What Thinks?
  • 40. The Hard Problem
  • 41. Zombies and Stories
  • 42. Are Photons Conscious?
  • 43. What Acts on What?
  • 44. Freedom to Choose

Even many people who willingly describe themselves as naturalists — who agree that there is only the natural world, obeying laws of physics — are brought up short by the nature of consciousness, or the mind-body problem. David Chalmers famously distinguished between the “Easy Problems” of consciousness, which include functional and operational questions like “How does seeing an object relate to our mental image of that object?”, and the “Hard Problem.” The Hard Problem is the nature of qualia, the subjective experiences associated with conscious events. “Seeing red” is part of the Easy Problem, “experiencing the redness of red” is part of the Hard Problem. No matter how well we might someday understand the connectivity of neurons or the laws of physics governing the particles and forces of which our brains are made, how can collections of such cells or particles ever be said to have an experience of “what it is like” to feel something?

These questions have been debated to death, and I don’t have anything especially novel to contribute to discussions of how the brain works. What I can do is suggest that (1) the emergence of concepts like “thinking” and “experiencing” and “consciousness” as useful ways of talking about macroscopic collections of matter should be no more surprising than the emergence of concepts like “temperature” and “pressure”; and (2) our understanding of those underlying laws of physics is so incredibly solid and well-established that there should be an enormous presumption against modifying them in some important way just to account for a phenomenon (consciousness) which is admittedly one of the most subtle and complex things we’ve ever encountered in the world.

My suspicion is that the Hard Problem won’t be “solved,” it will just gradually fade away as we understand more and more about how the brain actually does work. I love this image of the magnetic fields generated in my brain as neurons squirt out charged particles, evidence of thoughts careening around my gray matter. (Taken by an MEG machine in David Poeppel’s lab at NYU.) It’s not evidence of anything surprising — not even the most devoted mind-body dualist is reluctant to admit that things happen in the brain while you are thinking — but it’s a vivid illustration of how closely our mental processes are associated with the particles and forces of elementary physics.


The divide between those who doubt that physical concepts can account for subjective experience and those who think they can is difficult to bridge precisely because of the word “subjective” — there are no external, measurable quantities we can point to that might help resolve the issue. In the book I highlight this gap by imagining a dialogue between someone who believes in the existence of distinct mental properties (M) and a poetic naturalist (P) who thinks that such properties are a way of talking about physical reality:

M: I grant you that, when I am feeling some particular sensation, it is inevitably accompanied by some particular thing happening in my brain — a “neural correlate of consciousness.” What I deny is that one of my subjective experiences simply is such an occurrence in my brain. There’s more to it than that. I also have a feeling of what it is like to have that experience.

P: What I’m suggesting is that the statement “I have a feeling…” is simply a way of talking about those signals appearing in your brain. There is one way of talking that speaks a vocabulary of neurons and synapses and so forth, and another way that speaks of people and their experiences. And there is a map between these ways: when the neurons do a certain thing, the person feels a certain way. And that’s all there is.

M: Except that it’s manifestly not all there is! Because if it were, I wouldn’t have any conscious experiences at all. Atoms don’t have experiences. You can give a functional explanation of what’s going on, which will correctly account for how I actually behave, but such an explanation will always leave out the subjective aspect.

P: Why? I’m not “leaving out” the subjective aspect, I’m suggesting that all of this talk of our inner experiences is a very useful way of bundling up the collective behavior of a complex collection of atoms. Individual atoms don’t have experiences, but macroscopic agglomerations of them might very well, without invoking any additional ingredients.

M: No they won’t. No matter how many non-feeling atoms you pile together, they will never start having experiences.

P: Yes they will.

M: No they won’t.

P: Yes they will.

I imagine that close analogues of this conversation have happened countless times, and are likely to continue for a while into the future.

by Sean Carroll at May 12, 2016 03:58 PM

Symmetrybreaking - Fermilab/SLAC

Mommy, Daddy, where does mass come from?

The Higgs field gives mass to elementary particles, but most of our mass comes from somewhere else.

The story of particle mass starts right after the big bang. During the very first moments of the universe, almost all particles were massless, traveling at the speed of light in a very hot “primordial soup.” At some point during this period, the Higgs field turned on, permeating the universe and giving mass to the elementary particles.  

The Higgs field changed the environment when it was turned on, altering the way that particles behave. Some of the most common metaphors compare the Higgs field to a vat of molasses or thick syrup, which slows some particles as they travel through.

Others have envisioned the Higgs field as a crowd at a party or a horde of paparazzi. As famous scientists or A-list celebrities pass through, people surround them, slowing them down, but less-known faces travel through the crowds unnoticed. In these cases, popularity is synonymous with mass—the more popular you are, the more you will interact with the crowd, and the more “massive” you will be. 

But why did the Higgs field turn on? Why do some particles interact more with the Higgs field than others? The short answer is: We don’t know.

“This is part of why finding the Higgs field is just the beginning—because we have a ton of questions,” says Matt Strassler, a theoretical physicist and associate of the Harvard University physics department. 

The strong force and you

The Higgs field gives mass to fundamental particles—the electrons, quarks and other building blocks that cannot be broken into smaller parts. But these still only account for a tiny proportion of the universe’s mass.

The rest comes from protons and neutrons, which get almost all of their mass from the strong nuclear force. These particles are each made up of three quarks moving at breakneck speeds that are bound together by gluons, the particles that carry the strong force. The energy of this interaction between quarks and gluons is what gives protons and neutrons their mass. Keep in mind Einstein’s famous E=mc², which equates energy and mass. That makes mass a secret storage facility for energy.
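To see just how little of a proton's mass the Higgs mechanism accounts for, compare the quark masses to the proton mass. A quick sketch, using approximate current-quark mass values that are my assumption (PDG-style round numbers), not figures from the article:

```python
M_UP_MEV = 2.2        # up-quark mass, MeV (approximate, assumed value)
M_DOWN_MEV = 4.7      # down-quark mass, MeV (approximate, assumed value)
M_PROTON_MEV = 938.3  # proton mass, MeV

quark_sum = 2 * M_UP_MEV + M_DOWN_MEV  # the proton is uud
higgs_fraction = quark_sum / M_PROTON_MEV

print(f"quark masses sum to {quark_sum:.1f} MeV, "
      f"about {higgs_fraction:.0%} of the proton's {M_PROTON_MEV:.0f} MeV")
```

The Higgs-given quark masses add up to only around one percent of the proton's mass; essentially all the rest is strong-interaction binding energy, via E=mc².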

“When you put three quarks together to create a proton, you end up binding up an enormous energy density in a small region in space,” says John Lajoie, a physicist at Iowa State University. 

A proton is made of two up quarks and a down quark; a neutron is made of two down quarks and an up quark. Their similar composition makes the mass they acquire from the strong force nearly identical. However, neutrons are slightly more massive than protons—and this difference is crucial. The process of neutrons decaying into protons promotes chemistry, and thus, biology. If protons were heavier, they would instead decay into neutrons, and the universe as we know it would not exist. 

“As it turns out, the down quarks interact more strongly with the Higgs [field], so they have a bit more mass,” says Andreas Kronfeld, a theoretical physicist at Fermilab. This is why the tiny difference between proton and neutron mass exists. 

But what about neutrinos?

We’ve learned that the elementary particles get their mass from the Higgs field—but wait! There may be an exception: neutrinos. Neutrinos are in a class by themselves; they have extremely tiny masses (a million times smaller than that of the electron, the next-lightest particle), are electrically neutral and rarely interact with matter.

Scientists are puzzled as to why neutrinos are so light. Theorists are currently considering multiple possibilities. It might be explained if neutrinos are their own antiparticles—that is, if the antimatter version is identical to the matter version. If physicists discover that this is the case, it would mean that neutrinos get their mass from somewhere other than the Higgs boson, which physicists discovered in 2012.

Neutrinos must get their mass from a Higgs-like field, which is electrically neutral and spans the entire universe. This could be the same Higgs that gives mass to the other elementary particles, or it could be a very distant cousin. In some theories, neutrino mass also comes from an additional, brand new source that could hold the answers to other lingering particle physics mysteries.

“People tend to get excited about this possibility because it can be interpreted as evidence for a brand new energy scale, naively unrelated to the Higgs phenomenon,” says André de Gouvêa, a theoretical particle physicist at Northwestern University.

This new mechanism may also be related to how dark matter, which physicists think is made up of yet undiscovered particles, gets its mass.

“Nature tends to be economical, so it's possible that the same new set of particles explains all of these weird phenomena that we haven't explained yet,” de Gouvêa says.

by Diana Kwon at May 12, 2016 01:39 PM

The n-Category Cafe

The Works of Charles Ehresmann

Charles Ehresmann’s complete works are now available for free here:

There are 630 pages on algebraic topology and differential geometry; 800 pages on local structures and ordered categories, and their applications to topology; 900 pages on structured categories, quotients, internal categories and fibrations; and 850 pages on sketches and completions, and sketches and monoidal closed structures.

That’s 3180 pages!

On top of this, more issues of the journal he founded, Cahiers de Topologie et Géométrie Différentielle Catégoriques, will become freely available online.

Andrée Ehresmann announced this magnificent gift to the world on the category theory mailing list, writing:

We are pleased to announce that the issues of the Cahiers de Topologie et Géométrie Différentielle Catégoriques, from Volume L (2009) to LV (2014) included, are now freely downloadable from the internet site of the Cahiers:

through the hyperlink to Recent Volumes.

In the future the issues of the Cahiers will become freely available on the site of the Cahiers two years after their paper publication. We recall that papers published up to Volume XLIX are accessible on the NUMDAM site.

Moreover, the 7 volumes of Charles Ehresmann: Oeuvres complètes et commentées (edited by A. Ehresmann from 1980-83 as Supplements to the Cahiers) are now also freely downloadable from the site

These 2 sites are included in the site of Andrée Ehresmann

and they can also be accessed through hyperlinks on its first page.


Andrée Ehresmann, Marino Gran and René Guitart,

Chief-Editors of the Cahiers

by john at May 12, 2016 04:24 AM

May 11, 2016

Clifford V. Johnson - Asymptotia

Close Encounter?

...of the physics kind.


Ok, I'll share a bit during my lunch break from spending too much time doing detail in a tiny panel few will linger on. (Perils of a detail-freak....) It's a rough underdrawing I did this morning for a panel I'm now turning into final art (the black stuff is the start of final lines). That's the character you saw a turnaround for earlier, busy at work in a cafe when... (To be continued...)

Click to continue reading this post

The post Close Encounter? appeared first on Asymptotia.

by Clifford at May 11, 2016 08:56 PM

Sean Carroll - Preposterous Universe

Big Picture Part Four: Complexity

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Four, Complexity:

  • 28. The Universe in a Cup of Coffee
  • 29. Light and Life
  • 30. Funneling Energy
  • 31. Spontaneous Organization
  • 32. The Origin and Purpose of Life
  • 33. Evolution’s Bootstraps
  • 34. Searching Through the Landscape
  • 35. Emergent Purpose
  • 36. Are We the Point?

One of the most annoying arguments a scientist can hear is that “evolution (or the origin of life) violates the Second Law of Thermodynamics.” The idea is basically that the Second Law says things become more disorganized over time, but the appearance of life represents increased organization, so what do you have to say about that, Dr. Smarty-Pants?

This is a very bad argument, since the Second Law only says that entropy increases in closed systems, not open ones. (Otherwise refrigerators would be impossible, since the entropy of a can of Diet Coke goes down when you cool it.) The Earth’s biosphere is obviously an open system — we get low-entropy photons from the Sun, and radiate high-entropy photons back to the universe — so there is manifestly no contradiction between the Second Law and the appearance of complex structures.
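The refrigerator aside can be made quantitative. A minimal sketch, treating the drink as water and using illustrative numbers of my own (not from the book): the can's entropy really does go down when you cool it, dS = m c ln(T2/T1) < 0, and the Second Law then forces the fridge to do a minimum amount of work so that the room's entropy gain compensates.

```python
import math

MASS_KG = 0.355                  # a 355 ml can, treated as water (assumption)
C_WATER = 4186.0                 # specific heat of water, J/(kg K)
T_ROOM, T_COLD = 293.15, 277.15  # cooling from 20 C down to 4 C

# Entropy change of the (approximately incompressible) liquid: negative.
dS_can = MASS_KG * C_WATER * math.log(T_COLD / T_ROOM)

# Heat extracted from the can while cooling it.
q_removed = MASS_KG * C_WATER * (T_ROOM - T_COLD)

# The fridge dumps q_removed + W into the room at T_ROOM. Requiring the
# total entropy change (can + room) to be >= 0 gives the minimum work input:
#     (q_removed + W) / T_ROOM + dS_can >= 0.
w_min = T_ROOM * (-dS_can) - q_removed

print(f"can's entropy change: {dS_can:.1f} J/K")
print(f"minimum fridge work : {w_min:.0f} J")
```

The can's entropy drops by tens of joules per kelvin, which is fine: the can is an open system, and the electrically powered fridge guarantees that the closed system (can plus fridge plus room) never sees a net decrease.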

As right and true as that response is, it doesn’t quite address the question of why complex structures actually do come into being. Sure, they can come into being without violating the Second Law, but that doesn’t quite explain why they actually do. In Complexity, the fourth part of The Big Picture, I talk about why it’s very natural for such a thing to happen. This covers the evolution of complexity in general, as well as specific questions about the origin of life and Darwinian natural selection. When it comes to abiogenesis, there’s a lot we don’t know, but good reason to be optimistic about near-term progress.

In 2000, Gretchen Früh-Green, on a ship in the mid-Atlantic Ocean as part of an expedition led by marine geologist Deborah Kelley, stumbled across a collection of ghostly white towers in the video feed from a robotic camera near the ocean floor deep below. Fortunately they had with them a submersible vessel named Alvin, and Kelley set out to explore the structure up close. Further investigation showed that it was just the kind of alkaline vent formation that Russell had anticipated. Two thousand miles east of South Carolina, not far from the Mid-Atlantic Ridge, the Lost City hydrothermal vent field is at least 30,000 years old, and may be just the first known example of a very common type of geological formation. There’s a lot we don’t know about the ocean floor.

Lost City

The chemistry in vents like those at Lost City is rich, and driven by the sort of gradients that could reasonably prefigure life’s metabolic pathways. Reactions familiar from laboratory experiments have been able to produce a number of amino acids, sugars, and other compounds that are needed to ultimately assemble RNA. In the minds of the metabolism-first contingent, the power source provided by disequilibria must come first; the chemistry leading to life will eventually piggyback upon it.

Albert Szent-Györgyi, a Hungarian physiologist who won the Nobel Prize in 1937 for the discovery of Vitamin C, once offered the opinion that “Life is nothing but an electron looking for a place to rest.” That’s a good summary of the metabolism-first view. There is free energy locked up in certain chemical configurations, and life is one way it can be released. One compelling aspect of the picture is that it’s not simply working backwards from “we know there’s life, how did it start?” Instead, it’s suggesting that life is the solution to a problem: “we have some free energy, how do we liberate it?”

Planetary scientists have speculated that hydrothermal vents similar to Lost City might be abundant on Jupiter’s moon Europa or Saturn’s moon Enceladus. Future exploration of the Solar System might be able to put this picture to a different kind of test.

A tricky part of this discussion is figuring out when it’s okay to say that a certain naturally-evolved organism or characteristic has a “purpose.” Evolution itself has no purpose, but according to poetic naturalism it’s perfectly okay to ascribe purposes to specific things or processes, as long as that kind of description actually provides a useful way of talking about the higher-level emergent behavior.

by Sean Carroll at May 11, 2016 04:01 PM

May 10, 2016

Sean Carroll - Preposterous Universe

Big Picture Part Three: Essence

One of a series of quick posts on the six sections of my book The Big Picture: Cosmos, Understanding, Essence, Complexity, Thinking, Caring.

Chapters in Part Three, Essence:

  • 19. How Much We Know
  • 20. The Quantum Realm
  • 21. Interpreting Quantum Mechanics
  • 22. The Core Theory
  • 23. The Stuff of Which We Are Made
  • 24. The Effective Theory of the Everyday World
  • 25. Why Does the Universe Exist?
  • 26. Body and Soul
  • 27. Death Is the End

In Part Three we get our hands dirty diving into some of the central features of how our world actually works: quantum mechanics, field theory, and the Core Theory describing the actual particles and forces that make up the visible universe. The discussion of the basics of quantum mechanics itself is quite brief, and I mention the Many-Worlds formulation only to emphasize that there’s nothing about QM that implies we need to be idealist, anti-realist, or non-determinist. (Those options are open, of course — but they’re not forced on us by what we know about quantum mechanics.)

More directly relevant to this discussion are the ideas of effective field theory and crossing symmetry that let us conclude the laws of physics underlying everyday life are completely known. (I used to say “…completely understood,” but too many people chose to quibble about whether we “really understand” them rather than grasping the point, so I’ve switched to “known.”) (No, I don’t think it will really help either.) In early drafts I went on a bit too long about all the quarks and gluons and so forth, since personally I think that stuff is endlessly fascinating. But it dragged down the pace a bit, so now I have an Appendix in which I give the full Core Theory equation and explain — tersely but accurately! — every single term that appears in it.

In the body of the text I concentrate more on explaining what the claim actually says and why it has a chance of being true. For example, why it doesn’t matter for everyday purposes that we don’t yet understand quantum gravity.

Physicists divide our theoretical understanding of these particles and forces into two grand theories: the Standard Model of Particle Physics, which includes everything we’ve been talking about except for gravity, and general relativity, Einstein’s theory of gravity as the curvature of spacetime. We lack a full “quantum theory of gravity” — a model that is based on the principles of quantum mechanics, and matches onto general relativity when things become classical-looking. Superstring theory is one very promising candidate for such a model, but right now we just don’t know how to talk about situations where gravity is very strong, like near the Big Bang or inside a black hole, in quantum-mechanical terms. Figuring out how to do so is one of the greatest challenges currently occupying the minds of theoretical physicists around the world.

But we don’t live inside a black hole, and the Big Bang was quite a few years ago. We live in a world where gravity is relatively weak. And as long as the force is weak, quantum field theory has no trouble whatsoever describing how gravity works. That’s why we’re confident in the existence of gravitons; they are an inescapable consequence of the basic features of general relativity and quantum field theory, even if we lack a complete theory of quantum gravity. The domain of applicability of our present understanding of quantum gravity includes everything we experience in our everyday lives.

There is, therefore, no reason to keep the Standard Model and general relativity completely separate from each other. As far as the physics of the stuff you see in front of you right now is concerned, it is all very well described by one big quantum field theory. Nobel Laureate Frank Wilczek has dubbed it the Core Theory. It’s the quantum field theory of the quarks, electrons, neutrinos, all the families of fermions, electromagnetism, gravity, the nuclear forces, and the Higgs. In the Appendix we lay it out in a bit more detail. The Core Theory is not the most elegant concoction that has ever been dreamed up in the mind of a physicist, but it’s been spectacularly successful at accounting for every experiment ever performed in a laboratory here on Earth. (At least as of mid-2015 — we should always be ready for the next surprise.)

One of my favorite chapters in the book is 26, Body and Soul, where I relate the story of Princess Elisabeth of Bohemia and René Descartes. And how, you may ask, does quantum field theory relate to an epistolary conversation carried out in the seventeenth century? Descartes, of course, was famously a champion of mind/body dualism. Elisabeth challenged him on this, asking how something (the immaterial soul) that had no location or extent in space could possibly influence something (the physical body) that manifestly did. The updated version of Elisabeth’s challenge is to ask, “How could an immaterial soul possibly affect the evolution of the particles and fields in the Core Theory? How should that gloriously precise and well-tested equation be modified?”

by Sean Carroll at May 10, 2016 04:30 PM

Tommaso Dorigo - Scientificblogging

Scavenging LHC Data: The CMS Data Scouting Technique
With the Large Hadron Collider now finally up and running after the unfortunate weasel incident, physicists at CERN and around the world are eager to get their hands on the new 2016 collision data. The #MoarCollisions hashtag keeps entertaining the tweeting researchers and their followers, and everybody is anxious to finally ascertain whether the tentative signal of a new 750 GeV particle seen in diphoton decays in last year's data will reappear and confirm an epic discovery, or what.

read more

by Tommaso Dorigo at May 10, 2016 08:29 AM

May 09, 2016

Lubos Motl - string vacua and pheno

Cernette: a bound state of two \(Z'\)-bosons?
TV: John Oliver gave a totally sensible 20-minute tirade explaining why "scientific study says" stories in the media are mostly bullšit.
I am giving a popular talk on LIGO in 90 minutes and Tristan du Pree has offered me a distraction via Twitter. How do you get distracted if you think about LIGO too much? Yes, by hearing about the LHC:
Did you find already a good model for a possible \(375/750/1500\) tower of \(Z\gamma/\gamma \gamma\)?
Well, I didn't, I wrote him: it seemed increasingly clear to me that the invariant masses in the \(Z\gamma\) and \(\gamma\gamma\) decays should better be the same. So the numerological explanation of the coincidence doesn't work.

But then I decided that I hadn't investigated carefully enough a loophole that could explain why the \(\gamma\gamma\) signal isn't observed near \(375\GeV\): the Landau-Yang theorem. A massive spin-one boson cannot decay to two identical massless spin-one bosons – or, if you wish, a \(Z\)-boson or \(Z'\)-boson cannot decay to two photons.

The reason for or the proof of the theorem? Well, there's no trilinear function of the three polarization vectors \(\vec \epsilon_{1,2,3}\) that may also depend on the massless particle's momentum \(\vec k\) but that is also symmetric under the exchange of the two final photons.

That seems to be the only possible explanation of the absence of the \(\gamma\gamma\) signal at \(375\GeV\) given the assumption that the excesses of \(Z\gamma\) at \(375\GeV\) are real new physics. So if that's the case, there has to be a new \(Z'\)-boson at \(375\GeV\). Or maybe even a composite particle, such as a toponium, could be OK?

And if the coincidence \(750/2=375\) is more than just a coincidence, then the \(750\GeV\) cernette should better be a bound state of the two \(375\GeV\) \(Z'\)-bosons. Probably a tightly bound state, indeed, but the large width observed especially by ATLAS could potentially be explained by this composite character of the object.

I realize that the interactions felt by the new \(Z'\)-boson would have to be immensely strong and it probably doesn't work but I am running out of time so I expect some commenters to tell me whether it could work.

by Luboš Motl at May 09, 2016 06:28 PM

The n-Category Cafe

Man Ejected from Flight for Solving Differential Equation

A professor of economics was escorted from an American Airlines flight and questioned by secret police after the woman in the next seat spotted him writing page after page of mysterious symbols. It’s all over the internet. Press reports do not specify which differential equation it was.

Although his suspiciously Mediterranean appearance may have contributed to his neighbour’s paranoia, the professor has the privilege of not having an Arabic name and says he was treated with respect. He’s Italian. The flight was delayed by an hour or two, he was allowed to travel, and no harm seems to have been done.

Unfortunately, though, this story is part of a genre. It’s happening depressingly often in the US that Muslims (and occasionally others) are escorted off planes and treated like criminals on the most absurdly flimsy pretexts. Here’s a story where some passengers were afraid of the small white box carried by a fellow passenger. It turned out to contain baklava. Here’s one where a Berkeley student was removed from a flight for speaking Arabic, and another where a Somali woman was ejected because a flight attendant “did not feel comfortable” with her request to change seats. The phenomenon is now common enough that it has acquired a name: “Flying while Muslim”.

by leinster at May 09, 2016 05:06 PM

Jon Butterworth - Life and Physics

Symmetrybreaking - Fermilab/SLAC

LHC prepares to deliver six times the data

Experiments at the Large Hadron Collider are once again recording collisions at extraordinary energies.

After months of winter hibernation, the Large Hadron Collider is once again smashing protons and taking data. The LHC will run around the clock for the next six months and produce roughly 2 quadrillion high-quality proton collisions, six times more than in 2015 and just shy of the total number of collisions recorded during the nearly three years of the collider’s first run.

“2015 was a recommissioning year. 2016 will be a year of full data production during which we will focus on delivering the maximum number of data to the experiments,” says Fabiola Gianotti, CERN director general.

The LHC is the world’s most powerful particle accelerator. Its collisions produce subatomic fireballs of energy, which morph into the fundamental building blocks of matter. The four particle detectors located on the LHC’s ring allow scientists to record and study the properties of these building blocks and look for new fundamental particles and forces.

“We’re proud to support more than a thousand US scientists and engineers who play integral parts in operating the detectors, analyzing the data and developing tools and technologies to upgrade the LHC’s performance in this international endeavor,” says Jim Siegrist, associate director of science for high-energy physics in the US Department of Energy’s Office of Science. “The LHC is the only place in the world where this kind of research can be performed, and we are a fully committed partner on the LHC experiments and the future development of the collider itself.”

Between 2010 and 2013 the LHC produced proton-proton collisions with 8 Tera-electronvolts of energy. In the spring of 2015, after a two-year shutdown, LHC operators ramped up the collision energy to 13 TeV. This increase in energy enables scientists to explore a new realm of physics that was previously inaccessible. Run II collisions also produce Higgs bosons—the groundbreaking particle discovered in LHC Run I—25 percent faster than Run I collisions and increase the chances of finding new massive particles by more than 40 percent.

Almost everything we know about matter is summed up in the Standard Model of particle physics, an elegant map of the subatomic world. During the first run of the LHC, scientists on the ATLAS and CMS experiments discovered the Higgs boson, the cornerstone of the Standard Model that helps explain the origins of mass. The LHCb experiment also discovered never-before-seen five-quark particles, and the ALICE experiment studied the near-perfect liquid that existed immediately after the Big Bang. All these observations are in line with the predictions of the Standard Model.

“So far the Standard Model seems to explain matter, but we know there has to be something beyond the Standard Model,” says Denise Caldwell, director of the Physics Division of the National Science Foundation. “This potential new physics can only be uncovered with more data that will come with the next LHC run.”

For example, the Standard Model contains no explanation of gravity, one of the four fundamental forces in the universe. It also does not explain astronomical observations of dark matter, a type of matter that interacts with our visible universe only through gravity, nor does it explain why matter prevailed over antimatter during the formation of the early universe. The small mass of the Higgs boson also suggests that matter is fundamentally unstable.

The new LHC data will help scientists verify the Standard Model’s predictions and push beyond its boundaries. Many predicted and theoretical subatomic processes are so rare that scientists need billions of collisions to find just a small handful of events that are clean and scientifically interesting. Scientists also need an enormous amount of data to precisely measure well-known Standard Model processes. Any significant deviations from the Standard Model’s predictions could be the first step towards new physics.

The United States is the largest national contributor to both the ATLAS and CMS experiments, with 45 US universities and laboratories working on ATLAS and 49 working on CMS.

A version of this article was published by Fermilab.

May 09, 2016 02:04 PM

May 08, 2016

John Baez - Azimuth


One of the big problems with intermittent power sources like wind and solar is the difficulty of storing energy. But if we ever get a lot of electric vehicles, we’ll have a lot of batteries—and at any time, most of these vehicles are parked. So, they can be connected to the power grid.

This leads to the concept of vehicle-to-grid or V2G. In a V2G system, electric vehicles can connect to the grid, with electricity flowing from the grid to the vehicle or back. Cars can help solve the energy storage problem.

Here’s something I read about vehicle-to-grid systems in Sierra magazine:

At the University of Delaware, dozens of electric vehicles sit in a uniform row. They’re part of an experiment involving BMW, power-generating company NRG, and PJM—a regional organization that moves electricity around 13 states and the District of Columbia—that’s examining how electric vehicles can give energy back to the electricity grid.

It works like this: When the cars are idle (our vehicles typically sit 95 percent of the time), they’re plugged in and able to deliver the electricity in their batteries back to the grid. When energy demand is high, they return electricity to the grid; when demand is low, they absorb electricity. One car doesn’t offer much, but 30 of them is another story—worth about 300 kilowatts of power. Utilities will pay for this service, called “load leveling,” because it means that they don’t have to turn on backup power plants, which are usually coal or natural gas burners. And the EV owners get regular checks—approximately $2.50 a day, or about $900 a year.

It’s working well, according to Willett Kempton, a longtime V2G guru and University of Delaware professor who heads the school’s Center for Carbon-Free Power Integration: “In three years hooked up to the grid, the revenue was better than we thought. The project, which is ongoing, shows that V2G is viable. We can earn money from cars that are driven regularly.”

V2G still has some technical hurdles to overcome, but carmakers—and utilities, too—want it to happen. In a 2014 report, Edison Electric Institute, the power industry’s main trade group, called on utilities to promote EVs [electric vehicles], describing EV adoption as a “quadruple win” that would sustain electricity demand, improve customer relations, support environmental goals, and reduce utilities’ operating costs.

Utilities appear to be listening. In Virginia and North Carolina, Dominion Resources is running a pilot project to identify ways to encourage EV drivers to only charge during off-peak demand. In California, San Diego Gas & Electric will be spending $45 million on a vehicle-to-grid integration system. At least 25 utilities in 14 states are offering customers some kind of EV incentive. And it’s not just utilities—the Department of Defense is conducting V2G pilot programs at four military bases.

Paula DuPont-Kidd, a spokesperson for PJM, says V2G is especially useful for what’s called “frequency regulation service”—keeping electricity transmissions at a steady 60 cycles per second. “V2G has proven its ability to be a resource to the grid when power is aggregated,” she says. “We know it’s possible. It just hasn’t happened yet.”

I wonder how much, exactly, this system would help.
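For what it's worth, the figures in the quote are at least internally consistent. A throwaway back-of-envelope check (the 10 kilowatts per car is simply what 300 kW across 30 vehicles implies; it is not a figure stated in the article):

```python
# Back-of-envelope check of the numbers in the Sierra quote (illustrative only).
cars = 30
kw_per_car = 300 / cars          # 300 kW spread across 30 parked vehicles
pay_per_day = 2.50               # the quoted daily payment to EV owners

print(kw_per_car)                # 10.0 kW per vehicle
print(round(pay_per_day * 365))  # 912, i.e. "about $900 a year"
```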

My quote comes from here:

• Jim Motavalli, Siri, will connected vehicles be greener?, Sierra, May–June 2016.

Motavalli also discusses vehicle-to-vehicle connectivity and vehicle-to-building systems. The latter could let your vehicle power your house during a blackout—which seems of limited use to me, but maybe I don’t get the point.

In general, it seems good to have everything I own have the ability to talk to all the rest. There will be security concerns. But as we move toward ‘ecotechnology’, our gadgets should become less obtrusive, less hungry for raw power, more communicative, and more intelligent.

by John Baez at May 08, 2016 12:46 AM

May 07, 2016

ZapperZ - Physics and Physicists

"... in America today, the only thing more terrifying than foreigners is…math...."
OK, I'm going to get a bit political here, but with some math! So if this is not something you care to read, skip this.

I've been accused many times of being an "elitist", as if giving someone a label like that is a sufficient argument against what I had presented (it isn't!). But you see, it is hard not to be an "elitist" when you read something like this.

Prominent economist Guido Menzio, who is Italian, was pulled off a plane because his seatmate thought he was writing something suspicious while they waited for their plane to take off. She couldn't understand the letters and figured it was probably "Arabic" or something (and what if it were?), and since Menzio looks suspiciously "foreign", she reported him to the crew.

That Something she’d seen had been her seatmate’s cryptic notes, scrawled in a script she didn’t recognize. Maybe it was code, or some foreign lettering, possibly the details of a plot to destroy the dozens of innocent lives aboard American Airlines Flight 3950. She may have felt it her duty to alert the authorities just to be safe. The curly-haired man was, the agent informed him politely, suspected of terrorism.

The curly-haired man laughed.

He laughed because those scribbles weren’t Arabic, or some other terrorist code. They were math.

Yes, math. A differential equation, to be exact.
You can't make this up! But what hits home is what Menzio said later in the news article, and what the article writer ended with.

Rising xenophobia stoked by the presidential campaign, he suggested, may soon make things worse for people who happen to look a little other-ish.

“What might prevent an epidemic of paranoia? It is hard not to recognize in this incident, the ethos of [Donald] Trump’s voting base,” he wrote.

In this true parable of 2016 I see another worrisome lesson, albeit one also possibly relevant to Trump’s appeal: That in America today, the only thing more terrifying than foreigners is…math.
During these summer months, many of us travel to conferences all over the place. So, if you look remotely exotic or have a slightly darker skin, don't risk it by doing math on an airplane. That ignorant passenger sitting next to you just might rat on you! If being an "elitist" means that I can recognize the difference between "math" and "Arabic", then I'd rather be an elitist than someone who is proud of his/her aggressive ignorance.

How's that? Are you still with me?


by ZapperZ at May 07, 2016 03:47 PM

May 06, 2016

Tommaso Dorigo - Scientificblogging

A Statistics Session At A Particle Physics Conference ?

The twelfth edition of “Quark Confinement and the Hadron Spectrum“, a particle physics conference specialized in QCD and Heavy Ion physics, will be held in Thessaloniki this year, from

read more

by Tommaso Dorigo at May 06, 2016 09:49 AM

John Baez - Azimuth

Shelves and the Infinite

Infinity is a very strange concept. Like alien spores floating down from the sky, large infinities can come down and contaminate the study of questions about ordinary finite numbers! Here’s an example.

A shelf is a set with a binary operation \rhd that distributes over itself:

a \rhd (b \rhd c) = (a \rhd b) \rhd (a \rhd c)

There are lots of examples, the simplest being any group, where we define

g \rhd h = g h g^{-1}
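In case you don't feel like verifying the algebra by hand, here is a small illustrative check that conjugation really does distribute over itself, verified exhaustively in the symmetric group S_3 (any group would do; S_3 is just small enough to check every triple):

```python
from itertools import permutations, product

def compose(g, h):
    """Composite permutation (g . h)(i) = g(h(i)); permutations as tuples."""
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    """Inverse permutation of g."""
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def shelf(g, h):
    """The shelf operation g > h = g h g^{-1}, i.e. conjugation."""
    return compose(compose(g, h), inverse(g))

# Exhaustively verify a > (b > c) = (a > b) > (a > c) over all of S_3.
S3 = list(permutations(range(3)))
assert all(
    shelf(a, shelf(b, c)) == shelf(shelf(a, b), shelf(a, c))
    for a, b, c in product(S3, repeat=3)
)
print("conjugation makes S_3 a shelf")
```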

They have a nice connection to knot theory, which you can see here if you think hard:

My former student Alissa Crans, who invented the term ‘shelf’, has written a lot about them, starting here:

• Alissa Crans, Lie 2-Algebras, Chapter 3.1: Shelves, Racks, Spindles and Quandles, Ph.D. thesis, U.C. Riverside, 2004.

I could tell you a long and entertaining story about this, including the tale of how shelves got their name. But instead I want to talk about something far more peculiar, which I understand much less well. There’s a strange connection between shelves, extremely large infinities, and extremely large finite numbers! It was first noticed by a logician named Richard Laver in the late 1980s, and it’s been developed further by Randall Dougherty.

It goes like this. For each n, there’s a unique shelf structure on the numbers \{1,2, \dots ,2^n\} such that

a \rhd 1 = a + 1 \bmod 2^n

So, in this shelf we have


1 \rhd 1 = 2

2 \rhd 1 = 3

and so on, until we get to

2^n \rhd 1 = 1

However, we can now calculate

1 \rhd 1

1 \rhd 2

1 \rhd 3

and so on. You should try it yourself for a simple example! You’ll need to use the self-distributive law. It’s quite an experience.

You’ll get a list of 2^n numbers, but this list will not contain all the numbers \{1, 2, \dots, 2^n\}. Instead, it will repeat with some period P(n).
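For small n this is easy to check by machine. Here's a little sketch that builds the whole table from nothing but the base case and the self-distributive law (which forces a ▷ (b+1) = (a ▷ b) ▷ (a ▷ 1)), then reads off the period of the first row. It's brute force, not an efficient algorithm:

```python
from functools import lru_cache

def laver_row_period(n):
    """Period of the first row of the shelf on {1, ..., 2^n}."""
    N = 2 ** n

    @lru_cache(maxsize=None)
    def star(p, q):
        # Base case: p > 1 = p + 1, with 2^n wrapping around to 1.
        if q == 1:
            return p % N + 1
        # Self-distributivity forces p > q = (p > (q-1)) > (p > 1).
        return star(star(p, q - 1), p % N + 1)

    # Fill rows from the top down so every lookup below is a cache hit
    # (row p only ever refers to rows r > p).
    for p in range(N, 0, -1):
        for q in range(1, N + 1):
            star(p, q)

    row = [star(1, q) for q in range(1, N + 1)]
    # The first row repeats with a period dividing 2^n; return the smallest.
    for k in range(n + 1):
        P = 2 ** k
        if row == row[:P] * (N // P):
            return P

print([laver_row_period(n) for n in range(6)])  # [1, 1, 2, 4, 4, 8]
```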

Here is where things get weird. The numbers P(n) form this sequence:

1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, …

and the 16's just keep coming, for as far as anyone has ever computed.

It may not look like it, but the numbers in this sequence approach infinity! At least, they do if we assume an extra axiom, which goes beyond the usual axioms of set theory but so far seems consistent.

This axiom asserts the existence of an absurdly large cardinal, called an I3 rank-into-rank cardinal.

I’ll say more about this kind of cardinal later. But, this is not the only case where a ‘large cardinal axiom’ has consequences for down-to-earth math, like the behavior of some sequence that you can define using simple rules.

On the other hand, Randall Dougherty has proved a lower bound on how far you have to go out in this sequence to reach the number 32.

And, it’s an incomprehensibly large number!

The third Ackermann function A_3(n) is roughly 2 to the nth power. The fourth Ackermann function A_4(n) is roughly 2 raised to itself n times:

2^{2^{\cdot^{\cdot^{\cdot^{2}}}}} \quad (n \mbox{ twos})

And so on: each Ackermann function is defined by iterating the previous one.

Dougherty showed that for the sequence P(n) to reach 32, you have to go at least

n = A(9,A(8,A(8,255)))

This is an insanely large number!

I should emphasize that if we use just the ordinary axioms of set theory, the ZFC axioms, nobody has proved that the sequence P(n) ever reaches 32. Neither is it known that this is unprovable if we only use ZFC.

So, what we’ve got here is a very slowly growing sequence… which is easy to define but grows so slowly that (so far) mathematicians need new axioms of set theory to prove it goes to infinity, or even reaches 32.

I should admit that my definition of the Ackermann function is rough. In reality it’s defined like this:

A(m, n) = \begin{cases} n+1 & \mbox{if } m = 0 \\ A(m-1, 1) & \mbox{if } m > 0 \mbox{ and } n = 0 \\ A(m-1, A(m, n-1)) & \mbox{if } m > 0 \mbox{ and } n > 0. \end{cases}

And if you work this out, you’ll find it’s a bit annoying. Somehow the number 3 sneaks in:

A(1,n) = 2 + (n+3) - 3

A(2,n) = 2 \cdot (n+3) - 3

A(3,n) = 2^{n+3} - 3

A(4,n) = 2\uparrow\uparrow(n+3) - 3

where a \uparrow\uparrow b means a raised to itself b times,

A(5,n) = 2 \uparrow\uparrow\uparrow(n+3) - 3

where a \uparrow\uparrow\uparrow b means a \uparrow\uparrow (a \uparrow\uparrow (a \uparrow\uparrow \cdots )) with the number a repeated b times, and so on.
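The 3's really do sneak in: for small arguments one can verify the identity A(m,n) = 2↑^(m-2)(n+3) - 3 directly from the definition. A sketch, tested only on tiny inputs since the function explodes so fast:

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(10000)  # plenty of headroom for the tiny cases below

@lru_cache(maxsize=None)
def A(m, n):
    """The two-argument Ackermann function, exactly as defined above."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

# Check the closed forms for small n: the 3's really do appear.
for n in range(5):
    assert A(1, n) == 2 + (n + 3) - 3    # addition level
    assert A(2, n) == 2 * (n + 3) - 3    # multiplication level
    assert A(3, n) == 2 ** (n + 3) - 3   # exponentiation level

print(A(3, 5))  # 2^8 - 3 = 253
```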

However, these irritating 3’s scarcely matter, since Dougherty’s number is so large… and I believe he could have gotten an even larger lower bound if he wanted.

Perhaps I’ll wrap up by saying very roughly what an I3 rank-into-rank cardinal is.

In set theory the universe of all sets is built up in stages. These stages are called the von Neumann hierarchy. The lowest stage has nothing in it:

V_0 = \emptyset

Each successive stage is defined like this:

V_{\lambda + 1} = P(V_\lambda)

where P(S) is the power set of S, that is, the set of all subsets of S. For ‘limit ordinals’, that is, ordinals that aren’t of the form \lambda + 1, we define

\displaystyle{ V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha }
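The finite stages of the hierarchy are concrete enough to build by machine. A tiny sketch of the first few stages (their sizes are iterated powers of two; the next stage after these would already have 2^16 = 65536 elements):

```python
def power_set(s):
    """All subsets of a frozenset, returned as a frozenset of frozensets."""
    subsets = [frozenset()]
    for x in s:
        subsets += [t | {x} for t in subsets]
    return frozenset(subsets)

# V_0 is empty, and each finite successor stage is the power set of the last.
V = [frozenset()]
for _ in range(4):
    V.append(power_set(V[-1]))

print([len(stage) for stage in V])  # [0, 1, 2, 4, 16]
```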

An I3 rank-into-rank cardinal is an ordinal \lambda such that V_\lambda admits a nontrivial elementary embedding into itself.

Very roughly, this means the infinity \lambda is so huge that the collection of sets that can be built by this stage can be mapped, in a one-to-one but not onto way, into a smaller collection that’s indistinguishable from the original one when it comes to the validity of anything you can say about sets!

More precisely, a nontrivial elementary embedding of V_\lambda into itself is a one-to-one but not onto function

f: V_\lambda \to V_\lambda

that preserves and reflects the validity of all statements in the language of set theory. That is: for any sentence \phi(a_1, \dots, a_n) in the language of set theory, this statement holds for sets a_1, \dots, a_n \in V_\lambda if and only if \phi(f(a_1), \dots, f(a_n)) holds.

I don’t know why, but an I3 rank-into-rank cardinal, if it’s even consistent to assume one exists, is known to be extraordinarily big. What I mean by this is that it automatically has a lot of other properties known to characterize large cardinals. It’s inaccessible (which is big) and ineffable (which is bigger), and measurable (which is bigger), and huge (which is even bigger), and so on.

How in the world is this related to shelves?

The point is that if

f, g : V_\lambda \to V_\lambda

are elementary embeddings, we can apply f to any set in V_\lambda. But in set theory, functions are sets too: sets of ordered pairs. So, g is a set. It’s not an element of V_\lambda, but all its subsets g \cap V_\alpha are, where \alpha < \lambda. So, we can define

f \rhd g = \bigcup_{\alpha < \lambda} f (g \cap V_\alpha)

Laver showed that this operation distributes over itself:

f \rhd (g \rhd h) = (f \rhd g) \rhd (f \rhd h)

And, he showed that if we take one elementary embedding and let it generate a shelf by this operation, we get the free shelf on one generator!

The shelf I started out describing, the numbers \{1, \dots, 2^n \} with

a \rhd 1 = a + 1 \bmod 2^n

also has one generator, namely the number 1. So, it’s a quotient of the free shelf on one generator by one relation, namely the above equation.

That’s about all I understand. I don’t understand how the existence of a nontrivial elementary embedding of V_\lambda into itself implies that the function P(n) goes to infinity, and I don’t understand Randall Dougherty’s lower bound on how far you need to go to reach P(n) = 32. For more, read these:

• Richard Laver, The left distributive law and the freeness of an algebra of elementary embeddings, Adv. Math. 91 (1992), 209–231.

• Richard Laver, On the algebra of elementary embeddings of a rank into itself, Adv. Math. 110 (1995), 334–346.

• Randall Dougherty and Thomas Jech, Finite left distributive algebras and embedding algebras, Adv. Math. 130 (1997), 201–241.

• Randall Dougherty, Critical points in an algebra of elementary embeddings, Ann. Pure Appl. Logic 65 (1993), 211–241.

• Randall Dougherty, Critical points in an algebra of elementary embeddings, II.

by John Baez at May 06, 2016 06:40 AM

May 05, 2016

Andrew Jaffe - Leaves on the Line

Wussy (Best Band in America?)

It’s been a year since the last entry here. So I could blog about the end of Planck, the first observation of gravitational waves, fatherhood, or the horror (comedy?) of the US Presidential election. Instead, it’s going to be rock ’n’ roll, though I don’t know if that’s because it’s too important, or not important enough.

It started last year when I came across Christgau’s A+ review of Wussy’s Attica and the mentions of Sonic Youth, Nirvana and Television seemed compelling enough to make it worth a try (paid for before listening even in the streaming age). He was right. I was a few years late (they’ve been around since 2005), but the songs and the sound hit me immediately. Attica was the best new record I’d heard in a long time, grabbing me from the first moment, “when the kick of the drum lined up with the beat of [my] heart”, in the words of their own description of the feeling of first listening to The Who’s “Baba O’Riley”. Three guitars, bass, and a drum, over beautiful screams from co-songwriters Lisa Walker and Chuck Cleaver.


And they just released a new record, Forever Sounds, reviewed in Spin Magazine just before its release:

To certain fans of Lucinda Williams, Crazy Horse, Mekons and R.E.M., Wussy became the best band in America almost instantaneously…

Indeed, that list nailed my musical obsessions with an almost Google-like creepiness. Guitars, soul, maybe even some politics. Wussy makes me feel almost like the Replacements did in 1985.


So I was ecstatic when I found out that Wussy was touring the UK, and their London date was at the great but tiny Windmill in Brixton, one of the two or three venues within walking distance of my flat (where I had once seen one of the other obsessions from that list, The Mekons). I only learned about the gig a couple of days before, but tickets were not hard to get: the place only holds about 150 people, but there were far fewer on hand that night — perhaps because Wussy also played the night before as part of the Walpurgis Nacht festival. But I wanted to see a full set, and this night they were scheduled to play the entire new Forever Sounds record. I admit I was slightly apprehensive — it’s only a few weeks old and I’d only listened a few times.

But from the first note (and after a good set from the third opener, Slowgun) I realised that the new record had already wormed its way into my mind — a bit more atmospheric, less song-oriented, than Attica, but now, obviously, as good or nearly so. After the 40 or so minutes of songs from the album, they played a few more from the back catalog, and that was it (this being London, even after the age of “closing time”, most clubs in residential neighbourhoods have to stop the music pretty early). Though I admit I was hoping for, say, a cover of “I Could Never Take the Place of Your Man”, it was still a great, sloppy, loud show, with enough of us in the audience to shout and cheer (but probably not enough to make very much cash for the band, so I was happy to buy my first band t-shirt since, yes, a Mekons shirt from one of their tours about 20 years ago…). I did get a chance to thank a couple of the band members for indeed being the “best band in America” (albeit in London). I also asked whether they could come back for an acoustic show some time soon, so I wouldn’t have to tear myself away from my family and instead could bring my (currently) seven-month old baby to see them some day soon.

They did say UK tours might be a more regular occurrence, and you can follow their progress on the Wussy Road Blog. You should just buy their records, support great music.

by Andrew at May 05, 2016 10:36 PM

ZapperZ - Physics and Physicists

Scanning Probe Microscopy
The Physical Review is marking the 35th Anniversary of Scanning Tunneling Microscopy (STM) and 30 years of Atomic Force Microscopy (AFM) with free access to notable papers from the Physical Review journals in these two experimental techniques.

So check them out!


by ZapperZ at May 05, 2016 05:01 PM

Symmetrybreaking - Fermilab/SLAC

Following LIGO’s treasure maps

Astronomers around the world are looking for visible sources of gravitational waves.

On the morning of September 16, 2015, an email appeared in 63 inboxes scattered around the globe. The message contained a map of the cosmos and some instructions, and everyone who received it knew the most important thing was to keep it secret. 

It wasn’t until five months later that the world found out what the owners of those inboxes knew: that two days earlier, on September 14, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected gravitational waves for the first time. That secret was shared with 63 astronomy collaborations, and it sparked the start of a worldwide treasure hunt. Astronomers searched the skies for rare and faint objects that might be the source of the detected ripples in space-time.  

Searching for the optical counterparts to those waves is a crucial step after the initial detection. The additional information can provide interesting scientific results for both gravitational-wave scientists and astronomers. Gravitational waves may be caused by several different phenomena such as neutron stars colliding or, in the case of the first signal, a pair of black holes merging. Studying these objects can be its own reward for astronomers, so they prepare for months or even years to drop everything at a moment’s notice to follow up signals whenever they appear. 

But in September, the email from LIGO took most of those astronomers by surprise. In fact, according to LIGO collaboration member Daniel Holz of the University of Chicago, the clear, crisp signal caught just about everyone off guard. Advanced LIGO, the most recent upgrade that had quadrupled their sensitivity, had just begun its engineering run—they had barely turned the machine on when they hit pay dirt.

“It was insane, incredible,” Holz says. “We all worked very hard, and to have what you hope for and dream about land in your lap so fast, so early and so emphatically was like my wildest dreams coming true.” 

The signal was detected loud and clear at 4:50 a.m. Chicago time, so Holz was able to see it when he checked email at 7 a.m. His initial thought was that it might have been a mistake, but by the time he’d bicycled to work and had his morning tea, many of the obvious ways the signal could have been an error had been eliminated. By the end of that day, it was likely that the LIGO team had the real thing on their hands. 

“We were prepared to do a lot of analysis, and that work can take months,” Holz says. “But this case was so emphatic that within hours we were quite confident that we had something incredibly interesting.”

The collaboration still analyzed the signal for two days before sending it out to astronomy teams. Marica Branchesi, an astronomer who has been part of LIGO and its sister experiment Virgo since 2009, was part of the small group that sent the September 16 email. She says extra care was taken with this first signal.

“Because it was the first candidate, we took the time to do more analysis and be sure it was an event,” she says. “This is something we had dreamed of for a long time.”

While LIGO’s extraordinary sensitivity allows it to detect gravitational waves, which result from warped space-time, pinpointing the source of those waves is another matter. LIGO uses a pair of massive laser interferometers, one located in Washington state, the other in Louisiana. With two detectors, LIGO can figure out which direction the waves are coming from, but a third detector (the Advanced VIRGO detector, located in Italy and coming online later this year) will enable them to triangulate the signal. 

What Branchesi and the LIGO/Virgo team sent to astronomers on September 16 was a sky map that covered 600 square degrees, an area thousands of times larger than the full moon, with probabilities assigned to pixels.

“The region of sky is huge,” Branchesi says. “It’s a challenge to cover. With such a large region, you can find many objects that look like they might be the counterpart, but aren’t.”
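The idea of a probability map over sky pixels can be sketched in a few lines of plain Python (the numbers and the function name here are invented for illustration, not from the actual LIGO maps): rank the pixels by probability and keep the smallest set that contains, say, 90% of the total.

```python
def credible_region(pixel_probs, level=0.9):
    """Return the smallest set of sky pixels whose summed
    probability reaches `level`.

    `pixel_probs` maps pixel index -> probability (summing to ~1),
    mimicking a coarse probability sky map.
    """
    ranked = sorted(pixel_probs, key=pixel_probs.get, reverse=True)
    region, total = [], 0.0
    for pix in ranked:
        region.append(pix)
        total += pixel_probs[pix]
        if total >= level:
            break
    return region

# Toy map: two high-probability lobes plus low-probability background.
toy_map = {0: 0.35, 1: 0.25, 2: 0.20, 3: 0.10, 4: 0.06, 5: 0.04}
print(credible_region(toy_map, 0.9))  # prints [0, 1, 2, 3]
```

Telescopes with wide fields of view, like the Dark Energy Camera described below, are valuable precisely because such a 90% region can span hundreds of square degrees.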


The LIGO team also did not know at the time what we know now—that this particular gravitational wave was caused by a pair of black holes, which are unlikely to be visible with telescopes (though the Fermi Gamma-ray Space Telescope did pick up a burst of gamma rays in the same area). But Marcelle Soares-Santos, an astrophysicist who works on the Dark Energy Survey at the US Department of Energy’s Fermilab, says she would have followed up on the LIGO email regardless.

“There may be something,” she says, “but we don’t know unless we look. We don’t expect a pair of black holes to be visible, but if the area near the black holes is full of matter, maybe we can detect that.”

Soares-Santos is part of a roughly 25-member team within the Dark Energy Survey called DES-GW, dedicated to following up signals from LIGO. The effort began in 2013, when LIGO put out an open invitation to astronomers to search for optical counterparts. 

“It seemed like a challenging thing to do, to find a transient object in a huge area of sky,” she says. “But then I realized that the Dark Energy Camera is a perfect tool for a discovery like this.”

That camera, the main instrument of the survey, has several advantages, Soares-Santos says: It has a wide field of view, it’s on a large telescope (the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile), and it has a particular sensitivity to the red end of the spectrum, which helps astronomers chase down the faint objects they’re looking for. 

DES-GW has an agreement with the main Dark Energy Survey: If a signal from LIGO comes in, astronomers drop everything and use the camera to chase it. That’s because the objects that are likeliest to be found are neutron stars, the smallest and densest types of stars known to exist. They are thought to form when a massive star collapses, creating a supernova, and they fade quickly, rapidly rendering them undetectable. 

When two neutron stars formed side by side spiral together and merge, the theory goes, they create detectable gravitational waves. Spotting two neutron stars (or a neutron star paired with a black hole) would be like finding buried treasure. And it would be just as difficult, according to Stephen Smartt of the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) collaboration, which also followed up this first signal.

“The sky maps we receive are 500 to 1000 square degrees, which is a big chunk of sky,” he says. “Only a small number of facilities are able to map out that area to faint limits.”

Most of the teams working on these optical follow-ups expect the counterparts to be faint, Smartt says, because if they were bright and common, then the currently running surveys would probably have spotted them already. 

“They could have already been discovered and we haven’t recognized them,” he says, “but most astronomers think that is unlikely.”  

Essentially, Smartt says, astronomers are looking for something bright, fleeting and newly formed—something that hasn’t shown up on previous sweeps of the survey area. Soares-Santos notes that astronomers are essentially looking for an object like a supernova, but fainter, redder and decaying faster.

“A supernova lasts about a month,” she says. “These last about 10 days. That’s why we want to be quick.” 

The initial sky map sent to astronomers showed two areas of high probability, one in the northern part of the region and one in the southern part. Pan-STARRS, based in Hawaii, concentrated on the northern one, finding roughly 60 transient objects and analyzing them. They discovered nothing unusual and, as more analysis was done on LIGO’s end, learned that they were looking in an area less likely to be the source. But Smartt’s very happy to keep following up these signals. 

“It was an amazing discovery,” he says. “These follow-ups are a high-risk project, and we don’t know if we will hit gold or find nothing.” 

But finding the sought-for objects would open up doors to new science, from probing the origin of heavy elements to high-energy physics and even constraining theories of modified gravity.

“The payoff is so great, it’s worth pursuing,” he says. 

DES scanned the southern area and similarly found nothing unusual. More detailed maps were provided later, showing that they too were off the mark somewhat, but as the system improves, this should be less of an issue. And there will be plenty of opportunity to put it through its paces in the future. 

“At first [DES-GW] was seen as high-risk,” Soares-Santos says. “Now the perception is that there is still a risk involved, but there will not be a lack of events. Everybody is very happy we did this.”

And the results of following these signals will be beneficial to astronomy as well. DES scientists will learn more about objects they rarely observe, like binary neutron stars, but they could also potentially use that information to aid in their main mission to learn more about dark energy. Soares-Santos explained that they could use neutron stars the same way they are using supernovae now, to study how the universe has expanded over time.

“In principle, if the rates are as high as we think they could be, we could have another probe for DES,” she says. 

Branchesi agreed that the system, though currently working well, will improve. In particular, the LIGO/Virgo team wants to get the alerts to astronomers sent out no more than a few minutes after gravitational waves are detected. And with the Advanced VIRGO detector coming online soon, the probability maps will get much more exact. 

But she says she was happy with how well such a vast and diverse group of physicists and astronomers worked together not only to detect gravitational waves for the first time, but also to follow up that detection with solid observation. That, she says, will only get better as well.

“There’s a lot of us, and it’s important that we work together,” she says.

LIGO is still holding an open call for astronomy collaborations that would like to look for optical counterparts to gravitational wave signals. It’s a chance, Holz says, to be part of something that has captivated the world.

“Our community is very excited, the broader scientific community is excited and the public is excited,” he says. “It’s similar to the Higgs discovery, but different, because it’s opening up an entirely new window. It’s enabling the first step in a whole new way to probe the universe, and the excitement is about where we're headed. It’s revolutionary.”

by Andre Salles at May 05, 2016 02:47 PM

May 03, 2016

Symmetrybreaking - Fermilab/SLAC

EXO-200 resumes its underground quest

The upgraded experiment aims to discover if neutrinos are their own antiparticles.

Science is often about serendipity: being open to new results, looking for the unexpected.

The dark side of serendipity is sheer bad luck, which is what put the Enriched Xenon Observatory experiment, or EXO-200, on hiatus for almost two years.

Accidents at the Department of Energy’s underground Waste Isolation Pilot Project (WIPP) facility near Carlsbad, New Mexico, kept researchers from continuing their search for signs of neutrinos and their antimatter pairs. Designed as storage for nuclear waste, the site had both a fire and a release of radiation in early 2014 in a distant part of the facility from where the experiment is housed. No one at the site was injured. Nonetheless, the accidents, and the subsequent efforts of repair and remediation, resulted in a nearly two-year suspension of the EXO-200 effort.

Things are looking up now, though: Repairs to the affected area of the site are complete, new safety measures are in place, and scientists are back at work in their separate area of the site, where the experiment is once again collecting data. That’s good news, since EXO-200 is one of a handful of projects looking to answer a fundamental question in particle physics: Are neutrinos and antineutrinos the same thing?

The neutrino that wasn't there

Each type of particle has its own nemesis: its antimatter partner. Electrons have positrons—which have the same mass but opposite electric charge—quarks have antiquarks and protons have antiprotons. When a particle meets its antimatter version, the result is often mutual annihilation. Neutrinos may also have antimatter counterparts, known as antineutrinos. However, unlike electrons and quarks, neutrinos are electrically neutral, so antineutrinos look a lot like neutrinos in many circumstances.

In fact, one hypothesis is that they are one and the same. To test this, EXO-200 uses 110 kilograms of liquid xenon (of its 200-kilogram total) as both a particle source and particle detector. The experiment hinges on a process called double beta decay, in which an isotope of xenon undergoes two simultaneous beta decays, spitting out two electrons and two antineutrinos. (“Beta particle” is a nuclear physics term for electrons and positrons.)

If neutrinos and antineutrinos are the same thing, sometimes the result will be neutrinoless double beta decay. In that case, the antineutrino from one decay is absorbed by the second decay, canceling out what would normally be another antineutrino emission. The challenge is to determine if neutrinos are there or not, without being able to detect them directly.

“Neutrinoless double beta decay is kind of a nuclear physics trick to answer a particle physics problem,” says Michelle Dolinski, one of the spokespeople for EXO-200 and a physicist at Drexel University. It’s not an easy experiment to do.

EXO-200 and similar experiments look for indirect signs of neutrinoless double beta decay. Most of the xenon atoms in EXO-200 are a special isotope containing 82 neutrons, four more than the most common version found in nature. The isotope decays by emitting two electrons, changing the atom from xenon into barium. Detectors in the EXO-200 experiment collect the electrons and measure the light produced when the beta particles are stopped in the xenon. These measurements together are what determine whether double beta decay happened, and whether the decay was likely to be neutrinoless.
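The energy measurement is the key discriminator: in neutrinoless double beta decay the two electrons carry the entire decay energy, the Q-value, which is about 2458 keV for xenon-136, while ordinary two-neutrino decays lose energy to the escaping antineutrinos and form a continuum at lower summed energies. A crude sketch of that selection logic (the resolution window below is an invented illustrative number, not the EXO-200 resolution):

```python
# Q-value for xenon-136 double beta decay, in keV (approximate).
Q_VALUE = 2458.0

def looks_neutrinoless(summed_electron_energy_kev, resolution_kev=40.0):
    """Crude classifier: neutrinoless candidates cluster in a narrow
    peak at the Q-value, since the two electrons carry all the energy;
    two-neutrino decays populate a continuum below Q.
    `resolution_kev` is an illustrative detector-resolution window."""
    return abs(summed_electron_energy_kev - Q_VALUE) < resolution_kev

print(looks_neutrinoless(2455.0))  # near the Q-value peak -> True
print(looks_neutrinoless(1800.0))  # continuum event -> False
```

Real analyses are, of course, far more involved, combining charge and scintillation light to sharpen the energy resolution and reject backgrounds.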

EXO-200 isn’t the only neutrinoless double beta decay experiment, but many of the others use solid detectors instead of liquid xenon. Dolinski got her start on the CUORE experiment, a large solid-state detector, but later changed directions in her research.

“I joined EXO-200 as a postdoc in 2008 because I thought that the large liquid detectors were a more scalable solution,” she says. "If you want a more sensitive liquid-state experiment, you can build a bigger tank and fill it with more xenon.”

Neutrinoless or not, double beta decay is very rare. A given xenon atom decays randomly, with an average lifetime of a quadrillion times the age of the universe. However, if you use a sufficient number of atoms, a few of them will decay while your experiment is running.

“We need to sample enough nuclei so that you would detect these putative decays before the researcher retires,” says Martin Breidenbach, one of the EXO-200 project leaders and a physicist at the Department of Energy’s SLAC National Accelerator Laboratory.
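A rough back-of-the-envelope estimate shows why so many nuclei are needed. This sketch uses illustrative numbers only: the half-life below is a stand-in of the right order of magnitude (roughly "a quadrillion times the age of the universe"), not an EXO-200 measurement.

```python
import math

AVOGADRO = 6.022e23

def expected_decays_per_year(mass_kg, molar_mass_g, half_life_yr):
    """Expected number of decays per year for `mass_kg` of an isotope
    with the given molar mass and half-life, using the exponential
    decay rate N * ln(2) / T_half."""
    n_atoms = mass_kg * 1000.0 / molar_mass_g * AVOGADRO
    return n_atoms * math.log(2) / half_life_yr

# ~110 kg of xenon-136 with a stand-in half-life of 1e25 years:
rate = expected_decays_per_year(110, 136, 1e25)
print(f"{rate:.0f} decays per year")  # a few tens per year
```

The point of the estimate: even with a half-life vastly longer than the age of the universe, a hundred-kilogram sample contains so many atoms (here about 5 x 10^26) that a handful of decays per year is still expected.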

But the experiment is not just detecting neutrinoless events. Heavier neutrinos mean more frequent decays, so measuring the rate reveals the neutrino mass — something very hard to measure otherwise.

Prior runs of EXO-200 and other experiments failed to see neutrinoless double beta decay, so either neutrinos and antineutrinos aren’t the same particle after all, or the neutrino mass is small enough to make decays too rare to be seen during the experiment’s lifetime. The current limit for the neutrino mass is less than 0.38 electronvolts—for comparison, electrons are about 500,000 electronvolts in mass.

SLAC National Accelerator Laboratory's Jon Davis checks the enriched xenon storage bottles before the refilling of the TPC.

Brian Dozier, Los Alamos National Laboratory

Working in the salt mines

Cindy Lin is a Drexel University graduate student who spends part of her time working on the EXO-200 detector at the mine. Getting to work is fairly involved.

“In the morning we take the cage elevator half a mile down to the mine,” she says. Additionally, she and the other workers at WIPP have to take a 40-hour safety training to ensure their wellbeing, and wear protective gear in addition to normal lab clothes.

“As part of the effort to minimize salt dust particles in our cleanroom, EXO-200 scientists also cover our hair and wear coveralls,” Lin adds.

The sheer amount of earth over the detector shields it from electrons and other charged particles from space, which would make it too hard to spot the signal from double beta decay. WIPP is carved out of a sodium chloride deposit—the same stuff as table salt—that has very little uranium or the other radioactive minerals you find in solid rock caverns. But it has its drawbacks, too.

“Salt is very dynamic: It moves at the level of centimeters a year, so you can't build a nice concrete structure,” says Breidenbach. To compensate, the EXO-200 team has opted for a more modular design.

The inadvertent shutdown provided extra challenges. EXO-200, like most experiments, isn’t well suited for being neglected for more than a few days at a time. However, Lin and other researchers worked hard to get the equipment running for new data this year, and the downtime also allowed researchers to install some upgraded equipment.

The next phase of the experiment, nEXO, is at a conceptual stage based on what has been learned from EXO-200. Experimenters are considering the benefits of moving the project deeper underground, perhaps to a facility like the Sudbury Neutrino Observatory (SNOLAB) in Canada. Dolinski is optimistic that if there are any neutrinoless double beta decays to see, nEXO or similar experiments should see them in the next 15 years or so.

Then, maybe we’ll know if neutrinos and antineutrinos are the same and find out more about these weird low-mass particles.

by Matthew R. Francis at May 03, 2016 04:28 PM

Axel Maas - Looking Inside the Standard Model

Digging into a particle
This time I would like to write about a new paper which I have just put out. In this paper, I investigate a particular class of particles.

This class of particles is actually quite similar to the Higgs boson: the particles are bosons, and they have the same spin as the Higgs boson, which is zero. Such particles are called scalars. These particular scalars also carry the same type of charge: they interact with the weak interaction.

But there are fundamental differences as well. One is that I have switched off the back-reaction between these particles and the weak interaction: the scalars are affected by the weak interaction, but they do not influence the W and Z bosons. I have also switched off the interactions between the scalars themselves. Therefore, no Brout-Englert-Higgs effect occurs. On the other hand, I have looked at the scalars for several different masses. This set of conditions is known as quenched, because these interactions are shut off (quenched), and the only feature left to manipulate is the mass.

Why did I do this? There are two reasons.

One is quite technical. Even in this quenched situation, the scalars are affected by quantum corrections, the so-called radiative corrections. Due to them, the mass changes, and so does the way the particles move. These effects are quantitative, and this is precisely the reason to study them in this setting: in the quenched case it is much easier to determine their quantitative behavior than in the full theory with back-reactions, which is an important part of our research. I have learned a lot about these quantitative effects and am now much more confident in how they behave. This will be very valuable in studies beyond the quenched case. As expected, not many surprises were found; hence, it was essentially a necessary but unspectacular numerical exercise.

Much more interesting was the second aspect. When quenched, this theory becomes very different from the normal standard model. Without the Brout-Englert-Higgs effect, the theory actually looks very much like the strong interaction. In particular, in this case the scalars would be confined in bound states, just like quarks are in hadrons. How this occurs is not really understood, and I wanted to study it using these scalars.

Justifiably, you may ask why I would do this. Why not just look at the quarks themselves? There is a conceptual and a technical reason. The conceptual reason is that quarks are fermions. Fermions have non-zero spin, in contrast to scalars, and this makes them mathematically more complicated. These complications mix in with the original question about confinement; for scalars the two issues are disentangled. Hence, by choosing scalars, these complications are avoided. This is also one of the reasons to look at the quenched case: the back-reaction, whether of quarks or of scalars, obscures the interesting features. Thus, quenching and scalars together isolate the interesting feature.

The technical reason is that the investigations were performed using simulations, and fermions are much, much more expensive than scalars in such simulations in terms of computer time. With scalars it is therefore possible to do much more at the same computational expense. Thus, simplicity and cost made scalars attractive for this purpose.

Did it work? Well, no. At least not in any simple form. The original anticipation was that confinement should be imprinted into how the scalars move. This was not seen. Though the scalars are very peculiar in their properties, they show confinement in no obvious way. It may still be that there is an indirect way, but so far nobody has any idea how. Though disappointing, this is not bad. It only tells us that our simple ideas were wrong, and it requires us to think harder about the problem.

An interesting observation could be made nonetheless. As said above, the scalars were investigated for different masses. These masses are, in a sense, not the observed masses: what they really are is the mass of the particle before quantum effects are taken into account. These quantum effects change the mass, and the changes were also measured. Surprisingly, the measured mass was larger than the input mass; the interactions created mass even when the input mass was zero. The strong interaction is known to do this. However, it was believed that this feature is strongly tied to fermions, and for scalars it was not expected to happen, at least not in the observed way. In fact, the generated mass is even of a similar size as for the quarks. This is surprising, and it implies that this kind of interaction generically introduces a mass scale.
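In lattice studies like this, the "measured mass" is typically read off from how a correlation function falls with Euclidean time. The following toy sketch (my own illustration, not the paper's actual analysis) builds a correlator with a known decay mass and recovers it from the effective mass log(C(t)/C(t+1)):

```python
import math

def effective_mass(correlator):
    """Effective mass m_eff(t) = log(C(t) / C(t+1)); for a single
    exponential C(t) ~ exp(-m t) this plateaus at the particle mass."""
    return [math.log(correlator[t] / correlator[t + 1])
            for t in range(len(correlator) - 1)]

# Toy correlator: the input ("bare") mass is zero, but a crude
# stand-in for quantum corrections makes the exponential decay
# with a different, "measured" mass.
input_mass, measured_mass = 0.0, 0.4   # lattice units, illustrative
corr = [math.exp(-measured_mass * t) for t in range(16)]

m_eff = effective_mass(corr)
print(round(m_eff[5], 3))  # plateau at the measured mass, not the input mass
```

In a real simulation the correlator is noisy and contains excited-state contributions, so one looks for a plateau in m_eff(t) at large t rather than reading off a single point.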

This triggered for me the question of whether the mass scale also survives once the back-coupling is switched on again. If it remains even when there is a Brout-Englert-Higgs effect, this could have interesting implications for the mass of the Higgs. But this remains to be seen; it may well be that it does not endure outside the quenched case.

by Axel Maas at May 03, 2016 04:21 PM