Particle Physics Planet

April 01, 2015

Jester - Resonaances

What If, Part 1
This is the do-or-die year, so Résonaances will be dead serious. This year, no stupid jokes on April Fools' day: no Higgs in jail, no loose cables, no discovery of supersymmetry, or such. Instead, I'm starting a new series, "What If", inspired by XKCD. In this series I will answer questions that everyone is dying to know the answer to. The first of these questions is

If HEP bloggers were Muppets,
which Muppet would they be? 

Here is the answer.

  • Gonzo the Great: Lubos@Reference Frame (odd-numbered days)
    The one true artist. Not treated seriously by other Muppets, but adored by chickens.
  • Animal: Lubos@Reference Frame (even-numbered days)
    My favorite Muppet. Pure mayhem and destruction. Has only two modes: beat it, or eat it.
  • Swedish Chef: Tommaso@Quantum Diaries Survivor
    The Muppet with a penchant for experiment. You don't understand what he says but it's always amusing nonetheless.
  • Kermit the Frog: Matt@Of Particular Significance
    Born Muppet leader, though not clear if he really wants the job.
  • Miss Piggy: Sabine@Backreaction
    Not the only female Muppet, but certainly the best known. Admired for her stage talents but most of all for her punch.
  • Rowlf: Sean@Preposterous Universe
    One-Muppet orchestra. Impressive both as an artist and as a comedian, though some complain he's gone to the dogs.

  • Statler and Waldorf: Peter@Not Even Wrong
    Constantly heckling other Muppets from the balcony, yet every week back for more.
  • Fozzie Bear: Jester@Résonaances
    Failed stand-up comedian. Always stressed that he may not be funny after all.
If you have a match for Beaker, Bunsen Honeydew, or Dr Strangepork, let me know in the comments.

In preparation:
-If physicists were smurfs...
-If physicists lived in Middle-earth... 

-If physicists were the Hobbit's dwarves... 
and more. 

by Jester ( at April 01, 2015 09:35 AM

Quantum Diaries

LHC Run 2 cancelled, CERN closes doors

After a three-week review, CERN Director General Rolf-Dieter Heuer has announced that the LHC will not have another run and that the international laboratory will be closing its doors to science. The revelation follows an intense week of discussion, analysis, and rumour mongering.

While deleting some old files from the myriad of hard drives at the CERN Computing Centre, IT support found some data nobody had seen before. “It was just sitting there on a few hard drives in the corner” said Linus Distro, from IT Support. “So I told the analysts to take a look at it and the rest is history!”

The single event that definitively proved the existence of supersymmetry (BBC)

It turns out the rest is history, because these few exabytes of data held the answers to all of the open questions of physics. After discovering a staggering 327 new particles, the physicists managed to prove the existence of supersymmetry, extra dimensions, dark matter, micro black holes, technicolor, and top quark condensates. But not string theory, that’s just silly.

Theorist John Ellis commented “I never thought I’d see this in my lifetime. I mean, I expected to see supersymmetry and dark matter, but now we have technicolor too. It’s quite simply amazing. We’ve been sitting on this data for years without even knowing it.”

Due to take on the role of Director General in 2016, Fabiola Gianotti said “Now that physics is finished I’m not sure what to do. I was expecting a long and industrious career at the lab, now I can retire early and buy a nice beach house near Napoli.”

The situation for universities across the world is less clear. PhD students are expected to have up to seven theses each to cope with all the extra discoveries. Professors are starting to panic, trying to save as much of their funding as possible. There has been a sudden increase in the number of conferences in Hawai’i, Cuba, and the Bahamas, as postdocs squeeze as much opportunity out of the final weeks of their careers as possible.

The ALICE Control Room will be repurposed into a massive Call of Duty multiplayer facility (ALICE Matters)

“The atmosphere on site is incredible!” shouted one slightly inebriated physicist, “People say we should measure everything down to the 6th decimal place, but to be honest we’ll probably just stop after four.”

Famous atheist Richard Dawkins has leapt on the opportunity to prove the non-existence of god. “If those files answer all the questions physics has left then surely it proves there is no god,” he tweeted last week. And he’s not alone. Thousands of people across the globe are finally realising that with no questions left to answer, they are completely intellectually and spiritually satisfied for the first time in history, and are busy validating their own world views.

Among the top answers are the following: Schrödinger’s cat is alive and well and living in Droitwich, god plays dice on Tuesdays, light is a particle and a wave and Canadian (and hopes you’re having a good day), electrons are strawberry flavoured, Leibniz and Newton were good friends who discovered calculus together, and if you could ride a beam of light it would be totally freaking awesome.

While the physicists may not have much to do anymore, the number of visitors has increased by 3500% in the past two weeks. People from all over the world are descending upon CERN to experience extra dimensions and parallel universes. For 20 CHF a family can visit a parallel universe of their choosing for up to two weeks. Head of CERN Visits Mick Storr said “It’s a great time to visit CERN. Finally we know where we came from, where we’re going, and what we’re made of. Now I just need to work out what to have for dinner.”

Early crowds gather to see the creation of the daily 14:00 wormhole at CMS. (CERN)

It’s unclear what will happen next. There are certainly questions about how best to use the extra dimensions, but the biggest problem is a social one. Nobody knows what will happen to the thousands of physicists who will have to re-enter the “real world”. It’s a scary place for some, and physicists lack basic transferable skills such as burger flipping and riot control.

Whatever happens, everyone will look back on the Winter of 2015 as the most exciting time in science history. This year’s Nobel Prize ceremony will be a complicated matter indeed.

by Aidan Randle-Conde at April 01, 2015 09:25 AM

Peter Coles - In the Dark


The University of Sussex is closing down for a week to allow people to take a breather around Easter weekend. After this afternoon’s staff meeting, I will be heading off for a week’s holiday and probably won’t be blogging until I get back, primarily because I won’t have an internet connection where I’m going. That’s a deliberate decision, by the way….

So, as the saying goes, there will now follow a short intermission….

PS. The suitably restful and very typical bit of 1950s “light” music accompanying this is called Pastoral Montage, and it was written by South African born composer Gideon Fagan.


by telescoper at April 01, 2015 09:25 AM

The n-Category Cafe

Split Octonions and the Rolling Ball

You may enjoy these webpages:

because they explain a nice example of the Erlangen Program more tersely — and I hope more simply — than before, with the help of some animations made by Geoffrey Dixon using WebGL. You can actually get a ball to roll in a way that illustrates the incidence geometry associated to the exceptional Lie group \(\mathrm{G}_2\)!

Abstract. Understanding exceptional Lie groups as the symmetry groups of more familiar objects is a fascinating challenge. The compact form of the smallest exceptional Lie group, \(\mathrm{G}_2\), is the symmetry group of an 8-dimensional nonassociative algebra called the octonions. However, another form of this group arises as symmetries of a simple problem in classical mechanics! The space of configurations of a ball rolling on another ball without slipping or twisting defines a manifold where the tangent space of each point is equipped with a 2-dimensional subspace describing the allowed infinitesimal motions. Under certain special conditions, the split real form of \(\mathrm{G}_2\) acts as symmetries. We can understand this using the quaternions together with an 8-dimensional algebra called the ‘split octonions’. The rolling ball picture makes the geometry associated to \(\mathrm{G}_2\) quite vivid. This is joint work with James Dolan and John Huerta, with animations created by Geoffrey Dixon.

I’m going to take this show on the road and give talks about it at Penn State, the University of York (virtually), and elsewhere. And there’s no shortage of material to read for more details. John Huerta has blogged about this work here:

* John Huerta, G2 and the rolling ball.

and I have a 5-part series where I gradually lead up to the main idea, starting with easier examples:

* John Baez, Rolling circles and balls.

There’s also plenty of actual papers:

So, enjoy!

by john ( at April 01, 2015 01:23 AM

March 31, 2015

Christian P. Robert - xi'an's og

Le Monde puzzle [#905]

A recursive programming Le Monde mathematical puzzle:

Given n tokens with 10≤n≤25, Alice and Bob play the following game: the first player draws an integer 1≤m≤6 at random. This player can then take 1≤r≤min(2m,n) tokens. The next player is then free to take 1≤s≤min(2r,n-r) tokens, and so on. The player taking the last tokens is the winner. There is a winning strategy for Alice both if she starts with m=3 and if Bob starts with m=2. Deduce the value of n.

Although I first wrote a brute force version of the following code, a moderate amount of thinking leads to the conclusion that the player facing n remaining tokens after an adversary's move of m tokens always wins when 2m≥n, simply by taking all n remaining tokens:


 if (n>2*m){
   for (i in 1:(2*m))
     if (optim(n-i,i)==0) return(1)
   return(0)}

eliminating solutions whose dividers are not solutions themselves:

for (i in 3:6){

which leads to the output

> subs=rep(0,16)
> for (n in 10:25) subs[n-9]=optim(n,3)
> for (n in 10:25) if (subs[n-9]==1) subs[n-9]=1-optim(n,2)
> subs
 [1] 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
> (10:25)[subs==1]
[1] 18

Ergo, the number of tokens is 18!
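For completeness, the whole strategy search fits in a few memoized lines of Python (a transcription of the reasoning above; `wins` plays the role of the R `optim` function):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n, m):
    # True when the player to move, facing n tokens after the opponent
    # took (or drew) m, has a winning strategy: some legal take r
    # either clears the pile or leaves the opponent in a losing spot.
    return any(r == n or not wins(n - r, r)
               for r in range(1, min(2 * m, n) + 1))

# Alice wins moving first with m=3, while the first mover loses with
# m=2, for exactly one n in 10..25:
print([n for n in range(10, 26) if wins(n, 3) and not wins(n, 2)])  # [18]
```

The recursion simply encodes the rule that a position is winning when some legal take either empties the pile or leaves the opponent in a losing position.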

Filed under: Books, Kids, R, Statistics, University life Tagged: Le Monde, mathematical puzzle, R, recursive function

by xi'an at March 31, 2015 10:15 PM

astrobites - astro-ph reader's digest

Falling stones paint it black
  • Title: Darkening of Mercury’s surface by cometary carbon
  • Authors: Megan Bruck Syal, Peter H. Schultz, Miriam A. Riner
  • First author’s institution: Lawrence Livermore National Laboratory
  • Status of the Paper: Published in Nature
  • Image: By NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington. Edited version of Image:Mercury in color – Prockter07.jpg by Papa Lima Whiskey. (NASA/JPL [1]) [Public domain], via Wikimedia Commons


What’s the issue?


Figure 1: The red curve illustrates the reflectance of Moon-like material mixed with organic-free quartz sand, and the blue curve shows the reflectance of the material mixed with organics (sugar) after the impact of the projectile. The impact produces carbon in the material mixed with organics, lowering its reflectance.

Today’s Astrobite presents an explanation of a feature of the Solar System’s smallest and innermost planet: the darkening of Mercury. So far, astronomers could get no satisfaction in explaining the fact that Mercury’s surface is darker than the Moon’s. In principle, iron is the most common cause of darkening for airless bodies. The problem is that Mercury’s surface contains less iron than the Moon’s. Thus, there has to be a material other than iron that “paints” Mercury black.

Shooting on a Range and Gambling in Monte Carlo for Science

The authors of the article propose that carbon, rather than iron, can darken the surface of Mercury sufficiently. Indeed, they performed experiments at NASA’s Ames Vertical Gun Range, where they shot projectiles onto Moon-like material mixed with or without organics. When the projectile hits the material with organics, the heat induced by the impact causes the formation of carbon. As you can see in Figure 1 (Figure 3 in the paper), the reflectance of light from the surface is significantly lower when carbon is present.

At this point you may ask: Fine, but why should there be more carbon on Mercury than on the Moon? The explanation of the authors is based on two observations.

  • Comets consist, on average, of about 18% carbon.
  • The number of cometary impacts per unit area grows roughly as the inverse of the radial distance from the Sun, so the closer a body is to the Sun, the more impacts per unit area it receives.

Considering that enough of the impacting material is retained on Mercury, the authors suggest that the larger number of cometary impacts enriches the surface of Mercury in carbon more than the surface of the Moon. To be precise, they consider only small meteorites, so-called micrometeorites, and they assume a constant spherical size of 0.25 cm and a constant speed of 20 km/s. You may argue now that this is a drastic simplification, since meteorites have varying sizes and correspondingly higher speeds. The authors are aware of that, but they argue that larger objects have such high speeds that they will not be captured by Mercury’s gravitational field. Thus the impact of larger objects is negligible, and micrometeorites are the dominant impactors.

Figure 2: The plot illustrates the probability distribution (black) and the amount of mass retained on the surface obtained from the resolved micrometeorites for Mercury (dark blue points and curve) and the Moon (red points and curve) for different impact angles, measured with respect to the horizontal axis. In comparison, the results obtained from tracer particles for Mercury are plotted as light blue diamonds.

The authors test their idea with a Monte Carlo code, in which they compute the percentage of micrometeorites retained on the Moon and on Mercury for different impact angles. In Figure 2 of this Astrobite (Figure 1 in the paper) you can see the probability of a micrometeorite arriving at a given impact angle, together with the mass fraction of the impactors retained on Mercury. The impactors (micrometeorites) are resolved by several grid cells in the code. The drop in the retention fraction at 30° reflects the fact that, at this angle, the micrometeorite carries relatively more energy compared to the target than at other angles. The results are similar for tracer particles that follow the movement of the impactors during the simulation. The authors explain the differences at 15° with asymmetric shock conditions, under which some mass of the impactors does not stay on the surface; in contrast to the first method, however, this process is not resolved when the micrometeorites are followed as single objects by tracer particles.
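To get a feel for this kind of Monte Carlo bookkeeping, here is a purely illustrative Python sketch: impact angles for an isotropic flux follow dP = sin(2θ) dθ (peaking at 45°), and one averages a retention curve over that distribution. The retention curve below is invented for illustration only; the paper's actual retention fractions come from resolved impact simulations.

```python
import math
import random

random.seed(1)

def sample_impact_angle():
    # isotropic-flux impact angles follow dP = sin(2*theta) d(theta);
    # inverting the CDF P(theta) = sin^2(theta) gives this sampler
    return math.asin(math.sqrt(random.random()))

def retained_fraction(theta):
    # HYPOTHETICAL retention curve, for illustration only: grazing
    # impacts (small theta) keep less mass than steep ones
    return 0.5 + 0.4 * math.sin(theta)

n = 100_000
mean = sum(retained_fraction(sample_impact_angle()) for _ in range(n)) / n
print(f"mean retained mass fraction: {mean:.3f}")
```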


Altogether, you can see that a sufficient amount of mass from the impactors stays on the surface (on average 83% for Mercury and 63% for the Moon), and the authors conclude that approximately 50 times more carbon-rich micrometeorites are delivered to Mercury than to the Moon. Together with the result from the shooting experiment that carbon darkens the surface efficiently, micrometeorite impacts may cause the darkening of Mercury. In other words: falling stones paint it black.


by Michael Küffmeier at March 31, 2015 10:06 PM

Quantum Diaries

CERN Had Dark Energy All Along; Uses It To Fuel Researchers

I don’t usually get to spill the beans on a big discovery like this, but this time, I DO!

CERN Had Dark Energy All Along!!

That’s right. That mysterious energy making up ~68% of the universe was being used all along at CERN! Being based at CERN now, I’ve had a first hand glimpse into the dark underside of Dark Energy. It all starts at the Crafted Refilling of Empty Mugs Area (CREMA), pictured below.

One CREMA station at CERN


Researchers and personnel seem to stumble up to these stations at almost all hours of the day, looking very weary and dazed. They place a single cup below the spouts, and out comes a dark and eerie looking substance, which is then consumed. Some add a bit of milk for flavor, but all seem perkier and refreshed after consumption. Then they disappear from whence they came. These CREMA stations seem to be everywhere, from control rooms to offices, and are often found with groups of people huddled around them. In fact, they seem to exert a force on all who use them, keeping them in stable orbits about the stations.

In order to find out a little bit more about this mysterious substance and its dispersion, I asked a graduating student, who wished to remain unnamed, a little bit about their experiences:

Q. How much of this dark stuff do you consume on a daily basis?

A. At least one cup in the morning to fuel up, I don’t think I could manage to get to lunchtime without that one. Then multiple other cups distributed over the day, depending on the workload. It always feels like they help my thinking.

Q. Do you know where it comes from?

A. We have a machine in our office which takes capsules. I’m not 100% sure where those capsules are coming from, but they seem to restock automatically, so no one ever asked.

Q. Have you been hiding this from the world on purpose?

A. Well our stock is important to our group, if we would just share it with everyone around we could run out. And no one of us can make it through the day without. We tried alternatives, but none are so effective.

Q. Do you remember the first time you tried it?

A. Yes, they hooked me on it in university. From then on nothing worked without!

Q. Where does CERN get so much of it?

A. I never thought about this question. I think I’m just happy that there is enough for everyone here, and physicists need quite a lot of it to work.

In order to gauge just how much of this Dark Energy is being consumed, I studied the flux of people from the cafeteria as a function of time with cups of Dark Energy. I’ve compiled the results into the Dark Energy Consumption As Flux (DECAF) plot below.

Dark Energy Consumption as Flux plot. Taken March 31, 2015. Time is given in 24h time. Errors are statistical.


As the DECAF plot shows, there is a large spike in consumption, particularly after lunch. There is a clear peak at times after 12:20 and before 13:10. Whether there is an even larger peak hiding above 13:10 is not known, as the study stopped due to my advisor asking “shouldn’t you be doing actual work?”

There is an irreducible background of Light Energy in the cups used for Dark Energy, particularly of the herbal variety. Fortunately, there is often a dangly tag hanging off the cup to indicate to others that they are not using the precious Dark Energy supply, providing a clear signal for this study to eliminate the background.

While illuminating, this study still does not uncover the exact nature of Dark Energy, though it is clear that it is fueling research here and beyond.

by Adam Davis at March 31, 2015 08:28 PM

Emily Lakdawalla - The Planetary Society Blog

Revitalized 0.81m telescope studying properties of NEOs
Thanks to a new focal reducer and re-aluminized mirror from a Shoemaker NEO grant, a 0.81-meter telescope in Italy is performing astrometric follow-up observations and physical studies of asteroids.

March 31, 2015 04:04 PM

Lubos Motl - string vacua and pheno

Quantum gravity from quantum error-correcting codes?
Guest blog by Dr Beni Yoshida, quantum information fellow at Caltech

The lessons we learned from the Ryu-Takayanagi formula, the firewall paradox, and the ER=EPR conjecture have convinced us that quantum information theory can become a powerful tool to sharpen our understanding of various problems in high-energy physics. But many of the concepts utilized so far rely on entanglement entropy and its generalizations, quantities developed by von Neumann more than 60 years ago. We live in the 21st century. Why don’t we use more modern concepts, such as the theory of quantum error-correcting codes?
Off-topic, LHC: CERN sent quite some current to the shorted segment of the circuit, apparently melted and destroyed the offending metallic piece in a diode box, and miraculously cured the LHC! Restart could be within days. LM
In a recent paper with Daniel Harlow, Fernando Pastawski and John Preskill, we have proposed a toy model of the AdS/CFT correspondence based on quantum error-correcting codes. Fernando has already written how this research project started after a fateful visit by Daniel to Caltech and John’s remarkable prediction in 1999. In this post, I hope to write an introduction which may serve as a reader’s guide to our paper, explaining why I’m so fascinated by the beauty of the toy model.

This is certainly a challenging task because I need to make it accessible to everyone while explaining real physics behind the paper. My personal philosophy is that a toy model must be as simple as possible while capturing key properties of the system of interest. In this post, I will try to extract some key features of the AdS/CFT correspondence and construct a toy model which captures these features. This post may be a bit technical compared to other recent posts, but anyway, let me give it a try...

Bulk locality paradox and quantum error-correction

The AdS/CFT correspondence says that there is some kind of correspondence between quantum gravity on \((d+1)\)-dimensional asymptotically-AdS space and \(d\)-dimensional conformal field theory on its boundary. But how are they related?

The AdS-Rindler reconstruction tells us how to “reconstruct” a bulk operator from boundary operators. Consider a bulk operator \(\phi\) and a boundary region A on a hyperbolic space (in other words, a negatively-curved plane). On a fixed time-slice, the causal wedge of A is a bulk region enclosed by the geodesic line of A (a curve with a minimal length). The AdS-Rindler reconstruction says that \(\phi\) can be represented by some integral of local boundary operators supported on A if and only if \(\phi\) is contained inside the causal wedge of A. Of course, there are multiple regions A,B,C,… whose causal wedges contain \(\phi\), and the reconstruction should work for any such region.

The Rindler-wedge reconstruction

That a bulk operator in the causal wedge can be reconstructed by local boundary operators, however, leads to a rather perplexing paradox in the AdS/CFT correspondence. Consider a bulk operator \(\phi\) at the center of a hyperbolic space, and split the boundary into three pieces, A, B, C. Then the geodesic line for the union of BC encloses the bulk operator, that is, \(\phi\) is contained inside the causal wedge of BC. So, \(\phi\) can be represented by local boundary operators supported on BC. But the same argument applies to AB and CA, implying that the bulk operator \(\phi\) corresponds to local boundary operators which are supported inside AB, BC, and CA simultaneously. It would seem then that the bulk operator \(\phi\) must correspond to an identity operator times a complex phase. In fact, similar arguments apply to any bulk operators, and thus, all the bulk operators must correspond to identity operators on the boundary. Then, the AdS/CFT correspondence seems so boring...

The bulk operator at the center is contained inside causal wedges of BC, AB, AC. Does this mean that the bulk operator corresponds to an identity operator on the boundary?

Almheiri, Dong, and Harlow have recently proposed an intriguing way of reconciling this paradox with the AdS/CFT correspondence [see also Polchinski et al., TRF]. They proposed that the AdS/CFT correspondence can be viewed as a quantum error-correcting code. Their idea is as follows. Instead of \(\phi\) corresponding to a single boundary operator, \(\phi\) may correspond to different operators in different regions, say \(O_{AB}\), \(O_{BC}\), \(O_{CA}\) living in AB, BC, CA respectively. Even though \(O_{AB}\), \(O_{BC}\), \(O_{CA}\) are different boundary operators, they may be equivalent inside a certain low energy subspace on the boundary.

This situation resembles the so-called quantum secret-sharing code. The quantum information at the center of the bulk cannot be accessed from any single party A, B or C because \(\phi\) does not have representation on A, B, or C. It can be accessed only if multiple parties cooperate and perform joint measurements. It seems that a quantum secret is shared among three parties, and the AdS/CFT correspondence somehow realizes the three-party quantum secret-sharing code!

Entanglement wedge reconstruction?

Recently, causal wedge reconstruction has been further generalized to the notion of entanglement wedge reconstruction. Imagine we split the boundary into four pieces A,B,C,D such that A,C are larger than B,D. Then the geodesic lines for A and C do not form the geodesic line for the union of A and C because we can draw shorter arcs by connecting endpoints of A and C, which form the global geodesic line. The entanglement wedge of AC is a bulk region enclosed by this global geodesic line of AC. And the entanglement wedge reconstruction predicts that \(\phi\) can be represented as an integral of local boundary operators on AC if and only if \(\phi\) is inside the entanglement wedge of AC [1].

Causal wedge vs entanglement wedge.
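The competition between the two candidate surfaces is easy to check numerically with the standard regularized geodesic length on the hyperbolic disk, \(2\log\sin(\theta/2)\) for a geodesic anchored at the endpoints of a boundary arc of angular size \(\theta\) (the cutoff-dependent constant drops out because both candidates have four endpoints in total). A minimal sketch, with my own helper names:

```python
import math

def reg_len(theta):
    # regularized length of the geodesic anchored at the two endpoints
    # of a boundary arc of angular size theta (Poincare disk); the
    # cutoff-dependent constant is dropped since it cancels below
    return 2 * math.log(math.sin(theta / 2))

def wedge_connected(a, b, c, d):
    # boundary split into arcs A, B, C, D of sizes a, b, c, d.
    # The minimal surface for A∪C is the shorter of two candidates:
    # {γ_A, γ_C} (disconnected wedge) vs {γ_B, γ_D} (connected wedge).
    assert abs(a + b + c + d - 2 * math.pi) < 1e-9
    return reg_len(b) + reg_len(d) < reg_len(a) + reg_len(c)

# large A and C: their joint entanglement wedge bridges the gap
print(wedge_connected(0.8 * math.pi, 0.2 * math.pi,
                      0.8 * math.pi, 0.2 * math.pi))   # True
# small A and C: the wedge splits into two disjoint pieces
print(wedge_connected(0.2 * math.pi, 0.8 * math.pi,
                      0.2 * math.pi, 0.8 * math.pi))   # False
```

This reproduces the statement in the text: only when A and C are large enough does the global geodesic pair enclose the central bulk region.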

Building a minimal toy model; the five-qubit code

Okay, now let’s try to construct a toy model which admits causal and entanglement wedge reconstructions of bulk operators. Because I want a simple toy model, I take a rather bold assumption that the bulk consists of a single qubit while the boundary consists of five qubits, denoted by A, B, C, D, E.

Reconstruction of a bulk operator in the “minimal” model.

What does causal wedge reconstruction teach us in this minimal setup of five and one qubits? First, we split the boundary system into two pieces, ABC and DE and observe that the bulk operator \(\phi\) is contained inside the causal wedge of ABC. From the rotational symmetries, we know that the bulk operator \(\phi\) must have representations on ABC, BCD, CDE, DEA, EAB. Next, we split the boundary system into four pieces, AB, C, D and E, and observe that the bulk operator \(\phi\) is contained inside the entanglement wedge of AB and D. So, the bulk operator \(\phi\) must have representations on ABD, BCE, CDA, DEB, EAC. In summary, we have the following:
The bulk operator must have representations on R if and only if R contains three or more qubits.
This is the property I want my toy model to possess.

What kinds of physical systems have such a property? Luckily, we quantum information theorists know the answer; the five-qubit code. The five-qubit code, proposed here and here, has an ability to encode one logical qubit into five-qubit entangled states and corrects any single qubit error. We can view the five-qubit code as a quantum encoding isometry from one-qubit states to five-qubit states:\[

\alpha | 0 \rangle + \beta | 1 \rangle \rightarrow \alpha | \tilde{0} \rangle + \beta | \tilde{1} \rangle

\] where \(| \tilde{0} \rangle\) and \(| \tilde{1} \rangle\) are the basis for a logical qubit. In quantum coding theory, logical Pauli operators \(\bar{X}\) and \(\bar{Z}\) are Pauli operators which act like Pauli X (bit flip) and Z (phase flip) on a logical qubit spanned by \(| \tilde{0} \rangle\) and \(| \tilde{1} \rangle\). In the five-qubit code, for any set of qubits R with volume 3, some representations of logical Pauli X and Z operators, \(\bar{X}_{R}\) and \(\bar{Z}_{R}\), can be found on R. While \(\bar{X}_{R}\) and \(\bar{X}_{R'}\) are different operators for \(R \not= R'\), they act exactly in the same manner on the codeword subspace spanned by \(| \tilde{0} \rangle\) and \(| \tilde{1} \rangle\). This is exactly the property I was looking for.
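This property is small enough to verify by brute force over the stabilizer group. The sketch below (my own helper names; phases are ignored, which is harmless for tracking supports) multiplies the logical Pauli operators of the five-qubit code by all 16 stabilizer elements and looks for a representative supported inside a given region:

```python
from itertools import combinations, product

# Stabilizer generators of the [[5,1,3]] five-qubit code; each Pauli
# string becomes a pair of GF(2) vectors (x | z), phases ignored.
def pauli(s):
    return ([int(c in 'XY') for c in s], [int(c in 'ZY') for c in s])

def multiply(p, q):          # Pauli product modulo phase = bitwise XOR
    return ([a ^ b for a, b in zip(p[0], q[0])],
            [a ^ b for a, b in zip(p[1], q[1])])

gens = [pauli(g) for g in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
logical_X, logical_Z = pauli('XXXXX'), pauli('ZZZZZ')

def support(p):
    return {i for i in range(5) if p[0][i] or p[1][i]}

def representable(log, region):
    # Multiply the logical operator by all 16 stabilizer elements and
    # look for a representative supported entirely inside `region`.
    for bits in product((0, 1), repeat=4):
        rep = log
        for b, g in zip(bits, gens):
            if b:
                rep = multiply(rep, g)
        if support(rep) <= set(region):
            return True
    return False

# Any 3 of the 5 "boundary" qubits carry both logical operators ...
assert all(representable(logical_X, R) and representable(logical_Z, R)
           for R in combinations(range(5), 3))
# ... while no 2 qubits do (consistent with code distance 3).
assert not any(representable(logical_X, R) or representable(logical_Z, R)
               for R in combinations(range(5), 2))
print("logical qubit is recoverable from any 3 of the 5 qubits")
```

The asserts confirm the boxed statement: representations on R exist exactly when R contains three or more qubits.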

Holographic quantum error-correcting codes

We just found possibly the smallest toy model of the AdS/CFT correspondence, the five-qubit code! The remaining task is to construct a larger model. For this goal, we view the encoding isometry of the five-qubit code as a six-leg tensor. The holographic quantum code is a network of such six-leg tensors covering a hyperbolic space where each tensor has one open leg. These open legs on the bulk are interpreted as logical input legs of a quantum error-correcting code while open legs on the boundary are identified as outputs where quantum information is encoded. Then the entire tensor network can be viewed as an encoding isometry.

The six-leg tensor has some nice properties. Imagine we inject some Pauli operator into one of six legs in the tensor. Then, for any given choice of three legs, there always exists a Pauli operator acting on them which counteracts the effect of the injection. An example is shown below:

In other words, if an operator is injected from one tensor leg, one can “push” it into other three tensor legs.

Finally, let’s demonstrate causal wedge reconstruction of bulk logical operators. Pick an arbitrary open tensor leg in the bulk and inject some Pauli operator into it. We can “push” it into three tensor legs, which are then injected into neighboring tensors. By repeatedly pushing operators to the boundary in the network, we eventually have some representation of the operator living on a piece of boundary region A. And the bulk operator is contained inside the causal wedge of A. (Here, the length of the curve can be defined as the number of tensor legs cut by the curve). You can also push operators into the boundary by choosing different tensor legs which lead to different representations of a logical operator. You can even have a rather exotic representation which is supported non-locally over two disjoint pieces of the boundary, realizing entanglement wedge reconstruction.

Causal wedge and entanglement wedge reconstruction.

What’s next?

This post is already pretty long and I need to wrap it up…

Shor’s quantum factoring algorithm is a revolutionary invention which opened a whole new research avenue of quantum information science. It is often forgotten, but the first quantum error-correcting code is another important invention by Peter Shor (and independently by Andrew Steane), one which enabled a proof that quantum computation can be performed fault-tolerantly. The theory of quantum error-correcting codes has found interesting applications in studies of condensed matter physics, such as topological phases of matter. Perhaps then, quantum coding theory will also find applications in high energy physics.

Indeed, many interesting open problems are awaiting us. Is entanglement wedge reconstruction a generic feature of tensor networks? How do we describe black holes by quantum error-correcting codes? Can we build a fast scrambler by tensor networks? Is entanglement a wormhole (or maybe a perfect tensor)? Can we resolve the firewall paradox by holographic quantum codes? Can the physics of quantum gravity be described by tensor networks? Or can the theory of quantum gravity provide us with novel constructions of quantum codes?

I feel that now is the time for quantum information scientists to jump into the research of black holes. We don’t know whether we will be burned by a firewall or not, but it is worth trying.

1. Whether entanglement wedge reconstruction is possible in the AdS/CFT correspondence still remains controversial. In the spirit of the Ryu-Takayanagi formula, which relates entanglement entropy to the length of a global geodesic line, entanglement wedge reconstruction seems natural. But the idea that a bulk operator can be reconstructed non-locally from boundary operators on two separate pieces A and C sounds rather exotic. In our paper, we constructed a toy model of tensor networks which allows both causal and entanglement wedge reconstruction in many cases; see the paper for details.

by Luboš Motl ( at March 31, 2015 03:48 PM

Symmetrybreaking - Fermilab/SLAC

LHC restart back on track

The Large Hadron Collider has overcome a technical hurdle and could restart as early as next week.

On Monday, teams working on the Large Hadron Collider resolved a problem that had been delaying the restart of the accelerator, according to a statement from CERN.

On March 24, the European physics laboratory announced that a short circuit to ground had occurred in one of the connections with an LHC magnet. LHC magnets are superconducting, which means that they can maintain a high electrical current with zero electrical resistance. To be superconducting, the LHC magnets must be chilled to almost minus 460 degrees Fahrenheit.

The short circuit occurred between a superconducting magnet and its diode. Diodes help protect the LHC's magnets by diverting electrical current into a parallel circuit if the magnets lose their superconductivity.

When teams discovered the problem, all eight sections of the LHC were already cooled to operating temperature. To fix the problem, they knew that they might have to go through a weeks-long process of carefully rewarming and then recooling one section.

The short circuit was caused by a fragment of metal caught between the magnet and the diode. After locating the fragment and examining it via X-ray, engineers and technicians decided to try to melt it. They could do this in a way similar to blowing a fuse. Importantly, the technique would not require them to warm up the magnets.

They injected almost 400 amps of current into the diode circuit for a few milliseconds. Measurements made today showed the short circuit had disappeared.

Now the teams must conduct further tweaks and tests and restart the final commissioning of the accelerator. The LHC could see beams as early as next week.

Photo by: Maximilien Brice, CERN



by Kathryn Jepsen at March 31, 2015 03:23 PM

Peter Coles - In the Dark

Why the Big Bang wasn’t as loud as you think…

So how loud was the Big Bang?

I’ve posted on this before, but a comment posted today reminded me that perhaps I should recycle and update it, as it relates to the cosmic microwave background, which is what I work on during the rare occasions when I get to do anything interesting.

As you probably know the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves, so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.


The above image shows the variations in temperature of the cosmic microwave background as charted by the Planck Satellite. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the level of sound (L) logarithmically in terms of the rms pressure level of the sound wave P_rms relative to some reference pressure level P_ref:

L = 20 log₁₀(P_rms / P_ref).

(The 20 appears because the energy carried goes as the square of the amplitude of the wave; expressed in terms of energy the factor would be 10.)

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air this is taken to be about 20 microPascals, or about 2×10⁻¹⁰ times the ambient atmospheric air pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, and these consequently have L = 0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

P_ref ~ 2×10⁻¹⁰ P_amb.

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, because the primordial universe consists of a plasma rather than air. Moreover, the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes. In fact here is the spectrum, showing a distinctive signature that looks, at least in this representation, like a fundamental tone and a series of harmonics…



If you take into account all this structure it gets a bit messy, but it’s quite easy to get a rough but reasonable estimate by ignoring these complications. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the averaged temperature variations are of the average CMB temperature, i.e.

P_rms ~ a few ×10⁻⁵ P_amb.

If we do this, scaling both pressures in the logarithm in proportion to the ambient pressure, the ambient pressure cancels out in the ratio P_rms/P_ref, which turns out to be about 10⁵. With our definition of the decibel level we find that waves of this amplitude, i.e. pressure variations of a few parts in a hundred thousand of the ambient pressure, give roughly L = 100 dB, while one part in ten thousand gives about L = 120 dB. The sound of the Big Bang therefore peaks at levels just a bit less than 120 dB.


As you can see in the Figure above, this is close to the threshold of pain,  but it’s perhaps not as loud as you might have guessed in response to the initial question. Modern popular beat combos often play their dreadful rock music much louder than the Big Bang….

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10¹⁰ inside the logarithm and is pretty much the limit at which sound waves can propagate without distortion. These would have L ≈ 190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.
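The arithmetic behind these estimates is short enough to check directly; the sketch below simply re-runs the numbers quoted in the post (it adds nothing beyond the post’s own figures):

```python
import math

# Quick numerical check of the rough estimates in the text.
# The reference pressure is taken as 2e-10 of the ambient pressure, as above.
T_mean = 2.73        # mean CMB temperature (kelvin)
dT_rms = 0.08e-3     # rms temperature fluctuation (kelvin)

frac = dT_rms / T_mean                      # a few parts in a hundred thousand
L_peak = 20 * math.log10(frac / 2e-10)      # decibel level of the CMB sound
print(f"fractional fluctuation ~ {frac:.1e}")   # ~3e-5
print(f"Big Bang level ~ {L_peak:.0f} dB")      # a bit over 100 dB

# Yardstick: pressure fluctuations comparable to the mean pressure itself
L_max = 20 * math.log10(1.0 / 2e-10)
print(f"distortion limit ~ {L_max:.0f} dB")     # about 190 dB
```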

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of  a “Roar” than a “Bang” because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

by telescoper at March 31, 2015 01:38 PM

Symmetrybreaking - Fermilab/SLAC

Physics Madness: The Fundamental Four

Which physics machine has the power to go all the way?

The competition is fierce, and only four fantastic pieces of physics equipment emerged from the fray. Below are your Fundamental Four match-ups, so get voting to make sure your favorite makes it to the Grand Unified Championship.

You have until midnight PDT on Thursday, April 2, to vote in this round. Come back on April 3 to see if your pick advanced and vote in the final round.




by Lauren Biron at March 31, 2015 01:00 PM

Tommaso Dorigo - Scientificblogging

Fighting Plagiarism In Scientific Papers
Plagiarism is the most sincere form of flattery, they say (or rather, this is said of imitation). In arts - literature, music, painting - it can at times be tolerated, as an artist might want to take inspiration from others, elaborate on an idea, or give it a different twist. In art it is the realization of the idea which matters.

read more

by Tommaso Dorigo at March 31, 2015 12:57 PM

March 30, 2015

Christian P. Robert - xi'an's og

MCMskv, Lenzerheide, Jan. 5-7, 2016

Following the highly successful [authorised opinion!, from objective sources] MCMski IV, in Chamonix last year, the BayesComp section of ISBA has decided in favour of a two-year period, which means the great item of news that next year we will meet again for MCMski V [or MCMskv for short], this time on the snowy slopes of the Swiss town of Lenzerheide, south of Zürich. The committees are headed by the indefatigable Antonietta Mira and Mark Girolami. The plenary speakers have already been contacted and Steve Scott (Google), Steve Fienberg (CMU), David Dunson (Duke), Krys Latuszynski (Warwick), and Tony Lelièvre (Mines, Paris), have agreed to talk. Similarly, the nine invited sessions have been selected and will include Hamiltonian Monte Carlo, Algorithms for Intractable Problems (ABC included!), Theory of (Ultra)High-Dimensional Bayesian Computation, Bayesian NonParametrics, Bayesian Econometrics, Quasi Monte Carlo, Statistics of Deep Learning, Uncertainty Quantification in Mathematical Models, and Biostatistics. There will be afternoon tutorials, including a practical session from the Stan team, tutorials for which call is open, poster sessions, a conference dinner at which we will be entertained by the unstoppable Imposteriors. The Richard Tweedie ski race is back as well, with a pair of Blossom skis for the winner!

As in Chamonix, there will be parallel sessions and hence the scientific committee has issued a call for proposals to organise contributed sessions, tutorials and the presentation of posters on particularly timely and exciting areas of research relevant and of current interest to Bayesian Computation. All proposals should be sent to Mark Girolami directly by May the 4th (be with him!).


by xi'an at March 30, 2015 10:15 PM

ATLAS Experiment

Moriond Electroweak: physics, skiing and Italian food

If you’re a young physicist working in high energy physics, you realize very soon in your career that “going for Moriond” and “going to Moriond” are two different things, and that neither of the two means that you’re actually going here:

The original location of the Moriond conference series


“Les rencontres de Moriond” is one of the main Winter conferences for our field. Starting from its original location in Moriond, it has been held around the French and Italian Alps since 1966. In the 60s and 70s, there was a clear distinction between two branches of the same conference, as “electroweak” and “QCD” physics were still done in different labs and accelerators: in those years the former had to do for example with the discovery of the W and Z bosons and their interactions, while the latter saw the developments of a model to describe the “quarks” that compose protons and neutrons, and the discovery of these constituents themselves. Nowadays, both kinds of physics are studied at the LHC and in other experiments around the world, so the results presented in the two conferences are not necessarily divided by topic anymore.

This year I was lucky enough to be contributing some results that were “going for Moriond”, which means they’d be approved by the Collaboration to be presented at this conference for the first time, but I would also be “going to Moriond” in person. This year’s “Moriond Electroweak” was held in the Italian mountain resort of La Thuile, and had a special significance. In the session that celebrated the 50 years of the conference, the founder Jean Trân Thanh Vân reminded the audience of the two pillars of this conference:

  • encourage discussions and exchanges between theoretical and experimental physicists;
  • let young scientists meet senior researchers and discuss their results.
The official Moriond EW t-shirt and the announcement of the slalom competition


The first point was made when theorists and experimentalists alike were asked to take part in a slalom competition. The results were not categorized by subject of study, but certainly the cheering came from and towards both parties.

The latter took place almost every evening, in the dedicated “young scientists session”. Here, students and young post-docs can apply to give a short talk and answer questions on their research topic in front of an international audience of theorists and experimentalists.

The questions and answers can be then carried on to the (abundant) dinner. As an Italian, I do appreciate the long evenings dedicated to a mixture of excellent food combined with physics discussions (and where the two can be identified with each other, as in this snapshot from a talk by Francesco Riva).

New physics matched to Italian dinner choices at the conference, according to Francesco Riva


Back to the physics: the results I contributed to were shown in the afternoon session, before the 50th anniversary talks. They’re in the top corner of one of the slides from the summary of the ATLAS and CMS new physics searches.

Beyond the Standard Model: New results presented at the Moriond EW conference


That’s only the tip of the iceberg of a search that looks for new phenomena that would manifest as an excess of collimated jets of particles in the central region of the detector, and it shows that there is no new physics to be found here, nor in any of the other searches shown in the conference so far. (What we didn’t know at that time was that there would be something not consistent with expectations in the LHCb results shown just one day after, as explained in this article). Given that so far we have not found much beyond what we consider Standard (as in belonging to the predictions made by the Standard Model of Particle Physics), the conference had a special focus on searches that look for the unexpected in unexpected places. “Stealthy” is how the physics beyond the Standard Model that is particularly hard to find is characterized, and as experimentalists we want to pay particular attention to the “blind spots” where we haven’t yet looked for the upcoming LHC runs. This was highlighted in the morning talks, describing searches for Supersymmetry in blind spots and searches for particles that leave no immediate signature after the collision because of their long lifetime. There were also other ideas of how to test the Standard Model with very high precision, as highlighted in another food-related slide by Francesco Riva.

Techniques to find new physics, according to Francesco Riva


No one in the audience forgot, however, that the new LHC run will bring more energy and more data. Both will allow us to investigate new, rare processes that were not accessible in the first run. Discoveries might be just around the corner!

Overall, the “Rencontres de Moriond” conferences have the effect of leaving everyone enthusiastic for the discussion and eager for more results: in particular, next year’s edition may see some of the first results of the upcoming LHC run. And of course, the results will be best discussed on skis and over dinner.

Caterina Doglioni Caterina Doglioni is a post-doctoral researcher in the ATLAS group of the University of Geneva. She got her taste for calorimeters with the Rome Sapienza group in the commissioning of the ECAL at the CMS experiment during her Master’s thesis. She continued her PhD work with the University of Oxford and moved to hadronic calorimeters: she worked on calibrating and measuring hadronic jets with the first ATLAS data. She is still using jets to search for new physics phenomena, while thinking about calorimeters at a new future hadron collider.

by Caterina Doglioni at March 30, 2015 06:54 PM

Quantum Diaries

Superconducting test accelerator achieves first electron beam

This article appeared in Fermilab Today on March 30, 2015.

Last week the first SRF cavities of Fermilab's superconducting test accelerator propelled their first electrons. Photo: Reidar Hahn


The newest particle accelerators and those of the future will be built with superconducting radio-frequency (SRF) cavities, and institutions around the world are working hard to develop this technology. Fermilab’s advanced superconducting test accelerator was built for research and development of SRF accelerator technology.

On Friday, after more than seven years of planning and building by scientists and engineers, the accelerator delivered its first beam.

The Fermilab superconducting test accelerator is a linear accelerator (linac) with three main components: a photoinjector that includes an RF gun coupled to an ultraviolet-laser system, several cryomodules and a beamline. Electron bunches are produced when an ultraviolet pulse generated by the laser hits a cathode located on the back plate of the gun. Acceleration continues through two SRF cavities inside the cryomodules. After exiting the cryomodules, the bunches travel down a beamline, where researchers can assess them.

Each meter-long cavity consists of nine cells made from high-purity niobium. In order to become superconductive, the cavities sit in a vessel filled with superfluid liquid helium at temperatures close to absolute zero.

As RF power pulses through these cavities, it creates an oscillating electric field that runs through the cells. If the charged particles meet the oscillating waves at the right phase, they are pushed forward and propelled down the accelerator.

The major advantage of using superconductors is that the lack of electrical resistance allows virtually all the energy passing through to be used for accelerating particle beams, ultimately creating more efficient accelerators.
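As a back-of-the-envelope illustration of the phase dependence described above, the energy a unit charge gains in one cavity goes as the gradient times the cavity length times the cosine of the arrival phase. The gradient below is roughly the ILC design value and is an assumed illustrative number, not a quoted specification of the Fermilab test accelerator:

```python
import math

# Back-of-the-envelope energy gain in one SRF cavity. The gradient is an
# assumed illustrative value (roughly the ILC design figure), not a quoted
# specification of the Fermilab test accelerator.
gradient_MV_per_m = 31.5   # accelerating gradient (MV/m), assumed
cavity_length_m = 1.0      # the article describes meter-long nine-cell cavities

def energy_gain_MeV(phase_deg):
    """Energy gain of a unit charge arriving at the given RF phase
    (0 degrees = on the crest of the oscillating field)."""
    return gradient_MV_per_m * cavity_length_m * math.cos(math.radians(phase_deg))

print(f"on crest:           {energy_gain_MeV(0):.1f} MeV")    # 31.5 MeV
print(f"45 degrees off:     {energy_gain_MeV(45):.1f} MeV")   # ~22.3 MeV
print(f"decelerating phase: {energy_gain_MeV(180):.1f} MeV")  # -31.5 MeV
```

The sign flip at 180 degrees is why bunches must meet the oscillating wave at the right phase: arrive on the wrong half of the cycle and the same field decelerates the beam.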

The superconducting test accelerator team celebrates first beam in the operations center at NML. Vladimir Shiltsev, left, is pointing to an image of the beam. Photo: Pavel Juarez, AD


“It’s more bang for the buck,” said Elvin Harms, one of the leaders of the commissioning effort.

The superconducting test accelerator’s photoinjector gun first produced electrons in June 2013. In the current run, electrons are being shot through one single-cavity cryomodule, with a second, upgraded model to be installed in the next few months. Future plans call for accelerating the electron beam through an eight-cavity cryomodule, CM2, which was the first to reach the specifications of the proposed International Linear Collider (ILC).

Fermilab is one of the few facilities that provides space for advanced accelerator research and development. These experiments will help set the stage for future superconducting accelerators such as SLAC’s Linac Coherent Light Source II, of which Fermilab is one of several partner laboratories.

“The linac is similar to other accelerators that exist, but the ability to use this type of setup to carry out accelerator science experiments and train students is unique,” said Philippe Piot, a physicist at Fermilab and professor at Northern Illinois University leading one of the first experiments at the test accelerator. A Fermilab team has designed and is beginning to construct the Integrable Optics Test Accelerator ring, a storage ring that will be attached to the superconducting test accelerator in the years to come.

“This cements the fact that Fermilab has been building up the infrastructure for mastering SRF technology,” Harms said. “This is the crown jewel of that: saying that we can build the components, put them together, and now we can accelerate a beam.”

Diana Kwon

by Fermilab at March 30, 2015 06:11 PM

Lubos Motl - string vacua and pheno

David Gross' NYU lecture
I think that this 97-minute-long public lecture by David Gross at New York University hasn't been embedded on this blog yet:

It is not just another copy of a talk you have heard five times.

He has talked about the Standard Model's being nano-nanophysics, QCD, Higgs, some signs of SUSY (and perhaps unification) at the TeV scale that we may already be seeing, the future colliders (probably in China), and Schrödinger's dogs, among other things.

There were some questions at the end, too.

by Luboš Motl ( at March 30, 2015 05:41 PM

CERN Bulletin

Safety Training: places available in March and April 2015

There are places available in the forthcoming Safety courses. For updates and registrations, please refer to the Safety Training Catalogue (see here).


March 30, 2015 04:03 PM

Emily Lakdawalla - The Planetary Society Blog

Your First Timeline of Events for LightSail's Test Flight
The team behind The Planetary Society’s LightSail spacecraft is kicking off a series of simulations to ensure the spacecraft’s ground systems are ready for launch.

March 30, 2015 03:29 PM

CERN Bulletin

Academic Training Lecture | Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood | 7-9 April
Please note that our next series of Academic Training Lectures will take place on 7, 8 and 9 April 2015: Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood, by Harrison Prosper, Florida State University, USA, from 11.00 a.m. to 12.00 p.m. in the Council Chamber (503-1-001).

March 30, 2015 03:28 PM

Peter Coles - In the Dark

Found in Translation…

A nice surprise was waiting for me when I arrived at work this morning, in the form of a parcel from Oxford University Press containing six copies of the new Arabic edition of my book Cosmology: A Very Short Introduction. I think I’ve put them the right way up. I was a bit confused because they open the opposite way to books in English, as Arabic is read from right to left rather than from left to right.


Anyway, although I can’t read Arabic it’s nice to have these to put with the other foreign editions, including these. I still can’t remember whether the first one is Japanese or Korean…






…still, it’s interesting to see how they’ve chosen different covers for the different translations, and at least I know what my name looks like in Bulgarian!

by telescoper at March 30, 2015 11:12 AM

John Baez - Azimuth

A Networked World (Part 2)

guest post by David Spivak

Creating a knowledge network

In 2007, I asked myself: as mathematically as possible, what can formally ground meaningful information, including both its successful communication and its role in decision-making? I believed that category theory could be useful in formalizing the type of object that we call information, and the type of relationship that we call communication.

Over the next few years, I worked on this project. I tried to understand what information is, how it is stored, and how it can be transferred between entities that think differently. Since databases store information, I wanted to understand databases category-theoretically. I eventually decided that databases are basically just categories \mathcal{C}, corresponding to a collection of meaningful concepts and connections between them, and that these categories are equipped with functors \mathcal{C}\to\mathbf{Set}. Such a functor assigns to each meaningful concept a set of examples and connects them as dictated by the morphisms of \mathbf{Set}. I later found out that this “databases as categories” idea was not original; it is due to Rosebrugh and others. My view on the subject has matured a bit since then, but I still like this basic conception of databases.

If we model a person’s knowledge as a database (interconnected tables of examples of things and relationships), then the network of knowledgeable humans could be conceptualized as a simplicial complex equipped with a sheaf of databases. Here, a vertex represents an individual, with her database of knowledge. An edge represents a pair of individuals and a common ground database relating their individual databases. For example, you and your brother have a database of concepts and examples from your history. The common-ground database is like the intersection of the two databases, but it could be smaller (if the people don’t yet know they agree on something). In a simplicial complex, there are not only vertices and edges, but also triangles (and so on). These would represent databases held in common between three or more people.

I wanted “regular people” to actually make such a knowledge network, i.e., to share their ideas in the form of categories and link them together with functors. Of course, most people don’t know categories and functors, so I thought I’d make things easier for them by equipping categories with linguistic structures: text boxes for objects, labeled arrows for morphisms. For example, “a person has a mother” would be a morphism from the “person” object, to the “mother” object. I called such a linguistic category an olog, playing on the word blog. The idea (originally inspired during a conversation with my friend Ralph Hutchison) was that I wanted people, especially scientists, to blog their ontologies, i.e., to write “onto-logs” like others make web-logs.
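A minimal sketch of this “databases as categories” idea (rendered informally with Python dictionaries for illustration, not with an actual category-theory library) might look like this, using the “a person has a mother” arrow from above:

```python
# Informal sketch of "databases as categories": objects are concept boxes,
# morphisms are labeled arrows, and a functor C -> Set assigns each object
# a set of examples and each arrow a function between those sets.
# (A toy rendering for illustration, not a formal definition.)
objects = ["person", "mother"]
arrows = {"has": ("person", "mother")}   # "a person has a mother"

# The Set-valued functor: example rows for each object...
instances = {
    "person": {"Alice", "Bob"},
    "mother": {"Carol", "Dana"},
}
# ...and a function for each arrow (the names here are made up for the demo)
functions = {"has": {"Alice": "Carol", "Bob": "Dana"}}

# Sanity check of functoriality: each arrow's function must be total on its
# source object's set and must land inside its target object's set
for name, (src, tgt) in arrows.items():
    f = functions[name]
    assert set(f) == instances[src]
    assert set(f.values()) <= instances[tgt]
print("the instance data is consistent with the schema")
```

The schema (`objects` and `arrows`) plays the role of the category, and the instance data plays the role of the functor to Set; two people’s databases can then be compared by looking for schema pieces that map onto each other.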

Ologs codify knowledge. They are like concept webs, except with more rules that allow them to simultaneously serve as database schemas. By introducing ologs, I hoped I could get real people to upload their ideas into what is now called the cloud, and make the necessary knowledge network. I tried to write my papers to engage an audience of intelligent lay-people rather than for an audience of mathematicians. It was a risk, but to me it was the only honest approach to the larger endeavor.

(For students who might want to try going out on a limb like this, you should know that I was offered zero jobs after my first postdoc at University of Oregon. The risk was indeed risky, and one has to be ok with that. I personally happened to be the beneficiary of good luck and was offered a grant, out of the clear blue sky, by a former PhD in algebraic geometry, who worked at the Office of Naval Research at the time. That, plus the helping hands of Haynes Miller and many other brilliant and wonderful people, can explain how I lived to tell the tale.)

So here’s how the simplicial complex of ologs would ideally help humanity steer. Suppose we say that in order for one person to learn from another, the two need to find a common language and align some ideas. This kind of (usually tacit) agreement on, or alignment of, an initial common-ground vocabulary and concept-set is important to get their communication onto a proper footing.

For two vertices in such a simplicial network, the richer their common-ground olog (i.e., the database corresponding to the edge between them) is, the more quickly and accurately the vertices can share new ideas. As ideas are shared over a simplex, all participating databases can be updated, hence making the communication between them richer. In around 2010, Mathieu Anel and I worked out a formal way this might occur; however, we have not yet written it up. The basic idea can be found here.

In this setup, the simplicial complex of human knowledge should grow organically. Scientists, business people, and other people might find benefit in ologging their ideas and conceptions, and using them to learn from their peers. I imagined a network organizing itself, where simplices of like-minded people could share information with neighboring groups across common faces.

I later wrote a book called Category Theory for the Sciences, available free online, to help scientists learn how category theory could apply to familiar situations like taxonomies, graphs, and symmetries. Category theory, simply explained, becomes a wonderful key to the whole world of pure mathematics. It’s the closest thing we have to a universal language of thought, and therefore an appropriate language for forming connections.

My working hypothesis for the knowledge network was this. The information held by people whose worldview is more true—more accurate—would have better predictive power, i.e., better results. This is by definition: I define one’s knowledge to be accurate to the extent that, when he uses this knowledge to direct his actions, he has good luck handling his worldly affairs. As Louis Pasteur said, “Luck favors the prepared mind.” It follows that if someone has a track record of success, others will value finding broad connections into his olog. However, to link up with someone you must find a part of your olog that aligns with his—a functorial connection—and you can only receive meaningful information from him to the extent that you’ve found such common ground.

Thus, people who like to live in fiction worlds would find it difficult to connect, except to other like-minded “Obama’s a Christian”-type people. To the extent you are embedded in a fictional—less accurate, less predictive—part of the network, you will find it difficult to establish functorial connections to regions of more accurate knowledge, and therefore you can’t benefit from the predictive and conceptual value of this knowledge.

In other words, people would be naturally inclined to try to align their understanding with people that are better informed. I felt hope that this kind of idea could lead to a system in which honesty and accuracy were naturally rewarded. At the very least, those who used it could share information much more effectively than they do now. This was my plan; I just had to make it real.

I had a fun idea for publicizing ologs. The year was 2008, and I remember thinking it would be fantastic if I could olog the political platform and worldview of Barack Obama and of Sarah Palin. I wished I could sit down with them and other politicians and help them write ologs about what they believed and wanted for the country. I imagined that some politicians might have ologs that look like a bunch of disconnected text boxes—like a brain with neurons but no synapses—a collection of talking points but no real substantive ideas.

Anyway, there I was, trying to understand everything this way: all information was categories (or perhaps sketches) and presheaves. I would work with interested people from any academic discipline, such as materials science, to make ologs about whatever information they wanted to record category-theoretically. Ologs weren’t a theory of everything, but instead, as Jack Morava put it, a theory of anything.

One day I was working on a categorical sketch to model processes within processes, but somehow it really wasn’t working properly. The idea was simple: each step in a recipe is a mini-recipe of its own. Like chopping the carrots means getting out a knife and cutting board, putting a carrot on there, and bringing the knife down successively along it. You can keep zooming into any of these and see it as its own process. So there is some kind of nested, fractal-like behavior here. The olog I made could model the idea of steps in a recipe, but I found it difficult to encode the fact that each step was itself a recipe.

This nesting thing seemed like an idea that mathematics should treat beautifully, and ologs weren’t doing it justice. It was then that I finally admitted that there might be other fish in the mathematical sea.
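The nested-recipe idea in the paragraph above can be sketched as a simple recursive data structure. This is only a hypothetical illustration of the nesting, not the categorical sketch the author was actually building: each step carries a list of sub-steps, each of which is again a step.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """A recipe step that is itself a mini-recipe: it may contain sub-steps."""
    name: str
    substeps: List["Step"] = field(default_factory=list)

    def depth(self) -> int:
        """Number of nesting levels below and including this step."""
        return 1 + max((s.depth() for s in self.substeps), default=0)

# "Chopping the carrots" is a step, but zooming in reveals its own recipe.
chop = Step("chop the carrots", [
    Step("get out a knife and cutting board"),
    Step("put a carrot on the board"),
    Step("bring the knife down successively along it"),
])
soup = Step("make carrot soup", [chop, Step("simmer")])

print(soup.depth())  # → 3
```

The fractal-like behavior is exactly the recursion: any `Step` can be zoomed into and treated as a recipe in its own right.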

by John Baez at March 30, 2015 01:00 AM

March 29, 2015

Christian P. Robert - xi'an's og

intuition beyond a Beta property


A self-study question on X validated exposed an interesting property of the Beta distribution:

If x is B(n,m) and y is B(n+½,m), independently, then √(xy) is B(2n,2m)

While this can presumably be established by a mere change of variables, I could not carry the derivation till the end and used instead the moment generating function E[(xy)^{s/2}] since it naturally leads to ratios of B(a,b) functions and to nice cancellations thanks to the ½ in some Gamma functions [and this was the solution proposed on X validated]. However, I wonder at a more fundamental derivation of the property that would stem from a statistical reasoning… Trying with the ratio of Gamma random variables did not work. And the connection with order statistics does not apply because of the ½. Any idea?
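A quick Monte Carlo sanity check of the property (a sketch in Python rather than the R this blog usually uses; the parameter values n = 2, m = 3 are arbitrary choices):

```python
import math
import random

random.seed(42)
n, m, N = 2.0, 3.0, 200_000

# Draw x ~ B(n, m) and y ~ B(n + 1/2, m) independently, form z = sqrt(x*y).
z = [math.sqrt(random.betavariate(n, m) * random.betavariate(n + 0.5, m))
     for _ in range(N)]

# The claim is z ~ B(2n, 2m), whose first two moments are known exactly.
a, b = 2 * n, 2 * m
exact_mean = a / (a + b)                          # = 0.4 for n=2, m=3
exact_m2 = a * (a + 1) / ((a + b) * (a + b + 1))  # = 2/11

mean_z = sum(z) / N
m2_z = sum(v * v for v in z) / N
print(f"mean:   {mean_z:.3f} vs exact {exact_mean:.3f}")
print(f"E[z^2]: {m2_z:.3f} vs exact {exact_m2:.3f}")
```

With 200,000 draws the sample moments land within Monte Carlo error of the B(2n,2m) values, consistent with the stated property.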

Filed under: Books, Kids, R, Statistics, University life Tagged: beta distribution, cross validated, moment generating function, Stack Exchange

by xi'an at March 29, 2015 10:15 PM

ZapperZ - Physics and Physicists

Stephen Hawking and Brian Cox To Trademark Their Names
"Stephen Hawking" and "Brian Cox" will soon be trademarked names. So if you have plans to market t-shirts and other products with these people's names, watch out!

Maybe this will get rid of some of the tacky stuff that I've seen associated with them, especially Hawking. But then again, who knows, they may turn around and produce their own tacky merchandise.


by ZapperZ ( at March 29, 2015 04:04 PM

ZapperZ - Physics and Physicists

A Tale Of Two Scientists
It is fascinating to read about the stuff behind the scene involving the negotiations between the United States and Iran regarding Iran's nuclear program. And in the middle of all this are two scientists/engineers out of MIT with nuclear science background.

At the Massachusetts Institute of Technology in the mid-1970s, Ernest J. Moniz was an up-and-coming nuclear scientist in search of tenure, and Ali Akbar Salehi, a brilliant Iranian graduate student, was finishing a dissertation on fast-neutron reactors.

The two did not know each other, but they followed similar paths once they left the campus: Mr. Moniz went on to become one of the nation’s most respected nuclear physicists and is now President Obama’s energy secretary. Mr. Salehi, who was part of the last wave of Iranians to conduct nuclear studies at America’s elite universities, returned to an Iran in revolution and rose to oversee the country’s nuclear program.

You may read more about it in the article. And I definitely agree with this sentiment:

Mr. Moniz, 70, understands his role well: He is providing not only technical expertise but also political cover for Mr. Kerry. If a so-called framework agreement is reached in the next few days, it will be Mr. Moniz who will have to vouch to a suspicious Congress, to Israel and to Arab allies that Iran would be incapable of assembling the raw material for a single nuclear weapon in less than a year.

“It wouldn’t mean much coming from Kerry,” said a member of the administration deeply involved in the strategy who spoke on the condition of anonymity. “The theory is that Ernie’s judgment on that matter is unassailable.”

At the heart of this is a scientific/technical issue. Once presented, it is up to the politicians to decide, because beyond that point it is no longer a scientific/technical decision, but a political one. To have a negotiator who is not only knowledgeable in the area, but who also happens to be a world-renowned expert, is extremely beneficial.


by ZapperZ ( at March 29, 2015 04:00 PM

Peter Coles - In the Dark

Praise, by R.S. Thomas

Today is Palm Sunday, the start of what Christians call “Holy Week”, which culminates in Easter. It’s also the birthday of the great Welsh poet R.S. Thomas, who was born on this day in 1913. Thomas spent much of his life as an Anglican priest. I’m not a Christian but I am drawn to the religious verse of R.S. Thomas not only for its directness and lack of artifice but also the honesty with which he addresses the problems his faith sets him. There are many atheists who think religion is some kind of soft option for those who can’t cope with life in an unfriendly universe, but reading R.S. Thomas, whose faith was neither cosy nor comfortable, led me to realise that is very far from the case. I recommend him as an antidote to the simple-minded antagonism of people like Richard Dawkins. There are questions that science alone will never answer, so we should respect people who search for a truth we ourselves cannot understand.

And whether or not it is clear to you, no doubt the universe is unfolding as it should. Therefore be at peace with God, whatever you conceive Him to be.

I will be offline for the Easter holiday so I thought I’d post a poem that I find appropriate to the time of year. You can read it as Praise for God, or for Nature, or for both. I don’t think it matters.

I praise you because
you are artist and scientist
in one. When I am somewhat
fearful of your power,
your ability to work miracles
with a set-square, I hear
you murmuring to yourself
in a notation Beethoven
dreamed of but never achieved.
You run off your scales of
rain water and sea water, play
the chords of the morning
and evening light, sculpture
with shadow, join together leaf
by leaf, when spring
comes, the stanzas of
an immense poem. You speak
all languages and none,
answering our most complex
prayers with the simplicity
of a flower, confronting
us, when we would domesticate you
to our uses, with the rioting
viruses under our lens.

by telescoper at March 29, 2015 01:45 PM

Geraint Lewis - Cosmic Horizons

Musings on academic careers - Part 1
As promised, I'm going to put down some thoughts on academic careers. In doing this, I should put my cards on the table and point out that while I am a full-time professor of astrophysics at the University of Sydney, I didn't really plan my career or follow the musings given below. The musings come from taking a hard look at the state of play in modern academia.

I am going to be as honest as possible, and surely some of my colleagues will disagree with my musings. Some people have a romantic view of many things, including science, and will trot out the line that science is somehow distinct from people. That might be the case, but the act of doing science is clearly done by people, and that means all of the issues that govern human interactions come into play. It is important to remember this.

Now, there may be some lessons below for how to become a permanent academic, but there is no magic formula. But realising some of these lessons on what is at play may help.

Some of you may have heard me harp on about some of these issues before, but hopefully there is some new stuff as well. OK. Let's begin.

Career Management
It must be remembered that careers rarely just happen. Careers must be managed. I know some people hate to realise this, as science is supposed to be above all this career stuff - surely "good people" will be identified and rewarded!

Many students and postdocs seem to bumble along and only think of "what's next?" when they are up against the wire. I have spoken with students about the process of applying for postdocs, the long lead time needed, the requirement of at least three referees, all aspects of job hunting, and then, just moments from the submission of their PhD, they suddenly start looking for jobs. I weep a little when they frantically ask me "Who should I have as my third referee?"

Even if you are a brand-new PhD student, you need to think about career management. I don't mean planning, such as saying I will have a corner office in Harvard in 5 years (although there is nothing wrong with having aspirational goals!), but management. So, what do I mean?

Well, if you are interested in following a career in academia, then learn about the various stages and options involved and how you get from one to the other. This (and careers beyond academia) should be mandatory material for new students, and you should be reminded at all stages of your career to keep thinking about it. What kind of things should you be doing at the various stages of your career? What experience would your next employer like you to have? It is very important to try and spot holes in your CV and fill them in. If you know you have a weakness, don't ignore it, fix it.

Again, there is no magic formula to guarantee that you will be successful in moving from one stage to another, but you should be able to work out the kind of CV you need. If you are having difficulties in identifying these things, talk with people (get a mentor!).

And, for one final point, the person responsible for managing your career is you. Not your supervisor, not your parents, and not the non-existent gods of science. You are.

Being Strategic
This is part of your career management.

In the romantic vision of science, an academic is left to toddle along and be guided by their inquisitive nature to find out what is going on in the Universe. But academia does not work that way (no matter how much you want to rage against it). If you want an academic career, then it is essential to realise that you will be compared to your peers at some point. At some point, someone is going to have a stack of CVs in front of them, will go through them to choose a subset who meet the requirements for a position, and will then rank that subset to find the best candidate. As part of your career management you need to understand what people are looking for! (I speak from experience of helping people prepare for jobs who knew little about the actual job, the people offering it, what is needed, etc.)

I know people get very cross with this, but there are key indicators people look at, things like the number of papers, citation rates, grant income, student supervision, teaching experience. Again, at all points you need to ask "is there a hole in my CV?" and if there is, fill it! Do not ignore it.

But, you might be saying, how can I be strategic in all of this? I just get on with my work! You need to think about what you do. If you have a long-running project, are there smaller projects you can do while waiting, to spin out some short, punchy papers? Is there something I can lead that will make me known around the world? Is there an idea I can spin off to a student to make progress on? You should be thinking of "results", and of results becoming talks at conferences and papers in journals.

If you are embarking on a new project, a project that is going to require substantial investment of time, you should ensure something will come from it, even if it is a negative or null result. You should never spend a substantial period of time, such as six months, and not have anything to show for it!

Are there collaborations you could forge and contribute to? Many people have done very well by being part of large collaborations, resulting in many papers, although be aware that people seeing survey papers on a CV will now ask "well, what did this person contribute to the project?".

The flip-side is also important. Beware of spending too much time on activities that do not add to your CV! I have seen some, especially students, spending a lot of time on committees and jobs that really don't benefit them. Now, don't get me wrong. Committee work, supporting meetings, etc., are important, but think about where you are spending your time and ask yourself if your CV is suffering because of it.

How many hours should I work?
Your CV does not record the number of hours you work! It records your research output and successes. If you are publishing ten papers a year on four-hour days, then wonderful, but if you are two years into a postdoc, working 80 hours per week, and have not published anything, you might want to think about how you are using your time.

But I am a firm believer in working smarter, not harder, and in thinking and planning ideas and projects. Honestly, I have a couple of papers which (in a time before children) were born from ideas that crystallised over a weekend and were submitted soon after. I am not super-smart, but I do like to read widely, to go to as many talks as I can, to learn new things, and to apply ideas to new problems.

One thing I have seen over and over again is people at various stages of their careers becoming narrower and narrower in their focus, and it depresses me when I go to talks in my own department and see students not attending. This narrowness, IMHO, does not help in establishing an academic career. Breadth, of course, is no guarantee, but when I look at CVs, I like to see it.

So, the number of hours is not really the important issue; your output is. Work hours do become important when you are a permanent academic, because of all the different things you have to do, especially admin and teaching, but as an early career researcher hours should not be the defining thing. Your output is.

Is academia really for me?
I actually think this is a big one, and it worries me because I don't think people at many stages of their career actually think about it. Being a student is different to being a postdoctoral researcher, which is different to being an academic, and it seems to me that people embarking on PhDs, with many a romantic notion about winning a Nobel prize somewhere along the way, don't really know what an "academic" is and what they do, just that it is some sort of goal.

In fact, this is such a big one, I think this might be a good place to stop and think about later musings.

by Cusp ( at March 29, 2015 04:22 AM

Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3971 - March 26, 2015
Opportunity reaches a marathon milestone—in more ways than one. Larry Crumpler reports on the current status of the seemingly unstoppable Mars rover.

March 29, 2015 12:47 AM

March 28, 2015

Lubos Motl - string vacua and pheno

Dark matter: Science Friday with Weinberg, Hooper, Cooley
The background is temporarily "nearly white" today because I celebrate the Kilowatt Hour, also known as the Electricity Thanksgiving Day. Between 8:30 and 9:30 pm local time, turn all your electric appliances on and try to surpass one kilowatt. By this $0.20 sacrifice, you will fight those who want to return us to the Middle Ages and who organize the so-called Earth Hour.

Ira Flatow's Science Friday belongs among the better or best science shows. Yesterday, he hosted some very interesting guests and the topic was interesting, too:
Understanding the Dark Side of Physics
The guests were Steven Weinberg, famous theorist and Nobel prize winner from Austin; Dan Hooper, a top Fermilab phenomenologist; and Jodi Cooley, a senior experimental particle physicist from Dallas.

And if you have 30 spare minutes, you should click the orange-white "play" button above and listen to this segment.

It doesn't just repeat some well-known old or medium-age things about dark matter. They start the whole conversation by discussing a very new story so that even listeners who are physicists may learn something new.

An observation was announced that imposes new upper limits on the self-interaction of dark matter. If it interacts with itself at all (it of course interacts gravitationally, but there could be another contribution to its self-interaction), the strength of this force is smaller than an upper bound that is more constraining than those we knew before.

See e.g.
Hubble and Chandra discover dark matter is not as sticky as once thought
Dark matter particles do not slow down when colliding with each other, which means that dark matter interacts with itself even less than previously thought.

The nongravitational interactions of dark matter in colliding galaxy clusters (by David Harvey+3, Science)
If you remember the "bullet cluster" that showed the existence of dark matter – and its separation from visible matter, they found 72 similar "clusters" and just like the 72 virgins waiting to rape a Muslim terrorist, all of them make the same suggestion: some dark matter is out there. They say that the certainty is now 7.6 sigma when these 72 observations are combined.

However, the dark matter location remains close enough to the associated visible stars, which allows them to deduce, at the 95% confidence level, that the cross section per unit mass isn't too high:\[
\frac{\sigma_{DM}}{m} \leq 0.47\,{\rm cm}^2 / {\rm g}
\] The dark matter just doesn't seem more excited by itself than it is by the visible matter. Theories with "dark photons" are the first ones that are heavily constrained, and many natural ones are killed. But maybe even some more conventional WIMP theories may be punished.
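To put the bound in particle-physics units, here is a back-of-the-envelope conversion; the 1 GeV dark-matter mass is purely an illustrative assumption, not something from the paper.

```python
# Convert sigma/m <= 0.47 cm^2/g into a cross-section bound in barns,
# for a hypothetical dark-matter particle of mass 1 GeV.
GEV_IN_GRAMS = 1.783e-24   # 1 GeV/c^2 expressed in grams
BARN_IN_CM2 = 1e-24        # 1 barn = 1e-24 cm^2

bound = 0.47               # cm^2 / g, the 95% CL bound quoted above
m_gev = 1.0                # assumed particle mass in GeV

sigma_barn = bound * m_gev * GEV_IN_GRAMS / BARN_IN_CM2
print(f"sigma <= {sigma_barn:.2f} barn")  # roughly 0.84 barn at 1 GeV
```

So for a GeV-scale particle the allowed self-interaction cross section is sub-barn, which is why strongly self-interacting models like some "dark photon" scenarios feel the squeeze.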

I think that if you have worked on my proposed far-fetched idea of holographic MOND, you have one more reason to increase your activity. And I guess that all axions are just fine with the new finding.

Weinberg clarifies the situation – why dark matter isn't understood too well (it's dark!) etc. – very nicely but many other things are said in the show, too. When the two other guests join, they also discuss other dark matter experiments, dark energy, gravitational waves, string theory etc.

Funnily enough, a layman listener wanted the guests to describe the cataclysms that would occur if dark matter hit the Earth. The response is, of course, that dark matter hits our bodies all the time and nothing at all happens most of the time. I can't resist asking: why would a layperson assume that dark matter must be associated with a "cataclysm"?

People have simply liked to think about cataclysms from the very beginning of primitive religions, and the would-be modern era encourages people to unscientifically attribute cataclysms to many things – carbon dioxide was the most popular "culprit" in the recent decade (and of course, there are many retarded people around us who still believe that CO2 emissions are dangerous). People just can't get interested in something if it is not hyped by a talk about catastrophes.

At one moment, Weinberg (who also promoted his new book about the history of physics, To Explain the World) wisely says that dark matter is preferred because it's also supported by some precision measurements of the CMB – and because it's much easier and more conservative to introduce a new particle species than to rewrite the laws of gravity. Flatow is laughing but it is a serious matter. Flatow is a victim of the populist delusion that there are so many particles which must mean that they were introduced because they don't have any natural enemies. But particles are introduced when they are seen or at least glimpsed.

Lots of particles are used by theoretical physicists because they are being seen experimentally every day and even the new particles that are not sharply seen yet are being introduced because they explain some observations or patterns in them – in this sense, the particles are being seen fuzzily or indirectly (at least when the theorist behind them has any quality). And all theories involving new particles compete with other theories involving other new particles so it's no "unrestricted proliferation of new concepts without standards". Instead, it's the business-as-usual science.

The real question is whether a rather conservative theory with new particle species is more likely – and ultimately more true – than some totally new radical theory that denies that physics may be described in terms of particles and fields. Of course that a true paradigm shift may be needed. But the evidence that it is so – or the ability of the existing, radically new frameworks to convince that they are on the right track – isn't strong enough (yet?) which is why it seems OK to assume that the discrepancies may be fixed with some new particle species.

Also, Flatow is laughing when Weinberg calls the visible matter a contamination – because it's significantly smaller than dark matter which is still smaller than dark energy (by the magnitude of the energy density). Most laymen would find this laughable, too, and it's because the anthropocentrism continues to be believed by most laymen:

We are at the center of the Universe and everything we know from the everyday life must play an essential role in the most profound structure of the Universe. But as science has been showing for 500 years or so, this simply ain't so. If I ignore the fact that the Czechs are the ultimate average nation in the world, we the humans are a random update to one of many long branches of the evolution tree that arose from some rather random complex molecules revolving around an element that is not the most fundamental one, and the whole visible matter around us is a contamination and the clump of matter where we live is a mediocre rock orbiting a rank-and-file star in an unspectacular galaxy – and the Universe itself may be (but doesn't have to be!) a rather random and "not special" solution of string theory within the landscape.

Hooper mentions the 1960s and 1970s as the golden era of particle physics – the recent years have been slower.

At the end, Cooley and Weinberg discuss string theory – experimenters can't test it so the theory isn't useful for them but it's right that people work on it, and it has never been the case that all predictions of theories had to (or could) be tested. Weinberg wraps the discussion with some historical examples – especially one involving Newton – proving that the principle that all interesting predictions must be testable in the near future is misguided.

The short discussion on is full of crackpots irritated by the very concept of "dark matter" and the research of dark matter.

Off-topic: One of the good 2015 Czech songs, "[I Am Not a] Robotic Kid" (the lyrics preach against parents' planning of their kids' lives and against conformism). Well, I should say "Czech-Japanese songs" because the leader of Mirai, the band, is Mirai Navrátil – as the name shows, a textbook example of a Czech-Japanese hybrid. He actually plans to sing in Japanese as well. It's their first song.

by Luboš Motl ( at March 28, 2015 04:04 PM

Peter Coles - In the Dark

Nature or Degree


A thoughtful post to follow on from yesterday’s reaction to the GermanWings tragedy…

Originally posted on Mental Health Cop:

It was the timing and tone of yesterday’s newspaper headlines that crossed the line for me: not any discussion about mental health and airline safety. Of course, occupational health and fitness standards for pilots should be rigorous, and we heard yesterday about annual testing, psychological testing, etc. By now, it may be easy to forget that when papers went to press on Thursday night, we still knew comparatively little about the pilot of the doomed flight. We certainly did not know that he appears to have ripped up sick notes that were relevant to the day of the crash, or what kind of condition they related to – we still don’t, as the German police have not confirmed it. Whilst we did have a suggestion that he had experience of depression and ‘burnout’ – whatever that means – we don’t know the nature or degree of this, do we?

View original 1,050 more words

by telescoper at March 28, 2015 01:24 PM

Emily Lakdawalla - The Planetary Society Blog

In Pictures: One-Year ISS Mission Begins
The one-year ISS mission of Scott Kelly and Mikhail Kornienko began with an early morning launch from Baikonur, Kazakhstan.

March 28, 2015 05:04 AM

Clifford V. Johnson - Asymptotia

Getty Visit
Every year the Los Angeles Institute for the Humanities has a luncheon at the Getty jointly with the Getty Research Institute, and the LAIH fellows get to hang out with the Getty Scholars and people on the Getty Visiting Scholars program (Alexa Sekyra, the head of the program, was at the luncheon today, so I got to meet her). The talk is usually given by a curator of an exhibition or program that's either current or coming up. The first time I went, a few years ago, it was the Spring before the launch of Pacific Standard Time, the region-wide celebration of 35 years of Southern California art and art movements ('45-'80) that broke away from letting New York and Western Europe call the tunes and began to define some of the distinctive voices of their own that are now so well known worldwide... then we had a talk from a group of curators about the multi-museum collaboration to make that happen. One of the things I learned today from Andrew Perchuck, the Deputy Director of the Getty Research Institute, who welcomed us all in a short address, was that there will be a new Pacific Standard Time event coming up in 2018, so stay tuned. This time it will have more of a focus on Latino and Latin American art. See here. Today we had Nancy Perloff tell us about the current exhibit (for which she is [...] Click to continue reading this post

by Clifford at March 28, 2015 02:13 AM

March 27, 2015

Tommaso Dorigo - Scientificblogging

Another One Bites The Dust - WW Cross Section Gets Back Where It Belongs
Sometimes I think I am really lucky to have grown convinced that the Standard Model will not be broken by LHC results. It gives me peace of mind, detachment, and the opportunity to look at every new result found in disagreement with predictions with the right spirit - the "what's wrong with it?" attitude that every physicist should have in his or her genes.

read more

by Tommaso Dorigo at March 27, 2015 10:22 PM

Emily Lakdawalla - The Planetary Society Blog

Ceres Gets Real; Pluto Lurks
Although we are still a long way from understanding this fascinating little body, Ceres is finally becoming a real planet with recognizable features! And that's kinda cool.

March 27, 2015 09:10 PM

CERN Bulletin

Archives of the 90s - CERN Bulletin
A compilation of archives from the 1990s for the anniversary issue (50 years) of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:40 PM

CERN Bulletin

Archives of the 2000s - CERN Bulletin
A compilation of archives from the 2000s for the anniversary issue (50 years) of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:37 PM

CERN Bulletin

Archives of the 70s - CERN Bulletin
A compilation of archives from the 1970s for the anniversary issue (50 years) of the CERN Bulletin.

by Journalist, Student at March 27, 2015 02:24 PM

Symmetrybreaking - Fermilab/SLAC

Physics Madness: The Elemental Eight

Half the field, twice the fun. Which physics machine will win it all?

The first round of Physics Madness is over and the field has narrowed to eight amazing physics machines. The second round of voting is now open, so pick your favorites and send them on to the Fundamental Four.

You have until midnight PDT on Monday, March 30, to vote in this round. Come back on March 31 to see if your pick advanced and vote in the next round.

by Lauren Biron at March 27, 2015 01:00 PM

Georg von Hippel - Life on the lattice

Workshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline)
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

The scientific programme "Fundamental Parameters from Lattice QCD" at the Mainz Institute of Theoretical Physics (MITP) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

The deadline for registration is Tuesday, 31 March 2015.

by Georg v. Hippel ( at March 27, 2015 09:20 AM

John Baez - Azimuth

A Networked World (Part 1)

guest post by David Spivak

The problem

The idea that’s haunted me, and motivated me, for the past seven years or so came to me while reading a book called The Moment of Complexity: our Emerging Network Culture, by Mark C. Taylor. It was a fascinating book about how our world is becoming increasingly networked—wired up and connected—and that this is leading to a dramatic increase in complexity. I’m not sure if it was stated explicitly there, but I got the idea that with the advent of the World Wide Web in 1991, a new neural network had been born. The lights had been turned on, and planet earth now had a brain.

I wondered how far this idea could be pushed. Is the world alive, is it a single living thing? If it is, in the sense I meant, then its primary job is to survive, and to survive it’ll have to make decisions. So there I was in my living room thinking, “oh my god, we’ve got to steer this thing!”

Taylor pointed out that as complexity increases, it’ll become harder to make sense of what’s going on in the world. That seemed to me like a big problem on the horizon, because in order to make good decisions, we need to have a good grasp on what’s occurring. I became obsessed with the idea of helping my species through this time of unprecedented complexity. I wanted to understand what was needed in order to help humanity make good decisions.

What seemed important as a first step is that we humans need to unify our understanding—to come to agreement—on matters of fact. For example, humanity still doesn’t know whether global warming is happening. Sure almost all credible scientists have agreed that it is happening, but does that steer money into programs that will slow it or mitigate its effects? This isn’t an issue of what course to take to solve a given problem; it’s about whether the problem even exists! It’s like when people were talking about Obama being a Muslim, born in Kenya, etc., and some people were denying it, saying he was born in Hawaii. If that’s true, why did he repeatedly refuse to show his birth certificate?

It is important, as a first step, to improve the extent to which we agree on the most obvious facts. This kind of “sanity check” is a necessary foundation for discussions about what course we should take. If we want to steer the ship, we have to make committed choices, like “we’re turning left now,” and we need to do so as a group. That is, there needs to be some amount of agreement about the way we should steer, so we’re not fighting ourselves.

Luckily there are many cases of a group that needs to, and is able to, steer itself as a whole. For example, as a human, my neural brain works with my cells to steer my body. Similarly, corporations steer themselves based on boards of directors, and based on flows of information, which run bureaucratically and/or informally between different parts of the company. Note that in neither case is there any suggestion that each part—cell, employee, or corporate entity—is “rational”; they’re all just doing their thing. What we do see in these cases is that the group members work together in a context where information and internal agreement are valued and often attained.

It seemed to me that intelligent, group-directed steering is possible. It does occur. But what’s the mechanism by which it happens, and how can we think about it? I figured that the way we steer, i.e., make decisions, is by using information.

I should be clear: whenever I say information, I never mean it “in the sense of Claude Shannon”. As beautiful as Shannon’s notion of information is, he’s not talking about the kind of information I mean. He explicitly said in his seminal paper that information in his sense is not concerned with meaning:

Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.

In contrast, I’m interested in the semantic stuff, which flows between humans, and which makes possible decisions about things like climate change. Shannon invented a very useful quantitative measure of meaningless probability distributions.

That’s not the kind of information I’m talking about. When I say “I want to know what information is”, I’m saying I want to formulate the notion of human-usable semantic meaning, in as mathematical a way as possible.

Back to my problem: we need to steer the ship, and to do so we need to use information properly. Unfortunately, I had no idea what information is, nor how it’s used to make decisions (let alone to make good ones), nor how it’s obtained from our interaction with the world. Moreover, I didn’t have a clue how the minute information-handling at the micro-level, e.g., done by cells inside a body or employees inside a corporation, would yield information-handling at the macro (body or corporate) level.

I set out to try to understand what information is and how it can be communicated. What kind of stuff is information? It seems to follow rules: facts can be put together to form new facts, but only in certain ways. I was once explaining this idea to Dan Kan, and he agreed saying, “Yes, information is inherently a combinatorial affair.” What is the combinatorics of information?

Communication is similarly difficult to understand, once you dig into it. For example, my brain somehow enables me to use information and so does yours. But our brains are wired up in personal and ad hoc ways, when you look closely, a bit like a fingerprint or retinal scan. I found it fascinating that two highly personalized semantic networks could interface well enough to effectively collaborate.

There are two issues that I wanted to understand, and by to understand I mean to make mathematical to my own satisfaction. The first is what information is, as structured stuff, and what communication is, as a transfer of structured stuff. The second is how communication at micro-levels can create, or be, understanding at macro-levels, i.e., how a group can steer as a singleton.

Looking back on this endeavor now, I remain concerned. Things are getting increasingly complex, in the sorts of ways predicted by Mark C. Taylor in his book, and we seem to be losing some control: of the NSA, of privacy, of people 3D printing guns or germs, of drones, of big financial institutions, etc.

Can we expect or hope that our species as a whole will make decisions that are healthy, like keeping the temperature down, given the information we have available? Are we in the driver’s seat, or is our ship currently in the process of spiraling out of our control?

Let’s assume that we don’t want to panic but that we do want to participate in helping the human community to make appropriate decisions. A possible first step could be to formalize the notion of “using information well”. If we could do this rigorously, it would go a long way toward helping humanity get onto a healthy course. Further, mathematics is one of humanity’s best inventions. Using this tool to improve our ability to use information properly is a non-partisan approach to addressing the issue. It’s not about fighting, it’s about figuring out what’s happening, and weighing all our options in an informed way.

So, I ask: What kind of mathematics might serve as a formal ground for the notion of meaningful information, including both its successful communication and its role in decision-making?

by John Baez at March 27, 2015 01:00 AM

March 26, 2015

Symmetrybreaking - Fermilab/SLAC

Better ‘cosmic candles’ to illuminate dark energy

Using a newly identified set of supernovae, researchers have found a way to measure distances in space twice as precisely as before.

Researchers have more than doubled the precision of a method they use to measure long distances in space—the same one that led to the discovery of dark energy.

In a paper published in Science, researchers from the University of California, Berkeley, SLAC National Accelerator Laboratory, the Harvard-Smithsonian Center for Astrophysics and Lawrence Berkeley National Laboratory explain that the improvement allows them to measure astronomical distances with an uncertainty of less than 4 percent.

The key is a special type of Type Ia supernovae.

Type Ia supernovae are thermonuclear explosions of white dwarfs—the very dense remnants of stars that have burned all of their hydrogen fuel. A Type Ia supernova is believed to be triggered by the merger or interaction of the white dwarf with an orbiting companion star.

“For a couple of weeks, a Type Ia supernova becomes increasingly bright before it begins to fade,” says Patrick Kelly, the new study’s lead author from the University of California, Berkeley. “It turns out that the rate at which it fades tells us about the absolute brightness of the explosion.”

If the absolute brightness of a light source is known, its observed brightness can be used to calculate its distance from the observer. This is similar to a candle, whose light appears fainter the farther away it is. That’s why Type Ia supernovae are also referred to as astronomical “standard candles.”
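The "standard candle" logic here is just the inverse-square law; a minimal sketch in Python (the luminosity and flux values below are illustrative, not taken from the study):

```python
import math

def distance_from_flux(luminosity_w, flux_w_per_m2):
    # Inverse-square law: F = L / (4*pi*d^2)  =>  d = sqrt(L / (4*pi*F))
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

# A candle of known luminosity seen at a quarter of the flux is twice as far away.
L = 3.8e26  # watts, roughly one solar luminosity (illustrative)
d_near = distance_from_flux(L, 1.0e-8)
d_far = distance_from_flux(L, 0.25e-8)
print(d_far / d_near)  # → 2.0 (up to floating-point rounding)
```

The whole game with Type Ia supernovae is pinning down the absolute luminosity; the fade-rate relation Kelly describes is what turns them from mere candles into *standard* ones.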

The 2011 Nobel Prize in Physics went to a trio of scientists who used these standard candles to determine that our universe is expanding at an accelerating rate. Scientists think this is likely caused by an unknown form of energy they call dark energy.

Measurements using these cosmic candles are far from perfect, though. For reasons that are not yet understood, the distances inferred from supernova explosions seem to be systematically linked to the environments the supernovae are located in. For instance, the mass of the host galaxy appears to affect the inferred distance at the 5 percent level.

In the new study, Kelly and his colleagues describe a set of Type Ia supernovae that allow distance measurements that are much less dependent on such factors.  Using data from NASA’s GALEX satellite, the Sloan Digital Sky Survey and the Kitt Peak National Observatory, they determined that supernovae located in host galaxies that are rich in young stars yield much more precise distances. 

The scientists also have a likely explanation for the extraordinary precision. “It appears that the corresponding white dwarfs were fairly young when they exploded,” Kelly says. “This relatively small spread in age may cause this particular set of Type Ia supernovae to be more uniform.”

For their study, the scientists analyzed almost 80 supernovae that, on average, were 400 million light years away. On an astronomical scale, this is a relatively short distance, and light emitted by these sources stems from rather recent cosmic times.

“An exciting prospect for our analysis is that it can be easily applied to Type Ia supernovae in larger distances—an approach that will let us analyze distances more accurately as we go further back in time,” Kelly says.

This knowledge, in turn, may help researchers draw a more precise picture of the expansion history of the universe and could provide crucial clues about the physics behind the ever increasing speed at which the cosmos expands. 

The intense ultraviolet emission from stars within a circle surrounding these supernovae (shown in white) reveals the presence of hot, massive stars and suggests that the supernovae result from the disruption of comparatively young white dwarf stars.

Courtesy of: Patrick Kelly/University of California, Berkeley


Like what you see? Sign up for a free subscription to symmetry!

by Manuel Gnida at March 26, 2015 01:00 PM

astrobites - astro-ph reader's digest

Jupiter is my shepherd that I shall not want

Title: Jupiter’s Decisive Role in the Inner Solar System’s Early Evolution
Authors: Konstantin Batygin and Gregory Laughlin
First Author Institution: Division of Geological and Planetary Sciences, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
Status: Submitted to Proceedings of the National Academy of Sciences of the United States of America

Since the discovery of the first extra-solar planet around another star in 1995, we now know of ~4000 candidate planets in our galaxy. All of these discoveries have been key in improving our understanding of the formation of both planets and star systems as a whole. However, a full explanation of the processes which form star systems is still elusive, including a description of the formation of our very own Solar System.

The problem with all these exoplanets and star systems we’ve discovered so far is that they suggest that the Solar System is just weird. Most other systems seem to have massive planets similar to the size and mass of Neptune which orbit their own star at roughly Mercury’s distance from the Sun (often these planets can be as big as Jupiter – since these are the easiest for us to detect). For example the famous Kepler-11 system is extremely compact, with 6 planets with a total mass of about 40 Earth masses all within 0.5 AU (astronomical unit – the distance of the Earth from the Sun) orbiting around a G-type star not at all dissimilar from the Sun.

Figure 1: Diagram showing the size and distribution of the Kepler detected exoplanets with a mass less than Jupiter and within the orbit of Mars. The radial distance is plotted logarithmically. The orbits of the terrestrial solar planets are also shown. Figure 1 in Batygin & Laughlin.

Figure 1 shows all the Kepler detected planets with masses less than Jupiter within the orbit of Mars from their own star. So if most other star systems seem to carry so much planetary mass close in to their star, why is the Solar System so mass poor and the Sun so alone in the centre?

The authors of this paper use simulations of how the orbital parameters of different objects in systems change due to the influence of other objects, to test the idea that Jupiter could have migrated inwards from the initial place it formed to somewhere between the orbits of Mars and Earth (~ 1.5 AU). The formation of Saturn, during Jupiter’s migration, is thought to have had a massive gravitational influence on Jupiter and consequently pulled it back out to its present day position.

If we think first about how star systems form, the most popular theory is the core-accretion theory, where material around a star condenses into a protoplanetary disc from which planets form from the bottom up. Small grains of dust collide and stick together forming small rocks, then in turn planetesimals and so on until a planet sized mass is formed. So we can imagine Jupiter encountering an army of planetesimals as it migrated inwards. The gravitational effects, perturbations and resonances between the orbits of the planetesimals and Jupiter ultimately work to cause the planetesimals to migrate inwards towards the Sun. The simulations in this paper show that with some simplifying assumptions the total amount of mass that could be swept up and inwards towards the Sun by Jupiter is ~10-20 Earth masses.

Not only are the orbital periods of these planetesimals affected but their orbital eccentricities (how far from circular the orbit is) are also increased. This means that within that army of planetesimals there are now a lot more occasions where two orbits might cross, initiating the inevitable cascade of collisions which grind down each planetesimal into smaller and smaller chunks over time. Figure 2 shows how the simulations predict this for planetesimals as Jupiter migrates inwards.

Figure 2: Evolution of the eccentricity of planetesimals in the Solar System due to the orbital migration of Jupiter. Each planetesimal is colour coded according to its initial conditions. Figure 2a in Batygin & Laughlin.
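A quick way to see why pumping eccentricities triggers collisions: two coplanar orbits can only intersect if their radial ranges [a(1−e), a(1+e)] overlap. A toy check (the numbers are illustrative, not from the paper):

```python
def orbits_can_cross(a1, e1, a2, e2):
    # A Keplerian orbit sweeps radii from perihelion a*(1-e) to aphelion a*(1+e);
    # two coplanar orbits can intersect only if these intervals overlap.
    peri1, apo1 = a1 * (1 - e1), a1 * (1 + e1)
    peri2, apo2 = a2 * (1 - e2), a2 * (1 + e2)
    return peri1 <= apo2 and peri2 <= apo1

# Circular orbits at 1.0 and 1.2 AU can never collide...
print(orbits_can_cross(1.0, 0.0, 1.2, 0.0))   # → False
# ...but exciting the inner orbit's eccentricity to 0.25 lets them cross.
print(orbits_can_cross(1.0, 0.25, 1.2, 0.0))  # → True
```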

Given the large impact frequency expected in a rather old protoplanetary disc where Jupiter and Saturn have already formed, the simulations suggest that a large fraction, if not all, of the planetesimals affected by Jupiter will quickly fall inwards to the Sun, especially after Jupiter reverses its migration direction. This decay in the orbits is shown in Figure 3 with each planetesimal getting steadily closer to the Sun until they are consumed by it.

Figure 3: The decay of the orbital radius of planetesimals over time due to Jupiter’s migration. The planetesimals are shown by the coloured lines and the planets from the Kepler-11 system are shown by the black lines. This plot shows how if a planet such as Jupiter migrated inwards in the Kepler-11 system, each of the 6 planets would end up destroyed by the parent star. Figure 3 in Batygin & Laughlin.

The orbital wanderings of Jupiter inferred from these simulations might explain the lack of present-day high mass planets close to the Sun. The planetesimals that survived the collisions and inwards migration may have been few and far between, only being able to coalesce to form smaller rocky planets like Earth.

The next step for this theory is to test it on another star system similar to our own, with giant planets on orbital periods exceeding 100 days. However, our catalogue of exoplanets is not yet complete enough to provide such a test. Finding these large planets at such large radii from their star is difficult because their long orbital periods limit how often we have the chance of observing a transit. For example, if we wanted to detect a Neptune-like planet in a Neptune-like orbit, a transit would only occur every 165 years. Detecting small planets close to a star is also difficult, as current telescope sensitivities don’t allow us to detect the change in a star’s light caused by planets so small.

So perhaps we just haven’t been looking long enough or with good enough equipment to find star systems like ours. However, with missions like Gaia, TESS and K2 in the near future, perhaps we’ll find that the Solar System is not as unique as we think.

by Becky Smethurst at March 26, 2015 11:12 AM

Clifford V. Johnson - Asymptotia

Framed Graphite
It took a while, but I got this task done. (Click for a slightly larger view.) Things take a lot longer these days, because...newborn. You'll recall that I did a little drawing of the youngster very soon after his arrival in December. Well, it was decided a while back that it should be on display on a wall in the house rather than hide in my notebooks like my other sketches tend to do. This was a great honour, but presented me with difficulty. I have a rule to not take any pages out of my notebooks. You'll think it is nuts, but you'll find that this madness is shared by many people who keep notebooks/sketchbooks. Somehow the whole thing is a Thing, if you know what I mean. To tear a page out would be a distortion of the record.... it would spoil the archival aspect of the book. (Who am I kidding? I don't think it likely that future historians will be poring over my notebooks... but I know that future Clifford will be, and it will be annoying to find a gap.) (It is sort of like deleting comments from a discussion on a blog post. I try not to do that without good reason, and I leave a trail to show that it was done if I must.) Anyway, where was I? Ah. Pages. Well, I had to find a way of making a framed version of the drawing that kept the spirit and feel of the drawing intact while [...] Click to continue reading this post

by Clifford at March 26, 2015 04:51 AM

March 25, 2015

arXiv blog

Physicists Describe New Class of Dyson Sphere

Physicists have overlooked an obvious place to search for shell-like structures constructed around stars by advanced civilizations to capture their energy.

Back in 1960, the physicist Freeman Dyson published an unusual paper in the journal Science entitled “Search for Artificial Stellar Sources of Infra-red Radiation.” In it, he outlined a hypothetical structure that entirely encapsulates a star to capture its energy, which has since become known as a Dyson sphere.

March 25, 2015 09:47 PM

arXiv blog

An Emerging Science of Clickbait

Researchers are teasing apart the complex set of links between the virality of a Web story and the emotions it generates.

March 25, 2015 08:06 PM

Quantum Diaries

The dawn of DUNE

This article appeared in symmetry on March 25, 2015.

A powerful planned neutrino experiment gains new members, new leaders and a new name. Image: Fermilab

The neutrino experiment formerly known as LBNE has transformed. Since January, its collaboration has gained about 50 new member institutions, elected two new spokespersons and chosen a new name: Deep Underground Neutrino Experiment, or DUNE.

The proposed experiment will be the most powerful tool in the world for studying hard-to-catch particles called neutrinos. It will span 800 miles. It will start with a near detector and an intense beam of neutrinos produced at Fermi National Accelerator Laboratory in Illinois. It will end with a 10-kiloton far detector located underground in a laboratory at the Sanford Underground Research Facility in South Dakota. The distance between the two detectors will allow scientists to study how neutrinos change as they zip at close to the speed of light straight through the Earth.

“This will be the flagship experiment for particle physics hosted in the US,” says Jim Siegrist, associate director of high-energy physics for the US Department of Energy’s Office of Science. “It’s an exciting time for neutrino science and particle physics generally.”

In 2014, the Particle Physics Project Prioritization Panel identified the experiment as a top priority for US particle physics. At the same time, it recommended the collaboration take a few steps back and invite more international participation in the planning process.

Physicist Sergio Bertolucci, director of research and scientific computing at CERN, took the helm of an executive board put together to expand the collaboration and organize the election of new spokespersons.

DUNE now includes scientists from 148 institutions in 23 countries. It will be the first large international project hosted by the US to be jointly overseen by outside agencies.

This month, the collaboration elected two new spokespersons: André Rubbia, a professor of physics at ETH Zurich, and Mark Thomson, a professor of physics at the University of Cambridge. One will serve as spokesperson for two years and the other for three to provide continuity in leadership.

Rubbia got started with neutrino research as a member of the NOMAD experiment at CERN in the ’90s. More recently he was a part of LAGUNA-LBNO, a collaboration that was working toward a long-baseline experiment in Europe. Thomson has a long-term involvement in US-based underground and neutrino physics. He is the DUNE principal investigator for the UK.

Scientists are coming together to study neutrinos, rarely interacting particles that constantly stream through the Earth but are not well understood. They come in three types and oscillate, or change from type to type, as they travel long distances. They have tiny, unexplained masses. Neutrinos could hold clues about how the universe began and why matter greatly outnumbers antimatter, allowing us to exist.
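The flavour change described above is usually illustrated with the two-flavour approximation to neutrino oscillations (the full three-flavour treatment DUNE targets is more involved; the baseline, mass splitting and energy below are rough illustrative numbers, not the experiment's design values):

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    # Two-flavour oscillation probability:
    # P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * baseline_km / energy_gev) ** 2

# An ~800-mile (~1300 km) baseline with few-GeV neutrinos puts the phase
# 1.267 * 2.4e-3 * 1300 / 2.5 ≈ 1.58 rad near the first oscillation maximum.
print(osc_prob(1.0, 2.4e-3, 1300.0, 2.5))  # close to 1
```

This is why the distance between the two detectors matters: the baseline and beam energy together set where on the oscillation curve the measurement sits.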

“The science is what drives us,” Rubbia says. “We’re at the point where the next generation of experiments is going to address the mystery of neutrino oscillations. It’s a unique moment.”

Scientists hope to begin installation of the DUNE far detector by 2021. “Everybody involved is pushing hard to see this project happen as soon as possible,” Thomson says.

Jennifer Huber and Kathryn Jepsen

Image: Fermilab

by Fermilab at March 25, 2015 05:46 PM

Symmetrybreaking - Fermilab/SLAC

The dawn of DUNE

A powerful planned neutrino experiment gains new members, new leaders and a new name.

The neutrino experiment formerly known as LBNE has transformed. Since January, its collaboration has gained about 50 new member institutions, elected two new spokespersons and chosen a new name: Deep Underground Neutrino Experiment, or DUNE.

The proposed experiment will be the most powerful tool in the world for studying hard-to-catch particles called neutrinos. It will span 800 miles. It will start with a near detector and an intense beam of neutrinos produced at Fermi National Accelerator Laboratory in Illinois. It will end with a 10-kiloton far detector located underground in a laboratory at the Sanford Underground Research Facility in South Dakota. The distance between the two detectors will allow scientists to study how neutrinos change as they zip at close to the speed of light straight through the Earth.

“This will be the flagship experiment for particle physics hosted in the US,” says Jim Siegrist, associate director of high-energy physics for the US Department of Energy’s Office of Science. “It’s an exciting time for neutrino science and particle physics generally.”

In 2014, the Particle Physics Project Prioritization Panel identified the experiment as a top priority for US particle physics. At the same time, it recommended the collaboration take a few steps back and invite more international participation in the planning process.

Physicist Sergio Bertolucci, director of research and scientific computing at CERN, took the helm of an executive board put together to expand the collaboration and organize the election of new spokespersons.

DUNE now includes scientists from 148 institutions in 23 countries. It will be the first large international project hosted by the US to be jointly overseen by outside agencies.

This month, the collaboration elected two new spokespersons: André Rubbia, a professor of physics at ETH Zurich, and Mark Thomson, a professor of physics at the University of Cambridge. One will serve as spokesperson for two years and the other for three to provide continuity in leadership.

Rubbia got started with neutrino research as a member of the NOMAD experiment at CERN in the ’90s. More recently he was a part of LAGUNA-LBNO, a collaboration that was working toward a long-baseline experiment in Europe. Thomson has a long-term involvement in US-based underground and neutrino physics. He is the DUNE principal investigator for the UK.

Scientists are coming together to study neutrinos, rarely interacting particles that constantly stream through the Earth but are not well understood. They come in three types and oscillate, or change from type to type, as they travel long distances. They have tiny, unexplained masses. Neutrinos could hold clues about how the universe began and why matter greatly outnumbers antimatter, allowing us to exist.

“The science is what drives us,” Rubbia says. “We’re at the point where the next generation of experiments is going to address the mystery of neutrino oscillations. It’s a unique moment.”

Scientists hope to begin installation of the DUNE far detector by 2021. “Everybody involved is pushing hard to see this project happen as soon as possible,” Thomson says. 

Courtesy of: Fermilab


Like what you see? Sign up for a free subscription to symmetry!

by Jennifer Huber and Kathryn Jepsen at March 25, 2015 01:00 PM

Quantum Diaries

Vote LUX, and give an underdog a chance

I’ve had a busy few weeks after getting back from America, so apologies for the lack of blogging! Some things I’ve been up to:
– Presenting my work on LUX to MPs at the Houses of Parliament for the SET for Britain competition. No prizes, but lots of interesting questions from MPs, for example: “and what can you do with dark matter once you find it?”. I think he was looking for monetary gain, so perhaps I should have claimed dark matter will be the zero-carbon fuel of the future!
– Supplementing my lowly salary by marking an enormous pile of undergraduate problem sheets and by participating in paid eye-tracking studies for both the UCL psychology department and a marketing company
– The usual work on analysing LUX data and trying to improve our sensitivity to low mass dark matter.
And on Saturday, I will be on a panel of “experts” (how this has happened I don’t know) giving a talk as part of the UCL Your Universe festival. The discussion is aptly titled “Light into the Dark: Mystery of the Invisible Universe”, and if you’re in London and interested in this sort of thing, you should come along. Free tickets are available here.

I will hopefully be back to posting more regularly now, but first, a bit of promotion!

Symmetry Magazine are running a competition to find “which physics machine will reign supreme” and you can vote right here.

Physics Madness: Symmetry Magazine’s tournament to find the champion physics experiment

The first round matches LUX with the LHC, and considering we are a collaboration of just over 100 (compared to CERN’s thousands of scientists) with nothing like the media coverage the LHC gets, we’re feeling like a bit of an underdog.
But you can’t just vote for us because we’re an underdog, so here are some reasons you should #voteLUX:

-For spin-dependent WIMP-nucleon scattering for WIMPs above ~8 GeV, LUX is 10,000x more sensitive than the LHC (see figure below).
-LUX cost millions of dollars, the LHC cost billions.
-It’s possible to have an understanding of how LUX works in its entirety. The LHC is too big and has too many detectors for that!
-The LHC is 175m underground. LUX is 1,478m underground, over 8x deeper, and so is much better shielded from cosmic rays.
-The LHC has encountered problems both times it has tried to start up. LUX is running smoothly right now!
-I actually feel kind of bad now, because I like the LHC, so I will stop.

Dark matter sensitivity limits

Dark matter sensitivity limits, comparing LHC results to LUX in red. The x axis is the mass of the dark matter particle, and the y axis is its interaction probability. The smaller this number, the greater the sensitivity.

Anyway, if you fancy giving the world’s most sensitive dark matter detector a hint of a chance in its battle against the behemoth LHC, vote LUX. Let’s beat the system!

by Sally Shaw at March 25, 2015 11:27 AM

Lubos Motl - string vacua and pheno

CMS: a 2.9-sigma \(WH\) hint at \(1850\GeV\)

Unfortunately, due to a short circuit somewhere at the LHC, a small metallic piece will have to be removed – which takes a week (it's so slow because CERN employs LEGO men to do the job) – and the 2015 LHC physics run may be postponed by up to 5 weeks because of that.
Wolfram: You have the last week to buy Mathematica at a 25% discount (a "pi day" celebration; student edition). Edward Measure has already happily bought it.
Meanwhile, ATLAS and CMS have flooded their web pages with new papers resulting from the 2012 run. In most of these papers, the Standard Model gets an "A".

It is not really the case of the CMS' note
Search for massive \(WH\) resonances decaying to \(\ell\nu b\bar b\) final state in the boosted regime at \(\sqrt{s}=8\TeV\)
because a local 2.9-sigma excess is seen in the muon subchannel – see Figures 5, 6b, and 7 – for the mass of a new hypothetical charged particle \(1.8\TeV\leq m_{W'} \leq 1.9 \TeV\).

It's a small excess – the confidence level gets reduced to about 2 sigma with the look-elsewhere correction – but this new hypothetical charged particle could be interpreted within the Littlest Higgs model (theory) or a Heavy Vector Triplet model, among other, perhaps more likely ones.

In the (now) long list of LHC anomalies mentioned on this blog, some could look similar, especially the \(2.1 \TeV\) right-handed \(W_R^\pm\)-boson (CMS July 2014) and the strange effective-mass \(1.65\TeV\) events (ATLAS March 2012).

by Luboš Motl ( at March 25, 2015 08:47 AM

March 24, 2015

astrobites - astro-ph reader's digest

The First Star Clusters

Title: The Luminosity of Population III Star Clusters

Authors: Alexander L. DeSouza and Shantanu Basu

First Author’s Institution: Department of Physics and Astronomy, University of Western Ontario

Status: Accepted by MNRAS

First light

A major goal for the next generation of telescopes, such as the James Webb Space Telescope (JWST), is to study the first stars and galaxies in the universe. But what would they look like? Would JWST be able to see them? Recent studies have suggested that even the most massive specimens of the very first generation of stars, known as Population III stars, may be undetectable with JWST.

But not all hope is lost. One of the reasons why Population III stars are so hard to detect is that, unlike later generations of stars, they are believed to form in isolation. Later generations of stars (called Population I and Population II stars) usually form in clusters, from the fragmentation of large clouds of molecular gas. On the other hand, cosmological simulations have suggested that Population III stars would form from gas collected in dark matter mini-halos of about a million solar masses, which would have virialized (reached dynamic equilibrium) by redshifts of about 20-50. Molecular hydrogen acts as a coolant in this scenario, allowing the gas to cool enough to condense down into a star. Early simulations showed that gravitational fragmentation would eventually produce one massive fragment, on the order of a hundred solar masses, per halo. This molecular hydrogen, however, could easily be destroyed by the UV radiation from the first massive star formed, preventing others from forming from the same parent cloud of gas. Population III stars in this paradigm would thus be much more massive than later generations of stars, but also isolated from other ancient stars.

However, there is a lot of uncertainty about the masses of these first stars, and recent papers have investigated the possibility that the picture could be more complicated than first thought. The molecular gas in the dark matter mini-halos could experience more fragmentation before it reaches stellar density, which may lead to multiple smaller stars, rather than one large one, forming from the same cloud of gas. These stars could then evolve relatively independently of each other. The authors of today’s paper investigate the idea that Population III stars could have formed in clusters and also study the luminosity of the resulting groups of stars.



Figure 1 from the paper showing the evolution of a single protostar in time steps of 5 kyr. The leftmost image shows the protostar and its disk at 5 kyr after the formation of the protostar. Some fragments can be seen at radii of 10 AU to several hundred AU. They can then accrete onto the protostar in bursts of accretion. The middle time step shows a quiescent phase. There are no fragments within 300 AU of the disk and no new ones are forming, so the disk is relatively smooth; the fragments that already exist formed during an earlier phase and were raised to higher orbits. The rightmost image shows the system at 15 kyr from the formation of the protostar, showing how some of the larger fragments can be sheared apart and produce large fluctuations in the luminosity of the protostar as they are accreted.

The authors of today’s paper begin by arguing that the pristine, mostly atomic gas that collects in these early dark matter mini-halos could fragment by the Jeans criterion in a manner similar to the giant molecular clouds that we see today. This fragmentation would produce small clusters of stars that are relatively isolated from each other, so they are able to model each of the members in the cluster independently. They do this by using numerical hydrodynamical simulations in the thin-disk limit.

Their fiducial model is a gas cloud of 300 solar masses, about 0.5 pc in radius, at a temperature of 300 K. They find that the disk that forms around the protostars (the large fragments of gas that have contracted out of the original cloud) forms relatively quickly, within about 3 kyr of the formation of the protostar. The disk begins to fragment a few hundred years after it forms. These clumps can then accrete onto the protostar in bursts of accretion or get raised to higher orbits.
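To see roughly why such a cloud fragments, here is a back-of-the-envelope Jeans-mass sketch (mine, not the authors'). The mean molecular weight mu = 1.22 for mostly-atomic primordial gas is an assumed value, and the numerical prefactor in the Jeans mass varies between derivations, so only the scalings should be taken seriously:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.381e-23      # Boltzmann constant [J/K]
m_H = 1.673e-27      # hydrogen mass [kg]
M_sun = 1.989e30     # solar mass [kg]
pc = 3.086e16        # parsec [m]

def jeans_mass(T, rho, mu=1.22):
    """One common form of the Jeans mass:
    M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2).
    mu = 1.22 is an assumed value for mostly-atomic primordial gas."""
    return ((5 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3 / (4 * math.pi * rho)) ** 0.5)

# Mean density of the fiducial cloud: 300 M_sun in a 0.5 pc sphere.
M, R, T = 300 * M_sun, 0.5 * pc, 300.0
rho = M / (4 / 3 * math.pi * R ** 3)

print(f"mean density = {rho:.2e} kg/m^3")
print(f"Jeans mass   = {jeans_mass(T, rho) / M_sun:.0f} M_sun")
```

Because M_J scales as T^(3/2) rho^(-1/2), the Jeans mass keeps dropping as the collapsing gas gets denser, which is what lets a contracting cloud break up into progressively smaller pieces.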

Most of the time, however, the protostar is in a quiescent phase and is accreting mass relatively smoothly. The luminosity of the overall star cluster increases during the bursts of accretion, and it also increases as new protostars are formed. The increasing luminosity of the stellar cluster can make it more difficult to detect single accretion events. For clusters of a moderate size of about 16 members, these competing effects result in the star cluster spending about 15% of its time at an elevated luminosity, sometimes even 1000 times the quiescent luminosity. The star clusters can then have luminosities approaching, and occasionally exceeding, 10^8 solar luminosities. Population III stars with masses ranging from 100-500 solar masses, on the other hand, are likely to have luminosities of about 10^6 to 10^7 solar luminosities.

These clusters would be some of the most luminous objects at these redshifts and would make good targets for telescopes such as ALMA and JWST. We have few constraints on star formation rates at such high redshifts, and a lot of uncertainty about what the earliest stars would look like. So should these clusters exist, even if we couldn't see massive individual Population III stars, we may still be able to detect these clusters of smaller stars and gain insight into what star formation looked like at the beginning of our universe.

by Caroline Huang at March 24, 2015 10:57 PM

John Baez - Azimuth

Stationary Stability in Finite Populations

guest post by Marc Harper

A while back, in the article Relative entropy minimization in evolutionary dynamics, we looked at extensions of the information geometry / evolutionary game theory story to more general time-scales, incentives, and geometries. Today we’ll see how to make this all work in finite populations!

Let’s recall the basic idea from last time, which John also described in his information geometry series. The main theorem is this: when there’s an evolutionarily stable state for a given fitness landscape, the relative entropy between the stable state and the population distribution decreases along the population trajectories as they converge to the stable state. In short, relative entropy is a Lyapunov function. This is a nice way to look at the action of a population under natural selection, and it has interesting analogies to Bayesian inference.

The replicator equation is a nice model from an intuitive viewpoint, and it’s mathematically elegant. But it has some drawbacks when it comes to modeling real populations. One major issue is that the replicator equation implicitly assumes that the population proportions of each type are differentiable functions of time, obeying a differential equation. This only makes sense in the limit of large populations. Other closely related models, such as the Lotka-Volterra model, focus on the number of individuals of each type (e.g. predators and prey) instead of the proportion. But they often assume that the number of individuals is a differentiable function of time, and a population of 3.5 isn’t very realistic either.

Real populations of replicating entities are not infinitely large; in fact they are often relatively small and of course have whole numbers of each type, at least for large biological replicators (like animals). They take up space and only so many can interact meaningfully. There are quite a few models of evolution that handle finite populations and some predate the replicator equation. Models with more realistic assumptions typically have to leave the realm of derivatives and differential equations behind, which means that the analysis of such models is more difficult, but the behaviors of the models are often much more interesting. Hopefully by the end of this post, you’ll see how all of these diagrams fit together:

One of the best-known finite population models is the Moran process, which is a Markov chain on a finite population. This is the quintessential birth-death process. For a moment consider a population of just two types A and B. The state of the population is given by a pair of nonnegative integers (a,b) with a+b=N, the total number of replicators in the population, and a and b the number of individuals of type A and B respectively. Though it may seem artificial to fix the population size N, this often turns out not to be that big of a deal, and you can assume the population is at its carrying capacity to make the assumption realistic. (Lots of people study populations that can change size, and that have replicators spatially distributed, say on a graph, but we'll assume they can all interact with each other whenever they want for now.)

A Markov model works by transitioning from state to state in each round of the process, so we need to define the transition probabilities to complete the model. Let's put a fitness landscape on the population, given by two functions f_A and f_B of the population state (a,b). Now we choose an individual to reproduce proportionally to fitness, e.g. we choose an A individual to reproduce with probability

\displaystyle{ \frac{a f_A}{a f_A + b f_B} }

since there are a individuals of type A and they each have fitness f_A. This is analogous to the ratio of fitness to mean fitness from the discrete replicator equation, since

\displaystyle{ \frac{a f_A}{a f_A + b f_B} =  \frac{\frac{a}{N} f_A}{\frac{a}{N} f_A + \frac{b}{N} f_B} \to \frac{x_i f_i(x)}{\overline{f(x)}} }

and the discrete replicator equation is typically similar to the continuous replicator equation (this can be made precise), so the Moran process captures the idea of natural selection in a similar way. Actually there is a way to recover the replicator equation from the Moran process in large populations—details at the end!

We’ll assume that the fitnesses are nonnegative and that the total fitness (the denominator) is never zero; if that seems artificial, some people prefer to transform the fitness landscape by e^{\beta f(x)}, which gives a ratio reminiscent of the Boltzmann or Fermi distribution from statistical physics, with the parameter \beta playing the role of intensity of selection rather than inverse temperature. This is sometimes called Fermi selection.

That takes care of the birth part. The death part is easier: we just choose an individual at random (uniformly) to be replaced. Now we can form the transition probabilities of moving between population states. For instance the probability of moving from state (a,b) to (a+1, b-1) is given by the product of the birth and death probabilities, since they are independent:

\displaystyle{ T_a^{a+1} = \frac{a f_A}{a f_A + b f_B} \frac{b}{N} }

since we have to choose a replicator of type A to reproduce and one of type B to be replaced. Similarly for (a,b) to (a-1, b+1) (switch all the a's and b's), and we can write the probability of staying in the state (a, N-a) as

T_a^{a} = 1 - T_{a}^{a+1} - T_{a}^{a-1}

Since we only replace one individual at a time, this covers all the possible transitions, and keeps the population constant.

We’d like to analyze this model and many people have come up with clever ways to do so, computing quantities like fixation probabilities (also known as absorption probabilities), indicating the chance that the population will end up with one type completely dominating, i.e. in state (0, N) or (N,0). If we assume that the fitness of type A is constant and simply equal to 1, and the fitness of type B is r \neq 1, we can calculate the probability that a single mutant of type B will take over a population of type A using standard Markov chain methods:

\displaystyle{\rho = \frac{1 - r^{-1}}{1 - r^{-N}} }

For neutral relative fitness (r=1), taking the limit of this formula gives \rho = 1/N, the probability that a mutant takes over by drift alone. Since the two boundary states (0, N) and (N,0) are absorbing (no transitions out), in the long run every population ends up in one of these two states, i.e. the population becomes homogeneous. (This is the formulation referred to by Matteo Smerlak in The mathematical origins of irreversibility.)
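The closed-form fixation probability can be cross-checked directly against the transition probabilities, using the standard product formula for absorption in a birth-death chain. A minimal pure-Python sketch:

```python
def fixation_probability(r, N):
    """Probability that a single B mutant with constant relative fitness r
    fixes in a population of N-1 type-A individuals (fitness 1), computed
    from the birth-death transition probabilities via the standard product
    formula: rho = 1 / (1 + sum_k prod_{j<=k} T_j^- / T_j^+)."""
    total = 1.0
    prod = 1.0
    for j in range(1, N):              # j = current number of B individuals
        b, a = j, N - j
        t_plus = (b * r / (b * r + a)) * (a / N)   # B born, A dies
        t_minus = (a / (b * r + a)) * (b / N)      # A born, B dies
        prod *= t_minus / t_plus                   # equals 1/r here
        total += prod
    return 1.0 / total

r, N = 2.0, 10
closed_form = (1 - 1 / r) / (1 - r ** (-N))
print(fixation_probability(r, N), closed_form)   # these agree
print(fixation_probability(1.0, N), 1 / N)       # neutral case: 1/N
```

For constant fitness the ratio T_j^- / T_j^+ collapses to 1/r, which is exactly how the geometric series behind the closed form arises.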

That’s a bit different flavor of result than what we discussed previously, since we had stable states where both types were present, and now that’s impossible, and a bit disappointing. We need to make the population model a bit more complex to have more interesting behaviors, and we can do this in a very nice way by adding the effects of mutation. At the time of reproduction, we’ll allow either type to mutate into the other with probability \mu. This changes the transition probabilities to something like

\displaystyle{ T_a^{a+1} = \frac{a (1-\mu) f_A + b \mu f_B}{a f_A + b f_B} \frac{b}{N} }

Now the process never stops wiggling around, but it does have something known as a stationary distribution, which gives the probability that the population is in any given state in the long run.

For populations with more than two types the basic ideas are the same, but there are more neighboring states that the population could move to, and many more states in the Markov process. One can also use more complicated mutation matrices, but this setup is good enough to typically guarantee that no one species completely takes over. For interesting behaviors, typically \mu = 1/N is a good choice (there’s some biological evidence that mutation rates are typically inversely proportional to genome size).

Without mutation, once the population reached (0,N) or (N,0), it stayed there. Now the population bounces between states, either because of drift, selection, or mutation. Based on our stability theorems for evolutionarily stable states, it’s reasonable to hope that for small mutation rates and larger populations (less drift), the population should spend most of its time near the evolutionarily stable state. This can be measured by the stationary distribution which gives the long run probabilities of a process being in a given state.
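For two types the stationary distribution can actually be computed exactly, because any birth-death chain on a line is reversible (the irreversibility mentioned below only appears with three or more types). Here is a sketch using the landscape f_A = a + 2b, f_B = 2a + b from the Claussen–Traulsen example discussed next, with mutation rate mu = 1/N:

```python
def stationary_distribution(N, mu):
    """Exact stationary distribution of the two-type Moran process with
    mutation, using reversibility of birth-death chains:
    s_{a+1} / s_a = T_a^+ / T_{a+1}^-."""
    f_A = lambda a, b: a + 2 * b      # landscape from game matrix [[1,2],[2,1]]
    f_B = lambda a, b: 2 * a + b

    def t_plus(a):                    # a -> a+1: type-A birth, type-B death
        b = N - a
        fa, fb = f_A(a, b), f_B(a, b)
        return (a * (1 - mu) * fa + b * mu * fb) / (a * fa + b * fb) * (b / N)

    def t_minus(a):                   # a -> a-1: type-B birth, type-A death
        b = N - a
        fa, fb = f_A(a, b), f_B(a, b)
        return (b * (1 - mu) * fb + a * mu * fa) / (a * fa + b * fb) * (a / N)

    s = [1.0]
    for a in range(N):
        s.append(s[-1] * t_plus(a) / t_minus(a + 1))
    Z = sum(s)
    return [w / Z for w in s]

N = 50
s = stationary_distribution(N, mu=1.0 / N)
print(max(range(N + 1), key=lambda a: s[a]))   # the peak sits at a = N/2
```

The resulting distribution is symmetric and peaked at a = N/2, matching the binomial-like stationary state described below.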

Previous work by Claussen and Traulsen:

• Jens Christian Claussen and Arne Traulsen, Non-Gaussian fluctuations arising from finite populations: exact results for the evolutionary Moran process, Physical Review E 71 (2005), 025101.

suggested that the stationary distribution is at least sometimes maximal around evolutionarily stable states. Specifically, they showed that for a very similar model with fitness landscape given by

\left(\begin{array}{c} f_A \\ f_B \end{array}\right)  = \left(\begin{array}{cc} 1 & 2\\ 2&1 \end{array}\right)  \left(\begin{array}{c} a\\ b \end{array}\right)

the stationary state is essentially a binomial distribution centered at (N/2, N/2).

Unfortunately, the stationary distribution can be very difficult to compute for an arbitrary Markov chain. While it can be computed for the Markov process described above without mutation, and in the case studied by Claussen and Traulsen, there’s no general analytic formula for the process with mutation, nor for more than two types, because the processes are not reversible. Since we can’t compute the stationary distribution analytically, we’ll have to find another way to show that the local maxima of the stationary distribution are “evolutionarily stable”. We can approximate the stationary distribution fairly easily with a computer, so it’s easy to plot the results for just about any landscape and reasonable population size (e.g. N \approx 100).

It turns out that we can use a relative entropy minimization approach, just like for the continuous replicator equation! But how? We lack some essential ingredients such as deterministic and differentiable trajectories. Here’s what we do:

• We show that the local maxima and minima of the stationary distribution satisfy a complex balance criterion.

• We then show that these states minimize an expected relative entropy.

• This will mean that the current state and the expected next state are ‘close’.

• Lastly, we show that these states satisfy an analogous definition of evolutionary stability (now incorporating mutation).

The relative entropy allows us to measure how close the current state is to the expected next state, which captures the idea of stability in another way. This ports the relative entropy minimization Lyapunov result to some more realistic Markov chain models. The only downside is that we’ll assume the populations are “sufficiently large”, but in practice for populations of three types, N=20 is typically enough for common fitness landscapes (there are lots of examples here for N=80, which are prettier than the smaller populations). The reason for this is that the population state (a,b) needs enough “resolution” (a/N, b/N) to get sufficiently close to the stable state, which is not necessarily a ratio of integers. If you allow some wiggle room, smaller populations are still typically pretty close.

Evolutionarily stable states are closely related to Nash equilibria, which have a nice intuitive description in traditional game theory as “states that no player has an incentive to deviate from”. But in evolutionary game theory, we don’t use a game matrix to compute e.g. maximum payoff strategies, rather the game matrix defines a fitness landscape which then determines how natural selection unfolds.

We’re going to see this idea again in a moment, and to help get there let’s introduce a function called an incentive that encodes how a fitness landscape is used for selection. One way is to simply replace the quantities a f_A(a,b) and b f_B(a,b) in the fitness-proportionate selection ratio above, which now becomes (for two population types):

\displaystyle{ \frac{\varphi_A(a,b)}{\varphi_A(a,b) + \varphi_B(a,b)} }

Here \varphi_A(a,b) and \varphi_B(a,b) are the incentive function components that determine how the fitness landscape is used for natural selection (if at all). We have seen two examples above:

\varphi_A(a,b) = a f_A(a, b)

for the Moran process and fitness-proportionate selection, and

\varphi_A(a,b) = a e^{\beta f_A(a, b)}

for an alternative that incorporates a strength-of-selection term \beta, preventing division by zero for fitness landscapes defined by zero-sum game matrices, such as a rock-paper-scissors game. Using an incentive function also simplifies the transition probabilities and results as we move to populations of more than two types. Introducing mutation, we can describe the ratio for incentive-proportionate selection with mutation for the ith population type, when the population is in state x=(a,b,\ldots) / N, as

\displaystyle{ p_i(x) = \frac{\sum_{k=1}^{n}{\varphi_k(x) M_{i k} }}{\sum_{k=1}^{n}{\varphi_k(x)}} }

for some matrix of mutation probabilities M. This is just the probability that we get a new individual of the ith type (by birth and/or mutation). A common choice for the mutation matrix is to use a single mutation probability \mu and spread it out over all the types, such as letting

M_{ij} = \mu / (n-1)


M_{ii} = 1 - \mu
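In code, this mutation matrix and the selection probabilities p_i(x) defined above take only a few lines; the incentive values below are arbitrary placeholders, and for this symmetric choice of M the index order M_{ik} versus M_{ki} makes no difference:

```python
def mutation_matrix(n, mu):
    """M_ii = 1 - mu, with the off-diagonal mass mu spread evenly
    over the other n-1 types."""
    return [[1 - mu if i == j else mu / (n - 1) for j in range(n)]
            for i in range(n)]

def selection_probabilities(phi, M):
    """p_i(x) = sum_k phi_k M_{ki} / sum_k phi_k: the probability that the
    next birth (after possible mutation) is of type i."""
    total = sum(phi)
    return [sum(phi[k] * M[k][i] for k in range(len(phi))) / total
            for i in range(len(phi))]

phi = [3.0, 1.0, 2.0]            # placeholder incentive values phi_k(x)
M = mutation_matrix(3, mu=0.05)
p = selection_probabilities(phi, M)
print(p, sum(p))                 # p is a probability vector
```

The rows of M sum to one, which is what guarantees that p(x) is itself a probability vector.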

Now we are ready to define the expected next state of the population and see how it captures a notion of stability. For a given population state in a multitype population, using x to indicate the normalized state (a,b,\ldots) / N, consider all the neighboring states y that the population could move to in one step of the process (one birth-death cycle). These neighboring states are the result of increasing one population type by one (birth) and decreasing another by one (death, possibly the same type), of course excluding cases on the boundary where the number of individuals of any type would drop below zero or rise above N. Now we can define the expected next state as the sum of neighboring states weighted by the transition probabilities

E(x) = \sum_{y}{y T_x^{y}}

with transition probabilities given by

T_{x}^{y} = p_{i}(x) x_{j}

for states y that differ in 1/N at the ith coordinate and -1/N at jth coordinate from x. Here x_j is just the probability of the random death of an individual of the jth type, so the transition probabilities are still just birth (with mutation) and death as for the Moran process we started with.

Skipping some straightforward algebraic manipulations, we can show that

\displaystyle{ E(x) = \sum_{y}{y T_x^{y}} = \frac{N-1}{N}x + \frac{1}{N}p(x)}

Then it’s easy to see that E(x) = x if and only if x = p(x), and that x = p(x) if and only if x_i = \varphi_i(x). So we have a nice description of ‘stability’ in terms of fixed points of the expected next state function and the incentive function

x = E(x) = p(x) = \varphi(x),

and we’ve gotten back to “no one has an incentive to deviate”. More precisely, for the Moran process

\varphi_i(x) = x_i f_i(x)

and we get back f_i(x) = f_j(x) for every type. So we take x = \varphi(x) as our analogous condition to an evolutionarily stable state, though it’s just the ‘no motion’ part and not also the ‘stable’ part. That’s what we need the stationary distribution for!

To turn this into a useful number that measures stability, we use the relative entropy of the expected next state and the current state, in analogy with the Lyapunov theorem for the replicator equation. The relative entropy

\displaystyle{ D(x, y) = \sum_i x_i \ln(x_i) - x_i \ln(y_i) }

has the really nice property that D(x,y) = 0 if and only if x = y, so we can use the relative entropy D(E(x), x) as a measure of how close to stable any particular state is! Here the expected next state takes the place of the ‘evolutionarily stable state’ in the result described last time for the replicator equation.
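Here is a sketch putting the last two formulas together for the two-type Moran incentive with mutation, using the symmetric game matrix [[1,2],[2,1]] that appears in this post; by symmetry the 'no motion' point sits at x = (1/2, 1/2), where D(E(x), x) vanishes:

```python
import math

def p(x, mu):
    """Incentive-proportionate selection with mutation for two types,
    x = (x_A, x_B), Moran incentive phi_i = x_i * f_i(x)."""
    fA = x[0] + 2 * x[1]             # landscape from game matrix [[1,2],[2,1]]
    fB = 2 * x[0] + x[1]
    phiA, phiB = x[0] * fA, x[1] * fB
    tot = phiA + phiB
    return ((phiA * (1 - mu) + phiB * mu) / tot,
            (phiB * (1 - mu) + phiA * mu) / tot)

def expected_next(x, N, mu):
    """E(x) = ((N-1)/N) x + (1/N) p(x)."""
    px = p(x, mu)
    return tuple((N - 1) / N * xi + pi / N for xi, pi in zip(x, px))

def relative_entropy(x, y):
    """D(x || y) = sum_i x_i ln(x_i / y_i)."""
    return sum(xi * math.log(xi / yi) for xi, yi in zip(x, y) if xi > 0)

N, mu = 80, 1 / 80
for x in [(0.5, 0.5), (0.3, 0.7), (0.1, 0.9)]:
    print(x, relative_entropy(expected_next(x, N, mu), x))
```

D(E(x), x) is zero exactly where x = p(x) and grows as the state moves away from that fixed point, which is the stability measure the paper is built on.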

Finally, we need to show that the maxima (and minima) of the stationary distribution are these fixed points by showing that these states minimize the expected relative entropy.

Seeing that local maxima and minima of the stationary distribution minimize the expected relative entropy is more involved, so let’s just sketch the details. In general, these Markov processes are not reversible, so they don’t satisfy the detailed-balance condition, but the stationary probabilities do satisfy something called the global balance condition, which says that for the stationary distribution s we have

s_x \sum_{y}{T_x^{y}} = \sum_{y}{s_y T_y^{x}}

When the stationary distribution is at a local maximum (or minimum), we can show essentially that this implies (up to an \epsilon, for a large enough population) that

\displaystyle{\sum_{y}{T_x^{y}} = \sum_{y}{T_y^{x}} }

a sort of probability inflow-outflow equation, which is very similar to the condition of complex balanced equilibrium described by Manoj Gopalkrishnan in this Azimuth post. With some algebraic manipulation, we can show that these states have E(x)=x.

Now let’s look again at the figures from the start. The first shows the vector field of the replicator equation:

You can see rest points at the center, on the center of each boundary edge, and on the corner points. The center point is evolutionarily stable, the center points of the boundary are semi-stable (but stable when the population is restricted to a boundary simplex), and the corner points are unstable.

This one shows the stationary distribution for a finite population model with a Fermi incentive on the same landscape, for a population of size 80:

A fixed population size gives a partitioning of the simplex, and each triangle of the partition is colored by the value of the stationary distribution. So you can see that there are local maxima in the center and on the centers of the triangle boundary edges. In this case, the size of the mutation probability determines how much of the stationary distribution is concentrated on the center of the simplex.

This shows one-half of the Euclidean distance squared between the current state and the expected next state:

And finally, this shows the same thing but with the relative entropy as the ‘distance function’:

As you can see, the Euclidean distance is locally minimal at each of the local maxima and minima of the stationary distribution (including the corners); the relative entropy is only guaranteed to be so on the interior states (because the relative entropy doesn’t play nicely with the boundary, and unlike the replicator equation, the Markov process can jump on and off the boundary). It turns out that the relative Rényi entropies for q between 0 and 1 also work just fine, but in the large population limit (the replicator dynamic), the relative entropy is somehow the right choice for the replicator equation (it has the derivative that easily gives Lyapunov stability), which is due to the connections between relative entropy and Fisher information in the information geometry of the simplex. The Euclidean distance is the q=0 case and the ordinary relative entropy is q=1.

As it turns out, something very similar holds for another popular finite population model, the Wright–Fisher process! This model is more complicated, so if you are interested in the details, check out our paper, which has many nice examples and figures. We also define a process that bridges the gap between the atomic nature of the Moran process and the generational nature of the Wright–Fisher process, and prove the general result for that model.

Finally, let’s see how the Moran process relates back to the replicator equation (see also the appendix in this paper), and how we recover the stability theory of the replicator equation. We can use the transition probabilities of the Moran process to define a stochastic differential equation (called a Langevin equation) with drift and diffusion terms that are essentially (for populations with two types):

\mathrm{Drift}(x) = T^{+}(x) - T^{-}(x)

\displaystyle{ \mathrm{Diffusion}(x) = \sqrt{\frac{T^{+}(x) + T^{-}(x)}{N}} }

As the population size gets larger, the diffusion term drops out, and the stochastic differential equation becomes essentially the replicator equation. For the stationary distribution, the variance (e.g. for the binomial example above) also has an inverse dependence on N, so the distribution limits to a delta-function that is zero except for at the evolutionarily stable state!
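As a quick numerical check of that limit, the sketch below integrates dx/dt = T^+(x) - T^-(x), the drift term alone, for the [[1,2],[2,1]] landscape used earlier; with the 1/sqrt(N) diffusion term dropped, trajectories flow to the interior evolutionarily stable state at x = 1/2:

```python
def drift(x):
    """T^+(x) - T^-(x) for the Moran process with fitness landscape
    f_A = x + 2(1-x), f_B = 2x + (1-x)  (game matrix [[1,2],[2,1]])."""
    fA = x + 2 * (1 - x)
    fB = 2 * x + (1 - x)
    mean = x * fA + (1 - x) * fB
    t_plus = (x * fA / mean) * (1 - x)    # A birth, B death
    t_minus = ((1 - x) * fB / mean) * x   # B birth, A death
    return t_plus - t_minus

# Forward-Euler integration of dx/dt = drift(x); in the large-N limit the
# diffusion term, of order 1/sqrt(N), drops out and this ODE is all that is left.
x, dt = 0.1, 0.01
for _ in range(50000):
    x += dt * drift(x)
print(x)   # converges to the evolutionarily stable state 0.5
```

The drift works out to x(1-x)(f_A - f_B) divided by the mean fitness, i.e. the two-type replicator equation up to an overall change of time scale.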

What about the relative entropy? Loosely speaking, as the population size gets larger, the iteration of the expected next state also becomes deterministic. Then the evolutionarily stable state is a fixed point of the expected next state function, and the expected relative entropy is essentially the same as the ordinary relative entropy, at least in a neighborhood of the evolutionarily stable state. This is good enough to establish local stability.

Earlier I said both the local maxima and minima minimize the expected relative entropy. Dash and I haven’t proven that the local maxima always correspond to evolutionarily stable states (and the minima to unstable states). That’s because the generalization of evolutionarily stable state we use is really just a ‘no motion’ condition, and isn’t strong enough to imply stability in a neighborhood for the deterministic replicator equation. So for now we are calling the local maxima stationary stable states.

We’ve also tried a similar approach to populations evolving on networks, which is a popular topic in evolutionary graph theory, and the results are encouraging! But there are many more ‘states’ in such a process, since the configuration of the network has to be taken into account, and whether the population is clustered together or not. See the end of our paper for an interesting example of a population on a cycle.

by John Baez at March 24, 2015 06:00 PM

Symmetrybreaking - Fermilab/SLAC

LHC will not restart this week

Engineers and technicians may need to warm up and recool a section of the accelerator before they can introduce particles.

The Large Hadron Collider will not restart this week, according to a statement from CERN.

Engineers and technicians are investigating an intermittent short circuit to ground in one of the machine’s magnet circuits. They identified the problem during a test run on March 21. It is a well understood issue, but one that could take time to resolve since it is in a cold section of the machine. The repair process may require warming up and re-cooling that part of the accelerator.

“Any cryogenic machine is a time amplifier,” says CERN’s Director for Accelerators, Frédérick Bordry, “so what would have taken hours in a warm machine could end up taking us weeks.”

Current indications suggest a delay of between a few days and several weeks. CERN's press office says a revised schedule will be announced as soon as possible.

The other seven of the machine’s eight sectors have successfully been commissioned to the 2015 operating energy of 6.5 trillion electron-volts per beam.

According to the statement, the impact on LHC operation will be minimal: 2015 is a year for fully understanding the performance of the upgraded machine with a view to full-scale physics running in 2016 through 2018.

“All the signs are good for a great Run II,” says CERN Director General Rolf Heuer. “In the grand scheme of things, a few weeks delay in humankind’s quest to understand our universe is little more than the blink of an eye.”


Like what you see? Sign up for a free subscription to symmetry!


by Kathryn Jepsen at March 24, 2015 05:43 PM

arXiv blog

Spacecraft Traveling Close to Light Speed Should Be Visible with Current Technology, Say Engineers

Relativistic spacecraft must interact with the cosmic microwave background in a way that produces a unique light signature. And that means we should be able to spot any nearby craft, according to a new analysis.

March 24, 2015 03:00 PM

Symmetrybreaking - Fermilab/SLAC

Physics Madness: The Supersymmetric Sixteen

Which physics machine will reign supreme? Your vote decides.

Editor's Note: This round has closed; move on to the Elemental Eight!


March is here, and that means one thing: brackets. We’ve matched up 16 of the coolest pieces of particle physics equipment that help scientists answer big questions about our universe. Your vote will decide this year’s favorite.
The tournament will last four rounds, starting with the Supersymmetric Sixteen today, moving on to the Elemental Eight on March 27, then the Fundamental Four on March 31 and finally the Grand Unified Championship on April 3. The first round’s match-ups are below. You have until midnight PDT on Thursday, March 26, to vote in this round. May the best physics machine win!


by Lauren Biron at March 24, 2015 01:00 PM

March 23, 2015

astrobites - astro-ph reader's digest

The Radial Velocity Method: Current and Future Prospects

To date, we have confirmed more than 1500 extrasolar planets, with over 3300 other planet candidates waiting to be confirmed. These planets have been found with different methods (see Figure 1). The two currently most successful are the transit method and the radial velocity method. The former measures the periodic dimming of a star as an orbiting planet passes in front of it, and tends to find short-period large-radius planets. The latter works like this: as a planet orbits its host star, it tugs the host star, causing the star to move in its own tiny orbit. This wobble motion, which increases with increasing planet mass, can be detected as tiny shifts in the star's spectrum. We just found a planet.
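To get a feel for the sizes involved, here is a quick sketch (mine, not from the report) of the standard radial velocity semi-amplitude formula for a circular, edge-on orbit; Jupiter's tug on the Sun comes out near the textbook value of about 12.5 m/s:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
M_jup = 1.898e27     # Jupiter mass [kg]
year = 3.156e7       # one year [s]

def rv_semi_amplitude(P, m_planet, M_star, sin_i=1.0, e=0.0):
    """K = (2 pi G / P)^(1/3) * m_p sin(i) / (M_star + m_p)^(2/3) / sqrt(1 - e^2):
    the peak line-of-sight velocity of the star, in m/s."""
    return ((2 * math.pi * G / P) ** (1 / 3)
            * m_planet * sin_i
            / (M_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

K = rv_semi_amplitude(11.86 * year, M_jup, M_sun)
print(f"Jupiter's tug on the Sun: K = {K:.1f} m/s")
```

The same formula gives only about 9 cm/s for the Earth, which is why the instrumental stability issues discussed below are so central to the method's future.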

That being said, in our quest to find even more exoplanets, where do we invest our time and money? Do we pick one method over another? Or do we spread our efforts, striving to advance all of them simultaneously? How do we assess how each of them is working; how do we even begin? Here it pays off to take a stand, to make some decisions on how to proceed, to set realistic and achievable goals, to define a path forward that the exoplanet community can agree to follow.


Figure 1: Currently confirmed planets (as of December 2014), showing planetary masses as a function of period. To date, the radial velocity method (red), and the transit method (green), are by far the most successful planet-finding techniques. Other methods include: microlensing, imaging, transit timing variations, and orbital brightness modulation. Figure 42 from the report.

To do this effectively, and to ensure that the US exoplanet community has a plan, NASA’s Exoplanet Exploration Program (ExEP) appoints a so-called Program Analysis Group (ExoPAG). This group is responsible for coordinating community input into the development and execution of NASA’s exoplanetary goals, and serves as a forum to analyze its priorities for future exoplanetary exploration. Most of ExoPAG’s work is conducted in a number of Study Analysis Groups (SAGs). Each group focuses on one specific exoplanet topic, and is headed by some of the leading scientists in the corresponding sub-topic. These topics include: discussing future flagship missions, characterizing exoplanet atmospheres, and analyzing individual detection techniques and their future. A comprehensive list of the different SAGs is maintained here.

One of the SAGs focused their efforts on analyzing the current and future prospects of the radial velocity method. Recently, the group published an analysis report which discusses the current state of affairs of the radial velocity technique, and recommends future steps towards increasing its sensitivity. Today’s astrobite summarizes this report.

The questions this SAG studied can roughly be divided into three categories:

1-2: Radial velocity detection sensitivity is primarily limited by two categories of systematic effects: first, by long-term instrument stability, and second, by astrophysical sources of jitter.

3: Finding planets with the radial velocity technique requires large amounts of observing time. We thus have to account for which telescopes are available, and for how we design effective radial velocity surveys.

We won’t talk much about the last category in this astrobite; instead, let’s dive right into the first two.

Instrumentation Objectives

No instrument is perfect. All instruments have something that ultimately limits their sensitivity. We can make more sensitive measurements with a ruler if we make the tick-marks denser. Make the tick-marks too dense, and we can’t tell them apart. Our sensitivity is limited.

Astronomical instruments that measure radial velocities —called spectrographs— are, too, limited in sensitivity. Their sensitivity is to a large extent controlled by how stable they are over long periods of time. Various environmental factors —such as mechanical vibrations, thermal variations, and pressure changes— cause unwanted shifts in the stellar spectra that can masquerade as a radial velocity signal. Minimize such variations, and work on correcting —or calibrating out— the unwanted signals they cause, and we increase the sensitivity. Not an easy job.

Figure 2: Masses of planets detected with the radial velocity technique, as a function of their discovery year. More planets are being found each year, hand-in-hand with increasing instrument sensitivity. For transiting planets the actual masses are plotted, otherwise the minimum mass is plotted. Figure 43 from the report.

Still, it can be done, and we are getting better at it. Figure 2 shows that we are finding lighter and lighter planets, hand-in-hand with increasing instrument sensitivity: we are able to detect smaller and smaller wobble motions. Current state-of-the-art spectrographs are, in the optical, sensitive down to 1 m/s wobble motions, and only slightly worse (1-3 m/s) in the near infrared. To put things in perspective, the Earth exerts a 9 cm/s wobble on the Sun. Thus, to find true Earth analogs, we need instruments sensitive to a few centimeters per second. The authors of the report note that achieving 10-20 cm/s instrument precision is realistic within a few years — some such instruments are even being developed as we speak. The authors strongly recommend a further push on these next-generation spectrographs, as they support a path towards finding Earth analogs.
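The 9 cm/s figure is easy to reproduce. A minimal sketch, assuming a circular, edge-on orbit (this is the standard radial-velocity semi-amplitude formula; the constants are rounded):

```python
import math

G = 6.674e-11        # gravitational constant, SI units
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
year = 3.156e7       # orbital period of the Earth, s

def semi_amplitude(m_planet, m_star, period):
    """Stellar wobble semi-amplitude K (m/s) for a circular, edge-on orbit:
    K = (2*pi*G/P)**(1/3) * m_p / (m_star + m_p)**(2/3)."""
    return (2 * math.pi * G / period) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)

K = semi_amplitude(M_earth, M_sun, year)   # ~0.09 m/s: the Earth's tug on the Sun
```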

Science Objectives

Having a perfect spectrograph, with perfect precision, would, however, not solve the whole problem. This is due to stellar jitter: the star itself can produce signals that can wrongly be interpreted as planets. Our ultimate sensitivity or precision is constrained by our physical understanding of the stars we observe.

Stellar jitter originates from various sources. The sources have different timescales, ranging from minutes and hours (e.g. granulation), to days and months (e.g. star spots), and even up to years (e.g. magnetic activity cycles). Figure 3 gives a good overview of the main sources of stellar jitter. Many of the sources are understood, and can be mitigated (green boxes), but other signals still pose problems (red boxes), and require more work. The blue boxes are more or less solved. We would like to see more green boxes.

Figure 3: An overview diagram of stellar jitter that affects radial velocity measurements. Note the different timescales. Green boxes denote an understood problem, but the red boxes require significantly more work. Blue boxes are somewhere in between. Figure 44 from the report.


The radial velocity method is one way to discover and characterize exoplanets. In this report, one of NASA’s Study Analysis Groups evaluates the current status of the method. Moreover, with input from the exoplanet community, the group discusses recommendations to move forward, to ensure that this method continues to be a workhorse method for finding and characterizing exoplanets. This will involve efficiently scheduled observatories, and significant investments in technology development (see a great list of current and future spectrographs here), in data analysis, and in our understanding of the astrophysics behind stellar jitter. With these efforts, we take steps towards discovering and characterizing true Earth analogs.

Full Disclosure: My adviser is one of the authors of the SAG white paper report. I chose to cover it here for two reasons: first, to further your insight into this exciting subfield, and second, likewise, my own.

by Gudmundur Stefansson at March 23, 2015 11:37 PM

ZapperZ - Physics and Physicists

This Tennis Act Disproves Physics?!
Since when?!

Why is it that when some "morons" see something that they can't comprehend, they always claim that it violates physics, or can't be explained by physics, as IF they understand physics well enough to make such a judgement? I mean, c'mon!

This is the case here, where a writer claims that Novak Djokovic's ability to stop a ball coming at him with his racket somehow defies physics, turning it all into "a lie".

Look, I know this was probably written in jest, and probably without a second thought, but such stupid journalism should really be called out. There's nothing here that can't be explained by physics. If Djokovic had held the racket with a stiff arm, he would not have been able to stop the ball the way he did. In fact, it would have bounced off the racket. But look at how he stopped it. He moved his arm back to "absorb" the impact, basically allowing the strings to absorb the momentum of the ball. This is called "impulse": the force needed to bring the ball's momentum to zero is spread out over a longer period of time. Thus, the force needed to slow it down is small enough that it doesn't cause the ball to bounce off the strings.
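To put rough numbers on the impulse argument (the ball mass is the regulation ~57 g; the speed and contact times below are my own rough guesses, not measured values):

```python
# F = dp/dt: the same momentum change, spread over different contact times.
m = 0.057            # tennis ball mass, kg
v = 15.0             # incoming ball speed, m/s (rough guess for a soft feed)
dp = m * v           # momentum change needed to bring the ball to rest

F_stiff = dp / 0.005   # stiff racket: ball stopped in ~5 ms
F_soft = dp / 0.100    # arm drawn back: contact stretched to ~100 ms

# Stretching the contact time by a factor of 20 cuts the average force
# by the same factor of 20 -- small enough that the ball doesn't rebound.
```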

In other words, what is observed can easily be explained by physics!

BTW, Martina Navratilova did this same thing a few times while she was an active player. I've witnessed her doing this at least twice during matches. So it is not as if this is anything new. Not only that, although it is less spectacular and easier to do, badminton players do such a thing numerous times as well when they are trying to catch a shuttlecock.


by ZapperZ ( at March 23, 2015 05:04 PM

arXiv blog

Twitter Data Mining Reveals the Origins of Support for Islamic State

Studying the pre-Islamic State tweets of people who end up backing the organization paints a revealing picture of how support emerges, say computer scientists.

Back in May 2014, news emerged that an Egyptian man called Ahmed Al-Darawy had died on the battlefields of Iraq while fighting for the Islamic State of Iraq and the Levant, otherwise known as Islamic State or ISIS.

March 23, 2015 03:05 PM

Tommaso Dorigo - Scientificblogging

Spring Flukes: New 3-Sigma Signals From LHCb And ATLAS
Spring is finally in, and with it the great expectations for a new run of the Large Hadron Collider, which will restart in a month or so with a 62.5% increase in the center-of-mass energy of the proton-proton collisions it produces: from 8 TeV to 13 TeV. At 13 TeV, the production of a 2-TeV Z' boson, say, would not be so terribly rare, making a signal soon visible in the data that ATLAS and CMS are eager to collect.

read more

by Tommaso Dorigo at March 23, 2015 11:41 AM

March 21, 2015

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A day out (and a solar eclipse) at Maynooth University

I had a most enjoyable day on Friday at the mathematical physics department of Maynooth University, or NUI Maynooth, to give it its proper title. I was there to attend an international masterclass in particle physics. This project, a few years old, is a superb science outreach initiative associated with CERN, the European centre for particle physics in Geneva, home of the famous Large Hadron Collider (LHC). If you live on planet Earth, you will probably have heard that a famous particle known as the Higgs boson was recently discovered at the LHC. The idea behind the masterclasses is to give secondary school students the opportunity to “become a particle physicist for a day” by performing measurements on real data from CERN.


The day got off to a great start with a lecture on “Quarks, leptons and the forces of nature” by Dr. Paul Watts, a theoretical physicist at Maynooth. It was an excellent introduction to the world of particle physics, and I was amused by Paul’s cautious answer to a question on the chances of finding supersymmetric particles at the LHC. What the students didn’t know was that Paul studied under the late Bruno Zumino, a world expert on supersymmetry and one of the pioneers of the theory. Paul’s seminar was followed by another lecture, “Particle Physics Experiments and the Little Bang”, an excellent talk on the detection of particles at the LHC by Dr Jonivar Skullerud, another physicist at Maynooth. In between the two lectures, we all trooped outside in the hope of seeing something of the day’s solar eclipse. I was not hopeful, given that the sky was heavily overcast until about 9.30. Lo and behold, the skies cleared in time and we all got a ringside view of the event through glasses supplied by the Maynooth physics department! Now that’s how you impress visitors to the college…

Viewing the eclipse

After lunch we had the workshop proper. Each student was assigned a computer on which software had been installed that allowed them to analyse particle events from the ALICE detector at the LHC (lead ion collisions). Basically, the program allowed the students to measure the momentum and energy of the decay products of particles from the tracks produced in collisions, allowing them to calculate the mass of the parent particle and thus identify it. As so often, I was impressed by how quickly the students got the hang of the program – having missed the introduction thanks to a meeting, I was by far the slowest in the room. We all then submitted our results, only to find a large discrepancy between the total number of particles we detected and the number predicted by theory! We then all returned to the conference room, and uploaded our results to the control room at the LHC. It was fun comparing our data live with other groups around Europe and discussing the results. Much hilarity greeted the fact that many of the other groups got very different results, and the explanations offered for that (though what many groups really wanted to know was whether we got a good look at the eclipse in Ireland).

Uploading our results via a conference call with the control room at the LHC, CERN

All in all, a wonderful way for students to get a glimpse of life in the world of the LHC, to meet active particle physics researchers, and to link up with students from other countries. See here for the day’s program.

by cormac at March 21, 2015 07:27 PM

John Baez - Azimuth

Thermodynamics with Continuous Information Flow

guest post by Blake S. Pollard

Over a century ago James Clerk Maxwell created a thought experiment that has helped shape our understanding of the Second Law of Thermodynamics: the law that says entropy can never decrease.

Maxwell’s proposed experiment was simple. Suppose you had a box filled with an ideal gas at equilibrium at some temperature. You stick in an insulating partition, splitting the box into two halves. These two halves are isolated from one another except for one important caveat: somewhere along the partition resides a being capable of opening and closing a door, allowing gas particles to flow between the two halves. This being is also capable of observing the velocities of individual gas particles. Every time a particularly fast molecule is headed towards the door the being opens it, letting it fly into the other half of the box. When a slow particle heads towards the door the being keeps it closed. After some time, fast molecules would build up on one side of the box, meaning half the box would heat up! To an observer it would seem like the box, originally at a uniform temperature, would for some reason start splitting up into a hot half and a cold half. This seems to violate the Second Law (as well as all our experience with boxes of gas).

Of course, this apparent violation probably has something to do with positing the existence of intelligent microscopic doormen. This being, and the thought experiment itself, are typically referred to as Maxwell’s demon.

Photo credit: Peter MacDonald, Edmonds, UK

When people cook up situations that seem to violate the Second Law there is typically a simple resolution: you have to consider the whole system! In the case of Maxwell’s demon, while the entropy of the box decreases, the entropy of the system as a whole, demon included, goes up. Precisely quantifying how Maxwell’s demon doesn’t violate the Second Law has led people to a better understanding of the role of information in thermodynamics.

At the American Physical Society March Meeting in San Antonio, Texas, I had the pleasure of hearing some great talks on entropy, information, and the Second Law. Jordan Horowitz, a postdoc at Boston University, gave a talk on his work with Massimiliano Esposito, a researcher at the University of Luxembourg, on how one can understand situations like Maxwell’s demon (and a whole lot more) by analyzing the flow of information between subsystems.

Consider a system made up of two parts, X and Y. Each subsystem has a discrete set of states, and each makes transitions among these discrete states. These dynamics can be modeled as Markov processes. Horowitz and Esposito are interested in modeling the thermodynamics of information flow between subsystems. To this end they consider a bipartite system, meaning that either X transitions or Y transitions, never both at the same time. The probability distribution p(x,y) of the whole system evolves according to the master equation:

\displaystyle{ \frac{dp(x,y)}{dt} = \sum_{x', y'} H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}p(x,y) }

where H_{x,x'}^{y,y'} is the rate at which the system transitions from (x',y') \to (x,y). The ‘bipartite’ condition means that H has the form

H_{x,x'}^{y,y'} = \left\{ \begin{array}{cc} H_{x,x'}^y & x \neq x'; y=y' \\   H_x^{y,y'} & x=x'; y \neq y' \\  0 & \text{otherwise.} \end{array} \right.
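As a sanity check on this setup, here is a minimal numerical sketch (my own toy example, not from the paper): a random bipartite rate tensor on a 2×2 state space, evolved with the master equation above, conserves total probability.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny = 2, 2

# Rate "tensor": H[x, xp, y, yp] is the rate for the jump (xp, yp) -> (x, y).
# Bipartite condition: exactly one of the subsystems changes in a transition.
H = np.zeros((nx, nx, ny, ny))
for x in range(nx):
    for xp in range(nx):
        for y in range(ny):
            for yp in range(ny):
                if (x != xp) != (y != yp):   # XOR: only X jumps, or only Y jumps
                    H[x, xp, y, yp] = rng.random()

def master_rhs(p):
    """Right-hand side of the master equation for the joint distribution p."""
    gain = np.einsum('xayb,ab->xy', H, p)   # sum_{x',y'} H p(x',y')
    loss = p * np.einsum('axby->xy', H)     # p(x,y) * total escape rate
    return gain - loss

# Euler-evolve a uniform initial distribution; probability stays normalized.
p = np.full((nx, ny), 1.0 / (nx * ny))
dt = 0.01
for _ in range(1000):
    p = p + dt * master_rhs(p)
```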

The joint system is an open system that satisfies the second law of thermodynamics:

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_{XY}}{dt} + \frac{dS_e}{dt} \geq 0 }

where
\displaystyle{ S_{XY} = - \sum_{x,y} p(x,y) \ln ( p(x,y) ) }

is the Shannon entropy of the system, satisfying

\displaystyle{ \frac{dS_{XY} }{dt} = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}   p(x,y) \right] \ln \left( \frac{p(x',y')}{p(x,y)} \right) }

and
\displaystyle{ \frac{dS_e}{dt}  = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y} p(x,y) \right] \ln \left( \frac{ H_{x,x'}^{y,y'} } {H_{x',x}^{y',y} } \right) }

is the entropy change of the environment.

We want to investigate how the entropy production of the whole system relates to entropy production in the bipartite pieces X and Y. To this end they define a new flow, the information flow, as the time rate of change of the mutual information

\displaystyle{ I = \sum_{x,y} p(x,y) \ln \left( \frac{p(x,y)}{p(x)p(y)} \right) }
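For a concrete feel for this quantity, here is a quick computation of I for a small made-up joint distribution (my own numbers, chosen so the subsystems are positively correlated and I > 0):

```python
import numpy as np

# A made-up 2x2 joint distribution p(x, y) with correlated subsystems.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
px = p.sum(axis=1)   # marginal p(x)
py = p.sum(axis=0)   # marginal p(y)

# I = sum_{x,y} p(x,y) ln[ p(x,y) / (p(x) p(y)) ]
I = np.sum(p * np.log(p / np.outer(px, py)))
# I is about 0.19 nats here; it would be 0 if p(x,y) = p(x)p(y).
```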

Its time derivative can be split up as

\displaystyle{ \frac{dI}{dt} = \frac{dI^X}{dt} + \frac{dI^Y}{dt}}

where
\displaystyle{ \frac{dI^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^{y} p(x',y) - H_{x',x}^{y}p(x,y) \right] \ln \left( \frac{ p(y|x) }{p(y|x')} \right) }

and
\displaystyle{ \frac{dI^Y}{dt} = \sum_{x,y} \left[ H_{x}^{y,y'}p(x,y') - H_{x}^{y',y}p(x,y) \right] \ln \left( \frac{p(x|y)}{p(x|y')} \right) }

are the information flows associated with the subsystems X and Y respectively.
When

\displaystyle{ \frac{dI^X}{dt} > 0, }

a transition in X increases the mutual information I, meaning that X ‘knows’ more about Y and vice versa.

We can rewrite the entropy production entering into the second law in terms of these information flows as

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_i^X}{dt} + \frac{dS_i^Y}{dt} }

where
\displaystyle{ \frac{dS_i^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^y p(x',y) - H_{x',x}^y p(x,y) \right] \ln \left( \frac{H_{x,x'}^y p(x',y) } {H_{x',x}^y p(x,y) } \right) \geq 0 }

and similarly for \frac{dS_i^Y}{dt} . This gives the following decomposition of entropy production in each subsystem:

\displaystyle{ \frac{dS_i^X}{dt} = \frac{dS^X}{dt} + \frac{dS^X_e}{dt} - \frac{dI^X}{dt} \geq 0 }

\displaystyle{ \frac{dS_i^Y}{dt} = \frac{dS^Y}{dt} + \frac{dS^Y_e}{dt} - \frac{dI^Y}{dt} \geq 0},

where the inequalities hold for each subsystem. To see this, if you write out the left hand side of each inequality you will find that they are both sums of terms of the form

\displaystyle{ \left[ a-b \right] \ln \left( \frac{a}{b} \right) }

which is non-negative for a,b \geq 0.
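A quick numerical spot-check of that claim (each term is non-negative because the difference and the logarithm always share a sign):

```python
import numpy as np

rng = np.random.default_rng(0)
worst = np.inf
for _ in range(1000):
    a = rng.random(10) + 1e-9   # arbitrary positive numbers
    b = rng.random(10) + 1e-9
    worst = min(worst, np.sum((a - b) * np.log(a / b)))
# worst stays >= 0: every random trial respects the inequality.
```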

The interaction between the subsystems is contained entirely in the information flow terms. Neglecting these terms gives rise to situations like Maxwell’s demon where a subsystem seems to violate the second law.

Lots of Markov processes have boring equilibria \frac{dp}{dt} = 0 where there is no net flow among the states. Markov processes also admit non-equilibrium steady states, where there may be some constant flow of information. In this steady state all explicit time derivatives are zero, including the net information flow:

\displaystyle{ \frac{dI}{dt} = 0 }

which implies that \frac{dI^X}{dt} = - \frac{dI^Y}{dt}. In this situation the above inequalities become

\displaystyle{ \frac{dS^X_i}{dt} = \frac{dS_e^X}{dt} - \frac{dI^X}{dt} \geq 0 }

and
\displaystyle{ \frac{dS^Y_i}{dt} = \frac{dS_e^Y}{dt} + \frac{dI^X}{dt} \geq 0 }.

If
\displaystyle{ \frac{dI^X}{dt} > 0 }

then X is learning something about Y or acting as a sensor. The first inequality

\frac{dS_e^X}{dt} \geq \frac{dI^X}{dt} quantifies the minimum amount of energy X must supply to do this sensing. Similarly -\frac{dS_e^Y}{dt} \leq \frac{dI^X}{dt} bounds the amount of useful energy available to Y as a result of this information transfer.

In their paper Horowitz and Esposito explore a few other examples and show the utility of this simple breakup of a system into two interacting subsystems in explaining various interesting situations in which the flow of information has thermodynamic significance.

For the whole story, read their paper!

• Jordan Horowitz and Massimiliano Esposito, Thermodynamics with continuous information flow, Phys. Rev. X 4 (2014), 031015.

by John Baez at March 21, 2015 01:00 AM

Geraint Lewis - Cosmic Horizons

Moving Charges and Magnetic Fields
Still struggling with grant writing season, so another post which has resulted in my random musings about the Universe (which actually happens quite a lot).

In second semester, I am teaching electricity and magnetism to our First Year Advanced Class. I really enjoy teaching this class as the kids are on the ball and can ask some deep and meaningful questions.

But the course is not ideal. Why? Because we teach from a textbook, and the problem is that virtually all modern textbooks are almost the same. Science is trotted out in an almost historical progression. But it does not have to be taught that way.

In fact, it would be great if we could start with Hamiltonian and Lagrangian mechanics, and derive physics from the top down. We're told that it's mathematically too challenging, but it really isn't. In fact, I would start with a book like The Theoretical Minimum, not some multicoloured compendium of physics.

We have to work with what we have!

One of the key concepts that we have to get across is that electricity and magnetism are not really two separate things, but are actually two sides of the same coin. And, in the world of classical physics, it was the outstanding work of James Clerk Maxwell that provided the mathematical framework that brought them together. Maxwell gave us his famous equations that underpin electromagnetism.
Again, being the advanced class, we can go beyond this and look at the work that came after Maxwell, and that was the work of Albert Einstein, especially his Special Theory of Relativity.

The wonderful thing about special relativity is that the mix of electric and magnetic fields depends upon the motion of an observer. One person sees a particular configuration of electric and magnetic fields, and another observer, moving relative to the first, will see a different mix of electric and magnetic fields.

This is nice to say, but what does it actually mean? Can we do anything with it to help understand electricity and magnetism a little more? I think so.

In this course (and EM courses in general) we spend a lot of time calculating the electric field of a static charge distribution. For this, we use the rather marvellous Gauss's law, which relates the electric field distribution to the underlying charges.
I've written about this wonderful law before, and showed how you can use symmetries (i.e. nice simple shapes like spheres, boxes and cylinders) to calculate the electric field.

Then we come to the sources of magnetic field. And things, well, get messy. There are some rules we can use, but it's, well, as I said, messy.

We know that magnetic fields are due to moving charges, but what's the magnetic field of a lonely little charge moving on its own? Looks something like this
Where does this come from? And how do you calculate it? Is there an easier way?

And the answer is yes! The kids have done a touch of special relativity at high school and (without really knowing it in detail) have seen the Lorentz transformations. Now, introductory lessons on special relativity often harp on about swimming back and forth across rivers, or something like that, and have a merry dance before getting to the point. And the transforms are presented as a way to map coordinates from one observer to another, but they are much more powerful than that.

You can use them to transform vectors from one observer's viewpoint to another, including electric and magnetic fields. And the transformations are simple algebra.
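For reference, the transformations in question (for an observer moving with speed v along the x-axis, in SI units; these are the standard field-transformation formulas, restored here since the original equation image is missing) are:

```latex
E'_x = E_x, \qquad E'_y = \gamma\,(E_y - v B_z), \qquad E'_z = \gamma\,(E_z + v B_y)

B'_x = B_x, \qquad B'_y = \gamma\left(B_y + \tfrac{v}{c^2} E_z\right), \qquad
B'_z = \gamma\left(B_z - \tfrac{v}{c^2} E_y\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```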

where we also have the famous Lorentz factor. So, what does this set of equations tell us? Well, if we have an observer who sees a particular electric field (Ex,Ey,Ez), and magnetic field (Bx,By,Bz), then an observer moving with a velocity v (in the x-direction) with see the electric and magnetic fields with the primed components.

Now, we know what the electric field of an isolated charge at rest is. We can use Gauss's law, and it tells us that the field is spherically symmetrical and looks like this
The field drops off in strength with the square of the distance. What would be the electric and magnetic fields if this charge was trundling past us at a velocity v? Easy, we just use the Lorentz transforms to tell us. We know exactly what the electric field looks like of the charge at rest, and we know that, at rest, there is no magnetic field.

Being as lazy as I am, I didn't want to calculate anything by hand, so I chucked it into MATLAB, a mathematical environment that many students have access to. I'm not going to be an apologist for MATLAB's default graphics style (which I think sucks - but there are, with a bit of work, solutions).
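For those without MATLAB, here is a minimal Python sketch of the same calculation (my own code, not the author's): boost the Coulomb field of a charge at rest and watch a magnetic field appear.

```python
import numpy as np

c = 299792458.0   # speed of light, m/s

def boost_fields(E, B, v):
    """Fields seen by an observer moving at velocity +v along x,
    relative to the frame in which (E, B) are measured."""
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = np.array([Ex, g * (Ey - v * Bz), g * (Ez + v * By)])
    Bp = np.array([Bx, g * (By + v * Ez / c**2), g * (Bz - v * Ey / c**2)])
    return Ep, Bp

# Coulomb field of a 1 microcoulomb charge at rest, sampled at one point.
k, q = 8.99e9, 1.0e-6                   # Coulomb constant (SI), charge
r = np.array([0.3, 0.4, 0.5])           # field point, m
E = k * q * r / np.linalg.norm(r) ** 3  # E = k q r / |r|^3
B = np.zeros(3)                         # a static charge has no magnetic field

# An observer cruising past at 10% of light speed sees a magnetic field:
Ep, Bp = boost_fields(E, B, 0.1 * c)
# In that frame the charge moves at u = -0.1c x-hat, and the boosted
# fields satisfy B' = (u x E') / c^2, the field of a moving point charge.
```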

Anyway, here's a charge at rest. The blue arrows are the electric field. No magnetic field, remember!
So, top left is a view along the x-axis, then y, then z, then a 3-D view. Cool!

Now, what does this charge look like if it is moving relative to me? Throw it into the Lorentz transforms, and voila!

MAGNETIC FIELDS!!! The charge is moving along the x-axis with respect to me, and when we look along x we can see that the magnetic fields wrap around the direction of motion (remember your right hand grip rule kids!).

That was for a velocity of 10% the speed of light. Let's whack it up to 99.999%
The electric field gets distorted also!

Students also use Gauss's law to calculate the electric field of an infinitely long line of charge. Now the strength of the field drops off as the inverse of the distance from the line of charge.

Now, let's consider an observer moving at a velocity relative to the line of charge.
Excellent! Similar to what we saw before, and what we would expect. The magnetic field curls around the moving line of charge (which, of course, is simply an electric current).

Didn't we know that, you say? Yes, but I think this is more powerful, not only to reveal the relativistic relationship between the electric and magnetic fields, but also once you have written the few lines of algebraic code in MATLAB (or python or whatever the kids are using these days) you can ask about more complicated situations. You can play with physics (which, IMHO, is how you really understand it).

So, to round off, what's the magnetic field of an infinite line of charge moving perpendicular to its length with respect to you? I am sure you could, with a bit of work, calculate it with the usual mathematical approaches, but let's just take a look.

Here's at rest
A bit like further up, but now pointing along a different axis.

Before we add velocity, you physicists and budding physicists make a prediction! Here goes! A tenth the velocity of light and we get
I dunno if we were expecting that! Remember, top left is looking along the x-axis, along the direction of motion. So we have created some magnetic structure. Just not the simple structure we normally see!

And now at 99.99% we get
And, of course, I could play with lots of other geometries, like what happens if you move a ring of charge etc. But let's not get too excited, and come back to that another day.

by Cusp ( at March 21, 2015 12:11 AM

March 20, 2015

Lubos Motl - string vacua and pheno

LHC insists on a near-discovery muon penguin excess
None of the seemingly strong anomalies reported by the LHCb collaboration has been recognized as a survivor, but many people believe that similar events are not being overlooked by TRF and rely on this blog as a source, so I must give you a short report about a new bold announcement by LHCb.
20 March 2015: \(B^0\to K^*\mu^+\mu^-\): new analysis confirms old puzzle (LHCb CERN website)
In July 2013, TRF readers were told about the 3.7 sigma excess in these muon decays of B-mesons.

The complete 2011-2012 data, which was just 3 inverse femtobarns because we talk about LHCb (perhaps I should remind you that it is a "cheaper" LHC detector that focuses on bottom quarks and therefore on CP-violation and flavor violation), have been analyzed. The absolute strength of the signal has decreased but so did the noise so the significance level remained at 3.7 sigma!

The Quanta Magazine quickly wrote a story with an optimistic title
‘Penguin’ Anomaly Hints at Missing Particles
where a picture of a penguin seems to play a central role.

Why are we talking about these Antarctic birds here? It's because they are actually Feynman diagrams.

The Standard Model calculates the probability of the decay of the B-mesons to the muon pairs via a one-loop diagram – which is just the skeleton of the picture above – and this diagram has been called "penguin" by particle physicists who didn't see that it was really a female with big breasts and a very thin waistline.

But there may have been more legitimate reasons for the "penguin" terminology – for example, because it sounds more concise than a "Dolly Buster diagram", for example. ;-)

The point is that there are particular particles running in the internal lines of the diagram according to the Standard Model and an excess of these decays would probably be generated by a diagram of the same "penguin" topology but with new particle species used for the internal lines. Those hypothetical beasts are indicated by the question marks on the penguin picture.

Adam Falkowski at Resonaances adds some skeptical words about this deviation. He thinks that what the Standard Model predicts is highly uncertain, so there is no good reason to conclude that it must be new physics, even though he thinks it has become very unlikely that it's just noise.

Perhaps more interestingly, the Quanta Magazine got an answer from Nima who talked about his heart broken by the LHCb numerous times in the past.

Various papers have proposed partially satisfactory models attempting to explain the anomaly. For example, two months ago, I described a two-Higgs model with a gauged lepton-mu-minus-tau number which claims to explain this anomaly along with two others.

Gordon Kane discussed muon decays of B-mesons in his guest blog in late 2012, before similar anomalies became widely discussed by the experimenters, and he sketched his superstring explanation for these observations.

LHCb is a role model for an experiment that "may see an anomaly" but "doesn't really tell us who the culprit is" – the same unsatisfactory semi-answer that you may get from high-precision colliders etc. That's why brute force and high energy – along with omnipotent detectors such as ATLAS and CMS – seem to be so clearly superior in the end. The LHCb is unlikely to make us certain that it is seeing something new – even if it surpasses 5 sigma – because even if it does see something, it doesn't tell us sufficiently many details for the "story about the new discovery" to make sense.

But it's plausible that these observations will be very useful when a picture of new physics starts to emerge thanks to the major experiments...

The acronym LHCb appears in 27 TRF blog posts.

by Luboš Motl ( at March 20, 2015 05:12 PM

astrobites - astro-ph reader's digest

Wreaking Havoc with a Stellar Fly-By


Illustration of the RW Aurigae system containing two stars (dubbed A and B) and a disk around Star A. The schematic labels the angles needed to define the system’s geometry and angular momenta. Fig. 1 from the paper.

Physical models in astronomy are generally as simple as possible. On the one hand, you don’t want to oversimplify reality. On the other hand, you don’t want to throw in more parameters than could ever be constrained from observations. But some cases deviate just enough from a basic textbook case to be really interesting, like the subject of today’s paper: a pair of stars called RW Aurigae that features a long arm of gas and dust wrapped around one star.

You can’t model RW Aurigae as a single star with a disk of material around it, because there is a second star. And you can’t model it as a regular old binary system either, because there are interactions between the stars and the asymmetric circumstellar disk. The authors of today’s paper create a comprehensive smoothed particle hydrodynamics (SPH) model that considers many different observations of RW Aurigae. They consider the system’s shape, size, motion, composition, and geometry, and they conduct simulated observations of the model for comparison with real observations.

A tidal encounter


Simulated particles in motion in RW Aurigae. This view of the smoothed particle hydrodynamics model has each particle color-coded by its velocity toward or away from the observer (the color bar is in km/s). Star A is the one in front with a large tidally disrupted disk. Fig. 2 from the paper.

The best-fitting model of RW Aurigae matches many aspects of the system as observed today, including particle motions. Because the model is like a movie you can play backward or forward in time, the authors are able to show that the current long arm of gas most likely came from a tidal disruption. This was long suspected to be the case, based on appearance alone, but this paper’s detailed model shows that the physics checks out with our intuition.

What exactly is a tidal disruption? Well, in this case, over the last 600 years or so (a remarkably short time in astronomy!), star B passed close enough to the disk around star A to tear it apart with gravity. Because some parts of the disk were significantly closer to star B than others, they felt different amounts of gravitational force. As time went on, this changed the orbits of individual particles in the disk and caused the overall shape to change. This is the same effect that creates tides on Earth: the side of the Earth facing the Moon is closer to the Moon than the opposite side, so the oceans on the near side feel a stronger pull than the water on the far side. The figure above shows present-day motions of simulated particles in RW Aurigae that resulted from the tidal encounter. The figure below shows snapshots from a movie of the hydrodynamic model, from about 600 years in the past to today. Instead of representing motion, the brighter regions represent more particles (higher density).


Snapshots of the RW Aurigae model over a 600-year time span as viewed from Earth. From top left to bottom right, the times are -589, -398, -207, and 0 years from today, respectively. Brighter colors indicate higher density. There is a stream of material linking stars A and B in the bottom right panel, but it is not visible here due to the density contrast chosen. Fig. 4 from the paper.
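The differential pull behind a tidal disruption is easy to put into numbers: the tidal acceleration across an object of size d at distance r from a mass M scales as 2GMd/r^3. A quick sketch for the familiar Earth-Moon case described above (standard constants; illustrative only, not taken from the paper):

```python
# Tidal (differential) acceleration: the difference in the Moon's pull
# between the near and far sides of the Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.35e22     # mass of the Moon, kg
r = 3.84e8           # Earth-Moon distance, m
R_earth = 6.371e6    # Earth radius, m

def accel(dist):
    """Gravitational acceleration toward the Moon at a given distance."""
    return G * M_moon / dist**2

# Exact difference between the near side and the far side
tidal_exact = accel(r - R_earth) - accel(r + R_earth)

# Leading-order approximation: 2 G M d / r^3, with d = 2 R_earth
tidal_approx = 2 * G * M_moon * (2 * R_earth) / r**3

print(tidal_exact, tidal_approx)  # both ~2e-6 m/s^2, agreeing to within a few percent
```

The same scaling explains why star B's close passage mattered so much: the 1/r^3 dependence makes the differential force across the disk grow steeply as the separation shrinks.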

Simulating observations

Observational astronomers are always constrained to viewing the cosmos from one angle. We don’t get to choose how systems like RW Aurigae are oriented on the sky. But models let us change our viewing angle and better understand the full 3D physical picture. The figure below shows the disk around star A if we could view it from above in the present. As before, brighter areas have more particles. Simulated observations, such as measuring the size of the disk in the figure below, agree well with actual observations of RW Aurigae.


Top-down view of the present-day disk in star A from the RW Aurigae model. The size of the model disk agrees with estimates from observations, and the disk has clearly become eccentric after its tidal encounter with star B. Fig. 9 from the paper.

The final mystery the authors of today’s paper explore is a dimming that happened during observations of RW Aurigae in 2010/2011. The model suggests this dimming was likely caused by the stream of material between stars A and B passing in front of star A from our line of sight. However, since the disk and related material are clumpy and still changing shape, they make no predictions about specific future dimming events. Interestingly, another recent astro-ph paper by Petrov et al. reports another deep dimming in 2014. They suggest it may arise from dust grains close to star A being “stirred up” by strong stellar winds and moving into our line of sight.

Combining models and observations as today’s paper does is an incredibly useful technique for learning how structures of all sizes form in the Universe. Tides affect everything from Earth, to stars, to galaxies. This is one of the first cases we’ve seen of a protoplanetary disk having a tidal encounter. The Universe is a messy place, and understanding dynamic interactions like RW Aurigae’s is an important step toward a clearer picture of how stars, planets, and galaxies form and evolve.

by Meredith Rawls at March 20, 2015 04:24 PM

Sean Carroll - Preposterous Universe

Guest Post: Don Page on God and Cosmology

Don Page is one of the world’s leading experts on theoretical gravitational physics and cosmology, as well as a previous guest-blogger around these parts. (There are more world experts in theoretical physics than there are people who have guest-blogged for me, so the latter category is arguably a greater honor.) He is also, somewhat unusually among cosmologists, an Evangelical Christian, and interested in the relationship between cosmology and religious belief.

Longtime readers may have noticed that I’m not very religious myself. But I’m always willing to engage with people with whom I disagree, if the conversation is substantive and proceeds in good faith. I may disagree with Don, but I’m always interested in what he has to say.

Recently Don watched the debate I had with William Lane Craig on “God and Cosmology.” I think these remarks from a devoted Christian who understands the cosmology very well will be of interest to people on either side of the debate.

Open letter to Sean Carroll and William Lane Craig:

I just ran across your debate at the 2014 Greer-Heard Forum, and greatly enjoyed listening to it. Since my own views are often a combination of one or the other of yours (though they also often differ from both of yours), I thought I would give some comments.

I tend to be skeptical of philosophical arguments for the existence of God, since I do not believe there are any that start with assumptions universally accepted. My own attempt at what I call the Optimal Argument for God (one, two, three, four), certainly makes assumptions that only a small fraction of people, and perhaps even only a small fraction of theists, believe in, such as my assumption that the world is the best possible. You know that well, Sean, from my provocative seminar at Caltech in November on “Cosmological Ontology and Epistemology” that included this argument at the end.

I mainly think philosophical arguments might be useful for motivating someone to think about theism in a new way and perhaps raise the prior probability someone might assign to theism. I do think that if one assigns theism not too low a prior probability, the historical evidence for the life, teachings, death, and resurrection of Jesus can lead to a posterior probability for theism (and for Jesus being the Son of God) being quite high. But if one thinks a priori that theism is extremely improbable, then the historical evidence for the Resurrection would be discounted and not lead to a high posterior probability for theism.

I tend to favor a Bayesian approach in which one assigns prior probabilities based on simplicity and then weights these by the likelihoods (the probabilities that different theories assign to our observations) to get, when the product is normalized by dividing by the sum of the products for all theories, the posterior probabilities for the theories. Of course, this is an idealized approach, since we don’t yet have _any_ plausible complete theory for the universe to calculate the conditional probability, given the theory, of any realistic observation.
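The normalization Don describes, multiplying priors by likelihoods and dividing by the sum of the products over all theories, can be made concrete with a toy sketch (all numbers are invented purely for illustration):

```python
# Toy Bayesian update over a handful of candidate "theories".
# Priors are weighted toward simpler theories; likelihoods are the
# probabilities each theory assigns to the observation. Made-up numbers.
priors = {"A": 0.6, "B": 0.3, "C": 0.1}            # simplicity-weighted priors
likelihoods = {"A": 0.01, "B": 0.20, "C": 0.50}    # P(observation | theory)

# Posterior = prior * likelihood, normalized by the sum over all theories
products = {t: priors[t] * likelihoods[t] for t in priors}
norm = sum(products.values())
posteriors = {t: p / norm for t, p in products.items()}

print(posteriors)  # a theory with a modest prior but high likelihood can dominate
```

Note how theory A, despite the highest prior, ends up with the lowest posterior because it assigns our observation such a low probability, which mirrors the point about maximally simple theories being ruled out by what we actually observe.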

For me, when I consider evidence from cosmology and physics, I find it remarkable that it seems consistent with all we know that the ultimate theory might be extremely simple and yet lead to sentient experiences such as ours. A Bayesian analysis with Occam’s razor to assign simpler theories higher prior probabilities would favor simpler theories, but the observations we do make preclude the simplest possible theories (such as the theory that nothing concrete exists, or the theory that all logically possible sentient experiences occur with equal probability, which would presumably make ours have zero probability in this theory if there are indeed an infinite number of logically possible sentient experiences). So it seems mysterious why the best theory of the universe (which we don’t have yet) may be extremely simple but yet not maximally simple. I don’t see that naturalism would explain this, though it could well accept it as a brute fact.

One might think that adding the hypothesis that the world (all that exists) includes God would make the theory for the entire world more complex, but it is not obvious that is the case, since it might be that God is even simpler than the universe, so that one would get a simpler explanation starting with God than starting with just the universe. But I agree with your point, Sean, that theism is not very well defined, since for a complete theory of a world that includes God, one would need to specify the nature of God.

For example, I have postulated that God loves mathematical elegance, as well as loving to create sentient beings, so something like this might explain both why the laws of physics, and the quantum state of the universe, and the rules for getting from those to the probabilities of observations, seem much simpler than they might have been, and why there are sentient experiences with a rather high degree of order. However, I admit there is a lot of logically possible variation on what God’s nature could be, so that it seems to me that at least we humans have to take that nature as a brute fact, analogous to the way naturalists would have to take the laws of physics and other aspects of the natural universe as brute facts. I don’t think either theism or naturalism solves this problem, so it seems to me rather a matter of faith which makes more progress toward solving it. That is, theism per se cannot deduce from purely a priori reasoning the full nature of God (e.g., when would He prefer to maintain elegant laws of physics, and when would He prefer to cure someone from cancer in a truly miraculous way that changes the laws of physics), and naturalism per se cannot deduce from purely a priori reasoning the full nature of the universe (e.g., what are the dynamical laws of physics, what are the boundary conditions, what are the rules for getting probabilities, etc.).

In view of these beliefs of mine, I am not convinced that most philosophical arguments for the existence of God are very persuasive. In particular, I am highly skeptical of the Kalam Cosmological Argument, which I shall quote here from one of your slides, Bill:

  1. If the universe began to exist, then there is a transcendent cause
    which brought the universe into existence.
  2. The universe began to exist.
  3. Therefore, there is a transcendent cause which brought the
    universe into existence.

I do not believe that the first premise is metaphysically necessary, and I am also not at all sure that our universe had a beginning. (I do believe that the first premise is true in the actual world, since I do believe that God exists as a transcendent cause which brought the universe into existence, but I do not see that this premise is true in all logically possible worlds.)

I agree with you, Sean, that we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects (or occur at the same time, if Bill insists). But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future. I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause, since the effective unidirectional causation we commonly experience is something just within the universe and need not be extrapolated to a putative cause for the universe as a whole.

However, since to me the totality of data, including the historical evidence for the Resurrection of Jesus, is most simply explained by postulating that there is a God who is the Creator of the universe, I do believe by faith that God is indeed the cause of the universe (and indeed the ultimate Cause and Determiner of everything concrete, that is, everything not logically necessary, other than Himself—and I do believe, like Richard Swinburne, that God is concrete and not logically necessary, the ultimate brute fact). I have a hunch that God created a universe with apparent unidirectional causation in order to give His creatures some dim picture of the true causation that He has in relation to the universe He has created. But I do not see any metaphysical necessity in this.

(I have a similar hunch that God created us with the illusion of libertarian free will as a picture of the true freedom that He has, though it might be that if God does only what is best and if there is a unique best, one could object that even God does not have libertarian free will, but in any case I would believe that it would be better for God to do what is best than to have any putative libertarian free will, for which I see little value. Yet another hunch I have is that it is actually sentient experiences rather than created individual “persons” that are fundamental, but God created our experiences to include beliefs that we are individual persons to give us a dim image of Him as the one true Person, or Persons in the Trinitarian perspective. However, this would take us too far afield from my points here.)

On the issue of whether our universe had a beginning, besides not believing that this is at all relevant to the issue of whether or not God exists, I agreed almost entirely with Sean’s points rather than yours, Bill, on this issue. We simply do not know whether or not our universe had a beginning, but there are certainly models, such as Sean’s with Jennifer Chen (hep-th/0410270 and gr-qc/0505037), that do not have a beginning. I myself have also favored a bounce model in which there is something like a quantum superposition of semiclassical spacetimes (though I don’t really think quantum theory gives probabilities for histories, just for sentient experiences), in most of which the universe contracts from past infinite time and then has a bounce to expand forever. In as much as these spacetimes are approximately classical throughout, there is a time in each that goes from minus infinity to plus infinity.

In this model, as in Sean’s, the coarse-grained entropy has a minimum at or near the time when the spatial volume is minimized (at the bounce), so that entropy increases in both directions away from the bounce. At times well away from the bounce, there is a strong arrow of time, so that in those regions if one defines the direction of time as the direction in which entropy increases, it is rather as if there are two expanding universes both coming out from the bounce. But it is erroneous to say that the bounce is a true beginning of time, since the structure of spacetime there (at least if there is an approximately classical spacetime there) has timelike curves going from a proper time of minus infinity through the bounce (say at proper time zero) and then to proper time of plus infinity. That is, there are worldlines that go through the bounce and have no beginning there, so it seems rather artificial to say the universe began at the bounce that is in the middle just because it happens to be when the entropy is minimized. I think Sean made this point very well in the debate.

In other words, in this model there is a time coordinate t on the spacetime (say the proper time t of a suitable collection of worldlines, such as timelike geodesics that are orthogonal to the extremal hypersurface of minimal spatial volume at the bounce, where one sets t = 0) that goes from minus infinity to plus infinity with no beginning (and no end). Well away from the bounce, there is a different thermodynamic time t' (increasing with increasing entropy) that for t >> 0 increases with t but for t << 0 decreases with t (so there t' becomes more positive as t becomes more negative). For example, if one said that t' is only defined for |t| > 1, say, one might have something like

t' = (t^2 - 1)^{1/2},

the positive square root of one less than the square of t. This thermodynamic time t' only has real values when the absolute value of the coordinate time t, that is, |t|, is no smaller than 1, and then t' increases with |t|.

One might say that t' begins (at t' = 0) at t = -1 (for one universe that has t' growing as t decreases from -1 to minus infinity) and at t = +1 (for another universe that has t' growing as t increases from +1 to plus infinity). But since the spacetime exists for all real t, with respect to that time arising from general relativity there is no beginning and no end of this universe.

Bill, I think you also objected to a model like this by saying that it violates the second law (presumably in the sense that the coarse-grained entropy does not increase monotonically with t for all real t). But if we exist for t >> 1 (or for t << -1; there would be no change to the overall behavior if t were replaced with -t, since the laws are CPT invariant), then we would be in a region where the second law is observed to hold, with coarse-grained entropy increasing with t' \sim t (or with t' \sim -t if t << -1). A viable bounce model would have it so that it would be very difficult or impossible for us directly to observe the bounce region where the second law does not apply, so our observations would be in accord with the second law even though it does not apply for the entire universe.
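The behavior of this thermodynamic time is easy to verify numerically: t' is real only for |t| ≥ 1, vanishes at t = ±1, and grows in both directions away from the bounce. A quick check of the formula above:

```python
import math

def t_prime(t):
    """Thermodynamic time t' = sqrt(t^2 - 1), defined only for |t| >= 1."""
    if abs(t) < 1:
        return None  # no thermodynamic arrow near the bounce
    return math.sqrt(t**2 - 1)

# t' increases in both directions away from the bounce at t = 0:
assert t_prime(2) < t_prime(3)        # future branch: t' grows with t
assert t_prime(-2) < t_prime(-3)      # past branch: t' grows as t decreases
assert t_prime(0.5) is None           # undefined near the bounce
assert t_prime(1) == 0.0 and t_prime(-1) == 0.0  # t' begins at both t = +1 and t = -1
```

This makes concrete the point that, measured by t', there look to be two universes each "beginning" at t' = 0, while measured by the coordinate time t the spacetime simply runs from minus infinity to plus infinity.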

I think I objected to both of your probability estimates for various things regarding fine tuning. Probabilities depend on the theory or model, so without a definite model, one cannot claim that the probability for some feature like fine tuning is small. It was correct to list me among the people believing in fine tuning in the sense that I do believe that there are parameters that naively are far different from what one might expect (such as the cosmological constant), but I agreed with the sentiment of the woman questioner that there are not really probabilities in the absence of a model.

Bill, you referred to using some “non-standard” probabilities, as if there is just one standard. But there isn’t. As Sean noted, there are models giving high probabilities for Boltzmann brain observations (which I think count strongly against such models) and other models giving low probabilities for them (which on this regard fits our ordered observations statistically). We don’t yet know the best model for avoiding Boltzmann brain domination (and, Sean, you know that I am skeptical of your recent ingenious model), though just because I am skeptical of this particular model does not imply that I believe that the problem is insoluble or gives evidence against a multiverse; in any case it seems also to be a problem that needs to be dealt with even in just single-universe models.

Sean, at one point you referred to some naive estimate of the very low probability of the flatness of the universe, but then you said that we now know the probability of flatness is very near unity. This is indeed true, as Stephen Hawking and I showed long ago (“How Probable Is Inflation?” Nuclear Physics B298, 789-809, 1988) when we used the canonical measure for classical universes, but one could get other probabilities by using other measures from other models.

In summary, I think the evidence from fine tuning is ambiguous, since the probabilities depend on the models. Whether or not the universe had a beginning also is ambiguous, and furthermore I don’t see that it has any relevance to the question of whether or not God exists, since the first premise of the Kalam cosmological argument is highly dubious metaphysically, depending on contingent intuitions we have developed from living in a universe with relatively simple laws of physics and with a strong thermodynamic arrow of time.

Nevertheless, in view of all the evidence, including the elegance of the laws of physics, the existence of orderly sentient experiences, and the historical evidence, I do believe that God exists and think the world is actually simpler if it contains God than it would have been without God. So I do not agree with you, Sean, that naturalism is simpler than theism, though I can appreciate how you might view it that way.

Best wishes,


by Sean Carroll at March 20, 2015 03:17 PM

Symmetrybreaking - Fermilab/SLAC

The LHC does a dry run

Engineers have started the last step required before sending protons all the way around the renovated Large Hadron Collider.

All systems are go! The Large Hadron Collider’s operations team has started running the accelerator through its normal operational cycle sans particles as a final dress rehearsal before the restart later this month.

“This is where we bring it all together,” says Mike Lamont, the head of CERN’s operations team.

Over the last two years, 400 engineers and technicians worked a total of 1 million hours repairing, upgrading and installing new technology into the LHC. And now, the world’s most powerful particle accelerator is almost ready to start doing its thing.

“During this final checkout, we will be testing all of the LHC’s subsystems to make sure the entire machine is ready,” says Markus Albert, one of the LHC operators responsible for this dry run. “We don’t want any surprises once we start operation with beam.”

Engineers will simulate the complete cycle of injecting, steering, accelerating, squeezing, colliding and finally dumping protons. Then engineers will power down the magnets and start the process all over again.

“Everything will behave exactly as if there is beam,” Albert says. “This way we can validate that these systems will all run together.”

Operators practiced sending beams of protons part of the way around the ring earlier this month.

During this test, engineers will keep a particularly close eye on the LHC’s superconducting magnet circuits, which received major work and upgrades during the shutdown.

“The whole magnet system was taken apart and put back together again, and with upgraded magnet protection systems everything needs to be very carefully checked out,” Lamont says. “In fact, this has been going on for the last six months in the powering tests.”

They will also scrutinize the beam interlock system—the system that triggers the beam dump, which diverts the beam out of the LHC and into a large block of graphite if anything goes wrong.

“There are thousands of inputs that feed into the beam interlock system, and if any of these inputs say something is wrong or they are not happy about the behavior of the beam, the system dumps the beam within three turns of the LHC,” Lamont says. 
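The dump logic Lamont describes, any one of thousands of inputs vetoing the beam, is a simple OR over fault flags. A hypothetical sketch (input names and structure invented for illustration; this is not CERN's actual machine-protection software):

```python
# Hypothetical beam-interlock logic: the beam permit holds only if EVERY
# input reports "OK"; any single unhappy input triggers a dump.
# All names here are invented for illustration.
def beam_permit(inputs):
    """Return True only if every interlock input reports 'OK'."""
    return all(status == "OK" for status in inputs.values())

inputs = {
    "magnet_quench_protection": "OK",
    "beam_loss_monitors": "OK",
    "vacuum_system": "OK",
}
assert beam_permit(inputs)           # all happy: beam circulates

inputs["beam_loss_monitors"] = "FAULT"   # one unhappy input...
assert not beam_permit(inputs)           # ...and the beam is dumped
```

The key design choice, mirrored in the quote above, is fail-safe asymmetry: keeping the beam requires unanimous agreement, while dumping it requires only a single dissent.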

During the week of March 23, engineers plan to send a proton beam all the way around the LHC for the first time in over two years. By the end of May, they hope to start high-energy proton-proton collisions.

“Standard operation is providing physics data to the four experiments,” Albert says. “The rest is just preparatory work.”


LHC restart timeline

February 2015: LHC filled with liquid helium
The Large Hadron Collider is now cooled to nearly its operational temperature.

First LHC magnets prepped for restart
A first set of superconducting magnets has passed the test and is ready for the Large Hadron Collider to restart in spring.

LHC experiments prep for restart
Engineers and technicians have begun to close experiments in preparation for the next run.


by Sarah Charley at March 20, 2015 02:50 PM

Jester - Resonaances

LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+ μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass squared q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV^2, see the plot. The measurement moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces new contributions to the 4-fermion operator (\bar b_L \gamma_\rho s_L)(\bar \mu \gamma^\rho \mu) can significantly improve agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.

So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing at this point. Other observables may tip the scale. If a consistent pattern of deviations in several B-physics observables emerges, only then can we trumpet victory.

Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine. 

by Jester ( at March 20, 2015 01:56 PM

Clifford V. Johnson - Asymptotia

Festival of Books!
(Click for larger view of 2010 Festival "What are you reading?" wall.) So the Festival of Books is 18-19th April this year. If you're in or near LA, I hope you're going! It's free, it's huge (the largest book festival in the USA) and also huge fun! They've announced the schedule of events and the dates on which you can snag (free) tickets for various indoor panels and appearances since they are very popular, as usual. So check out the panels, appearances, and performances here. (Check out several of my past posts on the Festival here. Note also that the festival is on the USC campus which is easy to get to using great public transport links if you don't want to deal with traffic and parking.) Note also that the shortlist for the 2014 LA Times Book Prizes was announced (a while back - I forgot to post about it) and it is here. I always find it interesting... for a start, it is a great list of reading suggestions! By the way, apparently I'm officially an author - not just a guy who writes from time to time - an author. Why? Well, I'm listed as one on the schedule site. I'll be on one of the author panels! It is moderated by KC Cole, and I'll be joining [...] Click to continue reading this post

by Clifford at March 20, 2015 01:56 PM

Sean Carroll - Preposterous Universe

Auction: Multiply-Signed Copy of Why Evolution Is True

Here is a belated but very welcome spinoff of our Moving Naturalism Forward workshop from 2012: Jerry Coyne was clever enough to bring along a copy of his book, Why Evolution Is True, and have all the participants sign it. He subsequently gathered a few more distinguished autographs, and to make it just a bit more beautiful, artist Kelly Houle added some original illustrations. Jerry is now auctioning off the book to benefit Doctors Without Borders. Check it out:



Here is the list of signatories:

  • Dan Barker
  • Sean Carroll
  • Jerry Coyne
  • Richard Dawkins
  • Terrence Deacon
  • Simon DeDeo
  • Daniel Dennett
  • Owen Flanagan
  • Anna Laurie Gaylor
  • Rebecca Goldstein
  • Ben Goren
  • Kelly Houle
  • Lawrence Krauss
  • Janna Levin
  • Jennifer Ouellette
  • Massimo Pigliucci
  • Steven Pinker
  • Carolyn Porco
  • Nicholas Pritzker
  • Alex Rosenberg
  • Don Ross
  • Steven Weinberg

Jerry is hoping it will fetch a good price to benefit the charity, so we’re spreading the word. I notice that a baseball signed by Mickey Mantle goes for about $2000. In my opinion a book signed by Steven Weinberg alone should go for even more, so just imagine what this is worth. You have ten days to get your bids in — and if it’s a bit pricey for you personally, I’m sure there’s someone who loves you enough to buy it for you.

by Sean Carroll at March 20, 2015 01:02 AM

March 19, 2015

Clifford V. Johnson - Asymptotia

LAIH Luncheon with Jack Miles
(Click for larger view.) On Friday 6th March the Los Angeles Institute for the Humanities (LAIH) was delighted to have our luncheon talk given by LAIH Fellow Jack Miles. He told us some of the story behind (and the making of) the Norton Anthology of World Religions - he is the main editor of this massive work - and lots of the ins and outs of how you go about undertaking such an enterprise. It was fascinating to hear how the various religions were chosen, for example, and how he selected and recruited specialist editors for each of the religions. It was an excellent talk, made all the more enjoyable by having Jack's quiet and [...] Click to continue reading this post

by Clifford at March 19, 2015 09:16 PM

Marco Frasca - The Gauge Connection

New Lucasian Professor

After a significant delay, Cambridge University has made known the name of Michael Green‘s successor in the Lucasian chair. The 19th Lucasian Professor is Michael Cates, Professor at the University of Edinburgh and Fellow of the Royal Society. Professor Cates is known worldwide for his research in the field of soft condensed matter. It is a well-deserved recognition and one of the best choices ever for this prestigious chair. So, we present our best wishes to Professor Cates for an excellent job in this new role.

Filed under: Condensed matter physics, News, Physics Tagged: Cambridge University, Edinburgh University, Lucasian Professor, Soft condensed matter

by mfrasca at March 19, 2015 08:58 AM

March 18, 2015

ZapperZ - Physics and Physicists

CERN's ALPHA Experiment
See, I like this. I like to highlight things that most of the general public simply don't know much about, especially when another major facility casts a huge shadow over them.

This article mentions two important things about CERN: It is more than just the LHC, and it highlights another very important experiment, the ALPHA experiment.

ALPHA’s main aim is to study the internal structure of the antihydrogen atom, and see if there exist any discernible differences within it that set it apart from regular hydrogen. In 2010 ALPHA was the first experiment to trap 38 antihydrogen atoms (an antielectron orbiting an antiproton) for about one-fifth of a second, and then the team perfected its apparatus and technique to trap a total of 309 antihydrogen atoms for 1000 s in 2011. Hangst hopes that with the new updated ALPHA 2 device (which includes lasers for spectroscopy), the researchers will soon see precisely what lies within an antihydrogen atom by studying its spectrum. They had a very short test run of a few weeks with ALPHA 2 late last year, and will begin their next set of experiments in earnest in the coming months.

They will be producing more amazing results in the future, because this is all uncharted territory. 


by ZapperZ ( at March 18, 2015 09:37 PM