# Particle Physics Planet

## April 01, 2015

### Jester - Resonaances

What If, Part 1
This is the do-or-die year, so Résonaances will be dead serious. This year, no stupid jokes on April Fools' day: no Higgs in jail, no loose cables, no discovery of supersymmetry, or such. Instead, I'm starting with a new series "What If" inspired by XKCD. In this series I will answer questions that everyone is dying to know the answer to. The first of these questions is

If HEP bloggers were Muppets,
which Muppet would they be?

• Gonzo the Great: Lubos@Reference Frame (odd-numbered days)
The one true artist. Not treated seriously by other Muppets, but adored by chickens.
• Animal: Lubos@Reference Frame (even-numbered days)
My favorite Muppet. Pure mayhem and destruction. Has only two modes: beat it, or eat it.
• Swedish Chef: Tommaso@Quantum Diaries Survivor
The Muppet with a penchant for experiment. You don't understand what he says but it's always amusing nonetheless.
• Kermit the Frog: Matt@Of Particular Significance
Born Muppet leader, though not clear if he really wants the job.
• Miss Piggy: Sabine@Backreaction
Not the only female Muppet, but certainly the best known. Admired for her stage talents but most of all for her punch.
• Rowlf: Sean@Preposterous Universe
One-Muppet orchestra. Impressive as an artist and as a comedian, though some complain he's gone to the dogs.

• Statler and Waldorf: Peter@Not Even Wrong
Constantly heckling other Muppets from the balcony, yet every week back for more.
• Fozzie Bear:  Jester@Résonaances
Failed stand-up comedian. Always stressed that he may not be funny after all.

If you have a match for Beaker, Bunsen Honeydew, or Dr Strangepork, let me know in the comments.

In preparation:
-If physicists were smurfs...
-If physicists lived in Middle-earth...
-If physicists were The Hobbit's dwarves...
and more.

### Quantum Diaries

LHC Run 2 cancelled, CERN closes doors

After a three-week review, CERN Director General Rolf-Dieter Heuer has announced that the LHC will not have another run and that the international laboratory will be closing its doors to science. The revelation follows an intense week of discussion, analysis and rumour-mongering.

While deleting some old files from the myriad of hard drives at the CERN Computing Centre, IT support found some data nobody had seen before. “It was just sitting there on a few hard drives in the corner” said Linus Distro, from IT Support. “So I told the analysts to take a look at it and the rest is history!”

The single event that definitively proved the existence of supersymmetry (BBC)

It turns out the rest is history, because these few exabytes of data held the answers to all of the open questions of physics. After discovering a staggering 327 new particles the physicists managed to prove the existence of supersymmetry, extra dimensions, dark matter, micro black holes, technicolor, and top quark condensates. But not string theory, that’s just silly.

Theorist John Ellis commented “I never thought I’d see this in my lifetime. I mean, I expected to see supersymmetry and dark matter, but now we have technicolor too. It’s quite simply amazing. We’ve been sitting on this data for years without even knowing it.”

Due to take on the role of Director General in 2016, Fabiola Gianotti said “Now that physics is finished I’m not sure what to do. I was expecting a long and industrious career at the lab, now I can retire early and buy a nice beach house near Napoli.”

The situation for universities across the world is less clear. PhD students are expected to have up to seven theses each to cope with all the extra discoveries. Professors are starting to panic, trying to save as much of their funding as possible. There has been a sudden increase in the number of conferences in Hawai’i, Cuba, and the Bahamas, as postdocs squeeze as much opportunity out of the final weeks of their careers as possible.

The ALICE Control Room will be repurposed into a massive Call of Duty multiplayer facility (ALICE Matters)

“The atmosphere on site is incredible!” shouted one slightly inebriated physicist, “People say we should measure everything down to the 6th decimal place, but to be honest we’ll probably just stop after four.”

Famous atheist Richard Dawkins has leapt on the opportunity to prove the non-existence of god. “If those files answer all the questions physics has left then surely it proves there is no god,” he tweeted last week. And he’s not alone. Thousands of people across the globe are finally realising that with no questions left to answer, they are completely intellectually and spiritually satisfied for the first time in history, and are busy validating their own world views.

Among the top answers are the following: Schrödinger’s cat is alive and well and living in Droitwich, god plays dice on Tuesdays, light is a particle and a wave and Canadian (and hopes you’re having a good day), electrons are strawberry flavoured, Leibniz and Newton were good friends who discovered calculus together, and if you could ride a beam of light it would be totally freaking awesome.

While the physicists may not have much to do anymore, the number of visitors has increased by 3500% in the past two weeks. People from all over the world are descending upon CERN to experience extra dimensions and parallel universes. For 20 CHF a family can visit a parallel universe of their choosing for up to two weeks. Head of CERN Visits, Mick Storr, said “It’s a great time to visit CERN. Finally we know where we came from, where we’re going, and what we’re made of. Now I just need to work out what to have for dinner.”

Early crowds gather to see the creation of the daily 14:00 wormhole at CMS. (CERN)

It’s unclear what will happen next. There are certainly questions about how best to use the extra dimensions, but the biggest problem is a social one. Nobody knows what will happen to the thousands of physicists who will have to re-enter the “real world”. It’s a scary place for some, and physicists lack basic transferable skills such as burger flipping and riot control.

Whatever happens, everyone will look back at the winter of 2015 as the most exciting time in science history. This year’s Nobel Prize ceremony will be a complicated matter indeed.

### Peter Coles - In the Dark

Interlude

The University of Sussex is closing down for a week to allow people to take a breather around Easter weekend. After this afternoon’s staff meeting, I will be heading off for a week’s holiday and probably won’t be blogging until I get back, primarily because I won’t have an internet connection where I’m going. That’s a deliberate decision, by the way….

So, as the saying goes, there will now follow a short intermission….

PS. The suitably restful and very typical bit of 1950s “light” music accompanying this is called Pastoral Montage, and it was written by South African-born composer Gideon Fagan.

### The n-Category Cafe

Split Octonions and the Rolling Ball

You may enjoy these webpages:

because they explain a nice example of the Erlangen Program more tersely — and I hope more simply — than before, with the help of some animations made by Geoffrey Dixon using WebGL. You can actually get a ball to roll in a way that illustrates the incidence geometry associated to the exceptional Lie group $\mathrm{G}_2$!

Abstract. Understanding exceptional Lie groups as the symmetry groups of more familiar objects is a fascinating challenge. The compact form of the smallest exceptional Lie group, $\mathrm{G}_2$, is the symmetry group of an 8-dimensional nonassociative algebra called the octonions. However, another form of this group arises as symmetries of a simple problem in classical mechanics! The space of configurations of a ball rolling on another ball without slipping or twisting defines a manifold where the tangent space of each point is equipped with a 2-dimensional subspace describing the allowed infinitesimal motions. Under certain special conditions, the split real form of $\mathrm{G}_2$ acts as symmetries. We can understand this using the quaternions together with an 8-dimensional algebra called the ‘split octonions’. The rolling ball picture makes the geometry associated to $\mathrm{G}_2$ quite vivid. This is joint work with James Dolan and John Huerta, with animations created by Geoffrey Dixon.

I’m going to take this show on the road and give talks about it at Penn State, the University of York (virtually), and elsewhere. And there’s no shortage of material to read for more details. John Huerta has blogged about this work here:

* John Huerta, G2 and the rolling ball.

and I have a 5-part series where I gradually lead up to the main idea, starting with easier examples:

* John Baez, Rolling circles and balls.

There’s also plenty of actual papers:

So, enjoy!

## March 31, 2015

### Christian P. Robert - xi'an's og

Le Monde puzzle [#905]

A recursive-programming Le Monde mathematical puzzle:

Given n tokens with 10≤n≤25, Alice and Bob play the following game: the first player draws an integer 1≤m≤6 at random. This player can then take 1≤r≤min(2m,n) tokens. The next player is then free to take 1≤s≤min(2r,n-r) tokens. The player taking the last tokens is the winner. There is a winning strategy for Alice if she starts with m=3 and if Bob starts with m=2. Deduce the value of n.

Although I first wrote a brute-force version of the following code, a moderate amount of thinking leads to the conclusion that a player facing n remaining tokens after the adversary has taken m tokens, with 2m≥n, always wins by taking all n remaining tokens:

optim=function(n,m){
# does the player to move win, facing n tokens after a previous take (or draw) of m?
outcome=(n<2*m+1) # if n<=2m, take all n remaining tokens and win
if (n>2*m){
for (i in 1:(2*m)) # otherwise try every legal take i
outcome=max(outcome,1-optim(n-i,i)) # win if some i leaves the opponent losing
}
return(outcome)
}




> subs=rep(0,16)
> for (n in 10:25) subs[n-9]=optim(n,3)
> for (n in 10:25) if (subs[n-9]==1) subs[n-9]=1-optim(n,2)
> subs
[1] 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
> (10:25)[subs==1]
[1] 18


Ergo, the number of tokens is 18!
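The same search reads naturally as a memoised recursion; here is an equivalent sketch in Python (a translation of the R function above; the function and variable names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n, m):
    """True if the player to move wins, facing n tokens after the
    previous player took (or, on the first move, drew) m."""
    if n <= 2 * m:
        return True  # take all remaining tokens and win outright
    # otherwise, win iff some legal take leaves the opponent in a losing position
    return any(not wins(n - i, i) for i in range(1, 2 * m + 1))

# Alice wins when she starts with m=3, and wins when Bob starts with m=2
solutions = [n for n in range(10, 26) if wins(n, 3) and not wins(n, 2)]
```

Running this reproduces the R result: the only n in 10..25 satisfying both conditions is 18.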

Filed under: Books, Kids, R, Statistics, University life Tagged: Le Monde, mathematical puzzle, R, recursive function

### astrobites - astro-ph reader's digest

Falling stones paint it black
• Title: Darkening of Mercury’s surface by cometary carbon
• Authors: Megan Bruck Syal, Peter H. Schultz, Miriam A. Riner
• First author’s institution: Lawrence Livermore National Laboratory
• Status of the Paper: Published in Nature
• Image: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington. Edited version of Image:Mercury in color – Prockter07.jpg by Papa Lima Whiskey. (NASA/JPL [1]) [Public domain], via Wikimedia Commons

What’s the issue?

Figure 1: The red curve illustrates the reflectance of Moon-like material mixed with organic free quartz sand and the blue curve shows the reflectance for the material mixed with organics (sugar) after the impact of the projectile. The impact causes the production of carbon for the material mixed with organics and its reflectance is lower.

Today’s Astrobite presents an explanation of a feature of the Solar System‘s smallest and innermost planet: the darkening of Mercury. So far astronomers could get no satisfaction in explaining the fact that Mercury’s surface is darker than the Moon’s. In principle, iron is the most common cause of darkening for airless bodies. The problem is that Mercury’s surface contains less iron than the Moon’s. Thus, there has to be a material other than iron that “paints” Mercury black.

Shooting on a Range and Gambling in Monte Carlo for Science

The authors of the article propose that carbon, instead of iron, can darken the surface of Mercury sufficiently. Indeed, they performed experiments at NASA’s Ames Vertical Gun Range, where they shot projectiles onto Moon-like material mixed with and without organics. When the projectile is shot at the material with organics, the heat induced by the impact causes the formation of carbon. As you can see in Figure 1 (Figure 3 in the paper), the reflectance of light from the surface is significantly lower when carbon is present.

At this point you may ask: Fine, but why should there be more carbon on Mercury than on the Moon? The explanation of the authors is based on two observations.

• Comets consist, on average, of about 18% carbon.
• The number of cometary impacts per unit area scales roughly inversely with radial distance from the Sun, so it is higher at Mercury than at the Moon.

Given that enough of the impacting material is retained on Mercury, the authors suggest that the larger number of impacts enriches the surface of Mercury more in carbon than the surface of the Moon. To be precise, they consider only small meteorites – so-called micrometeorites – and they assume a constant spherical size of 0.25 cm and a constant speed of 20 km/s. You may now argue that this is a drastic simplification, since meteorites have varying sizes and correspondingly higher speeds. The authors are aware of that, but they argue that larger objects have higher speeds, such that they will not be captured by Mercury’s gravitational field. Thus the impact of larger objects is negligible and micrometeorites are the dominating impactors.

Figure 2: The plot illustrates the probability distribution (black), the amount of mass retained on the surface obtained from the resolved micrometeorites for Mercury (dark blue points and curve) and the Moon (red points and curve) for different impact angles measured with respect to the horizontal axis. In comparison the results obtained from tracer particles for Mercury are plotted as light blue diamonds.

The authors test their idea with a Monte Carlo code, in which they compute the percentage of micrometeorites retained on the Moon and Mercury for different impact angles. In Figure 2 of this Astrobite (Figure 1 in the paper) you can see the probability for a micrometeorite to hit at a certain impact angle and the mass fraction of the impactors that is retained on Mercury. The impactors (micrometeorites) are resolved by several grid cells in the code. The drop in the retention fraction at 30° reflects the fact that, relative to the target, the micrometeorite carries more energy at this angle than at other angles. The results are similar for tracer particles that follow the movement of impactors during the simulation. The authors explain the differences at 15° with asymmetric shock conditions, such that some mass of the impactors does not stay on the surface. However, this process is – in contrast to the first method – not resolved when the micrometeorites are traced as single objects by tracer particles.
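To give a flavour of this kind of Monte Carlo bookkeeping (this is not the authors' simulation, which resolves the impactors in a shock-physics code), one can sample impact angles and average an assumed retention curve. Both the angle distribution p(θ) ∝ sin(2θ) and the retention function below are illustrative placeholders, not values from the paper:

```python
import math
import random

def sample_impact_angle(rng):
    # inverse-CDF sampling of the assumed distribution p(theta) = sin(2*theta)
    # on [0, pi/2]: CDF(theta) = (1 - cos(2*theta)) / 2
    return 0.5 * math.acos(1.0 - 2.0 * rng.random())

def retention_fraction(theta):
    # hypothetical smooth curve: grazing impacts retain less impactor mass
    return 0.9 * math.sin(theta) ** 0.5

def mean_retained_mass(n_events=100_000, seed=1):
    # average retained mass fraction over many simulated impacts
    rng = random.Random(seed)
    total = sum(retention_fraction(sample_impact_angle(rng))
                for _ in range(n_events))
    return total / n_events
```

The paper's version replaces the placeholder retention curve with the outcome of resolved impact simulations at each sampled angle, separately for Mercury and the Moon.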

Conclusion

Altogether, you can see that a sufficient amount of mass from the impactors stays on the surface (on average 83% for Mercury and 63% for the Moon), and the authors conclude that approximately 50 times more carbon-rich micrometeorite material is delivered to Mercury than to the Moon. Together with the result from the shooting experiment that carbon darkens the surface efficiently, micrometeorite impacts may cause the darkening of Mercury. In other words: Falling stones paint it black.


### Quantum Diaries

CERN Had Dark Energy All Along; Uses It To Fuel Researchers

I don’t usually get to spill the beans on a big discovery like this, but this time, I DO!

# CERN Had Dark Energy All Along!!

That’s right. That mysterious energy making up ~68% of the universe was being used all along at CERN! Being based at CERN now, I’ve had a first hand glimpse into the dark underside of Dark Energy. It all starts at the Crafted Refilling of Empty Mugs Area (CREMA), pictured below.

One CREMA station at CERN

Researchers and personnel seem to stumble up to these stations at almost all hours of the day, looking very dreary and dazed. They place a single cup below the spouts, and out comes a dark and eerie looking substance, which is then consumed. Some add a bit of milk for flavor, but all seem perkier and refreshed after consumption. Then they disappear from whence they came. These CREMA stations seem to be everywhere, from control rooms to offices, and are often found with groups of people huddled around them. In fact, they seem to exert a force on all who use them, keeping them in stable orbits about the stations.

Q. How much of this dark stuff do you consume on a daily basis?

A. At least one cup in the morning to fuel up, I don’t think I could manage to get to lunchtime without that one. Then multiple other cups distributed over the day, depending on the workload. It always feels like they help my thinking.

Q. Do you know where it comes from?

A. We have a machine in our office which takes capsules. I’m not 100% sure where those capsules are coming from, but they seem to restock automatically, so no one ever asked.

Q. Have you been hiding this from the world on purpose?

A. Well our stock is important to our group, if we would just share it with everyone around we could run out. And no one of us can make it through the day without. We tried alternatives, but none are so effective.

Q. Do you remember the first time you tried it?

A. Yes, they hooked me on it in university. From then on nothing worked without!

Q. Where does CERN get so much of it?

A. I never thought about this question. I think I’m just happy that there is enough for everyone here, and physicists need quite a lot of it to work.

In order to gauge just how much of this Dark Energy is being consumed, I studied the flux of people from the cafeteria as a function of time with cups of Dark Energy. I’ve compiled the results into the Dark Energy Consumption As Flux (DECAF) plot below.

Dark Energy Consumption as Flux plot. Taken March 31, 2015. Time is given in 24h time. Errors are statistical.

As the DECAF plot shows, there is a large spike in consumption, particularly after lunch. There is a clear peak at times after 12:20 and before 13:10. Whether there is an even larger peak hiding above 13:10 is not known, as the study stopped due to my advisor asking “shouldn’t you be doing actual work?”

There is an irreducible background of Light Energy in the cups used for Dark Energy, particularly of the herbal variety. Fortunately, there is often a dangly tag hanging off the cup to indicate to others that they are not using the precious Dark Energy supply, providing a clear signal for this study to eliminate the background.

While illuminating, this study still does not uncover the exact nature of Dark Energy, though it is clear that it is fueling research here and beyond.

### Emily Lakdawalla - The Planetary Society Blog

Revitalized 0.81m telescope studying properties of NEOs
Thanks to a new focal reducer and re-aluminized mirror from a Shoemaker NEO grant, a 0.81-meter telescope in Italy is performing astrometric follow-up observations and physical studies of asteroids.

### Lubos Motl - string vacua and pheno

Quantum gravity from quantum error-correcting codes?
Guest blog by Dr Beni Yoshida, quantum information fellow at Caltech

The lessons we learned from the Ryu-Takayanagi formula, the firewall paradox, and the ER=EPR conjecture have convinced us that quantum information theory can become a powerful tool to sharpen our understanding of various problems in high-energy physics. But many of the concepts utilized so far rely on entanglement entropy and its generalizations, quantities developed by von Neumann more than 60 years ago. We live in the 21st century. Why don’t we use more modern concepts, such as the theory of quantum error-correcting codes?
Off-topic, LHC: CERN sent quite some current to the shorted segment of the circuit, apparently melted and destroyed the offending metallic piece in a diode box, and miraculously cured the LHC! Restart could be within days. LM
In a recent paper with Daniel Harlow, Fernando Pastawski and John Preskill, we have proposed a toy model of the AdS/CFT correspondence based on quantum error-correcting codes. Fernando has already written about how this research project started after a fateful visit by Daniel to Caltech and John’s remarkable prediction in 1999. In this post, I hope to write an introduction which may serve as a reader’s guide to our paper, explaining why I’m so fascinated by the beauty of the toy model.

This is certainly a challenging task because I need to make it accessible to everyone while explaining real physics behind the paper. My personal philosophy is that a toy model must be as simple as possible while capturing key properties of the system of interest. In this post, I will try to extract some key features of the AdS/CFT correspondence and construct a toy model which captures these features. This post may be a bit technical compared to other recent posts, but anyway, let me give it a try...

Bulk locality paradox and quantum error-correction

The AdS/CFT correspondence says that there is some kind of correspondence between quantum gravity on $$(d+1)$$-dimensional asymptotically-AdS space and $$d$$-dimensional conformal field theory on its boundary. But how are they related?

The AdS-Rindler reconstruction tells us how to “reconstruct” a bulk operator from boundary operators. Consider a bulk operator $$\phi$$ and a boundary region A on a hyperbolic space (in other words, a negatively-curved plane). On a fixed time-slice, the causal wedge of A is a bulk region enclosed by the geodesic line of A (a curve with a minimal length). The AdS-Rindler reconstruction says that $$\phi$$ can be represented by some integral of local boundary operators supported on A if and only if $$\phi$$ is contained inside the causal wedge of A. Of course, there are multiple regions A,B,C,… whose causal wedges contain $$\phi$$, and the reconstruction should work for any such region.

The Rindler-wedge reconstruction

That a bulk operator in the causal wedge can be reconstructed by local boundary operators, however, leads to a rather perplexing paradox in the AdS/CFT correspondence. Consider a bulk operator $$\phi$$ at the center of a hyperbolic space, and split the boundary into three pieces, A, B, C. Then the geodesic line for the union of BC encloses the bulk operator, that is, $$\phi$$ is contained inside the causal wedge of BC. So, $$\phi$$ can be represented by local boundary operators supported on BC. But the same argument applies to AB and CA, implying that the bulk operator $$\phi$$ corresponds to local boundary operators which are supported inside AB, BC, and CA simultaneously. It would seem then that the bulk operator $$\phi$$ must correspond to an identity operator times a complex phase. In fact, similar arguments apply to any bulk operators, and thus, all the bulk operators must correspond to identity operators on the boundary. Then, the AdS/CFT correspondence seems so boring...

The bulk operator at the center is contained inside causal wedges of BC, AB, AC. Does this mean that the bulk operator corresponds to an identity operator on the boundary?

Almheiri, Dong, and Harlow have recently proposed an intriguing way of reconciling this paradox with the AdS/CFT correspondence [see also Polchinski et al., TRF]. They proposed that the AdS/CFT correspondence can be viewed as a quantum error-correcting code. Their idea is as follows. Instead of $$\phi$$ corresponding to a single boundary operator, $$\phi$$ may correspond to different operators in different regions, say $$O_{AB}$$, $$O_{BC}$$, $$O_{CA}$$ living in AB, BC, CA respectively. Even though $$O_{AB}$$, $$O_{BC}$$, $$O_{CA}$$ are different boundary operators, they may be equivalent inside a certain low energy subspace on the boundary.

This situation resembles the so-called quantum secret-sharing code. The quantum information at the center of the bulk cannot be accessed from any single party A, B or C because $$\phi$$ does not have representation on A, B, or C. It can be accessed only if multiple parties cooperate and perform joint measurements. It seems that a quantum secret is shared among three parties, and the AdS/CFT correspondence somehow realizes the three-party quantum secret-sharing code!

Entanglement wedge reconstruction?

Recently, causal wedge reconstruction has been further generalized to the notion of entanglement wedge reconstruction. Imagine we split the boundary into four pieces A,B,C,D such that A,C are larger than B,D. Then the geodesic lines for A and C do not form the geodesic line for the union of A and C because we can draw shorter arcs by connecting endpoints of A and C, which form the global geodesic line. The entanglement wedge of AC is a bulk region enclosed by this global geodesic line of AC. And the entanglement wedge reconstruction predicts that $$\phi$$ can be represented as an integral of local boundary operators on AC if and only if $$\phi$$ is inside the entanglement wedge of AC [1].

Causal wedge vs entanglement wedge.

Building a minimal toy model; the five-qubit code

Okay, now let’s try to construct a toy model which admits causal and entanglement wedge reconstructions of bulk operators. Because I want a simple toy model, I take a rather bold assumption that the bulk consists of a single qubit while the boundary consists of five qubits, denoted by A, B, C, D, E.

Reconstruction of a bulk operator in the “minimal” model.

What does causal wedge reconstruction teach us in this minimal setup of five and one qubits? First, we split the boundary system into two pieces, ABC and DE and observe that the bulk operator $$\phi$$ is contained inside the causal wedge of ABC. From the rotational symmetries, we know that the bulk operator $$\phi$$ must have representations on ABC, BCD, CDE, DEA, EAB. Next, we split the boundary system into four pieces, AB, C, D and E, and observe that the bulk operator $$\phi$$ is contained inside the entanglement wedge of AB and D. So, the bulk operator $$\phi$$ must have representations on ABD, BCE, CDA, DEB, EAC. In summary, we have the following:
The bulk operator must have representations on R if and only if R contains three or more qubits.
This is the property I want my toy model to possess.

What kinds of physical systems have such a property? Luckily, we quantum information theorists know the answer: the five-qubit code. The five-qubit code, proposed here and here, has the ability to encode one logical qubit into five-qubit entangled states and corrects any single-qubit error. We can view the five-qubit code as a quantum encoding isometry from one-qubit states to five-qubit states: $\alpha | 0 \rangle + \beta | 1 \rangle \rightarrow \alpha | \tilde{0} \rangle + \beta | \tilde{1} \rangle$ where $$| \tilde{0} \rangle$$ and $$| \tilde{1} \rangle$$ are the basis for a logical qubit. In quantum coding theory, logical Pauli operators $$\bar{X}$$ and $$\bar{Z}$$ are Pauli operators which act like Pauli X (bit flip) and Z (phase flip) on a logical qubit spanned by $$| \tilde{0} \rangle$$ and $$| \tilde{1} \rangle$$. In the five-qubit code, for any set of qubits R with volume 3, some representations of logical Pauli X and Z operators, $$\bar{X}_{R}$$ and $$\bar{Z}_{R}$$, can be found on R. While $$\bar{X}_{R}$$ and $$\bar{X}_{R'}$$ are different operators for $$R \not= R'$$, they act exactly in the same manner on the codeword subspace spanned by $$| \tilde{0} \rangle$$ and $$| \tilde{1} \rangle$$. This is exactly the property I was looking for.
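This property can be checked by brute force, since the coset of a logical Pauli operator contains only 16 elements (the logical operator times each product of stabilizer generators). The sketch below (plain Python; helper names are mine) represents each Pauli, up to phase, as a pair of X/Z bit strings, using the standard generators XZZXI and its cyclic shifts with logical X = XXXXX. It verifies both that the minimum weight of a logical-X representative is 3 and that every 3-qubit region supports one:

```python
from itertools import combinations, product

def pauli(s):
    # a Pauli string, up to phase, as (X-bits, Z-bits); Y sets both bits
    x = tuple(int(c in 'XY') for c in s)
    z = tuple(int(c in 'ZY') for c in s)
    return (x, z)

def mul(p, q):
    # Pauli product up to phase: componentwise XOR of the bit strings
    return (tuple(a ^ b for a, b in zip(p[0], q[0])),
            tuple(a ^ b for a, b in zip(p[1], q[1])))

def support(p):
    # qubits on which the operator acts non-trivially
    return {i for i in range(5) if p[0][i] or p[1][i]}

gens = [pauli(s) for s in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]
logical_X = pauli('XXXXX')

# the full coset: logical X times every product of stabilizer generators
coset = []
for bits in product([0, 1], repeat=4):
    p = logical_X
    for b, g in zip(bits, gens):
        if b:
            p = mul(p, g)
    coset.append(p)

# the lightest logical-X representative has weight 3 (the code distance)
min_weight = min(len(support(p)) for p in coset)

# every 3-qubit region supports some representative
all_regions_ok = all(any(support(p) <= set(R) for p in coset)
                     for R in combinations(range(5), 3))
```

By the cyclic symmetry of the code the same holds for logical Z, and the second check is exactly the three-out-of-five secret-sharing property described above.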

Holographic quantum error-correcting codes

We just found possibly the smallest toy model of the AdS/CFT correspondence, the five-qubit code! The remaining task is to construct a larger model. For this goal, we view the encoding isometry of the five-qubit code as a six-leg tensor. The holographic quantum code is a network of such six-leg tensors covering a hyperbolic space where each tensor has one open leg. These open legs on the bulk are interpreted as logical input legs of a quantum error-correcting code while open legs on the boundary are identified as outputs where quantum information is encoded. Then the entire tensor network can be viewed as an encoding isometry.

The six-leg tensor has some nice properties. Imagine we inject some Pauli operator into one of six legs in the tensor. Then, for any given choice of three legs, there always exists a Pauli operator acting on them which counteracts the effect of the injection. An example is shown below:

In other words, if an operator is injected from one tensor leg, one can “push” it into other three tensor legs.

Finally, let’s demonstrate causal wedge reconstruction of bulk logical operators. Pick an arbitrary open tensor leg in the bulk and inject some Pauli operator into it. We can “push” it into three tensor legs, which are then injected into neighboring tensors. By repeatedly pushing operators to the boundary in the network, we eventually have some representation of the operator living on a piece of boundary region A. And the bulk operator is contained inside the causal wedge of A. (Here, the length of the curve can be defined as the number of tensor legs cut by the curve). You can also push operators into the boundary by choosing different tensor legs which lead to different representations of a logical operator. You can even have a rather exotic representation which is supported non-locally over two disjoint pieces of the boundary, realizing entanglement wedge reconstruction.

Causal wedge and entanglement wedge reconstruction.

What’s next?

This post is already pretty long and I need to wrap it up…

Shor’s quantum factoring algorithm is a revolutionary invention which opened a whole new research avenue of quantum information science. It is often forgotten, but the first quantum error-correcting code is another important invention by Peter Shor (and independently by Andrew Steane) which enabled a proof that quantum computation can be performed fault-tolerantly. The theory of quantum error-correcting codes has found interesting applications in studies of condensed matter physics, such as topological phases of matter. Perhaps then, quantum coding theory will also find applications in high energy physics.

Indeed, many interesting open problems are awaiting us. Is entanglement wedge reconstruction a generic feature of tensor networks? How do we describe black holes by quantum error-correcting codes? Can we build a fast scrambler by tensor networks? Is entanglement a wormhole (or maybe a perfect tensor)? Can we resolve the firewall paradox by holographic quantum codes? Can the physics of quantum gravity be described by tensor networks? Or can the theory of quantum gravity provide us with novel constructions of quantum codes?

I feel that now is the time for quantum information scientists to jump into the research of black holes. We don’t know if we will be burned by a firewall or not..., but it is worth trying.

1. Whether entanglement wedge reconstruction is possible in the AdS/CFT correspondence or not still remains controversial. In the spirit of the Ryu-Takayanagi formula which relates entanglement entropy to the length of a global geodesic line, entanglement wedge reconstruction seems natural. But that a bulk operator can be reconstructed from boundary operators on two separate pieces A and C non-locally sounds rather exotic. In our paper, we constructed a toy model of tensor networks which allows both causal and entanglement wedge reconstruction in many cases. For details, see our paper.

### Symmetrybreaking - Fermilab/SLAC

LHC restart back on track

The Large Hadron Collider has overcome a technical hurdle and could restart as early as next week.

On Monday, teams working on the Large Hadron Collider resolved a problem that had been delaying the restart of the accelerator, according to a statement from CERN.

On March 24, the European physics laboratory announced that a short circuit to ground had occurred in one of the connections with an LHC magnet. LHC magnets are superconducting, which means that they can maintain a high electrical current with zero electrical resistance. To be superconducting, the LHC magnets must be chilled to almost minus 460 degrees Fahrenheit.

The short circuit occurred between a superconducting magnet and its diode. Diodes help protect the LHC's magnets by diverting electrical current into a parallel circuit if the magnets lose their superconductivity.

When teams discovered the problem, all eight sections of the LHC were already cooled to operating temperature. To fix the problem, they knew that they might have to go through a weeks-long process of carefully rewarming and then recooling one section.

The short circuit was caused by a fragment of metal caught between the magnet and the diode. After locating the fragment and examining it via X-ray, engineers and technicians decided to try to melt it. They could do this in a way similar to blowing a fuse. Importantly, the technique would not require them to warm up the magnets.

They injected almost 400 amps of current into the diode circuit for a few milliseconds. Measurements made today showed the short circuit had disappeared.

Now the teams must conduct further tweaks and tests and restart the final commissioning of the accelerator. The LHC could see beams as early as next week.

Like what you see? Sign up for a free subscription to symmetry!

### Peter Coles - In the Dark

Why the Big Bang wasn’t as loud as you think…

So how loud was the Big Bang?

I’ve posted on this before but a comment posted today reminded me that perhaps I should recycle it and update it as it relates to the cosmic microwave background, which is what I work on on the rare occasions on which I get to do anything interesting.

As you probably know, the Big Bang theory involves the assumption that the entire Universe – not only the matter and energy but also space-time itself – had its origins in a single event a finite time in the past and it has been expanding ever since. The earliest mathematical models of what we now call the Big Bang were derived independently by Alexander Friedmann and Georges Lemaître in the 1920s. The term “Big Bang” was later coined by Fred Hoyle as a derogatory description of an idea he couldn’t stomach, but the phrase caught on. Strictly speaking, though, the Big Bang was a misnomer.

Friedmann and Lemaître had made mathematical models of universes that obeyed the Cosmological Principle, i.e. in which the matter was distributed in a completely uniform manner throughout space. Sound consists of oscillating fluctuations in the pressure and density of the medium through which it travels. These are longitudinal “acoustic” waves that involve successive compressions and rarefactions of matter, in other words departures from the purely homogeneous state required by the Cosmological Principle. The Friedmann-Lemaître models contained no sound waves, so they did not really describe a Big Bang at all, let alone how loud it was.

However, as I have blogged about before, newer versions of the Big Bang theory do contain a mechanism for generating sound waves in the early Universe and, even more importantly, these waves have now been detected and their properties measured.

The above image shows the variations in temperature of the cosmic microwave background as charted by the Planck Satellite. The average temperature of the sky is about 2.73 K but there are variations across the sky that have an rms value of about 0.08 milliKelvin. This corresponds to a fractional variation of a few parts in a hundred thousand relative to the mean temperature. It doesn’t sound like much, but this is evidence for the existence of primordial acoustic waves and therefore of a Big Bang with a genuine “Bang” to it.

A full description of what causes these temperature fluctuations would be very complicated but, roughly speaking, the variation in temperature you see corresponds directly to variations in density and pressure arising from sound waves.

So how loud was it?

The waves we are dealing with have wavelengths up to about 200,000 light years and the human ear can only actually hear sound waves with wavelengths up to about 17 metres. In any case the Universe was far too hot and dense for there to have been anyone around listening to the cacophony at the time. In some sense, therefore, it wouldn’t have been loud at all because our ears can’t have heard anything.

Setting aside these rather pedantic objections – I’m never one to allow dull realism to get in the way of a good story – we can get a reasonable value for the loudness in terms of the familiar language of decibels. This defines the sound level L logarithmically in terms of the rms pressure of the sound wave, Prms, relative to some reference pressure level Pref:

L = 20 log10[Prms/Pref].

(The 20 appears because the energy carried by a wave goes as the square of its amplitude; expressed in terms of energy the prefactor would be 10.)

There is no absolute scale for loudness because this expression involves the specification of the reference pressure. We have to set this level by analogy with everyday experience. For sound waves in air it is taken to be about 20 microPascals, or about 2×10⁻¹⁰ times the ambient atmospheric pressure, which is about 100,000 Pa. This reference is chosen because the limit of audibility for most people corresponds to pressure variations of this order, which consequently have L = 0 dB. It seems reasonable to set the reference pressure of the early Universe to be about the same fraction of the ambient pressure then, i.e.

Pref ~ 2×10⁻¹⁰ Pamb.

The physics of how primordial variations in pressure translate into observed fluctuations in the CMB temperature is quite complicated, because the primordial universe consists of a plasma rather than air. Moreover, the actual sound of the Big Bang contains a mixture of wavelengths with slightly different amplitudes. In fact here is the spectrum, showing a distinctive signature that looks, at least in this representation, like a fundamental tone and a series of harmonics…

If you take into account all this structure it all gets a bit messy, but it’s quite easy to get a rough but reasonable estimate by ignoring these complications. We simply take the rms pressure variation to be the same fraction of the ambient pressure as the average temperature variation is of the average CMB temperature, i.e.

Prms ~ a few × 10⁻⁵ Pamb.

If we do this, the ambient pressure cancels out of the ratio Prms/Pref, leaving an rms variation of a few times 10⁻⁵ of the ambient pressure. With our definition of the decibel level, waves of this amplitude – variations of one part in a hundred thousand of the ambient pressure – give roughly L = 100 dB, while one part in ten thousand gives about L = 120 dB. The sound of the Big Bang therefore peaks at levels just a bit less than 120 dB.

As you can see in the Figure above, this is close to the threshold of pain,  but it’s perhaps not as loud as you might have guessed in response to the initial question. Modern popular beat combos often play their dreadful rock music much louder than the Big Bang….

A useful yardstick is the amplitude at which the fluctuations in pressure are comparable to the mean pressure. This would give a factor of about 10¹⁰ in the argument of the logarithm and is pretty much the limit at which sound waves can propagate without distortion. Such waves would have L ≈ 190 dB. It is estimated that the 1883 Krakatoa eruption produced a sound level of about 180 dB at a range of 100 miles. By comparison the Big Bang was little more than a whimper.
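The decibel arithmetic above is easy to reproduce. Here is a minimal sketch; the 2×10⁻¹⁰ reference fraction and the fluctuation amplitudes are the rough values used in the text, and since both pressures scale with the ambient pressure, only the fractions matter:

```python
import math

def big_bang_level(p_rms_frac, p_ref_frac=2e-10):
    """Sound level L = 20 log10(Prms/Pref), with both pressures
    expressed as fractions of the ambient pressure (which cancels)."""
    return 20 * math.log10(p_rms_frac / p_ref_frac)

print(round(big_bang_level(1e-5)))  # one part in 1e5: roughly 100 dB
print(round(big_bang_level(1e-4)))  # one part in 1e4: about 120 dB
print(round(big_bang_level(0.5)))   # fluctuations comparable to ambient: ~190 dB
```

The exact values come out at 94, 114 and 188 dB, consistent with the rounded figures quoted above.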

PS. If you would like to read more about the actual sound of the Big Bang, have a look at John Cramer’s webpages. You can also download simulations of the actual sound. If you listen to them you will hear that it’s more of a “Roar” than a “Bang”, because the sound waves don’t actually originate at a single well-defined event but are excited incoherently all over the Universe.

### Symmetrybreaking - Fermilab/SLAC

Which physics machine has the power to go all the way?

The competition is fierce, and only four fantastic pieces of physics equipment emerged from the fray. Below are your Fundamental Four match-ups, so get voting to make sure your favorite makes it to the Grand Unified Championship.

You have until midnight PDT on Thursday, April 2, to vote in this round. Come back on April 3 to see if your pick advanced and vote in the final round.

### Tommaso Dorigo - Scientificblogging

Fighting Plagiarism In Scientific Papers
Plagiarism is the most sincere form of flattery, they say (or rather, this is said of imitation). In arts - literature, music, painting - it can at times be tolerated, as an artist might want to take inspiration from others, elaborate on an idea, or give it a different twist. In art it is the realization of the idea which matters.

## March 30, 2015

### Christian P. Robert - xi'an's og

MCMskv, Lenzerheide, Jan. 5-7, 2016

Following the highly successful [authorised opinion!, from objective sources] MCMski IV, in Chamonix last year, the BayesComp section of ISBA has decided in favour of a two-year period, which means the great item of news that next year we will meet again for MCMski V [or MCMskv for short], this time on the snowy slopes of the Swiss town of Lenzerheide, south of Zürich. The committees are headed by the indefatigable Antonietta Mira and Mark Girolami. The plenary speakers have already been contacted and Steve Scott (Google), Steve Fienberg (CMU), David Dunson (Duke), Krys Latuszynski (Warwick), and Tony Lelièvre (Mines, Paris), have agreed to talk. Similarly, the nine invited sessions have been selected and will include Hamiltonian Monte Carlo,  Algorithms for Intractable Problems (ABC included!), Theory of (Ultra)High-Dimensional Bayesian Computation, Bayesian NonParametrics, Bayesian Econometrics,  Quasi Monte Carlo, Statistics of Deep Learning, Uncertainty Quantification in Mathematical Models, and Biostatistics. There will be afternoon tutorials, including a practical session from the Stan team, tutorials for which call is open, poster sessions, a conference dinner at which we will be entertained by the unstoppable Imposteriors. The Richard Tweedie ski race is back as well, with a pair of Blossom skis for the winner!

As in Chamonix, there will be parallel sessions and hence the scientific committee has issued a call for proposals to organise contributed sessions, tutorials and the presentation of posters on particularly timely and exciting areas of research relevant and of current interest to Bayesian Computation. All proposals should be sent to Mark Girolami directly by May the 4th (be with him!).

Filed under: Kids, Mountains, pictures, R, Statistics, Travel, University life Tagged: ABC, BayesComp, Bayesian computation, Blossom skis, Chamonix, Glenlivet, Hamiltonian Monte Carlo, intractable likelihood, ISBA, MCMSki, MCMskv, Monte Carlo Statistical Methods, Richard Tweedie, ski town, STAN, Switzerland, Zurich

### ATLAS Experiment

Moriond Electroweak: physics, skiing and Italian food

If you’re a young physicist working in high energy physics, you realize very soon in your career that “going for Moriond” and “going to Moriond” are two different things, and that neither of the two means that you’re actually going here:

The original location of the Moriond conference series

“Les rencontres de Moriond” is one of the main Winter conferences for our field. Starting from its original location in Moriond, it has been held around the French and Italian Alps since 1966. In the 60s and 70s, there was a clear distinction between two branches of the same conference, as “electroweak” and “QCD” physics were still done in different labs and accelerators: in those years the former had to do for example with the discovery of the W and Z bosons and their interactions, while the latter saw the development of a model to describe the “quarks” that compose protons and neutrons, and the discovery of these constituents themselves. Nowadays, both kinds of physics are studied at the LHC and in other experiments around the world, so the results presented in the two conferences are not necessarily divided by topic anymore.

This year I was lucky enough to be contributing some results that were “going for Moriond”, which means they’d be approved by the Collaboration to be presented at this conference for the first time, but I would also be “going to Moriond” in person. This year’s “Moriond Electroweak” was held in the Italian mountain resort of La Thuile, and had a special significance. In the session that celebrated the 50 years of the conference, the founder Jean Trân Thanh Vân reminded the audience of the two pillars of this conference:

• encourage discussions and exchanges between theoretical and experimental physicists;
• let young scientists meet senior researchers and discuss their results.

The official Moriond EW t-shirt and the announcement of the slalom competition

The first point was made when theorists and experimentalists alike were asked to take part in a slalom competition. The results were not categorized by subject of study, but certainly the cheering came from and towards both parties.

The latter took place almost every evening, in the dedicated “young scientists session”. Here, students and young post-docs can apply to give a short talk and answer questions on their research topic in front of an international audience of theorists and experimentalists.

The questions and answers can then be carried over to the (abundant) dinner. As an Italian, I do appreciate the long evenings dedicated to a mixture of excellent food and physics discussions (and where the two can be identified with each other, as in this snapshot from a talk by Francesco Riva).

New physics matched to Italian dinner choices at the conference, according to Francesco Riva

Back to the physics: the results I contributed to were shown in the afternoon session, before the 50th anniversary talks. They’re in the top corner of one of the slides from the summary of the ATLAS and CMS new physics searches.

Beyond the Standard Model: New results presented at the Moriond EW conference

That’s only the tip of the iceberg of a search that looks for new phenomena that would manifest as an excess of collimated jets of particles in the central region of the detector, and it shows that there is no new physics to be found here, nor in any of the other searches shown in the conference so far. (What we didn’t know at that time was that there would be something not consistent with expectations in the LHCb results shown just one day after, as explained in this article). Given that so far we have not found much beyond what we consider Standard (as in belonging to the predictions made by the Standard Model of Particle Physics), the conference had a special focus on searches that look for the unexpected in unexpected places. “Stealthy” is how the physics beyond the Standard Model that is particularly hard to find is characterized, and as experimentalists we want to pay particular attention to the “blind spots” where we haven’t yet looked for the upcoming LHC runs. This was highlighted in the morning talks, describing searches for Supersymmetry in blind spots and searches for particles that leave no immediate signature after the collision because of their long lifetime. There were also other ideas of how to test the Standard Model with very high precision, as highlighted in another food-related slide by Francesco Riva.

Techniques to find new physics, according to Francesco Riva

No one in the audience forgot, however, that the new LHC run will bring more energy and more data. Both will allow us to investigate new, rare processes that were not accessible in the first run. Discoveries might be just around the corner!

Overall, the “Rencontres de Moriond” conferences have the effect of leaving everyone enthusiastic for the discussion and eager for more results: in particular, next year’s edition may see some of the first results of the upcoming LHC run. And of course, the results will be best discussed on skis and over dinner.

 Caterina Doglioni is a post-doctoral researcher in the ATLAS group of the University of Geneva. She got her taste for calorimeters with the Rome Sapienza group in the commissioning of the ECAL at the CMS experiment during her Master’s thesis. She continued her PhD work with the University of Oxford and moved to hadronic calorimeters: she worked on calibrating and measuring hadronic jets with the first ATLAS data. She is still using jets to search for new physics phenomena, while thinking about calorimeters at a new future hadron collider.

### Quantum Diaries

Superconducting test accelerator achieves first electron beam

Last week the first SRF cavities of Fermilab’s superconducting test accelerator propelled their first electrons. Photo: Reidar Hahn

The newest particle accelerators and those of the future will be built with superconducting radio-frequency (SRF) cavities, and institutions around the world are working hard to develop this technology. Fermilab’s advanced superconducting test accelerator was built for research and development of SRF accelerator technology.

On Friday, after more than seven years of planning and building by scientists and engineers, the accelerator delivered its first beam.

The Fermilab superconducting test accelerator is a linear accelerator (linac) with three main components: a photoinjector that includes an RF gun coupled to an ultraviolet-laser system, several cryomodules and a beamline. Electron bunches are produced when an ultraviolet pulse generated by the laser hits a cathode located on the back plate of the gun. Acceleration continues through two SRF cavities inside the cryomodules. After exiting the cryomodules, the bunches travel down a beamline, where researchers can assess them.

Each meter-long cavity consists of nine cells made from high-purity niobium. In order to become superconductive, the cavities sit in a vessel filled with superfluid liquid helium at temperatures close to absolute zero.

As RF power pulses through these cavities, it creates an oscillating electric field that runs through the cells. If the charged particles meet the oscillating waves at the right phase, they are pushed forward and propelled down the accelerator.
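The phase dependence of that push can be sketched with the simple on-crest model ΔE = eV₀·cos φ. This is only an illustration: the 25 MV effective cavity voltage is an assumed round number, not a quoted Fermilab figure, and real cavities involve transit-time effects ignored here:

```python
import math

V0 = 25e6  # assumed effective accelerating voltage per cavity, in volts

def energy_gain_mev(phase_deg):
    """Energy gained by one electron crossing the cavity, in MeV,
    in the toy model dE = e * V0 * cos(phase)."""
    return V0 * math.cos(math.radians(phase_deg)) / 1e6

# On-crest (0 degrees) gives the full kick; at 90 degrees off-crest
# the particle gains nothing.
for phi in (0, 30, 60, 90):
    print(phi, round(energy_gain_mev(phi), 2))
```

A particle arriving on the crest of the wave gets the maximum push; arriving a quarter-period late, it gets none, which is why the bunches must stay synchronised with the RF.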

The major advantage of using superconductors is that the lack of electrical resistance allows virtually all the energy passing through to be used for accelerating particle beams, ultimately creating more efficient accelerators.

The superconducting test accelerator team celebrates first beam in the operations center at NML. Vladimir Shiltsev, left, is pointing to an image of the beam. Photo: Pavel Juarez, AD

“It’s more bang for the buck,” said Elvin Harms, one of the leaders of the commissioning effort.

The superconducting test accelerator’s photoinjector gun first produced electrons in June 2013. In the current run, electrons are being shot through one single-cavity cryomodule, with a second, upgraded model to be installed in the next few months. Future plans call for accelerating the electron beam through an eight-cavity cryomodule, CM2, which was the first to reach the specifications of the proposed International Linear Collider (ILC).

Fermilab is one of the few facilities that provides space for advanced accelerator research and development. These experiments will help set the stage for future superconducting accelerators such as SLAC’s Linac Coherent Light Source II, of which Fermilab is one of several partner laboratories.

“The linac is similar to other accelerators that exist, but the ability to use this type of setup to carry out accelerator science experiments and train students is unique,” said Philippe Piot, a physicist at Fermilab and professor at Northern Illinois University leading one of the first experiments at the test accelerator. A Fermilab team has designed and is beginning to construct the Integrable Optics Test Accelerator ring, a storage ring that will be attached to the superconducting test accelerator in the years to come.

“This cements the fact that Fermilab has been building up the infrastructure for mastering SRF technology,” Harms said. “This is the crown jewel of that: saying that we can build the components, put them together, and now we can accelerate a beam.”

Diana Kwon

### Lubos Motl - string vacua and pheno

David Gross' NYU lecture
I think that this 97-minute-long public lecture by David Gross at New York University hasn't been embedded on this blog yet:

It is not just another copy of a talk you have heard five times.

He has talked about the Standard Model's being nano-nanophysics, QCD, Higgs, some signs of SUSY (and perhaps unification) at the TeV scale that we may already be seeing, the future colliders (probably in China), and Schrödinger's dogs, among other things.

There were some questions at the end, too.

### CERN Bulletin

Safety Training: places available in March and April 2015

There are places available in the forthcoming Safety courses. For updates and registrations, please refer to the Safety Training Catalogue (see here).

### Emily Lakdawalla - The Planetary Society Blog

Your First Timeline of Events for LightSail's Test Flight
The team behind The Planetary Society’s LightSail spacecraft is kicking off a series of simulations to ensure the spacecraft’s ground systems are ready for launch.

### CERN Bulletin

Academic Training Lecture | Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood | 7-9 April
Please note that our next series of Academic Training Lectures will take place on 7, 8 and 9 April 2015: Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood, by Harrison Prosper, Florida State University, USA, from 11.00 a.m. to 12.00 p.m. in the Council Chamber (503-1-001). https://indico.cern.ch/event/358542/

### Peter Coles - In the Dark

Found in Translation…

A nice surprise was waiting for me when I arrived at work this morning in the form of a parcel from Oxford University Press containing six copies of the new Arabic edition of my book Cosmology: A Very Short Introduction. I think I’ve put them the right way up. I was a bit confused because they open the opposite way to books in English, as Arabic is read from right to left rather than from left to right.

Anyway, although I can’t read Arabic it’s nice to have these to put with the other foreign editions, including these. I still can’t remember whether the first one is Japanese or Korean…

…still, it’s interesting to see how they’ve chosen different covers for the different translations, and at least I know what my name looks like in ~~Russian~~ Bulgarian!

### John Baez - Azimuth

A Networked World (Part 2)

guest post by David Spivak

### Creating a knowledge network

In 2007, I asked myself: as mathematically as possible, what can formally ground meaningful information, including both its successful communication and its role in decision-making? I believed that category theory could be useful in formalizing the type of object that we call information, and the type of relationship that we call communication.

Over the next few years, I worked on this project. I tried to understand what information is, how it is stored, and how it can be transferred between entities that think differently. Since databases store information, I wanted to understand databases category-theoretically. I eventually decided that databases are basically just categories $\mathcal{C},$ corresponding to a collection of meaningful concepts and connections between them, and that these categories are equipped with functors $\mathcal{C}\to\mathbf{Set}.$ Such a functor assigns to each meaningful concept a set of examples, and to each connection a function in $\mathbf{Set}$ between the corresponding sets. I later found out that this “databases as categories” idea was not original; it is due to Rosebrugh and others. My view on the subject has matured a bit since then, but I still like this basic conception of databases.
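As a concrete miniature of the "databases as categories" idea, take a category with two objects and one arrow, and a functor to Set given by explicit sets and a function. A few lines suffice to check that the functor is well defined. The table and row names below are invented for illustration, not taken from Spivak's papers:

```python
# Objects of the toy category C are table names; the one arrow is a
# foreign-key-like relationship between them.
objects = {"Person", "Mother"}
arrows = {"has_mother": ("Person", "Mother")}  # name -> (source, target)

# A functor C -> Set: a set of examples for each object...
F_obj = {"Person": {"alice", "bob"}, "Mother": {"carol", "dana"}}
# ...and, for each arrow, a total function from source set to target set.
F_arr = {"has_mother": {"alice": "carol", "bob": "dana"}}

def functor_ok():
    """Check each arrow is sent to a total function F(src) -> F(tgt):
    defined on all of the source set, landing inside the target set."""
    return all(
        set(F_arr[a]) == F_obj[src] and set(F_arr[a].values()) <= F_obj[tgt]
        for a, (src, tgt) in arrows.items()
    )

print(functor_ok())
```

The schema (objects and arrows) is the category; the particular sets and functions are one functor, i.e. one state of the database.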

If we model a person’s knowledge as a database (interconnected tables of examples of things and relationships), then the network of knowledgeable humans could be conceptualized as a simplicial complex equipped with a sheaf of databases. Here, a vertex represents an individual, with her database of knowledge. An edge represents a pair of individuals and a common ground database relating their individual databases. For example, you and your brother have a database of concepts and examples from your history. The common-ground database is like the intersection of the two databases, but it could be smaller (if the people don’t yet know they agree on something). In a simplicial complex, there are not only vertices and edges, but also triangles (and so on). These would represent databases held in common between three or more people.

I wanted “regular people” to actually make such a knowledge network, i.e., to share their ideas in the form of categories and link them together with functors. Of course, most people don’t know categories and functors, so I thought I’d make things easier for them by equipping categories with linguistic structures: text boxes for objects, labeled arrows for morphisms. For example, “a person has a mother” would be a morphism from the “person” object, to the “mother” object. I called such a linguistic category an olog, playing on the word blog. The idea (originally inspired during a conversation with my friend Ralph Hutchison) was that I wanted people, especially scientists, to blog their ontologies, i.e., to write “onto-logs” like others make web-logs.

Ologs codify knowledge. They are like concept webs, except with more rules that allow them to simultaneously serve as database schemas. By introducing ologs, I hoped I could get real people to upload their ideas into what is now called the cloud, and make the necessary knowledge network. I tried to write my papers to engage an audience of intelligent lay-people rather than for an audience of mathematicians. It was a risk, but to me it was the only honest approach to the larger endeavor.

(For students who might want to try going out on a limb like this, you should know that I was offered zero jobs after my first postdoc at University of Oregon. The risk was indeed risky, and one has to be ok with that. I personally happened to be the beneficiary of good luck and was offered a grant, out of the clear blue sky, by a former PhD in algebraic geometry, who worked at the Office of Naval Research at the time. That, plus the helping hands of Haynes Miller and many other brilliant and wonderful people, can explain how I lived to tell the tale.)

So here’s how the simplicial complex of ologs would ideally help humanity steer. Suppose we say that in order for one person to learn from another, the two need to find a common language and align some ideas. This kind of (usually tacit) agreement on, or alignment of, an initial common-ground vocabulary and concept-set is important to get their communication onto a proper footing.

For two vertices in such a simplicial network, the richer their common-ground olog (i.e., the database corresponding to the edge between them) is, the more quickly and accurately the vertices can share new ideas. As ideas are shared over a simplex, all participating databases can be updated, hence making the communication between them richer. In around 2010, Mathieu Anel and I worked out a formal way this might occur; however, we have not yet written it up. The basic idea can be found here.

In this setup, the simplicial complex of human knowledge should grow organically. Scientists, business people, and other people might find benefit in ologging their ideas and conceptions, and using them to learn from their peers. I imagined a network organizing itself, where simplices of like-minded people could share information with neighboring groups across common faces.

I later wrote a book called Category Theory for the Sciences, available free online, to help scientists learn how category theory could apply to familiar situations like taxonomies, graphs, and symmetries. Category theory, simply explained, becomes a wonderful key to the whole world of pure mathematics. It’s the closest thing we have to a universal language of thought, and therefore an appropriate language for forming connections.

My working hypothesis for the knowledge network was this. The information held by people whose worldview is more true—more accurate—would have better predictive power, i.e., better results. This is by definition: I define one's knowledge to be accurate to the extent that, when he uses this knowledge to direct his actions, he has good luck handling his worldly affairs. As Louis Pasteur said, “Chance favors the prepared mind.” It follows that if someone has a track record of success, others will value finding broad connections into his olog. However, to link up with someone you must find a part of your olog that aligns with his—a functorial connection—and you can only receive meaningful information from him to the extent that you’ve found such common ground.

Thus, people who like to live in fiction worlds would find it difficult to connect, except to other like-minded “Obama’s a Christian”-type people. To the extent you are embedded in a fictional—less accurate, less predictive—part of the network, you will find it difficult to establish functorial connections to regions of more accurate knowledge, and therefore you can’t benefit from the predictive and conceptual value of this knowledge.

In other words, people would be naturally inclined to try to align their understanding with people that are better informed. I felt hope that this kind of idea could lead to a system in which honesty and accuracy were naturally rewarded. At the very least, those who used it could share information much more effectively than they do now. This was my plan; I just had to make it real.

I had a fun idea for publicizing ologs. The year was 2008, and I remember thinking it would be fantastic if I could olog the political platform and worldview of Barack Obama and of Sarah Palin. I wished I could sit down with them and other politicians and help them write ologs about what they believed and wanted for the country. I imagined that some politicians might have ologs that look like a bunch of disconnected text boxes—like a brain with neurons but no synapses—a collection of talking points but no real substantive ideas.

Anyway, there I was, trying to understand everything this way: all information was categories (or perhaps sketches) and presheaves. I would work with interested people from any academic discipline, such as materials science, to make ologs about whatever information they wanted to record category-theoretically. Ologs weren’t a theory of everything, but instead, as Jack Morava put it, a theory of anything.

One day I was working on a categorical sketch to model processes within processes, but somehow it really wasn’t working properly. The idea was simple: each step in a recipe is a mini-recipe of its own. Like chopping the carrots means getting out a knife and cutting board, putting a carrot on there, and bringing the knife down successively along it. You can keep zooming into any of these and see it as its own process. So there is some kind of nested, fractal-like behavior here. The olog I made could model the idea of steps in a recipe, but I found it difficult to encode the fact that each step was itself a recipe.

This nesting thing seemed like an idea that mathematics should treat beautifully, and ologs weren’t doing it justice. It was then that I finally admitted that there might be other fish in the mathematical sea.
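The self-similar structure described above—every step of a recipe being a recipe in its own right—can be sketched directly as a recursive data type. This is a hypothetical illustration (the names `Recipe` and `depth` are invented here), not an olog or the author's actual model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Recipe:
    """A step that may itself contain sub-steps: the nested, fractal-like behavior."""
    name: str
    steps: List["Recipe"] = field(default_factory=list)

def depth(r: Recipe) -> int:
    """How many levels of nesting this recipe contains."""
    return 1 + max((depth(s) for s in r.steps), default=0)

# "Chopping the carrots" is itself a mini-recipe of its own steps.
chop = Recipe("chop carrots", [
    Recipe("get knife and cutting board"),
    Recipe("cut carrot", [Recipe("bring knife down successively")]),
])
soup = Recipe("make soup", [chop, Recipe("simmer")])

print(depth(soup))  # 4
```

The type refers to itself, which is exactly the nesting that was hard to encode in a single olog: the olog could say "a recipe has steps," but not that each step was again a recipe.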

## March 29, 2015

### Christian P. Robert - xi'an's og

intuition beyond a Beta property

A self-study question on X validated exposed an interesting property of the Beta distribution:

If x ~ B(n,m) and y ~ B(n+½,m) are independent, then √(xy) ~ B(2n,2m)

While this can presumably be established by a mere change of variables, I could not carry the derivation to the end and used instead the moments E[(XY)^(s/2)], since they naturally lead to ratios of B(a,b) functions and to nice cancellations thanks to the ½ in some Gamma functions [and this was the solution proposed on X validated]. However, I wonder at a more fundamental derivation of the property that would stem from a statistical reasoning… Trying with the ratio of Gamma random variables did not work. And the connection with order statistics does not apply because of the ½. Any idea?
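For readers who want to convince themselves numerically before attempting a derivation, a minimal Monte Carlo check (a Python/numpy sketch; the parameter values n=3, m=2 and the sample size are arbitrary choices) compares the first two moments of √(xy) with those of a Beta(2n,2m):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3.0, 2.0, 200_000

# Independent draws, as the property requires
x = rng.beta(n, m, N)
y = rng.beta(n + 0.5, m, N)
z = np.sqrt(x * y)  # claimed to be Beta(2n, 2m)

# Theoretical mean and variance of Beta(a, b) with a = 2n, b = 2m
a, b = 2 * n, 2 * m
mean_th = a / (a + b)                          # 0.6 here
var_th = a * b / ((a + b) ** 2 * (a + b + 1))  # ~0.0218 here

print(z.mean(), mean_th)
print(z.var(), var_th)
```

The sample moments land on the Beta(2n,2m) values to within Monte Carlo error; a Kolmogorov–Smirnov test against the Beta(2n,2m) cdf would make the check sharper.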

Filed under: Books, Kids, R, Statistics, University life Tagged: beta distribution, cross validated, moment generating function, Stack Exchange

### ZapperZ - Physics and Physicists

Stephen Hawking and Brian Cox To Trademark Their Names
"Stephen Hawking" and "Brian Cox" will be trademark names very soon. So if you have plans to market t-shirts and other products with these people's names, watch out!

Maybe this will get rid of some of the tacky stuff that I've seen associated with them, especially Hawking. But then again, who knows, they may turn around and produce their own tacky merchandise.

Zz.

### ZapperZ - Physics and Physicists

A Tale Of Two Scientists
It is fascinating to read about the stuff behind the scenes involving the negotiations between the United States and Iran regarding Iran's nuclear program. And in the middle of all this are two scientists/engineers out of MIT with nuclear science backgrounds.

At the Massachusetts Institute of Technology in the mid-1970s, Ernest J. Moniz was an up-and-coming nuclear scientist in search of tenure, and Ali Akbar Salehi, a brilliant Iranian graduate student, was finishing a dissertation on fast-neutron reactors.

The two did not know each other, but they followed similar paths once they left the campus: Mr. Moniz went on to become one of the nation’s most respected nuclear physicists and is now President Obama’s energy secretary. Mr. Salehi, who was part of the last wave of Iranians to conduct nuclear studies at America’s elite universities, returned to an Iran in revolution and rose to oversee the country’s nuclear program.

You may read more about it in the article. And I definitely agree with this sentiment:

Mr. Moniz, 70, understands his role well: He is providing not only technical expertise but also political cover for Mr. Kerry. If a so-called framework agreement is reached in the next few days, it will be Mr. Moniz who will have to vouch to a suspicious Congress, to Israel and to Arab allies that Iran would be incapable of assembling the raw material for a single nuclear weapon in less than a year.

“It wouldn’t mean much coming from Kerry,” said a member of the administration deeply involved in the strategy who spoke on the condition of anonymity. “The theory is that Ernie’s judgment on that matter is unassailable.”

At the heart of this are scientific/technical issues. Once those are presented, it is up to the politicians to decide, because beyond that point it is no longer a science/technical decision, but a political one. To have a negotiator who is not only knowledgeable in that area, but who also happens to be a world-renowned expert, is extremely beneficial.

Zz.

### Peter Coles - In the Dark

Praise, by R.S. Thomas

Today is Palm Sunday, the start of what Christians call “Holy Week”, which culminates in Easter. It’s also the birthday of the great Welsh poet R.S. Thomas, who was born on this day in 1913. Thomas spent much of his life as an Anglican priest. I’m not a Christian but I am drawn to the religious verse of R.S. Thomas not only for its directness and lack of artifice but also the honesty with which he addresses the problems his faith sets him. There are many atheists who think religion is some kind of soft option for those who can’t cope with life in an unfriendly universe, but reading R.S. Thomas, whose faith was neither cosy nor comfortable, led me to realise that is very far from the case. I recommend him as an antidote to the simple-minded antagonism of people like Richard Dawkins. There are questions that science alone will never answer, so we should respect people who search for a truth we ourselves cannot understand.

And whether or not it is clear to you, no doubt the universe is unfolding as it should. Therefore be at peace with God, whatever you conceive Him to be.

I will be offline for the Easter holiday so I thought I’d post a poem that I find appropriate to the time of year. You can read it as Praise for God, or for Nature, or for both. I don’t think it matters.

I praise you because
you are artist and scientist
in one. When I am somewhat
fearful of your power,
your ability to work miracles
with a set-square, I hear
you murmuring to yourself
in a notation Beethoven
dreamed of but never achieved.
You run off your scales of
rain water and sea water, play
the chords of the morning
and evening light, sculpture
by leaf, when spring
comes, the stanzas of
an immense poem. You speak
all languages and none,
answering our most complex
prayers with the simplicity
of a flower, confronting
us, when we would domesticate you
to our uses, with the rioting
viruses under our lens.

### Geraint Lewis - Cosmic Horizons

Musings on academic careers - Part 1
As promised, I'm going to put down some thoughts on academic careers. In doing this, I should put my cards on the table and point out that, while I am a full-time professor of astrophysics at the University of Sydney, I didn't really plan my career or follow the musings given below. The musings come from taking a hard look at the current state of play in modern academia.

I am going to be as honest as possible, and surely some of my colleagues will disagree with my musings. Some people have a romantic view of many things, including science, and will trot out the line that science is somehow distinct from people. That might be the case, but the act of doing science is clearly done by people, and that means all of the issues that govern human interactions come into play. It is important to remember this.

Now, there may be some lessons below for how to become a permanent academic, but there is no magic formula. But realising some of these lessons on what is at play may help.

Some of you may have heard me harp on about some of these issues before, but hopefully there is some new stuff as well. OK. Let's begin.

Career Management
It must be remembered that careers rarely just happen. Careers must be managed. I know some people hate to realise this, as science is supposed to be above all this career stuff - surely "good people" will be identified and rewarded!

Many students and postdocs seem to bumble along and only think of "what's next?" when they are up against the wire. I have spoken with students about the process of applying for postdocs, the long lead time needed, the requirement of at least three referees, all aspects of job hunting, and then, just moments from the submission of their PhD, they suddenly start looking for jobs. I weep a little when they frantically ask me "Who should I have as my third referee?"

Even if you are a brand-new PhD student, you need to think about career management. I don't mean planning, such as saying I will have a corner office at Harvard in 5 years (although there is nothing wrong with having aspirational goals!), but management. So, what do I mean?

Well, if you are interested in following a career in academia, then learn about the various stages and options involved, and how you get from one to the other. This (and careers beyond academia) should be mandatory learning for new students, and you should be reminded at all stages of your career to keep thinking about it. What kind of things should you be doing at the various stages of your career? What experience would your next employer like you to have? Try to spot holes in your CV and fill them in; this is very important! If you know you have a weakness, don't ignore it, fix it.

Again, there is no magic formula to guarantee that you will be successful in moving from one stage to another, but you should be able to work out the kind of CV you need. If you are having difficulties in identifying these things, talk with people (get a mentor!).

And, for one final point, the person responsible for managing your career is you. Not your supervisor, not your parents, and not the non-existent gods of science. You are.

Being Strategic
This is part of your career management.

In the romantic vision of science, an academic is left to toddle along and be guided by their inquisitive nature to find out what is going on in the Universe. But academia does not work that way (no matter how much you want to rage against it). If you want an academic career, then it is essential to realise that you will be compared to your peers at some point. At some point, someone is going to have a stack of CVs in front of them, will go through them, will have to choose the subset who meet the requirements for the position, and will then rank that subset to find the best candidate. As part of your career management you need to understand what people are looking for! (I speak from experience of helping people prepare for jobs who know little about the actual job, the people offering it, what is needed, etc.)

I know people get very cross with this, but there are key indicators people look at, things like the number of papers, citation rates, grant income, student supervision, teaching experience. Again, at all points you need to ask "is there a hole in my CV?" and if there is, fill it! Do not ignore it.

But, you might be saying, how can I be strategic in all of this? I just get on with my work! You need to think about what you do. If you have a long-running project, are there smaller projects you can do while waiting, to spin out some short, punchy papers? Can I lead something that I will become world-known for? Is there an idea I can spin off to a student to make progress on? You should be thinking of "results", and of results becoming talks at conferences and papers in journals.

If you are embarking on a new project, a project that is going to require substantial investment of time, you should ensure something will come from it, even if it is a negative or null result. You should never spend a substantial period of time, such as six months, and not have anything to show for it!

Are there collaborations you could forge and contribute to? Many people have done very well by being part of large collaborations, resulting in many papers, although be aware that people seeing survey papers on a CV now ask "well, what did this person contribute to the project?".

The flip-side is also important. Beware of spending too much time on activities that do not add to your CV! I have seen some people, especially students, spending a lot of time on committees and jobs that really don't benefit them. Now, don't get me wrong. Committee work, supporting meetings, etc. is important, but think about where you are spending your time and ask yourself if your CV is suffering because of it.

How many hours should I work?
Your CV does not record the number of hours you work! It records your research output and successes. If you are publishing ten papers a year on four-hour days, then wonderful; but if you are two years into a postdoc, working 80 hours per week, and have not published anything, you might want to think about how you are using your time.

But I am a firm believer in working smarter, not harder, and in thinking and planning ideas and projects. Honestly, I have a couple of papers which (in a time before children) were born from ideas that crystallised over a weekend and were submitted soon after. I am not super-smart, but I do like to read widely, to go to as many talks as I can, to learn new things, and to apply ideas to new problems.

One thing I have seen over and over again is people at various stages of their careers becoming narrower and narrower in their focus, and it depresses me when I go to talks in my own department and see students not attending. This narrowness, IMHO, does not help in establishing an academic career. Breadth, of course, is no guarantee either, but when I look at CVs, I like to see it.

So, the number of hours is not really the important issue; your output is. Work hours do become important when you are a permanent academic, because of all the different things, especially admin and teaching, you have to do, but as an early career researcher, hours should not be the defining thing. Your output is.

I actually think this is a big one, and one which worries me, as I don't think people at many stages of their career actually think about it. Being a student is different to being a postdoctoral researcher, which is different to being an academic, and it seems that people embarking on PhDs, with many a romantic notion about winning a Nobel prize somewhere along the way, don't really know what an "academic" is and what they do, just that it is some sort of goal.

In fact, this is such a big one, I think this might be a good place to stop and think about later musings.

### Emily Lakdawalla - The Planetary Society Blog

Field Report from Mars: Sol 3971 - March 26, 2015
Opportunity reaches a marathon milestone—in more ways than one. Larry Crumpler reports on the current status of the seemingly unstoppable Mars rover.

## March 28, 2015

### Lubos Motl - string vacua and pheno

Dark matter: Science Friday with Weinberg, Hooper, Cooley

## March 19, 2015

### Clifford V. Johnson - Asymptotia

LAIH Luncheon with Jack Miles
(Click for larger view.) On Friday 6th March the Los Angeles Institute for the Humanities (LAIH) was delighted to have our luncheon talk given by LAIH Fellow Jack Miles. He told us some of the story behind (and the making of) the Norton Anthology of World Religions - he is the main editor of this massive work - and lots of the ins and outs of how you go about undertaking such an enterprise. It was fascinating to hear how the various religions were chosen, for example, and how he selected and recruited specialist editors for each of the religions. It was an excellent talk, made all the more enjoyable by having Jack's quiet and [...]

### Marco Frasca - The Gauge Connection

New Lucasian Professor

After a significant delay, Cambridge University has made known the name of Michael Green’s successor in the Lucasian chair. The 19th Lucasian Professor is Michael Cates, Professor at the University of Edinburgh and Fellow of the Royal Society. Professor Cates is known worldwide for his research in the field of soft condensed matter. It is a well-deserved recognition and one of the best choices ever for this prestigious chair. So, we present our best wishes to Professor Cates for an excellent job in this new role.

Filed under: Condensed matter physics, News, Physics Tagged: Cambridge University, Edinburgh University, Lucasian Professor, Soft condensed matter

## March 18, 2015

### ZapperZ - Physics and Physicists

CERN's ALPHA Experiment
See, I like this. I like to highlight things that most of the general public simply don't know much about, especially when another major facility throws a huge shadow over it.

This article mentions two important things about CERN: It is more than just the LHC, and it highlights another very important experiment, the ALPHA experiment.

ALPHA’s main aim is to study the internal structure of the antihydrogen atom (an antielectron orbiting an antiproton), and see if there exist any discernible differences within it that set it apart from regular hydrogen. In 2010 ALPHA was the first experiment to trap antihydrogen atoms, holding 38 of them for about one-fifth of a second; the team then perfected its apparatus and technique to trap a total of 309 antihydrogen atoms for 1000 s in 2011. Hangst hopes that with the new updated ALPHA 2 device (which includes lasers for spectroscopy), the researchers will soon see precisely what lies within an antihydrogen atom by studying its spectrum. They had a very short test run of a few weeks with ALPHA 2 late last year, and will begin their next set of experiments in earnest in the coming months.

They will be producing more amazing results in the future, because this is all uncharted territory.

Zz.