Particle Physics Planet


June 30, 2015

astrobites - astro-ph reader's digest

Stellar Accountants and Alcohol

Title: Tracing Embedded Stellar Populations in Clusters and Galaxies Using Molecular Emission: Methanol as a Signature of Low-Mass End of the IMF
Authors: Lars E. Kristensen & Edward A. Bergin
First author’s institution: Harvard-Smithsonian Center for Astrophysics
Status: Accepted for publication in Astrophysical Journal Letters.

Most stars form in clusters that contain a number of stars of different masses. This mass distribution of stars is called the Initial Mass Function, or IMF. The IMF is used in many areas of astronomy. It can tell us how gas collapses into stars, how a distant galaxy evolves, and where black holes might be lurking.

The easiest way to measure the IMF in a young star cluster is simply to count the number of forming stars of different masses. The problem with this method is that a few high-mass stars will completely outshine thousands of low-mass stars in the same cluster. So for distant clusters in our own galaxy, and for other galaxies, the low-mass stars which make up most of the total mass are basically invisible.

Today’s paper explores a new method for finding newborn low-mass stars. The authors show that observations of methyl alcohol, or methanol, in distant forming star clusters can be used to find stars too faint to see.

Methanol and Molecular Outflows

Methanol, along with many other molecules, is formed on grains of dust in the cold, dense gas where stars form. Methanol is freed from dust grains when the grains collide with fast-moving gas particles. As forming stars pull in surrounding gas, they form an accretion disk which powers high-energy jets, blasting gas away from the star. These molecular outflows free lots of methanol from their host dust grains. The methanol can then be observed by millimeter telescopes such as ALMA. So observing methanol emission is a good way to trace molecular outflows and the forming stars which power them.

Why is it easier to find the molecular outflows from young low-mass stars than the stars themselves? The strength of the molecular outflow depends directly on the mass of the star, so a star with the mass of the Sun will have an outflow that is ten times weaker than the outflow from a 10 solar mass star. Compare this to the luminosity of the stars involved: the 10 solar mass star will outshine thousands of Suns!
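To make the contrast concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from the paper). The linear outflow scaling is the one quoted above; the steep mass-luminosity relation, taken here as L ∝ M^3.5, is an assumed textbook approximation for roughly Sun-like and more massive stars.

```python
# Rough comparison of how outflow strength and luminosity scale with stellar mass.
# Assumption (not from the paper): luminosity ~ M**3.5, a textbook approximation;
# outflow strength ~ M, the linear scaling described above.

def outflow_strength(mass):
    """Outflow strength relative to a 1 solar-mass star (linear scaling)."""
    return mass

def luminosity(mass, exponent=3.5):
    """Luminosity relative to the Sun, using an assumed power-law exponent."""
    return mass ** exponent

for m in (1.0, 10.0):
    print(f"M = {m:4.1f} Msun: outflow ~ {outflow_strength(m):5.1f}, "
          f"luminosity ~ {luminosity(m):6.0f} Lsun")

# A 10 Msun star is only ~10x stronger in outflow, but ~3000x brighter,
# which is why low-mass stars vanish in the starlight but not in outflow tracers.
```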

A Simple Model and an Observation


Figure 1: Modeled star cluster with 3000 stars. The inset shows the number of stars as a function of their mass. The mass of the stars is indicated by the size of their symbols.


Figure 2: (Top) Methanol emission from a modeled star cluster, as it would be seen by ALMA. (Middle) ALMA image of methanol emission from a real cluster. (Bottom) Intensity of methanol emission as a function of distance from the center of each cluster. The solid blue line shows the model multiplied by a factor of 2 (see text). The dashed blue line includes fainter emission and is not multiplied by a factor.

To test this idea, the authors construct a very simple model of a star cluster. Shown in Figure 1, this model contains 3000 stars with a typical mass distribution. As in real clusters, the modeled cluster has only a few massive stars, but these would dominate the starlight coming from the cluster. The modeled stars are paired with molecular outflows guided by observations of methanol in nearby star clusters. The top of Figure 2 shows what this modeled methanol emission would look like if observed by ALMA. The authors then compare this modeled methanol image to a real ALMA image of a distant cluster in the middle of Figure 2.

The comparison in Figure 2 shows that the simple model cluster with methanol outflows qualitatively matches a true observation of a forming cluster. This is important because most of the modeled methanol emission comes from low-mass stars, which are too faint to detect in the real cluster. By imaging the molecular outflows instead of the stars themselves, astronomers are able to cut down on the glare from the few most massive stars in a cluster and trace the true mass distribution. However, the modeled and true images do not match in detail, because the model is only a rough guess at what the true cluster might look like.

Where the Model Misses

In the bottom of Figure 2, the modeled and observed clusters are compared in detail. The extra methanol in the center of the real cluster comes from a massive molecular outflow, while the extra emission in the outskirts of the real cluster may be imaging noise. Between these two regions, the observed emission is twice as bright as in the model cluster. This could be due to a difference in age, IMF, size, or shape between the simple model and the observed cluster, and these discrepancies will be accounted for in future, more sophisticated modeling. Nevertheless, the qualitative match between the model and observations is impressive given the simplicity of the model.

Methanol: An Alco-holy Grail?

As ALMA extends these observations to other galaxies, the emission of methanol could be combined with more sophisticated cluster models to measure the low-mass end of the initial mass function, a holy grail of star formation and galaxy evolution studies. Looking for these humble puffs of alcohol could give us insight into how the majority of stars in the universe are born and how their galaxies live and die.

by Jesse Feddersen at June 30, 2015 12:17 AM

June 29, 2015

Jester - Resonaances

Sit down and relaxion
New ideas are rare in particle physics these days. Solutions to the naturalness problem of the Higgs mass are true collector's items. For these reasons, the new mechanism addressing the naturalness problem via cosmological relaxation has stirred a lot of interest in the community. There's already an article explaining the idea in popular terms. Below, I will give you a more technical introduction.
In the Standard Model, the W and Z bosons and fermions get their masses via the Brout-Englert-Higgs mechanism. To this end, the Lagrangian contains a scalar field H with a negative mass squared, V = - m^2 |H|^2. We know that the value of the parameter m is around 90 GeV - the Higgs boson mass divided by the square root of 2. The naturalness problem is that, given what we know about quantum field theory, one expects m~M, where M is the cut-off scale at which the Standard Model is replaced by another theory. The dominant paradigm has been that, around the energy scale of 100 GeV, the Standard Model must be replaced by a new theory in which the parameter m is protected from quantum corrections. We know several mechanisms that could potentially protect the Higgs mass: supersymmetry, Higgs compositeness, the Goldstone mechanism, extra-dimensional gauge symmetry, and conformal symmetry. However, according to experimentalists, none seems to be realized at the weak scale; therefore, we need to accept that nature is fine-tuned (e.g. susy is just around the corner), or seek solace in religion (e.g. anthropics). Or find a new solution to the naturalness problem: one that is not fine-tuned and is consistent with experimental data.
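To see where the expectation m~M comes from, recall the standard one-loop estimate (a textbook statement, not something specific to this post): the top quark loop alone shifts the Higgs mass parameter by roughly δm^2 ~ -3 y_t^2 M^2/(8 π^2), so keeping the physical m near 90 GeV with M far above the TeV scale requires a delicate cancellation between this correction and the bare parameter, unless some symmetry protects m.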

Relaxation is a genuinely new solution, even if somewhat contrived. It is based on the following ingredients:

  1.  The Higgs mass term in the potential is V = M^2 |H|^2. That is to say,  the magnitude of the mass term is close to the cut-off of the theory, as suggested by the naturalness arguments. 
  2. The Higgs field is coupled to a new scalar field - the relaxion - whose vacuum expectation value is time-dependent in the early universe, effectively changing the Higgs mass squared during its evolution.
  3. When the mass squared turns negative and electroweak symmetry is broken, a back-reaction mechanism should prevent further time evolution of the relaxion, so that the Higgs mass term is frozen at a seemingly unnatural value.

These 3 ingredients can be realized in a toy model where the Standard Model is coupled to the QCD axion. The crucial interactions are  
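Schematically (reconstructed here from the description that follows rather than quoted verbatim), they are of the form
V ⊃ (M^2 - g Φ) |H|^2 + g M^2 Φ + ... + Λ^4 cos(Φ/f),
where Φ is the axion playing the relaxion role, g is a small dimensionful coupling, and the height Λ of the periodic potential depends on the Higgs expectation value through the light quark masses.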
Then the story goes as follows. The axion Φ starts at a small value where the M^2 term dominates and there's no electroweak symmetry breaking. During inflation its value slowly increases. Once g*Φ > M^2, electroweak symmetry breaking is triggered and the Higgs field acquires a vacuum expectation value. The crucial point is that the height of the axion potential Λ depends on the light quark masses, which in turn depend on the Higgs expectation value v. As the relaxion evolves, v increases, and Λ also increases proportionally, which provides the desired back-reaction. At some point, the slope of the axion potential is neutralized by the rising Λ, and the Higgs expectation value freezes in. The question is now quantitative: is it possible to arrange the freeze-in to happen at v ≪ M? It turns out the answer is yes, at the cost of choosing strange (though not technically unnatural) theory parameters. In particular, the dimensionful coupling g between the relaxion and the Higgs has to be less than 10^-20 GeV (for a cut-off scale larger than 10 TeV), inflation has to last for at least 10^40 e-folds, and the Hubble scale during inflation has to be smaller than the QCD scale.

The toy model above ultimately fails: since the QCD axion is frozen at a non-zero value, one effectively generates an order-one CP-violating θ-term in the Standard Model Lagrangian, in conflict with the experimental bound θ < 10^-10. Nevertheless, the same mechanism can be implemented in a realistic model. One possibility is to add a new QCD-like interaction with its own axion playing the relaxion role. In addition, one needs new "quarks" charged under the new strong interaction. Their masses have to be sensitive to the electroweak scale v, thus providing a back-reaction on the axion potential that terminates its evolution. In such a model, the quantitative details would be a bit different than in the QCD axion toy model. However, the "strangeness" of the parameters persists in any model constructed so far. In particular, the very low scale of inflation required by the relaxation mechanism is worrisome. Could it be that the naturalness problem is just swept into the realm of poorly understood physics of inflation? The ultimate verdict thus depends on whether a complete and healthy model incorporating both relaxation and inflation can be constructed. TBC, for sure.

Thanks to Brian for a great tutorial. 

by Jester (noreply@blogger.com) at June 29, 2015 10:31 PM

Christian P. Robert - xi'an's og

life and death along the RER B, minus approximations

While cooking for a late Sunday lunch today, I was listening as usual to the French public radio (France Inter) and at some point heard the short [10mn] Périphéries that gives every weekend an insight on the suburbs [on the “other side” of the Parisian Périphérique boulevard]. The idea proposed by a geographer from Montpellier, Emmanuel Vigneron, was to point out the health inequalities between the wealthy 5th arrondissement of Paris and the not-so-far-away suburbs, by following the RER B train line from Luxembourg to La Plaine-Stade de France…

The disparities between the heart of Paris and some suburbs are numerous and massive, and they actually grow the further one gets from the lifeline represented by the RER A and RER B train lines, so far from me the idea of negating this opposition; but the presentation made during those 10 minutes of Périphéries was quite approximative in statistical terms. For instance, the mortality rate in La Plaine is 30% higher than the mortality rate in Luxembourg, and this was translated into the claim that the chances for a given individual from La Plaine to die in the coming year are 30% higher than if he [or she] lived in Luxembourg. Then, a few minutes later, the chances for a given individual from Luxembourg to die were said to be 30% lower than if he [or she] lived in La Plaine… Reading from the above map, it appears that the reference is the mortality rate for the Greater Paris. (Those are 2010 figures.) This opposition, which Vigneron attributes to a different access to health facilities, like the number of medical general practitioners per inhabitant, does not account for the huge socio-demographic differences between both places, for instance the much younger and maybe larger population in suburbs like La Plaine. Nor for other confounding factors: see, e.g., the equally large difference between the neighbouring stations of Luxembourg and Saint-Michel, where there is no socio-demographic difference and the accessibility of health services is about the same. Or the similar opposition between the southern suburban stops of Bagneux and [my local] Bourg-la-Reine, with the same access to health services… Or yet again the massive decrease in the Yvette valley near Orsay. The analysis is thus statistically poor and somewhat ideologically biased, in that I am unsure the data discussed during this radio show tells us much more than the sad fact that suburbs with less favoured populations show a higher mortality rate.
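As a small aside on the arithmetic (my own illustration, not something from the radio show or the map): relative rates are not symmetric, so a mortality rate that is 30% higher in one direction is only about 23% lower in the other direction, one more sign of how loosely the figures were handled.

```python
# Minimal sketch of the asymmetry in relative rates (illustrative numbers only).
rate_luxembourg = 1.0                     # reference mortality rate, arbitrary units
rate_la_plaine = 1.3 * rate_luxembourg    # "30% higher", as quoted on the radio

# Going the other way, Luxembourg's rate relative to La Plaine's:
relative = rate_luxembourg / rate_la_plaine
print(f"Luxembourg vs La Plaine: {relative:.3f} "
      f"(about {100 * (1 - relative):.0f}% lower, not 30%)")
```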


Filed under: Statistics, Travel Tagged: Bagneux, boulevard périphérique, Bourg-la-Reine, France, France Inter, inequalities, Luxembourg, national public radio, Orsay, Paris, Paris suburbs, Périphéries, RER B, Saint-Michel, Stade de France, Yvette

by xi'an at June 29, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

We're moving!
It's really happening. We're leaving our home of five years on South Grand and heading to our new home just two miles east on South Los Robles.

June 29, 2015 05:57 PM

The n-Category Cafe

What is a Reedy Category?

I’ve just posted the following preprint, which has apparently quite little to do with homotopy type theory.

The notion of Reedy category is common and useful in homotopy theory; but from a category-theoretic point of view it is odd-looking. This paper suggests a category-theoretic understanding of Reedy categories, which I find more satisfying than any other I’ve seen.

So what is a Reedy category anyway? The idea of this paper is to start instead with the question “what is a Reedy model structure?” For a model category $M$ and a Reedy category $C$, the diagram category $M^C$ has a model structure in which a map $A\to B$ is

  • …a weak equivalence iff $A_x\to B_x$ is a weak equivalence in $M$ for all $x\in C$.
  • …a cofibration iff the induced map $A_x \sqcup_{L_x A} L_x B \to B_x$ is a cofibration in $M$ for all $x\in C$.
  • …a fibration iff the induced map $A_x \to B_x \times_{M_x B} M_x A$ is a fibration in $M$ for all $x\in C$.

Here $L_x$ and $M_x$ are the latching object and matching object functors, which are defined in terms of the Reedy structure of $C$. However, at the moment all we care about is that if $x$ has degree $n$ (part of the structure of a Reedy category is an ordinal-valued degree function on its objects), then $L_x$ and $M_x$ are functors $M^{C_n} \to M$, where $C_n$ is the full subcategory of $C$ on the objects of degree less than $n$. In the prototypical example of $\Delta^{op}$, where $M^{C}$ is the category of simplicial objects in $M$, $L_n A$ is the “object of degenerate $n$-simplices” whereas $M_n A$ is the “object of simplicial $(n-1)$-spheres (potential boundaries for $n$-simplices)”.
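For instance, in the lowest nontrivial degree (a standard example, spelled out here for concreteness rather than taken from the paper): for a simplicial object $A$,

$$L_1 A = A_0, \qquad M_1 A = A_0 \times A_0,$$

since the only degenerate $1$-simplices are those in the image of the degeneracy $s_0$, and a “simplicial $0$-sphere” is just an ordered pair of vertices serving as the potential boundary of a $1$-simplex.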

The fundamental observation which makes the Reedy model structure tick is that if we have a diagram $A\in M^{C_n}$, then to extend it to a diagram defined at $x$ as well, it is necessary and sufficient to give an object $A_x$ and a factorization $L_x A \to A_x \to M_x A$ of the canonical map $L_x A \to M_x A$ (and similarly for morphisms of diagrams). For $\Delta^{op}$, this means that if we have a partially defined simplicial object with objects of $k$-simplices for all $k\lt n$, then to extend it with $n$-simplices we have to give an object $A_n$, a map $L_n A \to A_n$ including the degeneracies, and a map $A_n \to M_n A$ assigning the boundary of every simplex, such that the composite $L_n A \to A_n \to M_n A$ assigns the correct boundary to degenerate simplices.

Categorically speaking, this observation can be reformulated as follows. Given a natural transformation $\alpha : F\to G$ between parallel functors $F,G:M\to N$, let us define the bigluing category $Gl(\alpha)$ to be the category of quadruples $(M,N,\phi,\gamma)$ such that $M\in M$, $N\in N$, and $\phi:F M \to N$ and $\gamma : N \to G M$ are a factorization of $\alpha_M$ through $N$. (I call this “bigluing” because if $F$ is constant at the initial object, then it reduces to the comma category $(Id/G)$, which is sometimes called the gluing construction.) The above observation is then that $M^{C_x}\simeq Gl(\alpha)$, where $\alpha: L_x \to M_x$ is the canonical map between functors $M^{C_n} \to M$ and $C_x$ is the full subcategory of $C$ on $C_n \cup \{x\}$. Moreover, it is an easy exercise to reformulate the usual construction of the Reedy model structure as a theorem that if $M$ and $N$ are model categories and $F$ and $G$ are left and right Quillen respectively, then $Gl(\alpha)$ inherits a model structure.

Therefore, our answer to the question “what is a Reedy model structure?” is that it is one obtained by repeatedly (perhaps transfinitely) bigluing along a certain kind of transformation between functors $M^C \to M$ (where $C$ is a category playing the role of $C_n$ previously). This motivates us to ask, given $C$, how can we find functors $F,G : M^{C}\to M$ and a map $\alpha : F \to G$ such that $Gl(\alpha)$ is of the form $M^{C'}$ for some new category $C'$?

Of course, we expect $C'$ to be obtained from $C$ by adding one new object “$x$”. Thus, it stands to reason that $F$, $G$, and $\alpha$ will have to specify, among other things, the morphisms from $x$ to objects in $C$, and the morphisms to $x$ from objects of $C$. These two collections of morphisms form diagrams $W:C\to \mathrm{Set}$ and $U:C^{op} \to \mathrm{Set}$, respectively; and given such $U$ and $W$ we do have canonical functors $F$ and $G$, namely the $U$-weighted colimit and the $W$-weighted limit. Moreover, a natural transformation from the $U$-weighted colimit to the $W$-weighted limit can naturally be specified by giving a map $W\times U \to C(-,-)$ in $\mathrm{Set}^{C^{op}\times C}$. In $C'$, this map will supply the composition of morphisms through $x$. (A triple consisting of $U$, $W$, and a map $W\times U \to C(-,-)$ is also known as an object of the Isbell envelope of $C$.)

It remains only to specify the hom-set $C'(x,x)$ (and the relevant composition maps), and for this there is a “universal choice”: we take $C'(x,x) = (W \otimes_C U) \sqcup \{\mathrm{id}_x\}$. That is, we throw in composites of morphisms $x\to y \to x$, freely subject to the associative law, and also an identity morphism. This $C'$ has a universal property (it is a “collage” in the bicategory of profunctors) which ensures that the resulting biglued category is indeed equivalent to $M^{C'}$.
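Here $W\otimes_C U$ denotes the usual tensor product of functors, i.e. a coend; explicitly (the standard formula, recalled here for convenience):

$$W\otimes_C U \;=\; \int^{c\in C} W(c)\times U(c) \;=\; \Big(\coprod_{c\in C} W(c)\times U(c)\Big)\Big/\sim,$$

where $\sim$ identifies $(W(f)(w),u)$ with $(w,U(f)(u))$ for every $f:c\to c'$ in $C$, $w\in W(c)$, and $u\in U(c')$.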

A category with degrees assigned to its objects can be obtained by iterating this construction if and only if any nonidentity morphism between objects of the same degree factors uniquely-up-to-zigzags through an object of strictly lesser degree (i.e. the category of such factorizations is connected). What remains is to ensure that the resulting latching and matching objects are left and right Quillen. It turns out that this is equivalent to requiring that morphisms between objects of different degrees also have connected or empty categories of factorizations through objects of strictly lesser degree.

I call a category satisfying these conditions almost-Reedy. This doesn’t look much like the usual definition of Reedy category, but it turns out to be very close to it. If $C$ is almost-Reedy, let $C_+$ (resp. $C_-$) be the class of morphisms $f:x\to y$ such that $\deg(x)\le \deg(y)$ (resp. $\deg(y)\le \deg(x)$) and that do not factor through any object of strictly lesser degree than $x$ and $y$. Then we can show that just as in a Reedy category, every morphism factors uniquely into a $C_-$-morphism followed by a $C_+$-morphism.

The only thing missing from the usual definition of a Reedy category, therefore, is that $C_-$ and $C_+$ be subcategories, i.e. closed under composition. And indeed, this can fail to be true; but it is all that can go wrong: $C$ is a Reedy category if and only if it is an almost-Reedy category such that $C_-$ and $C_+$ are closed under composition. (In particular, this means that $C_-$ and $C_+$ don’t have to be given as data in the definition of a Reedy category; they are recoverable from the degree function. This was also noticed by Riehl and Verity.)

In other words, the notion of Reedy category (very slightly generalized) is essentially inevitable. Moreover, as often happens, once we understand a definition more conceptually, it is easier to generalize further. The same analysis can be repeated in other contexts, yielding the existing notions of generalized Reedy category and enriched Reedy category, as well as new generalizations such as a combined notion of “enriched generalized Reedy category”.

(I should note that some of the ideas in this paper were noticed independently, and somewhat earlier, by Richard Garner. He also pointed out that the bigluing model structure is a special case of the “Grothendieck construction” for model categories.)

This paper is, I think, slightly unusual, for a paper in category theory, in that one of its main results (unique $C_+$-$C_-$-factorization in an almost-Reedy category) depends on a sequence of technical lemmas, and as far as I know there is no particular reason to expect it to be true. This made me worry that I’d made a mistake somewhere in one of the technical lemmas that might bring the whole theorem crashing down. After I finished writing the paper, I thought this made it a good candidate for an experiment in computer formalization of some non-HoTT mathematics.

Verifying all the results of the paper would have required a substantial library of basic category theory, but fortunately the proof in question (including the technical lemmas) is largely elementary, requiring little more than the definition of a category. However, formalizing it nevertheless turned out to be much more time-consuming than I had hoped, and as a result I’m posting this paper quite some months later than I might otherwise have. But the result I was worried about turned out to be correct (here is the Coq code, which unlike the HoTT Coq library requires only a standard Coq v8.4 install), and now I’m much more confident in it. So was it worth it? Would I choose to do it again if I knew how much work it would turn out to be? I’m not sure.

Having this formalization does provide an opportunity for another interesting experiment. As I said, the theorem turned out to be correct; but the process of formalization did uncover a few minor errors, which I corrected before posting the paper. I wonder, would those errors have been caught by a human referee? And you can help answer that question! I’ve posted a version without these corrections, so you can read it yourself and look for the mistakes. The place to look is Theorem 7.16, its generalization Theorem 8.26, and the sequences of lemmas leading up to them (starting with Lemmas 7.12 and 8.15). The corrected version that I linked to up top mentions all the errors at the end, so you can see how many of them you caught — then post your results in the comments! You do, of course, have the advantage over an ordinary referee that I’m telling you there is at least one error to find.

Of course, you can also try to think of an easier proof, or a conceptual reason why this theorem ought to be true. If you find one (or both), I will be both happy (for obvious reasons) and sad (because of all the time I wasted…).

Let me end by mentioning one other thing I particularly enjoyed about this paper: it uses two bits of very pure category theory in its attempt to explain an apparently ad hoc definition from homotopy theory.

The first of these bits is “tight lax colimits of diagrams of profunctors”. It so happens that an object $(U,W,\alpha)$ of the Isbell envelope can also be regarded as a special sort of lax diagram in $Prof$, and the category $C'$ constructed from it is its lax colimit. Moreover, the universal property of this lax colimit — or more precisely, its stronger universal property as a “tight colimit” in the equipment $Prof$ — is precisely what we need in order to conclude that $M^{C'}$ is the desired bigluing category.

The second of these bits is an absolute coequalizer that is not split. The characterization of non-split absolute coequalizers seemed like a fairly esoteric and very pure bit of category theory when I first learned it. I don’t, of course, mean this in any derogatory way; I just didn’t expect to ever need to use it in an application to, say, homotopy theory. But it turned out to be exactly what I needed at one point in this paper, to “enrich” an argument involving a two-step zigzag (whose unenriched version I learned from Riehl-Verity).

by shulman (viritrilbia@gmail.com) at June 29, 2015 05:48 PM

Jacques Distler - Musings

Asymptotic Safety and the Gribov Ambiguity

Recently, an old post of mine about the Asymptotic Safety program for quantizing gravity received a flurry of new comments. Inadvertently, one of the pseudonymous commenters pointed out yet another problem with the program, which deserves a post all its own.

Before launching in, I should say that

  1. Everything I am about to say was known to Iz Singer in 1978. Though, as with the corresponding result for nonabelian gauge theory, the import seems to be largely unappreciated by physicists working on the subject.
  2. I would like to thank Valentin Zakharevich, a very bright young grad student in our Math Department for a discussion on this subject, which clarified things greatly for me.

Yang-Mills Theory

Let’s start by reviewing Singer’s explication of the Gribov ambiguity.

Say we want to do the path integral for Yang-Mills Theory, with compact semi-simple gauge group $G$. For definiteness, we’ll talk about the Euclidean path integral, and take $M= S^4$. Fix a principal $G$-bundle, $P\to M$. We would like to integrate over all connections, $A$, on $P$, modulo gauge transformations, with a weight given by $e^{-S_{\text{YM}}(A)}$. Let $\mathcal{A}$ be the space of all connections on $P$, $\mathcal{G}$ the (infinite dimensional) group of gauge transformations (automorphisms of $P$ which project to the identity on $M$), and $\mathcal{B}=\mathcal{A}/\mathcal{G}$ the gauge equivalence classes of connections.

“Really,” what we would like to do is integrate over $\mathcal{B}$. In practice, what we actually do is fix a gauge and integrate over actual connections (rather than equivalence classes thereof). We could, for instance, choose background field gauge. Pick a fiducial connection, $\overline{A}$, on $P$, and parametrize any other connection $A= \overline{A}+Q$ with $Q$ a $\mathfrak{g}$-valued 1-form on $M$. Background field gauge is

$$D_{\overline{A}}{\ast} Q = 0 \qquad (1)$$

which picks out a linear subspace $\mathcal{Q}\subset\mathcal{A}$. The hope is that this subspace is transverse to the orbits of $\mathcal{G}$, and intersects each orbit precisely once. If so, then we can do the path integral by integrating [1] over $\mathcal{Q}$. That is, $\mathcal{Q}$ is the image of a global section of the principal $\mathcal{G}$-bundle, $\mathcal{A}\to \mathcal{B}$, and integrating over $\mathcal{B}$ is equivalent to integrating over its image, $\mathcal{Q}$.
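For orientation, here is the standard Faddeev-Popov manipulation alluded to in footnote [1], sketched schematically (a textbook identity, not a quote from this post): one inserts

$$1 = \Delta_{\mathrm{FP}}(A)\int_{\mathcal{G}} \mathcal{D}g\;\delta\big(D_{\overline{A}}{\ast}Q^{g}\big), \qquad A^{g}=\overline{A}+Q^{g},$$

into the path integral, after which the (infinite) volume of $\mathcal{G}$ factors out and one is left with an integral over $\mathcal{Q}$ weighted by the determinant $\Delta_{\mathrm{FP}}$, provided the gauge slice really does intersect each orbit exactly once, which is precisely the point at issue below.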

What Gribov found (in a Coulomb-type gauge) is that $\mathcal{Q}$ intersects a given gauge orbit more than once. Singer explained that this is not some accident of Coulomb gauge. The bundle $\mathcal{A}\to \mathcal{B}$ is nontrivial and no global gauge choice (section) exists.

A small technical point: $\mathcal{G}$ doesn’t act freely on $\mathcal{A}$. Except for the case [2] $G=SU(2)$, there are reducible connections, which are fixed by a subgroup of $\mathcal{G}$. Because of the presence of reducible connections, we should interpret $\mathcal{B}$ as a stack. However, to prove the nontriviality, we don’t need to venture into the stacky world; it suffices to consider the irreducible connections, $\mathcal{A}_0\subset \mathcal{A}$, on which $\mathcal{G}$ acts freely. We then have $\mathcal{A}_0\to \mathcal{B}_0$, on whose fibers $\mathcal{G}$ acts freely. If we were able to find a global section of $\mathcal{A}_0\to \mathcal{B}_0$, then we would have established $\mathcal{A}_0\cong \mathcal{B}_0\times \mathcal{G}$. But Singer proves that

  1. $\pi_k(\mathcal{A}_0)=0,\,\forall k\gt 0$. But
  2. $\pi_k(\mathcal{G})\neq 0$ for some $k\gt 0$.

Hence $\mathcal{A}_0\ncong \mathcal{B}_0\times \mathcal{G}$, and no global gauge choice is possible.
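To spell out the step behind the “Hence” (a standard argument, implicit in the post): a trivialization $\mathcal{A}_0\cong \mathcal{B}_0\times \mathcal{G}$ would give

$$\pi_k(\mathcal{A}_0)\;\cong\;\pi_k(\mathcal{B}_0)\times\pi_k(\mathcal{G}) \quad\text{for all } k,$$

so $\pi_k(\mathcal{A}_0)=0$ for all $k\gt 0$ would force $\pi_k(\mathcal{G})=0$ for all $k\gt 0$, contradicting the second item.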

What does this mean for Yang-Mills Theory?

  • If we’re working on the lattice, then $\mathcal{G}= G^N$, where $N$ is the number of lattice sites. We can choose not to fix a gauge and instead divide our answers by $\mathrm{Vol}(G)^N$, which is finite. That is what is conventionally done.
  • In perturbation theory, of course, you never see any of this, because you are just working locally on $\mathcal{B}$.
  • If we’re working in the continuum, and we’re trying to do something non-perturbative, then we just have to work harder. Locally on $\mathcal{B}$, we can always choose a gauge (any principal $\mathcal{G}$-bundle is locally-trivial). On different patches of $\mathcal{B}$, we’ll have to choose different gauges, do the path integral on each patch, and then piece together our answers on patch overlaps using partitions of unity. This sounds like a pain, but it’s really no different from what anyone has to do when doing integration on manifolds.

Gravity

The Asymptotic Safety people want to do the path-integral over metrics and search for a UV fixed point. As above, they work in Euclidean signature, with $M=S^4$. Let $\mathcal{Met}$ be the space of all metrics on $M$, $\mathcal{Diff}$ the group of diffeomorphisms, and $\mathcal{B}= \mathcal{Met}/\mathcal{Diff}$ the space of metrics on $M$ modulo diffeomorphisms.

Pick a (fixed, but arbitrary) fiducial metric, $\overline{g}$, on $S^4$. Any metric, $g$, can be written as $g_{\mu\nu} = \overline{g}_{\mu\nu}+ h_{\mu\nu}$. They use background field gauge,

$$\overline{\nabla}^\mu h_{\mu\nu}-\tfrac{1}{2}\overline{\nabla}_\nu\, h^{\mu}{}_{\mu} = 0 \qquad (2)$$

where $\overline{\nabla}$ is the Levi-Civita connection for $\overline{g}$, and indices are raised and lowered using $\overline{g}$. As before, (2) defines a subspace $\mathcal{Q}\subset \mathcal{Met}$. If it happens to be true that $\mathcal{Q}$ is everywhere transverse to the orbits of $\mathcal{Diff}$ and meets every $\mathcal{Diff}$ orbit precisely once, then we can imagine doing the path integral over $\mathcal{Q}$ instead of over $\mathcal{B}$.

In addition to the other problems with the asymptotic safety program (the most grievous of which is that the infrared regulator used to define $\Gamma_k(\overline{g})$ is not BRST-invariant, which means that their prescription doesn’t even give the right path-integral measure locally on $\mathcal{Q}$), the program is saddled with the same Gribov problem that we just discussed for gauge theory, namely that there is no global section of $\mathcal{Met}\to\mathcal{B}$, and hence no global choice of gauge, along the lines of (2).

As in the gauge theory case, let $\mathcal{Met}_0$ be the metrics with no isometries [3]. $\mathcal{Diff}$ acts freely on the fibers of $\mathcal{Met}_0\to \mathcal{B}_0$. Back in his 1978 paper, Singer already noted that

  1. $\pi_k(\mathcal{Met}_0)=0,\,\forall k\gt 0$, but
  2. $\mathcal{Diff}$ has quite complicated homotopy-type.

Of course, none of this matters perturbatively. When $h$ is small, i.e. for $g$ close to $\overline{g}$, (2) is a perfectly good gauge choice. But the claim of the Asymptotic Safety people is that they are doing a non-perturbative computation of the $\beta$-functional, and that $h$ is not assumed to be small. Just as in gauge theory, there is no global gauge choice (whether (2) or otherwise). And that should matter to their analysis.


Note: Since someone will surely ask, let me explain the situation in the Polyakov string. There, the gauge group isn’t $\mathcal{Diff}$, but rather the larger group, $\mathcal{G}= \mathcal{Diff}\ltimes \text{Weyl}$. And we only do a partial gauge-fixing: we don’t demand a metric, but rather only a Weyl equivalence-class of metrics. That is, we demand a section of $\mathcal{Met}/\text{Weyl} \to \mathcal{Met}/\mathcal{G}$. And that can be done: in $d=2$, every metric is diffeomorphic to a Weyl-rescaling of a constant-curvature metric.


[1] To get the right measure on $\mathcal{Q}$, we need to use the Faddeev-Popov trick. But, as long as $\mathcal{Q}$ is transverse to the gauge orbits, that’s all fine, and the prescription can be found in any textbook.

[2] For more general choice of $M$, we would also have to require $H^2(M,\mathbb{Z})=0$.

[3] When $\dim(M)\gt 1$, $\mathcal{Met}_0(M)$ is dense in $\mathcal{Met}(M)$. But for $\dim(M)=1$, $\mathcal{Met}_0=\emptyset$. In that case, we actually can choose a global section of $\mathcal{Met}(S^1) \to \mathcal{Met}(S^1)/\mathcal{Diff}(S^1)$.

by distler (distler@golem.ph.utexas.edu) at June 29, 2015 05:46 PM

The n-Category Cafe

Feynman's Fabulous Formula

Guest post by Bruce Bartlett.

There is a beautiful formula at the heart of the Ising model; a formula emblematic of all of quantum field theory. Feynman, the king of diagrammatic expansions, recognized its importance, and distilled it down to the following combinatorial-geometric statement. He didn’t prove it though — but Sherman did.

Feynman’s formula. Let $G$ be a planar finite graph, with each edge $e$ regarded as a formal variable denoted $x_e$. Then the following two polynomials are equal:

$$\sum_{H \subseteq_{\text{even}} G} x(H) = \prod_{[\vec{\gamma}] \in P(G)} \left(1 - (-1)^{w[\vec{\gamma}]} x[\vec{\gamma}]\right)$$


I will explain this formula and its history below. Then I’ll explain a beautiful generalization of it to arbitrary finite graphs, expressed in a form given by Cimasoni.

What the formula says

The left hand side of Feynman’s formula is a sum over all even subgraphs $H$ of $G$, including the empty subgraph. An even subgraph $H$ is one which has an even number of half-edges emanating from each vertex. For each even subgraph $H$, we multiply the variables $x_e$ of all the edges $e \in H$ together to form $x(H)$. So, the left hand side is a polynomial with integer coefficients in the variables $x_{e_i}$.
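As a concrete sanity check on the left hand side (my own sketch, not from the post), the even subgraphs of a small graph can be enumerated by brute force; note that a loop at a vertex contributes two half-edges there, so it always has even degree.

```python
# Brute-force computation of the even-subgraph polynomial (the left hand side of
# Feynman's formula) for a small graph. Edges are triples (vertex, vertex, name);
# a loop (v, v, name) contributes two half-edges, hence even degree, at v.
from itertools import combinations
from collections import Counter
import sympy

def even_subgraph_polynomial(edges):
    symbols = {name: sympy.Symbol(name) for _, _, name in edges}
    total = sympy.Integer(0)
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            degree = Counter()
            for u, v, _ in subset:
                degree[u] += 1
                degree[v] += 1          # a loop (u == v) adds 2 here, as it should
            if all(d % 2 == 0 for d in degree.values()):
                term = sympy.Integer(1)
                for _, _, name in subset:
                    term *= symbols[name]
                total += term
    return sympy.expand(total)

# One vertex with a single loop edge: gives 1 + x (the first example below).
print(even_subgraph_polynomial([("v", "v", "x")]))
# One vertex with two loop edges: gives 1 + x1 + x2 + x1*x2 (the second example).
print(even_subgraph_polynomial([("v", "v", "x1"), ("v", "v", "x2")]))
```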

The right hand side is a product over all $[\vec{\gamma}] \in P(G)$, where $P(G)$ is the set of all prime, reduced, unoriented, closed paths in $G$. That’s a bit subtle, so let me define it carefully. Firstly, our graph is not oriented. But, by an oriented edge $\mathbf{e}$, I mean an unoriented edge $e$ equipped with an orientation. An oriented closed path $\vec{\gamma}$ is a word of composable oriented edges $\mathbf{e_1} \cdots \mathbf{e_n}$; we consider $\vec{\gamma}$ up to cyclic ordering of the edges. The oriented closed path $\vec{\gamma}$ is called reduced if it never backtracks, that is, if no oriented edge $\mathbf{e}$ is immediately followed by the oriented edge $\mathbf{e}^{-1}$. The oriented closed path $\vec{\gamma}$ is called prime if, when viewed as a cyclic word, it cannot be expressed as the power $\vec{\delta}^r$ of another oriented closed path $\vec{\delta}$ for any $r \geq 2$. Note that the oriented closed path $\vec{\gamma}$ is reduced (resp. prime) if and only if $\vec{\gamma}^{-1}$ is. It therefore makes sense to talk about prime reduced unoriented closed paths $[\vec{\gamma}]$, by which we mean simply an equivalence class $[\vec{\gamma}] = [\vec{\gamma}^{-1}]$.

Suppose $G$ is embedded in the plane, so that each edge forms a smooth curve. Then given an oriented closed path $\vec{\gamma}$, we can compute the winding number $w(\vec{\gamma})$ of the tangent vector along the curve. We need to fix a convention about what happens at vertices, where we pass from the tangent vector $v$ at the target of $\mathbf{e_i}$ to the tangent vector $v'$ at the source of $\mathbf{e_{i+1}}$. We choose to rotate $v$ into $v'$ by the angle less than $\pi$ in absolute value.

Note that $w(-\vec{\gamma}) = -w(\vec{\gamma})$, so that its parity $(-1)^{w[\vec{\gamma}]}$ makes sense for unoriented paths. Finally, by $x[\vec{\gamma}]$ we simply mean the product of all the variables $x_{e_i}$ for $e_i$ along $\vec{\gamma}$.

The product on the right hand side is infinite, since $P(G)$ is infinite in general (we will shortly do some examples). But, we regard the product as a formal power series in the terms $x_{e_1} x_{e_2} \cdots x_{e_n}$, each of which only receives finitely many contributions (there are only finitely many paths of a given length), so the right hand side is a well-defined formal power series.

Examples

Let’s do some examples, taken from Sherman. Suppose $G$ is a graph with one vertex $v$ and one edge $e$:

[figure: a graph with a single vertex $v$ and a single loop $e$]

Write $x = x(e)$. There are two even subgraphs — the empty one, and $G$ itself. So the sum over even subgraphs gives $1+x$. There is only a single closed path in $P(G)$, namely $[\mathbf{e}]$, with odd winding number, so the product over paths also gives $1+x$. Hooray!

Now let’s consider a graph with two loops:

[figure: a graph with one vertex and two loops, $e_1$ and $e_2$]

There are 4 even subgraphs, and the left hand side of the formula is $1 + x_1 + x_2 + x_1 x_2$. Now let’s count closed paths $[\vec{\gamma}] \in P(G)$. There are infinitely many; here is a table. Let $\mathbf{e_1}$ and $\mathbf{e_2}$ be the counterclockwise oriented versions of $e_1$ and $e_2$.

$$\begin{array}{cc} [\vec{\gamma}] & 1 - (-1)^{w[\vec{\gamma}]} x[\vec{\gamma}] \\ \hline [\mathbf{e_1}] & 1 + x_1 \\ [\mathbf{e_2}] & 1 + x_2 \\ [\mathbf{e_1 e_2}] & 1 + x_1 x_2 \\ [\mathbf{e_1 e_2^{-1}}] & 1 - x_1 x_2 \\ [\mathbf{e_1^2 e_2}] & 1 - x_1^2 x_2 \\ [\mathbf{e_1^2 e_2^{-1}}] & 1 + x_1^2 x_2 \\ [\mathbf{e_1 e_2^2}] & 1 - x_1 x_2^2 \\ [\mathbf{e_1^{-1} e_2^2}] & 1 + x_1 x_2^2 \\ \cdots & \cdots \end{array}$$

If we multiply out the terms, the right hand side gives

$$(1 + x_1 + x_2 + x_1 x_2) (1 - x_1^2 x_2^2) (1 - x_1^4 x_2^2)(1 - x_1^2 x_2^4) \cdots$$

In order for this to equal

$$1 + x_1 + x_2 + x_1 x_2$$

we will need some miraculous cancellation in the higher powers to occur! And indeed this is what happens. The minus signs from the winding numbers conspire to cancel the remaining terms. Even in this simple example, the mechanism is not obvious — but it does happen.
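The left hand side, at least, is easy to check mechanically. Here is a small sympy sketch (my own bookkeeping aid; the encoding of a graph as a list of vertex pairs, with a loop written as a repeated vertex, is mine) that enumerates the even subgraphs and sums $x(H)$ over them:

```python
from itertools import combinations
from collections import Counter
import sympy as sp

def even_subgraph_sum(edges, variables):
    """Sum of x(H) over all even subgraphs H: subsets of edges in which every
    vertex has even degree (a loop contributes 2 to its vertex), with x(H)
    the product of the edge variables in H."""
    total = sp.Integer(0)
    for r in range(len(edges) + 1):
        for subset in combinations(range(len(edges)), r):
            deg = Counter()
            for i in subset:
                u, v = edges[i]
                deg[u] += 1
                deg[v] += 1
            if all(d % 2 == 0 for d in deg.values()):
                total += sp.Mul(*[variables[i] for i in subset])
    return sp.expand(total)

x = sp.Symbol("x")
x1, x2 = sp.symbols("x1 x2")
print(even_subgraph_sum([("v", "v")], [x]))                   # x + 1  (one loop)
print(even_subgraph_sum([("v", "v"), ("v", "v")], [x1, x2]))  # x1*x2 + x1 + x2 + 1
```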

Pondering the meaning of the formula

Let’s ponder the formula. Why do I say it is so beautiful?

Well, the left hand side is combinatorial — it has only to do with the abstract graph $G$, having the property that it is embeddable in the plane (this property can be abstractly encoded via Kuratowski’s theorem). The right hand side is geometric — we fix some embedding of $G$ in the plane, and then compute winding numbers of tangent vectors! So, the formula expresses a combinatorial (or topological) property of the graph in terms of geometry.

Ok… but why is this formula emblematic of all of quantum field theory? Well, summing over all loops is what the path integral in quantum mechanics is all about. (See Witten’s IAS lectures on the Dirac index on manifolds, for example.) Note that the quantum mechanics path integral has recently been made rigorous in the work of Baer and Pfaffle, as well as Fine and Sawin.

Also, I think the formula has overtones of the linked-cluster theorem in perturbative quantum field theory, which relates the generating function for all Feynman diagrams (similar to the even subgraphs) to the generating function for connected Feynman diagrams (similar to the closed paths). You can see why Feynman was interested!

History of the formula

One beautiful way of computing the partition function in the Ising model, due to Kac and Ward, is to express it as a square root of a certain determinant. (I hope to explain this next time.) To do this though, they needed a “topological theorem” about planar graphs. Their theorem was actually false in general, as shown by Sherman. It was Feynman who reformulated it in the above form. From Mark Kac’s autobiography (clip):

The two-dimensional case for so-called nearest neighbour interactions was solved by Lars Onsager in 1944. Onsager’s solution, a veritable tour de force of mathematical ingenuity and inventiveness, uncovered a number of surprising features and started a series of investigations which continue to this day. The solution was difficult to understand and George Uhlenbeck urged me to try to simplify it. “Make it human” was the way he put it …. At the Institute [for Advanced Studies at Princeton] I met John C. Ward … we succeeded in rederiving Onsager’s result. Our success was in large measure due to knowing the answer; we were, in fact, guided by this knowledge. But our solution turned out to be incomplete… it took several years and the effort of several people before the gap in the derivation was filled. Even Feynman got into the act. He attended two lectures I gave in 1952 at Caltech and came up with the clearest and sharpest formulation of what was needed to fill the gap. The only time I have ever seen Feynman take notes was during the two lectures. Usually, he is miles ahead of the speaker but following combinatorial arguments is difficult for all mortals.

Feynman’s formula for general graphs

Every finite graph can be embedded in some closed oriented surface of high enough genus. So there should be a generalization of the formula to all finite graphs, not just planar ones. But on the right hand side of the formula, how do we compute the winding number of a closed path on a general surface? The answer, in the formulation of Cimasoni, is beautiful: we should sum over spin structures on the surface, each weighted by their Arf invariant!

Generalized Feynman formula. Let $G$ be a finite graph of genus $g$. Then the following two polynomials are equal:

$$\sum_{H \subseteq_{even} G} x(H) = \frac{1}{2^g} \sum_{\lambda \in Spin(\Sigma)} (-1)^{Arf(\lambda)} \prod_{[\vec{\gamma}] \in P(G)} \left(1 - (-1)^{w_\lambda[\vec{\gamma}]} x[\vec{\gamma}]\right)$$

The point is that a spin structure on $\Sigma$ can be represented as a nonzero vector field $\lambda$ on $\Sigma$ minus a finite set of points, with even index around these points. (Of course, a nonzero vector field on the whole of $\Sigma$ won’t exist, except on the torus. That is why we need these points.) So, we can measure the winding number $w_\lambda(\vec{\gamma})$ of a closed path $\vec{\gamma}$ with respect to this background vector field $\lambda$.
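As a quick sanity check (my own remark, not part of the original statement): a planar graph has genus $g = 0$, so $\Sigma$ is the sphere, which carries $2^{2g} = 1$ spin structure, and that unique spin structure has Arf invariant $0$. The generalized formula then collapses to

$$\sum_{H \subseteq_{even} G} x(H) = \prod_{[\vec{\gamma}] \in P(G)} \left(1 - (-1)^{w_\lambda[\vec{\gamma}]} x[\vec{\gamma}]\right),$$

which is the planar Feynman formula: the constant vector field on the plane extends to the sphere minus the point at infinity, where it has index $2$ (even, as required), and $w_\lambda$ computed against it is the ordinary winding number of the tangent vector.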

The first version of this generalized Feynman formula was obtained by Loebl, in the case where all vertices have degree 2 or 4, and using the notion of Sherman rotation numbers instead of spin structures (see also Loebl and Somberg). In independent work, Cimasoni formulated it differently using the language of spin structures and Arf invariants, and proved it in the slightly more general setting of arbitrary finite graphs, though his proof is not a direct one. Also, in unpublished work, Masbaum and Loebl found a direct combinatorial argument (in the style of Sherman’s proof of the planar case) to prove this general, spin-structures version.

Last thoughts

I find the generalized Feynman formula to be very beautiful. The left hand side is completely combinatorial / topological, manifestly only depending on $G$. The right hand side picks some embedding of the graph in a surface, and is very geometric, referring to high-brow things such as spin structures and Arf invariants! Who knew that there was such an elegant geometric theorem lurking behind arbitrary finite graphs?

Moreover, it is all part of a beautiful collection of ideas relating the Ising model to the free fermion conformal field theory. (Indeed, the appearance of spin structures and winding numbers is telling us we are dealing with fermions.) Of course, physicists knew this for ages, but it hasn’t been clear to mathematicians exactly what they meant :-)

But in recent times, mathematicians are making this all precise, and beautiful geometry is emerging, like the above formula. There’s even a Fields medal in the mix. It’s all about discrete complex analysis, spinors on Riemann surfaces, the discrete Dirac equation, isomonodromic deformation of flat connections, heat kernels, conformal invariance, Pfaffians, and other amazing things (here is a recent expository talk of mine). I hope to explain some of this story next time.

by willerton (S.Willerton@sheffield.ac.uk) at June 29, 2015 01:18 PM

Clifford V. Johnson - Asymptotia

Warm…
So far, the Summer has not been as brutal in the garden as it was last year. Let's hope that continues. I think that late rain we had last month (or earlier this month?) helped my later planting get a good start too. This snap of a sunflower was taken on a lovely warm evening in the garden the other day, after a (only slightly too) hot day... -cvj

by Clifford at June 29, 2015 01:17 PM

Peter Coles - In the Dark

SpaceX – the Anatomy of an Explosion

Yesterday an unmanned Falcon-9 SpaceX rocket was launched from Cape Canaveral in Florida. All seemed to go well. At first…

Here’s a super-slow-motion video of the terrifying explosion that engulfed and destroyed the rocket:

I’m no rocket scientist – and no doubt a full expert analysis of this event will be published before too long – but it does seem clear that the problem originated in the Stage 2 rocket. I fancy I can see something happen near the top of the rocket just before the main explosion started.

It’s not easy putting things into space, but we shouldn’t stop doing things just because they’re hard.

 


by telescoper at June 29, 2015 08:18 AM

June 28, 2015

Christian P. Robert - xi'an's og

Kamiltonian Monte Carlo [no typo]

Heiko Strathmann, Dino Sejdinovic, Samuel Livingstone, Zoltán Szabó, and Arthur Gretton arXived a paper last week about Kamiltonian MCMC, the K being related with RKHS. (RKHS as in another KAMH paper for adaptive Metropolis-Hastings by essentially the same authors, plus Maria Lomeli and Christophe Andrieu. And another paper by some of the authors on density estimation via infinite exponential family models.) The goal here is to bypass the computation of the derivatives in the moves of the Hamiltonian MCMC algorithm by using a kernel surrogate. While the genuine RKHS approach operates within an infinite exponential family model, two versions are proposed, KMC lite with an increasing sequence of RKHS subspaces, and KMC finite, with a finite dimensional space. In practice, this means using a leapfrog integrator with a different potential function, hence with a different dynamics.
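To fix ideas, here is a minimal sketch of the mechanism (my own code, with hypothetical names like log_target and samples, and with a plain Gaussian-KDE score standing in for the authors’ infinite exponential family / RKHS surrogate): the leapfrog is driven by a gradient estimated from past chain states, while the accept/reject step uses the exact unnormalised target, so the surrogate affects mixing but not the stationary distribution.

```python
import numpy as np

def kde_grad_log_density(x, samples, h):
    """Gradient of the log of a Gaussian kernel density estimate built from
    past chain states (a crude stand-in for the paper's RKHS surrogate)."""
    diffs = samples - x                               # rows are x_i - x
    logw = -0.5 * np.sum(diffs ** 2, axis=1) / h ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return (w[:, None] * diffs).sum(axis=0) / h ** 2

def surrogate_hmc_step(q, log_target, samples, h=0.5, eps=0.1, L=20,
                       rng=np.random.default_rng()):
    """One HMC transition whose leapfrog uses only the surrogate gradient;
    the accept/reject step uses the exact unnormalised log target."""
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    grad = kde_grad_log_density(q_new, samples, h)
    for _ in range(L):
        p_new = p_new + 0.5 * eps * grad
        q_new = q_new + eps * p_new
        grad = kde_grad_log_density(q_new, samples, h)
        p_new = p_new + 0.5 * eps * grad
    log_alpha = (log_target(q_new) - log_target(q)
                 - 0.5 * (p_new @ p_new - p @ p))
    if np.log(rng.uniform()) < log_alpha:
        return q_new, True
    return q, False
```

The correction remains valid because the leapfrog map is volume-preserving and reversible (after a momentum flip) for any position-dependent force, gradient or not; the surrogate only changes how good the proposals are.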

The estimation of the infinite exponential family model is somewhat of an issue, as it is estimated from the past history of the Markov chain, simplified into a random subsample from this history [presumably without replacement, meaning the Markovian structure is lost on the subsample]. This is puzzling because there is dependence on the whole past, which cancels ergodicity guarantees… For instance, we gave an illustration in Introducing Monte Carlo Methods with R [Chapter 8] of the poor impact of approximating the target by non-parametric kernel estimates. I would thus lean towards the requirement of a secondary Markov chain to build this kernel estimate. The authors are obviously aware of this difficulty and advocate an attenuation scheme. There is also the issue of the cost of a kernel estimate, in O(n³) for a subsample of size n. If, instead, a fixed dimension m for the RKHS is selected, the cost is in O(tm²+m³), with the advantage of a feasible on-line update, making it an O(m³) cost in fine. But again the worry of using the whole past of the Markov chain to set its future path…

Among the experiments, a KMC for ABC that follows the recent proposal of Hamiltonian ABC by Meeds et al. The arguments  are interesting albeit sketchy: KMC-ABC does not require simulations at each leapfrog step, is it because the kernel approximation does not get updated at each step? Puzzling.

I also discussed the paper with Michael Betancourt (Warwick) and here his comments:

“I’m hesitant for the same reason I’ve been hesitant about algorithms like Bayesian quadrature and GP emulators in general. Outside of a few dimensions I’m not convinced that GP priors have enough regularization to really specify the interpolation between the available samples, so any algorithm that uses a single interpolation will be fundamentally limited (as I believe is borne out in non-trivial scaling examples) and trying to marginalize over interpolations will be too awkward.

They’re really using kernel methods to model the target density which then gives the gradient analytically. RKHS/kernel methods/ Gaussian processes are all the same math — they’re putting prior measures over functions. My hesitancy is that these measures are at once more diffuse than people think (there are lots of functions satisfying a given smoothness criterion) and more rigid than people think (perturb any of the smoothness hyper-parameters and you get an entirely new space of functions).

When using these methods as an emulator you have to set the values of the hyper-parameters which locks in a very singular definition of smoothness and neglects all others. But even within this singular definition there are a huge number of possible functions. So when you only have a few points to constrain the emulation surface, how accurate can you expect the emulator to be between the points?

In most cases where the gradient is unavailable it’s either because (a) people are using decades-old Fortran black boxes that no one understands, in which case there are bigger problems than trying to improve statistical methods or (b) there’s a marginalization, in which case the gradients are given by integrals which can be approximated with more MCMC. Lots of options.”


Filed under: Books, Statistics, University life Tagged: adaptive MCMC methods, Bayesian quadrature, Gatsby, Hamiltonian Monte Carlo, Introducing Monte Carlo Methods with R, London, Markov chain, non-parametric kernel estimation, reproducing kernel Hilbert space, RKHS, smoothness

by xi'an at June 28, 2015 10:15 PM

Tommaso Dorigo - Scientificblogging

In Memory Of David Cline
I was saddened today to hear of the death of David Cline. I do not have much to say here - I am not good with obituaries - but I do remember meeting him at a conference in Albuquerque in 2008, where we chatted on several topics, among them the history of the CDF experiment, a topic on which I had just started to write a book. 

Perhaps the best I can do here as a way to remember Cline, whose contributions to particle physics can and will certainly be better described by many others, is to report a quote from a chapter of the book, which describes a funny episode on the very early days of CDF. I think he did have a sense of humor, so he might not dislike it if I do.

---


by Tommaso Dorigo at June 28, 2015 03:59 PM

Peter Coles - In the Dark

It is important that the DfE publish correct science content in their GCSE subject content

telescoper:

You would think that the people in the Department for Education who draft the subject content for GCSE science would know stuff about science…

Sadly, it seems this is not the case…

Originally posted on Teaching science in all weather:

Yesterday I posted this reaction to the publication by the DfE of the GCSE_combined_science_content (copy taken – original link here). Others, including @alby and @hrogerson have written and commented about this as well.

[Another update: in the comments Richard Needham from the ASE has reminded me that over the next few weeks QfQual will be using these documents to ratify the Exam Boards’ science GCSE specifications. Not a good situation.]

[An update: the DfE released the GCSE_single_science_content in another document (original link here). Some of the errors below including the kinetic energy formula have not made it into this document and the space physics is obviously only considered interesting enough for the triple scientists. I will check the rest.]

I thought it relevant to post some specific points (just from the physics section – which didn’t even appear correctly in the table of contents). Now…



by telescoper at June 28, 2015 03:51 PM

Peter Coles - In the Dark

Stonewall and After – in Praise of Drag Queens

Despite not being able to go to the big event in London yesterday, it’s been a very memorable Pride Weekend, preceded as it was by the ruling of the Supreme Court of the United States of America that the right for same sex couples to get married was protected under the constitution. The White House responded to the judgement in appropriate style:

[photo: the White House lit up in rainbow colours]

I’m tempted to quote Genesis 9:16, but I won’t.

My facebook and twitter feeds have been filled with rainbows all weekend, as is my wordpress editor page as I write this piece. It’s been great to see so many people, straight and gay, celebrating diversity and equality. Even a Dalek joined in.

Gay Dalek

I’m a bit more cynical about the number of businesses that have tried to cash in on  Pride but even that is acceptance of a sort. It’s all very different from the first Pride March I went on, way back in 1986. That was a much smaller scale event than yesterday’s, and politicians were – with very few exceptions – notable by their absence.

In fact today is the anniversary of the event commemorated by Pride. It was in the early hours of the morning of Saturday June 28th 1969 that the Stonewall Riots took place in the Greenwich Village area of New York City. There are few photographs and no film footage of what happened which, together with some conflicting eyewitness accounts, has contributed to the almost mythical status of these demonstrations, which were centred on the Stonewall Inn (which, incidentally, still exists). What is, I think, clear is that they were the spontaneous manifestation of the anger of a community that had simply had enough of the way it was being treated by the police. Although it wasn’t the first such protest in the USA, I still think it is also the case that Stonewall was a defining moment in the history of the movement for LGBT equality.

One of the myths that has grown up around Stonewall is that the Stonewall Inn was a place primarily frequented by drag queens and it was the drag queens who began the fight back against intolerable police harassment. That was the standard version, but the truth is much more complicated and uncertain than that. Nevertheless, it is clear that it was the attempted arrest of four people – three male (cross-dressers) and one female – that ignited the protest. Whether they led it or not, there’s no doubt that drag queens played a major role in the birth of the gay liberation movement. Indeed, to this day, it remains the case that the “T” part of the LGBT spectrum (which I interpret to include Transgender and Transvestite) is often neglected by the rest of the rainbow.

I have my own reasons for being grateful for drag queens. When I was a youngster (still at School) I occasionally visited a gay bar in Newcastle called the Courtyard. I was under age for drinking alcohol let alone anything else – the age of consent was 21 in those days – but I got a kick out of the attention I received and flirted outrageously without ever taking things any further. I never had to buy my own drinks, let’s put it that way.

Anyway, one evening I left the pub to get the bus home – the bus station was adjacent to the pub – but was immediately confronted by a young bloke who grabbed hold of me and asked if I was a “poof”. Before I could answer, a figure loomed up behind him and shouted “Leave him alone!”. My assailant let go of me and turned round to face my guardian angel, or rather guardian drag queen. No ordinary drag queen either. This one, at least in my memory, was enormous: about six foot six and built like a docker, but looking even taller because of the big hair and high heels. The yob laughed sneeringly whereupon he received the immediate response of a powerful right jab to the point of the chin, like something out of a boxing manual. His head snapped back and hit the glass wall of a bus shelter. Blood spurted from his mouth as he slumped to the ground.

I honestly thought he was dead, and so apparently did my rescuer who told me in no uncertain terms to get the hell away. Apart from everything else, the pub would have got into trouble if they’d known I had even been in there. I ran to the next stop where I got a bus straightaway. I was frightened there would be something on the news about a violent death in the town centre, but that never happened. It turns out the “gentleman” concerned had bitten his tongue when the back of his head hit the bus shelter. Must have been painful, but not life-threatening. My sympathy remains limited.

I think there’s a moral to this story, but I’ll leave it up to you to decide what it is.


by telescoper at June 28, 2015 02:55 PM

Emily Lakdawalla - The Planetary Society Blog

SpaceX Rocket Breaks Apart En Route to International Space Station
A SpaceX Falcon 9 rocket broke apart over the Atlantic Ocean during today's flight to the International Space Station.

June 28, 2015 02:48 PM

June 27, 2015

Peter Coles - In the Dark

Open Day in the Sun

Just back from a day on campus at an Open Day for potential students at Sussex University (and an afternoon getting on with work). I think the day went well, and it was nice to get some good weather for a change!

Anyway here’s a picture I took as things were starting to wind down this afternoon.


Now I am going to make myself something to eat and chill out with a glass of wine…


by telescoper at June 27, 2015 05:55 PM

June 26, 2015

Clifford V. Johnson - Asymptotia

Naddy
Yesterday, an interesting thing happened while I was out in my neighbourhood walking my son for a good hour or more (covered, in a stroller - I was hoping he'd get some sleep), visiting various shops, running errands. Before describing it, I offer two bits of background information as (possibly relevant?) context. (1) I am black. (2) I live in a neighbourhood where there are very few people of my skin colour as residents. Ok, here's the thing: * * * I'm approaching two young (mid-to-late teens?) African-American guys, sitting at a bus stop, chatting and laughing good-naturedly. As I begin to pass them, nodding a hello as I push the stroller along, one of them stops me. [...]

by Clifford at June 26, 2015 11:33 PM

Christian P. Robert - xi'an's og

the girl who saved the king of Sweden [book review]

When visiting a bookstore in Florence last month, during our short trip to Tuscany, I came upon this book with enough of a funny cover and enough of a funny title (possibly capitalising on the similarity with “the girl who played with fire”) to make me buy it. I am glad I gave in to this impulse as the book is simply hilarious! The style and narrative relate rather strongly to the series of similarly [mostly] hilarious picaresque tales written by Paasilinna, and not only because both authors are from Scandinavia. There is the same absurd feeling that the book characters should not have this sort of thing happening to them and still the morbid fascination to watch catastrophe after catastrophe being piled upon them. While the story is deeply embedded within the recent history of South Africa and [not so much] of Sweden for the past 30 years, including major political figures, there is no true attempt at making the story in the least realistic, which is another characteristic of the best stories of Paasilinna. Here, a young girl escapes the poverty of the slums of Soweto, to eventually make her way to Sweden along with a spare nuclear bomb and a fistful of diamonds. Which alas are not eternal… Her intelligence helps her to overcome most difficulties, but even she needs from time to time to face absurd situations as another victim. All is well that ends well for most characters in the story, some of whom one would prefer to vanish in a gruesome accident. Which seemed to happen until another thread in the story saved the idiot. The satire of South Africa and of Sweden is most enjoyable if somewhat easy! Now I have to read the previous volume in the series, The Hundred-Year-Old Man Who Climbed Out of the Window and Disappeared!


Filed under: Books, Kids, Travel Tagged: Arto Paasilinna, book review, Finland, Firenze, Italy, Scandinavia, South Africa, Soweto, Sweden, the girl who saved the king of Sweden, The Hundred-Year-Old Man Who Climbed Out of the Window and Disappeared, Tuscany

by xi'an at June 26, 2015 10:15 PM

astrobites - astro-ph reader's digest

Nature’s Starships Vol. I – Ride in on Shooting Stars

Title: Nature’s Starships. I. Observed Abundances and Relative Frequencies of Amino Acids in Meteorites [published version]
Authors: Alyssa K. Cobb, Ralph E. Pudritz.
First author’s institution: Origins Institute, McMaster University, ABB 241, 1280 Main Street, Hamilton, ON L8S 4M1, Canada.
Status: Published in Astrophysical Journal on February 24, 2014.

The building blocks of life within planetesimals


Fig. 1: The principal structure of an amino acid. The “R” indicates the “side chain”, which varies across different amino acids. Source: Wikimedia Commons

Amino acids are a crucial ingredient for a vast majority of organic compounds, including our own DNA, and an important source of organic material that had to be delivered to young planets in our Solar System via meteoritic impacts. Therefore, understanding where and how the first amino acids formed is a critical step towards understanding when and how life formed in the first place. Life is hanging around in basically every biological niche on Earth, no matter what life-formation theory you believe in, be it abiogenesis (life arising from non-living matter in the primordial soup) or panspermia (the “contamination” of Earth by living microorganisms that are already ubiquitous throughout the Universe due to distribution by meteoroids and asteroids). And not only Earth: far-away worlds like Jupiter’s moon Europa may also harbor life.

What do all these rocks want to tell us?

Fortunately, the meteorites that fall on Earth give us a record of the kind of materials they are made of. A relatively rare class of meteorites, the carbonaceous chondrites (C-type), are known for their high water, carbon (hence, carbonaceous) and organic contents and are therefore interesting to investigate with regards to the building blocks of life.


Fig. 2: Classification scheme of carbonaceous chondrites. Source: Cobb & Pudritz (2014)

It is important to incorporate all findings from C-type meteorites into a comprehensive picture showing their general classifications; Figure 2 shows this, despite the somewhat messy classifications (sorry geologists!). They are first sorted by chemical composition (CI to CK) and then according to petrographic type, which describes how the material within the meteorite has been altered. Aqueous alteration, the process of transforming molecules into organic material, happens in very hydrated environments. Thermal metamorphism describes the transformation of meteorites in high-heat environments, which are not friendly to organic material.

Amino acid distillery – how and where

Meteorites are thought to be the end-product of disruptive collisions between so-called parent bodies – planetesimals several kilometers in size in the early Solar System, similar to present-day asteroids. These guys were big enough to be heated up from the inside by radiogenic elements and therefore probably had an onion-like structure inside – very hot and dense in the inner part and cool in the outer part, where there is not too much heat from the radionuclides.

Here, we obviously need a link between the different classes from Figure 2 and the physical origin of these classes. In the onion-shell picture, signs of thermal metamorphism mean the presence of hot temperatures, and hence formation deep within the parent body in a dry and dense environment (types 4-6 in Figure 2). On the other hand, aqueously altered samples have been formed in the outer shells of a parent body where there is a lot of water and significantly lower temperatures are expected (types 1-3 in Figure 2).

Cobb & Pudritz collected abundance data from numerous laboratory experiments. From these, they tried to relate the abundance of relatively simple amino acids to meteorite petrographic types and inferred the place in the parent body where these amino acids originally formed! One of their main results is presented in Figure 3.


Fig. 3: Average amino acid abundances for different meteorite classes. Each color represents a specific amino acid type. Meteorites with petrographic type 2 show the most diverse and rich spectrum of amino acids. These types are thought to have been formed in intermediate layers of the meteorite parent bodies, which means that the temperatures and water abundances found in the intermediate layers favor the reaction rates for these compounds! Source: Cobb & Pudritz (2014)

The graphic shows the abundances of specific types of amino acids within meteorite samples of certain petrographic classes, as explained above. As can be seen, the type 2 samples contain the most diverse and rich abundances of amino acids, several orders of magnitude higher than for the other types! Cobb & Pudritz think that the intermediate layers in parent bodies, to which these meteorite types belong, form an optimal environment for the formation of amino acids, which corresponds to temperatures from 200 to 400 degrees Celsius in a watery environment.

Why this is interesting and where to go from here

In summary, Cobb & Pudritz argue that amino acids potentially formed in very specific layers of the parent bodies. They claim that they can now relate the origin of the amino acids within these layers to the chemistry and environment of the natal protostellar disk, which surrounded the forming Sun. By understanding more about the environment needed for the building blocks of life to emerge, we come closer to a deep understanding of the processes behind the emergence of life and can possibly use these results to gain knowledge about the frequency of life on other planets and moons in the Solar System and maybe even in extrasolar systems. Therefore, wait for more about the origins of life in Nature’s Starships Vol. II…! (And read about it next month. :-) )

by Tim Lichtenberg at June 26, 2015 06:16 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Robert Boyle Summer School 2015

This weekend, I’m at the Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s my favourite annual conference by some margin – a small number of talks by some highly eminent scholars of the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

Born in Lismore Castle into a wealthy landowning family, Boyle  became one of the most important figures in the Scientific Revolution,  well-known for his scientific discoveries, his role in the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.


The Irish-born scientist and aristocrat Robert Boyle   


Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

As ever, the summer school takes place in Lismore, the beautiful town that is the home of Lismore Castle where Boyle was born. This year, the conference commemorates the 350th anniversary of the Philosophical Transactions of the Royal Society by considering the history of the publication of scientific work, from the first issue of  Phil. Trans. to the problem of fraud in scientific publication today.

The first talk this morning was ‘Robert Boyle, Philosophical Transactions and Scientific Communication’ by Professor Michael Hunter of Birkbeck College. Professor Hunter is one of the world’s foremost experts on Boyle, and he gave a thorough overview of Boyle’s use of the Phil. Trans. to disseminate his findings. Afterwards, Dr. Aileen Fyfe of the University of St Andrews gave the talk ‘Peer Review: A History From 1665’, carefully charting how the process of peer review evolved over time from Boyle’s time to today.


The renowned Boyle scholar Professor Michael Hunter of Birkbeck College, UCL, in action

This afternoon, we had the wonderful talk ‘Lady Ranelagh, the Hartlib Circle and Networks for Scientific Correspondence’ in the spectacular setting of St Carthage’s Cathedral, given by Dr. Michelle DiMeo of the Chemical Heritage Foundation. I knew nothing of Lady Ranelagh or the notion of a Republic of Letters before this. The Hartlib Circle was clearly an important forerunner of the Philosophical Transactions and Lady Ranelagh’s role in the Circle and in Boyle’s scientific life has been greatly overlooked.


St Carthage’s Cathedral in Lismore


Professor DiMeo unveiling a plaque in memory of Lady Ranelagh at the Castle. The new plaque is on the right, to accompany the existing plaque in memory of Robert Boyle on the left 

Tomorrow will see talks by Professor Dorothy Bishop (Oxford) and Sir John Pethica (Trinity College Dublin), but in the meanwhile I need to catch some sleep before tonight’s barbecue in Lismore Castle!


Off to the Castle for dinner

Update

We had some great music up at the Castle last night, followed by an impromptu session in one of the village pubs. The highlight for many was when Sir John Pethica,  VP of the Royal Society, produced a fiddle from somewhere and joined in. As did his wife, Pam – talk about Renaissance men and women!

Turning to more serious topics, this morning Professor Bishop gave a frightening account of some recent cases of fraudulent publication in science – including a case she herself played a major part in exposing! However, not to despair, as Sir John noted in his presentation that the problem may be much more prevalent in some areas of science than others. This made sense to me, as my own experience of the publishing world in physics is that of very conservative editors that err on the side of caution. Indeed, it took a long time for our recent discovery of an unknown theory by Einstein to be accepted as such by the physics journals.

All in all, a superb conference in a beautiful setting.  On the last day, we were treated to a tour of the castle gardens, accompanied by Robert Boyle and his sister.


Robert Boyle and his sister Lady Ranelagh picking flowers at the Castle on the last day of the conference

You can find the full conference programme here. The meeting was sponsored by Science Foundation Ireland, the Royal Society of Chemistry, the Institute of Chemistry (Ireland), the Institute of Physics (Ireland), the Robert Boyle Foundation, i-scan, Abbott, Lismore Castle Arts and the Lismore House Hotel.


by cormac at June 26, 2015 04:27 PM

Peter Coles - In the Dark

It is so ordered..


I never expected to be moved to tears by the eloquence of a court judgement, even if the margin was as narrow as it could have been (5-4).


by telescoper at June 26, 2015 04:12 PM

Christian P. Robert - xi'an's og

Introduction to Monte Carlo methods with R and Bayesian Essentials with R

Here are the download figures for my e-book with George as sent to me last week by my publisher Springer-Verlag. With an interesting surge in the past year. Maybe simply due to new selling strategies of the publisher rather than to a wider interest in the book. (My royalties have certainly not increased!) Anyway thanks to all readers. As an aside for wordpress wannabe bloggers, I realised it is now almost impossible to write tables with WordPress, another illustration of the move towards small-device-supported blogs. Along with a new annoying “simpler” (or more accurately dumber) interface and a default font far too small for my eyesight. So I advise alternatives to wordpress that are more sympathetic to maths contents (e.g., using MathJax) and comfortable editing.

And the same for the e-book with Jean-Michel, which only appeared in late 2013. And contains more chapters than Introduction to Monte Carlo methods with R. Incidentally, a reader recently pointed out to me the availability of a pirated version of The Bayesian Choice on a Saudi (religious) university website. And of a pirated version of Introducing Monte Carlo with R on a São Paulo (Brazil) university website. This may be alas inevitable, given the diffusion by publishers of e-chapters that can be copied with no limitations…


Filed under: Books, R, Statistics, University life Tagged: Bayesian Essentials with R, book sales, Brazil, copyright, Introduction to Monte Carlo Methods with R, Saudi Arabia, Springer-Verlag

by xi'an at June 26, 2015 12:18 PM

John Baez - Azimuth

Higher-Dimensional Rewriting in Warsaw (Part 2)

Today I’m going to this workshop:

Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland.

Many of the talks will be interesting to people who are trying to use category theory as a tool for modelling networks!

For example, though they can’t actually attend, Lucius Meredith and my student Mike Stay hope to use Google Hangouts to present their work on Higher category models of the π-calculus. The π-calculus is a way of modelling networks where messages get sent here and there, e.g. the internet. Check out Mike’s blog post about this:

• Mike Stay, A 2-categorical approach to the pi calculus, The n-Category Café, 26 May 2015.

Krzysztof Bar, Aleks Kissinger and Jamie Vicary will be speaking about Globular, a proof assistant for computations in n-categories:

This talk is a progress report on Globular, an online proof assistant for semistrict higher-dimensional rewriting. We aim to produce a tool which can visualize higher-dimensional categorical diagrams, assist in their construction with a point-and-click interface, perform type checking to prevent incorrect composites, and automatically handle the interchanger data at each dimension. Hosted on the web, it will have a low barrier to use, and allow hyperlinking of formalized proofs directly from research papers. We outline the theoretical basis for the tool, and describe the challenges we have overcome in its design.

Eric Finster will be talking about another computer system for dealing with n-categories, based on the ‘opetopic’ formalism that James Dolan and I invented. And Jason Morton is working on a computer system for computation in compact closed categories! I’ve seen it, and it’s cool, but he can’t attend the workshop, so David Spivak will be speaking on his work with Jason on the theoretical foundations of this software:

We consider the linked problems of (1) finding a normal form for morphism expressions in a closed compact category and (2) the word problem, that is deciding if two morphism expressions are equal up to the axioms of a closed compact category. These are important ingredients for a practical monoidal category computer algebra system. Previous approaches to these problems include rewriting and graph-based methods. Our approach is to re-interpret a morphism expression in terms of an operad, and thereby obtain a single composition which is strictly associative and applied according to the abstract syntax tree. This yields the same final operad morphism regardless of the tree representation of the expression or order of execution, and solves the normal form problem up to automorphism.

Recently Eugenia Cheng has been popularizing category theory, touring to promote her book Cakes, Custard and Category Theory. But she’ll be giving two talks in Warsaw, I believe on distributive laws for Lawvere theories.

As for me, I’ll be promoting my dream of using category theory to understand networks in electrical engineering. I’ll be giving a talk on control theory and a talk on electrical circuits: two sides of the same coin, actually.

• John Baez, Jason Erbele and Nick Woods, Categories in control.

If you’ve seen a previous talk of mine with the same title, don’t despair—this one has new stuff! In particular, it talks about a new paper by Nick Woods and Simon Wadsley.

Abstract. Control theory is the branch of engineering that studies dynamical systems with inputs and outputs, and seeks to stabilize these using feedback. Control theory uses “signal-flow diagrams” to describe processes where real-valued functions of time are added, multiplied by scalars, differentiated and integrated, duplicated and deleted. In fact, these are string diagrams for the symmetric monoidal category of finite-dimensional vector spaces, but where the monoidal structure is direct sum rather than the usual tensor product. Jason Erbele has given a presentation for this symmetric monoidal category, which amounts to saying that it is the PROP for bicommutative bimonoids with some extra structure.

A broader class of signal-flow diagrams also includes “caps” and “cups” to model feedback. This amounts to working with a larger symmetric monoidal category where objects are still finite-dimensional vector spaces but the morphisms are linear relations. Erbele also found a presentation for this larger symmetric monoidal category. It is the PROP for a remarkable thing: roughly speaking, an object with two special commutative dagger-Frobenius structures, such that the multiplication and unit of either one and the comultiplication and counit of the other fit together to form a bimonoid.

• John Baez and Brendan Fong, Circuits, categories and rewrite rules.

Abstract. We describe a category where a morphism is an electrical circuit made of resistors, inductors and capacitors, with marked input and output terminals. In this category we compose morphisms by attaching the outputs of one circuit to the inputs of another. There is a functor called the ‘black box functor’ that takes a circuit, forgets its internal structure, and remembers only its external behavior. Two circuits have the same external behavior if and only if they impose the same relation between currents and potentials at their terminals. This is a linear relation, so the black box functor goes from the category of circuits to the category of finite-dimensional vector spaces and linear relations. Constructing this functor makes use of Brendan Fong’s theory of ‘decorated cospans’—and the question of whether two ‘planar’ circuits map to the same relation has an interesting answer in terms of rewrite rules.

The answer to the last question, in the form of a single picture, is this:

How can you change an electrical circuit made out of resistors without changing what it does? 5 ways are shown here:

  1. You can remove a loop of wire with a resistor on it. It doesn’t do anything.
  2. You can remove a wire with a resistor on it if one end is unattached. Again, it doesn’t do anything.

  3. You can take two resistors in series—one after the other—and replace them with a single resistor. But this new resistor must have a resistance that’s the sum of the old two.

  4. You can take two resistors in parallel and replace them with a single resistor. But this resistor must have a conductivity that’s the sum of the old two. (Conductivity is the reciprocal of resistance.)

  5. Finally, the really cool part: the Y-Δ transform. You can replace a Y made of 3 resistors by a triangle of resistors. But their resistances must be related by the equations shown here (a sketch of the standard relations follows below this list).
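Rules 3–5 are just resistance arithmetic. Here is a small Python sketch of them (hypothetical helper names of my own, using the standard textbook series, parallel and Y-Δ identities — not code from any of the papers cited here):

```python
def series(r1, r2):
    """Rule 3: resistors in series add their resistances."""
    return r1 + r2

def parallel(r1, r2):
    """Rule 4: resistors in parallel add their conductivities (reciprocals)."""
    return 1.0 / (1.0 / r1 + 1.0 / r2)

def y_to_delta(ra, rb, rc):
    """Rule 5: replace a Y with legs ra, rb, rc (attached to terminals A, B, C)
    by a triangle; the triangle edge opposite each terminal gets the value below."""
    s = ra * rb + rb * rc + rc * ra
    return s / rc, s / ra, s / rb   # (R_AB, R_BC, R_CA)

# Example: a Y of three 1-ohm resistors becomes a triangle of 3-ohm resistors.
print(y_to_delta(1.0, 1.0, 1.0))   # (3.0, 3.0, 3.0)
```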

For circuits drawn on the plane, these are all the rules you need! This was proved here:

• Yves Colin de Verdière, Isidoro Gitler and Dirk Vertigan, Réseaux électriques planaires II.

It’s just the beginning of a cool story, which I haven’t completely blended with the categorical approach to circuits. Doing so clearly calls for 2-categories: those double arrows are 2-morphisms! For more, see:

• Joshua Alman, Carl Lian and Brandon Tran, Circular planar electrical networks I: The electrical poset EPn.


by John Baez at June 26, 2015 04:17 AM

June 25, 2015

Emily Lakdawalla - The Planetary Society Blog

Of Course I Still Love You, Falcon: New SpaceX Ship Ready to Catch Rockets
SpaceX is gearing up for its seventh paid cargo run to the International Space Station, and the third attempt to catch the first stage of its Falcon 9 rocket on a drone ship in the ocean.

June 25, 2015 07:13 PM

Tommaso Dorigo - Scientificblogging

Early-Stage Researcher Positions To Open Soon
The Marie-Curie network I am coordinating, AMVA4NewPhysics, is going to start very soon, and with its start several things are going to happen. One you should not be concerned with is the arrival of the first tranche of the 2.4Meuros that the European Research Council has granted us. Something more interesting to you, if you have a degree in Physics or Statistics, is the fact that the network will soon start hiring ten skilled post-lauream researchers across Europe, with the aim of providing them with an exceptional plan of advanced training in particle physics, data analysis, statistics, machine learning, and more.


by Tommaso Dorigo at June 25, 2015 06:57 PM

Emily Lakdawalla - The Planetary Society Blog

Inclusive Astronomy Conference
Last week, more than 150 astronomers gathered in Nashville for a conference to examine fundamental questions in our field: Who gets to practice astronomy? How can we make astronomy more inclusive?

June 25, 2015 02:00 PM

Symmetrybreaking - Fermilab/SLAC

Exploring dark energy with robots

The Dark Energy Spectroscopic Instrument will produce a 3-D space map using a ‘hive’ of robots. 

Five thousand pencil-shaped robots, densely nested in a metal hive, whir to life with a precise, dizzying choreography. Small U-shaped heads swivel into a new arrangement in a matter of seconds.

This preprogrammed routine will play out about four times per hour every night at the Dark Energy Spectroscopic Instrument. The robots of DESI will be used to produce a 3-D map of one-third of the sky. This will help DESI fulfill its primary mission of investigating dark energy, a mysterious force thought to be causing the acceleration of the expansion of the universe.

The tiny robots will be arranged in 10 wedge-shaped metal “petals” that together form a cylinder about 2.6 feet across. They will maneuver the ends of fiber-optic cables to point at sets of galaxies and other bright objects in the universe. DESI will determine their distance from Earth based on the light they emit.

DESI’s robots are in development at Lawrence Berkeley National Laboratory, the lead in the DESI collaboration, and at the University of Michigan.

Courtesy of: DESI collaboration

The robots—each about 8 millimeters wide in their main section and 8 inches long—will be custom-built around commercially available motors measuring just 4 millimeters in diameter. This type of precision motor, at this size, became commercially available in 2013 and is now manufactured by three companies. The motors have found use in medical devices such as insulin pumps, surgical robots and diagnostic tools.

At DESI, the robots will automate what was formerly a painstaking manual process used at previous experiments. At the Baryon Oscillation Spectroscopic Survey, or BOSS, which began in 2009, technicians must plug 1000 fibers by hand several times each day into drilled metal plates, like operators plugging cables into old-fashioned telephone switchboards.

“DESI is exciting because all of that work will be done robotically,” says Risa Wechsler, a co-spokesperson for DESI and an associate professor of the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. Using the robots, DESI will be able to redirect all of its 5000 fibers in an elaborate dance in less than 30 seconds (see video).

“DESI definitely represents a new era,” Wechsler says.

In addition to precisely measuring the color of light emitted by space objects, DESI will also measure how the clustering of galaxies and quasars, which are very distant and bright objects, has evolved over time. It will calculate the distance for up to 25 million space objects, compared to the fewer than 2 million objects examined by BOSS.

The robots are designed to both collect and transmit light. After each repositioning of fibers, a special camera measures the alignment of each robot’s fiber-optic cable within thousandths of a millimeter. If the robots are misaligned, they are automatically individually repositioned to correct the error.

Each robot has its own electronics board and can shut off and turn on independently, says Joe Silber, an engineer at Berkeley Lab who manages the system that includes the robotic array.

In seven successive generations of prototype designs, Silber has worked to streamline and simplify the robots, trimming down their design from 60 parts to just 18. “It took a long time to really understand how to make these things as cheap and simple as possible,” he says. “We were trying not to get too clever with them.”

The plan is for DESI to begin a 5-year run at Kitt Peak National Observatory near Tucson, Arizona, in 2019. Berkeley and Michigan scientists plan to build a test batch of 500 robots early next year, and to build the rest in 2017 and 2018.

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. at June 25, 2015 01:00 PM

Clifford V. Johnson - Asymptotia

Speed Dating for Science!
Last night was amusing. I was at the YouTubeLA space with 6 other scientists from various fields, engaging with an audience of writers and other creators for YouTube, TV, film, etc. It was an event hosted by the Science and Entertainment Exchange and Youtube/Google, and the idea was that we each had seven minutes to present in seven successive rooms with different audiences in each, so changing rooms each seven minutes. Of course, early on during the planning conference call for the event, one of the scientists asked why it was not more efficient to simply have one large [...]

by Clifford at June 25, 2015 04:45 AM

June 24, 2015

Sean Carroll - Preposterous Universe

Algebra of the Infrared

In my senior year of college, when I was beginning to think seriously about graduate school, a magical article appeared in the New York Times magazine. Called “A Theory of Everything,” by KC Cole, it conveyed the immense excitement that had built in the theoretical physics community behind an idea that had suddenly exploded in popularity after burbling beneath the surface for a number of years: a little thing called “superstring theory.” The human-interest hook for the story was simple — work on string theory was being led by a brilliant 36-year-old genius, a guy named Ed Witten. It was enough to cement Princeton as the place I most wanted to go to for graduate school. (In the end, they didn’t let me in.)

Nearly thirty years later, Witten is still going strong. As evidence, check out this paper that recently appeared on the arxiv, with co-authors Davide Gaiotto and Greg Moore:

Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions
Davide Gaiotto, Gregory W. Moore, Edward Witten

We introduce a “web-based formalism” for describing the category of half-supersymmetric boundary conditions in 1+1 dimensional massive field theories with N=(2,2) supersymmetry and unbroken U(1)R symmetry. We show that the category can be completely constructed from data available in the far infrared, namely, the vacua, the central charges of soliton sectors, and the spaces of soliton states on ℝ, together with certain “interaction and boundary emission amplitudes”. These amplitudes are shown to satisfy a system of algebraic constraints related to the theory of A∞ and L∞ algebras. The web-based formalism also gives a method of finding the BPS states for the theory on a half-line and on an interval. We investigate half-supersymmetric interfaces between theories and show that they have, in a certain sense, an associative “operator product.” We derive a categorification of wall-crossing formulae. The example of Landau-Ginzburg theories is described in depth drawing on ideas from Morse theory, and its interpretation in terms of supersymmetric quantum mechanics. In this context we show that the web-based category is equivalent to a version of the Fukaya-Seidel A∞-category associated to a holomorphic Lefschetz fibration, and we describe unusual local operators that appear in massive Landau-Ginzburg theories. We indicate potential applications to the theory of surface defects in theories of class S and to the gauge-theoretic approach to knot homology.

I cannot, in good conscience, say that I understand very much about this new paper. It’s a kind of mathematical/formal field theory that is pretty far outside my bailiwick. (This is why scientists roll their eyes when a movie “physicist” is able to invent a unified field theory, build a time machine, and construct nanobots that can cure cancer. Specialization is real, folks!)

But there are two things about the paper that I nevertheless can’t help but remarking on. One is that it’s 429 pages long. I mean, damn. That’s a book, not a paper. Scuttlebutt informs me that the authors had to negotiate specially with the arxiv administrators just to upload the beast. Most amusingly, they knew perfectly well that a 400+ page work might come across as a little intimidating, so they wrote a summary paper!

An Introduction To The Web-Based Formalism
Davide Gaiotto, Gregory W. Moore, Edward Witten

This paper summarizes our rather lengthy paper, “Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions,” and is meant to be an informal, yet detailed, introduction and summary of that larger work.

This short, user-friendly introduction is a mere 45 pages — still longer than 95% of the papers in this field. After a one-paragraph introduction, the first words of the lighthearted summary paper are “Let X be a Kähler manifold, and W : X → C a holomorphic Morse function.” So maybe it’s not that informal.

The second remarkable thing is — hey look, there’s my name! Both of the papers cite one of my old works from when I was a grad student, with Simeon Hellerman and Mark Trodden. (A related paper was written near the same time by Gary Gibbons and Paul Townsend.)

Domain Wall Junctions are 1/4-BPS States
Sean M. Carroll, Simeon Hellerman, Mark Trodden

We study N=1 SUSY theories in four dimensions with multiple discrete vacua, which admit solitonic solutions describing segments of domain walls meeting at one-dimensional junctions. We show that there exist solutions preserving one quarter of the underlying supersymmetry — a single Hermitian supercharge. We derive a BPS bound for the masses of these solutions and construct a solution explicitly in a special case. The relevance to the confining phase of N=1 SUSY Yang-Mills and the M-theory/SYM relationship is discussed.

Simeon, who was a graduate student at UCSB at the time and is now faculty at the Kavli IPMU in Japan, was the driving force behind this paper. Mark and I had recently written a paper on different ways that topological defects could intersect and join together. Simeon, who is an expert in supersymmetry, noticed that there was a natural way to make something like that happen in supersymmetric theories: in particular, domain walls (sheets that stretch through space, separating different possible vacuum states) could intersect at “junctions.” Even better, domain-wall junction configurations would break some of the supersymmetry but not all of it. Setups like that are known as BPS states, and are highly valued and useful to supersymmetry aficionados. In general, solutions to quantum field theories are very difficult to find and characterize with any precision, but the BPS property lets you invoke some of the magic of supersymmetry to prove results that would otherwise be intractable.

Admittedly, the above paragraph is likely to be just as opaque to the person on the street as the Gaiotto/Moore/Witten paper is to me. The point is that we were able to study the behavior of domain walls and how they come together using some simple but elegant techniques in field theory. Think of drawing some configuration of walls as a network of lines in a plane. (All of the configurations we studied were invariant along some “vertical” direction in space, as well as static in time, so all the action happens in a two-dimensional plane.) Then we were able to investigate the set of all possible ways such walls could come together to form allowed solutions. Here’s an example, using walls that separate four different possible vacuum states:

[Figure: an example network of domain walls separating four different vacuum states]

As far as I understand it (remember — not that far!), this is a very baby version of what Gaiotto, Moore, and Witten have done. Like us, they look at a large-distance limit, worrying about how defects come together rather than the detailed profiles of the individual configurations. That’s the “infrared” in their title. Unlike us, they go way farther, down a road known as “categorification” of the solutions. In particular, they use a famous property of BPS states: you can multiply them together to get other BPS states. That’s the “algebra” of their title. To mathematicians, algebras aren’t just ways of “solving for x” in equations that tortured you in high school; they are mathematical structures describing sets of vectors that can be multiplied by each other to produce other vectors. (Complex numbers are an algebra; so are ordinary three-dimensional vectors, using the cross product operation.)
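
As a concrete illustration of that last sentence (a toy example, not anything from the paper): multiplying two elements of such an algebra always lands you back in the same space.

# Two familiar algebras: 3D vectors with the cross product, and complex numbers.
# In each case, "multiplying" two elements produces another element of the same kind.
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

w = np.cross(u, v)          # the "product" in this algebra
print(w)                    # [0. 0. 1.] -- again an ordinary 3D vector

print((1 + 2j) * (3 - 1j))  # (5+5j) -- the product of two complex numbers is complex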

At this point you’re allowed to ask: Why should I care? At least, why should I imagine putting in the work to read a 429-page opus about this stuff? For that matter, why did these smart guys put in the work to write such an opus?

It’s a reasonable question, but there’s also a reasonable answer. In theoretical physics there are a number of puzzles and unanswered questions that we are faced with, from “Why is the mass of the Higgs 125 GeV?” to “How does information escape from black holes?” Really these are all different sub-questions of the big one, “How does Nature work?” By construction, we don’t know the answer to these questions — if we did, we’d move onto other ones. But we don’t even know the right way to go about getting the answers. When Einstein started thinking about fitting gravity into the framework of special relativity, Riemannian geometry was absolutely the last thing on his mind. It’s hard to tell what paths you’ll have to walk down to get to the final answer.

So there are different techniques. Some people will try a direct approach: if you want to know how information comes out of a black hole, think as hard as you can about what happens when black holes radiate. If you want to know why the Higgs mass is what it is, think as hard as you can about the Higgs field and other possible fields we haven’t yet found.

But there’s also a more patient, foundational approach. Quantum field theory is hard; to be honest, we don’t understand it all that well. There’s little question that there’s a lot to be learned by studying the fundamental behavior of quantum field theories in highly idealized contexts, if only to better understand the space of things that can possibly happen with an eye to eventually applying them to the real world. That, I suspect, is the kind of motivation behind a massive undertaking like this. I don’t want to speak for the authors; maybe they just thought the math was cool and had fun learning about these highly unrealistic (but still extremely rich) toy models. But the ultimate goal is to learn some basic wisdom that we will someday put to use in answering that underlying question: How does Nature work?

As I said, it’s not really my bag. I don’t have nearly the patience nor the mathematical aptitude required to make real progress in this kind of way. I’d rather try to work out on general principles what could have happened near the Big Bang, or how our classical world emerges out of the quantum wave function.

But, let a thousand flowers bloom! Gaiotto, Moore, and Witten certainly know what they’re doing, and hardly need to look for my approval. It’s one strategy among many, and as a community we’re smart enough to probe in a number of different directions. Hopefully this approach will revolutionize our understanding of quantum field theory — and at my retirement party everyone will be asking me why I didn’t stick to working on domain-wall junctions.

by Sean Carroll at June 24, 2015 09:48 PM

ZapperZ - Physics and Physicists

Gravitational Lensing
Here's a simple intro to gravitational lensing, if you are not familiar with it.



Zz.

by ZapperZ (noreply@blogger.com) at June 24, 2015 06:29 PM

Symmetrybreaking - Fermilab/SLAC

Seeing in gamma rays

The latest sky maps produced by the Fermi Gamma-ray Space Telescope combine seven years of observations.

Maps from the Fermi Gamma-ray Space Telescope literally show the universe in a different light.

Today Fermi’s Large Area Telescope (LAT) collaboration released the latest data from nearly seven years of watching the universe at a broad range of gamma-ray energies.

Gamma rays are the highest-energy form of light in the cosmos. They come from jets of high-energy particles accelerated near supermassive black holes at the centers of galaxies, shock waves around exploded stars, and the intense magnetic fields of fast-spinning collapsed stars. On Earth, gamma rays are produced by nuclear reactors, lightning and the decay of radioactive elements.

From low-Earth orbit, the Fermi Gamma-ray Space Telescope scans the entire sky for gamma rays every three hours. It captures new and recurring sources of gamma rays at different energies, and it can be diverted from its usual course to fix on explosive events known as gamma-ray bursts.

Combining data collected over years, the LAT collaboration periodically creates gamma-ray maps of the universe. These colored maps plot the universe’s most extreme events and high-energy objects.

The all-sky maps typically portray the universe as an ellipse that shows the entire sky at once, as viewed from Earth. On the maps, the brightest gamma-ray light is shown in yellow, and progressively dimmer gamma-ray light is shown in red, blue, and black. These are false colors, though; gamma rays are invisible to the human eye.

The maps are oriented with the center of the Milky Way at their center and the plane of our galaxy oriented horizontally across the middle.  The plane of the Milky Way is bright in gamma rays. Above and below the bright band, much of the gamma-ray light comes from outside of our galaxy.
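
For readers who want to see the projection itself, here is a minimal sketch (plain matplotlib with randomly generated fake "sources", not Fermi data or the LAT pipeline) of an all-sky ellipse of this kind:

# Sketch of an all-sky map in Galactic coordinates: a Mollweide ellipse with the
# Galactic centre in the middle and the plane running horizontally.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 2000
l = rng.uniform(-np.pi, np.pi, n)    # fake Galactic longitudes (radians)
b = rng.normal(0.0, 0.15, n)         # fake latitudes, concentrated near the plane
flux = rng.exponential(1.0, n)       # fake brightnesses

fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(l, b, c=flux, s=2, cmap="inferno")   # bright = yellow, dim = dark
ax.grid(True)
plt.show()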

“What you see in gamma rays is not so predictable,” says Elliott Bloom, a SLAC National Accelerator Laboratory professor and member of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) who is part of a scientific collaboration supporting Fermi’s principal instrument, the Large Area Telescope.

Teams of researchers have identified mysterious, massive “bubbles” blooming 30,000 light-years outward from our galaxy’s center, for example, with most features appearing only at gamma-ray wavelengths.

Scientists create several versions of the Fermi sky maps. Some of them focus only on a specific energy range, says Eric Charles, another member of the Fermi collaboration who is also a KIPAC scientist.

“You learn a lot by correlating things in different energy ‘bins,’” he says. “If you look at another map and see completely different things, then there may be these different processes. What becomes useful is at different wavelengths you can make comparisons and correlate things.”

But sometimes what you need is the big picture, says Seth Digel, a SLAC senior staff scientist and a member of KIPAC and the Fermi team. “There are some aspects you can only study with maps, such as looking at the extended gamma-ray emissions—not just the point sources, but regions of the sky that are glowing in gamma rays for different reasons.”


by Glenn Roberts Jr. at June 24, 2015 04:13 PM

arXiv blog

How Machine Vision Solved One of the Great Mysteries of 20th-Century Surrealist Art

The great Belgian surrealist Magritte painted two versions of one of his masterpieces, and nobody has been able to distinguish the original from the copy. Until now.


In 1983, a painting by the Belgian surrealist René Magritte came up for auction in New York. The artwork was painted in 1948 and depicts a bird of prey morphing into a leaf that is being eaten by a caterpillar, perhaps an expression of sorrow for the Second World War, which Magritte spent in occupied Belgium.

June 24, 2015 04:00 PM

ATLAS Experiment

Faster and Faster!
Simon Ammann from Switzerland starts from the hill during the training jump of the second station of the four hills ski jumping tournament in Garmisch-Partenkirchen, southern Germany, on Thursday, Dec. 31, 2009. (AP Photo/ Matthias Schrader)

Faster and Faster! This is how things go as soon as LS1 ends and the first collisions of LHC Run 2 approach. As you might have noticed, at particle physics experiments we LOVE acronyms! LS1 stands for the first Long Shutdown of the Large Hadron Collider.

After the end of Run 1 collisions in March 2013 we had two full years of repairs, consolidations and upgrades of the ATLAS detector. Elevators at P1 (that is Point 1, one of the 8 zones where we can get access to the LHC tunnel located 100 m underground) were once again as crowded as elevator shafts in a coal mine. Although all the activities were well planned, in the final days the pace was frenetic and we had the impression that the number of items on our to-do lists was growing rather than shrinking.

Finally, last week I was sitting in the ACR (ATLAS Control Room) with experts, shifters, run coordinators, and the ATLAS spokesperson for the first fills of the LHC that produced “low-luminosity collisions”. You might think that, for a collider designed to reach a record instantaneous luminosity (that is, the rate of collisions in a given amount of time), last week’s collisions were just a warm-up.

Well, this is not entirely true.

Racks in USA15 (100 m underground) hosting the trigger electronics for the selection of minimum-bias collisions (rack in the foreground, with brown cables). In the background (with thick black cables), the electronics for the calorimetric trigger. (Picture by the author.)

Last week we had the unique opportunity to collect data under very particular beam conditions that we call “low pile-up”. That means that every time the bunches of protons pass through each other, the protons have only a very small probability of actually colliding. What matters is that the probability of having two or more collisions in the same bunch crossing is negligible, since we are only interested in crossings that produce a single collision. These data are fundamental for performing a variety of physics measurements (a back-of-the-envelope sketch of the pile-up arithmetic follows the list below).

Just to cite a few of them:

  • the measurement of the proton-proton cross section (“how large are the protons?”) at the new center of mass energy of 13 TeV;
  • the study of diffractive processes in proton-proton collisions (YES, protons are waves also!); and
  • the characterization of “minimum bias” collisions (these represent the overwhelming majority of collisions and are just the “opposite” of collisions that produce top quarks, Higgs bosons and possibly exotic or supersymmetric particles), which are a key ingredient for tuning the Monte Carlo simulations that will be used for all physics analyses in ATLAS (including Higgs physics and Beyond Standard Model searches).
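
To see why “low pile-up” makes the crossings so clean, here is a back-of-the-envelope sketch; the value of mu below is purely illustrative (the real mean interaction rate is set by the machine), and the number of collisions per bunch crossing roughly follows a Poisson distribution.

# Poisson pile-up arithmetic with an assumed mean number of interactions per crossing.
import math

def poisson_p(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

mu = 0.05  # assumed mean interactions per bunch crossing (illustrative only)

p_exactly_one = poisson_p(1, mu)
p_two_or_more = 1.0 - poisson_p(0, mu) - poisson_p(1, mu)

print(f"P(exactly 1 collision) = {p_exactly_one:.4f}")
print(f"P(2 or more, i.e. pile-up) = {p_two_or_more:.6f}")
# With mu = 0.05, only about 1 crossing in 800 contains more than one collision,
# which is what makes such data so clean for minimum-bias studies.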

Over the past few months, I’ve been coordinating a working group with people around the world (Italy, Poland, China, UK, and US) – none of them resident full time at CERN – who are responsible for the on-line selection of these events (we call this the trigger). Although we meet weekly (not trivial given the different time zones) and regularly exchange e-mails, I had never met these people face to face. It was strange to finally see their faces in a meeting room at CERN, although I could recognize their voices.

Clint Eastwood in “Per Qualche Dollaro in Più” (For a Few Dollars More), directed by Sergio Leone (Produzioni Europee Associate and United Artists)

We worked very hard during the last week of data-taking, trying to be prepared for every scenario and problem we might encounter. There was no room for mistakes that could spoil the quality of the data.

We cannot press the “replay” button.

It was like “one shot, one kill”.

Luckily everything ran smoothly; there weren’t many issues, and none of them were severe.

This is only one of the activities in which my institution, the Istituto Nazionale di Fisica Nucleare (Sezione di Bologna), the University of Bologna, and the other 12 Italian ATLAS groups were involved during the Run 2 start-up of the LHC.


Antonio Sidoti is a physicist in Bologna (Italy) at the Istituto Nazionale di Fisica Nucleare. His research includes searches for Higgs boson production in association with top quarks, upgrade studies for the new inner tracker, and trigger software development using graphics processing units. He coordinates the ATLAS Minimum Bias and Forward Detector Trigger Signature group and is now deputy coordinator of physics analysis for the Italian groups in ATLAS. When he is not working he plays piano, runs marathons, skis or windsurfs.

by Antonio Sidoti at June 24, 2015 01:47 PM

CERN Bulletin

CERN Bulletin Issue No. 26-27/2015
Link to e-Bulletin Issue No. 26-27/2015
Link to all articles in this issue

June 24, 2015 01:42 PM

Emily Lakdawalla - The Planetary Society Blog

What to expect when you're expecting a flyby: Planning your July around New Horizons' Pluto Pictures (version 2)
Three months ago, I posted an article explaining what to expect during the flyby. This is a revised version of the same post, with some errors corrected, the expected sizes of Nix and Hydra updated, and times of press briefings added.

June 24, 2015 12:57 PM

CERN Bulletin

CERN Bulletin Issue No. 24-25/2015
Link to e-Bulletin Issue No. 24-25/2015
Link to all articles in this issue

June 24, 2015 12:20 PM

Lubos Motl - string vacua and pheno

CMS cooling magnet glitch not serious, detector will run


Two weeks ago, Adam Falkowski propagated the following Twitter rumor:
LHC rumor: serious problems with the CMS magnet. Possibly, little to none useful data from CMS this year.
Fortunately, this proposition seems to be heavily exaggerated fearmongering at this point.




On June 14th, CMS published this news:
CMS is preparing for high-luminosity run at 13 TeV
Jester's rumor was based on a true fact but all of its "important" claims were wrong.




The CMS detector has had a problem with the magnet cooling system. After some time, a problem was found in the machinery that feeds liquid helium to the system: something was wrong with oil that had reached the cold box, a component in the initial compression stage.

Repairs have shown that the CMS magnet itself has not been contaminated by the oil, which means that the problem was superficial and has hopefully been fixed by now. Between June 15th and 19th, the LHC went into a "technical stop" – and the LHC schedule also says that from last Saturday to this Sunday the LHC is scrubbing for 50 ns operation – but once that is over, CMS should be back doing all of its work again.

Starting from next Monday, the LHC should be ramping up the intensity with the 50 ns beam for 3 weeks. July 20th-24th will be "machine development". The following 14 days will be "scrubbing for 25 ns operation". The intensity ramp-up with the 25 ns beam will begin on August 8th.

by Luboš Motl (noreply@blogger.com) at June 24, 2015 08:48 AM

June 23, 2015

astrobites - astro-ph reader's digest

Close the door and break it down during protostellar collapse

Title: Simulations of protostellar collapse using multigroup radiation hydrodynamics – The first collapse and The second collapse

Authors: N. Vaytet, G. Chabrier, E. Audit, B. Commerçon, J. Masson, J. Ferguson, F. Delahave

First author’s affiliation: AA (École Normale Supérieure de Lyon)

Date of publication: 2012 and 2013

Paper status: published in A&A

 

Drama scene

Set: Party

Characters: an astronomer and another guest, with no connection to astronomy

The astronomer and the guest meet, feel attracted to each other, and at the end of the party they decide to leave together.

Guest (making eyes at the astronomer): “Let’s sit down for a while and you can tell me more about the stars!”

Astronomer (with shining eyes): “Well, do you know how star formation works?”

Guest (slightly surprised): “… No, I don’t. Tell me.” (ogling even stronger now)

Astronomer (enthusiastic): “It’s really fascinating and not so difficult to understand! A star forms mainly due to gravitational collapse. Well, there is more to it, but as an assumption, you can ….”

Guest (thinking): “This is hopeless. I’ll sleep alone tonight.”

If you recognize yourself as the astronomer in the above scene, then you’re lucky; today’s bite deals with the details of the collapsing phase of star formation, especially the formation of the first and second core.

Phases of star formation described by open, closed and broken doors

As an analogy, imagine the collapse to a protostar as the following sequence: first closing a door, then breaking it down. You’ll see what I mean in a moment. As the astronomer in the scene above said, stars form through the gravitational collapse of dense cores. While the gas inside the core collapses, the density at the center increases. Eventually, the gas becomes so dense that it is optically thick to its own radiation. What does that mean? It means that the energy produced in the collapse can no longer be radiated away from the inner region and is absorbed by the surrounding gas instead; the door is closed for radiation trying to leave the core. Astrophysicists say that this is when the first core has formed. Since the energy can no longer escape the collapsing gas, the first core starts to heat up, and the rising pressure counteracts the gravitational contraction. This phase ends when molecular hydrogen (H2) starts to break apart (dissociate) at a temperature of about 2000 K. The break-up of molecular hydrogen soaks up energy and slows the rise in temperature significantly, so the core undergoes a second, nearly isothermal, collapse phase. Loosely speaking: the door is broken down, and the collapse proceeds until all the molecular hydrogen is dissociated. After the dissociation phase, the second collapse is over and the second core has formed. A new door has been installed, and the core again contracts slowly, since heating counteracts gravity (see Fig. 1).

Fig. 1: The figure illustrates the evolution of star formation from the first collapse until the formation of the second core. The different lines show results from different models: the dashed lines are from previous models, while the solid lines illustrate the authors’ results. The solid red line shows the curve for their multi-frequency approach and the solid black line a frequency-independent, so-called grey, approach. As you can see, the results do not differ significantly between the two. (The figure is Fig. 6 in the original paper dealing with the second core.)

Results of the paper: Frequency dependence is unimportant and second core properties are independent of initial conditions

The entire process involves a lot of physics, and the authors focus on how the energy generated by the compression of the gas is transported during the collapse of the dense core. In astrophysical terms, this form of transport is called radiative transfer. What sets the authors’ work apart from previous studies is that they take into account the fact that the emission of energy differs between frequencies. Previous studies assumed the emissivity of the gas to be the same at all frequencies, the so-called grey assumption. To avoid using too much of their limited computing resources, the authors approximate the frequency dependence by grouping frequencies into a handful of frequency bins. Wondering what a “bin” is? Consider the following: instead of treating the numbers 1, 2, 3, …, 99, 100 individually, we lump together 1-50 and 51-100, cutting the number of calculations from 100 down to 2. That’s what the authors do. However, they find that there actually is no significant difference between the frequency-dependent treatment and a simple grey approach, although they point out that they do expect frequency dependence to matter when modelling the evolution after the second core.

Vaytet et al. consider the collapse of 0.1, 1 and 10 Msun clouds and find that first cores have typical sizes of about 10 AU, lifetimes of a bit more than 1000 years, and contain about 2% of a solar mass, with slightly larger numbers for larger initial masses. The properties of the second core turn out to be completely independent of the initial conditions, and the authors estimate typical masses of about 10⁻³ Msun and radii of about 3×10⁻³ AU.
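
Here is a toy version of that binning idea (a sketch in Python, not the authors’ multigroup code), just to show how grouping frequencies cuts the number of calculations:

# Grey vs. binned ("multigroup") treatment of a frequency-dependent quantity.
import numpy as np

frequencies = np.linspace(1.0, 100.0, 100)   # 100 individual frequencies
emissivity = 1.0 / frequencies               # some made-up frequency-dependent quantity

# Grey approach: one averaged value, one calculation.
grey_value = emissivity.mean()

# Multigroup approach: e.g. two bins (1-50 and 51-100), two calculations.
bin_edges = [0, 50, 100]
binned_values = [emissivity[bin_edges[i]:bin_edges[i + 1]].mean()
                 for i in range(len(bin_edges) - 1)]

print(grey_value, binned_values)
# 100 frequencies -> 2 bins: the cost drops enormously, and the authors find that,
# for the first and second collapse, the answer barely changes.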

Conclusion

Two results are interesting here. First, they suggest that modellers can get away with ignoring the frequency dependence and so save computational time. Second, the properties of the second core seem to be independent of the initial core conditions. This would mean that similar stars might evolve from very different initial conditions. (Just as happens in real life: Obama and Bush have very different backgrounds, and yet both became president.) Thus, it might be that the differences in stellar mass and size actually stem from processes after the second collapse, a question which can probably be answered in the not-too-distant future using more sophisticated three-dimensional simulations (and extra computational time).

by Michael Küffmeier at June 23, 2015 06:34 PM

Symmetrybreaking - Fermilab/SLAC

Bringing neutrino research back to India

The India-based Neutrino Observatory will provide a home base for Indian particle physicists.

Pottipuram, a village in southern India, is mostly known for its farming. Goats graze on the mountains and fields yield modest harvests of millets and pulses.

Earlier this year, Pottipuram became known for something else: The government announced that, nearby, scientists will construct a new research facility that will advance particle physics in India.

A legacy of discovery

From 1951 to 1992, Indian scientists studied neutrinos and muons in a facility located deep within what was then one of the largest active gold mines in the world, the Kolar Gold Fields.

The lab hosted international collaborations, including one that discovered atmospheric neutrinos—elusive particles that shoot out of collisions between cosmic rays and our atmosphere. The underground facility also served as a training ground for young and aspiring particle physicists.

But when the gold reserves dwindled, the mining operations pulled out. And the lab, unable to maintain a vast network of tunnels on its own, shut down, too. Indian particle physicists who wanted to do science in their country had to switch to a related field, such as nuclear physics or materials science.

Almost immediately after the closure of the Kolar lab, plans began to take shape to build a new place to study fundamental particles and forces. Physicist Naba Mondal of the Tata Institute of Fundamental Research in Mumbai, who had researched at Kolar, worked with other scientists to build a collaboration—informally at first, and then officially in 2002. They now count as partners scientists from 21 universities and research institutions across India.

The facility they plan to build is called the India-based Neutrino Observatory.

Mondal, who leads the INO collaboration, has high hopes the facility will give Indian particle physics students the chance to do first-class research at home.

“They can't all go to CERN or Fermilab,” he says. “If we want to attract them to science, we have to have experimental facilities right here in the country.”

INO collaboration meeting 2014, at Iichep Madurai.

Courtesy of: India-based Neutrino Observatory

Finding a place

INO will house large detectors that will catch particles called neutrinos.

Neutrinos are produced by a variety of processes in nature and hardly ever interact with other matter; they are constantly streaming through us. But they’re not the only particles raining down on us from space. There are also protons and atomic nuclei coming from cosmic rays.

To study neutrinos, scientists need a way to pick them out from the crowd. INO scientists want to do this by building their detectors inside a mountain, shielded by layers of rock that can stop cosmic ray particles but not the slippery neutrinos.

Rock is especially dense in the remote, monolithic hills near Pottipuram. So, the scientists set about asking the village for their blessing to build there.

This posed a challenge to Mondal. India is a large country with 22 recognized regional languages. Mondal grew up in West Bengal, near Kolkata, more than 1200 miles away from Pottipuram and speaks Bengali, Hindi and English. The residents of Pottipuram speak Tamil.

Luckily, some of Mondal’s colleagues speak Tamil, too.

One such colleague is D. Indumathi of the Institute of Mathematical Sciences in Chennai. Indumathi spent more than 5 years coordinating a physics subgroup working on designing INO’s proposed main detector, a 50,000-ton, magnetized stack of iron plates and scintillator. But her abilities and interests extend beyond the pure physics of the project.

“I like talking about science to people,” she says. “I get very involved, and I am very passionate about it. So in that sense [outreach] was also a role that I could naturally take up.”

She spent about one year talking with residents of Pottipuram, fielding questions about whether the experiment would produce a radiation hazard (it won’t) and whether the goats would continue to have access to the mountain (they will). In the end, the village consented to the construction.

Courtesy of: India-based Neutrino Observatory

Neutrino physics for a new generation

Young people have shown the most interest in INO, Indumathi says. Students in both college and high school are tantalized by these particles that might throw light on yet unanswered questions about the evolution of the universe. They enjoy discussing research ideas that haven’t even found their way into their textbooks.

“[There] is a tremendous feeling of wanting to participate—to be a part of this lab that is going to come up in their midst,” Indumathi says.

Student S. Pethuraj, from another village in Tamil Nadu, first heard about INO when he attended a series of lectures by Mondal and other scientists in his second year of what was supposed to be a terminal master’s degree at Madurai Kamaraj University.

Pethuraj connected with the professors and arranged to take a winter course from them on particle physics.

“After their lectures my mind was fully trapped in particle physics,” he says.

Pethuraj applied and was accepted to a PhD program at the Tata Institute of Fundamental Research expressly designed as preparation for INO studies. He is now completing coursework.

“INO is giving me cutting-edge research experience in experimental physics and instrumentation,” he says. “This experience creates in me a lot of confidence in handling and understanding the experiments.”

Other young people are getting involved with engineering at INO. The collaboration has already hired recent graduates to help design the many intricate detector systems involved in such a massive undertaking.

The impact of the INO will only increase after its construction, especially for those who will have the lab in their backyard, Mondal says.

“The students from the area—they will visit and talk to the scientists there and get an idea about how science is being done,” he says. “That will change even the culture of doing science.”

 


by Troy Rummler at June 23, 2015 01:00 PM

astrobites - astro-ph reader's digest

Flinging Asteroids into the Habitable Zone

Roughly a quarter of the exoplanets we have discovered are close to or residing within their host star’s habitable zone (HZ). The habitable zone is defined as the range of distances from a host star at which a planet can harbor liquid water on its surface. There are two main sources of liquid water on planets: outgassing from the planet’s interior, or water delivered by impacts from asteroids and comets. Some planets within the HZ may form almost completely devoid of water, so the authors of this paper investigate the possibility of asteroids transporting water into the HZ during the late stages of planetary formation.

The authors model an asteroid debris disk as a ring of 10,000 asteroids (with a total water content equal to 200 times the mass of ocean water on Earth), and then simulate collisions between them to obtain a final mass distribution of asteroids. These asteroids are flung into the habitable zone as their orbits are perturbed and made increasingly eccentric by the host stars and an orbiting gas giant planet (e.g. a massive planet similar to Jupiter). Over time, the asteroid belt becomes depopulated by these ejections and by mutual collisions.

In a binary star system, planetary orbits are tricky to compute, since they are generally less stable than in a single-star system. It is similarly difficult to define a habitable zone, since there are radiation contributions from two host stars. For their simulation, the authors define the permanent habitable zone as the region in which a planet always receives, on average, the right amount of radiation to sustain liquid water on its surface.
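
Schematically, the bookkeeping looks like the following sketch (all luminosities, distances and flux limits below are invented for illustration; this is not the authors’ code):

# Check whether the combined flux from both stars stays within assumed HZ limits.
import numpy as np

L1, L2 = 1.0, 0.3              # assumed stellar luminosities (solar units)
S_inner, S_outer = 1.1, 0.35   # assumed flux limits, in units of Earth's insolation

def insolation(d1, d2):
    """Total flux on the planet, given its distances (in AU) to each star."""
    return L1 / d1**2 + L2 / d2**2

# Sample the planet's distances to each star over one made-up orbit.
phase = np.linspace(0, 2 * np.pi, 500)
d1 = 1.15 + 0.05 * np.sin(phase)
d2 = 2.00 + 0.30 * np.cos(phase)

S = insolation(d1, d2)
always_habitable = (S < S_inner).all() and (S > S_outer).all()
print(f"flux range: {S.min():.2f} to {S.max():.2f} Earth insolations; "
      f"permanently in the HZ: {always_habitable}")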

By tracking the number of asteroids ejected from the asteroid belt and entering the HZ, the simulations show that at most about 70 Earth oceans’ worth of water (or 35% of the water available in the asteroid belt) can be transported to the HZ within a 10 Myr timescale (see Fig. 1). However, this is a very short simulation time compared to the usual planetary formation timescale of 100 Myr, so these results are biased towards fast and efficient transport of water. The authors also take into account water mass loss due to impacts between asteroids and to ice sublimation driven by radiation from the host stars, but they assume that all the water transported into the HZ is delivered to the planet residing there.

Fig. 1: The number of Earth’s oceans worth of water transported by asteroids into the habitable zone. The data are divided by orbital eccentricity (e_b) and semi-major axis (a_b) of the companion. “B” indicates a binary star system, and “S” indicates a single star. The colors indicate different sub-regions of the habitable zone into which asteroids cross.

The simulations also show that in a single-star system an asteroid needs up to 20 times longer (compared to a planet in a binary star system) to reach the HZ (see Fig. 1). This is because asteroids are not sufficiently perturbed by the gas giant alone in a single-star system, whereas a binary star system has enough dynamical instability to produce frequent perturbations and increase the probability of an asteroid crossing into the HZ severalfold.

Of course, this leaves the question of whether water transport via asteroids is a viable mechanism for supplying a single star planet system (like our own Earth) with liquid water. There are currently still several competing hypotheses as to how our planet obtained its water supply, but these sorts of simulations should shed light on the feasibility of water transport through impacting bodies.

by Anson Lam at June 23, 2015 09:00 AM

June 22, 2015

Lubos Motl - string vacua and pheno

Strings 2015: India


I think that a cost-benefit analysis implies that it's not a good idea for me to describe most of the talks at the annual string theorists' conference. If there are volunteers, especially among the participants, I will be happy to publish their observations, however.




The conference is taking place this week, from Monday through Friday, at the Tata Institute in Bengalúru, India. I usually prefer the traditional "colonial" European names of major non-European cities – Peking, New Amsterdam etc. – but the Indian-Czech name Bengalúru simply sounds better than its old English parody (up to 2006), Bangalore. ;-)




Here are the basic URLs:
Strings 2015: main web page
Strings 2015: talk titles and links to PDF files (and links to separate pages of the talks, with coordinates etc.)
I am sure the readers and clickers who know how to read and click may find the other pages once they read and click. ;-) I have looked at several of the PDF files that have already appeared. They are very interesting. It is not yet clear to me whether videos will be posted somewhere, too.

There are rumors that a well-known author is just completing the book How the Conservatives [Not Hippies] Saved Physics but I am afraid that you shouldn't trust everything on the Internet. ;-)

by Luboš Motl (noreply@blogger.com) at June 22, 2015 05:08 PM

arXiv blog

Data Mining Reveals the Surprising Factors Behind Successful Movies

The secret to making profitable movies will amaze you. (Spoiler: it’s not hiring top box office stars.)

June 22, 2015 04:00 PM

Quantum Diaries

LARP completes first successful test of High-Luminosity LHC coil

This article appeared in Fermilab Today on June 22, 2015.

Steve Gould of the Fermilab Technical Division prepares a cold test of a short quadrupole coil. The coil is of the type that would go into the High-Luminosity LHC. Photo: Reidar Hahn

Last month, a group collaborating across four national laboratories completed the first successful tests of a superconducting coil in preparation for the future high-luminosity upgrade of the Large Hadron Collider, or HL-LHC. These tests indicate that the magnet design may be adequate for its intended use.

Physicists, engineers and technicians of the U.S. LHC Accelerator Research Program (LARP) are working to produce the powerful magnets that will become part of the HL-LHC, scheduled to start up around 2025. The plan for this upgrade is to increase the particle collision rate, or luminosity, by approximately a factor of 10, thereby expanding the collider’s physics reach by producing 10 times more data.

“The upgrade will help us get closer to new physics. If we see something with the current run, we’ll need more data to get a clear picture. If we don’t find anything, more data may help us to see something new,” said Technical Division’s Giorgio Ambrosio, leader of the LARP magnet effort.

LARP is developing more advanced quadrupole magnets, which are used to focus particle beams. These magnets will have larger beam apertures and the ability to produce higher magnetic fields than those at the current LHC.

The Department of Energy established LARP in 2003 to contribute to LHC commissioning and prepare for upgrades. LARP includes Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and SLAC. Its members began developing the technology for advanced large-aperture quadrupole magnets around 2004.

The superconducting magnets currently in use at the LHC are made from niobium titanium, which has proven to be a very effective material to date. However, they will not be able to support the higher magnetic fields and larger apertures the collider needs to achieve higher luminosities. To push these limits, LARP scientists and engineers turned to a different material, niobium tin.

Niobium tin was discovered before niobium titanium. However, it has not yet been used in accelerators because, unlike niobium titanium, niobium tin is very brittle, making it susceptible to mechanical damage. To be used in high-energy accelerators, these magnets need to withstand large amounts of force, making them difficult to engineer.

LARP worked on this challenge for almost 10 years and went through a number of model magnets before it successfully started the fabrication of coils for 150-millimeter-aperture quadrupoles. Four coils are required for each quadrupole.

LARP and CERN collaborated closely on the design of the coils. After the first coil was built in the United States earlier this year, the LARP team successfully tested it in a magnetic mirror structure. The mirror structure makes possible tests of individual coils under magnetic field conditions similar to those of a quadrupole magnet. At 1.9 Kelvin, the coil exceeded 19 kiloamps, 15 percent above the operating current.

The team also demonstrated that the coil was protected from the stresses and heat generated during a quench, the rapid transition from superconducting to normal state.

“The fact that the very first test of the magnet was successful was based on the experience of many years,” said TD’s Guram Chlachidze, test coordinator for the magnets. “This knowledge and experience is well recognized by the magnet world.”

Over the next few months, LARP members plan to test the completed quadrupole magnet.

“This was a success for both the people building the magnets and the people testing the magnets,” said Fermilab scientist Giorgio Apollinari, head of LARP. “We still have a mountain to climb, but now we know we have all the right equipment at our disposal and that the first step was in the right direction.”

Diana Kwon

by Fermilab at June 22, 2015 03:56 PM

Lubos Motl - string vacua and pheno

Glimpsed particles that the LHC may confirm
The LHC is back in business. Many of us watched the webcast today. There was a one-hour delay at the beginning. Then they lost the beam once. And things went pretty much smoothly afterwards. After a 30-month coffee break, the collider is collecting actual data, to be used in future papers, at a center-of-mass energy of \(13\TeV\).

So far, no black hole has destroyed the Earth.



It's possible that the LHC will discover nothing new, at least for years. But it is in no way inevitable. I would say that it's not even "very likely". We have various theoretical reasons to expect one discovery or another. A theory-independent vague argument is that the electroweak scale has no deep reason to be too special. And every time we added an order of magnitude to the energies, we saw something new.




But in this blog post, I would like to recall some excesses – inconclusive but tantalizing upward deviations from the Standard Model predictions – that have been mentioned on this blog. Most of them emerged from ATLAS or CMS analyses at the LHC. Some of them may be confirmed soon.




Please submit your corrections if some of the "hopeful hints" have been killed. And please submit those that I forgot.

The hints below will be approximately sorted, starting with those that I consider most convincing at this moment. The energy at the beginning of each entry is the estimated mass of the new particle.
I omitted LHC hints older than November 2011 but you may see that the number of possible deviations has been nontrivial.



The most accurate photographs of the Standard Model's elementary particles provided by CERN so far. The zoo may have to be expanded.

Stay tuned.

by Luboš Motl (noreply@blogger.com) at June 22, 2015 06:45 AM

June 21, 2015

Tommaso Dorigo - Scientificblogging

Seeing Jupiter In Daylight
Have you ever seen Venus in full daylight? It's a fun experience. Of course we are accustomed to seeing even a small crescent Moon in daylight - it is large and, although the same colour as the clouds, it cannot be missed in a clear sky. But Venus is a small dot, and although it can be quite bright after sunset or before dawn, during the day it is just an inconspicuous, tiny white dot which you never see unless you look exactly in its direction.

read more

by Tommaso Dorigo at June 21, 2015 09:07 PM

The n-Category Cafe

What's so HoTT about Formalization?

In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect that some readers will be as grateful as I).

As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.

Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when such is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?

To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.

Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.

  1. HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.

  2. HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs. (A toy illustration of the Curry-Howard idea appears just after this list.)

  3. One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.

  4. Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strange-looking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.

  5. Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTT-like type theory.

  6. Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)
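
As a toy illustration of the Curry-Howard idea from point 2 (written in Python’s type-hint syntax purely for familiarity; this is not HoTT, and a real proof assistant enforces the types rather than merely annotating them): a constructive proof of a proposition is a program whose type encodes that proposition.

# Proofs as programs: the type is the proposition, the program is the proof.
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# "A and B implies A": the proof is the first projection.
def proj1(p: Tuple[A, B]) -> A:
    return p[0]

# "(A implies B) and (B implies C) implies (A implies C)": the proof is composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# There is no well-typed program of type Callable[[], A] for an arbitrary A,
# mirroring the fact that an arbitrary proposition cannot be proven.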

There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.

  1. Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no pre-written libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)

  2. Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.

  3. I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.

  4. Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.

  5. Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types; both are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all \(x:W\), \(P(x)\)”, it suffices to prove some number of other statements involving the predicate \(P\). The most familiar example is induction on the natural numbers, which says that in order to prove “for all \(n\in\mathbb{N}\), \(P(n)\)” it suffices to prove \(P(0)\) and “for all \(n\in\mathbb{N}\), if \(P(n)\) then \(P(n+1)\)”. When using proof by induction, you need to isolate \(P\) as a predicate on \(n\), specialize to \(n=0\) to check the base case, write down \(P(n)\) as the inductive hypothesis, then replace \(n\) by \(n+1\) to find what you have to prove in the induction step. The students in an intro-to-proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of \(P\), and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven. But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type \(W\)” and it immediately decides what \(P\) is and presents you with a list of the remaining goals that have to be proven.
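
To make the “eliminator” idea concrete, here is a toy recursor for the natural numbers (ordinary Python standing in for a proof assistant; the analogy, not the code, is the point):

# A recursor/eliminator for the natural numbers: to define (or prove) something
# for every n, supply the n = 0 case and the "given the result for n, produce
# the result for n + 1" case; the recursor does the bookkeeping.
def nat_rec(base, step, n):
    """Eliminator for the natural numbers."""
    result = base
    for k in range(n):
        result = step(k, result)
    return result

# Example: addition defined by recursion on the second argument.
def add(m, n):
    return nat_rec(m, lambda k, acc: acc + 1, n)

print(add(3, 4))  # 7
# A proof assistant plays the same game at the level of propositions: you invoke
# the induction principle for a (higher) inductive type, and it works out the
# predicate and the list of remaining cases for you.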

To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most user-friendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.

by shulman (viritrilbia@gmail.com) at June 21, 2015 06:01 AM

June 20, 2015

ZapperZ - Physics and Physicists

Quantum Superposition Destroyed By Gravitational Time Dilation?
This is another interesting take on why we see our world classically and not quantum mechanically. Gravitational time dilation is enough to destroy coherent states that maintain superposition.

With this premise, the team worked out that even the Earth's gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond. 
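
For scale, here is a minimal sketch of the size of the effect the argument rests on (this is only the fractional gravitational time dilation across the quoted separation, not the paper's full decoherence-time calculation):

# Fractional gravitational time dilation between two heights near Earth's surface,
# roughly g * dz / c^2.
g = 9.81      # m/s^2, Earth's surface gravity
c = 2.998e8   # m/s, speed of light
dz = 1e-6     # m, the thousandth-of-a-millimetre separation quoted above

fractional_rate_difference = g * dz / c**2
print(f"fractional time dilation across {dz*1e6:.0f} micrometre: "
      f"{fractional_rate_difference:.2e}")
# ~1e-22: tiny per degree of freedom, but the claim described above is that the
# many internal degrees of freedom of a gram-scale object accumulate these tiny
# phase shifts and wash out the superposition within about a millisecond.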

I think this is similar to Penrose's claim that gravity is responsible for the decoherence of quantum states. It will be interesting to see if anyone can experimentally verify this latest theoretical finding.

Zz.

by ZapperZ (noreply@blogger.com) at June 20, 2015 02:26 AM

June 19, 2015

Tommaso Dorigo - Scientificblogging

ATLAS Pictures Colour Flow Between Quarks
In 1992 I started working on my undergraduate thesis, the search for all-hadronic top quark pairs in CDF data. The CDF experiment was just starting to collect proton-antiproton collision data with the brand-new silicon vertex detector in what was called Run 1a, which ended in 1993 and produced the data on which the first evidence claim of top quarks was based. But I was still working on the Run 0 data: 4 inverse picobarns of collisions - the very first collisions at the unprecedented energy of 1.8 TeV. And I was not alone: many analyses of those data were still in full swing.

read more

by Tommaso Dorigo at June 19, 2015 06:46 PM


CERN Bulletin

DECLARATION TO COUNCIL
One year ago, the Staff Association, together with the CERN-ESO Pensioners' Association, organized a staff meeting in front of this building to express our concern about certain actions of this Committee. Today we deem it necessary to come before you and convey in person, dear delegates, the concerns and worries of the staff. Indeed, over the last 18 months we have observed a tendency of Council to take matters, in particular in the field of pensions, into its own hands, bypassing established governance structures which Council has itself put into place. As a result, the Director General was prevented from playing his essential role of intermediary between staff and Council, an essential element of the established social dialogue.

The creation of CERN in 1954 was very much based on the willingness of many countries of the old Continent to share resources to create a joint fundamental physics laboratory. The emphasis was on sharing resources for the common good to allow European scientists to engage in fundamental research that none of the individual countries could afford, given the dire economic situation in Europe in the early nineteen fifties. These days, in contrast with the beginning of CERN in the nineteen fifties, we hear more and more delegates put forward their individual, complicated national economic situations to suggest measures which reflect those “back home”. Does that mean that measures of austerity must be blindly and uniformly applied everywhere and to everything? Should CERN merely be seen as a cost centre, an expense to be minimized? Should the spirit of decades-long cooperation at CERN be replaced by a competition between Member States in matters of taxation, economic privileges, and social benefits? Of course not. Member States should act in the global long-term interest of the Organization. Nobody can claim that our economies are today in a worse state than during the first years following the Second World War or than at the height of the financial crisis in 2008 and 2009.

CERN, and fundamental research, is an investment for the future, a strategic resource to be managed, an opportunity for growth and development. Therefore, today, as in the past, we should strengthen, rather than weaken, the sixty-year-old humanistic vision, driven by several Nobel Prize laureates, to fund a laboratory that embodies "science for peace" by bringing multiple nations and cultures together.

The Organization’s success story is clearly associated with the motivation and dedication of the staff, CERN’s essential resource. For over sixty years, many hundreds of CERN staff, women and men, of all ages, cultural and training backgrounds, have together built, operated and developed the accelerator, computing, administrative, and experimental infrastructures. Their efforts and experience, in collaboration with the thousands of users, have enabled a series of major physics discoveries (the latest being the so-called Higgs boson) leading to several Nobel and other prizes. CERN’s knowledge transfer contributes to the training of hundreds of students, postgraduates and teachers, the scientific and technical elites in a modern Europe, thus creating jobs with a high added value, and hence creating growth and wealth. Thanks to its own developments and through its policy of technology transfer, CERN provides the economic and industrial world with important advanced technologies such as particle acceleration and detection, now routinely applied in medicine (e.g. scanners, hadron therapy). The World Wide Web has revolutionised the way we communicate and do business around the world.

In a strictly short-term financial approach, some delegates to Council attack, more or less explicitly, the level of our wages and, more often, that of our pensions. More specifically, they challenge the sixty million Swiss francs that CERN, as the employer, must pay annually as a special contribution to the balanced package of measures to ensure full funding of the Pension Fund on the 30-year horizon. This package was approved by Council in December 2010, and formalized in a Council Resolution in June 2011, just four years ago. Thanks to these measures the funding ratio has recovered significantly in only four years. Therefore, unilaterally attacking one of the components of the package is not only inequitable but also counterproductive, putting the Fund’s long-term equilibrium in jeopardy. In addition, and more recently, the Council appointed three legal experts to study how far the concept of acquired rights protects the level of pensions of (current and future) beneficiaries of the CERN Pension Fund, without doubt with the intention of finding out by how much these benefits can be cut.

Speaking as representatives of staff and pensioners, we want to express our strong disapproval of envisaging measures and policies aimed at reducing employment conditions for the sake of short-term savings, which endanger the future of our Organization. Trust between employer and employee is essential to guarantee efficient dialogue and social peace, as we have known it up to now. Therefore, statements putting into question the impartiality of some CERN officials in their dealings with Council, and attacking their professional integrity, are received as an insult to all staff. Such behaviour endangers the credibility of the Organization, not only in its role of employer but also in its role of a State, in as far as CERN defines its Rules. We recall that these rules, CERN being an international organization, must be in agreement with the rules and jurisprudence applicable to the international civil service.

CERN, through the voice of Council, must be a genuine, trustworthy social partner that acts in good faith. Therefore, we demand respect for commitments and procedures, in particular that Council:
- respects its commitment to pay 60 MCHF per year for 30 years as the Organization’s contribution to the balanced package of measures towards restoring full funding of the Pension Fund, approved by Council in 2010;
- respects and strictly enforces the competences of the governing bodies and the rights of all stakeholders.

The Organization must not destabilize its employees with inappropriate initiatives. Governance structures and procedures defined by Council are in place to ensure that needed measures in the field of social and economic policy can be proposed in a timely way by the relevant bodies. The Staff Association, representing active staff as well as retirees, follows the Rules, and respects, and will continue to respect, its commitments towards the Organization. In the upcoming discussions it will continue to cooperate in a positive and constructive spirit with CERN Management, in the SCC, and with the Member States delegates in TREF, the PFGB, and elsewhere, in the interest of all parties: CERN Management, the staff, the user community, and the Member States. We hope that, from its side, Council will adopt the same positive, constructive and open attitude.

by Staff Association at June 19, 2015 01:33 PM

CERN Bulletin

Action of June 18: well done!
On Thursday morning more than 500 of you answered the call of the Staff Council and the CERN-ESO Pensioners’ Association (GAC-EPA) by participating in the gathering in the hall of the Main Building to say "Respect the rules", "Respect commitments" and "Work in the interest of the Organization". Our action was a genuine success and we thank those who have contributed.

While the participants in the hall on the ground floor of the Main Building received the latest news regarding the CERN Pension Fund, some forty representatives of the Staff Association and GAC-EPA occupied the conference room on the 6th floor where the CERN Council would meet from 9:30 a.m. The Director-General was informed about the occupation of the room around 8:40 a.m. After discussion between the Staff Association and the Director-General, it was agreed that the room would be released before the start of the meeting at 9:30 a.m. if the Staff Association were allowed to read a statement before the Council in session and that the declaration would be included in the minutes of the Council meeting. The Director-General transmitted this request to the President of the Council, Prof. A. Zalewska, who gave her agreement.

Thus, the representatives of the Staff Association and GAC-EPA released the room around 9:20 a.m., while remaining in the corridor so that the delegates of the Member States could see them. Then, as agreed, after the opening of the session of the Council, Gertjan Bossen, President of GAC-EPA, and Michel Goossens, President of the Staff Association, were invited to read their statement to the Council (see the text). After the reading of our statement, the President of Council proposed to the delegates to discuss its contents under point 18 of the agenda of the Council: “CERN Pension Fund”.

In solidarity with our action, many ESO staff members gathered in a room of their organization in Munich, where they were able to follow the events here live.

Our action today was necessary, but will probably not be sufficient. To win we need the continued support of all CERN staff members. Without that support, we can achieve nothing, and even risk losing a lot. Therefore, for future actions, we are counting on you!

by Staff Association at June 19, 2015 01:18 PM

arXiv blog

Solving the Last Great 3-D Printing Challenge: Printing in Color

Nobody mentions the big problem with 3-D printing: how to do it in color. Now they won’t have to, thanks to a new technique.

June 19, 2015 01:06 PM

CERN Bulletin

LHC Report: Start of intensity ramp-up before a short breather

The first Stable Beams on 3 June were followed, to the accompaniment of thunderstorms, by the start of a phase known as the “intensity ramp-up” which saw the LHC team deliver physics with 50 bunches per beam. Time was also taken for a special five-day run devoted principally to the LHCf experiment. This week (15-19 June) the beam-based programme of the machine and its experiments was stopped temporarily for regular maintenance work.

 

LHCf’s Arm1 detector.

While the first stable colliding beams were delivered with only 3 nominal bunches per beam, the aim of last week’s operations was to start the process of increasing the number of bunches in the beam with an ultimate 2015 target of ~2400 bunches per beam. The number of bunches is gradually increased in well-defined steps. At each step – 3 bunches per beam, then 13, 40 and, finally, 50 – the machine protection team requests 3 fills and around 20 hours of Stable Beams to verify that all systems are behaving properly. During each fill, checks are made of instrumentation, feedback response, beam loss through the cycle, machine protection systems, RF, beam induced heating, orbit stability, etc. A check list is completed and signed off by the machine protection panel before authorisation is given for the next step with increased intensity. Following this pattern, the LHC reached 50 on 50 bunches by the weekend of 13-14 June. 

There was an extended hiatus in the intensity ramp-up during the week for a five-day special physics run devoted primarily to LHCf – the far forward experiment situated in the LHC around 140 m left and right of the ATLAS interaction point. Low luminosity and low pile-up conditions were required by LHCf and these were delivered at 6.5 TeV with a special de-squeezed optics with relatively large beam sizes at the interaction points of all experiments. The required data were successfully delivered to LHCf  in a series of fills with up to 39 bunches per beam. ATLAS, CMS, LHCb and ALICE all took advantage of the special conditions to take data themselves.

Monday 15 June saw the start of a five-day technical stop. This is the first of three technical stops scheduled during the 2015 operating period, before a longer stop planned during the end-of-year holidays. A normal year of LHC operation includes five-day technical stops every ten weeks or so to allow the machine and the experiments to carry out maintenance work and other interventions. Following the restart this weekend, a week or so will be devoted to a scrubbing run aimed at reducing electron clouds by conditioning the surface of the beam pipes around the ring. This run will prepare the way for a three-week period of operation with 50 ns bunch spacing and an associated intensity ramp-up to the order of 1000 bunches per beam.

June 19, 2015 10:06 AM

John Baez - Azimuth

On Care For Our Common Home

There’s been a sea change in attitudes toward global warming in the last couple of years, which makes me feel much less need to determine the basic facts of the matter, or convince people of these facts. The challenge is now to do something.

Even the biggest European oil and gas companies are calling for a carbon tax! Their motives, of course, should be suspect. But they have realized it’s hopeless to argue about the basics. They wrote a letter to the United Nations beginning:

Dear Excellencies:

Climate change is a critical challenge for our world. As major companies from the oil & gas sector, we recognize both the importance of the climate challenge and the importance of energy to human life and well-being. We acknowledge that the current trend of greenhouse gas emissions is in excess of what the Intergovernmental Panel on Climate Change (IPCC) says is needed to limit the temperature rise to no more than 2 degrees above pre-industrial levels. The challenge is how to meet greater energy demand with less CO2. We stand ready to play our part.

It seems there are just a few places, mostly former British colonies, where questioning the reality and importance of man-made global warming is a popular stance among politicians. Unfortunately one of these, the United States, is a big carbon emitter. Otherwise we could just ignore these holdouts.

Given all this, it’s not so surprising that Pope Francis has joined the crowd and released a document on environmental issues:

• Pope Francis, Encyclical letter Laudato Si’: on care for our common home.

Still, it is interesting to read this document, because unlike most reports we read on climate change, it addresses the cultural and spiritual dimensions of this problem.

I believe arguments should be judged by their merits, not the fact that they’re made by someone with an impressive title like

His Holiness Francis, Bishop of Rome, Vicar of Jesus Christ, Successor of the Prince of the Apostles, Supreme Pontiff of the Universal Church, Primate of Italy, Archbishop and Metropolitan of the Roman Province, Sovereign of the Vatican City State, Servant of the servants of God.

(Note the hat-tip to Darwin there.)

But in fact Francis has some interesting things to say. And among all the reportage on this issue, it’s hard to find more than quick snippets of the actual 182-page document, which is well worth reading in full. So, let me quote a bit.

I will try to dodge the explicitly Christian bits, because I really don’t want people arguing about religion on this blog—in fact I won’t allow it. Of course discussing what the Pope says without getting into Christianity is very difficult and perhaps even absurd. But let’s try.

I will also skip the extensive section where he summarizes the science. It’s very readable, and for an audience who doesn’t want numbers and graphs it’s excellent. But I figure the audience of this blog already knows that material.

So, here are some of the passages I found most interesting.

St. Francis of Assisi

He discusses how St. Francis of Assisi has been an example to him, and says:

Francis helps us to see that an integral ecology calls for openness to categories which transcend the language of mathematics and biology, and take us to the heart of what it is to be human. Just as happens when we fall in love with someone, whenever he would gaze at the sun, the moon or the smallest of animals, he burst into song, drawing all other creatures into his praise.

[…]

If we approach nature and the environment without this openness to awe and wonder, if we no longer speak the language of fraternity and beauty in our relationship with the world, our attitude will be that of masters, consumers, ruthless exploiters, unable to set limits on their immediate needs. By contrast, if we feel intimately united with all that exists, then sobriety and care will well up spontaneously. The poverty and austerity of Saint Francis were no mere veneer of asceticism, but something much more radical: a refusal to turn reality into an object simply to be used and controlled.

Weak responses

On the responses to ecological problems thus far:

The problem is that we still lack the culture needed to confront this crisis. We lack leadership capable of striking out on new paths and meeting the needs of the present with concern for all and without prejudice towards coming generations. The establishment of a legal framework which can set clear boundaries and ensure the protection of ecosystems has become indispensable, otherwise the new power structures based on the techno-economic paradigm may overwhelm not only our politics but also freedom and justice.

It is remarkable how weak international political responses have been. The failure of global summits on the environment make it plain that our politics are subject to technology and finance. There are too many special interests, and economic interests easily end up trumping the common good and manipulating information so that their own plans will not be affected. The Aparecida Document urges that “the interests of economic groups which irrationally demolish sources of life should not prevail in dealing with natural resources”. The alliance between the economy and technology ends up sidelining anything unrelated to its immediate interests. Consequently the most one can expect is superficial rhetoric, sporadic acts of philanthropy and perfunctory expressions of concern for the environment, whereas any genuine attempt by groups within society to introduce change is viewed as a nuisance based on romantic illusions or an obstacle to be circumvented.

In some countries, there are positive examples of environmental improvement: rivers, polluted for decades, have been cleaned up; native woodlands have been restored; landscapes have been beautified thanks to environmental renewal projects; beautiful buildings have been erected; advances have been made in the production of non-polluting energy and in the improvement of public transportation. These achievements do not solve global problems, but they do show that men and women are still capable of intervening positively. For all our limitations, gestures of generosity, solidarity and care cannot but well up within us, since we were made for love.

At the same time we can note the rise of a false or superficial ecology which bolsters complacency and a cheerful recklessness. As often occurs in periods of deep crisis which require bold decisions, we are tempted to think that what is happening is not entirely clear. Superficially, apart from a few obvious signs of pollution and deterioration, things do not look that serious, and the planet could continue as it is for some time. Such evasiveness serves as a licence to carrying on with our present lifestyles and models of production and consumption. This is the way human beings contrive to feed their self-destructive vices: trying not to see them, trying not to acknowledge them, delaying the important decisions and pretending that nothing will happen.

On the risks:

It is foreseeable that, once certain resources have been depleted, the scene will be set for new wars, albeit under the guise of noble claims.

Everything is connected

He writes:

Everything is connected. Concern for the environment thus needs to be joined to a sincere love for our fellow human beings and an unwavering commitment to resolving the problems of society.

Moreover, when our hearts are authentically open to universal communion, this sense of fraternity excludes nothing and no one. It follows that our indifference or cruelty towards fellow creatures of this world sooner or later affects the treatment we mete out to other human beings. We have only one heart, and the same wretchedness which leads us to mistreat an animal will not be long in showing itself in our relationships
with other people. Every act of cruelty towards any creature is “contrary to human dignity”. We can hardly consider ourselves to be fully loving if we disregard any aspect of reality: “Peace, justice and the preservation of creation are three absolutely interconnected themes, which cannot be separated and treated individually without once again falling into reductionism”.

Technology: creativity and power

Technoscience, when well directed, can produce important means of improving the quality of human life, from useful domestic appliances to great transportation systems, bridges, buildings and public spaces. It can also produce art and enable men and women immersed in the material world to “leap” into the world of beauty. Who can deny the beauty of an aircraft or a skyscraper? Valuable works of art and music now make use of new technologies. So, in the beauty intended by the one who uses new technical instruments and in the contemplation of such beauty, a quantum leap occurs, resulting in a fulfilment which is uniquely human.

Yet it must also be recognized that nuclear energy, biotechnology, information technology, knowledge of our DNA, and many other abilities which we have acquired, have given us tremendous power. More precisely, they have given those with the knowledge, and especially the economic resources to use them, an impressive dominance over the whole of humanity and the entire world. Never has humanity had such power over itself, yet nothing ensures that it will be used wisely, particularly when we consider how it is currently being used. We need but think of the nuclear bombs dropped in the middle of the twentieth century, or the array of technology which Nazism, Communism and other totalitarian regimes have employed to kill millions of people, to say nothing of the increasingly deadly arsenal of weapons available for modern warfare. In whose hands does all this power lie, or will it eventually end up? It is extremely risky for a small part of humanity to have it.

The globalization of the technocratic paradigm

The basic problem goes even deeper: it is the way that humanity has taken up technology and its development according to an undifferentiated and one-dimensional paradigm. This paradigm exalts the concept of a subject who, using logical and rational procedures, progressively approaches and gains control over an external object. This subject makes every effort to establish the scientific and experimental method, which in itself is already a technique of possession, mastery and transformation. It is as if the subject were to find itself in the presence of something formless, completely open to manipulation. Men and women have constantly intervened in nature, but for a long time this meant being in tune with and respecting the possibilities offered by the things themselves. It was a matter of receiving what nature itself allowed, as if from its own hand. Now, by contrast, we are the ones to lay our hands on things, attempting to extract everything possible from them while frequently ignoring or forgetting the reality in front of us. Human beings and material objects no longer extend a friendly hand to one another; the relationship has become confrontational. This has made it easy to accept the idea of infinite or unlimited growth, which proves so attractive to economists, financiers and experts in technology. It is based on the lie that there is an infinite supply of the earth’s goods, and this leads to the planet being squeezed dry beyond every limit. It is the false notion that “an infinite quantity of energy and resources are available, that it is possible to renew them quickly, and that the negative effects of the exploitation of the natural order can be easily absorbed”.

The difficulty of changing course

The idea of promoting a different cultural paradigm and employing technology as a mere instrument is nowadays inconceivable. The technological paradigm has become so dominant that it would be difficult to do without its resources and even more difficult to utilize them without being dominated by their internal logic. It has become countercultural to choose a lifestyle whose goals are even partly independent of technology, of its costs and its power to globalize and make us all the same. Technology tends to absorb everything into its ironclad logic, and those who are surrounded with technology “know full well that it moves forward in the final analysis neither for profit nor for the well-being of the human race”, that “in the most radical sense of the term power is its motive – a lordship over all”. As a result, “man seizes hold of the naked elements of both nature and human nature”. Our capacity to make decisions, a more genuine freedom and the space for each one’s alternative creativity are diminished.

The technocratic paradigm also tends to dominate economic and political life. The economy accepts every advance in technology with a view to profit, without concern for its potentially negative impact on human beings. Finance overwhelms the real economy. The lessons of the global financial crisis have not been assimilated, and we are learning all too slowly the lessons of environmental deterioration. Some circles maintain that current economics and technology will solve all environmental problems, and argue, in popular and non-technical terms, that the problems of global hunger and poverty will be resolved simply by market growth. They are less concerned with certain economic theories which today scarcely anybody dares defend, than with their actual operation in the functioning of the economy. They may not affirm such theories with words, but nonetheless support them with their deeds by showing no interest in more balanced levels of production, a better distribution of wealth, concern for the environment and the rights of future generations. Their behaviour shows that for them maximizing profits is enough.

Toward an ecological culture

Ecological culture cannot be reduced to a series of urgent and partial responses to the immediate problems of pollution, environmental decay and the depletion of natural resources. There needs to be a distinctive way of looking at things, a way of thinking, policies, an educational programme, a lifestyle and a spirituality which together generate resistance to the assault of the technocratic paradigm. Otherwise, even the best ecological initiatives can find themselves caught up in the same globalized logic. To seek only a technical remedy to each environmental problem which comes up is to separate what is in reality interconnected and to mask the true and deepest problems of the global system.

Yet we can once more broaden our vision. We have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral. Liberation from the dominant technocratic paradigm does in fact happen sometimes, for example, when cooperatives of small producers adopt less polluting means of production, and opt for a non-consumerist model of life, recreation and community. Or when technology is directed primarily to resolving people’s concrete problems, truly helping them live with more dignity and less suffering. Or indeed when the desire to create and contemplate beauty manages to overcome reductionism through a kind of salvation which occurs in beauty and in those who behold it. An authentic humanity, calling for a new synthesis, seems to dwell in the midst of our technological culture, almost unnoticed, like a mist seeping gently beneath a closed door. Will the promise last, in spite of everything, with all that is authentic rising up in stubborn resistance?

Integral ecology

Near the end he calls for the development of an ‘integral ecology’. I find it fascinating that this has something in common with ‘network theory’:

Since everything is closely interrelated, and today’s problems call for a vision capable of taking into account every aspect of the global crisis, I suggest that we now consider some elements of an integral ecology, one which clearly respects its human and social dimensions.

Ecology studies the relationship between living organisms and the environment in which they develop. This necessarily entails reflection and debate about the conditions required for the life and survival of society, and the honesty needed to question certain models of development, production and consumption. It cannot be emphasized enough how everything is interconnected. Time and space are not independent of one another, and not even atoms or subatomic particles can be considered in isolation. Just as the different aspects of the planet—physical, chemical and biological—are interrelated, so too living species are part of a network which we will never fully explore and understand. A good part of our genetic code is shared by many living beings. It follows that the fragmentation of knowledge and the isolation of bits of information can actually become a form of ignorance, unless they are integrated into a broader vision of reality.

When we speak of the “environment”, what we really mean is a relationship existing between nature and the society which lives in it. Nature cannot be regarded as something separate from ourselves or as a mere setting in which we live. We are part of nature, included in it and thus in constant interaction with it. Recognizing the reasons why a given area is polluted requires a study of the workings of society, its economy, its behaviour patterns, and the ways it grasps reality. Given the scale of change, it is no longer possible to find a specific, discrete answer for each part of the problem. It is essential to seek comprehensive solutions which consider the interactions within natural systems themselves and with social systems. We are faced not with two separate crises, one environmental and the other social, but rather with one complex crisis which is both social and environmental. Strategies for a solution demand an integrated approach to combating poverty, restoring dignity to the excluded, and at the same time protecting nature.

Due to the number and variety of factors to be taken into account when determining the environmental impact of a concrete undertaking, it is essential to give researchers their due role, to facilitate their interaction, and to ensure broad academic freedom. Ongoing research should also give us a better understanding of how different creatures relate to one another in making up the larger units which today we term “ecosystems”. We take these systems into account not only to determine how best to use them, but also because they have an intrinsic value independent of their usefulness.

Ecological education

He concludes by discussing the need for ‘ecological education’.

Environmental education has broadened its goals. Whereas in the beginning it was mainly centred on scientific information, consciousness-raising and the prevention of environmental risks, it tends now to include a critique of the “myths” of a modernity grounded in a utilitarian mindset (individualism, unlimited progress, competition, consumerism, the unregulated market). It seeks also to restore the various levels of ecological equilibrium, establishing harmony within ourselves, with others, with nature and other living creatures, and with God. Environmental education should facilitate making the leap towards the transcendent which gives ecological ethics its deepest meaning. It needs educators capable of developing an ethics of ecology, and helping people, through effective pedagogy, to grow in solidarity, responsibility and compassionate care.

Even small good practices can encourage new attitudes:

Education in environmental responsibility can encourage ways of acting which directly and significantly affect the world around us, such as avoiding the use of plastic and paper, reducing water consumption, separating refuse, cooking only what can reasonably be consumed, showing care for other living beings, using public transport or car-pooling, planting trees, turning off unnecessary lights, or any number of other practices. All of these reflect a generous and worthy creativity which brings out the best in human beings. Reusing something instead of immediately discarding it, when done for the right reasons, can be an act of love which expresses our own dignity.

We must not think that these efforts are not going to change the world. They benefit society, often unbeknown to us, for they call forth a goodness which, albeit unseen, inevitably tends to spread. Furthermore, such actions can restore our sense of self-esteem; they can enable us to live more fully and to feel that life on earth is worthwhile.

Part of the goal is to be more closely attentive to what we have, not fooled into thinking we’d always be happier with more:

It is a return to that simplicity which allows us to stop and appreciate the small things, to be grateful for the opportunities which life affords us, to be spiritually detached from what we possess, and not to succumb to sadness for what we lack. This implies avoiding the dynamic of dominion and the mere accumulation of pleasures.

Such sobriety, when lived freely and consciously, is liberating. It is not a lesser life or one lived with less intensity. On the contrary, it is a way of living life to the full. In reality, those who enjoy more and live better each moment are those who have given up dipping here and there, always on the look-out for what they do not have. They experience what it means to appreciate each person and each thing, learning familiarity with the simplest things and how to enjoy them. So they are able to shed unsatisfied needs, reducing their obsessiveness and weariness. Even living on little, they can live a lot, above all when they cultivate other pleasures and find satisfaction in fraternal encounters, in service, in developing their gifts, in music and art, in contact with nature, in prayer. Happiness means knowing how to limit some needs which only diminish us, and being open to the many different possibilities which life can offer.


by John Baez at June 19, 2015 04:02 AM

June 18, 2015

Clifford V. Johnson - Asymptotia

Screen Junkies: Science and Jurassic World
So the episode I mentioned is out! It's a lot of fun, and there's so very much that we talked about that they could not fit into the episode. See below. It is all about Jurassic World - a huge box-office hit. If you have not seen it yet, and don't want specific spoilers, watch out for where I write the word spoilers in capitals, and read no further. If you don't even want my overall take on things without specifics, read only up to where I link to the video. Also, the video has spoilers. I'll embed the video here, and I have some more thoughts that I'll put below. One point I brought up a bit (you can see the beginning of it in my early remarks) is the whole business of the poor portrayal of science and scientists overall in the film, as opposed to in the original Jurassic Park movie. In the original, putting quibbles over scientific feasibility aside (it's not a documentary, remember!), you have the "dangers of science" on one side, but you also have the "wonders of science" on the other. This includes that early scene or two that still delight me (and many scientists I know - and a whole bunch who were partly inspired by the movie to go into science!) of how genuinely moved the two scientist characters (played by Laura Dern and Sam Neill) are to see walking living dinosaurs, the subject of their life's work. Right in front of them. Even if you're not a scientist, you immediately relate to that feeling. It helps root the movie, as does the fact that pretty much all the characters are fleshed [...] Click to continue reading this post

by Clifford at June 18, 2015 08:01 PM

Symmetrybreaking - Fermilab/SLAC

Mathematician to know: Emmy Noether

Noether's theorem is a thread woven into the fabric of the science.

We are able to understand the world because it is predictable. If we drop a rubber ball, it falls down rather than flying up. But more specifically: if we drop the same ball from the same height over and over again, we know it will hit the ground with the same speed every time (within vagaries of air currents). That repeatability is a huge part of what makes physics effective.

The repeatability of the ball experiment is an example of what physicists call “the law of conservation of energy.” An equivalent way to put it is to say the force of gravity doesn’t change in strength from moment to moment.
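
As a concrete version of the ball experiment (a standard textbook calculation, spelled out here for illustration), conservation of energy fixes the impact speed from the drop height alone:

\[
\tfrac{1}{2} m v^2 = m g h \quad\Longrightarrow\quad v = \sqrt{2 g h},
\]

so a drop from $h = 1\ \mathrm{m}$ gives $v \approx \sqrt{2 \times 9.8 \times 1}\ \mathrm{m/s} \approx 4.4\ \mathrm{m/s}$, no matter when or where the experiment is repeated.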

The connection between those ways of thinking is a simple example of a deep principle called Noether’s theorem: Wherever a symmetry of nature exists, there is a conservation law attached to it, and vice versa. The theorem is named for arguably the greatest 20th century mathematician: Emmy Noether.

“Noether's theorem to me is as important a theorem in our understanding of the world as the Pythagorean theorem,” says Fermilab physicist Christopher Hill, who wrote a book on the topic with Nobel laureate Leon Lederman.

So who was the mathematician behind Noether’s theorem?

The life of Noether

Amalie Emmy Noether was born in Bavaria (now part of Germany) in 1882. She earned her doctorate in mathematics in 1907 from the University of Erlangen, which was a socially progressive institution for its day. She stayed at Erlangen to teach for several years, though without pay, as women were not technically allowed to teach at universities in Germany at the time.

One of the leading mathematicians of the age, David Hilbert, invited her to join him at the University of Göttingen, where she remained from 1916 until 1933. Liberalized laws in Germany following World War I allowed Noether to be granted a teaching position, but she was still paid only a small amount for her teaching work.

In 1933, the Nazi regime fired all Jewish professors and followed the next year by firing all female professors. A Jewish woman, Noether left Germany for the United States. She worked as a visiting professor at Bryn Mawr College, but her time in America was short. She died in 1935 at age 53, from complications following surgery.

Many of the leading male mathematicians and physicists of the day eulogized her, including Albert Einstein, who wrote in the New York Times, “However inconspicuously the life of these individuals runs its course, none the less the fruits of their endeavors are the most valuable contributions which one generation can make to its successors.”

Physicists tend to know her work primarily through her 1918 theorem. But mathematicians are familiar with a variety of Noether theorems, Noetherian rings, Noether groups, Noether equations, Noether modules and many more.

Over the course of her career, Noether developed much of modern abstract algebra: the grammar and the syntax of math, letting us say what we need to in math and science. She also contributed to the theory of groups, which is another way to treat symmetries; this work has influenced the mathematical side of quantum mechanics and superstring theory.

Noether and particle physics

Because their work relies on symmetry and conservation laws, nearly every modern physicist uses Noether’s theorem. It’s a thread woven into the fabric of the science, part of the whole cloth. Every time scientists use a symmetry or a conservation law, from the quantum physics of atoms to the flow of matter on the scale of the cosmos, Noether’s theorem is present. Noetherian symmetries answer questions like these: If you perform an experiment at different times or in different places, what changes and what stays the same? Can you rotate your experimental setup? Which properties of particles can change, and which are inviolable?

Conservation of energy comes from time-shift symmetry: You can repeat an experiment at different times, and the result is the same. Conservation of momentum comes from space-shift symmetry: You can perform the same experiment in different places, and it comes out with the same results. Conservation of angular momentum, which when combined with the conservation of energy under the force of gravity explains the Earth’s motion around the sun, comes from symmetry under rotations. And the list goes on.
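
The time-shift case can be made explicit with a short classical-mechanics derivation (standard material, included here only as an illustration of the theorem). If the Lagrangian $L(q,\dot q)$ has no explicit time dependence, then

\[
\frac{d}{dt}\Bigl(\dot q\,\frac{\partial L}{\partial \dot q} - L\Bigr)
= \dot q\,\Bigl(\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}\Bigr) = 0
\]

by the Euler-Lagrange equation, so the quantity in brackets, the energy, is conserved; the only input was that $L$ looks the same at every moment.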

The greatest success of Noether’s theorem came with quantum physics, and especially the particle physics revolution that rose after Noether’s death. Many physicists, inspired by Noether’s theorem and the success of Einstein’s general theory of relativity, looked at geometrical descriptions and mathematical symmetries to describe the new types of particles they were discovering.

“It's definitely true that Noether's theorem is part of the foundation on which modern physics is built,” says physicist Natalia Toro of the Perimeter Institute and the University of Waterloo. “We apply it every day to deep and well-tested principles like conservation of energy and momentum.”

According to the law of conservation of electric charge, the total amount of electric charge going into an experiment must be the same as what comes out, even if particle types change or if matter hits antimatter and is annihilated. That law has the same symmetry that a circle has. A perfect circle can be rotated around its center by any angle and it looks the same; the same math describes the quantum mechanical property of an electron. If the amount of that rotation can change from place to place, the symmetry of a circle yields the entire theory of electromagnetism, which governs everything from the generation of electricity to the structure of atoms to matter on cosmic scales. In that way, Noether takes us from a simple symmetry to the world we know.
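
In field-theory language (a compressed standard statement, not a quotation from the article), the circle here is the $U(1)$ phase of the electron field: a global rotation $\psi \to e^{i\alpha}\psi$ leaves the Dirac Lagrangian unchanged, and Noether's theorem gives the conserved current

\[
j^\mu = \bar\psi\,\gamma^\mu\,\psi, \qquad \partial_\mu j^\mu = 0,
\]

whose conserved charge is electric charge. Letting the angle vary from place to place, $\alpha \to \alpha(x)$, forces the replacement $\partial_\mu \to D_\mu = \partial_\mu + i e A_\mu$, which is how the photon field, and with it electromagnetism, enters.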

“Noether's theorem has even greater power than that,” Toro says, “in helping us to organize our thinking when exploring aspects of the universe where we don't yet know the basic laws. That's a tall order, and as we seek experimental answers to these questions, symmetries and conservation laws—tightly linked by Noether's theorem—are one of the few theoretical tools that we have to guide us.”

 

Live from the Perimeter Institute, starting at 5 p.m. PDT / 8 p.m. EDT: Mathematician Peter Olver explores Noether’s life and career, and delves into the curious history of her famous theorems. Physicist Ruth Gregory looks at the lasting impact of Noether’s theorem, and how it connects with the Standard Model and Einstein’s general relativity.

 

 


by Matthew R. Francis at June 18, 2015 03:46 PM

Clifford V. Johnson - Asymptotia

Calling Shenanigans
I hadn't realized that I knew some of the journalists who were at the event at which Tim Hunt made his negative-stereotype-strengthening remarks. I trust their opinion and integrity quite a bit, and so I'm glad to hear reports from them about what they witnessed. This includes Deborah Blum, who was speaking in the same session as Hunt that day, and who was at the luncheon. She spoke with Hunt about his views and intentions. Thanks, Deborah, for calling shenanigans on the "I was only joking" defense so often used to hide behind the old "political correctness gone mad" trope. Read her article here, and further here. -cvj (Spoof poster image is by Jen Golbeck) Click to continue reading this post

by Clifford at June 18, 2015 03:17 PM

astrobites - astro-ph reader's digest

Radio Loud AGN = Mergers?

Title: Radio Loud AGN are Mergers
Authors: Chiaberge, Gilli, Lotz & Norman
First Author Institution: Space Telescope Science Institute, 3700 Martin Drive, Baltimore, MD 21218
Status: Accepted for publication in ApJ

One of the most important issues currently plaguing astronomers is how galaxies and their central supermassive black holes (SMBHs) coevolve and interact. There is a correlation between the mass of a galaxy and that of the black hole at its centre, suggesting they somehow manage to grow together. The problem is that simulations seem to show that black holes grow on short timescales (10-100 million years) in bursts of intense activity during which they accrete the material around them. This activity transforms the black hole and its surroundings into an active galactic nucleus (AGN). Galaxies, on the other hand, are thought to grow their mass more steadily, through star formation and galaxy mergers that take billions of years. So if the black hole and the galaxy are growing on such drastically different timescales, how come we see their masses so tightly correlated?

In order for a black hole to eventually accrete some material, that material must lose almost all of its angular momentum, which is difficult to do without a process to trigger it. Simulations have shown, though, that interactions, bars, disk instabilities and galaxy mergers make such a process much more feasible. In particular, these simulations have shown that mergers can drive gas in a galaxy inwards toward the black hole due to tidal forces. Therefore astronomers have long been searching for as much observational evidence as possible for links between merging galaxies and the presence of a high- or low-power AGN in a galaxy. The theory is that the bigger the merger, the more gas is funnelled inwards; the black hole then accretes it, producing an AGN which emits a lot of light, often including radio waves (due to synchrotron radiation from electrons spiralling in the magnetic fields around the black hole).

The authors of this paper therefore study a sample of AGN which emit in the radio, split into two categories: radio-loud (bright in the radio) and radio-quiet (faint in the radio) AGN. It is often theorised that radio-loud AGN must possess an extra source of energy in order to be this much brighter and to drive the powerful jets observed around them. The authors investigate whether the source of this power is a recent merger in the galaxies hosting these radio-loud AGN.

Figure 1: Radio-quiet AGN from the CDFS catalogue. Low X-ray power (top) and high X-ray power (bottom) are shown, along with those galaxies classified as non-mergers (left) and mergers (right). Originally figure 3 in Chiaberge et al. (2015).

Their sample is composed of galaxies with 1 < z < 2.5 which are all classed as Fanaroff-Riley class II (FRII); this means that the brightest component of the radio emission comes from its edges, for example clumps at the ends of jets expelled by the black hole. This is opposed to FRI class galaxies, whose brightest radio component is located at the centre of the galaxy. Radio-loud FRII galaxies with high X-ray power are selected from the 3CR catalog and those with low X-ray power from the Extended Chandra Deep Field South (CDFS). The authors also use this catalogue to select low X-ray power radio-quiet AGN, along with the specific 4 Msec CDFS catalog to select a sample of high X-ray power radio-quiet galaxies. The X-ray emission comes from the accretion disc around the black hole, so by splitting each sample by low/high X-ray power as well, we can understand how the accretion rate ties in with this study of mergers.

A selection of the radio-quiet AGN are shown in Figure 1. The Hubble Space Telescope (HST) GOODS-SOUTH field was also used to select a sample of inactive galaxies to use as a control sample against the radio AGN.

Figure 2: The merger fraction against the average radio loudness for each sample as labelled. The red and blue symbols show results measured in other studies, including McClure et al. (2004; red triangles) and low redshift galaxies from Madrid et al. (2006; blue hexagon). The dashed line shows the separation between radio quiet (RQ) and radio loud (RL) galaxies from Terashima & Wilson (2003). Originally Figure 6 in Chiaberge et al. (2015).

Figure 3: The merger fraction against the average redshift for each sample as labelled. The red and blue symbols show results measured in other studies including McClure et al. (2004) and low redshift galaxies from Madrid et al. (2006). Originally left panel of Figure 5 in Chiaberge et al. (2015).

With these samples selected, the authors then try to determine whether mergers are associated with the low/high X-ray power or with the radio-loud/quiet AGN activity, and whether any of these classes are more likely to be triggered by a merger than the others. First, the authors look at the merger fraction in each of the high- and low-power radio-quiet and radio-loud AGN samples – this is a measure of how many of the galaxies in each sample are classified (by 4 human classifiers) as having clear signatures of a merger, for example: double nuclei, close pairs, tidal tails, bridges or distorted morphologies. The authors found this fraction to be 92% for the radio-loud galaxies and 38% for the radio-quiet galaxies (compared also to 27% and 20% for bright and faint inactive galaxies respectively).

 

There is therefore clear statistical evidence that radio-loud AGN almost always reside in galaxies where mergers are ongoing or have recently happened. The authors also find that this merger fraction is independent of the X-ray power of the AGN. The dependence of the merger fraction on radio loudness can be clearly seen in Figure 2 for the samples in this study and for low redshift samples (z < 0.3) from other investigations. Figure 3 also shows the merger fraction plotted against redshift and shows that this result is invariant with redshift.

The authors conclude that radio-loud galaxies are unambiguously associated with mergers, independent of redshift and power, whereas radio-quiet galaxies are indistinguishable from normal galaxies in the same redshift range. This suggests that mergers do indeed produce radio-loud AGN. The authors therefore speculate that the AGN observed at z > 1 have a much higher black hole spin in order to produce such radio-loud emission. This ties in very well with the merger theory, since mergers are thought to increase the spin through the coalescence of the two black holes at the centres of the originally merging pair of galaxies.

by Becky Smethurst at June 18, 2015 12:00 PM

Lubos Motl - string vacua and pheno

Standard model as string-inspired double field theory
Patrick of UC Davis told us not to overlook a new intriguing Korean hep-th paper
Standard Model Double Field Theory
by Choi and Park. They excite their readers by saying that it's possible to rewrite the Standard Model – or a tiny modification of it – as a special kind (or variation) of a field theory that emerged in string theory: the so-called double field theory.




The paper is rather short so you may try to quickly read it – I did – and I am a bit disappointed after I did. The abstract suggested that there is something special about the Standard Model (if it is not completely unique) that makes its rewriting as a double field theory more natural (or completely natural if not unavoidable). I couldn't find any fingerprint of this sort in the paper. It seems to me that what they did to the Standard Model could be done to any quantum field theory in the same class.




Double field theory is a quantum field theory but it has a special property that emerged while describing phenomena in string theory. But if you remember your humble correspondent's recent comments about "full-fledged string theory", "string-inspired research", and "non-stringy research", you should know that I would place this Korean paper in the middle category. I disagree with them that the features they are trying to cover or find are "purely stringy". It's a field theory based on finitely many point-like particle species – their composition is picked by hand, and so are various interactions and constraints taming these fields – so it is simply not string theory, where all these technicalities (the field content, interactions, and constraints) are completely determined from a totally different starting point (and not "adjustable" at all). They're not solving the full equations of string theory etc. Again, I don't think that the theories they are describing should be counted as "string theory", although the importance of string theory for this research to have emerged is self-evident.

What is double field theory (DFT)? In string theory, there is this cute phenomenon called T-duality.

If a dimension is compactified on a circle of radius \(R\), i.e. circumference \(2\pi R\), the momentum along it becomes quantized in units of \(1/R\), i.e. \(p=n/R\), \(n\in\ZZ\). That's true even in ordinary theories of point-like particles and follows from the single-valuedness of the wave function on the circle. However, in string theory, a closed string may also be wrapped around the circle \(w\) times (the winding number). In this way, you deal with a string of the minimum length \(2\pi R w\) whose minimum mass is \(2\pi R w T\) where \(T=1/2\pi\alpha'\) is the string tension (mass/energy density per unit length).

So there are contributions to the mass of the particle-which-is-a-string that go like \(n/R\) and \(wR/ \alpha'\), respectively (note that the factors of \(2\pi\) cancel). Well, a more accurate comment is that there are contributions to \(m^2\) that go like \((n/R)^2\) and \((wR/ \alpha')^2\), respectively, but I am sure that you become able to fix all these technical "details" once you start to study the theory quantitatively.
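
Putting the two contributions together, the standard textbook mass formula for a closed bosonic string on a circle (quoted here just for orientation) reads

\[
m^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \frac{2}{\alpha'}\bigl(N + \tilde N - 2\bigr),
\qquad N - \tilde N = n\,w,
\]

where \(N\) and \(\tilde N\) are the left- and right-moving oscillator levels and the second relation is the level-matching condition.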

There is a nice symmetry between \(n/R\) and \(wR/ \alpha'\). If you exchange \(n\leftrightarrow w\) and \(R\leftrightarrow \alpha'/R\), the two terms get interchanged. (The squaring changes nothing about it.) That's cool. It means that the spectrum (and degeneracies) of a closed string on a circle of radius \(R\) is the same as on the radius \(\alpha'/R\). This is no coincidence. The symmetry actually does exist in string theory and applies to all the interactions, too. In particular, something special occurs when \(R^2=\alpha'\). For this "self-dual" radius, the magnitude of momentum-like and winding-like contributions is the same and that's the point where string theory produces new enhanced symmetries. For example, in bosonic string theory on the self-dual circle, \(U(1)\times U(1)\) from the Kaluza-Klein \(g_{\mu 5}\) and B-field \(B_{\mu 5}\) potentials gets extended to \(SU(2)\times SU(2)\).

You may compactify several dimensions on circles, i.e. on the torus \(T^k\). The T-duality may be interpreted as \(n_i\leftrightarrow w_i\), the exchange of the momenta and winding numbers, which may be generalized "locally" on the world sheet to the "purely left-moving" parity transformation of \(X_i\). The reversal of the sign of the left-moving part of \(X_i\) only may also be interpreted as a Hodge-dualization of \(\partial_\alpha X_i\) on the world sheet.

In the full string theory, the theory has the \(O(k,k)\) symmetry rotating all the \(k\) compactified coordinates \(X_i\) and their \(k\) T-duals \(\tilde X_i\) if you ignore the periodicities as well as what the bosons are actually doing on the world sheet (left-movers vs right-movers). Normally, we only want to use either the tilded or the untilded fields \(X\). Double field theory is any formalism that tries to describe the string in such a way that both \(X_i\) and \(\tilde X_i\) exist at the same time. You have to "reduce" one-half of these spacetime coordinates in a "different place" so as not to get a completely wrong theory, but it's possible to find such a "different place": some extra constraints on the string fields.

When the periodicities (circular compactification) are not ignored, the theory on \(T^k\) only has the discrete subgroup \(O(k,k,\ZZ)\) as its symmetry, and the moduli space of the vacua is the coset\[

{\mathcal M}= \left(O(k,\RR)\times O(k,\RR)\right) \backslash O(k,k,\RR) / O(k,k,\ZZ)

\] because both compact \(O(k,\RR)\) factors remain symmetries. This \(k^2\)-dimensional moduli space parameterizes all the radii and angles of the torus \(T^k\) as well as all the compact antisymmetric B-field components on it. It's not hard to see why there are \(k^2\) parameters associated with the torus: you may describe each torus using "standardized" periodic coordinates between zero and one, and all the information about the shape and the B-field may be stored in the general (neither symmetric nor antisymmetric) tensor \(g_{mn}+B_{mn}\), which obviously has \(k^2\) components.
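
As a quick cross-check of this counting, the dimension of the coset above is\[

\dim {\mathcal M} = \dim O(k,k,\RR) - 2\,\dim O(k,\RR) = k(2k-1) - k(k-1) = k^2,

\] in agreement with the \(k^2\) components of \(g_{mn}+B_{mn}\); the quotient by the discrete group \(O(k,k,\ZZ)\) doesn't change the dimension.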

OK, what do you do with the Standard Model?

I said that the spacetime coordinates are effectively "doubled" when we add all the T-dual coordinates at the same moment. In this "Standard Model" case, it's done with all the spacetime coordinates, including – and especially – the 3+1 large dimensions. So instead of 3+1, we get 3+1+1+3 = 4+4 dimensions (note that the added dimensions have the opposite signature so the "sum" always has the same number of time-like and space-like coordinates).

The parent spacetime is 8-dimensional and the parent Lorentz group is \(O(4,4)\). This is broken to \(O(3,1)\times O(1,3)\). We obviously don't want an eight-dimensional spacetime. The authors describe some (to my taste, ad hoc) additional constraints that make all the fields in the 8-dimensional spacetime independent of 1+3 coordinates. So they only depend on the 3+1 coordinates we know and love.

They work hard to rewrite the whole ordinary Standard Model in terms of fields in this 8-dimensional parent spacetime with some extra restrictions and claim that it can be done. They just make some "very small" comments that their formalism bans the \(F\wedge F\) term in QCD – which would solve the strong CP-problem – as well as some quark-lepton couplings (an experimental prediction about the absence of some dimension-six operators). I don't quite get these claims. And their indication that the quarks transform under the first \(O(3,1)\) while the leptons transform under the other \(O(1,3)\), but that they may also transform under the "same" factor of the group, sounds scary to me. Depending on this choice, one must obtain very different theories, right?

Aside from the very minor (I would say) issue concerning the \(\theta\)-angle, I think it's fair to say that they present no evidence that the Standard Model is "particularly willing" to undergo this doubling exercise.

Even though I "independently discovered" the basic paradigm of the double field theory before others published it, I do share some worries with my then adviser who was discouraging me. The \(O(k,k,\RR)\) symmetry is really "totally broken" at the string scale, by the stringy effects. Some of the bosonic components are left-moving, others (one-half) are right-moving. This is no detail. The left-movers and right-movers are two totally separate worlds. So there isn't any continuous symmetry that totally mixes them.

In some sense, the \(O(k,k,\RR)\) symmetry is an illusion that only "seems" to be relevant in the field-theoretical limit but it's totally broken at the string scale. This fate of a symmetry is strange because we're used to symmetries that are restored at high energies and broken at low energies. Here, it seems to be the other way around.

After all, the symmetry is brutally broken in the double field theory Standard Model, too. The eight spacetime dimensions aren't really equal. Things can depend on four of them but not the other four. Maybe this separation is natural and may be done covariantly – they make it look like it is the case. But I still don't understand any sense or regime in which the \(O(k,k,\RR)\) symmetry could be truly unbroken which is why it seems vacuous to consider this symmetry physical.

Maybe such a symmetry may be useful and important even if it can never be fully restored. I just don't follow the logic. I don't understand why this symmetry would be "necessary for the consistency" or otherwise qualitatively preferred. That's why I don't quite see why we should trust things like \(\theta=0\) which follow from the condition that the Standard Model may be rewritten as a double field theory even though this rewriting doesn't seem to be "essential" for anything.

But in the end, I feel that they have some chance of being right that there's something special and important about theories that may be rewritten in this way. The broader picture of \(O(4,4)\)-symmetric theories reminds me of many ideas, especially by Itzhak Bars, who has been excited about "theories with two time coordinates" for many years. Here, we have "four times".

Quite generally, more than "one time coordinate" makes the theory inconsistent if you define it too ordinarily. The plane spanned by two of the time coordinates contains circles – and, as you may easily verify, they are closed time-like curves, the source of logical paradoxes (involving your premature castration of your grandfather). So the new time coordinates cannot quite be treated on par with the time coordinate we know. There have to be some gauge-fixing conditions or constraints that only preserve the reality of one time coordinate.

The idea is that there may be some master theory with a noncompact symmetry, \(O(4,4)\) or \(O(\infty,\infty)\) or something worse, which has some huge new "gauge" symmetry that may be fixed in many ways and the gauge fixing produces the "much less noncompact" theories we know – theories with at most one time and with compact Yang-Mills gauge groups. Is this picture really possible? And if it is, are the "heavily noncompact" parent theories more than some awkward formalism that teaches us nothing true? Can these "heavily noncompact" parent theories unify theories that look very different in the normal description? And if this unification may be described mathematically, should we believe that it's physically relevant, or is it just a bookkeeping device that reshuffles many degrees of freedom in an unphysical way?

I am not sure about the answers to any of these questions. Many questions in physics are open and many proposals remain intriguing yet unsettled for a very long time. But I also want to emphasize that it is perfectly conceivable that these questions may be settled and will soon be settled. And they may be settled in both ways. It may be shown that this double field theory formalism is natural, important, and teaches us something. But it may also be shown that it is misguided. Before robust enough evidence exists in either direction, I would find it very dangerous and unscientific to prematurely discard one of the possibilities. The usefulness, relevance, or mathematical depth of the double field theory formalism is just a working hypothesis, I think, and the amount of evidence backing this hypothesis (e.g. nontrivial consistency checks) is in no way comparable to the evidence backing the importance and validity of many established concepts in string theory (or physics).

by Luboš Motl (noreply@blogger.com) at June 18, 2015 07:21 AM

June 17, 2015

Symmetrybreaking - Fermilab/SLAC

Making the portable gamma camera

The end of the Cold War and the cancellation of the Superconducting Super Collider led to the creation of a life-saving medical device.

Each year, more than 5 million Americans take a nuclear heart stress test, which images blood flow in the heart before and after a brisk walk on a treadmill. The test allows doctors to visualize a lack of blood flow that may result from blocked or narrowed coronary arteries, which are linked to heart disease, the leading cause of death in the United States.

The test is conducted with a device called a gamma camera, which also helps diagnose dozens of other conditions, from arthritis to renal failure. Invented in the 1950s, gamma cameras used two 500-pound detectors the size of truck tires and cost hundreds of thousands of dollars. As a result, they were usually located only in regional medical centers.

But new options are available, thanks to a small company, a national laboratory and, in part, the rise and fall of both the Cold War and the Superconducting Super Collider.

A Cold War camera

The small company is Digirad, which a materials scientist started in 1985 as San Diego Semiconductors to create and develop applications for complex crystalline materials. Its name changed to Aurora Technologies in 1991 and to Digirad in 1994.

Sustained by a variety of government R&D contracts, the company’s most successful early product was a gamma-ray detector. In 1991, the Defense Advanced Research Projects Agency (DARPA) gave the company a contract to do more. The agency asked for a prototype portable gamma camera—a detector array with readout and display systems that could remotely determine the number of nuclear warheads contained within the nosecone of a missile. At the camera’s heart were cadmium zinc telluride crystals, which converted gamma rays into electrical signals.

Digirad’s portable gamma camera was to have been a key tool for verifying nuclear weapons reductions. But after the end of the Cold War, the government lost interest. DARPA halted its funding to Digirad in 1993. To survive, the company needed to diversify.

A University of California, San Diego physician who had seen a news story about Digirad suggested that the company repurpose the prototype into a revolutionary medical imaging device. That’s what Digirad set out to create.

Heart-saving gamma rays

To use a gamma camera, physicians first inject into the bloodstream a small amount of a short-lived radioactive isotope, which sends out gamma rays as it decays. The patient must then lie very still inside a hospital’s tunnel-like gamma camera for five to 30 minutes as its detectors record the isotope’s emissions and create images that show doctors where the patient’s blood is flowing or blocked.

With the help of a cooperative research agreement with SLAC National Accelerator Laboratory in 1994 and 1995, Digirad modified its warhead-detecting camera into a much smaller, lightweight version of the medical gamma camera. It unveiled its new product in 1997.

The camera worked, but its price was higher than hospitals could afford.

“Unfortunately, cadmium zinc telluride was just too expensive to use in a commercial product,” says Richard Conwell, then Digirad’s vice president for research and development.

Unbeknownst to Digirad, the solution to this problem had just been created at Lawrence Berkeley National Laboratory.

A Super Collider’s sensor

In the early 1990s, Berkeley Lab electrical engineer Steve Holland was working on silicon detector technology for use in the Superconducting Super Collider, a particle collider slated to be built in central Texas that would have been twice as large and powerful as today’s Large Hadron Collider.

Holland’s challenge was to develop a mass-producible low-noise diode component for the SSC's many charged-particle detectors that would sense matter streaming from the high-energy collisions inside the collider. He did it by creating a diode with a micron-thick electrical-contact layer on the back that could trap noise-creating impurities introduced during fabrication.

In 1993, Congress canceled funding for the Superconducting Super Collider. The silicon detector effort seemed doomed to fade into obscurity.

But fellow Berkeley Lab researcher Carolyn Rossington told physicist William Moses, a member of Berkeley Lab’s Life Sciences Division, about Holland’s diode.

Moses was interested in making a compact gamma camera for diagnosing breast cancer. It turned out that Holland’s diode was just the thing needed to complete the design. The Berkeley Lab team, which included Moses, Rossington and Nadine Wang, described their device at a nuclear medicine and imaging conference in Albuquerque, New Mexico, in November 1997. Digirad scientist Bo Pi was in the audience.

Digirad negotiated with Berkeley Lab for an exclusive license to use Holland’s innovation in nuclear medicine. After developing new methods to manufacture the diode in commercial quantities, Digirad produced its first portable gamma cameras in 2000. Its business rejuvenated, Digirad went public in 2004.

Today, Digirad provides onsite gamma imaging services in remote locations and produces two additional compact gamma cameras that have two or three of the thin, lightweight and adjustable detectors to produce clearer heart images in doctors’ offices or clinics.

Digirad’s portable camera is even valuable to hospitals that already have a large conventional gamma camera.

“I can roll it into any room in my hospital,” says Dr. Janusz Kikut, Associate Professor and Nuclear Medicine Division Chief at the Vermont Medical Center. “In many urgent or unstable cases, it is faster, safer and less expensive to use this portable camera instead of transporting the critically ill patients down to the nuclear medicine department.”

“Holland’s diode has been huge for us,” says Virgil Lott, Digirad’s head of diagnostic imaging. “It has enabled us to take faster, higher-quality gamma imaging much closer to millions of patients.” 

Image: Portable gamma camera from Digirad (Courtesy of Digirad)


by Mike Ross at June 17, 2015 01:00 PM

June 16, 2015

Symmetrybreaking - Fermilab/SLAC

OPERA catches fifth tau neutrino

The OPERA experiment’s study of tau neutrino appearance has reached the level of “discovery.”

Today the OPERA experiment in Italy announced a discovery related to the behavior of neutrinos.

Light, rarely interacting particles called neutrinos come in three types, called “flavors”: electron, muon and tau. When an electron neutrino collides with a detector, it produces an electron; a muon neutrino produces a muon; and a tau neutrino produces a tau.

In 1998, the Super-Kamiokande experiment in Japan found the first solid evidence that neutrinos do not stick with any one flavor; they oscillate, or switch back and forth between flavors.

The Super-Kamiokande experiment studied muon neutrinos coming from cosmic rays and found that they were not catching as many as they expected; some of the muon neutrinos seemed to disappear. Researchers think they were changing to a flavor that the Super-Kamiokande experiment could not see.

So scientists built an experiment that could see. The OPERA detector at the Italian National Institute for Nuclear Physics at Gran Sasso was the first that could catch an oscillated tau neutrino.

Between 2006 and 2012, the OPERA detector studied a beam of muon neutrinos produced about 450 miles away at CERN on the border of France and Switzerland. Traveling at almost the speed of light, the neutrinos had just enough time to change flavors between their point of origin and the detector.

Neutrinos can pass through the entire planet without bumping into another particle, but if you send enough of them through a large, sensitive detector, you can catch a small number of them per day.

In 2010, the OPERA experiment announced that it had found its first candidate tau neutrino coming from the muon neutrino beam. In 2012, 2013 and 2014, it announced its second, third and fourth.

Now the OPERA experiment has announced its fifth tau neutrino, bringing the result to the level of “discovery.” The probability that they would find five tau neutrinos in their data by chance is less than one in a million.

“The detection of a fifth tau neutrino is extremely important,” says spokesperson Giovanni De Lellis of INFN in Naples, in a press release issued today. “We can… definitely report the discovery of the appearance of tau neutrinos in a muon neutrino beam.”

Scientists will continue to analyze the data in search of additional tau neutrinos.

 


by Kathryn Jepsen at June 16, 2015 05:35 PM

June 15, 2015

John Baez - Azimuth

World Energy Outlook 2015

It’s an exciting and nerve-racking time as global carbon emissions from energy production have begun to drop, at least for a little while:

yet keeping warming below 2°C seems ever more difficult:

The big international climate negotiations to be concluded in Paris in December 2015 bring these issues to the forefront in a dramatic way. Countries are already saying what they plan to do: you can read their Intended Nationally Determined Contributions online!

But it’s hard to get an overall picture of the situation. Here’s a new report that helps:

• International Energy Agency, World Energy Outlook Special Report 2015: Energy and Climate Change.

Since the International Energy Agency seems intelligent to me, I’ll just quote their executive summary. If you’re too busy for even the executive summary, let me summarize the summary:

Given the actions that countries are now planning, we could have an increase of around 2.6 °C over preindustrial temperature by 2100, and more after that.

Executive summary

A major milestone in efforts to combat climate change is fast approaching. The importance of the 21st Conference of the Parties (COP21) – to be held in Paris in December 2015 – rests not only in its specific achievements by way of new contributions, but also in the direction it sets. There are already some encouraging signs with a historic joint announcement by the United States and China on climate change, and climate pledges for COP21 being submitted by a diverse range of countries and in development in many others. The overall test of success for COP21 will be the conviction it conveys that governments are determined to act to the full extent necessary to achieve the goal they have already set to keep the rise in global average temperatures below 2 degrees Celsius (°C), relative to pre-industrial levels.

Energy will be at the core of the discussion. Energy production and use account for two-thirds of the world’s greenhouse-gas (GHG) emissions, meaning that the pledges made at COP21 must bring deep cuts in these emissions, while yet sustaining the growth of the world economy, boosting energy security around the world and bringing modern energy to the billions who lack it today. The agreement reached at COP21 must be comprehensive geographically, which means it must be equitable, reflecting both national responsibilities and prevailing circumstances. The importance of the energy component is why this World Energy Outlook Special Report presents detailed energy and climate analysis for the sector and recommends four key pillars on which COP21 can build success.

Energy and emissions: moving apart?

The use of low-carbon energy sources is expanding rapidly, and there are signs that growth in the global economy and energy-related emissions may be starting to decouple. The global economy grew by around 3% in 2014 but energy-related carbon dioxide (CO2) emissions stayed flat, the first time in at least 40 years that such an outcome has occurred outside economic crisis.

Renewables accounted for nearly half of all new power generation capacity in 2014, led by growth in China, the United States, Japan and Germany, with investment remaining strong (at $270 billion) and costs continuing to fall. The energy intensity of the global economy dropped by 2.3% in 2014, more than double the average rate of fall over the last decade, a result stemming from improved energy efficiency and structural changes in some economies, such as China.

Around 11% of global energy-related CO2 emissions arise in areas that operate a carbon market (where the average price is $7 per tonne of CO2), while 13% of energy-related CO2 emissions arise in markets with fossil-fuel consumption subsidies (an incentive equivalent to $115 per tonne of CO2, on average). There are some encouraging signs on both fronts, with reform in sight for the European Union’s Emissions Trading Scheme and countries including India, Indonesia, Malaysia and Thailand taking the opportunity of lower oil prices to diminish fossil-fuel subsidies, cutting the incentive for wasteful consumption.

The energy contribution to COP21

Nationally determined pledges are the foundation of COP21. Intended Nationally Determined Contributions (INDCs) submitted by countries in advance of COP21 may vary in scope but will contain, implicitly or explicitly, commitments relating to the energy sector. As of 14 May 2015, countries accounting for 34% of energy-related emissions had submitted their new pledges.

A first assessment of the impact of these INDCs and related policy statements (such as by China) on future energy trends is presented in this report in an “INDC Scenario”. This shows, for example, that the United States’ pledge to cut net greenhouse-gas emissions by 26% to 28% by 2025 (relative to 2005 levels) would deliver a major reduction in emissions while the economy grows by more than one-third over current levels. The European Union’s pledge to cut GHG emissions by at least 40% by 2030 (relative to 1990 levels) would see energy-related CO2 emissions decline at nearly twice the rate achieved since 2000, making it one of the world’s least carbon-intensive energy economies. Russia’s energy-related emissions decline slightly from 2013 to 2030 and it meets its 2030 target comfortably, while implementation of Mexico’s pledge would see its energy-related emissions increase slightly while its economy grows much more rapidly. China has yet to submit its INDC, but has stated an intention to achieve a peak in its CO2 emissions around 2030 (if not earlier), an important change in direction, given the pace at which they have grown on average since 2000.

Growth in global energy-related GHG emissions slows but there is no peak by 2030 in the INDC Scenario. The link between global economic output and energy-related GHG emissions weakens significantly, but is not broken: the economy grows by 88% from 2013 to 2030 and energy-related CO2 emissions by 8% (reaching 34.8 gigatonnes). Renewables become the leading source of electricity by 2030, as average annual investment in nonhydro renewables is 80% higher than levels seen since 2000, but inefficient coal-fired power generation capacity declines only slightly.

With INDCs submitted so far, and the planned energy policies in countries that have yet to submit, the world’s estimated remaining carbon budget consistent with a 50% chance of keeping the rise in temperature below 2 °C is consumed by around 2040 – eight months later than is projected in the absence of INDCs. This underlines the need for all countries to submit ambitious INDCs for COP21 and for these INDCs to be recognised as a basis upon which to build stronger future action, including from opportunities for collaborative/co-ordinated action or those enabled by a transfer of resources (such as technology and finance). If stronger action is not forthcoming after 2030, the path in the INDC Scenario would be consistent with an average temperature increase of around 2.6 °C by 2100 and 3.5 °C after 2200.

What does the energy sector need from COP21?

National pledges submitted for COP21 need to form the basis for a “virtuous circle” of rising ambition. From COP21, the energy sector needs to see a projection from political leaders at the highest level of clarity of purpose and certainty of action, creating a clear expectation of global and national low-carbon development. Four pillars can support that achievement:

1. Peak in emissions – set the conditions which will achieve an early peak in global energy-related emissions.

2. Five-year revision – review contributions regularly, to test the scope to lift the level of ambition.

3. Lock in the vision – translate the established climate goal into a collective long-term emissions goal, with shorter-term commitments that are consistent with the long-term vision.

4. Track the transition – establish an effective process for tracking achievements in the energy sector.

Peak in emissions

The IEA proposes a bridging strategy that could deliver a peak in global energy-related emissions by 2020. A commitment to target such a near-term peak would send a clear message of political determination to stay below the 2 °C climate limit. The peak can be achieved relying solely on proven technologies and policies, without changing the economic and development prospects of any region, and is presented in a “Bridge Scenario”. The technologies and policies reflected in the Bridge Scenario are essential to secure the long-term decarbonisation of the energy sector and their near-term adoption can help keep the door to the 2 °C goal open. For countries that have submitted their INDCs, the proposed strategy identifies possible areas for over-achievement. For those that have yet to make a submission, it sets out a pragmatic baseline for ambition.

The Bridge Scenario depends upon five measures:

• Increasing energy efficiency in the industry, buildings and transport sectors.

• Progressively reducing the use of the least-efficient coal-fired power plants and banning their construction.

• Increasing investment in renewable energy technologies in the power sector from $270 billion in 2014 to $400 billion in 2030.

• Gradual phasing out of fossil-fuel subsidies to end-users by 2030.

• Reducing methane emissions in oil and gas production.

These measures have profound implications for the global energy mix, putting a brake on growth in oil and coal use within the next five years and further boosting renewables. In the Bridge Scenario, coal use peaks before 2020 and then declines while oil demand rises to 2020 and then plateaus. Total energy-related GHG emissions peak around 2020. Both the energy intensity of the global economy and the carbon intensity of power generation improve by 40% by 2030. China decouples its economic expansion from emissions growth by around 2020, much earlier than otherwise expected, mainly through improving the energy efficiency of industrial motors and the buildings sector, including through standards for appliances and lighting. In countries where emissions are already in decline today, the decoupling of economic growth and emissions is significantly accelerated; compared with recent years, the pace of this decoupling is almost 30% faster in the European Union (due to improved energy efficiency) and in the United States (where renewables contribute one-third of the achieved emissions savings in 2030). In other regions, the link between economic growth and emissions growth is weakened significantly, but the relative importance of different measures varies. India utilises energy more efficiently, helping it to reach its energy sector targets and moderate emissions growth, while the reduction of methane releases from oil and gas production and reforming fossil-fuel subsidies (while providing targeted support for the poorest) are key measures in the Middle East and Africa, and a portfolio of options helps reduce emissions in Southeast Asia. While universal access to modern energy is not achieved in the Bridge Scenario, the efforts to reduce energy related emissions do go hand-in-hand with delivering access to electricity to 1.7 billion people and access to clean cookstoves to 1.6 billion people by 2030.

by John Baez at June 15, 2015 01:00 PM

June 14, 2015

Jester - Resonaances

Weekend plot: minimum BS conjecture
This weekend plot completes my last week's post:

It shows the phase diagram for models of natural electroweak symmetry breaking. These models can be characterized by 2 quantum numbers:

  • B [Baroqueness], describing how complicated the model is relative to the standard model;
  • S [Strangeness], describing the fine-tuning needed to achieve electroweak symmetry breaking with the observed Higgs boson mass.

To allow for a fair comparison, in all models the cut-off scale is fixed to Λ=10 TeV. The standard model (SM) has, by definition,  B=1, while S≈(Λ/mZ)^2≈10^4.  The principle of naturalness postulates that S should be much smaller, S ≲ 10.  This requires introducing new hypothetical particles and interactions, therefore inevitably increasing B.
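
A quick numerical check of the quoted value, using mZ ≈ 91 GeV (a number not restated in the post): S ≈ (Λ/mZ)^2 ≈ (10^4 GeV / 91 GeV)^2 ≈ 110^2 ≈ 1.2×10^4, i.e. indeed of order 10^4.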

The most popular approach to reducing S is to introduce supersymmetry. The minimal supersymmetric standard model (MSSM) does not make fine-tuning better than 10^3 in the bulk of its parameter space. To improve on that, one needs to introduce large A-terms (aMSSM), or R-parity breaking interactions (RPV), or an additional scalar (NMSSM). Another way to decrease S is achieved in models where the Higgs arises as a composite Goldstone boson of new strong interactions. Unfortunately, in all of those models, S cannot be smaller than 10^2 due to phenomenological constraints from colliders. To suppress S even further, one has to resort to the so-called neutral naturalness, where new particles beyond the standard model are not charged under the SU(3) color group. The twin Higgs – the simplest model of neutral naturalness – can achieve S ≈ 10 at the cost of introducing a whole parallel mirror world.

The parametrization proposed here leads to a striking observation. While one can increase B indefinitely (many examples have been proposed in the literature), for a given S there seems to be a minimum value of B below which no models exist. In fact, the conjecture is that the product B*S is bounded from below:
BS ≳ 10^4. 
One robust prediction of the minimum BS conjecture is the existence of a very complicated (B=10^4), yet-to-be-discovered model with no fine-tuning at all. The take-home message is that one should always try to minimize BS, even if for fundamental reasons it cannot be avoided completely ;)

by Jester (noreply@blogger.com) at June 14, 2015 12:29 AM

Jester - Resonaances

Naturalness' last bunker
Last week Symmetry Breaking ran an article entitled "Natural SUSY's last stand". That title is a bit misleading, as it makes you think of General Custer at the eve of the Battle of the Little Bighorn, whereas natural supersymmetry has long been a dead body torn by vultures. Nevertheless, it is interesting to ask a more general question: are there any natural theories that survived? And if yes, what can we learn about them from the LHC run-2?

For over 30 years naturalness has been the guiding principle in theoretical particle physics. The standard model by itself has no naturalness problem: it contains 19 free parameters that are simply not calculable and have to be taken from experiment. The problem arises because we believe the standard model is eventually embedded in a more fundamental theory where all these parameters, including the Higgs boson mass, are calculable. Once that is done, the calculated Higgs mass will typically be proportional to the mass of the heaviest state in that theory, as a result of quantum corrections. The exception to this rule is when the fundamental theory possesses a symmetry forbidding the Higgs mass, in which case the mass will be proportional to the scale where the symmetry becomes manifest. Given that the Higgs mass is 125 GeV, the concept of naturalness leads to the following prediction: 1) new particles beyond the standard model should appear around the mass scale of 100-300 GeV, and 2) the new theory with the new particles should have a protection mechanism for the Higgs mass built in.
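
To recall the textbook estimate behind this statement (not spelled out in the post, and quoted here only as a rough sketch): with a hard cutoff Λ, the top-quark loop alone shifts the Higgs mass squared by roughly\[

\delta m_H^2 \simeq -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2,

\] so for Λ in the multi-TeV range this single correction already exceeds \((125\ {\rm GeV})^2\) by a large factor unless new states cancel it – hence the expectation of partner particles near the weak scale.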

There are two main realizations of this idea. In supersymmetry, the protection is provided by opposite-spin partners of the known particles. In particular, the top quark is accompanied by stop quarks, which are spin-0 scalars but otherwise have the same color and electric charge as the top quark. Another protection mechanism can be provided by a spontaneously broken global symmetry, usually realized in the context of new strong interactions from which the Higgs arises as a composite particle. In that case, the protection is provided by same-spin partners: for example, the top quark has a fermionic partner with the same quantum numbers but a different mass.

Both of these ideas are theoretically very attractive but are difficult to realize in practice. First of all, it is hard to understand how these 100 new partner particles could be hiding around the corner without leaving any trace in numerous precision experiments. But even if we were willing to believe in the Universal conspiracy, the LHC run-1 was the final nail in the coffin. The point is that both of these scenarios make a very specific  prediction: the existence of new particles with color charges around the weak scale. As the LHC is basically a quark and gluon collider, it can produce colored particles in large quantities. For example, for a 1 TeV gluino (supersymmetric partner of the gluon) some 1000 pairs would have been already produced at the LHC. Thanks to  the large production rate, the limits on colored partners are already quite stringent. For example, the LHC limits on masses of gluinos and massive spin-1 gluon resonances extend well above 1 TeV, while for scalar and fermionic top partners the limits are not far below 1 TeV. This means that a conspiracy theory is not enough: in supersymmetry and composite Higgs one also has to accept a certain degree of fine-tuning, which means we don't even solve the problem that is the very motivation for these theories.

The reasoning above suggests a possible way out. What if naturalness could be realized without colored partners: without gluinos, stops, or heavy tops? The conspiracy problem would not go away, but at least we could avoid the stringent limits from the LHC. It turns out that theories with such a property do exist. They linger away from the mainstream, but recently they have been gaining popularity under the name of neutral naturalness. The reason for that is obvious: such theories may offer a nuclear bunker that will allow naturalness to survive beyond the LHC run-2.

The best known realization of neutral naturalness is the twin Higgs model. It assumes the existence of a mirror world, with mirror gluons, mirror top quarks, a mirror Higgs boson, etc., which is related to the standard model by an approximate parity symmetry. The parity gives rise to an accidental global symmetry that could protect the Higgs boson mass. At the technical level, the protection mechanism is similar to that in composite Higgs models, where standard model particles have partners with the same spins. The crucial difference, however, is that the mirror top quarks and mirror gluons are charged under the mirror color group, not the standard model color. As we don't have a mirror proton collider yet, the mirror partners are not produced in large quantities at the LHC. Therefore, they could well be as light as our top quark without violating any experimental bounds, and in agreement with the requirements of naturalness.


A robust prediction of twin-Higgs-like models is that the Higgs boson couplings to matter deviate from the standard model predictions, as a consequence of mixing with the mirror Higgs. The size of this deviation is of the same order as the fine-tuning in the theory; for example, order 10% deviations are expected when the fine-tuning is 1 in 10. This is perhaps the best motivation for precision Higgs studies: measuring the Higgs couplings with an accuracy better than 10% may invalidate or boost the idea. However, neutral naturalness points us to experimental signals that are often very different than in the popular models. For example, the mirror color interactions are expected to behave at low energies similarly to our QCD: there should be mirror mesons, baryons, and glueballs. By construction, the Higgs boson must couple to the mirror world, and therefore it offers a portal via which the mirror hadronic junk can be produced and decay, which may lead to truly exotic signatures such as displaced jets. This underlines the importance of searching for exotic Higgs boson decays – very few such studies have been carried out by the LHC experiments so far. Finally, as has been speculated for a long time, dark matter may have something to do with the mirror world. Neutral naturalness provides a reason for the existence of the mirror world and an approximate parity symmetry relating it to the real world. It may be our best shot at understanding why the amounts of ordinary and dark matter in the Universe are equal up to a factor of 5 – something that arises as a complete accident in the usual WIMP dark matter scenario.

There's no doubt that neutral naturalness is a desperate attempt to save natural electroweak symmetry breaking from the reality check, or at least to postpone the inevitable. Nevertheless, the existence of a mirror world is certainly a logical possibility. The recent resurgence of this scenario has led to identifying new interesting models, and new ways to search for them in experiment. The persistence of the naturalness principle may thus be turned into a positive force, as it may motivate better searches for hidden particles. It is possible that the LHC data hold the answer to the naturalness puzzle, but we will have to look deeper to extract it.

by Jester (noreply@blogger.com) at June 14, 2015 12:29 AM

June 13, 2015

Jester - Resonaances

On the LHC diboson excess
The ATLAS diboson resonance search showing a 3.4 sigma excess near 2 TeV has stirred some interest. This is understandable: 3 sigma does not grow on trees, and moreover CMS also reported anomalies in related analyses. Therefore it is worth looking at these searches in a bit more detail in order to gauge how excited we should be.

The ATLAS one is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z bosons decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from W or Z boson decays will be very collimated and will be seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of the W (80±13 GeV) or the Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to a production of a new particle that decays to WW, WZ, or ZZ final states, it should show up as a bump on top of the QCD background. What ATLAS observes is this:

There is a bump near 2 TeV, which  could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between the above 3  diboson states. The difference between W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z  partly overlap. In fact, 20% of the events fall into all 3 diboson categories.   For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.
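
To put numbers on that overlap, using only the windows quoted above: the W window, 80±13 GeV, spans 67–93 GeV, while the Z window, 91±13 GeV, spans 78–104 GeV, so the two share the 78–93 GeV range – wide enough that a single jet often passes both mass cuts, which is why so many events land in more than one diboson category.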

Given the number of searches that ATLAS and CMS have made, 3 sigma fluctuations of the background should happen a few times in the LHC run-1 just by sheer chance. The interest in the ATLAS excess is however amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen on this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV where the standard model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?

Unfortunately, building a model explaining all the diboson data is not easy. Suffice it to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of a couple of tens of femtobarns. This requires the new particle to couple quite strongly to quarks or gluons. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.

However, there is a yet bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur roughly at the same mass, they are not compatible when it comes to the cross section. And so the limits in the single-lepton search are not consistent with the new-particle interpretation of the excess in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real, one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure if there exist consistent models with this property.

My conclusion...  For sure this is something to observe in the early run-2. If this is real, it should clearly show in both experiments already this year.  However, due to the inconsistencies between different search channels and the theoretical challenges, there's little reason to get excited yet.

Thanks to Chris for digging out the CMS plot.

by Jester (noreply@blogger.com) at June 13, 2015 09:13 AM

June 12, 2015

The n-Category Cafe

Carnap and the Invariance of Logical Truth

I see Steve Awodey has a paper just out, Carnap and the invariance of logical truth. We briefly discussed this idea in the context of Mautner’s 1946 article back here.

Steve ends the article by portraying homotopy type theory as following in the same tradition, but now where invariance is under homotopy equivalence. I wonder if we’ll see some variant of the model/theory duality he and Forssell found in the case of HoTT.

by david (d.corfield@kent.ac.uk) at June 12, 2015 01:38 PM

June 11, 2015

Quantum Diaries

Starting up LHC Run 2, step by step

I know what you are thinking. The LHC is back in action, at the highest energies ever! Where are the results? Where are all the blog posts?

Back in action, yes, but restarting the LHC is a very measured process. For one thing, when running at the highest beam energies ever achieved, we have to be very careful about how we operate the machine, lest we inadvertently damage it with beams that are mis-steered for whatever reason. The intensity of the beams — how many particles are circulating — is being incrementally increased with successive fills of the machine. Remember that the beam is bunched — the proton beams aren’t continuous streams of protons, but collections that are just a few centimeters long, spaced out by at least 750 centimeters. The LHC started last week with only three proton bunches in each beam, only two of which were actually colliding at an interaction point. Since then, the LHC team has gone to 13 bunches per beam, and then 39 bunches per beam. Full-on operations will be more like 1380 bunches per beam. So at the moment, the beams are of very low intensity, meaning that there are not that many collisions happening, and not that much physics to do.
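
A small aside on the quoted 750 centimeters, which the post doesn't derive: it is simply the distance a near-light-speed proton covers during the nominal 25-nanosecond bunch spacing, roughly 25×10^-9 s × 3×10^10 cm/s ≈ 750 cm; larger bunch spacings give correspondingly larger gaps, hence the "at least".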

What’s more, the experiments have much to do also to prepare for the higher collision rates. In particular, there is the matter of “timing in” all the detectors. Information coming from each individual component of a large experiment such as CMS takes some time to reach the data acquisition system, and it’s important to understand how long that time is, and to get all of the components synchronized. If you don’t have this right, then you might not be getting the optimal information out of each component, or worse still, you could end up mixing up information from different bunch crossings, which would be disastrous. This, along with other calibration work, is an important focus during this period of low-intensity beams.

But even if all these things were working right out of the box, we’d still have a long way to go until we had some scientific results. As noted already, the beam intensities have been low, so there aren’t that many collisions to examine. There is much work to do yet in understanding the basics in a revised detector operating at a higher beam energy, such as how to identify electrons and muons once again. And even once that’s done, it will take a while to make measurements and fully vet them before they could be made public in any way.

So, be patient, everyone! The accelerator scientists and the experimenters are hard at work to bring you a great LHC run! Next week, the LHC takes a break for maintenance work, and that will be followed by a “scrubbing run”, the goal of which is to improve the vacuum in the LHC beam pipe. That will allow higher-intensity beams, and position us to take data that will get the science moving once again.

by Ken Bloom at June 11, 2015 10:16 PM

Lubos Motl - string vacua and pheno

String theory and fun: a response to Barry Kripke
The Internet's most notorious anti-string blog posted a link to a view from an impressive young ex-string theorist (if you believe what's written there) that was posted on Reddit one year ago. That comment (plus the long thread beneath it) contains numerous ideas, thoughts, and sentiments. I will respond to the following points:
  1. the ex-string theorist thinks that he and similar people are really, really smart
  2. he left string theory for robotics
  3. \(h=15\) is approximately required to become a professor and it's too much
  4. he found out that string theory wasn't really fun for him
  5. string theory is only the journey, not the destination, and you should like the journey or leave
  6. Nima estimates that one gets a great idea once in 3 years or so
  7. there are too many trained string theorists
  8. there's too much competition and it's harmful
  9. the term "string theory" should be abandoned because it's too broad a subject
  10. the term "string theory" should be abandoned because most people work on different things
  11. Lisi's theory seems to be incomplete or provably wrong according to string theorists
There are lots of reasons why I think that the author of the text is a genuine trained string theorist with the stellar university credentials he has described. It's very likely that I know this person but I just haven't been able to figure out the name. Well, I have one extreme guess (a person much closer to me than the average) but I am not bold enough to speculate here. OK, let me start to respond.




Is he smart?

First, I do think that he is very smart and there are a couple of young researchers who are similarly impressive and whose background sounds very similar. The number of young people in this category who show up in high energy physics every year is comparable to one or two. It depends where one places the cutoff, of course. I do find it likely that this person belongs among the 20,000 smartest people on Earth.

The author tells us that he left string theory for robotics. So he is a string theorist who left for robotics? Clearly, it must be Barry Kripke. In February 2015, I was stunned when Barry Kripke wrote a paper on light-cone-quantized string theory. He was supposed to do some plasma physics – and was even great at such mundane things as the construction of a wobot that destroys Howard Wolowitz's wobot. So how could Bawwy have the knowledge to write papers about the light cone quantization of string theory?




The Reddit posting makes it so clear that even Penny would understand it – he calls her Woxanna because Penny isn't sexy enough. Barry was trained as a string theorist but at some point, he decided to leave for robotics. I will call him "Barry" and I wish him good luck.

By the way, I left academia (and the U.S.) mostly for reasons that had very little to do with any specifics of string theory, which is a reason why I don't want to combine my experience with this story at all. It would seem off-topic to me.

Is a super-bright guy guaranteed to succeed in robotics?

Here, I can't resist making one point. While people in string theory – but even in other fields such as experimental particle physics (statistics in it etc.) – are likely to be able to do many other things, I believe that in most fields of science and technology, success depends on good luck much more strongly than in high energy physics.

In string theory or experimental HEP, one may feel that he or she has made it to a field that is very selective when it comes to the required intelligence (of the mathematical type). However, in other fields, the intelligence just isn't that paramount and many other characteristics matter a lot. And so does luck.

To be concrete, I do think that while most string theorists belong among the 50,000 smartest people on Earth, most people who made or make the most spectacular advances in robotics only belong among the 10 million smartest folks. It's a much less strict selection that matters there. Because of these numbers, I don't think that the success of a person who switches to a less selective field is guaranteed – although there are obviously examples of greatly successful people who made a switch like that.

Is \(h=15\) too much?

An author of science papers has \(h=15\) if he has authored 15 papers with at least 15 citations, but he doesn't have 16 papers with at least 16 citations. It's a measure that was introduced as a superior compromise between the number of papers and the total number of citations. I think that it is not true that this criterion is applied strictly but I do agree that it is a reasonable rule-of-thumb.
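
Since the definition is purely combinatorial, here is a minimal sketch (in Python, just for concreteness; the citation counts below are made up) of how the index is computed from a list of per-paper citation counts:

def h_index(citations):
    # Sort the citation counts from highest to lowest and find the largest
    # rank i such that the i-th paper still has at least i citations.
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A hypothetical publication record: the third-most-cited paper has 5 citations,
# the fourth has only 3, so the h-index is 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3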

Getting to \(h=15\) in a "predictable" way, by hard work, if you wish, is a lot of work, indeed. But at the same time, I do think it's a reasonable amount of work if the path to join faculty is supposed to be "straightforward". One needs to write "numerous" papers and they can't be "completely invisible".

On the other hand, I would disagree if Barry claimed that \(h=15\) is a strict condition. In particular, I am pretty sure that if someone found something as clear, as understandably correct, and as revolutionary as the special theory of relativity, he could easily get a faculty job with one famous publication.

It's not a scenario that "most smart students" should assume to be theirs, but it shouldn't be dismissed, either. But the point I want to make is that some people are naturally more inclined to do \(h=1\) spectacular work and that's another way to get through the system, too. I will discuss this point in the section about the journey and the destination.

Doing things that are fun: most good HEP scientists are excessively rigorous

Barry told us that he thought that he should have enjoyed some activities and calculations and if he didn't, the following ones would have to be better. But they weren't. So at some moment, he reevaluated whether those activities were fun for him at all or not. And the answer was No.

Do most graduate students and postdocs in string theory enjoy making long calculations or formatting very technical papers? I strongly doubt it. This kind of difficult and "dry" technical work is generally assumed to be a necessary feature of high quality work.

String theorists are among those who have the highest standards. And indeed, fundamentally flawed papers – with errors caused by mistakes that would be avoided with more rigor – are rare in the field. Maybe people think it's necessary because of the high competition etc.

Well, I think that string theorists actually heavily overshoot when it comes to the rigor and amount of "hard work" that should be visible in papers. The sometimes "excessive" degree of rigor and hard work in string papers – when combined with the realization that many authors of such papers may have a very hard time finding a job with an average salary – is making me upset, especially if I see how many people are getting lots of easy money for producing (or just shouting) pure junk.

String theorists, even the brightest ones, should realize that at the end, the criteria by which they are hired etc. are more diverse. They should do things that are fun. If they are almost sure about some result – if they have reasons to think that some calculation has to work – they should take the risk and submit the picture even with the less rigorous justification.

The reason why I say so is that I believe that a typical string paper is fully read by a truly tiny number of readers, often comparable to a dozen. One should think a bit rationally. The time you invest into something may turn out to be wasted time under certain circumstances. Your technical paper is likely to have fewer readers than J.K. Rowling's fantasies, but even if the number is small, you should still care about it and it should affect how much time you're investing into it.

All the detailed evidence may be needed if there's actually any uncertainty about the claims. But sometimes there's none. There's no controversy. So spending too much time with overly boring calculations and writeups may be unnecessary. Moreover, papers may become less readable when the new key stuff is hidden in a pile of old and not so important material. And another point to make is that the readers who are qualified enough to read your paper at all are usually able to complete the calculations themselves – and they actually need to rediscover many things to become true converts. When the amount of rigor and boring hard work can be lowered so that string theorists become more relaxed and the quality of the work doesn't suffer much, it should be lowered and the community should collectively adapt to such a new format.

I believe that even with such a change, it would still be possible to figure out what's right and what's wrong and who's a really good researcher and who's less so. Also, I believe that string theorists should realize that to some extent, even things like their charisma play some role in their jobs etc.

More generally, when it comes to their own well-being, people should think about their happiness, not about hiring and jobs. Those things may sound similar but are you really happy when you turn yourself into a follower of a career goal? Happiness often results from simpler things. I feel sort of happy. And I even believe that Stephen Hawking is a happy man!

The string theory community has to do lots of hard work to remain the subcommunity of the scientific community that has the best reason to consider itself #1. On the other hand, it could probably do things less rigorously than it does now (although one could argue that the standards have already decreased in the recent decade or so). And maybe it should, because the happiness of the people is important, too, and too much boring work may make people less happy than certain more relaxed activities.

Bullšiting about very big questions has mostly been greater fun than writing detailed pages of papers whose main message was known not to be a game-changer. ;-)

The destinations, not just the journey, are what make string theory amazing

Barry claims that "string theory is all journey" and he realized that he didn't like the journey. Previously, he thought that he would find some satisfaction in the "destinations" but there weren't any.

If you ignore the cynical message, his poetic wording sounds impressive – these are quotes that you could carve in stone. But when I think about the content, I don't really agree with it at all.

When you drive your car, you may spend lots of time on the road. But the road – and the asphalt on it – isn't really the purpose of the driving. You have some destinations. Sometimes you visit places, friends, or businesses that are not the most important or likable ones on Earth. But they are more important than the road itself!

It is perfectly fine to be bored in the middle of some boring calculation (on the road) as long as there are points – victories – that bring you satisfaction.

The most important destination is the holy grail, the "completion of a theory of everything". This destination sounds OK as a slogan but this goal isn't really well-defined. In practice, it can mean several great goals. The phenomenological portion of the holy grail is to find the right compactification of string theory that allows one to calculate all the observed features and parameters of observed particle physics. The formal portion of the holy grail is to find the complete definition of string theory and understand why it encompasses all the aspects and features of string theory that have been uncovered so far; perhaps, this achievement would unify and hack all the beautiful mathematics that can exist. One could add the "mini holy grail" of cracking all the black hole information mysteries and similar things, too.

Now, the primary holy grail hasn't been found and it's totally plausible if not likely that it won't be found in the next 10 or 20 years, either, if ever. But that doesn't mean that this dream (these dreams) isn't motivating young scientists. Even though I am very far from an unlimited optimist when it comes to this dream, I still count myself as the "dreamer". And I am still eagerly waiting for someone fresh (and maybe someone very experienced!) who brings us the holy grail in 2015. And if that won't work, in 2016. And then... you know how to add 1. I still want to do it myself although it's hard.

As far as I can say, it is perfectly fine for young folks to be motivated by this big dream. After some time, they may find out that the dream is very far and lose their hope completely. In that case, they may decide to do something else (slightly different or something completely different), too. But I think it's right when people keep on trying to find really important things. One of them – perhaps a very young guy (or babe, to make it even more spicy) – may very well succeed.

But as I mentioned, string theory isn't just about one big dream. It has many spectacular results that are just "one level less groundbreaking" than the holy grail. And I am extremely thrilled by those, too, even by those that were found – or that I learned – 20 years ago. Similar things are being found in recent years, too. Perhaps the frequency is lower than it was in the mid 1980s or mid 1990s but the field keeps on moving. And the rate may accelerate abruptly in the future if someone unlocks a new room of marvelous insights.

When Barry says that "it is all journey", it is hard not to imagine that he actually means that "he would never find some important results" or "he couldn't recognize important results found by others". If true, it's bad for him. But many important results – not extremely far from the holy grail – have been found and are being found. And those results bring people the satisfaction that may compensate for the dissatisfaction from the many, many hours of the boring ride on the superhighway.

As a driver, you shouldn't be devastated by the need to spend some time on the road. As a string theorist, you mustn't throw up when you need to do some complicated calculations at some point. But it's simply not true that this boring stuff is everything that you do or the motivation why you do it.

Nima's great discovery every three years

Nima Arkani-Hamed is quoted as saying that one makes a very cool discovery once every three years on average. Well, I would just say that Nima's own frequency has been substantially higher than that, according to my definition of a very cool discovery. He's pretty amazing.

By the way, if this estimate were right, it would imply that the usual job contracts are approximately appropriately long. If you're hired as a postdoc for three years and you make a great discovery every 3 years on average, it's great because chances are high that you will make a great discovery during the job – and you may get another job later. In that case, you don't need to spend too much time with the boring work. ;-)
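To put a number on "chances are high" – this is just my own back-of-the-envelope toy model, not anything Nima or Barry claimed – suppose very cool discoveries arrive as a Poisson process with a rate of one per three years. Over a 3-year contract the expected count is \(\lambda = 1\), so

\[
P(\text{at least one very cool discovery}) \;=\; 1 - e^{-\lambda} \;=\; 1 - e^{-1} \;\approx\; 0.63 ,
\]

i.e. roughly two chances in three under that admittedly crude assumption.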

There is no guarantee that you will make such a discovery in those 3 years but you know, the world is a risky place, anyway.

I want to reiterate my point that the mechanical production of papers – with lots of nearly copied stuff and a small percentage of new original content, as Barry wrote elsewhere – isn't the only strategy for doing physics. Many people might agree that it describes their attitude – this may include most of the people who would agree that they're doing the research because of the career – but other people have other attitudes and motives.

Now, someone who has spent time at Stanford, Princeton, and Harvard may have gotten all the great grades and other official stellar endorsements and may safely belong among the 20,000 or 50,000 smartest people in the world. But I am afraid that if he thinks that the hard mechanical work is how one has to do things to succeed, it may be because he doesn't really have the extra X Factor, some extra mysterious talent (perhaps combined with some specific dose of independence) that some of his colleagues may have.

Even if most people need to work hard and organize their work and dreams like almost everyone else, geniuses sometimes emerge. Sorry that not everyone belongs among them.

Do too many people learn string theory?

Barry says that just the string theory PhDs from Princeton would easily fill all the string theory jobs in the U.S. That's why the competition is so extreme. And Barry's implicit (or explicit?) recommendation is that fewer people should learn string theory.

I disagree with these assertions, too. Relative to the importance and clarity of string theory, the number of people in the world who work hard to learn string theory is painfully tiny. I think that fewer than 5,000 people in the history of mankind could "mostly credibly" claim that they had learned string theory at the level of a two-semester graduate course or better. This is less than one person in a million!

And I think that it's right when lots of people learn string theory even if they are going to do something else. Many people learn so many other things that they don't do for a living. Just count how many people do useless things such as sports even though they don't earn even a fraction of what Messi does. String theory may be more demanding (time, talents), but I think that the number of people who try this path should be much higher than a few thousand. People should just be realistically told how many candidates there are per job.

Competition is too strong, the number of jobs is low

The number of jobs is low. I agree with that. But first, I think that the number of jobs for theoretical physicists should probably be substantially higher than it is today. Second, I think that almost all people who are hired to do a similar kind of fundamental or formal high-energy theoretical physics simply should be expected to have mastered string theory. It doesn't mean that they will work on actual string theory throughout their lives but I think that the mastery of string theory is a healthy and sensible filter for similar jobs. Pretty much all string theorists must learn "some" basics of particle physics. I don't see a reason why the converse shouldn't be expected from (those considering themselves primarily) particle theorists. It's 2015 now!

In other words, I think that the jobs are being stolen from string theorists by occupations that are "totally outside theoretical physics" as well as "occupations inside the broader theoretical physics" where the demands are lower.

Try to estimate the number of people in the world who are collecting taxes, for example. Czechia employs about 15,000 people who collect taxes. Do you really think that the number of paid string theorists, which is below 5, is too high? Well, I probably won't create new jobs for Princeton string theory PhDs by revealing this comparison. But I am still writing these numbers because if the public were a bit more reasonable and cultured, I actually would create them by this paragraph! ;-) At any rate, the tax system could be simplified. Some streamlined, computer-assisted universal value-added tax could be administered by 5 people and the remaining 15,000 could do string theory. That would surely be a better world.

Barry says that the competition is too intense and that it's bad. Barry and other commenters write that it burns people out and that it is responsible for making people write papers that are about quantity, not breakthroughs.

Well, I partly agree with the former. Too much competition makes most people exhausted. But this is true in any field and there are many fields with strong competition. On the other hand, if someone arrives and makes a huge single contribution, he will be able to avoid the stereotypes and find a different route through the system. And if he doesn't, he may always choose to do something other than a similar career even though he has learned string theory. For this reason, the excessive quantitative work that someone doesn't like is a self-inflicted injury.

Concerning the second point, I don't agree with it at all. I don't believe that higher competition may reduce the number of breakthroughs. It still makes people work harder, spend more time and energy on the research. This unavoidably increases the probability that a big discovery is made – even if some of the discoveries end up being made by exhausted people.

There is this idea that Albert Einstein was relaxed, had lots of time for things in the patent office, lots of time for women etc., and that this is the best setup to do science. No real job competition etc. Well, I don't think that this reasoning is right. Albert Einstein was highly creative and extremely interested in physics. He would have almost certainly made the discovery even if he had been hired as a physicist before 1905. These days, he could see some young colleagues who write not so original papers. But that doesn't mean that he would be doing the same thing. He would surely insist on his relaxed approach to physics – and on his freedom to do the same things with the women. He always did. ;-)

String theory is a bad name: the subject is huge

String theory is the name of the theory that has undergone the second superstring revolution, after which it turned out that, in a more accurate and complete understanding of the theory, strings are just some quasi-fundamental objects that are really important in some limits of the theory. But the theory has other limits, and in these other limits and in the bulk of the configuration space, there are many objects that are as important as strings.

The actual theory is "the theory formerly known as strings," as Michael Duff liked to say. It's a fun slogan but it is unusable as a name. "M-theory" was thought to replace the term "string theory" for a while, but that was at a time when people thought that the understanding of the 11-dimensional limit (one more dimension than the string limits) would solve all other mysteries about the whole theory.

This expectation was shown to be incorrect. M-theory is really "just another limit" that, unlike the five 10-dimensional supersymmetric string vacua, has 11 spacetime dimensions (and it contains no strings, just M2-branes and M5-branes). Since these realizations, string theorists use the term "M-theory" only for descriptions of, and situations in, string theory that are UV-complete in the sense of string theory but in which some 11-dimensional supergravity may be seen as a long-distance limit or an approximate description.

The full theory including the new M-theory limit(s) is still called "string theory" these days. However, the term "string theory" is now singular. There is one string theory, whereas before the mid 1990s, when string dualities were discovered, people thought that they were studying many string theories.

To summarize, the term "string theory" doesn't quite capture what the theory is according to our present understanding. But it's just a technical term that means something, a very specific theory; it has a damn good justification – the strings are still very important – and I don't really see a reason to change it.

Barry also suggests that the term "string theory" is bad because the "actual topics that string theorists study are wider". Well, when they do something that isn't "quite" string theory or that isn't string theory at all, they shouldn't call it string theory! They may still call themselves string theorists either because they're simply proud about it ;-) or because, at least in some cases, their knowledge of string theory helps them to do something.

But I still think that every string theorist more or less agrees on what is "certainly" string theory, what is "partly" string theory (or "inspired by" string theory or "generalized" string theory), and what "is not" string theory at all. My feeling is that there's too much work that "is not" string theory at all and too little work on the truly stringy, foundational issues. I blame the anti-string crackpots and the atmosphere they have created as the main reason why this is so. But of course, I don't have any reliable proof that they are the main reason. In the past, the activity in "purely stringy" topics temporarily decreased even when there were no Swolins and Šmoits around.

Barry says that "string theory" is a bad name also because string theorists work on things not linked to explicit strings; he suggests "formal particle theory" or "fundamental theory" etc. Well, these terms are also being used but they just mean something else. They are more general. Someone doing conceptual work involving quantum field theory may be a "formal particle theorist" but he doesn't need to be a "string theorist" at all. So I don't see any problem at all. What I implicitly see behind Barry's proposal is an attempt to make string theory itself invisible, as if it were a taboo. If that's so, I surely oppose the suggestion as aggressively as I can. It's not a taboo. It's really the main gem of "formal particle theory" in the recent 20, 30, and maybe 50 years. But it's a gem, not the whole thing.

Also, sort of independently of that, Barry suggests that "string theory" – even when it really is string theory – is too big a subject and should be split. So he wants to talk about dualists, mirror symmetrists, supergravitists, D-branists, and similar people. This is ludicrous. People study dualities or mirror symmetry or supergravity or D-branes etc. But those topics usually heavily overlap, so people can't quite define where they belong. If they need to describe their research in more detail, they can do it by adding some extra words. But there is no need to coin these new "-ist" words that sound like ideologies (feminism, Nazism, communism, environmentalism).

Moreover, I don't like what Barry says for another, albeit related, reason. He really proposes to create overspecialized small boxes into which people "sharply" fit. I think it's completely counterproductive. This would really be a way to encourage unoriginal derivative work. String theory is a rich subject but it is not so wide as to justify the near isolation of its subfields or the non-existence of contributions that cannot be categorized. "Interdisciplinary" (this is a silly overstatement) papers within string theory are surely important, and new subdisciplines and intermediate disciplines within string theory may and do regularly emerge. It's really one of the goals of the research. Barry's proposals are proposals to create exactly the kind of stagnating environment that he claimed to have hated!

Also, the term "string theorist" is good enough to be useful for laymen or other scientists. They wouldn't really understand how a supergravitist differs from a D-branist.

Lisi's writing is bogus

Barry also mentions that Lisi's writing is incomplete or bogus according to (almost?) all string theorists. Good that it's being accepted that people agree about this simple point. But Barry hasn't ever read Lisi's paper. I find such a remark surprising. Not because Lisi's paper is so important that it must be read. It's not.

But the amount of attention that was dedicated to Lisi's stuff among the "physics fans" of some type has been so intense that I can hardly imagine that I would resist the temptation to look at what this stuff was all about. And of course, TRF reviewed that paper long before all the media. I expected that it was probably wrong – a naive text by someone who doesn't know many important graduate-school-level things about particle physics – as soon as I read the (ambitious) title or whatever I saw at the beginning.

On the other hand, I have always been attracted to papers that had a chance to be groundbreaking (whether they are explicitly presented as stringy papers or not). Each of them could be an important new destination, a station we need to visit before reaching the holy grail sometime in the future. The next string revolution could be ignited by someone who superficially looks like Lisi – or at least someone who likes to surf in Hawaii. ;-) I simply had to see Lisi's paper myself to be sure whether or not there was something promising in it; other people's testimonies wouldn't satisfy me. It seems to me that Barry never knew this excitement about possible game-changing discoveries. He probably saw just the asphalt everywhere, indeed, which is why it may have been wise for him to abandon the field.

But other young people thankfully haven't and many of them keep on seeing the green scenery, castles, butterflies, and rainbows. I wish them at least as much good luck as I wish to Barry.

by Luboš Motl (noreply@blogger.com) at June 11, 2015 10:12 PM

ZapperZ - Physics and Physicists

July Is The Least Popular Month For Physics
I did not know that!

The Buzz Blog at APS Physics Central has a very interesting statistic on the popularity of the word "physics" in Google searches, and it shows a prominent pattern: a large yearly dip in July!

July is the least popular month for physics, marking the bottom of a decline that starts in May. This is not really surprising given that schools in the Northern Hemisphere tend to finish in May or June, and that July is the most popular month for vacations for Americans. Physics is definitely an academic term and it makes sense that its popularity aligns with students and researchers working to the academic calendar. Other academic terms such as "literature", "economics", and "math" also have minimum online interest during July. "Surfing", on the other hand, has a peak interest in July.
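If you want to check the seasonal dip yourself, here is a minimal sketch in Python using the unofficial pytrends wrapper around Google Trends; the package choice, the keyword and the 5-year timeframe are my own illustrative assumptions and not anything stated in the Buzz Blog post:

    # pip install pytrends   (unofficial wrapper around Google Trends)
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(["physics"], timeframe="today 5-y")

    df = pytrends.interest_over_time()                  # weekly interest scores, 0-100
    monthly = df["physics"].groupby(df.index.month).mean()

    print(monthly.round(1))                             # average score per calendar month
    print("Least popular month:", monthly.idxmin())     # expected to come out as 7 (July)

If Google or the wrapper changes, the exact numbers will differ, but the seasonal shape of the curve should still be visible.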

This means that right now – the date this blog entry is posted – marks the beginning of the downtrend. I won't blame you guys if the number of hits and reads of this blog takes a strong dip starting now! :)

Zz.

by ZapperZ (noreply@blogger.com) at June 11, 2015 03:15 PM

Symmetrybreaking - Fermilab/SLAC

Q&A: New director-general of KEK

Masanori Yamauchi started his three-year term as head of Japan’s major center of particle physics research this spring.

Courtesy of: KEK

At a recent symposium about the proposed International Linear Collider, Symmetry chatted with Masanori Yamauchi, the new director-general of KEK, Japan’s high-energy accelerator research organization. Yamauchi, who received his PhD in physics at the University of Tokyo, has been at the laboratory for more than 30 years.

S: When did you first become interested in physics?

MY: A long time ago, as a high school student. I read a book on symmetry and asymmetry which impressed me a lot. At university, I chose to enter the physics department.

S: What was particle physics like when you were a student?

MY: When I was a grad student, I was staying at Lawrence Berkeley laboratory and doing experiments at SLAC laboratory. At the time, things were centralized in the US and Europe. Experiments in Japan were small. The nature of collaboration at the time was different.

S: How has it changed?

MY: It’s more international. KEK’s Belle experiment, which started in 1999, is truly an international collaboration. Almost half of its members are from abroad.

These days more than 20,000 scientists visit KEK every year from abroad to carry out an extensive research program at the accelerator facilities. This provides an extraordinary opportunity, especially to young scientists.

Now we’re hoping to construct the ILC in Japan. Everyone is getting together to design the ILC from scratch. Japan is not taking a strong lead; it’s an international collaboration.

S: What have been some of the highlights of your career?

MY: I was a spokesperson for the Belle experiment. We confirmed the theory of CP violation proposed by [theorists] Makoto Kobayashi and Toshihide Maskawa [who won the Nobel Prize in Physics in 2008].

In the course of measurements, we observed many interesting things, including CP violation [a violation in the symmetry between matter and antimatter] in B meson decays. This is still puzzling. We still don’t know how it happens. We need at least 10 times more data to find out. That’s why we started the upgrade of KEKB [KEK’s particle accelerator]. It’s called the SuperKEKB factory, and it includes the upgrade of the detector to Belle II.

S: What do you do in your free time?

MY: I used to swim a lot, two times a week. Since I became the director-general of KEK, I have no time to swim. That’s my pity.

S: What did you do to prepare to become director-general?

MY: I had many chances to talk to the former director-general.

I know what I should do. For a big lab like KEK, it’s extremely important to keep a good relationship with the Japanese people, including people in government and at funding agencies. We deeply recognize that their understanding and support are essential to our scientific research. I often talk to them.

Conversation as the representative of KEK is a lot different from dialog with physicists. I’m not used to it. I have to find appropriate words. Physicists are more likely to talk very frankly and fight.

S: What makes KEK unique?

MY: One thing is our diversity. We cover many fields of research.

In physics, besides confirming the Kobayashi-Maskawa theory, we discovered many exotic compound particles and confirmed the discovery of neutrino oscillation. In materials and life science, we determined the structure of novel superconductors and protein-drug complexes. We also studied novel properties induced by hydrogen atoms, spins and electrons in condensed matter.

We have two physics facilities, KEK and J-PARC. Between them we cover flavor physics, B and D meson decays, tau lepton decays, kaons, muons and neutrinos. We have a commitment to the ATLAS experiment [at the Large Hadron Collider].

S: What are your priorities for KEK?

MY: KEK's mission in the near future is to derive the best scientific outcomes from ongoing research programs, and to open a firm route to future programs.

The most important thing is the construction of the Super KEKB factory [an upgrade of the KEKB accelerator]. We expect to have the first beam early next year. It is extremely important for us to finish the beam. We are going to carry out a neutrino, muon and kaon program.

As I said, KEK does more than particle physics research. It also has nuclear physics and materials science and life science programs. We will promote them as well.

by Kathryn Jepsen at June 11, 2015 01:00 PM
