Particle Physics Planet


July 27, 2016

astrobites - astro-ph reader's digest

The Next Generation of Astronomers

Title: Evolution And Persistence of Students’ Astronomy Career Interests: A Gender Study
Authors: Zoey Bergstrom, Philip Sadler & Gerhard Sonnert
First Author’s Institution: Harvard University
Status: Published in the Journal of Astronomy & Earth Sciences Education

A book, a teacher or even a ride with Ms. Frizzle: each of our readers has something that ignited their passion for astronomy. Often, these catalysts are designed to throw us into the depths of curiosity at an early age, with the hopes that we’ll get hooked. Today’s Bite focuses on formally tying together children’s pastimes, their interest levels in Astronomy and what we can do to inspire others.

The study examines students’ interests in Astronomy through middle school and high school. (For our non-American audience, these are roughly between the ages of 11 and 18.) These stages form the beginning of what is known as the “leaky pipeline,” in which people who are initially excited about a career in STEM eventually leave the field. The question is: Where does this pipeline “begin” and how can we patch up early leaks?

To address these questions, the authors used the National Science Foundation’s OPSCI survey, which surveyed just under 16,000 students in introductory English courses from universities around the United States. The researchers asked students what they wanted to become at four points in their life: during middle school, at the beginning of high school, at the end of high school and at the beginning of college. For example, a student could check off that they were interested in becoming an “Astronomer” during middle school but not interested afterwards. Additionally, students were asked about any interests they had growing up, including “[taking] care of a trained animal” or “[watching] science fiction.”

One of the team’s most interesting results is a visualization of the early leaky pipeline, shown below. It seems that most STEM fields lose future scientists after middle school. In particular, Astronomy attracts many students, many of whom end up interested in other STEM fields. In this way, Astronomy is an important gateway for students interested in science!

Figure 1: An inflow & outflow diagram showing the evolving interests of all surveyed students. The black tube represents students who retain interest in astronomy, the red and green tubes represent students who flow between STEM fields (including Astronomy), and the blue tube represents students who flow between STEM fields and non-STEM fields.


The study connects interest level in Astronomy to the various activities students grew up with. It’s probably no surprise that being active in science programs, watching popular science shows and reading science fiction were all great indicators that a student would be interested in Astronomy in early high school, regardless of gender. Early participation in activities like math competitions also correlated with future scientific interests. Surprisingly, test scores, like the SAT, had no influence on students’ interest in astronomy. And remarkably, students who liked to take care of animals were significantly less likely to be interested in Astronomy!

The authors also considered the sexes independently, sharing important information about one of many underrepresented groups within astrophysics: women. Women were, across the board, less interested in astronomy at every stage of their schooling. However, they were more likely to be interested in astronomy if they had observed stars growing up! (This result was also true for men.) The authors emphasized that women are most excited about science when they can associate their personal identity with a STEM identity.

All of these interesting relationships are correlations — and they might not be causal. In fact, it would not be surprising if uncontrolled confounding variables (like socioeconomic status) were behind some of the uncovered trends. That being said, we can learn important lessons from this study: Share your love of Astronomy! The hard work that you put into reaching out to our community helps us grow and thrive by inspiring the next generation of scientists.

by Ashley Villar at July 27, 2016 01:01 PM

Matt Strassler - Of Particular Significance

The Summer View at CERN

For the first time in some years, I’m spending two and a half weeks at CERN (the lab that hosts the Large Hadron Collider [LHC]). Most of my recent visits have been short or virtual, but this time* there’s a theory workshop that has collected together a number of theoretical particle physicists, and it’s a good opportunity for all of us to catch up with the latest creative ideas in the subject.   It’s also an opportunity to catch a glimpse of the furtive immensity of Mont Blanc, a hulking bump on the southern horizon, although only if (as is rarely the case) nature offers clear and beautiful weather.

More importantly, new results on the data collected so far in 2016 at the LHC are coming very soon!  They will be presented at the ICHEP conference that will be held in Chicago starting August 3rd. And there’s something we’ll be watching closely.

You may remember that in a post last December I wrote:

  “Everybody wants to know. That bump seen on the ATLAS and CMS two-photon plots!  What… IS… it…?”

Why the excitement? A bump of this type can be a signal of a new particle (as was the case for the Higgs particle itself.) And since a new particle that would produce a bump of this size was both completely unexpected and completely plausible, there was hope that we were seeing a hint of something new and important.

However, as I wrote in the same post,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to discuss briefly what it might mean if it wasn’t just a statistical fluke. But speculation may be about to end: finally, we’re about to find out if it was indeed just a fluke — or a sign of something real.

Since December the amount of 13 TeV collision data available at ATLAS and CMS (the two general purpose experiments at the LHC) has roughly quadrupled, which means that typical bumps and wiggles on their 2015-2016 plots have decreased in relative size by about a factor of two (= square root of four). If the December bump is just randomness, it should also decrease in relative size. If it’s real, it should remain roughly the same relative size, but appear more prominent relative to the random bumps and wiggles around it.
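To see the arithmetic behind that claim, here is a minimal Python sketch with invented background and signal counts (not ATLAS or CMS numbers): with N times more data, a pure fluctuation’s relative size falls like 1/sqrt(N), a real bump keeps its relative size, and its naive significance grows like sqrt(N).

```python
import math

# Toy numbers of my own (not ATLAS/CMS values), just to show the scaling.
B0, S0 = 1000.0, 60.0          # hypothetical background and signal in the bump bin

for N in (1, 4):               # 2015 data vs. roughly quadrupled 2016 data
    B, S = N * B0, N * S0
    fluke_size = math.sqrt(B) / B      # typical relative size of a random wiggle
    real_size = S / B                  # relative size of a genuine bump
    significance = S / math.sqrt(B)    # naive significance in sigma
    print(f"x{N} data: fluke ~ {fluke_size:.3f}, real ~ {real_size:.3f}, "
          f"significance ~ {significance:.1f} sigma")
```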

Now, there’s a caution to be added here. The December ATLAS bump was so large and fat compared to what was seen at CMS that (since reality has to appear the same at both experiments, once enough data has been collected) it was pretty obvious that even if there were a real bump there, at ATLAS it was probably in combination with a statistical fluke that made it look larger and fatter than its true nature. [Something similar happened with the Higgs; the initial bump that ATLAS saw was twice as big as expected, which is why it showed up so early, but it has gradually shrunk as more data has been collected and it is now close to its expected size.  In retrospect, that tells us that ATLAS’s original signal was indeed combined with a statistical fluke that made it appear larger than it really was.] What that means is that even if the December bumps were real, we would expect the ATLAS bump to shrink in size (but not statistical significance) and we would expect the CMS bump to remain of similar size (but grow in statistical significance). Remember, though, that “expectation” is not certainty, because at every stage statistical flukes (up or down) are possible.

In about a week we’ll find out where things currently stand. But the mood, as I read it here in the hallways and cafeteria, is not one of excitement. Moreover, the fact that the update to the results is (at the moment) unobtrusively scheduled for a parallel session of the ICHEP conference next Friday afternoon, CERN time, suggests we’re not going to see convincing evidence of anything exciting. If so, then the remaining question will be whether the reverse is true: whether the data will show convincing evidence that the December bump was definitely a fluke.

Flukes are guaranteed; with limited amounts of data, they can’t be avoided.  Discoveries, on the other hand, require skill, insight, and luck: you must ask a good question, address it with the best available methods, and be fortunate enough that (as is rarely the case) nature offers a clear and interesting answer.

 

*I am grateful for the CERN theory group’s financial support during this visit.


Filed under: LHC News, Particle Physics Tagged: atlas, cms, LHC, photons

by Matt Strassler at July 27, 2016 12:32 PM

Christian P. Robert - xi'an's og

Bayes on the beach [and no bogus!]

Bayes on the Beach is a yearly conference taking place on the Gold Coast in Queensland, organised by Kerrie Mengersen and her BRAG research group at QUT. To quote from the email I just received, the conference will be held at the Mantra Legends Hotel in Surfers Paradise, Gold Coast, during November 7–9, 2016. The conference provides a forum for discussion on developments and applications of Bayesian statistics, and includes keynote presentations, tutorials, practical problem-based workshops, invited oral presentations, and poster presentations. Abstract submissions are now open until September 2.


Filed under: pictures, Statistics, Travel, University life Tagged: Australia, Bayes on the Beach, BRAG, Gold Coast, Kerrie Mengersen, Queensland, Queensland University of Technology, Surfers Paradise

by xi'an at July 27, 2016 12:18 PM

Peter Coles - In the Dark

Should we worry about the Hubble Constant?

One of the topics that came up in the discussion sessions at the meeting I was at over the weekend was the possible tension between cosmological parameters, especially relating to the determination of the Hubble constant (H0) by Planck and by “traditional” methods based on the cosmological distance ladder; see here for an overview of the latter. Coincidentally, I found this old preprint while tidying up my office yesterday:

[Image: Cosmo_params — scan of the old 1979 preprint]

Things have changed quite a bit since 1979! Before getting to the point I should explain that Planck does not determine H0 directly, as it is not one of the six numbers used to specify the minimal model used to fit the data. These parameters do include information about H0, however, so it is possible to extract a value from the data indirectly. In other words it is a derived parameter:

[Image: Planck_parameters — summary of Planck-derived cosmological parameters]

The above summary shows that values of the Hubble constant obtained in this way lie around the 67 to 68  km/s/Mpc mark, with small changes if other measures are included. According to the very latest Planck paper on cosmological parameter estimates the headline determination is H0 = (67.8 +/- 0.9) km/s/Mpc.

Note however that a recent “direct” determination of the Hubble constant by Riess et al. using Hubble Space Telescope data quotes a headline value of (73.24+/-1.74) km/s/Mpc. Had these two values been obtained in 1979 we wouldn’t have worried because the errors would have been much larger, but nowadays the measurements are much more precise and there does seem to be a hint of a discrepancy somewhere around the 3 sigma level depending on precisely which determination you use. On the other hand the history of Hubble constant determinations is one of results being quoted with very small “internal” errors that turned out to be much smaller than systematic uncertainties.
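As a quick sanity check on that “around the 3 sigma” figure, here is the naive arithmetic, treating the two quoted uncertainties as independent Gaussian errors (a simplification that ignores any correlated systematics):

```python
import math

# Naive tension estimate between the two headline determinations quoted above.
planck, planck_err = 67.8, 0.9      # km/s/Mpc, Planck (derived)
riess, riess_err = 73.24, 1.74      # km/s/Mpc, Riess et al. (distance ladder)

tension = abs(riess - planck) / math.hypot(planck_err, riess_err)
print(f"{tension:.1f} sigma")       # roughly 2.8 sigma
```

With these numbers the naive tension comes out near 2.8 sigma, consistent with the “around the 3 sigma level” statement above.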

I think it’s fair to say that there isn’t a consensus as to how seriously to take this apparent “tension”. I certainly can’t see anything wrong with the Riess et al. result, and the lead author is a Nobel prize-winner, but I’m also impressed by the stunning success of the minimal LCDM model at accounting for such a huge data set with a small set of free parameters. If one does take this tension seriously it can be resolved by adding an extra parameter to the model or by allowing one of the fixed properties of the LCDM model to vary to fit the data. Bayesian model selection analysis, however, tends to reject such models on the grounds of Ockham’s Razor. In other words, the price you pay for introducing an extra free parameter exceeds the benefit in improved goodness of fit. GAIA may shortly reveal whether or not there are problems with the local stellar distance scale, which could expose the source of any discrepancy. For the time being, however, I think it’s interesting but nothing to get too excited about. I’m not saying that I hope this tension will just go away. I think it will be very interesting if it turns out to be real. I just don’t think the evidence at the moment convinces me that there’s something beyond the standard cosmological model. I may well turn out to be wrong.

It’s quite interesting to think about how much we scientists tend to carry on despite the signs that things might be wrong. Take, for example, Newton’s Gravitational Constant, G. Measurements of this parameter are extremely difficult to do, and different experiments do seem to be in disagreement with each other. If Newtonian gravity turned out to be wrong that would indeed be extremely exciting, but I think it’s a wiser bet that there are uncontrolled experimental systematics. On the other hand there is a danger that we might ignore evidence that there’s something fundamentally wrong with our theory. It’s sometimes a difficult judgment how seriously to take experimental results.

Anyway, I don’t know what cosmologists think in general about this so there’s an excuse for a poll:

[Poll: http://polldaddy.com/poll/9483425]

by telescoper at July 27, 2016 10:41 AM

The n-Category Cafe

Topological Crystals (Part 2)

[Image: k4_crystal]

We’re building crystals, like diamonds, purely from topology. Last time I said how: you take a graph X and embed its maximal abelian cover into the vector space H_1(X,\mathbb{R}).

Now let me back up and say a bit more about the maximal abelian cover. It’s not nearly as famous as the universal cover, but it’s very nice.

The basic idea

By ‘space’ let me mean a connected topological space that’s locally nice. The basic idea is that if X is some space, its universal cover \widetilde{X} is a covering space of X that covers all other covering spaces of X. The maximal abelian cover \overline{X} has a similar universal property — but it’s abelian, and it covers all abelian connected covers. A cover is abelian if its group of deck transformations is abelian.

The cool part is that universal covers are to homotopy theory as maximal abelian covers are to homology theory.

What do I mean by that? For starters, points in \widetilde{X} are just homotopy classes of paths in X starting at some chosen basepoint. And the points in \overline{X} are just ‘homology classes’ of paths starting at the basepoint.

But people don’t talk so much about ‘homology classes’ of paths. So what do I mean by that? Here a bit of category theory comes in handy. Homotopy classes of paths in X are morphisms in the fundamental groupoid of X. Homology classes of paths are morphisms in the abelianized fundamental groupoid!

But wait a minute — what does that mean? Well, we can abelianize any groupoid by imposing the relations

f g = g f

whenever it makes sense to do so. It makes sense to do so when you can compose the morphisms f : x \to y and g : x' \to y' in either order, and the resulting morphisms f g and g f have the same source and the same target. And if you work out what that means, you’ll see it means x = y = x' = y'.

But now let me say it all much more slowly, for people who want a more relaxed treatment. After all, this is a nice little bit of topology that could be in an elementary course!

The details

There are lots of slightly different things called ‘graphs’ in mathematics, but in topological crystallography it’s convenient to work with one that you’ve probably never seen before. This kind of graph has two copies of each edge, one pointing in each direction.

So, we’ll say a graph X = (E,V,s,t,i) has a set V of vertices, a set E of edges, maps s,t : E \to V assigning to each edge its source and target, and a map i : E \to E sending each edge to its inverse, obeying

s(i(e)) = t(e), \quad t(i(e)) = s(e), \qquad i(i(e)) = e

and

i(e) \ne e

for all e \in E.

That inequality at the end will make category theorists gag: definitions should say what’s true, not what’s not true. But category theorists should be able to see what’s really going on here, so I leave that as a puzzle.

For ordinary folks, let me repeat the definition using more words. If s(e) = v and t(e) = w we write e : v \to w, and draw e as an interval with an arrow on it pointing from v to w. We write i(e) as e^{-1}, and draw e^{-1} as the same interval as e, but with its arrow reversed. The equations obeyed by i say that taking the inverse of e : v \to w gives an edge e^{-1} : w \to v and that (e^{-1})^{-1} = e. No edge can be its own inverse.
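For readers who like to see a definition as a data structure, here is a minimal Python sketch (the class and naming are my own, not from any crystallography software) of a graph whose edges come in inverse pairs obeying the equations above:

```python
from dataclasses import dataclass, field

@dataclass
class Graph:
    vertices: set = field(default_factory=set)
    source: dict = field(default_factory=dict)   # s : E -> V
    target: dict = field(default_factory=dict)   # t : E -> V
    inverse: dict = field(default_factory=dict)  # i : E -> E

    def add_edge(self, name, v, w):
        """Add an edge e : v -> w together with its inverse e^{-1} : w -> v."""
        inv = name + "^-1"
        self.vertices |= {v, w}
        self.source.update({name: v, inv: w})
        self.target.update({name: w, inv: v})
        self.inverse.update({name: inv, inv: name})
        return name, inv

# Example: one edge pair between two vertices, checking the stated equations.
X = Graph()
X.add_edge("e", "v", "w")
assert X.source[X.inverse["e"]] == X.target["e"]   # s(i(e)) = t(e)
assert X.inverse[X.inverse["e"]] == "e"            # i(i(e)) = e
assert X.inverse["e"] != "e"                       # i(e) != e
```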

A map of graphs, say f : X \to X', is a pair of functions, one sending vertices to vertices and one sending edges to edges, that preserve the source, target and inverse maps. By abuse of notation we call both of these functions f.

I started out talking about topology; now I’m treating graphs very combinatorially, but we can bring the topology back in.

From a graph X we can build a topological space |X| called its geometric realization. We do this by taking one point for each vertex and gluing on one copy of [0,1] for each edge e : v \to w, gluing the point 0 to v and the point 1 to w, and then identifying the interval for each edge e with the interval for its inverse by means of the map t \mapsto 1 - t. Any map of graphs gives rise to a continuous map between their geometric realizations, and we say a map of graphs is a cover if this continuous map is a covering map. For simplicity we denote the fundamental group of |X| by \pi_1(X), and similarly for other topological invariants of |X|. However, sometimes I’ll need to distinguish between a graph X and its geometric realization |X|.

Any connected graph X has a universal cover, meaning a connected cover

p : \widetilde{X} \to X

that covers every other connected cover. The geometric realization of \widetilde{X} is connected and simply connected. The fundamental group \pi_1(X) acts as deck transformations of \widetilde{X}, meaning invertible maps g : \widetilde{X} \to \widetilde{X} such that p \circ g = p. We can take the quotient of \widetilde{X} by the action of any subgroup G \subseteq \pi_1(X) and get a cover q : \widetilde{X}/G \to X.

In particular, if we take G to be the commutator subgroup of \pi_1(X), we call the graph \widetilde{X}/G the maximal abelian cover of the graph X, and denote it by \overline{X}. We obtain a cover

q : \overline{X} \to X

whose group of deck transformations is the abelianization of \pi_1(X). This is just the first homology group H_1(X,\mathbb{Z}). In particular, if the space corresponding to X has n holes, this is a free abelian group on n generators.

I want a concrete description of the maximal abelian cover! I’ll build it starting with the universal cover, but first we need some preliminaries on paths in graphs.

Given vertices x,y in X, define a path from x to y to be a word of edges \gamma = e_1 \cdots e_\ell with e_i : v_{i-1} \to v_i for some vertices v_0, \dots, v_\ell with v_0 = x and v_\ell = y. We allow the word to be empty if and only if x = y; this gives the trivial path from x to itself. Given a path \gamma from x to y we write \gamma : x \to y, and we write the trivial path from x to itself as 1_x : x \to x. We define the composite of paths \gamma : x \to y and \delta : y \to z via concatenation of words, obtaining a path we call \gamma \delta : x \to z. We call a path from a vertex x to itself a loop based at x.

We say two paths from x to y are homotopic if one can be obtained from the other by repeatedly introducing or deleting subwords of the form e_i e_{i+1} where e_{i+1} = e_i^{-1}. If [\gamma] is a homotopy class of paths from x to y, we write [\gamma] : x \to y. We can compose homotopy classes [\gamma] : x \to y and [\delta] : y \to z by setting [\gamma] [\delta] = [\gamma \delta].
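Here is a small Python sketch of this combinatorial notion of homotopy (the string convention "e^-1" for inverse edges is mine): freely reducing a word of edges by cancelling adjacent pairs e e^{-1} yields a unique reduced word, and two paths between the same vertices are homotopic exactly when they reduce to the same word.

```python
def inverse(edge):
    """Name of the inverse edge, using the "e^-1" naming convention."""
    return edge[:-3] if edge.endswith("^-1") else edge + "^-1"

def reduce_path(path):
    """Cancel adjacent subwords e e^{-1} until none remain (the reduced word)."""
    stack = []
    for edge in path:
        if stack and stack[-1] == inverse(edge):
            stack.pop()          # delete the cancelling pair
        else:
            stack.append(edge)
    return stack

# Two homotopic paths reduce to the same word:
print(reduce_path(["a", "b", "b^-1", "c"]))   # ['a', 'c']
print(reduce_path(["a", "c"]))                # ['a', 'c']
```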

If X is a connected graph, we can describe the universal cover \widetilde{X} as follows. Fix a vertex x_0 of X, which we call the basepoint. The vertices of \widetilde{X} are defined to be the homotopy classes of paths [\gamma] : x_0 \to x where x is arbitrary. The edges in \widetilde{X} from the vertex [\gamma] : x_0 \to x to the vertex [\delta] : x_0 \to y are defined to be the edges e \in E with [\gamma e] = [\delta]. In fact, there is always at most one such edge. There is an obvious map of graphs

p : \widetilde{X} \to X

sending each vertex [\gamma] : x_0 \to x of \widetilde{X} to the vertex x of X. This map is a cover.

Now we are ready to construct the maximal abelian cover \overline{X}. For this, we impose a further equivalence relation on paths, which is designed to make composition commutative whenever possible. However, we need to be careful. If \gamma : x \to y and \delta : x' \to y', the composites \gamma \delta and \delta \gamma are both well-defined if and only if x' = y and y' = x. In this case, \gamma \delta and \delta \gamma share the same starting point and share the same ending point if and only if x = x' and y = y'. If all four of these equations hold, both \gamma and \delta are loops based at x. So, we shall impose the relation \gamma \delta = \delta \gamma only in this case.

We say two paths are homologous if one can be obtained from another by:

  • repeatedly introducing or deleting subwords e_i e_{i+1} where e_{i+1} = e_i^{-1}, and/or

  • repeatedly replacing subwords of the form e_i \cdots e_j e_{j+1} \cdots e_k by those of the form e_{j+1} \cdots e_k e_i \cdots e_j, where e_i \cdots e_j and e_{j+1} \cdots e_k are loops based at the same vertex.

My use of the term ‘homologous’ is a bit nonstandard here!

We denote the homology class of a path \gamma by [[\gamma]]. Note that if two paths \gamma : x \to y and \delta : x' \to y' are homologous then x = x' and y = y'. Thus, the starting and ending points of a homology class of paths are well-defined, and given any path \gamma : x \to y we write [[\gamma]] : x \to y. The composite of homology classes is also well-defined if we set [[\gamma]] [[\delta]] = [[\gamma \delta]].

We construct the maximal abelian cover of a connected graph <semantics>X<annotation encoding="application/x-tex">X</annotation></semantics> just as we constructed its universal cover, but using homology classes rather than homotopy classes of paths. And now I’ll introduce some jargon that should make you start thinking about crystals!

Fix a basepoint x_0 for X. The vertices of \overline{X}, or atoms, are defined to be the homology classes of paths [[\gamma]] : x_0 \to x where x is arbitrary. Any edge of \overline{X}, or bond, goes from some atom [[\gamma]] : x_0 \to x to some atom [[\delta]] : x_0 \to y. The bonds from [[\gamma]] to [[\delta]] are defined to be the edges e \in E with [[\gamma e]] = [[\delta]]. There is at most one bond between any two atoms. Again we have a covering map

q : \overline{X} \to X

The homotopy classes of loops based at x_0 form a group, with composition as the group operation. This is the fundamental group \pi_1(X) of the graph X. (It depends on the basepoint x_0, but I’ll leave that out of the notation just to scandalize my colleagues. It’s so easy to live dangerously when you’re an academic!)

Now, this fundamental group is isomorphic to the usual fundamental group of the space associated to X. By our construction of the universal cover, \pi_1(X) is also the set of vertices of \widetilde{X} that are mapped to x_0 by p. Furthermore, any element [\gamma] \in \pi_1(X) defines a deck transformation of \widetilde{X} that sends each vertex [\delta] : x_0 \to x to the vertex [\gamma] [\delta] : x_0 \to x.

Similarly, the homology classes of loops based at x_0 form a group with composition as the group operation. Since the additional relation used to define homology classes is precisely that needed to make composition of homology classes of loops commutative, this group is the abelianization of \pi_1(X). It is therefore isomorphic to the first homology group H_1(X,\mathbb{Z}) of the geometric realization of X!

By our construction of the maximal abelian cover, H_1(X,\mathbb{Z}) is also the set of vertices of \overline{X} that are mapped to x_0 by q. Furthermore, any element [[\gamma]] \in H_1(X,\mathbb{Z}) defines a deck transformation of \overline{X} that sends each vertex [[\delta]] : x_0 \to x to the vertex [[\gamma]] [[\delta]] : x_0 \to x.

So, it all works out! The fundamental group \pi_1(X) acts as deck transformations of the universal cover, while the first homology group H_1(X,\mathbb{Z}) acts as deck transformations of the maximal abelian cover!

Puzzle for experts: what does this remind you of in Galois theory?

We’ll get back to crystals next time.

by john (baez@math.ucr.edu) at July 27, 2016 10:25 AM

John Baez - Azimuth

Topological Crystals (Part 2)

We’re building crystals, like diamonds, purely from topology. Last time I said how: you take a graph X and embed its maximal abelian cover into the vector space H_1(X,\mathbb{R}). Now let me say a bit more about the maximal abelian cover. It’s not nearly as famous as the universal cover, but it’s very nice.

First I’ll whiz though the basic idea, and then I’ll give the details.

The basic idea

By ‘space’ let me mean a connected topological space that’s locally nice. The basic idea is that if X is some space, its universal cover \widetilde{X} is a covering space of X that covers all other covering spaces of X. The maximal abelian cover \overline{X} has a similar universal property—but it’s abelian, and it covers all abelian connected covers. A cover is abelian if its group of deck transformations is abelian.

The cool part is that universal covers are to homotopy theory as maximal abelian covers are to homology theory.

What do I mean by that? For starters, points in \widetilde{X} are just homotopy classes of paths in X starting at some chosen basepoint. And the points in \overline{X} are just ‘homology classes’ of paths starting at the basepoint.

But people don’t talk so much about ‘homology classes’ of paths. So what do I mean by that? Here a bit of category theory comes in handy. Homotopy classes of paths in X are morphisms in the fundamental groupoid of X. Homology classes of paths are morphisms in the abelianized version of the fundamental groupoid!

But wait a minute — what does that mean? Well, we can abelianize any groupoid by imposing the relations

f g = g f

whenever it makes sense to do so. It makes sense to do so when you can compose the morphisms f : x \to y and g : x' \to y' in either order, and the resulting morphisms f g and g f have the same source and the same target. And if you work out what that means, you’ll see it means

x = y = x' = y'

But now let me say it all much more slowly, for people who want a more relaxed treatment.

The details

There are lots of slightly different things called ‘graphs’ in mathematics, but in topological crystallography it’s convenient to work with one that you’ve probably never seen before. This kind of graph has two copies of each edge, one pointing in each direction.

So, we’ll say a graph X = (E,V,s,t,i) has a set V of vertices, a set E of edges, maps s,t : E \to V assigning to each edge its source and target, and a map i : E \to E sending each edge to its inverse, obeying

s(i(e)) = t(e), \quad t(i(e)) = s(e) , \qquad i(i(e)) = e

and

i(e) \ne e

for all e \in E.

That inequality at the end will make category theorists gag: definitions should say what’s true, not what’s not true. But category theorists should be able to see what’s really going on here, so I leave that as a puzzle.

For ordinary folks, let me repeat the definition using more words. If s(e) = v and t(e) = w we write e : v \to w, and draw e as an interval with an arrow on it pointing from v to w. We write i(e) as e^{-1}, and draw e^{-1} as the same interval as e, but with its arrow reversed. The equations obeyed by i say that taking the inverse of e : v \to w gives an edge e^{-1} : w \to v and that (e^{-1})^{-1} = e. No edge can be its own inverse.

A map of graphs, say f : X \to X', is a pair of functions, one sending vertices to vertices and one sending edges to edges, that preserve the source, target and inverse maps. By abuse of notation we call both of these functions f.

I started out talking about topology; now I’m treating graphs very combinatorially, but we can bring the topology back in. From a graph X we can build a topological space |X| called its geometric realization. We do this by taking one point for each vertex and gluing on one copy of [0,1] for each edge e : v \to w, gluing the point 0 to v and the point 1 to w, and then identifying the interval for each edge e with the interval for its inverse by means of the map t \mapsto 1 - t.

Any map of graphs gives rise to a continuous map between their geometric realizations, and we say a map of graphs is a cover if this continuous map is a covering map. For simplicity we denote the fundamental group of |X| by \pi_1(X), and similarly for other topological invariants of |X|. However, sometimes I’ll need to distinguish between a graph X and its geometric realization |X|.

Any connected graph X has a universal cover, meaning a connected cover

p : \widetilde{X} \to X

that covers every other connected cover. The geometric realization of \widetilde{X} is connected and simply connected. The fundamental group \pi_1(X) acts as deck transformations of \widetilde{X}, meaning invertible maps g : \widetilde{X} \to \widetilde{X} such that p \circ g = p. We can take the quotient of \widetilde{X} by the action of any subgroup G \subseteq \pi_1(X) and get a cover q : \widetilde{X}/G \to X.

In particular, if we take G to be the commutator subgroup of \pi_1(X), we call the graph \widetilde{X}/G the maximal abelian cover of the graph X, and denote it by \overline{X}. We obtain a cover

q : \overline{X} \to X

whose group of deck transformations is the abelianization of \pi_1(X). This is just the first homology group H_1(X,\mathbb{Z}). In particular, if the space corresponding to X has n holes, this is the free abelian group on n generators.

I want a concrete description of the maximal abelian cover! I’ll build it starting with the universal cover, but first we need some preliminaries on paths in graphs.

Given vertices x,y in X, define a path from x to y to be a word of edges \gamma = e_1 \cdots e_\ell with e_i : v_{i-1} \to v_i for some vertices v_0, \dots, v_\ell with v_0 = x and v_\ell = y. We allow the word to be empty if and only if x = y; this gives the trivial path from x to itself.

Given a path \gamma from x to y we write \gamma : x \to y, and we write the trivial path from x to itself as 1_x : x \to x. We define the composite of paths \gamma : x \to y and \delta : y \to z via concatenation of words, obtaining a path we call \gamma \delta : x \to z. We call a path from a vertex x to itself a loop based at x.

We say two paths from x to y are homotopic if one can be obtained from the other by repeatedly introducing or deleting subwords of the form e_i e_{i+1} where e_{i+1} = e_i^{-1}. If [\gamma] is a homotopy class of paths from x to y, we write [\gamma] : x \to y. We can compose homotopy classes [\gamma] : x \to y and [\delta] : y \to z by setting [\gamma] [\delta] = [\gamma \delta].

If X is a connected graph, we can describe the universal cover \widetilde{X} as follows. Fix a vertex x_0 of X, which we call the basepoint. The vertices of \widetilde{X} are defined to be the homotopy classes of paths [\gamma] : x_0 \to x where x is arbitrary. The edges in \widetilde{X} from the vertex [\gamma] : x_0 \to x to the vertex [\delta] : x_0 \to y are defined to be the edges e \in E with [\gamma e] = [\delta]. In fact, there is always at most one such edge. There is an obvious map of graphs

p : \widetilde{X} \to X

sending each vertex [\gamma] : x_0 \to x of \widetilde{X} to the vertex x of X. This map is a cover.

Now we are ready to construct the maximal abelian cover \overline{X}. For this, we impose a further equivalence relation on paths, which is designed to make composition commutative whenever possible. However, we need to be careful. If \gamma : x \to y and \delta : x' \to y' , the composites \gamma \delta and \delta \gamma are both well-defined if and only if x' = y and y' = x. In this case, \gamma \delta and \delta \gamma share the same starting point and share the same ending point if and only if x = x' and y = y'. If all four of these equations hold, both \gamma and \delta are loops based at x. So, we shall impose the relation \gamma \delta = \delta \gamma only in this case.

We say two paths are homologous if one can be obtained from another by:

• repeatedly introducing or deleting subwords e_i e_{i+1} where e_{i+1} = e_i^{-1}, and/or

• repeatedly replacing subwords of the form

e_i \cdots e_j e_{j+1} \cdots e_k

by those of the form

e_{j+1} \cdots e_k e_i \cdots e_j

where e_i \cdots e_j and e_{j+1} \cdots e_k are loops based at the same vertex.

My use of the term ‘homologous’ is a bit nonstandard here!

We denote the homology class of a path \gamma by [[ \gamma ]]. Note that if two paths \gamma : x \to y, \delta : x' \to y' are homologous then x = x' and y = y'. Thus, the starting and ending points of a homology class of paths are well-defined, and given any path \gamma : x \to y we write [[ \gamma ]] : x \to y . The composite of homology classes is also well-defined if we set

[[ \gamma ]] [[ \delta ]] = [[ \gamma \delta ]]
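To make the homology relation concrete, here is a tiny Python sketch (edge names invented, following the 1-chain picture from Part 1 of this series): a path determines a vector of net edge counts, with a traversal of e^{-1} counting as -1 for e, and both moves above leave that vector unchanged. For example, if a b and c d are loops based at the same vertex, composing them in either order gives the same counts:

```python
from collections import Counter

def one_chain(path):
    """Net count of each edge along the path; traversing e^{-1} counts -1 for e."""
    chain = Counter()
    for edge in path:
        if edge.endswith("^-1"):
            chain[edge[:-3]] -= 1
        else:
            chain[edge] += 1
    return {e: n for e, n in chain.items() if n != 0}

# gamma = a b and delta = c d, two loops based at the same vertex:
print(one_chain(["a", "b", "c", "d"]))   # {'a': 1, 'b': 1, 'c': 1, 'd': 1}
print(one_chain(["c", "d", "a", "b"]))   # the same chain: gamma delta ~ delta gamma
```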

We construct the maximal abelian cover of a connected graph X just as we constructed its universal cover, but using homology classes rather than homotopy classes of paths. And now I’ll introduce some jargon that should make you start thinking about crystals!

Fix a basepoint x_0 for X. The vertices of \overline{X}, or atoms, are defined to be the homology classes of paths [[\gamma]] : x_0 \to x where x is arbitrary. Any edge of \overline{X}, or bond, goes from some atom [[ \gamma]] : x_0 \to x to some atom [[ \delta ]] : x_0 \to y. The bonds from [[ \gamma]] to [[ \delta ]] are defined to be the edges e \in E with [[ \gamma e ]] = [[ \delta ]]. There is at most one bond between any two atoms. Again we have a covering map

q : \overline{X} \to X .

The homotopy classes of loops based at x_0 form a group, with composition as the group operation. This is the fundamental group \pi_1(X) of the graph X. It is isomorphic to the usual fundamental group of the space associated to X. By our construction of the universal cover, \pi_1(X) is also the set of vertices of \widetilde{X} that are mapped to x_0 by p. Furthermore, any element [\gamma] \in \pi_1(X) defines a deck transformation of \widetilde{X} that sends each vertex [\delta] : x_0 \to x to the vertex [\gamma] [\delta] : x_0 \to x.

Similarly, the homology classes of loops based at x_0 form a group with composition as the group operation. Since the additional relation used to define homology classes is precisely that needed to make composition of homology classes of loops commutative, this group is the abelianization of \pi_1(X). It is therefore isomorphic to the first homology group H_1(X,\mathbb{Z}) of the geometric realization of X.

By our construction of the maximal abelian cover, H_1(X,\mathbb{Z}) is also the set of vertices of \overline{X} that are mapped to x_0 by q. Furthermore, any element [[\gamma]] \in H_1(X,\mathbb{Z}) defines a deck transformation of \overline{X} that sends each vertex [[\delta]] : x_0 \to x to the vertex [[\gamma]] [[\delta]] : x_0 \to x.

So, it all works out! The fundamental group \pi_1(X) acts as deck transformations of the universal cover, while the first homology group H_1(X,\mathbb{Z}) acts as deck transformations of the maximal abelian cover.

Puzzle for experts: what does this remind you of in Galois theory?

We’ll get back to crystals next time.


by John Baez at July 27, 2016 10:14 AM

The n-Category Cafe

Topological Crystals (Part 1)

[Image: k4_crystal]

Over on Azimuth I posted an article about crystals:

In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

I got so interested that I wrote this paper about it, with massive help from Greg Egan:

I’ll explain the basic ideas in a series of posts here.

First, a few personal words.

I feel a bit guilty putting so much work into this paper when I should be developing network theory to the point where it does our planet some good. I seem to need a certain amount of beautiful pure math to stay sane. But this project did at least teach me a lot about the topology of graphs.

For those not in the know, applying homology theory to graphs might sound fancy and interesting. For people who have studied a reasonable amount of topology, it probably sounds easy and boring. The first homology of a graph of genus g is a free abelian group on g generators: it’s a complete invariant of connected graphs up to homotopy equivalence. Case closed!

But there’s actually more to it, because studying graphs up to homotopy equivalence kills most of the fun. When we’re studying networks in real life we need a more refined outlook on graphs. So some aspects of this project might pay off, someday, in ways that have nothing to do with crystallography. But right now I’ll just talk about it as a fun self-contained set of puzzles.

I’ll start by quickly sketching how to construct topological crystals, and illustrate it with the example of graphene, a 2-dimensional form of carbon:

I’ll precisely state our biggest result, which says when the construction gives a crystal where the atoms don’t bump into each other and the bonds between atoms don’t cross each other. Later I may come back and add detail, but for now you can find details in our paper.

Constructing topological crystals

The ‘maximal abelian cover’ of a graph plays a key role in Sunada’s work on topological crystallography. Just as the universal cover of a connected graph X has the fundamental group \pi_1(X) as its group of deck transformations, the maximal abelian cover, denoted \overline{X}, has the abelianization of \pi_1(X) as its group of deck transformations. It thus covers every other connected cover of X whose group of deck transformations is abelian. Since the abelianization of \pi_1(X) is the first homology group H_1(X,\mathbb{Z}), there is a close connection between the maximal abelian cover and homology theory.

In our paper, Greg and I prove that for a large class of graphs, the maximal abelian cover can naturally be embedded in the vector space H_1(X,\mathbb{R}). We call this embedded copy of \overline{X} a ‘topological crystal’. The symmetries of the original graph can be lifted to symmetries of its topological crystal, but the topological crystal also has an n-dimensional lattice of translational symmetries. In 2- and 3-dimensional examples, the topological crystal can serve as the blueprint for an actual crystal, with atoms at the vertices and bonds along the edges.

The general construction of topological crystals was developed by Kotani and Sunada, and later by Eon. Sunada uses ‘topological crystal’ for an even more general concept, but we only need a special case.

Here’s how it works. We start with a graph \(X\). This has a space \(C_0(X,\mathbb{R})\) of 0-chains, which are formal linear combinations of vertices, and a space \(C_1(X,\mathbb{R})\) of 1-chains, which are formal linear combinations of edges. There is a boundary operator

\[ \partial \colon C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R}) \]

This is the linear operator sending any edge to the difference of its two endpoints. The kernel of this operator is called the space of 1-cycles, \(Z_1(X,\mathbb{R})\). There is an inner product on the space of 1-chains such that edges form an orthonormal basis. This determines an orthogonal projection

\[ \pi \colon C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R}) \]

For a graph, \(Z_1(X,\mathbb{R})\) is isomorphic to the first homology group \(H_1(X,\mathbb{R})\). So, to obtain the topological crystal of \(X\), we need only embed its maximal abelian cover \(\overline{X}\) in \(Z_1(X,\mathbb{R})\). We do this by embedding \(\overline{X}\) in \(C_1(X,\mathbb{R})\) and then projecting it down via \(\pi\).
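To make the linear algebra concrete, here is a minimal numpy sketch (an illustration only, not code from our paper) that builds the boundary operator of the graph used in the example below (two vertices joined by three edges) and the orthogonal projection onto its kernel, the space of 1-cycles:

```python
import numpy as np

# A small graph X: two vertices u = 0, v = 1 joined by three parallel edges,
# each oriented from u to v.  (This is the graph used in the example below.)
vertices = [0, 1]
edges = [(0, 1), (0, 1), (0, 1)]

# Boundary operator from C_1(X,R) to C_0(X,R): each edge is sent to the
# difference of its two endpoints.
d = np.zeros((len(vertices), len(edges)))
for j, (s, t) in enumerate(edges):
    d[s, j] -= 1.0
    d[t, j] += 1.0

# Z_1(X,R) is the kernel of the boundary operator; an orthonormal basis of
# the kernel can be read off from the SVD.
_, sing, Vt = np.linalg.svd(d)
rank = int(np.sum(sing > 1e-12))
cycle_basis = Vt[rank:]                  # rows form an orthonormal basis of Z_1

# Orthogonal projection pi from C_1(X,R) onto Z_1(X,R), in the inner product
# where the edges are an orthonormal basis.
pi = cycle_basis.T @ cycle_basis

print("dim C_1 =", len(edges))               # 3
print("dim Z_1 =", cycle_basis.shape[0])     # 2, the number of holes
print(np.round(pi, 3))
```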

To accomplish this embedding, we need to fix a basepoint for \(X\). Each path \(\gamma\) in \(X\) starting at this basepoint determines a 1-chain \(c_\gamma\). These 1-chains correspond to the vertices of \(\overline{X}\). The graph \(\overline{X}\) has an edge from \(c_\gamma\) to \(c_{\gamma'}\) whenever the path \(\gamma'\) is obtained by adding an extra edge to \(\gamma\). This edge is a straight line segment from the point \(c_\gamma\) to the point \(c_{\gamma'}\).

The hard part is checking that the projection \(\pi\) maps this copy of \(\overline{X}\) into \(Z_1(X,\mathbb{R})\) in a one-to-one manner. In Theorems 6 and 7 of our paper we prove that this happens precisely when the graph \(X\) has no ‘bridges’: that is, edges whose removal would disconnect \(X\).

Kotani and Sunada noted that this condition is necessary. That’s actually pretty easy to see. The challenge was to show that it’s sufficient! For this, our main technical tool is Lemma 5, which for any path \(\gamma\) decomposes the 1-chain \(c_\gamma\) into manageable pieces.

We call the resulting copy of \(\overline{X}\) embedded in \(Z_1(X,\mathbb{R})\) a topological crystal.

Let’s see how it works in an example!

Take \(X\) to be this graph:

Since \(X\) has 3 edges, the space of 1-chains is 3-dimensional. Since \(X\) has 2 holes, the space of 1-cycles is a 2-dimensional plane in this 3-dimensional space. If we take paths \(\gamma\) in \(X\) starting at the red vertex, form the 1-chains \(c_\gamma\), and project them down to this plane, we obtain the following picture:

Here the 1-chains \(c_\gamma\) are the white and red dots. These are the vertices of \(\overline{X}\), while the line segments between them are the edges of \(\overline{X}\). Projecting these vertices and edges onto the plane of 1-cycles, we obtain the topological crystal for \(X\). The blue dots come from projecting the white dots onto the plane of 1-cycles, while the red dots already lie on this plane. The resulting topological crystal provides the pattern for graphene:
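Here is a short, self-contained sketch (again just an illustration, with an arbitrary choice of 2D coordinates on the cycle plane) that generates this picture: it enumerates the 1-chains \(c_\gamma\) of paths starting at the red vertex, projects them onto the plane of 1-cycles, and records the edges between them; plotting the result gives a patch of the honeycomb.

```python
import numpy as np

# The graph X: two vertices u, v joined by edges e1, e2, e3, each oriented
# from u to v.  C_1(X,R) = R^3 with the edges as an orthonormal basis, and
# the plane of 1-cycles is {x : x1 + x2 + x3 = 0}.
E = np.eye(3)

def project(x):
    """Orthogonal projection onto the plane of 1-cycles."""
    return x - x.sum() / 3.0

# A path starting at u is recorded by its 1-chain: an edge traversed from u
# to v contributes +e_i, and from v to u contributes -e_i.  The entries of
# the chain sum to 0 at u and to 1 at v, so the chain itself tells us which
# moves extend the path by one edge.
def extensions(chain):
    sign = 1 if round(chain.sum()) == 0 else -1
    return [chain + sign * E[i] for i in range(3)]

# Enumerate the distinct chains (= vertices of the maximal abelian cover)
# out to a few steps, together with the edges between them.
start = np.zeros(3)
seen = {(0, 0, 0)}
frontier = [start]
cover_edges = []
for _ in range(6):                        # how far out to grow the picture
    new_frontier = []
    for c in frontier:
        for n in extensions(c):
            cover_edges.append((project(c), project(n)))   # duplicates are harmless
            key = tuple(int(v) for v in np.rint(n))
            if key not in seen:
                seen.add(key)
                new_frontier.append(n)
    frontier = new_frontier

# Draw the projected pattern using an orthonormal 2D basis of the cycle plane.
import matplotlib.pyplot as plt
B = np.array([[1.0, -1.0, 0.0], [1.0, 1.0, -2.0]])
B /= np.linalg.norm(B, axis=1, keepdims=True)
for p, q in cover_edges:
    plt.plot(*zip(B @ p, B @ q), 'k-', lw=0.8)
plt.gca().set_aspect('equal')
plt.show()                                # a patch of the graphene honeycomb
```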

That’s all there is to the basic idea! But there’s a lot more to say about the mathematics it leads to, and a lot of fun examples to look at: diamonds, triamonds, hyperquartz and more.

by john (baez@math.ucr.edu) at July 27, 2016 02:15 AM

July 26, 2016

Clifford V. Johnson - Asymptotia

But Does That Really Happen…?


Sorry I've been quiet for a long stretch recently. I've been tied up with travel, physics research, numerous meetings of various sorts (from the standard bean-counting variety to the "here's three awesome science-y things to put into your movie/TVshow" variety*), and other things, like helping my garden survive this heatwave.

I've lost some time on the book, but I'm back on it for a while, and have [...] Click to continue reading this post

The post But Does That Really Happen…? appeared first on Asymptotia.

by Clifford at July 26, 2016 11:12 PM

Christian P. Robert - xi'an's og

another bogus conference [AISTATS copycat]


Aki Vehtari spotted a bogus conference on human computer interaction and artificial intelligence that had copied the entire scientific committee of AISTATS 2016! The copy of the committee has now disappeared, but the list of topics is very similar to AISTATS 2016. (And Arthur Gretton is still the contact on this other site.) The conference was advertised as run by the Manchester International College, but this is presumably yet another usurpation of names… For instance, the conference is supposed to take place at a local hotel rather than in the College. And the reference has now disappeared. Almost simultaneously, we also received a request to edit the proceedings of this “conference” in Computers, which is a (free?) open access journal I know nothing about. (Except that it is listed as predatory by Jeffrey Beall.)

While it is of course very easy to set up a webpage and a registration site for a bogus conference, it is sad that no action can be taken against such fraudsters!


Filed under: University life Tagged: AISTATS 2016, Computers, Manchester International College, open access, scam conference

by xi'an at July 26, 2016 10:16 PM

Georg von Hippel - Life on the lattice

Lattice 2016, Day Two
Hello again from Lattice 2016 at Southampton. Today's first plenary talk was the review of nuclear physics from the lattice given by Martin Savage. Doing nuclear physics from first principles in QCD is obviously very hard, but also necessary in order to truly understand nuclei in theoretical terms. Examples of needed theory predictions include the equation of state of dense nuclear matter, which is important for understanding neutron stars, and the nuclear matrix elements required to interpret future searches for neutrinoless double β decays in terms of fundamental quantities. The problems include the huge number of required quark-line contractions and the exponentially decaying signal-to-noise ratio, but there are theoretical advances that increasingly allow these to be brought under control. The main competing procedures are more or less direct applications of the Lüscher method to multi-baryon systems, and the HALQCD method of computing a nuclear potential from Bethe-Salpeter amplitudes and solving the Schrödinger equation for that potential. There has been a lot of progress in this field, and there are now first results for nuclear reaction rates.

Next, Mike Endres spoke about new simulation strategies for lattice QCD. One of the major problems in going to very fine lattice spacings is the well-known phenomenon of critical slowing-down, i.e. the divergence of the autocorrelation times with some negative power of the lattice spacing, which is particularly severe for the topological charge (a quantity that cannot change at all in the continuum limit), leading to the phenomenon of "topology freezing" in simulations at fine lattice spacings. To overcome this problem, changes in the boundary conditions have been proposed: open boundary conditions that allow topological charge to move into and out of the system, and non-orientable boundary conditions that destroy the notion of an integer topological charge. An alternative route lies in algorithmic modifications such as metadynamics, where a potential bias is introduced to disfavour revisiting configurations, so as to forcibly sample across the potential wells of different topological sectors over time, or multiscale thermalization, where a Markov chain is first run at a coarse lattice spacing to obtain well-decorrelated configurations, and each of those is then subjected to a refining operation to obtain a (non-thermalized) gauge configuration at half the lattice spacing, which can then hopefully be thermalized by a short sequence of Monte Carlo update operations.
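The metadynamics idea is easy to demonstrate outside of lattice QCD. The toy sketch below is illustrative only (in the real application the bias acts on a collective variable such as the topological charge; here the walker's position plays that role, and the hill height, width and double-well "action" are arbitrary choices): a Metropolis walker in a double well deposits Gaussian hills along its history, so the well it currently occupies is gradually disfavoured.

```python
import numpy as np

rng = np.random.default_rng(1)

def action(x):
    # Double well: the two minima stand in for two "sectors" separated by a
    # barrier that an unbiased simulation crosses only very rarely.
    return 8.0 * (x**2 - 1.0)**2

hills = []                     # centres of the deposited Gaussian hills
W, sigma = 0.08, 0.15          # hill height and width (arbitrary toy values)

def bias(x):
    if not hills:
        return 0.0
    h = np.asarray(hills)
    return float(np.sum(W * np.exp(-0.5 * ((x - h) / sigma) ** 2)))

x, step = -1.0, 0.3
visits = [0, 0]                # time spent in the left / right well
for it in range(20000):
    x_new = x + step * rng.normal()
    dS = (action(x_new) + bias(x_new)) - (action(x) + bias(x))
    if dS < 0 or rng.random() < np.exp(-dS):
        x = x_new
    if it % 10 == 0:
        hills.append(x)        # disfavour returning to already-visited regions
    visits[int(x > 0)] += 1

print("fraction of time in each well:", np.array(visits) / sum(visits))
# With W = 0 the walker typically stays stuck in the well it started in;
# with the bias switched on, both wells get sampled.
```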

As another example of new algorithmic ideas, Shinji Takeda presented tensor networks, which are mathematical objects that assign a tensor to each site of a lattice, with lattice links denoting the contraction of tensor indices. An example is given by the rewriting of the partition function of the Ising model that is at the heart of the high-temperature expansion, where the sum over the spin variables is exchanged against a sum over link variables taking values of 0 or 1. One of the applications of tensor networks in field theory is that they allow for an implementation of the renormalization group based on performing a tensor decomposition along the lines of a singular value decomposition, which can be truncated, and contracting the resulting approximate tensor decomposition into new tensors living on a coarser grid. Iterating this procedure until only one lattice site remains allows the evaluation of partition functions without running into any sign problems and at only O(log V) effort.
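The Ising rewriting is simple enough to verify directly on a tiny lattice. The sketch below (an illustration of the rewriting only, not a tensor renormalization group implementation) builds the local tensor from the high-temperature expansion and checks, on a 2x2 periodic lattice, that contracting the tensor network reproduces the brute-force partition function:

```python
import numpy as np
from itertools import product

beta = 0.44                                   # some inverse temperature

# High-temperature rewriting: exp(beta*s*s') = cosh(beta) * sum_{n=0,1} (tanh beta)^n (s*s')^n.
# Summing out each spin leaves a 4-index tensor per site, whose indices are the
# link variables n = 0, 1 on the four attached links; the spin sum forces the
# sum of the four indices to be even.
t = np.tanh(beta)
T = np.zeros((2, 2, 2, 2))
for idx in product((0, 1), repeat=4):
    if sum(idx) % 2 == 0:
        T[idx] = 2.0 * t ** (sum(idx) / 2)    # sqrt(tanh) assigned to each end of a link

# 2x2 lattice with periodic boundaries: 4 sites, 8 links.  T is symmetric in
# its indices, so only the link-by-link pairing in the einsum matters.
n_links = 8
Z_tn = np.cosh(beta) ** n_links * np.einsum('abeg,bafh,cdge,dchf->', T, T, T, T)

# Brute force over the 2^4 spin configurations, counting every one of the
# 8 bonds (wrap-around bonds are distinct from the direct ones).
Z_bf = 0.0
for spins in product((-1, 1), repeat=4):
    s = np.array(spins).reshape(2, 2)
    energy = sum(s[r, c] * (s[r, (c + 1) % 2] + s[(r + 1) % 2, c])
                 for r in range(2) for c in range(2))
    Z_bf += np.exp(beta * energy)

print(Z_tn, Z_bf)                             # the two agree to machine precision
```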

After the coffee break, Sara Collins gave the review talk on hadron structure. This is also a field in which a lot of progress has been made recently, with most of the sources of systematic error either under control (e.g. by performing simulations at or near the physical pion mass) or at least well understood (e.g. excited-state and finite-volume effects). The isovector axial charge gA of the nucleon, which for a long time was a bit of an embarrassment to lattice practitioners, since it stubbornly refused to approach its experimental value, is now understood to be particularly severely affected by excited-state effects, and once these are well enough suppressed or properly accounted for, the situation looks quite promising. This lends much greater credibility to lattice predictions for the scalar and tensor nucleon charges, for which little or no experimental data exists. The electromagnetic form factors are also in much better shape than one or two years ago, with the electric Sachs form factor coming out close to experiment (but still with insufficient precision to resolve the conflict between the experimental electron-proton scattering and muonic hydrogen results), while the magnetic Sachs form factor now shows a trend to undershoot experiment. Going beyond isovector quantities (in which disconnected diagrams cancel), the progress in simulation techniques for disconnected diagrams has enabled the first computation of the purely disconnected strangeness form factors. The sigma term σπN comes out smaller on the lattice than it does in experiment, which still needs investigation, and the average momentum fraction <x> has yet to receive the kind of concerted effort that has been devoted to the nucleon charges.

In keeping with the pattern of having large review talks immediately followed by a related topical talk, Huey-Wen Lin was next with a talk on the Bjorken-x dependence of the parton distribution functions (PDFs). While the PDFs are defined on the lightcone, which is not readily accessible on the lattice, a large-momentum effective theory formulation allows one to obtain them as the infinite-momentum limit of finite-momentum parton distribution amplitudes. First studies show interesting results, but renormalization still remains to be performed.

After lunch, there were parallel sessions, of which I attended the ones into which most of the (g-2) talks had been collected; these showed quite a lot of progress, in particular in the treatment of the disconnected contributions.

In the evening, the poster session took place.

by Georg v. Hippel (noreply@blogger.com) at July 26, 2016 08:32 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 5
As explained in the first installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers.
A disclaimer follows:

read more

by Tommaso Dorigo at July 26, 2016 08:24 PM

Emily Lakdawalla - The Planetary Society Blog

Rosetta end-of-mission plans: Landing site, time selected
ESA's comet-chasing Rosetta spacecraft is nearing the end of its mission. Last week, ESA announced when and where Rosetta is going to touch down. And tomorrow, it will forever shut down the radio system intended for communicating with the silent Philae lander.

July 26, 2016 08:13 PM

Peter Coles - In the Dark

#LeaveTheDark: Easy social media shares

My blog may be called “In The Dark” but that won’t stop me sharing this.


Yesterday we had a request for a shareable summary of the #LeaveTheDark manifesto. That’s quite difficult to do in brief. There’s a lot to it. A more complete manifesto will follow within the next few days.

But in the meantime we thought these little images might be helpful for people who’d like to spread the word a bit. There’s no issue with copyright or anything like that. We want and need people to share our stuff. That’s the only way we’ll reach enough people to turn last week’s idea into next year’s reality! And we will make this a reality.

So feel free to download and share these images across your networks. The more the merrier…


Thank you! Together we really can make a difference and turn back the growing swell of hatred that threatens to engulf our society.

View original post


by telescoper at July 26, 2016 07:13 PM

Emily Lakdawalla - The Planetary Society Blog

We’re Building a Movement!
Today we launch a new expedition to engage our members in more ways than ever before. Since our inception, our members have supported The Planetary Society as we forge new paths in space science and exploration. You have always been at the center of our success and we want the structure of our membership program to reflect that by offering new benefits, premiums and payment options.

July 26, 2016 05:00 PM

astrobites - astro-ph reader's digest

Catching a small galaxy in the act of formation

Title: DDO 68: A flea with smaller fleas that on him prey
Authors: F. Annibali, C. Nipoti, L. Ciotti, M. Tosi, A. Aloisi, M. Bellazzini, M. Cignoni, F. Cusano, D. Paris, and E. Sacchi
First Author’s Institution: INAF, Bologna Observatory, Department of Physics and Astronomy, University of Bologna
Status: Accepted for publication in ApJL

Scientists think they have a pretty good idea of how the first galaxies formed: Gas accreted into areas where dark matter was more dense, relics known as “overdensities” left behind from the Big Bang and inflation. More and more mass glommed together; as the mass grew, so did the gravity. Eventually, stars formed and the first galaxies came into being. Scientists have been able to simulate this chain of events using our current understanding of physics in the early universe, but actually observing this process would require instruments that could measure very small scales at the earliest epochs. Alternatively, you could look for very small, newly-forming galaxies in the current universe to observe how they accrete matter. Unfortunately, most of these dwarf galaxies have been or are in the process of being cannibalized by other, more massive galaxies, rather than doing the accreting themselves.

The authors of today’s paper have conducted a study of a dwarf irregular galaxy called DDO 68. This galaxy lies in an expanse of fairly empty space known as the Lynx-Cancer void about 13 Megaparsecs (or 42 million light years) away from us. DDO 68 is extremely metal-poor, with a metallicity of only 2.5% that of the Sun. Low metallicity, meaning that the galaxy has very low occurrence of elements heavier than hydrogen and helium, is an indication that a galaxy is very young and likely still in the process of forming. This dwarf galaxy has very irregular morphology, with an extended comet-like tail containing stars of many different ages. This is illustrated in the figure below.

Figure 1: Images of DDO 68 taken with the Large Binocular Telescope. (a) shows the galaxy and three separate substructures, denoted as S1, the Tail, and the Arc. S1 is shown magnified in (b), and (c) shows a zoomed image of the Arc.

The team studying DDO 68 took very deep images of the galaxy using the Large Binocular Telescope located in southeastern Arizona. In these images the astronomers found minute structures that had not been identified previously. Figure 1 shows these two formations, referred to as the “Arc” and “S1”, in panels (b) and (c) of the image. The first step was to make sure that these structures are actually associated with DDO 68. The authors did this by ensuring that the luminosities of the stars in these structures are compatible with being at the same distance as DDO 68.

Next, using color-magnitude diagrams, the stars in the Arc were determined to be mostly young, with ages of about 200 million years. This is illustrated in Figure 2, where different tracks on the color-magnitude diagram correspond to different stellar ages. However, the stars in S1 are all more than 2 billion years old. This indicates that the Arc and S1 originated from two different systems that are being accreted by DDO 68. This is supported by the fact that there are no companions near enough to the galaxy to have perturbed it in that manner.

Figure 2: Color-magnitude diagrams of the Arc and S1 substructures of DDO 68, taken with the Hubble Space Telescope. The colored lines correspond to different stellar ages. Panel (a) is the diagram for the Arc and (b) shows the diagram for S1.

So, is the dwarf galaxy DDO 68 cannibalizing its even smaller companions? To test their conclusion, the authors carried out simulations to see if they could replicate the unusual morphology of the galaxy. The simulations showed that the tail and the Arc could be reproduced by the accretion of a single companion. The S1 structure required an additional system to be accreted. In this paper, astronomers have identified unique features that indicate this isolated dwarf galaxy is in the act of accreting smaller companion galaxies. This will help to inform our understanding of the formation of the first galaxies.

by Joanna Bridge at July 26, 2016 04:14 PM

Symmetrybreaking - Fermilab/SLAC

The most important website in particle physics

The first website to be hosted in the US has grown to be an invaluable hub for open science.

With tens of thousands of particle physicists working in the world today, the biggest challenge a researcher can have is keeping track of what everyone else is doing. The articles they write, the collaborations they form, the experiments they run—all of those things are part of being current. After all, high-energy particle physics is a big enterprise, not the province of a few isolated people working out of basement laboratories. 

Particle physicists have a tool that helps them with that. The INSPIRE database allows scientists to search for published papers by topic, author, scholarly journal, the previous papers the authors cited and the newer papers that have cited them in turn.

“I don't know any other discipline with such a central tool as INSPIRE,” says Sünje Dallmeier-Tiessen, an information scientist at CERN who manages INSPIRE’s open-access initiative. If you’re a high-energy physicist, “everything that relates to your daily work-life, you can find there.”

Researchers in high-energy physics and related fields use INSPIRE for their professional profiles, job-hunting and promotional materials. They use it to keep track of other people’s research in their disciplines and for finding good resources to cite in their own papers. 

INSPIRE has been around in one form or another since 1969, says Bernard Hecker, who is in charge of SLAC’s portion of INSPIRE.  “So we have a high level of credibility with people who use the service.” 

INSPIRE contains up-to-date information about over a million papers, including those published in the major journals. INSPIRE's database also interacts with the arXiv, a free-access site that hosts papers independently of whether they're published in journals or not. “We text-mine everything [on the arXiv], and then provide search to the content, and search based on specific algorithms we run,” Dallmeier-Tiessen says. 

In that way, INSPIRE is a powerful addition to the arXiv, which itself provides access to many articles that would otherwise require expensive journal subscriptions or exorbitant one-time fees. 

A lot of human labor is involved. The arXiv, for example, doesn’t distinguish between two people with the same last name and same first initial. “We have a strong interest in keeping dynamic profiles and disambiguating different researchers with similar names,” Hecker says. 

To that end, the INSPIRE team looks at author lists on published papers to match individual researchers with their correct institutions. This includes collaborating with the Institute of High Energy Physics in China, as well as cross-checking other databases.
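As a toy illustration of why this takes human effort (this is emphatically not INSPIRE's actual pipeline, and the author records below are invented), compare clustering authors by surname and first initial alone with also using the affiliation printed on the paper:

```python
from collections import defaultdict

# Made-up author records: (name as printed on the paper, affiliation).
records = [
    ("Y. Wang", "IHEP Beijing"),
    ("Yi Wang", "IHEP Beijing"),
    ("Y. Wang", "SLAC"),
    ("B. Hecker", "SLAC"),
]

def name_key(name):
    parts = name.replace(".", "").split()
    return (parts[-1].lower(), parts[0][0].lower())   # (surname, first initial)

# Clustering on the name key alone lumps two different "Y. Wang"s together.
by_name = defaultdict(set)
for name, affiliation in records:
    by_name[name_key(name)].add((name, affiliation))

# Adding the affiliation splits them, and merges "Y." and "Yi" Wang at IHEP --
# the kind of evidence a human curator still has to double-check.
by_name_and_affiliation = defaultdict(set)
for name, affiliation in records:
    by_name_and_affiliation[(name_key(name), affiliation)].add(name)

print(dict(by_name))
print(dict(by_name_and_affiliation))
```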

The goal, Hecker says, is “trying to find the stuff that’s directly relevant and not stuff that’s not relevant.” After all, researchers will only use the site if it’s useful, a complicated challenge that INSPIRE has met consistently. “We’re trying to optimize the time researchers spend on the site.”

Sandbox Studio, Chicago with Lexi Fodor

Now That's What I Call Physics

Every January, the INSPIRE team releases a list of the top 40 most cited articles in high-energy physics that year.

Looking over the list for 2015, you might be forgiven for thinking it was a slow year. The most commonly referenced articles were papers from previous years, some just a few years old, a few going back several decades. 

But even in years without a blockbuster discovery such as the Higgs boson or gravitational waves, INSPIRE’s list is still a useful snapshot of where the minds of the research community are focused.

In 2015, researchers prioritized studying the Higgs boson. The two most widely referenced articles of 2015 were the papers announcing its discovery by researchers at the ATLAS and CMS detectors at the Large Hadron Collider. The INSPIRE “top 40” for 2015 also includes the original 1964 theoretical papers by Peter Higgs, François Englert, and Robert Brout predicting the existence of the Higgs. 

Another topic that stood out in 2015 was the cosmic microwave background, a pattern of light that could tell us about conditions in the universe just after the Big Bang. Four highly cited papers, including the third most-referenced, came from the Planck cosmic microwave background experiment, with a fifth devoted to the final WMAP cosmic microwave background data. 

It seems that cosmology was on physicists’ minds. Two more top papers were the first measurements of dark energy from the late ’90s, while yet two more described results from the dark matter experiments LUX and XENON100.

Sandbox Studio, Chicago with Lexi Fodor

Open science, open data, open code

INSPIRE grew out of the Stanford Public Information Retrieval System (SPIRES), a database started at SLAC National Accelerator Laboratory in 1969 when the internet was in its infancy. 

After Tim Berners-Lee developed the World Wide Web at CERN, SPIRES was the first US-hosted website.

Like high-energy physics itself, the database is international and cooperative. SLAC joined with Fermi National Accelerator Laboratory in the United States, DESY in Germany, and CERN in Switzerland, which now hosts the site, to create the modern version of INSPIRE. The newest member of the collaboration is IHEP Beijing in China. Institutions in France and Japan also collaborate on particular projects. 

INSPIRE has changed a lot since its inception, and a new version is coming out soon. The biggest change will extend INSPIRE’s database to include repositories for data and computer code. 

Starting later this year, INSPIRE will integrate with the HEPData open-data archive and the GitHub code-collaboration system to increase visibility for both data and code that scientists write. The INSPIRE team will also roll out a new interface, so it looks “less like something from 1995,” Hecker says.

From its inception as a way to share printed articles by mail, INSPIRE continues to be a valuable resource to the community. With more papers coming out every year and no sign of decrease in the number of particle physicists working, the need to build on past research—and construct collaborations—is more important than ever.

by Matthew R. Francis at July 26, 2016 01:00 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 4
As explained in the previous installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers.
A disclaimer follows:

read more

by Tommaso Dorigo at July 26, 2016 10:59 AM

Lubos Motl - string vacua and pheno

Families from the Mexican \(\Delta(54)\) symmetry
The most similar previous blog post was one about the \(\Delta(27)\) group

The first hep-ph paper today is dedicated to heterotic string phenomenology.
Delta(54) flavor phenomenology and strings
was written by Mexicans, Ms Brenda Carballo-Perez, Eduardo Peinado, Saul Ramos-Sanchez, but that can't prevent it from being more interesting than many papers from the U.S. The first hep-ph papers often look more interesting than the rest. I believe that also in this case, the authors struggled to get the #1 spot because they're more excited about their work than the authors of the remaining papers today.



Michal Tučný, "Everyone is already in Mexico". Buenos días, I am also going. One of his top 20 best country music songs.

The Standard Model of particle physics is usually formulated as a gauge theory based on the \(SU(3)\times SU(2)\times U(1)\) gauge group. The particles carry the color and the electroweak charges. The gauge group is continuous which implies that there are gauge bosons in the spectrum.

However, the Standard Model also requires 3 generations of fermions – quarks and leptons. Because of this repetitive structure, it's natural to imagine that they transform as "triplets" under another, family group as well. However, there are apparently no \(SU(3)_{\rm flavor}\) gauge bosons, at least not available at the LHC yet. For this and other reasons, it's more sensible to assume that the 3 generations of fermions are "triplets" under a discrete, and not continuous, family symmetry.




The family groups that have been tried and that admit three-dimensional representations have been \(\ZZ_3\), \(S_3\), and \(\Delta(27)\). However, there exists one larger discrete group with a three-dimensional representation, the \(\Delta(54)\) group. With some definitions, it's the maximal one – the maximal "exceptional" discrete symmetry with three-dimensional representations, and the previous three cases are all subgroups of \(\Delta(54)\).




Surely many people share the gut feeling that the maximum symmetries of a similar exceptional type are (just like \(E_8\) among the simple Lie groups) the "most beautiful ones" and therefore most promising ones, at least from a certain aesthetic viewpoint. The larger symmetry with 54 elements probably makes the models more constrained and therefore more predictive.

These Mexican heterotic string theorists point out that the heterotic orbifolds of tori haven't led to viable models with this \(\Delta(54)\) symmetry yet. But that's because people were focusing on orbifolds\[

T^6 / \ZZ_3, \quad T^6 / \ZZ_3 \times \ZZ_2.

\] However, they propose different orbifolds instead:\[

T^6 / \ZZ_3 \times \ZZ_3

\] There are various discrete groups here and I need to emphasize that the group \(G\) in the description of the orbifold, \(T^6 / G\), is not the same group as the group produced as the family symmetry by the resulting heterotic string model. The group \(G\) is being "gauged" on the world sheet. It means that all the one-string states have to be invariant under \(G\) i.e. there are no charged states or non-singlets under \(G\); and closed string sectors with almost periodic boundary conditions up to the action of \(G\) have to be added, the twisted sectors.

The family group produced by orbifolds is in some way a "dual" group. The \(\ZZ_3\times \ZZ_3\) orbifold produces three fixed points not identified with each other. Dynamics at each of them is the same so there is actually an \(S_3\) symmetry (the full permutation group with 6 elements) exchanging them. A semidirect product of this group with a \(\ZZ_3\times \ZZ_3\) group (different from \(G\)) has to be considered because an additional global symmetry results from the action on localization charges \(m,q\) of the twisted sectors etc. The semidirect product with 54 elements is what we call \(\Delta(54)\).
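If you want to see the counting explicitly, here is a brute-force closure check with one common-looking choice of \(3\times 3\) generators for the triplet (the specific matrices are my illustrative convention; the literature uses several equivalent ones): the cyclic permutation and \({\rm diag}(1,\omega,\omega^2)\) close on 27 elements of the \(\Delta(27)\) type, and adding a transposition-like generator doubles this to 54, matching the order of \(\Delta(54)\).

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# One illustrative choice of 3x3 generators acting on a triplet:
E = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)   # cyclic Z3 "shift"
B = np.diag([1, w, w**2])                                        # Z3 "clock"
A = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)   # order-2 swap

def closure(generators):
    """Brute-force multiplicative closure of a finite group of matrices."""
    key = lambda M: tuple(np.round(M, 8).flatten())
    group = {key(np.eye(3, dtype=complex)): np.eye(3, dtype=complex)}
    frontier = list(generators)
    while frontier:
        new = []
        for g in frontier:
            for h in list(group.values()) + list(generators):
                for M in (g @ h, h @ g):
                    k = key(M)
                    if k not in group:
                        group[k] = M
                        new.append(M)
        frontier = new
    return group

print(len(closure([E, B])))        # 27: a Delta(27)-type group (includes the scalars w^k)
print(len(closure([E, B, A])))     # 54: matches the order of Delta(54)
```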

All the fields and terms in the low-energy Lagrangian should simply be symmetric under this \(\Delta(54)\) and that leads to constraints on the Yukawa couplings and other things. It seems that their models – they found some 700 heterotic vacua using some software – have several great phenomenological properties such as
  1. qualitatively realistic quark and lepton masses in general, including the recently proven nonzero \(\theta_{13}\) neutrino mixing angle
  2. the right relationship between the Cabibbo angle and the strange-to-down quark mass ratio (the Gatto-Sartori-Tonin relation)
  3. a cool equation relating ratios of both (down-type) quark and (charged) lepton masses: \[

    \frac{m_s-m_d}{m_b} = \frac{m_\mu - m_e}{m_\tau}

    \] Well, this relationship isn't obeyed by the observed masses but corrections could make it true
  4. the normal, not inverted, hierarchy of neutrino masses, as preferred by latest experiments
  5. PMNS neutrino matrix compatible with the known data
The incorrect novel numerical relationship above may be cured with some corrections and another problem, the rapid proton decay, may perhaps be solved by some discrete symmetries in new examples of the models they haven't considered. At this moment, the models don't work "quite perfectly" and there are glitches.
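For orientation, one can plug rough running masses into items 2 and 3 of the list above (the numbers below are approximate, textbook-level values, good only for the orders of magnitude):

```python
import math

# Rough running masses in GeV (approximate values, for illustration only).
m_d, m_s, m_b = 0.0047, 0.095, 4.18
m_e, m_mu, m_tau = 0.000511, 0.1057, 1.777

# Gatto-Sartori-Tonin: the Cabibbo angle from the down-type mass ratio.
print("sqrt(m_d/m_s)     =", round(math.sqrt(m_d / m_s), 3))   # ~0.22, vs sin(theta_C) ~ 0.225

# The new relation (m_s - m_d)/m_b = (m_mu - m_e)/m_tau:
print("(m_s-m_d)/m_b     =", round((m_s - m_d) / m_b, 3))      # ~0.02
print("(m_mu-m_e)/m_tau  =", round((m_mu - m_e) / m_tau, 3))   # ~0.06
# The two sides differ by a factor of roughly three, which is why corrections
# would be needed before the relation can be taken seriously.
```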

But it's amazing how fine questions about the spectrum and parameters of the Standard Model are already being "almost completely explained" by a theory based on a completely different and much more concise starting point – the hybrid of the only bosonic \(D=26\) string theory and the only \(D=10\) superstring with the maximum family group produced at low energies.

When you combine these conditions, you rather naturally obtain three generations with realistic constraints on the quarks' and leptons' masses and mixing angles. Whoever isn't intrigued by those hints is a cold-blooded animal.

I added the label/category "music" to this blog post because of the extensive discussion of (mostly Czech) country music in the comment section.

by Luboš Motl (noreply@blogger.com) at July 26, 2016 10:14 AM

Peter Coles - In the Dark

The 3.5 keV “Line” that (probably) wasn’t…

About a year ago I wrote a blog post about a mysterious “line” in the X-ray spectra of galaxy clusters corresponding to an energy of around 3.5 keV. The primary reference for the claim is a paper by Bulbul et al which is, of course, freely available on the arXiv.

The key graph from that paper is this:

[Figure: the stacked XMM spectrum from Bulbul et al.]

The claimed feature – it stretches the imagination considerably to call it a “line” – is shown in red. No, I’m not particularly impressed either, but this is what passes for high-quality data in X-ray astronomy!

Anyway, there has just appeared on the arXiv a paper by the Hitomi Collaboration describing what is basically the only set of science results that the Hitomi satellite managed to obtain before it fell to bits earlier this year. These were observations of the Perseus Cluster.

Here is the abstract:

High-resolution X-ray spectroscopy with Hitomi was expected to resolve the origin of the faint unidentified E=3.5 keV emission line reported in several low-resolution studies of various massive systems, such as galaxies and clusters, including the Perseus cluster. We have analyzed the Hitomi first-light observation of the Perseus cluster. The emission line expected for Perseus based on the XMM-Newton signal from the large cluster sample under the dark matter decay scenario is too faint to be detectable in the Hitomi data. However, the previously reported 3.5 keV flux from Perseus was anomalously high compared to the sample-based prediction. We find no unidentified line at the reported flux level. The high flux derived with XMM MOS for the Perseus region covered by Hitomi is excluded at >3-sigma within the energy confidence interval of the most constraining previous study. If XMM measurement uncertainties for this region are included, the inconsistency with Hitomi is at a 99% significance for a broad dark-matter line and at 99.7% for a narrow line from the gas. We do find a hint of a broad excess near the energies of high-n transitions of Sxvi (E=3.44 keV rest-frame) – a possible signature of charge exchange in the molecular nebula and one of the proposed explanations for the 3.5 keV line. While its energy is consistent with XMM pn detections, it is unlikely to explain the MOS signal. A confirmation of this interesting feature has to wait for a more sensitive observation with a future calorimeter experiment.

And here is the killer plot:

[Figure: the Hitomi spectrum of the Perseus Cluster]

The spectrum looks amazingly detailed, which makes the demise of Hitomi all the more tragic, but the 3.5 keV line is conspicuous by its absence. So there you are, yet another supposedly significant feature that excited a huge amount of interest turns out to be nothing of the sort. To be fair, as the abstract states, the anomalous line was only seen by stacking spectra of different clusters and might still be there but too faint to be seen in an individual cluster spectrum. Nevertheless I’d say the probability of there being any feature at 3.5 keV has decreased significantly after this observation.

P.S. rumours suggest that the 750 GeV diphoton “excess” found at the Large Hadron Collider may be about to meet a similar fate.


by telescoper at July 26, 2016 09:43 AM

Peter Coles - In the Dark

2016 ‘Beards, Shorts & Sandals’ season reaches peak

The Great Question of the day..

Kmflett's Blog

Beard Liberation Front

26th July Contact Keith Flett 07803 167266

2016 ‘BEARDS, SHORTS & SANDALS’ SEASON REACHES PEAK


BLF permission given for official engagements only

The Beard Liberation Front, the informal network of beard wearers, has said that the 2016 Beards, Shorts and Sandals season has reached its peak in the recent warm weather in some parts of the UK.

The campaigners say that with temperatures well above 20C in some places the rise in the wearing of beards, shorts and sandals has been significant

Shorts and sandals may be worn all day until 9pm at the discretion of the wearer.

The campaigners say that the image of organic beard, shorts and sandals is one of the key male fashions of 2016 not just in Hackney and Dalston but across large parts of the UK as fashions change and develop.

The official season traditionally runs until August 31st. During…

View original post 174 more words


by telescoper at July 26, 2016 07:51 AM

July 25, 2016

Christian P. Robert - xi'an's og

asymptotic properties of Approximate Bayesian Computation

With David Frazier and Gael Martin from Monash University, and with Judith Rousseau (Paris-Dauphine), we have now completed and arXived a paper entitled Asymptotic Properties of Approximate Bayesian Computation. This paper undertakes a fairly complete study of the large sample properties of ABC under weak regularity conditions. We produce therein sufficient conditions for posterior concentration, asymptotic normality of the ABC posterior estimate, and asymptotic normality of the ABC posterior mean. Moreover, those (theoretical) results are of significant import for practitioners of ABC as they pertain to the choice of tolerance ε used within ABC for selecting parameter draws. In particular, they [the results] contradict the conventional ABC wisdom that this tolerance should always be taken as small as the computing budget allows.
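For readers new to the topic, here is a minimal ABC accept-reject sketch on a toy normal-mean problem (not the setting of the paper), just to make visible the role played by the tolerance ε:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y_1,...,y_n ~ N(theta, 1) with prior theta ~ N(0, 5^2);
# the summary statistic is the sample mean.
n, theta_true = 100, 1.5
y_obs = rng.normal(theta_true, 1.0, size=n)
s_obs = y_obs.mean()

def abc_rejection(eps, n_sim=200_000):
    theta = rng.normal(0.0, 5.0, size=n_sim)        # draws from the prior
    s_sim = rng.normal(theta, 1.0 / np.sqrt(n))     # simulated sample means
    return theta[np.abs(s_sim - s_obs) <= eps]      # accept-reject on the summary

for eps in (1.0, 0.3, 0.1, 0.03):
    post = abc_rejection(eps)
    print(f"eps={eps:<5} accept rate={post.size / 200_000:7.4f} "
          f"ABC posterior mean={post.mean():.3f}  sd={post.std():.3f}")

# As eps shrinks, the ABC posterior concentrates around the exact posterior
# (approximately N(s_obs, 1/n) here), but the acceptance rate collapses --
# which is why the choice of tolerance has to be traded off against the
# computing budget.
```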

Now, this paper bears some similarities with our earlier paper on the consistency of ABC, written with David and Gael. As it happens, that paper was rejected after submission, and I then discussed it in an internal seminar in Paris-Dauphine, with Judith taking part in the discussion and quickly suggesting an alternative approach that is now central to the current paper. The previous version analysed Bayesian consistency of ABC under specific uniformity conditions on the summary statistics used within ABC. But the conditions for consistency are now much weaker than before, thanks to Judith’s input!

There are also similarities with Li and Fearnhead (2015), previously discussed here. However, while similar in spirit, the results contained in the two papers differ strongly on several fronts:

  1. Li and Fearnhead (2015) considers an ABC algorithm based on kernel smoothing, whereas our interest is the original ABC accept-reject and its many derivatives
  2. our theoretical approach permits a complete study of the asymptotic properties of ABC, posterior concentration, asymptotic normality of ABC posteriors, and asymptotic normality of the ABC posterior mean, whereas Li and Fearnhead (2015) is only concerned with asymptotic normality of the ABC posterior mean estimator (and various related point estimators);
  3. the results of Li and Fearnhead (2015) are derived under very strict uniformity and continuity/differentiability conditions, which bear a strong resemblance to the conditions in Yuan and Clark (2004) and Creel et al. (2015), while the results herein do not rely on such conditions and only assume very weak regularity conditions on the summary statistics themselves; this difference allows us to characterise the behaviour of ABC in situations not covered by the approach taken in Li and Fearnhead (2015).

Filed under: pictures, Statistics, Travel, University life Tagged: ABC, asymptotic normality, Australia, Bayesian inference, concentration inequalities, consistency, convergence, identifiability, Melbourne, Monash University, summary statistics

by xi'an at July 25, 2016 10:16 PM

Georg von Hippel - Life on the lattice

Lattice 2016, Day One
Hello from Southampton, where I am attending the Lattice 2016 conference.

I arrived yesterday safe and sound, but unfortunately too late to attend the welcome reception. Today started off early and quite well with a full English breakfast, however.

The conference programme was opened with a short address by the university's Vice-President of Research, who made a point of pointing out that he, like 93% of UK scientists, had voted to remain in the EU - an interesting testimony to the political state of affairs, I think.

The first plenary talk of the conference was a memorial to the scientific legacy of Peter Hasenfratz, who died earlier this year, delivered by Urs Wenger. Peter Hasenfratz was one of the pioneers of lattice field theory, and hearing of his groundbreaking achievements is one of those increasingly rare occasions when I get to feel very young: when he organized the first lattice symposium in 1982, he sent out individual hand-written invitations, and the early lattice reviews he wrote were composed in a time where most results were obtained in the quenched approximation. But his achievements are still very much current, amongst other things in the form of fixed-point actions as a realization of the Ginsparg-Wilson relation, which gave rise to the booming interest in chiral fermions.

This was followed by the review of hadron spectroscopy by Chuan Liu. The contents of the spectroscopy talks have by now shifted away from the ground-state spectrum of stable hadrons, the calculation of which has become more of a benchmark task, and towards more complex issues, such as the proton-neutron mass difference (which requires the treatment of isospin breaking effects both from QED and from the difference in bare mass of the up and down quarks) or the spectrum of resonances (which requires a thorough study of the volume dependence of excited-state energy levels via the Lüscher formalism). The former is required as part of the physics answer to the ageless question why anything exists at all, and the latter is called for in particular by the still pressing question of the nature of the XYZ states.

Next came a talk by David Wilson on a more specific spectroscopy topic, namely resonances in coupled-channel scattering. Getting these right requires not only extensions of the Lüscher formalism, but also the extraction of very large numbers of energy levels via the generalized eigenvalue problem.

After the coffee break, Hartmut Wittig reviewed the lattice efforts at determining the hadronic contributions to the anomalous magnetic moment (g-2)μ of the muon from first principles. This is a very topical problem, as the next generation of muon experiments will reduce the experimental error by a factor of four or more, which will require a correspondingly large reduction in the theoretical uncertainties in order to interpret the experimental results. Getting to this level of accuracy requires getting the hadronic vacuum polarization contribution to sub-percent accuracy (which requires full control of both finite-volume and cut-off effects, and a reasonably accurate estimate for the disconnected contributions) and the hadronic light-by-light scattering contribution to an accuracy of better than 10% (which some way or another requires the calculation of a four-point function including a reasonable estimate for the disconnected contributions). There has been good progress towards both of these goals from a number of different collaborations, and the generally good overall agreement between results obtained using widely different formulations bodes well for the overall reliability of the lattice results, but there are still many obstacles to overcome.

The last plenary talk of the day was given by Sergei Dubovsky, who spoke about efforts to derive a theory of the QCD string. As with most stringy talks, I have to confess to being far too ignorant to give a good summary; what I took home is that there is some kind of string worldsheet theory with Goldstone bosons that can be used to describe the spectrum of large-Nc gauge theory, and that there are a number of theoretical surprises there.

Since the plenary programme is being streamed on the web, by the way, even those of you who cannot attend the conference can now do without my no doubt quite biased and very limited summaries and hear and see the talks for yourselves.

After lunch, parallel sessions took place. I found the sequence of talks by Stefan Sint, Alberto Ramos and Rainer Sommer about a precise determination of αs(MZ) using the Schrödinger functional and the gradient-flow coupling very interesting.

by Georg v. Hippel (noreply@blogger.com) at July 25, 2016 10:05 PM

astrobites - astro-ph reader's digest

Mosaic galaxies – from where do galaxies get their stars?

Title: Merger-driven evolution of the effective stellar initial mass function of massive early-type galaxies
Authors: A. Sonnenfeld, C. Nipoti, T. Treu
First author institute: Kavli IPMU, University of Tokyo
Status: Submitted for publication.

Introduction

Massive elliptical galaxies are vast collections of aging stars, whose foundations were laid when the universe was young. As we’ve covered previously, they form via a complex, two-stage process. First, gas is drawn into a blob of dark matter to rapidly form a dense core of stars. They then gradually accumulate smaller galaxies over billions of years to build up an extensive outer halo of stars.

These galaxies are some of the most interesting to study, and not just because of their great size. In the most massive of these, the stars are very different to those found in our own galaxy. This is perhaps not surprising: the environment found in these early galaxies is radically different to that of the Milky-Way. The density and pressure of the gas being turned into stars is much higher; the time involved to convert it all much shorter. One of the most intriguing results of recent years – covered by Astrobites at the time – is that the types of stars formed in these massive elliptical galaxies seem to differ from the local brand: giant elliptical galaxies seem to form many more dwarf stars than any individual region of the Milky-Way. Put another way, the stellar initial-mass function (IMF), which tells us the fraction of newly-created stars in a galaxy with a particular mass, is different in these galaxies.

That might not sound important, but it actually has quite profound consequences, both for the galaxy itself (if less gas goes into manufacturing giant stars, fewer supernovae go off, so less energy gets re-injected into the gas) and for those of us who spend our days observing them (a large collection of dwarf stars are dimmer than one giant star of the same mass, so when we measure how bright these galaxies are and make an educated guess about how much mass they contain, we might be off by a significant amount).

There is a problem with all this though: forming stars is a complicated business, the physics of which remains somewhat murky. Understanding exactly how the unusual conditions in these galaxies are connected to the stars they end up forming is an open problem.

The paper

The authors explore how these results tie in with the complicated formation history of these galaxies. Remember all those smaller systems that get sucked onto the dense core? The stars in those are – presumably – not overly-rich in dwarf stars. The upshot? Measurements of the IMF in local, old galaxies that have been accumulating these small systems for many billions of years are really measuring an average IMF. This has the unfortunate effect of obfuscating any links that remain between a particular galaxy’s properties and the stars it contains, since those stars have come from a number of sources which we have no way to know about. These galaxies are a patchwork of stars with diverse origins.

Using a model for the IMF of the precursor galaxies in conjunction with a model for the growth of these galaxies (via mergers with smaller galaxies) over time, the authors predict how the ‘effective IMF’ (the one we measure) might change over time. One of their main aims is to understand why it is that, at various times, different groups of researchers have claimed the IMF correlates with different galactic properties, such as total mass, or the velocity dispersion of the stars. These properties are themselves related, complicating matters further!

The figure depicts the dwarf enrichment of the stellar populations in a series of possible elliptical galaxies which the authors model, spanning a wide range in mass and velocity dispersion. The advantage of the model is that evolution over time can be followed, so the galaxies are shown at three times (blue, green, red – z is the redshift of the galaxy. Black solid lines connect a few representative galaxies, showing how they evolve between the three times as they accumulate more mass). The four panels show how the dwarf enrichment correlates with mass and velocity dispersion in two possible scenarios (in which the dwarf enrichment actually correlates with one or the other in the original galaxy before it accumulates mass through mergers). Figure 4 from the paper, modified.


The figure above shows the authors’ key results. They examine two possible models in which the IMF actually depends on either mass or velocity dispersion (in the initial galaxy, before it merges with other smaller galaxies). They then evolve the galaxies forward in time according to a model prescription, and examine how these properties correlate with the measured (effective) IMF. As expected, they find that the two scenarios cannot be distinguished easily from data on local galaxies alone. They find that as the measured mass grows, the level of dwarf enrichment falls; this is because the low mass systems deliver extra mass, but dilute the dwarf enriched stellar population. If the IMF is initially connected directly to velocity dispersion, the measured relationship between velocity dispersion and the IMF remains constant over time; the accumulated systems have low velocity dispersion, so decrease the overall velocity dispersion when they dilute the stellar population. It is possible that future observations of more distant galaxies might allow us to distinguish the models (or even favour one not considered here), since the relationship between the IMF and these properties is found to change as the galaxies grow.
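To see the dilution argument in its crudest possible form, here is a toy calculation (purely illustrative numbers, not the authors' model): fix the fraction of stellar mass in dwarf stars for the in-situ core and for the accreted, Milky-Way-like systems, and watch the mass-weighted average drift as the galaxy grows.

```python
# Purely illustrative numbers: assume some fixed fraction of the stellar mass
# sits in low-mass dwarf stars -- higher in the dwarf-enriched in-situ core
# than in the ordinary systems it later accretes.
frac_core, frac_accreted = 0.6, 0.4      # made-up mass fractions
core_mass = 1.0                          # in-situ stellar mass (arbitrary units)

for accreted_mass in (0.0, 0.25, 0.5, 1.0, 2.0):
    total = core_mass + accreted_mass
    # Mass fractions of sub-populations combine by simple mass weighting.
    frac_eff = (frac_core * core_mass + frac_accreted * accreted_mass) / total
    print(f"accreted/in-situ mass = {accreted_mass:4.2f}  ->  dwarf mass fraction = {frac_eff:.3f}")

# The effective dwarf enrichment slides from the core value (0.6) towards the
# ordinary value (0.4) as the galaxy accretes more: the signal is diluted,
# which is the trend described above for the higher-mass galaxies.
```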

Clearly, understanding variations in the stellar initial mass function of extreme galaxies is no mean feat. However, if it can be done, it could provide us with a vital link between the large scale properties of galaxies and the small scale, poorly understood processes that control star formation.

by Paddy Alton at July 25, 2016 09:54 PM

CERN Bulletin

Muse at CERN

On 19 July, the world-famous, English rock band, Muse, visited CERN before taking centre-stage at Nyon’s Paléo Festival. They toured some of CERN’s installations, including the Synchrocyclotron and the Microcosm exhibition, and also looked in on CMS and the Antimatter Factory.  

 

July 25, 2016 06:07 PM

Lubos Motl - string vacua and pheno

Experimental physicists shouldn't be trained as amateur theorists
Theoretical work is up to theorists who must do it with some standards

Tommaso Dorigo of CMS is an experimental particle physicist and in his two recent blog posts, he shares two problems that students at their institutions are tortured with. The daily problema numero uno wants them to calculate a probability from an inelastic cross section which is boring but more or less comprehensible and well-defined.



Dorigo just returned from a trip to Malta.

The problema numero due is less clear. I won't change anything about the spirit if I simplify the problem like this:
A collider produces two high-energy electrons, above \(50\GeV\). Think in all possible ways and tell us all possible explanations related to accelerators, detectors, as well as physical theories what's going on.
Cool. I have highly mixed feelings about such a vague, overarching task. On one hand, I do think that a very good experimenter such as Enrico Fermi is capable of producing answers of this broad type – and very good answers. And the problem isn't "too dramatically" different from the rather practical, "know everything" problems I was solving in the PhD qualifying exams at Rutgers – and I am too modest to hide that I got great results in the oral modern exam, good (A) results in the oral classical exam, and the best historical score in the written exam. ;-)

On the other hand, I don't think that there are too many Enrico Fermis in Italy these days – and even outside Italy – and the idea that a big part of the Italian HEP students are Enrico Fermis looks even more implausible to me. The problem described by Dorigo is simply more vague and speculative than the problems that look appropriate.




Unsurprisingly, this "problema numero due" was sufficiently vague and unfocused that it attracted no comments. What should one talk about if he's told to discuss "everything he knows or thinks about particle physics, with a special focus on high-energy electrons"? Well, there have been no comments except for mine. Let me repost my conversation with Dorigo.

LM: Sorry, Tommaso, but the kind of "problems"
...Consider all thinkable physics- and detector-related sources of the detected signature and discuss their chance to be the true source of the observation. Make all the credible assumptions you need about the system you deem appropriate.
are very poor from a pedagogic viewpoint. It's like "be a renaissance man who knows everything, just like the great Tommaso Dorigo". It's not possible to consider "all thinkable sources" in any kind of rigor – some of them are understood well, others are speculative or due to personal prejudices. So it would be impossible to grade such a thing. And at the end, the implicit assumption that Tommaso Dorigo himself is a renaissance man is sort of silly, especially because you basically never include "new physics" among the thinkable sources of observations and even if you did, you just have no idea which new physics is more likely and why.




TD: Hi Lumo, sorry but you should think a bit more about the task that the selection committee has in front of them. They have to select experimental physicists, not theorists. An experimental physicist should be able to produce an approximate answer, by considering the things worth mentioning, and omit the inessential ones.

Prejudices are good – they betray the background of the person that is being examined. This question is not too different in style from the 42 questions that were asked at the last big INFN exam, 11 years ago.



LM: Dear Tommaso, your order to "consider all thinkable things" is even more wrong, and *especially* wrong if the student is supposed to be an experimenter, because an experimenter simply shouldn't be solving the task of "thinking in all possible ways". He is not qualified (and usually not talented) for such things – and that's also why it's not the kind of activity that he will be doing in his job.

An experimenter is doing a complex but largely mechanical job "within a box" – at most, as a clever engineer, he is inventing ways to test things assuming that one stays within the box. He isn't supposed to understand why the box is what it is (derivation or justification of current or new theories) and his work doesn't depend on having opinions on whether the box is a good one or a bad one, whether it should be replaced by another one, and how likely that is.

High-energy electrons at a collider are *obviously* a chance that new physics has been discovered. That's why the colliders are being built. They *want* to and *should* find all the cracks in the existing theories of physics that are accessible within a given budget and a kind of a machine. What the possible theories explaining new phenomena with high-energy electrons are isn't something that an experimenter should answer; it's a difficult eclectic question for a theorist. And theorists wouldn't agree on "what kind of a theory" should explain new high-energy electron events. They have their priorities and ways to read the available evidence. But they're not hired or rated or graded for their opinions. Theorists are hired and rated for the ability to mentally create, derive, calculate, and construct arguments. You want to change that – you want an experimenter to be an amateur theorist who is graded for his opinions, not for actual theoretical work that he isn't producing.

A task like that could only be about the knowledge of the "box" – about all the reasons why the Standard Model could produce such a thing, including all the phenomena that appear in the dirty real world of the experiment. But [because the work on many distinct parts of an experiment is usually left to the specialists,] it's questionable whether it's helpful to formulate tasks that ask the student to consider *all* these dirty things at the same moment. It's almost like a question "tell us everything you know about experimental particle physics". There are lots of things and what are the priorities is unavoidably largely subjective and dependent on the context. It's bad to grade such things because the examiner is still imposing his personal opinions onto the students.

You know, what I am generally afraid of in this pedagogic style of yours is that you want to upgrade yourself to a kind of a duce – and grade students for the degree to which they agree with you. But that's exactly what science is *not* – and you wouldn't get an A if you were graded by real experts in these parts of thinking yourself. Science works independently of duces, including duces like you. It is building on the objective evidence, and by telling the students to show their thinking about almost everything and the balance between all these ideas, you are clearly shifting science away from the cold and unquestionable realm of objective evidence to the realm of personal opinions – and you (and every experimental physicist) are simply too small a man for such an ambitious position.



LM: added for TRF: Note that I was respectful towards Dorigo's nation – for example, I have used the original term "duce" instead of the derivative term "Führer". ;-)

But back to the issue. All of our disagreement is basically about "one idea" in some way. But we express it in various ways. For example, I haven't commented on Dorigo's comment that INFN has been using similar problems in many other exams (well, that's even worse if there's something fundamentally wrong with the problems) and especially his quote:
Prejudices are good – they betray the background of the person that is being examined.
Prejudices – that indeed betray the background – aren't "good" in science. Prejudices are unavoidable and the approach of people (theorists and experimenters) unavoidably depends on their background and prejudices to one extent or another. And some backgrounds may perhaps be more helpful in achieving a certain advance in science, which is why the diversity of backgrounds could sometimes help – although I am generally skeptical about similar claims. But a key insight of Dorigo's countryman Galileo Galilei that Tommaso Dorigo clearly misunderstands is that the whole scientific method is a clever and careful way to make the conclusions as independent of the prejudices as possible. And that's why rational people think of science (and Galileo) so highly! The processes included in the scientific method simply "dilute" the effect of the prejudices to homeopathically low levels at the end – the final conclusions are almost all about the cold hard evidence that is independent of the prejudices.

So prejudices may always exist to some extent (and I often like the fun of the interactions of the backgrounds, as my repeated references to Dorigo's Italian nationality in this very blog post also show) but science, if it is done well, is treating them as annoying stinky pests. Feminists and reverse racists may celebrate the influence of backgrounds and their diversity for their own sake (and so do all other racists, nationalists, staunch religious believers, and all other herd animals) but a person who thinks as a scientist never does. Science maximally minimizes the concentration and influence of these arbitrary cultural and psychological pests – and that's why science is more successful than all the previous philosophies, religions, nationalisms and group thinks combined. Dorigo worships the opposite situation and he worships the dependence on the background, so he is simply not approaching these fundamental issues as a scientist.

Backgrounds and prejudices may be fun but if a talk or an argumentation depends on them too much, this fun is not good science. I am sure that everyone whom I have ever met and who was considered a good scientist – at all institutions in the U.S. and my homeland – agrees with that basic assertion. Too bad Dorigo doesn't.

As I have explained, the question "what kind of new physics produces high-energy electrons" is a typical question that should be answered by a high-energy physics phenomenologist, not an experimenter. Similarly, the detector part of Dorigo's question should be answered by experimenters – but it's OK when they become specialized (someone understands calorimeters, others are great at the pile-up effect). And the phenomenologists will disagree about the most likely type of new physics that may lead to some observation! The answer hasn't been scientifically established and people have different guesses which concepts are natural or likely to change physics in the future etc. Their being good or the right men to be hired isn't determined by their agreement with each other or with some universal authorities. Phenomenologists and theorists are people who are good at calculating, organizing derivations, arguments, inventing new schemes and ideas etc. It doesn't matter whether they agree with some previous authorities.

Obviously, the Italian student facing Dorigo's question can't write a sensible scientific paper that would consider all possible schemes of new physics that produce high-energy electrons at the level of standards and rigor that is expected in particle physics – he would have to cover almost all ideas in high-energy physics phenomenology. This student is basically ordered to offer just the final answers – his opinions and his prejudices, without the corresponding argumentation or calculation. And that's what the student must unavoidably be graded for.

That's just bad, bad, bad. In practice, the student will be graded for the similarity of his prejudices and Dorigo's prejudices. And that's simply not how the grading works in the scientific method or any meritocracy. If a question hasn't been settled by convincing scientific evidence, the student shouldn't be graded for his opinions on that question. If he is, then this process can't systematically select good students, especially because Tommaso Dorigo – the ultimate benchmark in this pedagogic scheme – would get a grade rather distant from an A from some true experts if he were asked about any questions that depend on theory, especially theory and phenomenology beyond the Standard Model. For the Italian institutions, to produce clones (and often even worse, imperfect clones) of such a Dorigo is just counterproductive.

Dorigo's "pedagogic style" is ultimately rather similar to the postmodern education systems that don't teach hard facts, methods, algorithms, and well-defined and justifiable principles but that indoctrinate children and students and make them parrot some ideological and political dogmas (usually kitschy politically correct lies). Instead of learning how to integrate functions, children spend more time by writing "creative" (identical to each other) essays how the European Union is wonderful and how they dream about visiting Jean-Claude Juncker's rectum on a sunny day (outside) in the future.

That's not a good way to train experts in anything. And Dorigo's method is sadly very similar. Needless to say, my main specific worry is that people like Dorigo are ultimately allowed by the system to impose their absolutely idiotic opinions about the theory – something they know almost nothing about – on the students. It seems plausible that an Italian student who is much brighter than Dorigo – to make things extreme, imagine someone like me – will be removed by the Italian system because that has been hijacked by duces such as Dorigo who have completely misunderstood the lessons by giants such as Galileo. In this way, the corrupt Italian system may be terminating lots of Galileos, Majoranas, and Venezianos, among others, while it is producing lots of small marginal appendices to Dorigo who is ultimately just a little Šmoit himself.

By the way, after the blog post above was published, a new comment was posted by another user nicknamed "Please Respect Anonymity" on Dorigo's blog (the post about the "problema numero due"):
Based on credible assumptions about the system, a possible theory is that some not-very-good physicist expert in leptons, photons and missing energy must win the "concorso". So the commissars choose an undefined question with a subjective answer about leptons, photons and missing energy.

PS: the question would be very good in a different system.
Exactly, amen to that. Perhaps at a school with 500 clones of Enrico Fermi (among commissars and students) who were made up-to-date, the question would be great. But in the realistic system we know, with the names we know, it's pretty likely that the question brings at most noise to the grades etc. Or something worse.

Dorigo's reply is that the question is great and the right answer is that there had to be a Z-boson or two W-bosons with either one or two photons, or a spurious signal in the calorimeter. I think that only a small fraction of the true experts would answer the question exactly in this way. Moreover, the answer completely omits any new physics possibility, as does all of Dorigo's thinking. It's extremely problematic for experimenters not to have an idea what a discovery of new physics would actually look like – or to encourage them to believe that it can't happen – because that's the main achievement that the lucky ones among them will make.

Experimenters simply must be able to look for signals of new proposed theories, at least some of them – whether they "like" the new theories (as a group or individually) or not. Whether they like them or not should be an irrelevant curiosity because they are simply not experts in these matters so this expert work shouldn't be left to them. Experimenters' opinions about particular new theories' "a priori value" should be as irrelevant as the opinion of the U.S. president's toilet janitor about the Israeli-Palestinian conflict. She can have an opinion but unless the system is broken, the opinion won't affect the U.S. policies much. If she believes that there can't be an excrement at a wrong place that needs to be cleaned after the visit by a Palestinian official, and that's why she doesn't clean it, well, she must be fired and replaced by another one (be sure that that may happen). The same is true for a CMS experimenter who is hired to look for new physics but is unable to look for SUSY because of some psychological problems.

by Luboš Motl (noreply@blogger.com) at July 25, 2016 05:45 PM

CERN Bulletin

LHC Report: imaginative injectors

A new bunch injection scheme from the PS to the SPS allowed the LHC to achieve a new peak luminosity record.

 

Figure 1: The PSB multi-turn injection principle: parameters are varied during injection with the aim of putting the newly injected beam in a different region of the transverse phase-space plane.

The LHC relies on the injector complex to deliver beam with well-defined bunch populations and the necessary transverse and longitudinal characteristics – all of which fold directly into luminosity performance. There are several processes taking place in the PS Booster (PSB) and the Proton Synchrotron (PS) acting on the beam structure in order to obtain the LHC beam characteristics. Two processes are mainly responsible for the beam brightness: the PSB multi-turn injection and the PS radio-frequency (RF) gymnastics. The total number of protons in a bunch and the transverse emittances are mostly determined by the multi-turn Booster injection, while the number of bunches and their time spacing come from the PS RF gymnastics. The emittance of a bunch is a combined measure of the transverse size and the angular divergence of the protons in a bunch. Smaller emittance means smaller beam size and, in this particular case, smaller beam sizes at the interaction points of the LHC and thus higher luminosity.
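
To make the emittance–luminosity link concrete, here is a minimal sketch in Python based on the standard round-beam peak-luminosity formula; all parameter values below are illustrative round numbers of our own choosing, not official LHC settings.

```python
# A rough sketch (illustrative numbers, not official LHC parameters) of how the
# normalised emittance eps_n enters the standard peak-luminosity estimate
#   L = F * Nb^2 * nb * f_rev * gamma / (4 * pi * eps_n * beta_star)
import math

def peak_luminosity(protons_per_bunch, n_bunches, eps_n, beta_star,
                    f_rev=11245.0, gamma=6.93e3, geometric_factor=0.85):
    """Peak luminosity in cm^-2 s^-1 for round beams (eps_n, beta_star in metres)."""
    L = (geometric_factor * protons_per_bunch**2 * n_bunches * f_rev * gamma
         / (4 * math.pi * eps_n * beta_star))      # in m^-2 s^-1
    return L * 1e-4                                # convert to cm^-2 s^-1

# Same filling and bunch charge; only the transverse emittance shrinks (as with BCMS).
nominal = peak_luminosity(1.1e11, 2076, 2.5e-6, 0.40)
smaller = peak_luminosity(1.1e11, 2076, 2.0e-6, 0.40)
print(f"{nominal:.1e} -> {smaller:.1e} cm^-2 s^-1 (gain {smaller/nominal - 1:.0%})")
```

The roughly 20% gain reported below for the BCMS beam is of the same order as this simple 1/ε scaling would suggest, once some emittance blow-up during the ramp is allowed for.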

In providing beams to the LHC, the injectors have demonstrated remarkable flexibility, and on Saturday, 16 July the LHC made use of an imaginative beam production scheme called Batch Compression Merging and Splitting (BCMS), which offers significantly lower transverse beam size with respect to the nominal production scheme. Despite some blow-up in the LHC during the ramp, the BCMS beam gave an increase in peak luminosity of around 20% and a new record of 1.2 × 10³⁴ cm⁻² s⁻¹.

Figure 2: Nominal scheme for the longitudinal splitting of the PSB bunches to reach 72 bunches spaced by 25 ns.

The multi-turn injection
The beam coming from Linac 2 is continuous and is injected sequentially into each of the four PSB rings. For each ring, the injection time can be longer than the proton revolution time. It is by choosing for how many turns beam is injected from Linac 2 into the PSB that the total beam intensity per ring is controlled. To inject more than one turn of continuous beam, the process relies on varying parameters during injection, such as the position of the beam at the injection point or the field in the main bending magnets. The aim is to put the newly injected and circulating beam in a different region of the transverse phase-space (figure 1). One consequence of such a process is that the more protons are injected, the larger the transverse emittance.

RF gymnastics and LHC bunch spacing
In order to obtain the 25-ns bunch spacing, a suitable multiple of this value has to be found among the available PS RF harmonics: the PS has a length of 628 meters, giving a revolution time for the protons at 26 GeV of about 2.1 µs. The key harmonic to be reached is therefore H21: on H21, the bunch spacing will be 100 ns, i.e. four times the required 25 ns, which is then obtained by two subsequent splittings in two. Different RF harmonics are produced by the impressive range of RF cavities in the PS.
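
The arithmetic behind the choice of H21 can be checked in a couple of lines; the only inputs are the circumference quoted above and the assumption that 26 GeV protons travel at essentially the speed of light.

```python
# Back-of-the-envelope check of the PS harmonic numbers quoted above.
C_PS = 628.0           # PS circumference in metres
c = 2.998e8            # speed of light in m/s; 26 GeV protons move at ~c
t_rev = C_PS / c       # revolution time, about 2.1 microseconds

for h in (7, 21, 42, 84):
    print(f"h = {h:2d}: bucket spacing = {t_rev / h * 1e9:6.1f} ns")
# h = 21 gives ~100 ns buckets; the two later splittings in two (h = 42, then 84)
# bring the spacing down to the 25 ns the LHC needs.
```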

Figure 3: The new BCMS injection scheme.

The nominal beam
Until recently, the nominal scheme to obtain the LHC production beam has used batches of two PSB cycles injected into a single PS cycle. Six PSB bunches are injected into PS RF harmonic 7 (H7); the empty bucket is necessary to accommodate the rise times of the PS and SPS kickers. These six bunches are each longitudinally split into three to reach H21 (figure 2), then split in two, and again in two. This results in 72 bunches spaced by 25 ns.

The Batch Compression Merging and Splitting scheme
From the discussion of multi-turn injection into the PSB, it can be seen that to reduce the emittance it would be good to inject fewer turns into the PSB rings. So, instead of taking six PSB bunches into H7, the PS takes eight bunches into H9. The total intensity needed is then equalised between all eight slots available in the two PSB cycles. Accordingly, the injected intensity per ring is reduced. Therefore, a new scheme had to be invented by the PS RF team to obtain the required LHC beam parameters from eight bunches instead of six: the BCMS injection scheme.

First, a compression is performed by incrementing the harmonic number from H9 to H14. Then, a bunch merging puts the harmonic number back to seven. From this point, the RF gymnastics are similar to the nominal beam, with the bunches split in three (figure 3), then two and two again. The number of bunches produced is different from the normal scheme: eight bunches are merged into four, multiplied by three, two and two again. The result is 48 bunches spaced by 25 ns, which is less than the nominal 72 bunches. Therefore, the PS and SPS have to perform more cycles to fill the entire LHC, but the gain in transverse emittance leads to higher beam brightness.
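
As a sanity check of the bunch bookkeeping in the two schemes described above, here is a tiny sketch that uses only the numbers quoted in the text:

```python
# Bunch counting for the nominal and BCMS production schemes (numbers from the text).

def nominal_scheme():
    bunches = 6        # six PSB bunches injected into PS harmonic H7
    bunches *= 3       # triple splitting to reach H21
    bunches *= 2 * 2   # two successive splittings in two
    return bunches     # -> 72 bunches spaced by 25 ns

def bcms_scheme():
    bunches = 8        # eight PSB bunches injected into H9
    bunches //= 2      # batch compression and merging: 8 -> 4 bunches
    bunches *= 3       # triple splitting
    bunches *= 2 * 2   # two splittings in two
    return bunches     # -> 48 bunches spaced by 25 ns

print(nominal_scheme(), bcms_scheme())   # 72 48
```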

July 25, 2016 05:07 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 3
As explained in the previous installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers.

A disclaimer is useful here. Here it is:

read more

by Tommaso Dorigo at July 25, 2016 12:26 PM

Peter Coles - In the Dark

Return Trip

A quick post while I’m waiting in the hotel for transport back to Venice and thence to England. It’s been a short trip but a lot of fun, with a couple of interesting discussion sessions thrown in between the eating, drinking and hiking.

Unfortunately I twisted my knee near the start of yesterday’s hike. The limestone rocks around here can be very slippery! Since I’ve had a lot of problems with my knees over the years I decided to return to Base Camp. The friendly owners of the bar at the trailhead gave me a bag of ice to control the swelling and I was forced to spend the morning reclining on a sun lounger with my foot up rather than hiking.


This seemed to work and although I’m still getting a bit of a twinge I can get about reasonably well today.

Anyway, it’s back to Blighty this afternoon for the rest of what will be my last week at the University of Sussex.


by telescoper at July 25, 2016 08:30 AM

CERN Bulletin

CERN, an Invaluable Asset for Humanity – Interview with the Director-General, Fabiola Gianotti
Fabiola Gianotti is an Italian physicist and the first woman appointed by the CERN Council as the Director-General of the Laboratory. She took office on January 1st, 2016. The two Vice-Presidents of the Staff Association (SA) met with her to discuss the current affairs of the Organization.

Appointment as D-G and ambitions for CERN
As a former member of the personnel in the Physics department of CERN, Fabiola Gianotti has a long history with the Organization, and her vast in-house experience was likely a key factor in her appointment as Director-General. Indeed, her in-depth knowledge of CERN and its functioning, as well as of the current and future challenges of the Organization, are indispensable assets in guiding her ambitions for the Laboratory. Among her greatest aspirations for CERN, she names the increase of scientific excellence in the field of experimental research, the development of cutting-edge technologies, the education of younger generations, and collaborations with scientists from all over the world. To reach these ambitious goals, our Director must go beyond continuing current projects and prepare for the future of the Organization after the LHC, towards the horizon of 2035.

Personnel of CERN
Our Director-General considers the personnel the most valuable resource of the Organization. Without the competence and the commitment of the personnel in all professional categories, CERN would not be what it is today. “You can feel how genuinely passionate the members of the personnel are,” she says. The Director therefore finds it essential to keep the personnel motivated and to encourage their indispensable enthusiasm, which reflects a particular sense of attachment and belonging to the Organization.

2015 Five-Yearly Review: meeting the expectations of personnel regarding motivation and career structures
2016 is a crucial year for the implementation of the new career structure. For our Director, it is important that the CERN Management and the Staff Association work together to ensure a successful implementation, although progress should not stop there. Indeed, efforts must be made to inspire motivation among the personnel by working on important themes, such as internal mobility and the validation of skills acquired through experience (VAE, validation des acquis de l’expérience). Transfers between groups, units and departments must be made easier for the personnel. All too often, internal mobility is negatively perceived, when it should really be regarded as part of the prosperity of the Organization. To this end, we need a cultural change but also a thorough understanding of our needs and our resources.

External review of the Organization’s resources
The new Management has entrusted an external committee with the task of reviewing the resources of the Organization, especially in terms of human resources. The committee started its work in mid-April and should give an oral report at the turn of September–October 2016. The Director considers that it is always interesting to have an external analysis of the functioning of the Organization.

Contract Policy
Regarding the Contract Policy, the Director is convinced that CERN is severely constrained by the fixed quota of approximately 2 250 ‘staff members’. Indeed, the number of staff members has remained relatively unchanged over the past 10 years, whereas the number of associated members has continued to increase, which, however, is a positive development. When this concern is coupled with increasingly complex projects and a demographic issue, it can only be concluded that the situation is very difficult to manage for the existing personnel of CERN. It is thus important to keep the CERN Council well informed and involved in seeking ways to provide greater flexibility in managing and attributing contracts (LD-IC). The Director considers that it is the Council’s role to help CERN.

The Staff Association from the perspective of an employed member of the personnel (MPE)
As a former member of the personnel, the Director has always followed the activities and work of the Staff Association with great interest, be it from the social or the cultural point of view. For instance, the Nursery school “EVEE” of the Staff Association is a great benefit for the community of CERN, and as such a service that the Director supports. It is therefore absolutely necessary that the Nursery school remains sustainable and even expands in the future. Moreover, the exhibitions, conferences, clubs and other activities organized by the Staff Association are of great importance to the Laboratory. They provide opportunities for the personnel to exchange, to share and to integrate.

Contractor’s personnel (ENT) and social conditions
It is crucial to ensure that the selection criteria for contractors are not only economic, but also consistent with sufficient social and financial conditions.

CERN – Center for innovations
When asked about her vision on innovation, the Director declares: “My dream is to put CERN at the forefront of advancements, not only in physics, but also in terms of environment, education and collaborations. One topic that I am particularly concerned about is the situation of people with disabilities. Giving equal opportunities to everyone is important, and it is thus indispensable to ensure that appropriate infrastructure is in place. Lastly, it is crucial that we keep looking for opportunities for improvement, so that CERN can be an example in the world.”

Security and risk of attacks
Necessary measures must be taken to guarantee the safety of people present on the site of the Organization. Strengthening the security control system at the entry points to the CERN site is part of the actions already taken, along with the review of our fences, the installation of surveillance cameras and the compulsory wearing of identification cards on site. Nevertheless, CERN must remain a Laboratory open for collaborations and for sharing knowledge and expertise.

Closing remarks
Fabiola Gianotti sees CERN as an invaluable asset for humanity, an organization which must remain an example for present and future generations. The Director and the Vice-Presidents of the Staff Association concluded the interview with a promise to revisit important issues before the end of the year. The Vice-Presidents, on behalf of the Staff Association, would like to take this opportunity to thank Fabiola Gianotti for the cordial welcome that they have received.

by Staff Association at July 25, 2016 08:08 AM

CERN Bulletin

Collection for Ecuador
Following the terrible earthquake in Ecuador on April 16th, 2016, a collection was organized at CERN and the proceeds were sent to the INEPE Institute in Quito to help the victims. CERN has received the following two letters that we want to share with you. We wish our Ecuadorian friends a prompt recovery and keep them in our thoughts!

Dear Fabiola and Alessandro,

[…] As the CERN contact person in Ecuador, and as the Country Representative for Ecuador in the CMS Experiment, I would like to thank you, the offices you precede and, through you, the whole CERN community, for all the concern and generosity after the terrible earthquake on the coast of Ecuador. Rather slowly, but full of hope, our people are overcoming this difficult situation. Contributions from different sources abroad, like CERN's, combined with the rapid and generous local intervention, have greatly helped the affected communities cope with the most urgent needs in order to restart their lives. Once again, I express my most sincere gratitude. I am sure I echo the sentiment of all the Ecuadorian nationals who are associated with CERN and, humbly, all my compatriots as well.

With best regards,
Edgar Carrera

Dear Gianni and Marcel:

We send you our affectionate and fraternal greeting. I am pleased to confirm that INEPE's bank account has received 10,183.30 USD, corresponding to the CHF 10,000 that you sent on behalf of the CERN personnel and its generous solidarity fundraising. This money will be used to give educational material to the children who began the school year on the coast of Ecuador 40 days ago, many of whom have been living in temporary shelters since they lost their humble homes. We will send you a report and photographs of how this fund will be used. Thank you for your brotherhood and generous support.

Patricio Raza

by Staff Association at July 25, 2016 07:41 AM

CERN Bulletin

Offers for our members
Summer is here, enjoy our offers for the aquatic parks!

Walibi:
– “Zone terrestre” tickets: 23 € instead of 29 €.
– Access to Aqualibi: 5 € instead of 6 € on presentation of your SA member ticket.
– Free for children (3-11 years old) before 12:00 p.m.
– Free for children under 3, with limited access to the attractions.
– Car park free.

Aquaparc, day ticket:
– Children: 33 CHF instead of 39 CHF
– Adults: 33 CHF instead of 49 CHF
Bonus! Free for children under 5.

by Staff Association at July 25, 2016 07:35 AM

July 24, 2016

Christian P. Robert - xi'an's og

common derivation for Metropolis–Hastings and other MCMC algorithms

Khoa Tran and Robert Kohn from UNSW just arXived a paper on a comprehensive derivation of a large range of MCMC algorithms, beyond Metropolis-Hastings. The idea is to decompose the MCMC move into

  1. a random completion of the current value θ into V;
  2. a deterministic move T from (θ,V) to (ξ,W), where only ξ matters.

If this sounds like a new version of Peter Green’s completion at the core of his 1995 RJMCMC algorithm, it is because it is indeed essentially the same notion. The resort to this completion allows for a standard form of the Metropolis-Hastings algorithm, which leads to the correct stationary distribution if T is self-inverse. This representation covers Metropolis-Hastings algorithms, Gibbs sampling, Metropolis-within-Gibbs and auxiliary variables methods, slice sampling, recursive proposals, directional sampling, Langevin and Hamiltonian Monte Carlo, NUTS sampling, pseudo-marginal Metropolis-Hastings algorithms, and pseudo-marginal Hamiltonian Monte Carlo, as discussed by the authors. Given this representation of the Markov chain through a random transform, I wonder if Peter Glynn’s trick mentioned in the previous post on retrospective Monte Carlo applies in this generic setting (as it could considerably improve convergence…)
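
As an illustration of the completion-plus-deterministic-move picture (my own toy sketch, not code from the paper), random-walk Metropolis for a standard normal target can be written with the self-inverse map T(θ, v) = (θ + v, −v):

```python
# Toy illustration of the "random completion + deterministic self-inverse move"
# representation: random-walk Metropolis targeting a standard normal.
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    return -0.5 * theta**2                # N(0,1), up to an additive constant

def step(theta, scale=1.0):
    v = rng.normal(0.0, scale)            # 1. random completion of theta into (theta, v)
    xi, w = theta + v, -v                 # 2. deterministic move T, with T(T(.)) = identity
    # MH ratio target(xi)*q(w) / (target(theta)*q(v)); the Gaussian completion
    # density is symmetric, q(w) = q(v), so it cancels here.
    return xi if np.log(rng.uniform()) < log_target(xi) - log_target(theta) else theta

theta, chain = 0.0, []
for _ in range(20_000):
    theta = step(theta)
    chain.append(theta)
print(np.mean(chain), np.var(chain))      # should be close to 0 and 1
```

Gibbs, slice or Hamiltonian moves fit the same two-step template, with different completions V and different self-inverse maps T.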


Filed under: Books, pictures, Statistics, Travel, University life Tagged: auxiliary variables, directional sampling, Gibbs sampling, Hamiltonian Monte Carlo, Metropolis-Hastings algorithms, Metropolis-within-Gibbs algorithm, NUTS, pseudo-marginal MCMC, recursive proposals, RJMCMC, slice sampling, Sydney, UNSW

by xi'an at July 24, 2016 10:16 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 2
As explained in the previous installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers.
By the way, when I wrote the first question yesterday I thought I would not need to explain it in detail, but it just occurred to me that a disclaimer would be useful. Here it is:

read more

by Tommaso Dorigo at July 24, 2016 12:46 PM

July 23, 2016

Christian P. Robert - xi'an's og

on Monte Rosa [a failed attempt at speed climbing]

With my daughter Rachel and her friend Clément, we tried last week to bag a few summits in the Monte Rosa massif, which stands between Italy (Aosta) and Switzerland (Zermatt). I wanted to take advantage of the Bastille Day break and we drove from Paris to Aosta in the very early morning, stopping in Chamonix to rent shoes and crampons, and meeting with our guide Abele Blanc at noon, before going together to the hut Rifugio Città di Mantova. At 3500m. Our goal was to spend the night there and climb to Punta Gnifetti (Rifugio Margherita) and Zumstein the next morning. Before heading back to Paris in the evening. However, it did not work out that way as I got a slight bout of mountain sickness that left me migrainous, nauseous, and having a pretty bad night, despite great conditions at the hut… So (despite my intense training of the previous weeks!) I did not feel that great when we left the hut at 5am. The weather was fine if cold and windy, but after two hours of moderate climbing in the fairly pleasant crispy snow of a glacier, Rachel was too out of breath to continue and Abele realised my nose had [truly] frozen (I could not feel anything!) and took us down before continuing with Clément to both peaks. This was quite a disappointment as we had planned this trip over several months, but it was clearly for the best as my fingers were definitely close to frozen (with my worst case ever of screamin’ barfies on the way down!). And we thus spent the rest of the morning waiting for our friends, warming up with tea in the sunshine. Upon reflection, planning one extra day of acclimatisation to altitude and cold would have been more reasonable, and keeping handwarmers in our backpacks as well… In any case, Clément made it to the top with Abele and we got a good altitude training for the incoming San Francisco half-marathon. Plus an epic hike the next day around Cogne.


Filed under: Kids, Mountains, pictures, Running, Travel Tagged: Abele Blanc, Alps, Aosta, Chamonix, Italy, Mont Blanc, Monte Rosa, Punta Gnifetti, Rifugio Città di Mantova, Rifugio Margherita, screaming barfies, sunrise

by xi'an at July 23, 2016 10:16 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 1
Today I wish to start a series of posts that are supposed to help my younger colleagues who will, in two months from now, compete for a position as INFN research scientists. 
The INFN has opened 73 new positions and the selection includes two written exams besides an evaluation of titles and an oral colloquium. The rules also say that the candidates will have to pass the written exams with a score of at least 140/200 on each, in order to access the oral colloquium. Of course, no information is given on how the tests will be graded, so 140 over 200 does not really mean much at this point.

read more

by Tommaso Dorigo at July 23, 2016 04:31 PM

July 22, 2016

astrobites - astro-ph reader's digest

UR #19: Outburst from an X-ray Binary

The undergrad research series is where we feature the research that you’re doing. If you’ve missed the previous installments, you can find them under the “Undergraduate Research” category here.

Are you doing an REU this summer? Were you working on an astro research project during this past school year? If you, too, have been working on a project that you want to share, we want to hear from you! Think you’re up to the challenge of describing your research carefully and clearly to a broad audience, in only one paragraph? Then send us a summary of it!

You can share what you’re doing by clicking here and using the form provided to submit a brief (fewer than 200 words) write-up of your work. The target audience is one familiar with astrophysics but not necessarily your specific subfield, so write clearly and try to avoid jargon. Feel free to also include either a visual regarding your research or else a photo of yourself.

We look forward to hearing from you!

************

Cormac Larkin
University College Cork

Cormac is entering his final year of high school in Cork, in the south of Ireland. Last summer he completed an internship in the physics department at University College Cork, where he worked on the following research project. Cormac is now the Managing Editor of the Young Scientists Journal and a Research Collaborator with Armagh Observatory working on data mining in the Small Magellanic Cloud in the search for new O-stars.

V-Band Photometry in V404 Cygni

V404 Cygni is a low-mass X-ray binary in the constellation Cygnus. The two stars comprising this system are an accretor (a black hole candidate or neutron star) and a donor star (a low-mass late-type star). The accretor grows by accumulating matter from the donor star, and periodic outbursts of X-rays occur as mass is transferred from the donor to the accretor. The system underwent a period of outburst beginning on June 15th 2015. I performed V-band photometry on the system in August to attempt to ascertain whether it had returned to quiescence or not. I used the McDonald 1m telescope in Texas, owned and operated by the Las Cumbres Observatory Global Telescope Network; my observation time was awarded to me by the Faulkes Telescope Network. Using the Aperture Photometry Tool, I found the V magnitude on August 12th to be 17.24, which was lower (and thus brighter) than the quiescent V magnitude averaging 18.3-18.4, but higher (and thus dimmer) than the peak V magnitude of 12.1. From the data I obtained, the system appeared to be still active, but was dimmer than at peak activity. From this, I inferred that activity in V404 Cygni was dissipating but had not yet returned to quiescent levels. This work was presented in poster format at both the Irish National Astronomy Meeting 2015 and the Young Scientists Journal Conference 2015, where it came in 3rd place overall.
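
For readers who want to translate those V magnitudes into brightness ratios, here is the standard conversion (an editorial illustration, not part of the original write-up):

```python
# Standard photometry arithmetic: a difference of delta_m magnitudes corresponds
# to a flux ratio of 10**(0.4 * delta_m).
def flux_ratio(m_fainter, m_brighter):
    return 10 ** (0.4 * (m_fainter - m_brighter))

print(flux_ratio(18.35, 17.24))   # ~2.8: on August 12th the system was ~3x brighter than quiescence
print(flux_ratio(17.24, 12.10))   # ~110: but still ~100x fainter than the outburst peak
```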

The combined 5×60 second image used to measure the magnitude of V404 Cygni on August 12th 2015.

by Astrobites at July 22, 2016 03:07 PM

John Baez - Azimuth

Topological Crystals (Part 1)

A while back, we started talking about crystals:

• John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

• Tosio Sunada, Crystals that nature might miss creating, Notices of the AMS 55 (2008), 208–215.

I got so interested that I wrote this paper about it, with massive help from Greg Egan:

• John Baez, Topological crystals.

I’ll explain the basic ideas in a series of posts here.

First, a few personal words.

I feel a bit guilty putting so much work into this paper when I should be developing network theory to the point where it does our planet some good. I seem to need a certain amount of beautiful pure math to stay sane. But this project did at least teach me a lot about the topology of graphs.

For those not in the know, applying homology theory to graphs might sound fancy and interesting. For people who have studied a reasonable amount of topology, it probably sounds easy and boring. The first homology of a graph of genus g is a free abelian group on g generators: it’s a complete invariant of connected graphs up to homotopy equivalence. Case closed!

But there’s actually more to it, because studying graphs up to homotopy equivalence kills most of the fun. When we’re studying networks in real life we need a more refined outlook on graphs. So some aspects of this project might pay off, someday, in ways that have nothing to do with crystallography. But right now I’ll just talk about it as a fun self-contained set of puzzles.

I’ll start by quickly sketching how to construct topological crystals, and illustrate it with the example of graphene, a 2-dimensional form of carbon:

I’ll precisely state our biggest result, which says when this construction gives a crystal where the atoms don’t bump into each other and the bonds between atoms don’t cross each other. Later I may come back and add detail, but for now you can find details in our paper.

Constructing topological crystals

The ‘maximal abelian cover’ of a graph plays a key role in Sunada’s work on topological crystallography. Just as the universal cover of a connected graph X has the fundamental group \pi_1(X) as its group of deck transformations, the maximal abelian cover, denoted \overline{X}, has the abelianization of \pi_1(X) as its group of deck transformations. It thus covers every other connected cover of X whose group of deck transformations is abelian. Since the abelianization of \pi_1(X) is the first homology group H_1(X,\mathbb{Z}), there is a close connection between the maximal abelian cover and homology theory.

In our paper, Greg and I prove that for a large class of graphs, the maximal abelian cover can naturally be embedded in the vector space H_1(X,\mathbb{R}). We call this embedded copy of \overline{X} a ‘topological crystal’. The symmetries of the original graph can be lifted to symmetries of its topological crystal, but the topological crystal also has an n-dimensional lattice of translational symmetries. In 2- and 3-dimensional examples, the topological crystal can serve as the blueprint for an actual crystal, with atoms at the vertices and bonds along the edges.

The general construction of topological crystals was developed by Kotani and Sunada, and later by Eon. Sunada uses ‘topological crystal’ for an even more general concept, but we only need a special case.

Here’s how it works. We start with a graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges. There is a boundary operator

\partial \colon C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

This is the linear operator sending any edge to the difference of its two endpoints. The kernel of this operator is called the space of 1-cycles, Z_1(X,\mathbb{R}). There is an inner product on the space of 1-chains such that edges form an orthonormal basis. This determines an orthogonal projection

\pi \colon C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

For a graph, Z_1(X,\mathbb{R}) is isomorphic to the first homology group H_1(X,\mathbb{R}). So, to obtain the topological crystal of X, we need only embed its maximal abelian cover \overline{X} in Z_1(X,\mathbb{R}). We do this by embedding \overline{X} in C_1(X,\mathbb{R}) and then projecting it down via \pi.
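
For a concrete feel of this projection, here is a small numerical sketch (my own, under the conventions above, and assuming the example graph used below is the "theta graph": two vertices joined by three parallel edges, so that C_1(X,\mathbb{R}) is 3-dimensional and Z_1(X,\mathbb{R}) is a plane).

```python
# A small numerical sketch of the projection pi onto the space of 1-cycles,
# for the theta graph: two vertices joined by three parallel edges.
import numpy as np

# Boundary operator sending each edge to (head - tail); all three edges are
# oriented from vertex 0 to vertex 1, so every column is the same.
D = np.array([[-1.0, -1.0, -1.0],
              [ 1.0,  1.0,  1.0]])        # shape: (vertices, edges)

# Orthogonal projector onto Z_1 = ker(D), via the Moore-Penrose pseudoinverse.
P = np.eye(3) - np.linalg.pinv(D) @ D
print(P)                                  # equals I - J/3, with J the all-ones matrix

# The 1-chain of the path consisting of the first edge alone projects to
# (2/3, -1/3, -1/3): one vertex of the hexagonal (graphene) pattern below.
c_gamma = np.array([1.0, 0.0, 0.0])
print(P @ c_gamma)
```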

To accomplish this, we need to fix a basepoint for X. Each path \gamma in X starting at this basepoint determines a 1-chain c_\gamma. These 1-chains correspond to the vertices of \overline{X}. The graph \overline{X} has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. This edge is a straight line segment from the point c_\gamma to the point c_{\gamma'}.

The hard part is checking that the projection \pi maps this copy of \overline{X} into Z_1(X,\mathbb{R}) in a one-to-one manner. In Theorems 6 and 7 of our paper we prove that this happens precisely when the graph X has no ‘bridges’: that is, edges whose removal would disconnect X.

Kotani and Sunada noted that this condition is necessary. That’s actually pretty easy to see. The challenge was to show that it’s sufficient! For this, our main technical tool is Lemma 5, which for any path \gamma decomposes the 1-chain c_\gamma into manageable pieces.

We call the resulting copy of \overline{X} embedded in Z_1(X,\mathbb{R}) a topological crystal.

Let’s see how it works in an example!

Take X to be this graph:

Since X has 3 edges, the space of 1-chains is 3-dimensional. Since X has 2 holes, the space of 1-cycles is a 2-dimensional plane in this 3-dimensional space. If we consider paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we obtain the following picture:

Here the 1-chains c_\gamma are the white and red dots. These are the vertices of \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we obtain the topological crystal for X. The blue dots come from projecting the white dots onto the plane of 1-cycles, while the red dots already lie on this plane. The resulting topological crystal provides the pattern for graphene:

That’s all there is to the basic idea! But there’s a lot more to say about it, and a lot of fun examples to look at: diamonds, triamonds, hyperquartz and more.


by John Baez at July 22, 2016 08:08 AM

July 21, 2016

Emily Lakdawalla - The Planetary Society Blog

The Planetary Society at San Diego Comic-Con (UPDATED with video!)
Whether or not you're attending San Diego Comic-Con, you can enjoy a discussion panel with Emily Lakdawalla and five science fiction authors about the future of science fiction in the context of today's amazing scientific advances.

July 21, 2016 11:04 PM

astrobites - astro-ph reader's digest

Mass Loss in Dying Stars

Title: Pulsation-Triggered Mass Loss From AGB Stars: The 60-Day Critical Period

Authors: Iain McDonald and Albert Zijlstra

First Author’s Institution: Jodrell Bank Centre for Astrophysics

Status: Accepted to ApJ Letters

Background

Perhaps you’ve heard that four billion years from now, the Sun will grow into a red giant with a radius the size of Earth’s orbit before eventually shrinking into a white dwarf about the size of Earth itself. Besides being very small, the resulting white dwarf will probably only have half of the original mass of the Sun. Where does that lost mass go?

Figure 1: An HR Diagram showing the main sequence, red giant branch, horizontal branch, and asymptotic giant branch. The horizontal axis indicates the temperature, while the vertical axis indicates the luminosity. The arrow traces out the path the star would take after leaving the main sequence. From http://www.astronomy.ohio-state.edu/~pogge/ .

During a star’s post-main-sequence (MS) evolution, it will lose much of its starting mass through stellar winds. Currently, the Sun is constantly losing mass through the solar wind—material that is being ejected from its surface—but when the Sun leaves the MS and reaches the red giant branch (RGB), these winds will become even stronger. After the end of the RGB phase, the Sun will continue to evolve until it reaches the asymptotic giant branch (AGB)—so named because it will then asymptotically approach the same location on the Hertzsprung-Russell diagram that it occupied as an RGB star (see Figure 1 for an example). AGB stars have even stronger stellar winds, meaning they are losing mass at an even more rapid rate than RGB stars. It is thought that much of a star’s mass loss happens while it is on the RGB and AGB. In addition, all of this excess material being blown off the star means that AGB stars are often surrounded by a lot of dust.

Exactly what really drives this process, however, is not something that we understand very well. Today’s astrobite discusses some of the possible mechanisms for stellar mass loss in AGB stars, particularly the role that pulsation plays in mass loss.

Stars can pulsate in a variety of different pulsational modes. The fundamental mode is probably what you imagine when you think of stellar pulsation—all of the star is moving radially in the same direction. However, if the star has radial nodes, different parts of the star move in different directions at the same time (sort of like the nodes of a pipe). We call these pulsational modes overtone modes, and the order of the overtone mode (first, second, third, etc.) tells you the number of nodes that exist in the star.

Mass Loss above the 60-Day Critical Pulsational Period

Figure 2: Figure 1 from the paper, which shows the dust excess (given by K-[22] color) on the vertical axis plotted against period in days on the horizontal axis. The dotted horizontal line marks the authors’ criterion for ‘substantial dust excess’. The red circles show period data taken from Tabur (2009), the green squares from the International Variable Star Index, and the blue triangles from the General Catalogue of Variable Stars. Smaller light blue triangles indicate the stars for which they had GCVS data, but could not detect with Hipparcos. Starting at a period of 60 days, there is an increased number of stars with greater dust excess than their criterion. There is another increase at about 300 days.

Most previous studies of the effects of pulsation on mass loss have focused on stars with pulsational periods greater than 300 days, because both observation and theory have shown that to be where stars have the greatest dust production and highest mass-loss rates. However, a less-studied 60-day ‘critical period’ in the increase of dust production has also been noted.

Mass loss in RGB and AGB stars seems to increase at a period of 60 days. Both RGB stars and AGB stars can pulsate (in fact, there is evidence that all stars pulsate…if only we could study them well enough to see it), but the authors find that, despite inhabiting roughly the same area on the HR diagram, the 60-day-period stars with strong mass loss appear to be only AGB stars and not RGB stars. This 60-day period also happens to correspond roughly with the point at which AGB stars transition from second- and third-overtone pulsation to the first-overtone pulsation mode. Additional nodes result in a lower pulsational amplitude (a smaller change in brightness and radius over one period), so AGB stars have bigger amplitudes at this point. RGB stars seem to pulsate only in the second and third overtone modes, which most likely explains why they produce so much less dust and experience less mass loss at the same period than their AGB counterparts.


Figure 3: Part of Figure 2 from the paper, showing amplitude in the V-band plotted against period. In both subplots, the darker colored circles are stars with substantial dust excess, and the lighter colored circles are stars without substantial dust excess. This suggests that greater dust excess corresponds with greater amplitude; greater amplitude also usually indicates fewer radial nodes. The 60- and 300-day increases in dust production are also visible in both plots.

Figure 2 shows the infrared excess, which the authors use as a proxy for the amount of dust the star is producing, as a function of pulsational period. From this figure, we can see that at periods longer than 60 days there are more stars producing dust above the criterion for substantial dust excess. Figure 3 shows period-amplitude diagrams, where the pulsational amplitude is plotted against the pulsational period (and the amplitude hints at the mode of pulsation). From this diagram, we can see that the stars with less dust production also appear to have lower pulsational amplitudes. Together, these results support the hypothesis that the pulsational mode plays a critical role in producing dust and driving mass loss. They also confirm the increase in mass loss at 300 days, which roughly corresponds to stars transitioning from the first overtone pulsation to the fundamental mode.

Conclusion

So what’s next? Well, as you might expect, the follow-up to science is usually…more science! The authors point out that further study will be necessary in order to get conclusive evidence for exactly what role this critical period serves and how the pulsational mode affects it. Is it really a change in the stellar mass-loss rate, or is the stellar wind pre-existing, with the 60-day period just coinciding with an increase in dust condensation? Similar studies focusing on stars with different metallicities will also be a good check of whether these critical periods are universal.

by Caroline Huang at July 21, 2016 01:08 PM

Symmetrybreaking - Fermilab/SLAC

Dark matter evades most sensitive detector

In its final run, the LUX experiment increased its sensitivity four-fold, but dark matter remains elusive. 

After the experiment’s final run, scientists on the Large Underground Xenon (LUX) experiment announced they have found no trace of dark matter particles.

The new data, which were collected over more than 300 days from October 2014 to May 2016, improved the experiment’s previous sensitivity four-fold.

“We built an experiment that has delivered world-leading sensitivity in multiple new results over the last three years,” says Brown University’s Rick Gaitskell, co-spokesperson for the LUX collaboration. “We gave dark matter every opportunity to show up in our experiment, but it chose not to.”

Although the LUX scientists haven’t found WIMPs, their results allow them to exclude many theoretical models for what these particles could have been, narrowing down future dark matter searches with other experiments.

“I’m very proud of what we’ve accomplished,” says LUX co-founder Tom Shutt from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “The experiment performed even better than initially planned, and we set a new standard as to how well we can take measurements, calibrate the detector and determine its background signals.”

Scientists have yet to directly detect dark matter, but they have seen indirect evidence of its existence in astronomical studies.

Located one mile underground at the Sanford Underground Research Facility in South Dakota, LUX had been searching since 2012 for what are called weakly interacting massive particles, or WIMPs. These hypothetical particles are top contenders to be the building blocks of dark matter, but their existence has yet to be demonstrated.

WIMPs are believed to barely interact with normal matter other than through gravity. However, researchers had hoped to detect their rare collisions with LUX’s detector material—a third of a ton of liquid xenon.

With the latest gain in sensitivity, LUX has “enabled us to probe dark matter candidates that would produce signals of only a few events per century in a kilogram of xenon,” says Aaron Manalaysay, the analysis working group coordinator of the LUX experiment from the University of California, Davis. Manalaysay presented the new results today at IDM2016, an international dark matter conference in Sheffield in the UK.
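
To put "a few events per century in a kilogram of xenon" in perspective, here is a rough, purely illustrative conversion (editorial arithmetic, not a number from the collaboration):

```python
# Rough conversion of the quoted sensitivity into events expected in a run
# comparable to the one described above ("a few" is taken loosely as 3).
events_per_century_per_kg = 3.0
rate_per_kg_day = events_per_century_per_kg / (100 * 365.25)
target_mass_kg = 1000.0 / 3.0        # "a third of a ton of liquid xenon"
run_days = 300                       # "more than 300 days" of data

print(rate_per_kg_day)                               # ~8e-5 events per kg per day
print(rate_per_kg_day * target_mass_kg * run_days)   # only ~8 events in the whole run
```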

After LUX was first proposed in 2007, it became an R&D activity with limited funding and only a handful of participating groups. Over the years it grew from a detector that included parts bought for a few bucks on eBay into a major project involving researchers from 20 universities and national labs in the US, the UK and Portugal.

Over the next months, the LUX experiment will be decommissioned to make room for its successor experiment. The next-generation LUX-ZEPLIN (LZ) detector will use 10 tons of liquid xenon and will be 100 times more sensitive to WIMPs.

“LZ is based on lessons learned from LUX,” Shutt says. “It has been a great advantage to have LUX collect data while designing the new experiment, and some of LZ’s new features are enabled through our experience with LUX.”

Once LZ turns on in 2020, researchers will have another big shot at finding mysterious WIMPs.

by Manuel Gnida at July 21, 2016 01:00 PM

July 20, 2016

Emily Lakdawalla - The Planetary Society Blog

Multimedia recap: Two launches, a landing, a docking, and a berthing
Four days of cargo craft mania came to a close at the International Space Station this morning, as astronauts Kate Rubins and Jeff Williams snagged an approaching SpaceX Dragon vehicle and berthed it to the laboratory's Harmony module.

July 20, 2016 05:35 PM

July 19, 2016

Lubos Motl - string vacua and pheno

CMS in \(ZZ\) channel: 3-4 sigma evidence in favor of a \(650\GeV\) boson
Today, the CMS collaboration has revealed one of the strongest deviations from the Standard Model in quite some time in the paper
Search for diboson resonances in the semileptonic \(X \to ZV \to \ell^+\ell^- q\bar q\) final state at \(\sqrt{s} = 13\TeV\) with CMS
On page 21, Figure 12, you see the Brazilian charts.




In the channel where a resonance decays to a \(ZZ\) pair and one \(Z\) decays to a quark-antiquark pair and the other \(Z\)-boson to a lepton-antilepton pair (semileptonic decays), CMS folks used two different methods to search for low-mass and high-mass particles.




In the high-mass search – which contributed to the bottom part of Figure 12 – they saw a locally 2-sigma excess indicating a resonance around \(1000\GeV\), possibly compatible with the new \(\gamma\gamma\) resonance near \(975\GeV\) that appeared in some new rumors about the 2016 data.

More impressively, the low-mass search revealed a locally 3.4 or 3.9 sigma excess in the search for a Randall-Sundrum or "bulk graviton" (I won't explain the differences between the two models because I don't know the details and I don't think one should take a particular interpretation too seriously) of mass \(650\GeV\). The "bulk graviton" excess is the stronger one.

Even when the look-elsewhere effect (over the \(550-1400\GeV\) range) is taken into account, as conclusions on Page 22 point out, the deviation is still 2.9 or 3.5 sigma, respectively. That's pretty strong.
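For readers who want to translate these "sigmas" into probabilities, here is a minimal sketch (my own illustration, not part of the CMS analysis): it converts a local significance into a one-sided Gaussian tail probability and shows how a crude Bonferroni-style trials factor, standing in for the look-elsewhere effect, deflates it. The number of mass windows below is purely hypothetical.

```python
from scipy.stats import norm

def sigma_to_pvalue(z):
    """One-sided tail probability of a z-sigma Gaussian fluctuation."""
    return norm.sf(z)

def pvalue_to_sigma(p):
    """Significance corresponding to a one-sided p-value."""
    return norm.isf(p)

for z in (2.9, 3.4, 3.5, 3.9):
    print(f"{z:.1f} sigma  ->  p = {sigma_to_pvalue(z):.1e}")

# Crude look-elsewhere illustration: if roughly N independent mass windows are
# searched, the global p-value is inflated by about a factor of N (Bonferroni).
N_windows = 20                       # hypothetical number, purely illustrative
p_global = min(1.0, N_windows * sigma_to_pvalue(3.9))
print("global:", round(pvalue_to_sigma(p_global), 1), "sigma")
```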

We may wait to see whether ATLAS observes something similar. (Update: In the comments, a paper with a disappointing 2-sigma deficit at that mass is pointed out instead.) CMS has only used 2.7 inverse femtobarns of the 2015 data in this analysis. Obviously, if the excess at \(650\GeV\) were real, the particle could already be safely discovered in the roughly 18 inverse femtobarns that CMS alone has collected in 2015-2016.

Concerning the \(650\GeV\) mass, in 2014, CMS also saw a 2.5-sigma hint of a leptoquark of that mass (more on those particles). Note that the leptoquarks carry very different charges (quantum numbers) than the "bulk graviton".

The previous CMS papers on the same channel but based on the 2012 data were these two: low-mass, high-mass strategies. I think that there was no evidence in favor of a similar hypothesis in those older papers. That also seems to be true for the analogous ATLAS paper using the 2012 data.

by Luboš Motl (noreply@blogger.com) at July 19, 2016 03:16 PM

July 18, 2016

Symmetrybreaking - Fermilab/SLAC

Pokémon Go shakes up the lab routine

At Fermilab and CERN, students, lab employees and visitors alike are on the hunt for virtual creatures.

At Fermi National Accelerator Laboratory near Chicago, the normal motions of people going about their days have shifted.

People who parked their cars in the same spot for years have moved. People are rerouting their paths through the buildings of the laboratory campus and striking off to explore new locations. They can be seen on lunch breaks hovering around lab landmarks, alone or in small clumps, flicking their fingers across their smartphones.

The augmented reality phenomenon of Pokémon Go has made its way into the world of high-energy particle physics. Based on the Nintendo franchise that launched in the ’90s, Pokémon Go sends players exploring their surrounding areas in the real world, trying to catch as many of the virtual creatures as possible.

Not only is the game affecting the movements of lab regulars, it’s also brought new people to the site, says Beau Harrison, an accelerator operator and a member of the game’s blue team. “People were coming on their bicycles to get their Pokémon here.”

At Fermilab, the three teams of the Pokémon universe—red, yellow and blue—compete for command of Fermilab’s several virtual gyms, places people battle their Pokémon to boost their strength or simply display team dominance.

“It’s kind of fun playing with everyone here,” says Bobby Santucci, another operator at the lab, who is on team red. “It’s not so much about the game. It’s more like messing with each other.”

In the few days the game has been out, the gyms at Fermilab have repeatedly tossed out one team for another: blue, then red, then blue, then red, then briefly yellow, then blue and then red again.

The game was not released in many European countries until the past weekend. But Elizabeth Kennedy, a graduate student from UC Riverside who is working at CERN, says that even before that you could identify Pokémon Go players among the people at the laboratory on the border of Switzerland and France, based on the routes they walked.

“The Americans are all playing,” she says. “It’s easy to tell who else is playing when you see other people congregating around places.”

The majority of the players at Fermilab seem to be college students and younger employees, but players of all ages can be spotted roaming the labs.

Bonnie King, a system administrator at the lab and a member of team blue, says that on one of her Pokémon-steered nature walks at Fermilab, she encountered a group of preteens. She had never been on that particular trail, and she wondered whether this was a first for the visitors, too. They noticed her playing and asked her if she was taking the gym there.

“Yeah, I am,” she replied, rising to the challenge.

King dropped off her top contender, a drooling, fungal-looking blue monster called Gloom, to help team blue keep its position of power. But eventually the red team toppled blue to reclaim the gym.

The battle for Fermilab rages on.

by Molly Olmstead at July 18, 2016 04:03 PM

Sean Carroll - Preposterous Universe

Space Emerging from Quantum Mechanics

The other day I was amused to find a quote from Einstein, in 1936, about how hard it would be to quantize gravity: “like an attempt to breathe in empty space.” Eight decades later, I think we can still agree that it’s hard.

So here is a possibility worth considering: rather than quantizing gravity, maybe we should try to gravitize quantum mechanics. Or, more accurately but less evocatively, “find gravity inside quantum mechanics.” Rather than starting with some essentially classical view of gravity and “quantizing” it, we might imagine starting with a quantum view of reality from the start, and find the ordinary three-dimensional space in which we live somehow emerging from quantum information. That’s the project that ChunJun (Charles) Cao, Spyridon (Spiros) Michalakis, and I take a few tentative steps toward in a new paper.

We human beings, even those who have been studying quantum mechanics for a long time, still think in terms of classical concepts. Positions, momenta, particles, fields, space itself. Quantum mechanics tells a different story. The quantum state of the universe is not a collection of things distributed through space, but something called a wave function. The wave function gives us a way of calculating the outcomes of measurements: whenever we measure an observable quantity like the position or momentum or spin of a particle, the wave function has a value for every possible outcome, and the probability of obtaining that outcome is given by the wave function squared. Indeed, that’s typically how we construct wave functions in practice. Start with some classical-sounding notion like “the position of a particle” or “the amplitude of a field,” and to each possible value we attach a complex number. The squared magnitude of that complex number gives us the probability of observing the system with that value.
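In symbols, this is just the standard Born rule for, say, a particle's position:

\[
P(x) = |\psi(x)|^{2}, \qquad \int |\psi(x)|^{2}\, dx = 1,
\]

so the complex amplitude attached to each possible position yields a properly normalized probability distribution.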

Mathematically, wave functions are elements of a mathematical structure called Hilbert space. That means they are vectors — we can add quantum states together (the origin of superpositions in quantum mechanics) and calculate the angle (“dot product”) between them. (We’re skipping over some technicalities here, especially regarding complex numbers — see e.g. The Theoretical Minimum for more.) The word “space” in “Hilbert space” doesn’t mean the good old three-dimensional space we walk through every day, or even the four-dimensional spacetime of relativity. It’s just math-speak for “a collection of things,” in this case “possible quantum states of the universe.”

Hilbert space is quite an abstract thing, which can seem at times pretty removed from the tangible phenomena of our everyday lives. This leads some people to wonder whether we need to supplement ordinary quantum mechanics by additional new variables, or alternatively to imagine that wave functions reflect our knowledge of the world, rather than being representations of reality. For purposes of this post I’ll take the straightforward view that quantum mechanics says that the real world is best described by a wave function, an element of Hilbert space, evolving through time. (Of course time could be emergent too … something for another day.)

Here’s the thing: we can construct a Hilbert space by starting with a classical idea like “all possible positions of a particle” and attaching a complex number to each value, obtaining a wave function. All the conceivable wave functions of that form constitute the Hilbert space we’re interested in. But we don’t have to do it that way. As Einstein might have said, God doesn’t do it that way. Once we make wave functions by quantizing some classical system, we have states that live in Hilbert space. At this point it essentially doesn’t matter where we came from; now we’re in Hilbert space and we’ve left our classical starting point behind. Indeed, it’s well-known that very different classical theories lead to the same theory when we quantize them, and likewise some quantum theories don’t have classical predecessors at all.

The real world simply is quantum-mechanical from the start; it’s not a quantization of some classical system. The universe is described by an element of Hilbert space. All of our usual classical notions should be derived from that, not the other way around. Even space itself. We think of the space through which we move as one of the most basic and irreducible constituents of the real world, but it might be better thought of as an approximate notion that emerges at large distances and low energies.

So here is the task we set for ourselves: start with a quantum state in Hilbert space. Not a random or generic state, admittedly; a particular kind of state. Divide Hilbert space up into pieces — technically, factors that we multiply together to make the whole space. Use quantum information — in particular, the amount of entanglement between different parts of the state, as measured by the mutual information — to define a “distance” between them. Parts that are highly entangled are considered to be nearby, while unentangled parts are far away. This gives us a graph, in which vertices are the different parts of Hilbert space, and the edges are weighted by the emergent distance between them.
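As a toy illustration of that recipe (entirely made-up inputs, and a simple \(d \sim -\log I\) ansatz that is my assumption rather than the paper's precise prescription), one can build a distance matrix from mutual informations and run the classical multidimensional scaling mentioned in the abstract below to read off the best-fit dimension:

```python
import numpy as np

def mutual_info_to_distance(I, eps=1e-12):
    """Toy ansatz (an assumption for illustration): d_ij ~ -log(I_ij / I_max),
    so highly entangled factors end up close together."""
    I = np.asarray(I, dtype=float)
    d = -np.log((I + eps) / (I.max() + eps))
    np.fill_diagonal(d, 0.0)
    return d

def classical_mds(D, k=3):
    """Classical (Torgerson) multidimensional scaling of a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered "Gram" matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]            # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    coords = evecs[:, :k] * np.sqrt(np.maximum(evals[:k], 0.0))
    return coords, evals

# Fake data: "factors" secretly laid out on a 2D grid, with mutual information
# decaying with the hidden separation. The algorithm only ever sees I_ij.
grid = np.array([(x, y) for x in range(5) for y in range(5)], dtype=float)
hidden_d = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1)
I = np.exp(-hidden_d)

coords, evals = classical_mds(mutual_info_to_distance(I))
print("leading MDS eigenvalues:", np.round(evals[:5], 2))
# Two eigenvalues dominate, so the emergent geometry is (unsurprisingly) 2-dimensional.
```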


We can then ask two questions:

  1. When we zoom out, does the graph take on the geometry of a smooth, flat space with a fixed number of dimensions? (Answer: yes, when we put in the right kind of state to start with.)
  2. If we perturb the state a little bit, how does the emergent geometry change? (Answer: space curves in response to emergent mass/energy, in a way reminiscent of Einstein’s equation in general relativity.)

It’s that last bit that is most exciting, but also most speculative. The claim, in its most dramatic-sounding form, is that gravity (spacetime curvature caused by energy/momentum) isn’t hard to obtain in quantum mechanics — it’s automatic! Or at least, the most natural thing to expect. If geometry is defined by entanglement and quantum information, then perturbing the state (e.g. by adding energy) naturally changes that geometry. And if the model matches onto an emergent field theory at large distances, the most natural relationship between energy and curvature is given by Einstein’s equation. The optimistic view is that gravity just pops out effortlessly in the classical limit of an appropriate quantum system. But the devil is in the details, and there’s a long way to go before we can declare victory.

Here’s the abstract for our paper:

Space from Hilbert Space: Recovering Geometry from Bulk Entanglement
ChunJun Cao, Sean M. Carroll, Spyridon Michalakis

We examine how to construct a spatial manifold and its geometry from the entanglement structure of an abstract quantum state in Hilbert space. Given a decomposition of Hilbert space H into a tensor product of factors, we consider a class of “redundancy-constrained states” in H that generalize the area-law behavior for entanglement entropy usually found in condensed-matter systems with gapped local Hamiltonians. Using mutual information to define a distance measure on the graph, we employ classical multidimensional scaling to extract the best-fit spatial dimensionality of the emergent geometry. We then show that entanglement perturbations on such emergent geometries naturally give rise to local modifications of spatial curvature which obey a (spatial) analog of Einstein’s equation. The Hilbert space corresponding to a region of flat space is finite-dimensional and scales as the volume, though the entropy (and the maximum change thereof) scales like the area of the boundary. A version of the ER=EPR conjecture is recovered, in that perturbations that entangle distant parts of the emergent geometry generate a configuration that may be considered as a highly quantum wormhole.

Like almost any physics paper, we’re building on ideas that have come before. The idea that spacetime geometry is related to entanglement has become increasingly popular, although it’s mostly been explored in the holographic context of the AdS/CFT correspondence; here we’re working directly in the “bulk” region of space, not appealing to a faraway boundary. A related notion is the ER=EPR conjecture of Maldacena and Susskind, relating entanglement to wormholes. In some sense, we’re making this proposal a bit more specific, by giving a formula for distance as a function of entanglement. The relationship of geometry to energy comes from something called the Entanglement First Law, articulated by Faulkner et al., and used by Ted Jacobson in a version of entropic gravity. But as far as we know we’re the first to start directly from Hilbert space, rather than assuming classical variables, a boundary, or a background spacetime. (There’s an enormous amount of work that has been done in closely related areas, obviously, so I’d love to hear about anything in particular that we should know about.)

We’re quick to admit that what we’ve done here is extremely preliminary and conjectural. We don’t have a full theory of anything, and even what we do have involves a great deal of speculating and not yet enough rigorous calculating.

Most importantly, we’ve assumed that parts of Hilbert space that are highly entangled are also “nearby,” but we haven’t actually derived that fact. It’s certainly what should happen, according to our current understanding of quantum field theory. It might seem like entangled particles can be as far apart as you like, but the contribution of particles to the overall entanglement is almost completely negligible — it’s the quantum vacuum itself that carries almost all of the entanglement, and that’s how we derive our geometry.

But it remains to be seen whether this notion really matches what we think of as “distance.” To do that, it’s not sufficient to talk about space, we also need to talk about time, and how states evolve. That’s an obvious next step, but one we’ve just begun to think about. It raises a variety of intimidating questions. What is the appropriate Hamiltonian that actually generates time evolution? Is time fundamental and continuous, or emergent and discrete? Can we derive an emergent theory that includes not only curved space and time, but other quantum fields? Will those fields satisfy the relativistic condition of being invariant under Lorentz transformations? Will gravity, in particular, have propagating degrees of freedom corresponding to spin-2 gravitons? (And only one kind of graviton, coupled universally to energy-momentum?) Full employment for the immediate future.

Perhaps the most interesting and provocative feature of what we’ve done is that we start from an assumption that the degrees of freedom corresponding to any particular region of space are described by a finite-dimensional Hilbert space. In some sense this is natural, as it follows from the Bekenstein bound (on the total entropy that can fit in a region) or the holographic principle (which limits degrees of freedom by the area of the boundary of their region). But on the other hand, it’s completely contrary to what we’re used to thinking about from quantum field theory, which generally assumes that the number of degrees of freedom in any region of space is infinitely big, corresponding to an infinite-dimensional Hilbert space. (By itself that’s not so worrisome; a single simple harmonic oscillator is described by an infinite-dimensional Hilbert space, just because its energy can be arbitrarily large.) People like Jacobson and Seth Lloyd have argued, on pretty general grounds, that any theory with gravity will locally be described by finite-dimensional Hilbert spaces.
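For reference, the usual textbook statements of those two bounds (quoted here only for orientation, not from the paper) are

\[
S \,\le\, \frac{2\pi k_B R E}{\hbar c} \quad\text{(Bekenstein)}, \qquad S \,\le\, \frac{k_B A}{4\,\ell_P^{2}} \quad\text{(holographic)},
\]

where \(R\) and \(E\) are the radius and energy of the region, \(A\) is the area of its boundary, and \(\ell_P\) is the Planck length. Either bound caps the entropy of a finite region, and hence the number of distinguishable states \(\sim e^{S/k_B}\), which is what motivates assigning it a finite-dimensional Hilbert space.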

That’s a big deal, if true, and I don’t think we physicists have really absorbed the consequences of the idea as yet. Field theory is embedded in how we think about the world; all of the notorious infinities of particle physics that we work so hard to renormalize away owe their existence to the fact that there are an infinite number of degrees of freedom. A finite-dimensional Hilbert space describes a very different world indeed. In many ways, it’s a much simpler world — one that should be easier to understand. We shall see.

Part of me thinks that a picture along these lines — geometry emerging from quantum information, obeying a version of Einstein’s equation in the classical limit — pretty much has to be true, if you believe (1) regions of space have a finite number of degrees of freedom, and (2) the world is described by a wave function in Hilbert space. Those are fairly reasonable postulates, all by themselves, but of course there could be any number of twists and turns to get where we want to go, if indeed it’s possible. Personally I think the prospects are exciting, and I’m eager to see where these ideas lead us.

by Sean Carroll at July 18, 2016 03:42 PM

John Baez - Azimuth

Frigatebirds

 

Frigatebirds are amazing!

They have the largest ratio of wing area to body weight of any bird. This lets them fly very long distances while only rarely flapping their wings. They often stay in the air for weeks at a time. And one bird tracked by satellite in the Indian Ocean stayed aloft for two months.

Surprisingly for sea birds, they don’t go into the water. Their feathers aren’t waterproof. They are true creatures of the air. They snatch fish from the ocean surface using their long, hooked bills—and they often eat flying fish! They clean themselves in flight by flying low and wetting themselves at the water’s surface before preening themselves.

They live a long time: often over 35 years.

But here’s the cool new discovery:

Since the frigatebird spends most of its life at sea, its habits outside of when it breeds on land aren’t well-known—until researchers started tracking them around the Indian Ocean. What the researchers discovered is that the birds’ flying ability almost defies belief.

Ornithologist Henri Weimerskirch put satellite tags on a couple of dozen frigatebirds, as well as instruments that measured body functions such as heart rate. When the data started to come in, he could hardly believe how high the birds flew.

“First, we found, ‘Whoa, 1,500 meters. Wow. Excellent, fantastique,’ ” says Weimerskirch, who is with the National Center for Scientific Research in Paris. “And after 2,000, after 3,000, after 4,000 meters — OK, at this altitude they are in freezing conditions, especially surprising for a tropical bird.”

Four thousand meters is more than 12,000 feet, or as high as parts of the Rocky Mountains. “There is no other bird flying so high relative to the sea surface,” he says.

Weimerskirch says that kind of flying should take a huge amount of energy. But the instruments monitoring the birds’ heartbeats showed that the birds weren’t even working up a sweat. (They wouldn’t, actually, since birds don’t sweat, but their heart rate wasn’t going up.)

How did they do it? By flying into a cloud.

“It’s the only bird that is known to intentionally enter into a cloud,” Weimerskirch says. And not just any cloud—a fluffy, white cumulus cloud. Over the ocean, these clouds tend to form in places where warm air rises from the sea surface. The birds hitch a ride on the updraft, all the way up to the top of the cloud.

[…]

“Absolutely incredible,” says Curtis Deutsch, an oceanographer at the University of Washington. “They’re doing it right through these cumulus clouds. You know, if you’ve ever been on an airplane, flying through turbulence, you know it can be a little bit nerve-wracking.”

One of the tagged birds soared 40 miles without a wing-flap. Several covered more than 300 miles a day on average, and flew continuously for weeks.

• Christopher Joyce, Nonstop flight: how the frigatebird can soar for weeks without stopping, All Things Considered, National Public Radio, 30 June 2016.

Frigatebirds aren’t admirable in every way. They’re kleptoparasites—now there’s a word you don’t hear every day! That’s a name for animals that steal food:

Frigatebirds will rob other seabirds such as boobies, particularly the red-footed booby, tropicbirds, shearwaters, petrels, terns, gulls and even ospreys of their catch, using their speed and maneuverability to outrun and harass their victims until they regurgitate their stomach contents. They may either assail their targets after they have caught their food or circle high over seabird colonies waiting for parent birds to return laden with food.

Frigatebird, Wikipedia.


by John Baez at July 18, 2016 01:16 PM

Emily Lakdawalla - The Planetary Society Blog

Horizon Goal: A new reporting series on NASA’s Journey to Mars
We're embarking on a multi-part series with the Huffington Post about the world's largest human spaceflight program. In part 1, we look at how the Columbia accident prompted NASA and the George W. Bush administration to create a new vision for space exploration.

July 18, 2016 12:03 PM

July 15, 2016

Symmetrybreaking - Fermilab/SLAC

The science of proton packs

Ghostbusters advisor James Maxwell explains the science of bustin'.

There's a new proton pack in town.

During the development of the new Ghostbusters film, released today, science advisor James Maxwell took on the question: "How would a proton pack work, with as few huge leaps of miraculous science as possible?"

As he explains in this video, he helped redesign the movie's famous ghost-catching tool to bring it more in line with modern particle accelerators such as the Large Hadron Collider.

"Particle accelerators are real. Superconducting magnets are real," he says. "The big leaps of faith are actually doing it in the space that's allowed."

Video: https://www.youtube.com/watch?v=VayXii8HtyE

Video by Sony Pictures Entertainment

by Kathryn Jepsen at July 15, 2016 10:18 PM

Symmetrybreaking - Fermilab/SLAC

Who you gonna call? MIT physicists!

As science advisors, physicists Lindley Winslow and Janet Conrad gave the Ghostbusters crew a taste of life in the lab.

Tonight, two MIT scientists are going to the movies. It’s not just because they want to see Kristen Wiig, who plays a particle physicist in the new Ghostbusters film, talk about grand unified theories on the big screen. Lindley Winslow and Janet Conrad served as science advisors on the film, and they can’t wait to see all the nuggets of realism they managed to fit into the set.

The Ghostbusters production team contacted Winslow on the advice of The Big Bang Theory science advisor David Saltzberg, who worked with Winslow at UCLA.

Winslow says she was delighted to help out. As a child, she watched the original 1984 Ghostbusters on repeat with her sister. As an adult, Winslow recognizes that she became a scientist thanks in part to the capable female characters she saw in shows like Star Trek.

She says she’s excited for a reboot that features women getting their hands dirty doing science. “They’re using oscilloscopes and welding things. It’s great!”

The Ghostbusters crew was filming in Boston, and “they wanted to see what a particle physics lab would be like,” Winslow says. She quickly thought through the coolest stuff she had sitting around: “There was a directional neutron detector Janet had. And at the last minute, I remembered that, in the corner of my lab, I had a separate room with a prototype of a polarized Helium-3 source for a potential future electron-ion collider.”

MIT postdoc James Maxwell, now a staff scientist at Jefferson Lab, wound up constructing a replica of the Helium-3 source for the set.

But the production team was interested in more than just the shiny stuff. They wanted to understand the look and feel of a real laboratory. They knew it would be different from the sanitized versions that often appear onscreen.

Winslow obliged. “I take them to this lab, and it’s pretty… it looks like you’ve been in there for 40 years,” she says. “There’s a coat rack with a whole pile of cables hanging on it. They were taking a ton of pictures.”

The team really wanted to get the details right, down to the books on the characters’ shelves, the publications and grant proposals on their desks and the awards on their walls, Winslow says. That’s where Conrad’s contributions came in. Offering to pitch in as Winslow prepared to go out on maternity leave, Conrad rented out her entire office library to the film, and she wrote papers for two characters, Wiig’s particle physicist and a villainous male scientist.

Conrad made Wiig’s character a neutrino physicist. She decided the bad guy would probably be into string theory. There’s just something sinister about the theory’s famous lack of verifiable predictions, Winslow says.

String theorists can also be lovely people, though, Conrad says, and “I wanted to make [the bad guy] as evil as possible.” In the scientific paper she wrote for his desk, “he doesn’t acknowledge anyone. He just says ‘The author is supported by the Royal Society of Fellows,’ and that’s it.”

Also, she wrote for him “an evil letter where he’s turning someone down for tenure.”

Winslow wrote the text for the awards that adorn the characters’ office walls, though both she and Conrad point out that physicists rarely hang their awards at work. “I give mine to my mom, and she hangs them up,” Conrad says.

In their offices, both Winslow and Conrad plan to hang their official Ghostbusters thank-you notes. “And a coat hook,” Winslow says. “I need a coat hook.”

Neither physicist got the chance to see the film before today, and they’re not sure how much of their handiwork will actually make it to the big screen. But Winslow was thrilled to see in a recently released preview one of her proudest contributions: a giant set of equations written on a whiteboard behind Wiig’s character.

The equations are real, representing the Georgi-Glashow model, otherwise known as SU(5), the first theory to try to combine the electroweak and strong forces. The model was ruled out by results from the Super Kamiokande experiment, but Winslow imagines Wiig’s character is using it to introduce her own attempt to unite the fundamental forces.
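For context, the textbook content of that model (standard facts, not a transcription of the movie whiteboard): the Standard Model gauge group is embedded as

\[
SU(5) \supset SU(3)_C \times SU(2)_L \times U(1)_Y,
\]

with each generation of quarks and leptons filling out the \(\bar{\mathbf{5}} \oplus \mathbf{10}\) representations. The same unification predicts proton decay, and the failure of Super-Kamiokande to observe it is what ruled out the minimal version of the model.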

Winslow says she explained the basics of SU(5) to Ghostbusters director Paul Feig, who was then left to pass along the message to Wiig when Winslow needed to pick up her 3-year-old son from daycare.

As they head to the theaters tonight, Conrad and Winslow say they are excited to see bits of their lives reflected on the Ghostbusters set. They’re even more excited for girls in the audience to see themselves reflected in the tech-savvy, adventurous women in the film.

by Kathryn Jepsen at July 15, 2016 01:00 PM

July 13, 2016

John Baez - Azimuth

Operads for “Systems of Systems”

“Systems of systems” is a fashionable buzzword for complicated systems that are themselves made of complicated systems, often of disparate sorts. They’re important in modern engineering, and it takes some thought to keep them from being unmanageable. Biology and ecology are full of systems of systems.

David Spivak has been working a lot on operads as a tool for describing systems of systems. Here’s a nice programmatic talk advocating this approach:

• David Spivak, Operads as a potential foundation for systems of systems.

This was a talk he gave at the Generalized Network Structures and Dynamics Workshop at the Mathematical Biosciences Institute at Ohio State University this spring.

You won’t learn what operads are from this talk—for that, try this:

• Wikipedia, Operad.

But if you know a bit about operads, it may help give you an idea of their flexibility as a formalism for describing ways of sticking together components to form bigger systems!

I’ll probably talk about this kind of thing more pretty soon. So far I’ve been using category theory to study networked systems like electrical circuits, Markov processes and chemical reaction networks. The same ideas handle all these different kind of systems in a unified way. But I want to push toward biology. Here we need more sophisticated ideas. My philosophy is that while biology seems “messy” to physicists, living systems actually operate at higher levels of abstraction, which call for new mathematics.


by John Baez at July 13, 2016 01:40 AM

July 12, 2016

Symmetrybreaking - Fermilab/SLAC

A primer on particle accelerators

What’s the difference between a synchrotron and a cyclotron, anyway?

Research in high-energy physics takes many forms. But most experiments in the field rely on accelerators that create and speed up particles on demand.

What follows is a primer on three different types of particle accelerators: synchrotrons, cyclotrons and linear accelerators, called linacs.

Illustration by Sandbox Studio, Chicago with Jill Preston

Synchrotrons: the heavy lifters

Synchrotrons are the highest-energy particle accelerators in the world. The Large Hadron Collider currently tops the list, with the ability to accelerate particles to an energy of 6.5 trillion electronvolts before colliding them with particles of an equal energy traveling in the opposite direction. 

Synchrotrons typically feature a closed pathway that takes particles around a ring. Other variants are created with straight sections between the curves (similar to a racetrack or in the shape of a triangle or hexagon). Once particles enter the accelerator, they travel around the circular pathway over and over again, always enclosed in a vacuum pipe. 

Radiofrequency cavities at intervals around the ring increase their speed. Several different types of magnets create electromagnetic fields, which can be used to bend and focus the particle beams. The electromagnetic fields slowly build up as the particles are accelerated. Particles pass around the LHC about 14 million times in the 20 minutes they need to reach their intended energy level.  
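That figure is easy to check from the ring's roughly 27-kilometer circumference and the fact that the protons move essentially at the speed of light:

\[
f_{\rm rev} \approx \frac{c}{27\,{\rm km}} \approx 1.1\times 10^{4}\ {\rm Hz}, \qquad N \approx f_{\rm rev}\times 20\,{\rm min} \approx (1.1\times 10^{4}\,{\rm s}^{-1})(1.2\times 10^{3}\,{\rm s}) \approx 1.3\times 10^{7},
\]

consistent with the quoted 14 million turns.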

Researchers send beams of accelerated particles through one another to create collisions in locations surrounded by particle detectors. Relatively few collisions happen each time the beams meet. But because the particles are constantly circulating in a synchrotron, researchers can pass them through one another many times over—creating a large number of collisions over time and more data for observing rare phenomena.

“The LHC detectors ATLAS and CMS reached about 400 million collisions a second last year,” says Mike Lamont, head of LHC operations at CERN. “This is why this design is so useful.”

Synchrotrons’ power makes them especially suited to studying the building blocks of our universe. For example, physicists were able to witness evidence of the Higgs boson among the LHC’s collisions only because the collider could accelerate particles to such a high energy and produce such high collision rates. 

The LHC primarily collides protons with protons but can also accelerate heavy nuclei such as lead. Other synchrotrons can also be customized to accelerate different types of particles. At Brookhaven National Laboratory in New York, the Relativistic Heavy Ion Collider can accelerate everything from protons to uranium nuclei. It keeps the proton beams polarized with the use of specially designed magnets, according to RHIC accelerator physicist Angelika Drees. It can also collide heavy ions such as uranium and gold to create quark-gluon plasma—the high-temperature soup that made up the universe just after the Big Bang.

Illustration by Sandbox Studio, Chicago with Jill Preston

Cyclotrons: the workhorses

Synchrotrons are the descendants of another type of circular accelerator, the cyclotron. Cyclotrons accelerate particles in a spiral pattern, starting at their center.

Like synchrotrons, cyclotrons use a large electromagnet to bend the particles in a circle. However, they use only one magnet, which limits how large they can be. They use metal electrodes to push particles to travel in increasingly large circles, creating a spiral pathway. 

Cyclotrons are often used to create large amounts of specific types of particles, such as muons or neutrons. They are also popular for medical research because they have the right energy range and intensity to produce medical isotopes. 

The world’s largest cyclotron is located at the TRIUMF laboratory in Vancouver, Canada. At the TRIUMF cyclotron, physicists regularly accelerate particles to 520 million electronvolts. They can draw particles from different parts of their accelerator for experiments that require particles at different energies. This makes it an especially adaptable type of accelerator, says physicist Ewart Blackmore, who helped to design and build the TRIUMF accelerator.

“We certainly make use of that facility every day when we’re running, when we’re typically producing a low-energy but high-current beam for medical isotope production,” Blackmore says. “We’re extracting at fixed energies down one beam for producing pions and muons for research, and on another beam line we’re extracting beams of radioactive nuclei to study their properties.”

Illustration by Sandbox Studio, Chicago with Jill Preston

Linacs: straight and to the point

For physics experiments or applications that require a steady, intense beam of particles, linear accelerators are a favored design. SLAC National Accelerator Laboratory hosts the longest linac in the world, which measures 2 miles long and at one point could accelerate particles up to 50 billion electronvolts. Fermi National Accelerator Laboratory uses a shorter linac to speed up protons before sending them into a different accelerator, eventually running the particles into a fixed target to create the world’s most intense neutrino beam.

While circular accelerators may require many turns to accelerate particles to the desired energy, linacs get particles up to speed in short order. Particles start at one end at a low energy, and electromagnetic fields in the linac accelerate them down its length. When particles travel in a curved path, they release energy in the form of radiation. Traveling in a straight line means keeping their energy for themselves. A series of radiofrequency cavities in SLAC’s linac is used to push particles on the crest of electromagnetic waves, causing them to accelerate forward down the length of the accelerator.
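The penalty for bending light particles is quantitative. The standard result for the energy an electron radiates per turn in a ring of bending radius \(\rho\) (a textbook formula, not anything specific to SLAC) is

\[
\Delta E_{\rm turn} = \frac{e^{2}\beta^{3}\gamma^{4}}{3\varepsilon_{0}\rho} \;\;\Longrightarrow\;\; \Delta E_{\rm turn}[{\rm keV}] \approx 88.5\,\frac{(E[{\rm GeV}])^{4}}{\rho[{\rm m}]},
\]

so doubling the beam energy costs sixteen times more per turn, and the \(\gamma^{4}\) factor makes the loss for protons of the same energy smaller by roughly \((m_e/m_p)^{4}\sim 10^{-13}\). That is why high-energy electron machines tend to be linacs or enormous rings, while the LHC's protons lose comparatively little energy per turn.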

Like cyclotrons, linacs can be used to produce medical isotopes. They can also be used to create beams of radiation for cancer treatment. Electron linacs for cancer therapy are the most common type of particle accelerator.

by Signe Brewster at July 12, 2016 01:00 PM

July 10, 2016

Lubos Motl - string vacua and pheno

Reality vs Connes' fantasies about physics on non-commutative spaces
Florin Moldoveanu, an eclectic semi-anti-quantum zealot, hasn't ever been trained in particle physics and doesn't understand it, but he found it reasonable to write uncritically about Alain Connes' proposals to construct a correct theory of particle physics using the concepts of noncommutative geometry.

Now, Connes is a very interesting guy and a great, creative, and playful mathematician, and he surely belongs among the most successful abstract mathematicians who have worked hard to learn particle physics. Except that the product just isn't enough because the airplanes don't land. His and his collaborators' proposals are intriguing but they just don't work, and what the "new framework" is supposed to be isn't really well-defined at all.

The status quo in particle physics is that quantum field theories – often interpreted as effective field theories (theories useful for the description of all phenomena at distance scales longer than a cutoff) – and string theory are the only known ways to produce realistic theories. Moreover, to a large extent, string theory in most explicit descriptions we know also adopts the general principles of quantum field theory "without reservations".

The world sheet description of perturbative string theory is a standard two-dimensional conformal (quantum) field theory, Matrix theory and AdS/CFT describe vacua of string/M-theory but they're also quantum field theories in some spaces (world volumes or AdS boundaries), and string theory vacua have their effective field theory descriptions exactly of the type that one expects in the formalism of effective field theories (even though string theory itself isn't "quite" a regular quantum field theory in the bulk).

When we discuss quantum field theories, we decide about the dimension, qualitative field content, and symmetries. Once we do so, we're obliged to consider all (anomaly-free, consistent, unitary) quantum field theories with these conditions and all values of the parameters. This also gives us an idea about which choices of the parameters are natural or unnatural.

Now, Connes and collaborators claim to have something clearly different from the usual rules of quantum field theory (or string theory). The discovery of a new framework that would be "on par" with quantum field theory or string theory would surely be a huge one, just like the discovery of additional dimensions of the spacetime of any kind. Except that we have never been shown what the Connes' framework actually is, how to decide whether a paper describing a model of this kind belongs to Connes' framework or not. And we haven't been given any genuine evidence that the additional dimensions of Connes' type exist.

So all this work of Connes' is hocus-pocus experimentation with mixtures of the mathematics of noncommutative spaces (which he understands very well) and particle physics (which he understands much less well), and in between mathematical analyses that are probably hugely careful and advanced, he often writes things that are known to be just silly to almost every physics graduate student. And a very large fraction of his beliefs about how noncommutative geometry may work within physics just seems wrong.

How is it supposed to work?

In Kaluza-Klein theory (or string theory), there is some compactification manifold which I will call \(CY_6\) because the Calabi-Yau three-fold is the most frequently mentioned, and sort of canonical, example. Fields may be expanded in modes – a generalization of Fourier series – which are functions of the coordinates on \(CY_6\). And there is a countably infinite number of these modes. Only a small number of them are very light, but if you allow arbitrary masses, you get a whole tower of increasingly heavy Kaluza-Klein modes.
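As a one-formula illustration (the simplest possible case, a single field on one extra circular dimension of radius \(R\), rather than the full \(CY_6\) story):

\[
\phi(x^{\mu}, y) = \sum_{n=-\infty}^{\infty} \phi_n(x^{\mu})\, e^{i n y/R}, \qquad m_n^{2} = m_0^{2} + \frac{n^{2}}{R^{2}},
\]

so one higher-dimensional field becomes, in 3+1 dimensions, an infinite tower of fields whose masses grow with the mode number \(n\).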

Connes et al. want to believe that there are just finitely many fields in 3+1 dimensions, like in the Standard Model. How can we get a finite number of Kaluza-Klein modes? We get them if the space is noncommutative. The effect is similar as if the space were a finite number of points except that a noncommutative space isn't a finite set of points.

A noncommutative space isn't a set of points at all. For this reason, there are no "open sets" and "neighborhoods" and the normal notions of topology and space dimension, either. A noncommutative space is a generalization of the "phase space in quantum mechanics". The phase space has coordinates \(x,p\) but they don't commute with each other – it's why it's called a "noncommutative space". Instead, we have\[

xp-px=i\hbar.

\] Consequently, the uncertainty principle restricts how accurately \(x,p\) may be determined at the same moment. The phase space is effectively composed of cells of area \(2\pi\hbar\) (or a power of it, if we have many copies of the coordinates and momenta). And these cells behave much like "discrete points" when it comes to the counting of the degrees of freedom – except that they're not discretely separated at all. The boundaries between them are unavoidably fuzzier than even those in regular commutative manifolds. If you consider compactified versions of the phase space (with \(x,p\) periodic in some sense), e.g. the fuzzy sphere or the fuzzy torus, you may literally get a finite number of cells and therefore a finite number of fields in 3+1 dimensions.
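To make the "finite number of cells" concrete, here is a small numpy sketch of the standard fuzzy-sphere construction (a textbook toy example, not code from Connes' papers): the three coordinates are rescaled \(N\)-dimensional angular-momentum matrices, so "functions on the sphere" become \(N\times N\) matrices and there are only \(N^{2}\) independent modes instead of infinitely many spherical harmonics.

```python
import numpy as np

def su2_generators(N):
    """Spin-j angular-momentum matrices J_x, J_y, J_z for N = 2j + 1."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                       # m = j, j-1, ..., -j
    Jz = np.diag(m)
    # raising operator J+: <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jm = Jp.conj().T
    Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)
    return Jx, Jy, Jz

# Fuzzy sphere of radius r: x_i = r * J_i / sqrt(j(j+1)), so that x.x = r^2 exactly.
N, r = 5, 1.0
j = (N - 1) / 2
Jx, Jy, Jz = su2_generators(N)
scale = r / np.sqrt(j * (j + 1))
X, Y, Z = scale * Jx, scale * Jy, scale * Jz

print(np.allclose(X @ X + Y @ Y + Z @ Z, r**2 * np.eye(N)))   # True: lives on the sphere
print(np.allclose(X @ Y - Y @ X, 1j * scale * Z))             # True: coordinates don't commute
print("independent modes on this fuzzy sphere:", N**2)        # N^2 cells, not a continuum
```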

That's basically what Connes and pals do.

Now, they have made some truly extraordinary claims that have excited me as well. I can't imagine how I could fail to be excited at least once; but I also can't imagine that I would preserve my excitement once I see that there's no defensible added value in those ideas. In 2006, for example, Chamseddine, Connes, and Marcolli released their standard model with neutrino mixing, which boldly predicted the mass of the Higgs boson as well. The prediction was \(170\GeV\), which is not right, as you know: the Higgs boson of mass \(125\GeV\) was officially discovered in July 2012.

But the fate of this prediction \(m_h=170\GeV\) was sort of funny. Two years later, in 2008, the Tevatron became able to say something about the Higgs mass for the first time. It ruled out the first narrow interval of Higgs masses. Amusingly enough, the first value of the Higgs mass that was killed was exactly Connes' \(170\GeV\). Oops. ;-)

There's a consensus in the literature of Connes' community that \(170\GeV\) is the prediction that the framework should give for the Higgs mass. But in August 2012, one month after the \(125\GeV\) Higgs boson was discovered, Chamseddine and Connes wrote a preprint about the resilience of their spectral standard model. A "faux pas" would probably be more accurate but "resilience" sounded better.

In that paper, they added some hocus pocus arguments claiming that because of some additional singlet scalar field \(\sigma\) that was previously neglected, the Higgs prediction is reduced from \(170\GeV\) to \(125\GeV\). Too bad they couldn't make this prediction before December 2011 when the value of \(125\GeV\) emerged as the almost surely correct one to the insiders among us.

I can't make sense of the technical details – and I am pretty sure that it's not just due to the lack of effort, listening, or intelligence. There are things that just don't make sense. Connes and his co-author claim that the new scalar field \(\sigma\) which they consider a part of their "standard model" is also responsible for the Majorana neutrino masses.

Now, this just sounds extremely implausible because the origin of the small neutrino masses is very likely to be in the phenomena that occur at some very high energy scale near the GUT scale – possibly grand unified physics itself. The seesaw mechanism produces good estimates for the neutrino masses\[

m_\nu \approx \frac{m_{h}^2}{m_{GUT}}.

\] So how could one count the scalar field responsible for these tiny masses as part of the "Standard Model", which is an effective theory for energy scales close to the electroweak scale or the Higgs mass \(m_h\sim 125\GeV\)? If the Higgs mass and the neutrino masses are calculable in Connes' theory, the theory wouldn't really be a standard model but a theory of everything, and it should work near the GUT scale, too.
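A quick order-of-magnitude check of that seesaw estimate, with an electroweak-scale numerator and a GUT-scale denominator (round numbers chosen purely for illustration):

\[
m_\nu \approx \frac{(10^{2}\GeV)^{2}}{10^{16}\GeV} = 10^{-12}\GeV \approx 10^{-3}\,{\rm eV},
\]

which is in the right general ballpark: neutrino oscillation data point to mass splittings of order \(10^{-2}\,{\rm eV}\).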

The claim that one may relate these parameters that seemingly boil down to very different physical phenomena – at very different energy scales – is an extraordinary statement that requires extraordinary evidence. If the statement were true or justifiable, it would be amazing by itself. But this is the problem with non-experts like Connes. He doesn't give any evidence because he doesn't even realize that his statement sounds extraordinary – it sounds (and probably is) incompatible with rather basic things that particle physicists know (or believe to know).

Connes' "fix" that reduced the prediction to \(125\GeV\) was largely ignored by the later pro-Connes literature that kept on insisting that \(170\GeV\) is indeed what the theory predicts.

So I don't believe one can ever get correct predictions out of a similar framework, except for cases of good luck. But my skepticism about the proposal is much stronger than that. I don't really believe that there exists any new "framework" at all.

What are Connes et al. actually doing when they are constructing new theories? They are rewriting some/all terms in a Lagrangian using some new algebraic symbols, like a "star-product" on a specific noncommutative geometry. But is it a legitimate way to classify quantum field theories? You know, a star-product is just a bookkeeping device. It's a method to write down classical theories of a particular type.

But the quantum theory at any nonzero couplings isn't really "fully given by the classical Lagrangian". It should have some independent definition. If you allow the quantum corrections, renormalization, subtleties with the renormalization schemes etc., I claim that you just can't say whether a particular theory is or is not a theory of the Connes' type. The statement "it is a theory of Connes' type" is only well-defined for classical field theories and probably not even for them.

A generic interacting fully quantum field theory just isn't equivalent to any star-product based classical Lagrangians!

There are many detailed questions that Connes can't quite answer, and they show that he doesn't really know what he's doing. One of these questions is really elementary: Is gravity supposed to be a part of his picture? Does his noncommutative compactification manifold explain the usual gravitational degrees of freedom, or just some polarizations of the graviton in the compact dimensions, or none? You can find contradictory answers to this question in Connes' papers.

Let me state the answer to the question of whether gravity is a part of the consistent decoupled field theories on noncommutative spaces – i.e. those arising in string theory. The answer is simply No. String theory allows you to pick a \(B\)-field and decouple the low-energy open-string dynamics (which is a gauge theory). The gauge theory is decoupled even if the space coordinates are noncommutative.

But it's always just a gauge theory. There are never spin-two fields that would meaningfully enter the Lagrangian with the noncommutative star-product. Why? Because the noncommutativity comes from the \(B\)-field which may be set to zero by a gauge invariance for the \(B\)-field, \(\delta B_{(2)} = d \lambda_{(1)}\). So the value of this field is unphysical. This conclusion only changes inside a D-brane where \(B+F\) is the gauge-invariant combination. The noncommutativity-inducing \(B\)-field may really be interpreted as a magnetic \(F\) field inside the D-brane which is gauge-invariant. Its value matters. But in the decoupling limit, it only matters for the D-brane degrees of freedom because the D-brane world volume is where the magnetic field \(F\) is confined.

In other words, the star-product-based theory only decouples from the rest of string theory if the open-string scale is parametrically longer than the closed-string scale. And that's why the same star-product isn't relevant for the closed-string modes such as gravity. Or: if you tried to include some "gravitational terms with the star product", you would need to consider all objects with string-scale energies, and the infinite tower of massive string states would be a part of the picture, too.

Whether you learn these lessons from the string theory examples or you derive them purely from "noncommutative field theory consistency considerations", your conclusions will contradict Connes' assumptions. One simply cannot have gravity in these decoupled theories. If your description has gravity, it must have everything. At the end, you could relate this conclusion with the "weak gravity conjecture", too. Gravity is the weakest force so once your theory of elementary building blocks of Nature starts to be sensitive to it, you must already be sensitive to everything else. Alternatively, you may say that gravity admits black holes that evaporate and they may emit any particle as the Hawking radiation – any particle in any stage of a microscopic phenomenon that is allowed in Nature. So there's no way to decouple any subset of objects and phenomena.

When I read Connes' papers on these issues, he contradicts insights like that – which seem self-evident to me and probably to most real experts in this part of physics. You know, I would be extremely excited if a totally new way to construct theories or decouple subsets of the dynamics from string theory existed. Except that it doesn't seem to be the case.

In proper string/M-theory, when you actually consistently decouple some subset of the dynamics, it's always near some D-brane or singularity. The decoupling of the low-energy physics on D-branes (which may be a gauge theory on noncommutative spaces) was already mentioned. Cumrun Vafa's F-theory models of particle physics are another related example: one decouples the non-gravitational particle physics near the singularities in the F-theory manifold, basically near the "tips of some cones".

But Connes et al. basically want to have a non-singular compactification without branes and they still want to claim that they may decouple some ordinary standard-model-like physics from everything else – like the excited strings or (even if you decided that those don't exist) the black hole microstates which surely have to exist. But that's almost certainly not possible. I don't have a totally rock-solid proof but it seems to follow from what we know from many lines of research and it's a good enough reason to ignore Connes' research direction as a wrong one unless he finds something that is really nontrivial, which he hasn't done yet.

Again, I want to mention the gap between the "physical beef" and "artefacts of formalism". The physical beef includes things like the global symmetries of a physical theory. The artefacts of formalism include things like "whether some classical Lagrangian may be written using some particular star-product". Connes et al. just seem to be extremely focused on the latter, the details of the formalism. They just don't think like physicists.

You know, as we have learned especially in the recent 100 years, a physical theory may often be written in very many different ways that are ultimately equivalent. Quantum mechanics was first found as Heisenberg's "matrix mechanics" which turned into the Heisenberg picture and later as "wave mechanics" which became Schrödinger's picture. Dirac pointed out that a compromise, the interaction/Dirac picture, always exists. Feynman added his path integral approach later, it's really another picture. The equivalence of those pictures was proven soon.

For particular quantum field theories and vacua of string/M-theory, people found dualities, especially in the last 25 years: string-string duality, IIA/M, heterotic/M, S-dualities, T-dualities, U-dualities, mirror symmetry, AdS/CFT, ER=EPR, and others. The point is that physics that is ultimately the same to the observers who live in that universe may often be written in several or many seemingly very different ways. After all, even the gauge theories on noncommutative spaces are equivalent to gauge theories on commutative spaces – or noncommutative spaces in different dimensions, and so on.

The broader lesson is that the precise formalism you pick simply isn't fundamental. Connes' whole philosophy – and the philosophy of many people who focus on appearances and not the physical substance – is very different. At the end, I think that Connes would agree that he's just constructing something that may be rewritten as quantum field theories. If there's any added value, he just claims to have a gadget that produces the "right" structure of the relevant quantum field theories.

But even if he had some well-defined criterion that divides the "right" and "wrong" Lagrangians of this kind, and I think he simply doesn't have one because there can't be one, why would one really believe in Connes' subset? A theory could be special because it can be written in Connes' form, but is that a real virtue or just an irrelevant curiosity? Such a theory is just as consistent and has the same symmetries etc. as many other theories that cannot be written in the Connes form.

So even if the theories of Connes' type were a well-defined subset of quantum field theories, I think that it would be irrational to dramatically focus on them. It would seem just a little bit more natural to focus on this subset than to focus on quantum field theories all of whose representation dimensions are odd and whose fine-structure constant (measured from low-energy electron-electron scattering) is written using purely odd digits in base 10. ;-) You may perhaps define this subset, but why would you believe that belonging to it is a "virtue"?

I surely don't believe that "the ability to write something in Connes' form" is an equally motivated "virtue" as an "additional enhanced symmetry" of a theory.

This discussion is a somewhat more specific example of the thinking about the "ultimate principles of physics". In quantum field theory, we sort of know what the principles are. We know what theories we like or consider and why. The quantum field theory principles are constructive. The principles we know in string theory – mostly consistency conditions, unitarity, incorporation of massless spin-two particles (gravitons) – are more bootstrapy and less constructive. We would like to know more constructive principles of string theory that make it more immediately clear why there are 6 maximally decompactified supersymmetric vacua of string/M-theory, and things like that. That's what the constantly tantalizing question "what is string theory" means.

But whenever we describe some string theory vacua in a well-defined quantitative formalism, we basically return to the constructive principles of quantum field theory. Constrain the field/particle content and the symmetries. Some theories – mostly derivable from a Lagrangian and its quantization – obey the conditions. There are parameters you may derive. And some measure on these parameter spaces.

Connes basically wants to add principles such as "a theory may be written using a Lagrangian that may be written in a Connes form". I just don't believe that principles like that matter in Nature because they don't really constrain Nature Herself but only what Nature looks like in a formalism. I simply don't believe that a formalism may be this important in the laws of physics. Nature abhors bureaucracy. She doesn't really care about formalisms and what they look like to those who have to work with them. She doesn't really discriminate against one type of formalisms and She doesn't favor another kind. If She constrains some theories, She has good reasons for that. To focus on a subclass of quantum field theories because they are of the "Connes type" simply isn't a good reason. There isn't any rational justification that the Connesness is an advantage rather than a disadvantage etc.

Even though some of my objections are technical while others are "philosophically emotional" in some way, I am pretty sure that most of the people who have thought about the conceptual questions deeply and successfully basically agree with me. This is also reflected by the fact that Connes' followers are a restricted group and I think that none of them really belongs to the cream of the theoretical high-energy physics community. Because the broader interested public should have some fair idea about what the experts actually think, it seems counterproductive for non-experts like Moldoveanu to write about topics they're not really intellectually prepared for.

Moldoveanu's blog post is an example of a text that makes the readers believe that Connes has found a framework that is about as important, meaningful, and settled as the conventional rules of the model building in quantum field theory or string theory. Except that he hasn't and the opinion that he has is based on low standards and sloppiness. More generally, people are being constantly led to believe that "anything goes". But it's not true that anything goes. The amount of empirical data we have collected and the laws, principles, and patterns we have extracted from them is huge and the viable theories and frameworks are extremely constrained. Almost nothing works.

The principles producing theories that seem to work should be taken very seriously.

by Luboš Motl (noreply@blogger.com) at July 10, 2016 08:08 AM

July 08, 2016

Lubos Motl - string vacua and pheno

Pesticides needed against anti-physics pests
Their activity got too high in the summer

Some three decades ago, mosquitoes looked like a bigger problem in the summer. Either their numbers have dropped or I am spending less time at places where they get concentrated. The haters of physics have basically hijacked the mosquitoes' Lebensraum, it seems.

The scum stinging fundamental, theoretical, gravitational, and high-energy physics became so aggressive and repetitive that it's no longer possible to even list all the incidents. A week ago, notorious Californian anti-physics instructor Richard Muller – a conman who once pretended to be a climate skeptic although he has always been a fanatical alarmist, a guy who just can't possibly understand that the event horizon is just a coordinate singularity and who thinks it's a religion to demand a physical theory to be compatible with all observations (quantum and gravitational ones), not to mention dozens of other staggering idiocies he has written in recent years – wrote another rant saying that "string theory isn't even a theory".




There's absolutely nothing new about this particular rant – it's the 5000th repetition of the anti-string delusions repeated by dozens of other mental cripples and fraudsters in the recent decade. To make things "cooler", he says that many string theorists would agree with him, and to make clear what they would be agreeing with, he promotes both Šmoits' crackpot books at the end as "recommended reading".

Oh, sure, string theorists would agree with these Šmoitian things. Time for your pills, Mr Muller.




This particular rant has been read by more than 45,000 readers. The number of people indoctrinated with this junk is so high that one should almost start to be afraid to call the string critics vermin on the street (my fear hasn't reached that point, however). I am sure that most of them have been gullible imbeciles since the rant was upvoted a whopping 477 times. Every Quora commenter who has had something to do with highbrow physics disagrees with Muller but it's only Muller's rant that is visible. Quora labels this Muller as the "most viewed writer in physics". Quora is an anti-civilization force that deserves to be liquidated.

This week, Sabine Hossenfelder wrote a rant claiming that the LHC is a disappointment and naturalness is a delusion. Holy cow, another gem from this Marxist whore. The LHC is a wonderful machine that has already discovered enough to justify its existence and that works perfectly. Lots of genuine particle physics enthusiasts are excited to follow both the papers by the LHC collaborations and the LHC schedules. And while naturalness may look stretched and many people (not myself!) have surely been naive about the direct way in which it can imply valid predictions, it is absolutely obvious that to some extent, it will always be viewed as an argument.

It's because theories of Nature simply have to be natural in one way or another. The point is that you can always construct uncountably many unnatural theories that agree with the data. You may always say that some highly fine-tuned God created all the species just like we observe them and all the patterns (and observations that something is much smaller than something else etc.) explained by the natural theories are just coincidences. You can take any valid theory and add 50 new random interactions with very small coupling constants or particles with high masses and claim that your theory is great.

But such unnatural theories are simply no good because it's unlikely for the parameters to have at least qualitatively the right values that are needed for the theories to avoid contradictions with the empirical evidence. At some level, Bayesian inference – which indicates that dimensionless parameters shouldn't be expected to be much smaller than one – kicks in. It's the quantitative reason why it's often right to use Occam's razor in our analyses. You know, the future of naturalness is basically analogous to the future of Occam's razor, a related but less specific concept. Some very specific versions of it may be incorrect but the overall paradigm simply can't ever disappear from science.

The right laws of Nature that explain why the Higgs mass is much lighter than the Planck scale may be different than the existing "sketches" of the projects but at the end, these laws are natural. They are colloquially natural which, you might object, is a different word. But in any sufficiently well-defined framework, the colloquial naturalness may be turned into some kind of technical naturalness.

To demand that naturalness is abandoned or banned altogether means to demand that people no longer think rationally. Ms Hossenfelder is just absolutely missing the point of science. Her last paragraph says:
It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out of the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating nature what’s supposedly natural.
But naturalness isn't going "out of the window" (just look at recent papers with naturalness in the title) and physicists who complain that a theory is unnatural aren't dictating anything to Nature. Instead, they complain against the broader theory. A theory that is fine-tuned and doesn't have any explanation for the fine-tuning is either wrong or missing the explanation of an important thing – it fails to see even the sketch of it. You know, when a theory disagrees with the data "slightly" or in a "detail", something may be "slightly" wrong about the theory or a "detail" may be incorrect. But when a theory makes a parametrically wrong estimate for its parameters, something bigger must be wrong or missing about the theory. Naturalness doesn't say anything else than this trivial fact that simply can't be wrong because in its general form, it's the basis of all rational thinking. Theories in physics will always have to be natural with some interpretation of the probabilistic distributions for the parameters.

The anthropic principle is sometimes quoted as an "alternative to naturalness". Even if this principle could be considered a replacement of this kind of thinking at all, and I am confident that it's right to say that there's no version of it that could be claimed to achieve this goal at this moment, it would still imply some naturalness. The anthropic principle, if it became well-defined enough to be considered a part of physicists' thinking, would just give us different estimates for the parameters or probability distributions for them. But it would still produce some estimates or distributions and we would still distinguish natural and unnatural theories.

And it goes on and on and on. Yesterday, Ethan Siegel wrote a Forbes rant claiming that grand unification may be a dead end in physics. Siegel is OK when he writes texts about Earth's being round or similar things but hey, this guy simply must realize that he is absolutely out of his league when it comes to cutting-edge fundamental physics. Everything he has written about it has always suffered from some absolutely lethal problems and this new rant is no exception.

He uses Garrett Lisi's childish picture to "visualize" the Georgi-Glashow grand unified \(SU(5)\) model (a projection of some weights of the fermion reps on a 2D subspace of the Cartan subalgebra). I don't think that this unusual picture is useful for anything (except perhaps to incorrectly claim that some \(\ZZ_6\) symmetry is what grand unification is all about) but yes, there are much more severe problems with Siegel's text.

When he starts to enumerate "problems" of grand unified theories, he turns into a full-fledged zombie crackpot:
But there are some big problems with these ideas, too. For one, the new particles that were predicted were of hopelessly high energies: around \(10^{15}\) to \(10^{16}\GeV\), or trillions of times the energies the LHC produces.
What? How can someone call it a "problem" in the sense of "bad news"? The scale at which the couplings unify is whatever it is. If it is \(10^{16}\GeV\), then it is a fact, not a "problem". A person who calls one number a "problem" unmasks that he is a prejudiced aßhole. He simply prefers one number over another without any evidence that would discriminate the possibilities – something that an honest scientist simply cannot ever do.
For another, almost all of the GUTs you can design lead to particles undergoing flavor-changing-neutral-currents, which are certain types of decays forbidden in the Standard Model and never observed in nature.
Great. But it's true for any generic enough theory of new physics, too. Clearly, Nature isn't generic in this sense. But there exist grand unified theories in which all these unwanted effects are suppressed in a technically natural way and that's everything that's needed to say that "everything is fine with the broader GUT paradigm at this moment". Similarly, the proton decay is acceptably slow in some classes of grand unified theories that are as fine as the Georgi-Glashow model.

But the most staggering technical stupidity Siegel wrote about grand unified theories was the one about the unification of the couplings:
The single “point” that the three forces almost meet at only looks like a point on a logarithmic scale, when you zoom out. But so do any three mutually non-parallel lines; you can try it for yourself by drawing three line segments, extending them in both directions until they all intersect and then zooming out.
What? Every three straight lines intersect in one point? Are you joking or are you high?



If you draw three generic straight lines A,B,C in a plane, the pairs intersect at points AB, BC, CA, but there is no intersection of all three lines ABC. Instead, there is a triangle inside. That's the left picture. On the other hand, for a special slope of the third line C – one real number has to be adjusted – the intersection BC may happen to coincide with the intersection AB, and when it's so, the intersection of CA coincides with this point, too: all three lines intersect at one point. That's the right picture. The triangle shrinks to zero area, to a point.

This outcome is in no way guaranteed. It's infinitely unlikely for three generic lines in a plane to intersect at one point. The likelihood of this small miracle is roughly equal to the ratio of the actual mismatch we get (i.e. the longest side of the triangle) to the characteristic mismatch we would have expected (the typical size of the triangle). The fact that the unification (intersection at a point) happens in a large subset of "morally simple enough" grand unified theories with a certain precision is a nontrivial successful test of these theories' viability. It doesn't prove that the 3 forces get unified (because the precision we can prove isn't "overwhelming") but it's not something that may be denied, either.
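To make the geometric point concrete, here is a minimal Python sketch (with made-up slopes and intercepts, purely for illustration) that computes the three pairwise intersections and the size of the resulting triangle; for generic coefficients the triangle has a finite size, and one slope has to be tuned for it to shrink to a point.

# A line y = m*x + b is stored as the pair (m, b); the numbers below are hypothetical.
def intersection(l1, l2):
    (m1, b1), (m2, b2) = l1, l2
    x = (b2 - b1) / (m1 - m2)          # assumes the two lines are not parallel
    return (x, m1 * x + b1)

def longest_side(a, b, c):
    # longest side of the triangle formed by the three pairwise intersections
    pts = [intersection(a, b), intersection(b, c), intersection(c, a)]
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return max(dist(pts[i], pts[j]) for i in range(3) for j in range(i + 1, 3))

generic = [(1.0, 0.0), (-1.0, 2.0), (0.5, 1.3)]    # three generic lines: a triangle remains
tuned   = [(1.0, 0.0), (-1.0, 2.0), (-0.3, 1.3)]   # third line's slope adjusted so all pass through (1, 1)
print(longest_side(*generic))   # a number of order one
print(longest_side(*tuned))     # zero: the triangle has collapsed to a point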

How can Ethan Siegel misunderstand the difference between 3 lines intersecting and non-intersecting? I think that every layman who has failed to understand this simple point after reading a popular book on particle physics has failed miserably. Siegel just doesn't make it even to an average reader of popular physics books.

And the incredible statements are added all the time:
The small-but-nonzero masses for neutrinos can be explained by any see-saw mechanism and/or by the MNS matrix; there’s nothing special about the one arising from GUTs.
One can get neutrino masses of a reasonable magnitude from any physics at the right scale but the scale has to be near the GUT scale. Funnily enough, it's the scale \(10^{15}\) to \(10^{16}\GeV\) that Siegel previously called a "problem". Except that this value disfavored by Siegel is favored by the neutrino masses. It's the scale where one expects the new physics responsible for the neutrino masses and nontrivially, it's approximately the same scale as the scale where the unification of the couplings demonstrably takes place (according to a calculation).
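For readers who want to see what "according to a calculation" means in rough outline, here is a minimal one-loop sketch in Python. It uses the textbook Standard Model beta-function coefficients (41/10, -19/6, -7) in the GUT normalization and rounded values of the inverse couplings at the Z mass; these inputs are standard numbers quoted from memory, not taken from Siegel's article, and the script is only meant to show how the three inverse couplings drift toward each other at very high scales (with TeV-scale supersymmetry they meet far more precisely, near \(2\times 10^{16}\GeV\)). The seesaw line at the end shows why a mass scale near \(10^{15}\GeV\) gives neutrino masses of the right order of magnitude.

import math

# One-loop running of the inverse SM gauge couplings (GUT normalization for hypercharge):
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i / (2*pi) * ln(mu / M_Z)
MZ = 91.19                           # GeV
alpha_inv_MZ = [59.0, 29.6, 8.5]     # rounded alpha_1^{-1}, alpha_2^{-1}, alpha_3^{-1} at M_Z
b = [41.0 / 10.0, -19.0 / 6.0, -7.0]

def alpha_inv(mu):
    return [a - bi / (2 * math.pi) * math.log(mu / MZ) for a, bi in zip(alpha_inv_MZ, b)]

for exponent in (13, 14, 15, 16):
    print("mu = 1e%d GeV:" % exponent, ["%.1f" % x for x in alpha_inv(10.0 ** exponent)])
# The three inverse couplings approach each other in the 1e13-1e16 GeV range
# (they meet much more precisely once TeV-scale superpartners modify the coefficients).

# Seesaw estimate: m_nu ~ v^2 / M with v ~ 174 GeV and M near the unification scale.
v, M = 174.0, 1.0e15                 # GeV
print("m_nu ~ %.2e eV" % (v ** 2 / M * 1.0e9))   # about 0.03 eV, the right ballpark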

So that's another piece of evidence for the picture – that something is taking place at the scale and the something is rather likely to include the unification of non-gravitational forces. Moreover, it's somewhat beneath the Planck scale where gravity is added to the complete unification and it's arguably a good thing: the non-gravitational forces shouldn't split "unnaturally too low" beneath the truly fundamental, Planck scale.

It's the conventional picture which is still arguably the most convincing and likely one: the true unification of the 4 forces occurs close enough to the Planck scale (the scale Planck computed from the fundamental constants) and, at energies lower by some 2-3 orders of magnitude, the GUT force splits into the electroweak and the strong one. The electroweak force splits into the electromagnetic and the weak force at the LHC Higgs scale. This old picture isn't "a demonstrated scientific fact" but it sounds very convincing and as long as we live in a civilized society, you can't just "ban" it or try to harass the people who think that it's the most persuasive scenario – which includes most of the top particle physicists, I am pretty sure about it.

This aßhole just fails to understand all these basic things and he sells this embarrassing ignorance as if it were a virtue. At the very end, we read:
There’s no compelling reason to think grand unification is anything other than a theoretical curiosity and a physical dead-end.
A more accurate formulation is that Mr Siegel doesn't want to see any arguments in favor of grand unification because he is a dishonest and/or totally stupid prejudiced and demagogic crackpot. But I guess that Siegel's own formulation, while totally untrue, sounds fancier to his brainwashed readers.

The number of individuals just like him has become astronomical and they produce their lies on a daily basis while facing almost no genuine opposition.

by Luboš Motl (noreply@blogger.com) at July 08, 2016 03:03 PM

July 07, 2016

Matt Strassler - Of Particular Significance

Spinoffs from Fundamental Science

I find that some people just don’t believe scientists when we point out that fundamental research has spin-off benefits for modern society.  The assumption often seems to be that it’s just a bunch of egghead esoteric researchers trying to justify their existence.  It’s a real problem when those scoffing at our evidence are congresspeople of the United States and their staffers, or other members of governmental funding agencies around the world.

So I thought I’d point out an example, reported on Bloomberg News.  It’s a good illustration of how these things often work out, and it is very rare indeed that they are discussed in the press.

Gravitational waves are usually incredibly tiny effects [typically squeezing the radius of our planet by less than the width of an atomic nucleus] that can be generated at detectable levels only by monster black holes and neutron stars. There's not much hope of using them in technology. So what good could an experiment to discover them, such as LIGO, possibly be for the rest of the world?

Well, Shell Oil seems to have found some value in it.   It’s not in the gravitational waves themselves, of course; instead, it is in the technology that has to be developed to detect something so delicate.   http://www.bloomberg.com/news/articles/2016-07-07/shell-is-using-innoseis-s-sensors-to-detect-gravitational-waves

Score another one for investment in fundamental scientific research.

 


Filed under: Gravitational Waves, Science and Modern Society Tagged: LIGO, Spinoffs

by Matt Strassler at July 07, 2016 08:38 PM

Clifford V. Johnson - Asymptotia

Kill Your Darlings…

[Image: dialogues_process_share_7-7-16] (Apparently I spent a lot of time cross-hatching, back in 2010-2012? More on this below. Click for larger view.)

I've changed locations and have several physics research tasks to work on, so my usual workflow is not going to be appropriate for the next couple of weeks. I thought I'd therefore work on a different aspect of the book project. I'm well into the "one full page per day for the rest of the year to stay on target" part of the calendar, and there's good news and bad news. On the good news side, I've refined my workflow a lot and devised new ways of achieving various technical tasks too numerous (and probably boring) to mention, and so I've actually got [...] Click to continue reading this post

The post Kill Your Darlings… appeared first on Asymptotia.

by Clifford at July 07, 2016 08:24 PM

John Baez - Azimuth

Large Countable Ordinals (Part 3)

Last time we saw why it’s devilishly hard to give names to large countable ordinals.

An obvious strategy is to make up a function f from ordinals to ordinals that grows really fast, so that f(x) is a lot bigger than the ordinal x indexing it. This is indeed a good idea. But something funny tends to happen! Eventually x catches up with f(x). In other words, you eventually hit a solution of

x = f(x)

This is called a fixed point of f. At this point, there’s no way to use f(x) as a name for x unless you already have a name for x. So, your scheme fizzles out!

For example, we started by looking at powers of \omega, the smallest infinite ordinal. But eventually we ran into ordinals x that obey

x = \omega^x

There’s an obvious work-around: we make up a new name for ordinals x that obey

x = \omega^x

We call them epsilon numbers. In our usual nerdy way we start counting at zero, so we call the smallest solution of this equation \epsilon_0, and the next one \epsilon_1, and so on.

But eventually we run into ordinals x that are fixed points of the function \epsilon_x, meaning that

x = \epsilon_x

There’s an obvious work-around: we make up a new name for ordinals x that obey

x = \epsilon_x

But by now you can guess that this problem will keep happening, so we’d better get systematic about making up new names! We should let

\phi_0(\alpha) = \omega^\alpha

and let \phi_{n+1}(\alpha) be the \alphath fixed point of \phi_n.

Oswald Veblen, a mathematician at Princeton, came up with this idea around 1908, based on some thoughts of G. H. Hardy:

• Oswald Veblen, Continuous increasing functions of finite and transfinite ordinals, Trans. Amer. Math. Soc. 9 (1908), 280–292.

He figured out how to define \phi_\gamma(\alpha) even when the index \gamma is infinite.

Last time we saw how to name a lot of countable ordinals using this idea: in fact, all ordinals less than the ‘Feferman–Schütte ordinal’. This time I want to go further, still using Veblen’s work.

First, however, I feel an urge to explain things a bit more precisely.

Veblen’s fixed point theorem

There are three kinds of ordinals. The first is a successor ordinal, which is one more than some other ordinal. So, we say \alpha is a successor ordinal if

\alpha = \beta + 1

for some \beta. The second is 0, which is not a successor ordinal. And the third is a limit ordinal, which is neither 0 nor a successor ordinal. The smallest example is

\omega = \{0, 1, 2, 3, \dots \}

Every limit ordinal is the ‘limit’ of ordinals less than it. What does that mean, exactly? Remember, each ordinal is a set: the set of all smaller ordinals. We can define the limit of a set of ordinals to be the union of that set. Alternatively, it’s the smallest ordinal that’s greater than or equal to every ordinal in that set.
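Here is a tiny Python sketch of that definition, using finite von Neumann ordinals as stand-ins (a computer obviously can't enumerate an infinite set of ordinals, so this only illustrates the "union = least upper bound" idea on finite examples of my own choosing):

from functools import reduce

# Finite von Neumann ordinals as frozensets: 0 is the empty set, n+1 = n ∪ {n}.
def succ(x):
    return x | frozenset([x])

ordinals = [frozenset()]
for _ in range(5):
    ordinals.append(succ(ordinals[-1]))      # the ordinals 0, 1, 2, 3, 4, 5

def limit(s):
    # the 'limit' of a set of ordinals: the union of its elements
    return reduce(lambda a, b: a | b, s, frozenset())

one, three = ordinals[1], ordinals[3]
print(limit({one, three}) == three)          # True: the union of {1, 3} is 3, their least upper bound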

Now for Veblen’s key idea:

Veblen’s Fixed Point Theorem. Suppose a function f from ordinals to ordinals is:

strictly increasing: if x < y then f(x) < f(y)

and

continuous: if x is a limit ordinal, f(x) is the limit of the ordinals f(\alpha) where \alpha < x.

Then f must have a fixed point.

Why? For starters, we always have this fact:

x \le f(x)

After all, if this weren’t true, there’d be a smallest x with the property that f(x) < x, since every nonempty set of ordinals has a smallest element. But since f is strictly increasing,

f(f(x)) < f(x)

so f(x) would be an even smaller ordinal with this property. Contradiction!

Using this fact repeatedly, we get

0 \le f(0) \le f(f(0)) \le \cdots

Let \alpha be the limit of the ordinals

0, f(0), f(f(0)), \dots

Then by continuity, f(\alpha) is the limit of the sequence

f(0), f(f(0)), f(f(f(0))),\dots

So f(\alpha) equals \alpha. Voilà! A fixed point!

This construction gives the smallest fixed point of f. There are infinitely many more, since we can start not with 0 but with \alpha+1 and repeat the same argument, etc. Indeed if we try to list these fixed points, we find there is one for each ordinal.
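For a concrete instance, take f(x) = \omega^x, which is indeed strictly increasing and continuous. Starting from 0, the construction produces

0, \; 1, \; \omega, \; \omega^\omega, \; \omega^{\omega^\omega}, \dots

and the limit of this sequence is the smallest solution of x = \omega^x, i.e. the epsilon number \epsilon_0 from above.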

So, we can make up a new function that lists these fixed points. Just to be cute, people call this the derivative of f, so that f'(\alpha) is the \alphath fixed point of f. Beware: while the derivative of a polynomial grows more slowly than the original polynomial, the derivative of a continuous increasing function f from ordinals to ordinals generally grows more quickly than f. It doesn’t really act like a derivative; people just call it that.

Veblen proved another nice theorem:

Theorem. If f is a continuous and strictly increasing function from ordinals to ordinals, so is f'.

So, we can take the derivative repeatedly! This is the key to the Veblen hierarchy.

If you want to read more about this, it helps to know that a function from ordinals to ordinals that’s continuous and strictly increasing is called normal. ‘Normal’ is an adjective that mathematicians use when they haven’t had enough coffee in the morning and aren’t feeling creative—it means a thousand different things. In this case, a better term would be ‘differentiable’.

Armed with that buzzword, you can try this:

• Wikipedia, Fixed-point lemma for normal functions.

Okay, enough theory. On to larger ordinals!

The Feferman–Schütte barrier

First let’s summarize how far we got last time, and why we got stuck. We inductively defined the \alphath ordinal of the \gammath kind by:

\phi_0(\alpha) = \omega^\alpha

and

\phi_{\gamma+1}(\alpha) = \phi'_\gamma(\alpha)

meaning that \phi_{\gamma+1}(\alpha) is the \alphath fixed point of \phi_\gamma.

This handles the cases where \gamma is zero or a successor ordinal. When \gamma is a limit ordinal we let \phi_{\gamma}(\alpha) be the \alphath ordinal that’s a fixed point of all the functions \phi_\beta for \beta < \gamma.

Last time I explained how these functions \phi_\gamma give a nice notation for ordinals less than the Feferman–Schütte ordinal, which is also called \Gamma_0. This ordinal is the smallest solution of

x = \phi_x(0)

So it’s a fixed point, but of a new kind, because now the x appears as a subscript of the \phi function.

We can get our hands on the Feferman–Schütte ordinal by taking the limit of the ordinals

\phi_0(0), \; \phi_{\phi_0(0)}(0) , \; \phi_{\phi_{\phi_0(0)}(0)}(0), \dots

(If you’re wondering why we use the number 0 here, instead of some other ordinal, I believe the answer is: it doesn’t really matter, we would get the same result if we used any ordinal less than the Feferman–Schütte ordinal.)

The ‘Feferman–Schütte barrier’ is the combination of these two facts:

• On the one hand, every ordinal \beta less than \Gamma_0 can be written as a finite sum of guys \phi_\gamma(\alpha) where \alpha and \gamma are even smaller than \beta. Using this fact repeatedly, we can get a finite expression for any ordinal less than the Feferman–Schütte ordinal in terms of the \phi function, addition, and the ordinal 0.

• On the other hand, if \alpha and \gamma are less than \Gamma_0 then \phi_\gamma(\alpha) is less than \Gamma_0. So we can’t use the \phi function to name the Feferman–Schütte ordinal in terms of smaller ordinals.

But now let’s break the Feferman–Schütte barrier and reach some bigger countable ordinals!

The Γ function

The function \phi_x(0) is strictly increasing and continuous as a function of x. So, using Veblen’s theorems, we can define \Gamma_\alpha to be the \alphath solution of

x = \phi_x(0)

We can then define a bunch of enormous countable ordinals:

\Gamma_0, \Gamma_1, \Gamma_2, \dots

and still bigger ones:

\Gamma_\omega, \; \Gamma_{\omega^2}, \; \Gamma_{\omega^3} , \dots

and even bigger ones:

\Gamma_{\omega^\omega}, \; \Gamma_{\omega^{\omega^\omega}}, \; \Gamma_{\omega^{\omega^{\omega^\omega}}}, \dots

and even bigger ones:

\Gamma_{\epsilon_0}, \Gamma_{\epsilon_1}, \Gamma_{\epsilon_2}, \dots

But since \epsilon_\alpha is just \phi_1(\alpha), we can reach much bigger countable ordinals with the help of the \phi function:

\Gamma_{\phi_2(0)}, \; \Gamma_{\phi_3(0)}, \; \Gamma_{\phi_4(0)}, \dots

and we can do vastly better using the \Gamma function itself:

\Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

The limit of all these is the smallest solution of

x = \Gamma_x

As usual, this ordinal is still countable, but there’s no way to express it in terms of the \Gamma function and smaller ordinals. So we are stuck again.

In short: we got past the Feferman–Schütte barrier by introducing a name for the \alphath solution of x = \phi_x(0). We called it \Gamma_\alpha. This made us happy for about two minutes…

…. but then we ran into another barrier of the same kind.

So what we really need is a more general notation: one that gets us over not just this particular bump in the road, but all bumps of this kind! We don’t want to keep randomly choosing goofy new letters like \Gamma. We need something systematic.

The multi-variable Veblen hierarchy

We were actually doing pretty well with the \phi function. It was nice and systematic. It just wasn’t powerful enough. But if you’re trying to keep track of how far you’re driving on a really long trip, you want an odometer with more digits. So, let’s try that.

In other words, let’s generalize the \phi function to allow more subscripts. Let’s rename \Gamma_\alpha and call it \phi_{1,0}(\alpha). The fact that we’re using two subscripts says that we’re going beyond the old \phi functions with just one subscript. The subscripts 1 and 0 should remind you of what happens when you drive more than 9 miles: if your odometer has two digits, it’ll say you’re on mile 10.

Now we proceed as before: we make up new functions, each of which enumerates the fixed points of the previous one:

\phi_{1,1} = \phi'_{1,0}
\phi_{1,2} = \phi'_{1,1}
\phi_{1,3} = \phi'_{1,2}

and so on. In general, we let

\phi_{1,\gamma+1} = \phi'_{1,\gamma}

and when \gamma is a limit ordinal, we let

\displaystyle{ \phi_{1,\gamma}(\alpha) = \lim_{\beta \to \gamma} \phi_{1,\beta}(\alpha) }

Are you confused?

How could you possibly be confused???

Okay, maybe an example will help. In the last section, our notation fizzled out when we took the limit of these ordinals:

\Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

The limit of these is the smallest solution of x = \Gamma_x. But now we’re writing \Gamma_x = \phi_{1,0}(x), so this limit is the smallest fixed point of \phi_{1,0}. So, it’s \phi_{1,1}(0).

We can now ride happily into the sunset, defining \phi_{1,\gamma}(\alpha) for all ordinals \alpha, \gamma. Of course, this will never give us a notation for ordinals with

x = \phi_{1,x}(0)

But we don’t let that stop us! This is where the new extra subscript really comes in handy. We now define \phi_{2,0}(\alpha) to be the \alphath solution of

x = \phi_{1,x}(0)

Then we drive on as before. We let

\phi_{2,\gamma+1} = \phi'_{2,\gamma}

and when \gamma is a limit ordinal, we say

\displaystyle{ \phi_{2,\gamma}(\alpha) = \lim_{\beta \to \gamma} \phi_{2,\beta}(\alpha) }

I hope you get the idea. Keep doing this!

We can inductively define \phi_{\beta,\gamma}(\alpha) for all \alpha, \beta and \gamma. Of course, these functions will never give a notation for solutions of

x = \phi_{x,0}(0)

To describe these, we need a function with one more subscript! So let \phi_{1,0,0}(\alpha) be the \alphath solution of

x = \phi_{x,0}(0)

We can then proceed on and on and on, adding extra subscripts as needed.

This is called the multi-variable Veblen hierarchy.

Examples

To help you understand the multi-variable Veblen hierarchy, I’ll use it to describe lots of ordinals. Some are old friends. Starting with finite ones, we have:

\phi_0(0) = 1

\phi_0(0) + \phi_0(0) = 2

and so on, so we don’t need separate names for natural numbers… but I’ll use them just to save space.

\phi_0(1) = \omega

\phi_0(2) = \omega^2

and so on, so we don’t need separate names for \omega and its powers, but I’ll use them just to save space.

\phi_0(\omega) = \omega^\omega

\phi_0(\omega^\omega) = \omega^{\omega^\omega}

\phi_1(0) = \epsilon_0

\phi_1(1) = \epsilon_1

\displaystyle{ \phi_1(\phi_1(0)) = \epsilon_{\epsilon_0} }

\phi_2(0) = \zeta_0

\phi_2(1) = \zeta_1

where I should remind you that \zeta_\alpha is a name for the \alphath solution of x = \epsilon_x.

\phi_{1,0}(0) = \Gamma_0

\phi_{1,0}(1) = \Gamma_1

\displaystyle{ \phi_{1,0}(\phi_{1,0}(0)) = \Gamma_{\Gamma_0} }

\phi_{1,1}(0) is the limit of \Gamma_{\Gamma_0}, \Gamma_{\Gamma_{\Gamma_0}}, \Gamma_{\Gamma_{\Gamma_{\Gamma_0}}} , \dots

\phi_{1,0,0}(0) is called the Ackermann ordinal.

Apparently Wilhelm Ackermann, the logician who invented a very fast-growing function called Ackermann’s function, had a system for naming ordinals that fizzled out at this ordinal.

The small Veblen ordinal

There are obviously lots more ordinals that can be described using the multi-variable Veblen hierarchy, but I don’t have anything interesting to say about them. And you’re probably more interested in this question: what’s next?

The limit of these ordinals

\phi_1(0), \; \phi_{1,0}(0), \; \phi_{1,0,0}(0), \dots

is called the small Veblen ordinal. Yet again, it’s a countable ordinal. It’s the smallest ordinal that cannot be named in terms of smaller ordinals using the multi-variable Veblen hierarchy…. at least, not the version I described. And here’s a nice fact:

Theorem. Every ordinal \beta less than the small Veblen ordinal can be written as a finite expression in terms of the multi-variable \phi function, addition, and 0.

For example,

\Gamma_0 + \epsilon_{\epsilon_0} + \omega^\omega + 2

is equal to

\displaystyle{  \phi_{\phi_0(0),0}(0) + \phi_{\phi_0(0)}(\phi_{\phi_0(0)}(0)) +  \phi_0(\phi_0(\phi_0(0))) + \phi_0(0) + \phi_0(0)  }

On the one hand, this notation is quite tiresome to read. On the other hand, it’s amazing that it gets us so far!

Furthermore, if you stare at expressions like the above one for a while, and think about them abstractly, they should start looking like trees. So you should find it easy to believe that ordinals less than the small Veblen ordinal correspond to trees, perhaps labelled in some way.
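If you want to play with this correspondence yourself, here is a small Python sketch of my own devising: it stores such expressions as nested tuples (in effect, labelled trees) and pretty-prints them; the expression assembled at the end is exactly the one displayed above.

# A term is either the ordinal 0 (the empty sum), a single phi-term, or a sum of phi-terms.
# phi(subs, arg) builds the term phi_{subs}(arg); subs is the list of subscripts.
def phi(subs, arg):
    return ("phi", tuple(subs), arg)

ZERO = ()                      # 0 as the empty sum
ONE = phi([ZERO], ZERO)        # phi_0(0) = 1

def show(t):
    if t == ZERO:
        return "0"
    if t[0] == "phi":          # a single phi-term
        _, subs, arg = t
        return "phi_{%s}(%s)" % (",".join(show(s) for s in subs), show(arg))
    return " + ".join(show(s) for s in t)    # otherwise t is a sum of phi-terms

# Gamma_0 + epsilon_{epsilon_0} + omega^omega + 2, exactly as in the text:
gamma_0     = phi([ONE, ZERO], ZERO)                        # phi_{1,0}(0)
eps_eps_0   = phi([ONE], phi([ONE], ZERO))                  # phi_1(phi_1(0))
omega_omega = phi([ZERO], phi([ZERO], phi([ZERO], ZERO)))   # phi_0(phi_0(phi_0(0)))
two         = (ONE, ONE)                                    # phi_0(0) + phi_0(0)
print(show((gamma_0, eps_eps_0, omega_omega) + two))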

Indeed, this paper describes a correspondence of this sort:

• Herman Ruge Jervell, Finite trees as ordinals, in New Computational Paradigms, Lecture Notes in Computer Science 3526, Springer, Berlin, 2005, pp. 211–220.

However, I don’t think his idea is quite the same as what you’d come up with by staring at expressions like

\displaystyle{  \phi_{\phi_0(0),0}(0) + \phi_{\phi_0(0)}(\phi_{\phi_0(0)}(0)) +  \phi_0(\phi_0(\phi_0(0))) + \phi_0(0) + \phi_0(0)  }

Beyond the small Veblen ordinal

We’re not quite done yet. The modifier ‘small’ in the term ‘small Veblen ordinal’ should make you suspect that there’s more in Veblen’s paper. And indeed there is!

Veblen actually extended his multi-variable function \phi_{\gamma_1, \dots, \gamma_n}(\alpha) to the case where there are infinitely many variables. He requires that all but finitely many of these variables equal zero, to keep things under control. Using this, one can set up a notation for even bigger countable ordinals! This notation works for all ordinals less than the large Veblen ordinal.

We don’t need to stop here. The large Veblen ordinal is just the first of a new series of even larger countable ordinals!

These can again be defined as fixed points. Yes: it’s déjà vu all over again. But around here, people usually switch to a new method for naming these fixed points, called ‘ordinal collapsing functions’. One interesting thing about this notation is that it makes use of an uncountable ordinal. The first uncountable ordinal is called \Omega, and it dwarfs all those we’ve seen here.

We can use the ordinal collapsing function \psi to name many of our favorite countable ordinals, and more:

\psi(\Omega) is \zeta_0, the smallest solution of x = \epsilon_x.

\psi(\Omega^\Omega) is \Gamma_0, the Feferman–Schütte ordinal.

\psi(\Omega^{\Omega^2}) is the Ackermann ordinal.

\psi(\Omega^{\Omega^\omega}) is the small Veblen ordinal.

\psi(\Omega^{\Omega^\Omega}) is the large Veblen ordinal.

\psi(\epsilon_{\Omega+1}) is called the Bachmann–Howard ordinal. This is the limit of the ordinals

\psi(\Omega), \psi(\Omega^\Omega), \psi(\Omega^{\Omega^\Omega}), \dots

I won’t explain this now. Maybe later! But not tonight. As Bilbo Baggins said:

The Road goes ever on and on
Out from the door where it began.
Now far ahead the Road has gone,
Let others follow it who can!
Let them a journey new begin,
But I at last with weary feet
Will turn towards the lighted inn,
My evening-rest and sleep to meet.

For more

But perhaps you’re impatient and want to begin a new journey now!

The people who study notations for very large countable ordinals tend to work on proof theory, because these ordinals have nice applications to that branch of logic. For example, Peano arithmetic is powerful enough to work with ordinals up to but not including \epsilon_0, so we call \epsilon_0 the proof-theoretic ordinal of Peano arithmetic. Stronger axiom systems have bigger proof-theoretic ordinals.

Unfortunately this makes it a bit hard to learn about large countable ordinals without learning, or at least bumping into, a lot of proof theory. And this subject, while interesting in principle, is quite tough. So it’s hard to find a readable introduction to large countable ordinals.

The bibliography of the Wikipedia article on large countable ordinals gives this half-hearted recommendation:

Wolfram Pohlers, Proof theory, Springer 1989 ISBN 0-387-51842-8 (for Veblen hierarchy and some impredicative ordinals). This is probably the most readable book on large countable ordinals (which is not saying much).

Unfortunately, Pohlers does not seem to give a detailed account of ordinal collapsing functions. If you want to read something fun that goes further than my posts so far, try this:

• Hilbert Levitz, Transfinite ordinals and their notations: for the uninitiated.

(Anyone whose first name is Hilbert must be born to do logic!)

This is both systematic and clear:

• Wikipedia, Ordinal collapsing functions.

And if you want to explore countable ordinals using a computer program, try this:

• Paul Budnik, Ordinal calculator and research tool.

Among other things, this calculator can add, multiply and exponentiate ordinals described using the multi-variable Veblen hierarchy—even the version with infinitely many variables!


by John Baez at July 07, 2016 01:00 AM

July 06, 2016

Symmetrybreaking - Fermilab/SLAC

Scientists salvage insights from lost satellite

Before Hitomi died, it sent X-ray data that could explain why galaxy clusters form far fewer stars than expected.

Working with information sent from the Japanese Hitomi satellite, an international team of researchers has obtained the first views of a supermassive black hole stirring hot gas at the heart of a galaxy cluster. These motions could explain why galaxy clusters form far fewer stars than expected.

The data, published today in Nature, were recorded with the X-ray satellite during its first month in space earlier this year, just before it spun out of control and disintegrated due to a chain of technical malfunctions.

“Being able to measure gas motions is a major advance in understanding the dynamic behavior of galaxy clusters and its ties to cosmic evolution,” said study co-author Irina Zhuravleva, a postdoctoral researcher at the Kavli Institute for Particle Astrophysics and Cosmology. “Although the Hitomi mission ended tragically after a very short period of time, it’s fair to say that it has opened a new chapter in X-ray astronomy.” KIPAC is a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory.

Galaxy clusters, which consist of hundreds to thousands of individual galaxies held together by gravity, also contain large amounts of gas. Over time, the gas should cool down and clump together to form stars. Yet there is very little star formation in galaxy clusters, and until now scientists were not sure why.

“We already knew that supermassive black holes, which are found at the center of all galaxy clusters and are tens of billions of times more massive than the sun, could play a major role in keeping the gas from cooling by somehow injecting energy into it,” said Norbert Werner, a research associate at KIPAC involved in the data analysis. “Now we understand this mechanism better and see that there is just the right amount of stirring motion to produce enough heat.”

Plasma bubbles stir and heat intergalactic gas

About 15 percent of the mass of galaxy clusters is gas that is so hot – tens of millions of degrees Fahrenheit – that it shines in bright X-rays. In their study, the Hitomi researchers looked at the Perseus cluster, one of the most massive astronomical objects and the brightest in the X-ray sky.

Other space missions before Hitomi, including NASA’s Chandra X-ray Observatory, had taken precise X-ray images of the Perseus cluster. These snapshots revealed how giant bubbles of ultrahot, ionized gas, or plasma, rise from the central supermassive black hole as it catapults streams of particles tens of thousands of light-years into space. At the same time, streaks of cold gas appear to be pulled away from the center of the galaxy cluster, according to additional images of visible light. Until now, it has been unclear whether these two actions were connected.

To find out, the researchers pointed one of Hitomi’s instruments – the soft X-ray spectrometer (SXS) – at the center of the Perseus cluster and analyzed its X-ray emissions.

“Since the SXS had 30 times better energy resolution than the instruments of previous missions, we were able to resolve details of the X-ray signals that weren’t accessible before,” said co-principal investigator Steve Allen, a professor of physics at Stanford and of particle physics and astrophysics at SLAC. “These new details resulted in the very first velocity map of the cluster center, showing the speed and turbulence of the hot gas.”

By superimposing this map onto the other images, the researchers were able to link the observed motions of the cold gas to the hot plasma bubbles.

According to the data, the rising plasma bubbles drag cold gas away from the cluster center. Researchers see this in the form of stretched filaments in the optical images. The bubbles also transfer energy to the gas, which causes turbulence, Zhuravleva said.

“In a way, the bubbles are like spoons that stir milk into a cup of coffee and cause eddies,” she said. “The turbulence heats the gas, and it appears that this is enough to work against star formation in the cluster.”     

Hitomi’s legacy

Astrophysicists can use the new information to fine-tune models that describe how galaxy clusters change over time.

One important factor in these models is the mass of galaxy clusters, which researchers typically calculate from the gas pressure in the cluster. However, motions cause additional pressure, and before this study it was unclear if the calculations need to be corrected for turbulent gas.

“Although the motions heat the gas at the center of the Perseus cluster, their speed is only about 100 miles per second, which is surprisingly slow considering how disturbed the region looks in X-ray images,” said co-principal investigator Roger Blandford, the Luke Blossom Professor of Physics at Stanford and a professor for particle physics and astrophysics at SLAC. “One consequence is that corrections for these motions are only very small and don’t affect our mass calculations much.”

Although the loss of Hitomi cut most of the planned science program short – it was supposed to run for at least three years – the researchers hope their results will convince the international community to plan another X-ray space mission.

“The data Hitomi sent back to Earth are just beautiful,” Werner said. “They demonstrate what’s possible in the field and give us a taste of all the great science that should have come out of the mission over the years.”

Hitomi is a joint project, with the Japan Aerospace Exploration Agency (JAXA) and NASA as the principal partners. Led by Japan, it is a large-scale international collaboration, boasting the participation of eight countries, including the United States, the Netherlands and Canada, with additional partnership by the European Space Agency (ESA). Other KIPAC researchers involved in the project are Tuneyoshi Kamae, Ashley King, Hirokazu Odaka and co-principal investigator Grzegorz Madejski.

A version of this article originally appeared as a Stanford University press release.

by Manuel Gnida at July 06, 2016 05:18 PM

ZapperZ - Physics and Physicists

Photoemission Spectroscopy - Fundamental Aspects
I don't know if this is a chapter out of a book, or if this is lecture material, or what, but it has a rather comprehensive coverage of photoionization, the Auger effect, and photoemission in solids. I also don't know how long the document will be available (web links come and go, it seems). So if this is something you're interested in, it might be something you want to download.

At the very least, it has an extensive collection of references, ranging from Hertz's discovery of the photoelectric effect, to Einstein's photoelectric effect paper of 1905, all the way to Spicer's 3-step model and recent progress in ARPES.

Zz.

by ZapperZ (noreply@blogger.com) at July 06, 2016 02:30 PM

Jester - Resonaances

CMS: Higgs to mu tau is going away
One interesting anomaly in the LHC run-1 was a hint of Higgs boson decays to a muon and a tau lepton. Such a process is forbidden in the Standard Model by the conservation of muon and tau lepton numbers. Neutrino masses violate individual lepton numbers, but their effect is far too small to affect the Higgs decays in practice. On the other hand, new particles do not have to respect global symmetries of the Standard Model, and they could induce lepton flavor violating Higgs decays at an observable level. Surprisingly, CMS found a small excess in the Higgs to tau mu search in their 8 TeV data, with the measured branching fraction Br(h→τμ)=(0.84±0.37)%. The analogous measurement in ATLAS is 1 sigma above the background-only hypothesis, Br(h→τμ)=(0.53±0.51)%. Together this merely corresponds to a 2.5 sigma excess, so it's not too exciting in itself. However, taken together with the B-meson anomalies in LHCb, it has raised hopes for lepton flavor violating new physics just around the corner. For this reason, the CMS excess inspired a few dozen theory papers, with Z' bosons, leptoquarks, and additional Higgs doublets pointed out as possible culprits.

Alas, the wind is changing. CMS made a search for h→τμ in their small stash of 13 TeV data collected in 2015. This time they were hit by a negative background fluctuation, and they found Br(h→τμ)=(-0.76±0.81)%. The accuracy of the new measurement is worse than that in run-1, but nevertheless it lowers the combined significance of the excess below 2 sigma. Statistically speaking, the situation hasn't changed much,  but psychologically this is very discouraging. A true signal is expected to grow when more data is added, and when it's the other way around it's usually a sign that we are dealing with a statistical fluctuation...
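For the curious, the quoted drop in significance can be roughly reproduced with a naive inverse-variance average of the three numbers above (treating them as independent Gaussian measurements and ignoring correlations and asymmetric errors, which a proper experimental combination would not do):

# Naive Gaussian combination of the quoted Br(h -> tau mu) central values and errors, in percent.
measurements = {
    "CMS 8 TeV":   (0.84, 0.37),
    "ATLAS 8 TeV": (0.53, 0.51),
    "CMS 13 TeV":  (-0.76, 0.81),
}

def combine(names):
    weights = {n: 1.0 / measurements[n][1] ** 2 for n in names}
    wsum = sum(weights.values())
    mean = sum(weights[n] * measurements[n][0] for n in names) / wsum
    return mean, wsum ** -0.5

for names in (["CMS 8 TeV", "ATLAS 8 TeV"], list(measurements)):
    mean, sigma = combine(names)
    print(names, "-> Br = %.2f +- %.2f %%, about %.2f sigma" % (mean, sigma, mean / sigma))
# Run-1 alone gives roughly 2.5 sigma; adding the 2015 result pulls the combination just below 2 sigma.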

So, if you have a cool model explaining the h→τμ  excess be sure to post it on arXiv before more run-2 data is analyzed ;)

by Jester (noreply@blogger.com) at July 06, 2016 09:00 AM

July 05, 2016

Symmetrybreaking - Fermilab/SLAC

Incredible hulking facts about gamma rays

From lightning to the death of electrons, the highest-energy form of light is everywhere.

Gamma rays are the most energetic type of light, packing a punch strong enough to pierce through metal or concrete barriers. More energetic than X-rays, they are born in the chaos of exploding stars, the annihilation of electrons and the decay of radioactive atoms. And today, medical scientists have a fine enough control of them to use them for surgery. Here are seven amazing facts about these powerful photons.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Doctors conduct brain surgery using “gamma ray knives.”

Gamma rays can be helpful as well as harmful (and are very unlikely to turn you into the Hulk). To destroy brain cancers and other problems, medical scientists sometimes use a "gamma ray knife." This consists of many beams of gamma rays focused on the cells that need to be destroyed. Because each beam is relatively small, it does little damage to healthy brain tissue. But where they are focused, the amount of radiation is intense enough to kill the cancer cells. Since brains are delicate, the gamma ray knife is a relatively safe way to do certain kinds of surgery that would be a challenge with ordinary scalpels.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

The name “gamma rays” came from Ernest Rutherford.

French chemist Paul Villard first identified gamma rays in 1900 from the element radium, which had been isolated by Marie and Pierre Curie just two years before. When scientists first studied how atomic nuclei changed form, they identified three types of radiation based on how far they penetrated into a barrier made of lead. Ernest Rutherford named the radiation for the first three letters of the Greek alphabet. Alpha rays bounced right off, beta rays went a little farther, and gamma rays went the farthest. Today we know alpha rays are the same thing as helium nuclei (two protons and two neutrons), beta rays are either electrons or positrons (their antimatter versions), and gamma rays are a kind of light.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Nuclear reactions are a major source of gamma rays.

When an unstable uranium nucleus splits in the process of nuclear fission, it releases a lot of gamma rays in the process. Fission is used in both nuclear reactors and nuclear warheads. To monitor nuclear tests in the 1960s, the United States launched gamma radiation detectors on satellites. They found far more explosions than they expected to see. Astronomers eventually realized these explosions were coming from deep space—not the Soviet Union—and named them gamma-ray bursts, or GRBs. Today we know GRBs come in two types: the explosions of extremely massive stars, which pump out gamma rays as they die, and collisions between highly dense relics of stars called neutron stars and something else, probably another neutron star or a black hole.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Gamma rays played a key role in the discovery of the Higgs boson.

Most of the particles in the Standard Model of particle physics are unstable; they decay into other particles almost as soon as they come into existence. The Higgs boson, for example, can decay into many different types of particles, including gamma rays. Even though theory predicts that a Higgs boson will decay into gamma rays just 0.2 percent of the time, this type of decay is relatively easy to identify and it was one of the types that scientists observed when they first discovered the Higgs boson.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

To study gamma rays, astronomers build telescopes in space.

Gamma rays heading toward the Earth from space interact with enough atoms in the atmosphere that almost none of them reach the surface of the planet. That's good for our health, but not so great for those who want to study GRBs and other sources of gamma rays. To see gamma rays before they reach the atmosphere, astronomers have to build telescopes in space. This is challenging for a number of reasons. For example, you can't use a normal lens or mirror to focus gamma rays, because the rays punch right through them. Instead an observatory like the Fermi Gamma-ray Space Telescope detects the signal from gamma rays when they hit a detector and convert into pairs of electrons and positrons.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Some gamma rays come from thunderstorms.

In the 1990s, observatories in space detected bursts of gamma rays coming from Earth that eventually were traced to thunderclouds. When static electricity builds up inside clouds, the immediate result is lightning. That static electricity also acts like a giant particle accelerator, creating pairs of electrons and positrons, which then annihilate into gamma rays. These bursts happen high enough in the air that only airplanes are exposed—and they’re one reason for flights to steer well away from storms.

Illustration by Sandbox Studio, Chicago with Lexi Fodor

Gamma rays (indirectly) give life to Earth.

Hydrogen nuclei are always fusing together in the core of the sun. When this happens, one byproduct is gamma rays. The energy of the gamma rays keeps the sun’s core hot. Some of those gamma rays also escape into the sun's outer layers, where they collide with electrons and protons and lose energy. As they lose energy, they change into ultraviolet, infrared and visible light. The infrared light keeps Earth warm, and the visible light sustains Earth’s plants.

by Matthew R. Francis at July 05, 2016 04:13 PM

June 29, 2016

Clifford V. Johnson - Asymptotia

Gauge Theories are Cool

That is all.

[Image: screen_shot_progress_gauge]

('fraid you'll have to wait for the finished book to learn why those shapes are relevant to the title...)

-cvj Click to continue reading this post

The post Gauge Theories are Cool appeared first on Asymptotia.

by Clifford at June 29, 2016 10:51 PM

Symmetrybreaking - Fermilab/SLAC

LHCb discovers family of tetraquarks

Researchers found four new particles made of the same four building blocks.

It’s quadruplets! Syracuse University researchers on the LHCb experiment confirmed the existence of a new four-quark particle and serendipitously discovered three of its siblings.

Quarks are the solid scaffolding inside composite particles like protons and neutrons. Normally quarks come in groups of two or three, but in 2014 LHCb researchers confirmed the existence of four-quark particles and, one year later, five-quark particles.

The particles in this new family were named based on their respective masses, denoted in mega-electronvolts: X(4140), X(4274), X(4500) and X(4700). Each particle contains two charm quarks and two strange quarks arranged in a unique way, making them the first four-quark particles composed entirely of heavy quarks. Researchers also measured each particle’s quantum numbers, which describe their subatomic properties. Theorists will use these new measurements to enhance their understanding of the formation of particles and the fundamental structures of matter.

“What we have discovered is a unique system,” says Tomasz Skwarnicki, a physics professor at Syracuse University. “We have four exotic particles of the same type; it’s the first time we have seen this and this discovery is already helping us distinguish between the theoretical models.”

Evidence of the lightest particle in this family of four and a hint of another were first seen by the CDF experiment at the US Department of Energy’s Fermi National Accelerator Lab in 2009. However, other experiments were unable to confirm this observation until 2012, when the CMS experiment at CERN reported seeing the same particle-like bumps with a much greater statistical certainty. Later, the D0 collaboration at Fermilab also reported another observation of this particle.

“It was a long road to get here,” says University of Iowa physicist Kai Yi, who works on both the CDF and CMS experiments. “This has been a collective effort by many complementary experiments. I’m very happy that LHCb has now reconfirmed this particle’s existence and measured its quantum numbers.”

The US contribution to the LHCb experiment is funded by the National Science Foundation.

LHCb researcher Thomas Britton performed this analysis as his PhD thesis at Syracuse University.

“When I first saw the structures jumping out of the data, little did I know this analysis would be such an aporetic saga,” Britton says. “We looked at every known particle and process to make sure these four structures couldn’t be explained by any pre-existing physics. It was like baking a six-dimensional cake with 98 ingredients and no recipe—just a picture of a cake.”

Even though the four new particles all contain the same quark composition, they each have a unique internal structure, mass and their own sets of quantum numbers. These characteristics are determined by the internal spatial configurations of the quarks.

“The quarks inside these particles behave like electrons inside atoms,” Skwarnicki says. “They can be ‘excited’ and jump into higher energy orbitals. The energy configuration of the quarks gives each particle its unique mass and identity.”

According to theoretical predictions, the quarks inside could be tightly bound (like three quarks packed inside a single proton) or loosely bound (like two atoms forming a molecule). By closely examining each particle’s quantum numbers, scientists were able to narrow down the possible structures.

“The molecular explanation does not fit with the data,” Skwarnicki says. “But I personally would not conclude that these are definitely tightly bound states of four quarks. It could be possible that these are not even particles. The result could show the complex interplays of known particle pairs flippantly changing their identities.”

Theorists are currently working on models to explain these new results—be it a family of four new particles or bizarre ripple effects from known particles. Either way, this study will help shape our understanding of the subatomic universe.

“The huge amount of data generated by the LHC is enabling a resurgence in searches for exotic particles and rare physical phenomena,” Britton says. “There’s so many possible things for us to find and I’m happy to be a part of it.”

by Sarah Charley at June 29, 2016 06:05 PM

June 28, 2016

Symmetrybreaking - Fermilab/SLAC

Preparing for their magnetic moment

Scientists are using a plastic robot and hair-thin pieces of metal to ready a magnet that will hunt for new physics.

Three summers ago, a team of scientists and engineers on the Muon g-2 experiment moved a 52-foot-wide circular magnet 3200 miles over land and sea. It traveled in one piece without twisting more than a couple of millimeters, lest the fragile coils inside irreparably break. It was an astonishing feat that took years to plan and immense skill to execute.

As it turns out, that was the easy part.

The hard part—creating a magnetic field so precise that even subatomic particles see it as perfectly smooth—has been under way for seven months. It’s a labor-intensive process that has inspired scientists to create clever, often low-tech solutions to unique problems, working from a road map written 30 years ago as they drive forward into the unknown.

The goal of Muon g-2 is to follow up on a similar experiment conducted at the US Department of Energy’s Brookhaven National Laboratory in New York in the 1990s. Scientists there built an extraordinary machine that generated a near-perfect magnetic field into which they fired a beam of particles called muons. The magnetic ring serves as a racetrack for the muons, and they zoom around it for as long as they exist—usually about 64 millionths of a second.
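That 64-microsecond figure is just the muon's 2.2-microsecond rest-frame lifetime stretched by relativistic time dilation. A quick check is sketched below; the "magic" muon momentum of about 3.09 GeV/c used by the experiment is my own added input, not a number quoted in this article.

# Time-dilated lifetime of muons stored at the "magic" momentum (~3.09 GeV/c, an assumed input).
m_mu  = 0.10566       # muon mass, GeV
p     = 3.094         # muon momentum, GeV
tau_0 = 2.197e-6      # muon lifetime at rest, seconds

gamma = (p ** 2 + m_mu ** 2) ** 0.5 / m_mu        # Lorentz factor E/m
print("gamma ~ %.1f, lab-frame lifetime ~ %.1f microseconds" % (gamma, gamma * tau_0 * 1e6))
# prints gamma ~ 29.3 and a lifetime of about 64 microseconds, matching the number quoted above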

That’s a blink of an eye, but it’s enough time to measure a particular property: the precession frequency of the muons as they hustle around the magnetic field. And when Brookhaven scientists took those measurements, they found something different from what the Standard Model, our picture of the universe, predicted they would find. They didn’t quite capture enough data to claim a definitive discovery, but the hints were tantalizing.

Now, 20 years later, some of those same scientists—and dozens of others, from 34 institutions around the world—are conducting a similar experiment with the same magnet, but fed by a more powerful beam of muons at the US Department of Energy’s Fermi National Accelerator Laboratory in Illinois. Moving that magnet from New York caused quite a stir among the science-interested public, but that’s nothing compared with what a discovery from the Muon g-2 experiment would cause.

“We’re trying to determine if the muon really is behaving differently than expected,” says Dave Hertzog of the University of Washington, one of the spokespeople of the Muon g-2 experiment. “And, if so, that would suggest either new particles popping in and out of the vacuum, or new subatomic forces at work.  More likely, it might just be something no one has thought of yet.  In any case, it’s all  very exciting.”

Shimming to reduce shimmy

To start making these measurements, the magnetic field needs to be the same all the way around the ring so that, wherever the muons are in the circle, they will see the same pathway. That’s where Brendan Kiburg of Fermilab and a group of a dozen scientists, post-docs and students come in. For the past six months, they have been “shimming” the magnetic ring, shaping it to an almost inconceivably exact level.

“The primary goal of shimming is to make the magnetic field as uniform as possible,” Kiburg says. “The muons act like spinning tops, precessing at a rate proportional to the magnetic field. If a section of the field is a little higher or a little lower, the muon sees that, and will go faster or slower.”
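
As a rough numerical sketch of that proportionality (with rounded, assumed constants rather than anything from the experiment), the anomalous precession frequency comes out to a couple hundred kilohertz for the ring's roughly 1.45-tesla field, and a one-part-per-million high spot in the field shifts it by only a fraction of a hertz:

```python
import math

# Rough sketch of the proportionality described above: the anomalous precession
# frequency is omega_a = a_mu * e * B / m_mu. The 1.45 T field is the ring's
# nominal value; the other numbers are rounded constants.
E_CHARGE = 1.602e-19      # elementary charge, C
M_MUON   = 1.883e-28      # muon mass, kg
A_MU     = 1.166e-3       # muon anomaly (g-2)/2, approximate

def precession_freq_hz(b_tesla):
    return A_MU * E_CHARGE * b_tesla / (M_MUON * 2 * math.pi)

nominal = precession_freq_hz(1.45)
bumped  = precession_freq_hz(1.45 * (1 + 1e-6))   # a 1 ppm high spot in the field
print(f"nominal: {nominal/1e3:.1f} kHz; shift from a 1 ppm bump: {bumped - nominal:.3f} Hz")
```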

Since the idea is to measure the precession rate to an extremely precise degree, the team needs to shape the magnetic field to a similar degree of uniformity. They want it to vary by no more than ten parts in a billion per centimeter. To put that in perspective, that’s like wanting a variation of no more than one second in nearly 32 years, or one sheet in a roll of toilet paper stretching from New York to London.
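
Those analogies can be sanity-checked with a few lines of arithmetic; the distance and sheet length below are assumed, order-of-magnitude numbers:

```python
# Quick check of the analogies above, using assumed, approximate numbers.
SECONDS_PER_YEAR = 365.25 * 24 * 3600                # ~3.16e7 s
print(f"1 second in 32 years      ~ {1 / (32 * SECONDS_PER_YEAR):.1e}")   # ~1e-9

NY_TO_LONDON_M = 5.6e6                               # roughly 5600 km, assumed
SHEET_LENGTH_M = 0.11                                # ~11 cm per sheet, assumed
print(f"1 sheet from NY to London ~ {SHEET_LENGTH_M / NY_TO_LONDON_M:.1e}")  # ~2e-8
```

Both land in the parts-per-billion ballpark that the shimming team is chasing.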

How do they do this? First, they need to measure the field they have. With a powerful electromagnet that will affect any metal object inside it, that’s pretty tricky. The solution is a marriage of high-tech and low-tech: a cart made of sturdy plastic and quartz, moved by a pulley attached to a motor and continuously tracked by a laser. On this cart are probes filled with petroleum jelly, with sensors measuring the rate at which the jelly’s protons spin in the magnetic field.

The laser can record the position of the cart to 25 microns, half the width of a human hair. Other sensors measure, to the micron, how far the top and bottom of the cart are from the magnet.

“The cart moves through the field as it is pulled around the ring,” Kiburg says. “It takes between two and two-and-a-half hours to go around the ring. There are more than 1500 locations around the path, and it stops every three centimeters for a short moment while the field is precisely measured in each location. We then stitch those measurements into a full map of the magnetic field.”
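
A toy version of that stitching step might look like the following; the readings are invented, and the real analysis is of course far more involved:

```python
import numpy as np

# Toy stitching of probe readings into an azimuthal field map (invented numbers).
n_stops = 1500                                        # roughly one stop every 3 cm
azimuth = np.linspace(0.0, 2 * np.pi, n_stops, endpoint=False)
field   = 1.45 * (1 + 5e-5 * np.sin(3 * azimuth))     # pretend ripple, ~100 ppm peak-to-peak

field_map = dict(zip(azimuth, field))                 # azimuth (rad) -> measured B (T)
mean_b = field.mean()
ppm = (field - mean_b) / mean_b * 1e6
print(f"mean field {mean_b:.4f} T, peak-to-peak non-uniformity {np.ptp(ppm):.0f} ppm")
```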

Erik Swanson of the University of Washington is the run coordinator for this effort, meaning he directs the team as they measure the field and perform the manually intensive shimming. He also designed the new magnetic resonance probes that measure the field, upgrading them from the technology used at Brookhaven.

“They’re functionally the same,” he says of the probes, “but the Brookhaven experiment started in the 1990s, and the old probes were designed before that. Any electronics that old, there’s the potential that they will stop working.”

Swanson says that the accuracy to which the team has had to position the magnet’s iron pieces to achieve the desired magnetic field surprised even him. When scientists first turned the magnet on in October, the field, measured at different places around the ring, varied by as much as 1400 parts per million. That may seem smooth, but to a tiny muon it looks like a mountain range of obstacles. In order to even it out, the Muon g-2 team makes hundreds of minuscule adjustments by hand.


Physical physics

Stationed around the ring are about 1000 knobs that control the ways the field could become non-uniform. But when that isn’t enough, the field can be shaped by taking parts of the magnet apart and inserting extremely small pieces of steel called shims, changing the field by thousandths of an inch.

There are 12 sections of the magnet, and it takes an entire day to adjust just one of those sections.

This process relies on simulations, calibrations and iterations, and with each cycle the team inches forward toward their goal, guided by mathematical predictions. Once they’re done with the process of carefully inserting these shims, some as thin as 12.5 microns, they reassemble the magnet and measure the field again, starting the process over, refining and learning as they go.
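
The simulate-adjust-remeasure loop can be caricatured as a linear-response problem: if you know, at least approximately, how much each adjustment changes the field at each point around the ring, a least-squares fit tells you the next set of corrections. A deliberately over-simplified sketch, with made-up numbers:

```python
import numpy as np

# Caricature of the iterative shimming loop: assume an approximately linear, known
# response of the field map to small adjustments and solve for corrections by
# least squares. Everything here is invented for illustration.
rng = np.random.default_rng(1)
n_points, n_adjusters = 72, 12
response = rng.normal(size=(n_points, n_adjusters))        # ppm change per unit adjustment

true_offsets = rng.normal(scale=40.0, size=n_adjusters)    # hidden mis-settings
field_ppm = response @ true_offsets + rng.normal(scale=5.0, size=n_points)

for step in range(3):
    correction, *_ = np.linalg.lstsq(response, -field_ppm, rcond=None)
    # "re-measure": apply the correction, plus a little measurement noise
    field_ppm = field_ppm + response @ correction + rng.normal(scale=2.0, size=n_points)
    print(f"iteration {step}: rms non-uniformity {field_ppm.std():.1f} ppm")
```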

“It’s fascinating to me how hard such a simple-seeming problem can be,” says Matthias Smith of the University of Washington, one of the students who helped design the plastic measuring robot. “We’re making very minute adjustments because this is a puzzle that can go together in multiple ways. It’s very complex.”

His colleague Rachel Osofsky, also of the University of Washington, agrees. Osofsky has helped put in more than 800 shims around the magnet, and says she enjoys the hands-on and collaborative nature of the work.

“When I first came aboard, I knew I’d be spending time working on the magnet, but I didn’t know what that meant,” she says. “You get your hands dirty, really dirty, and then measure the field to see what you did. Students later will read the reports we’re writing now and refer to them. It’s exciting.”

Similarly, the Muon g-2 team is constantly consulting the work of their predecessors who conducted the Brookhaven experiment, making improvements where they can. (One upgrade that may not be obvious is the very building that the experiment is housed in, which keeps the temperature steadier than the one used at Brookhaven and reduces shape changes in the magnet itself.)

Kiburg says the Muon g-2 team should be comfortable with the shape of the magnetic field sometime this summer. With the experiment’s beamline under construction and the detectors to be installed, the collaboration should be ready to start measuring particles by next summer. Swanson says that while the effort has been intense, it has also been inspiring.

“It’s a big challenge to figure out how to do all this right,” he says. “But if you know scientists, when a challenge seems almost impossible, that’s the one we all go for.”

by Andre Salles at June 28, 2016 03:24 PM

June 27, 2016

ZapperZ - Physics and Physicists

Landau's Nobel Prize in Physics
This is a fascinating article. It describes, using the Nobel prize archives, the process that led to Lev Landau's nomination and winning the Nobel Prize in physics. But more than that, it also describes the behind-the-scenes nominating process for the Nobel Prize.

I'm not sure if the process has changed significantly since then, but I would imagine that many of the mechanisms leading up to a nomination and winning the prize are similar.

Zz.

by ZapperZ (noreply@blogger.com) at June 27, 2016 05:39 PM

June 24, 2016

Andrew Jaffe - Leaves on the Line

The Sick Rose

Songs of Innocence and of Experience, page 39: “The Sick Rose” (Fitzwilliam copy)

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience

by Andrew at June 24, 2016 10:42 AM

Clifford V. Johnson - Asymptotia

Historic Hysteria

So, *that* happened...

[Image: the EU referendum result]

-cvj


by Clifford at June 24, 2016 05:35 AM

Clifford V. Johnson - Asymptotia

Concern…

Anyone else finding this terrifying? A snapshot from the Guardian's live results tracker* as of 19:45 PST.

[Image: referendum results so far]
-cvj

*BTW, I've been using their trackers a lot during the presidential primaries, they're very good.


by Clifford at June 24, 2016 02:54 AM

June 23, 2016

Symmetrybreaking - Fermilab/SLAC

The Higgs-shaped elephant in the room

Higgs bosons should mass-produce bottom quarks. So why is it so hard to see it happening?

Higgs bosons are born in a blob of pure concentrated energy and live only one-septillionth of a second before decaying into a cascade of other particles. In 2012, these subatomic offspring were the key to the discovery of the Higgs boson.

So-called daughter particles stick around long enough to show up in the CMS and ATLAS detectors at the Large Hadron Collider. Scientists can follow their tracks and trace the family trees back to the Higgs boson they came from.

But the particles that led to the Higgs discovery were actually some of the boson’s less common progeny. After recording several million collisions, scientists identified a handful of Z bosons and photons with a Higgs-like origin. The Standard Model of particle physics predicts that Higgs bosons produce those particles 2.5 and 0.2 percent of the time. Physicists later identified Higgs bosons decaying into W bosons, which happens about 21 percent of the time.

According to the Standard Model, the most common decay of the Higgs boson should be a transformation into a pair of bottom quarks. This should happen about 60 percent of the time.
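
Putting the quoted branching fractions side by side makes the imbalance obvious. A back-of-the-envelope sketch, using just the percentages mentioned above:

```python
# Branching fractions quoted above, as rough percentages of Higgs decays.
branching_percent = {
    "b bbar":        60.0,
    "WW":            21.0,
    "ZZ":             2.5,
    "photon photon":  0.2,
}

n_higgs = 1_000_000      # an arbitrary batch of Higgs bosons
for channel, pct in branching_percent.items():
    print(f"H -> {channel:14s} ~ {int(n_higgs * pct / 100):>7,} decays")
```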

The strange thing is, scientists have yet to discover it happening (though they have seen evidence).

According to Harvard researcher John Huth, a member of the ATLAS experiment, seeing the Higgs turning into bottom quarks is priority No. 1 for Higgs boson research.

“It would behoove us to find the Higgs decaying to bottom quarks because this is the largest interaction,” Huth says, “and it darn well better be there.”

If the Higgs-to-bottom-quarks decay were not there, scientists would be left completely dumbfounded.

“I would be shocked if this particle does not couple to bottom quarks,” says Jim Olsen, a Princeton researcher and Physics Coordinator for the CMS experiment. “The absence of this decay would have a very large and direct impact on the relative decay rates of the Higgs boson to all of the other known particles, and the recent ATLAS and CMS combined measurements are in excellent agreement with expectations.”

To be fair, the decay of a Higgs to two bottom quarks is difficult to spot.

When a dying Higgs boson produces twin Z or W bosons, they each decay into a pair of muons or electrons. These particles leave crystal clear signals in the detectors, making it easy for scientists to spot them and track their lineage. And because photons are essentially immortal beams of light, scientists can immediately spot them and record their trajectory and energy with electromagnetic detectors.

But when a Higgs births a pair of bottom quarks, they impulsively marry other quarks, generating huge unstable families which burgeon, break and reform. This chaotic cascade leaves a messy ancestry.

Scientists are developing special tools to disentangle the Higgs from this multi-generational subatomic soap opera. Unfortunately, there are no cheek swabs or Maury Povich to announce, “Higgs, you are the father!” Instead, scientists are working on algorithms that look for patterns in the energy these jets of particles deposit in the detectors.

“The decay of Higgs bosons to bottom quarks should have different kinematics from the more common processes and leave unique signatures in our detector,” Huth says. “But we need to deeply understand all the variables involved if we want to squeeze the small number of Higgs events from everything else.”
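
One standard handle in those algorithms is the invariant mass of the two bottom-quark jets: for a genuine H → bb decay it should cluster near 125 GeV, while background pairs form a smooth continuum. A minimal sketch of that single ingredient, with invented four-momenta:

```python
import math

def invariant_mass(j1, j2):
    """Invariant mass of two jets given as (E, px, py, pz) in GeV."""
    e  = j1[0] + j2[0]
    px = j1[1] + j2[1]
    py = j1[2] + j2[2]
    pz = j1[3] + j2[3]
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Two invented, back-to-back b-jet four-momenta:
jet1 = (62.5,  50.0,  30.0,  22.5)
jet2 = (62.5, -50.0, -30.0, -22.5)
print(f"dijet invariant mass: {invariant_mass(jet1, jet2):.1f} GeV")   # ~125 GeV, Higgs-like
```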

Physicist Usha Mallik and her ATLAS team of researchers at the University of Iowa have been mapping the complex bottom quark genealogies since shortly after the Higgs discovery in 2012.

“Bottom quarks produce jets of particles with all kinds and colors and flavors,” Mallik says. “There are fat jets, narrow jets, distinct jets and overlapping jets. Just to find the original bottom quarks, we need to look at all of the jet’s characteristics. This is a complex problem with a lot of people working on it.”

This year the LHC will produce five times more data than it did last year and will generate Higgs bosons 25 percent faster. Scientists expect that by August they will be able to identify this prominent decay of the Higgs and find out what it can tell them about the properties of this unique particle.

by Sarah Charley at June 23, 2016 01:00 PM

June 22, 2016

Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at the rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next days.

So who's winning this race? What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. The contest is much more fierce than one might expect: it takes 8 papers on the topic to win, and 7 papers to even get on the podium. Among the 3 authors with 7 papers the final classification is decided not by trial by combat but by the citation count. The result is (drums):

Citations, well... The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Congratulations to all the winners.  For all the rest, wish you more luck and persistence in the next edition,  provided it will take place.

My scripts are not perfect (in previous versions I missed crucial contenders, as pointed out in the comments), so let me know in case I missed your papers or miscalculated citations. 
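
For the curious, the bookkeeping behind such a ranking is nothing exotic. A toy version, fed with a made-up paper list instead of real arXiv or INSPIRE records, could look like:

```python
from collections import Counter, defaultdict

# Toy ranking script: the input is a made-up list of (authors, citations) records,
# standing in for real arXiv/INSPIRE data.
papers = [
    (["A. Targaryen", "B. Stark"], 120),
    (["A. Targaryen"], 95),
    (["C. Lannister", "B. Stark"], 40),
]

paper_count = Counter()
citation_count = defaultdict(int)
for authors, cites in papers:
    for author in authors:
        paper_count[author] += 1
        citation_count[author] += cites

for author, n in paper_count.most_common():
    print(f"{author:15s} papers: {n}  citations: {citation_count[author]}")
```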

by Jester (noreply@blogger.com) at June 22, 2016 05:00 PM

Jester - Resonaances

Off we go
The LHC is back in action since last weekend, again colliding protons with 13 TeV energy. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are known to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century.  Or the most disappointing.

The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is a meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream at a rate of a femtobarn per week or more. For the near future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar, ...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...

By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling for the new physics within reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years. If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental, or that systematic uncertainties in astrophysics are not so bad after all... In any scenario, by December, when the first analyses of the full 2016 dataset are released, we will know infinitely more than we do today.
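
The reason those luminosity milestones matter is just N = sigma x L. With a purely hypothetical signal cross section of a few femtobarns, the jump from a few inverse femtobarns to twenty is the difference between a handful of events and a visible peak:

```python
# N = cross section x integrated luminosity, with a made-up signal cross section.
sigma_fb = 5.0                          # hypothetical sigma x BR into diphotons, in fb
for lumi_fb in (3.0, 20.0):             # roughly the mid-July and end-of-2016 targets
    n_signal = sigma_fb * lumi_fb
    print(f"{lumi_fb:5.1f} fb^-1 -> ~{n_signal:.0f} expected signal events (before efficiency)")
```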

So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and twitter starting from July.

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jester - Resonaances

Weekend Plot: The king is dead (long live the king)
The new diphoton king has been discussed at length in the blogosphere, but the late diboson king also deserves a word or two. Recall that last summer ATLAS announced a 3 sigma excess in the dijet invariant mass distribution where each jet resembles a fast moving W or Z boson decaying to a pair of quarks. This excess can be interpreted as a 2 TeV resonance decaying to a pair of W or Z bosons. For example, it could be a heavy cousin of the W boson, W' in short, decaying to a W and a Z boson. Merely a month ago this paper argued that the excess remains statistically significant after combining several different CMS and ATLAS diboson resonance run-1 analyses in hadronic and leptonic channels of W and Z decay. However, the hammer came down seconds before the diphoton excess was announced: diboson resonance searches based on the LHC 13 TeV collision data do not show anything interesting around 2 TeV. This is a serious problem for any new physics interpretation of the excess since, for this mass scale, the statistical power of the run-2 and run-1 data is comparable. The tension is summarized in this plot:
The green bars show the 1 and 2 sigma best fit cross section to the diboson excess. The one on the left takes into account only the hadronic channel in ATLAS, where the excess is most significant; the one on the right is based on the combined run-1 data. The red lines are the limits from run-2 searches in ATLAS and CMS, scaled to 8 TeV cross sections assuming W' is produced in quark-antiquark collisions. Clearly, the best fit region for the 8 TeV data is excluded by the new 13 TeV data. I display results for the W' hypothesis, however conclusions are similar (or more pessimistic) for other hypotheses leading to WW and/or ZZ final states. All in all, the ATLAS diboson excess is not formally buried yet, but at this point any reversal of fortune would be a miracle.
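
That "scaled to 8 TeV cross sections" step is, at its core, a one-line rescaling: divide the 13 TeV limit by the gain in quark-antiquark parton luminosity between 8 and 13 TeV at a 2 TeV mass. A sketch with both numbers invented for illustration (neither is an actual ATLAS or CMS value):

```python
# Rescale a hypothetical 13 TeV cross-section limit to its 8 TeV equivalent.
# Both numbers below are invented placeholders.
limit_13tev_fb = 20.0     # hypothetical 13 TeV upper limit on sigma x BR, in fb
qqbar_gain = 6.0          # assumed 8 -> 13 TeV parton-luminosity gain for a 2 TeV qqbar resonance

limit_8tev_equiv = limit_13tev_fb / qqbar_gain
print(f"equivalent 8 TeV limit: ~{limit_8tev_equiv:.1f} fb")
```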

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jon Butterworth - Life and Physics

Don’t let’s quit

This doesn’t belong on the Guardian Science pages, because even though universities and science will suffer if Britain leaves the EU, that’s not my main reason for voting ‘remain’. But lots of friends have been writing or talking about their choice, and the difficulties of making it, and I feel the need to write my own reasons down even if everyone is saturated by now. It’s nearly over, after all.

Even though the EU is obviously imperfect, a pragmatic compromise, I will vote to stay in with hope and enthusiasm. In fact, I’ll do so partly because it’s an imperfect, pragmatic compromise.

I realise there are a number of possible reasons for voting to leave the EU, some better than others, but please don’t.

Democracy

Maybe you’re bothered because EU democracy isn’t perfect. Also we can get outvoted on some things (these are two different points. Being outvoted sometimes is actually democratic. Some limitations on EU democracy are there to stop countries being outvoted by other countries too often). But it sort of works and it can be improved, especially if we took EU elections more seriously after all this. And we’re still ‘sovereign’, simply because we can vote to leave if we get outvoted on something important enough.

Misplaced nostalgia and worse

Maybe you don’t like foreigners, or you want to ‘Take Britain back’  (presumably to some fantasy dreamworld circa 1958). Unlucky; the world has moved on and will continue to do so whatever the result this week. I don’t have a lot of sympathy, frankly, and I don’t think this applies to (m)any of my ‘leave’ friends.

Lies

Maybe you believed the lies about the £350m we don’t send, which wouldn’t save the NHS anyway even if we did, or the idea that new countries are lining up to join and we couldn’t stop them if we wanted. If so please look at e.g. https://fullfact.org/europe/ for help. Some people I love and respect have believed some of these lies, and that has made me cross. These aren’t matters of opinion, and the fact that the ‘leave’ campaign repeats them over and over shows both their contempt for the intelligence of voters and the weakness of their case. If you still want to leave, knowing the facts, then fair enough. But don’t do it on a lie.

We need change

Maybe you have a strong desire for change, because bits of British life are rubbish and unfair. In this case, the chances are your desire for change is directed at entirely the wrong target. The EU is not that powerful in terms of its direct effects on everyday life. The main thing it does is provide a mechanism for resolving common issues between EU member states. It is  a vast improvement on the violent means used in previous centuries. It spreads rights and standards to the citizens and industries of members states, making trade and travel safer and easier. And it amplifies our collective voice in global politics.

People who blame the EU for the injustices of British life are being made fools of by unscrupulous politicians, media moguls and others who have for years been content to see the destruction of British industry, the undermining of workers’ rights, the underfunding of the NHS and education, relentless attacks on national institutions such as the BBC, neglect of whole regions of the country and more.

These are the people now telling us to cut off our nose to spite our face, and they are exploiting the discontent they have fostered to persuade us this would be a good idea, by blaming the EU for choices made by UK governments.

They are quite happy for industry to move to lower-wage economies in the developing world when it suits them, but they don't want us agreeing common standards, protections and practices with our EU partners. They don't like nation states clubbing together, because that can make trouble for multinationals, and (in principle at least) threatens their ability to cash in on exploitative labour practices and tax havens. They would much rather play nation off against nation.

If…

If we vote to leave, the next few years will be dominated by attempts to negotiate something from the wreckage, against the background of a shrinking economy and a dysfunctional political class.  This will do nothing to fix inequality and the social problems we face (and I find it utterly implausible that people like Bojo, IDS or Farage would even want that). Those issues will be neglected or worse. Possibly this distraction, which is already present, is one reason some in the Conservative Party have involved us all in their internal power struggles.

If we vote remain, I hope the desire for change is preserved beyond Thursday, and is focussed not on irresponsible ‘blame the foreigner’ games, but on real politics, of hope and pragmatism, where it can make a positive difference.

I know there’s no physics here. This is the ‘life’ bit, and apart from the facts, it’s just my opinion. Before writing it I said this on twitter:

and it's probably still true that it's better than the above. Certainly it's shorter. But I had to try my own words.

I’m not going to enable comments here since they can be added on twitter and facebook if you feel the urge, and I can’t keep up with too many threads.


Filed under: Politics

by Jon Butterworth at June 22, 2016 06:19 AM

June 21, 2016

Symmetrybreaking - Fermilab/SLAC

All four one and one for all

A theory of everything would unite the four forces of nature, but is such a thing possible?

Over the centuries, physicists have made giant strides in understanding and predicting the physical world by connecting phenomena that look very different on the surface. 

One of the great success stories in physics is the unification of electricity and magnetism into the electromagnetic force in the 19th century. Experiments showed that electrical currents could deflect magnetic compass needles and that moving magnets could produce currents.

Then physicists linked another force, the weak force, with that electromagnetic force, forming a theory of electroweak interactions. Some physicists think the logical next step is merging all four fundamental forces—gravity, electromagnetism, the weak force and the strong force—into a single mathematical framework: a theory of everything.

Those four fundamental forces of nature are radically different in strength and behavior. And while reality has cooperated with the human habit of finding patterns so far, creating a theory of everything is perhaps the most difficult endeavor in physics.

“On some level we don't necessarily have to expect that [a theory of everything] exists,” says Cynthia Keeler, a string theorist at the Niels Bohr Institute in Denmark. “I have a little optimism about it because historically, we have been able to make various unifications. None of those had to be true.”

Despite the difficulty, the potential rewards of unification are great enough to keep physicists searching. Along the way, they’ve discovered new things they wouldn’t have learned had it not been for the quest to find a theory of everything.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

United we hope to stand

No one has yet crafted a complete theory of everything.

It’s hard to unify all of the forces when you can’t even get all of them to work at the same scale. Gravity in particular tends to be a tricky force, and no one has come up with a way of describing the force at the smallest (quantum) level.

Physicists such as Albert Einstein thought seriously about whether gravity could be unified with the electromagnetic force. After all, general relativity had shown that electric and magnetic fields produce gravity and that gravity should also make electromagnetic waves, or light. But combining gravity and electromagnetism, a mission called unified field theory, turned out to be far more complicated than making the electromagnetic theory work. This was partly because there was (and is) no good theory of quantum gravity, but also because physicists needed to incorporate the strong and weak forces.

A different idea, quantum field theory, combines Einstein’s special theory of relativity with quantum mechanics to explain the behavior of particles, but it fails horribly for gravity. That’s largely because anything with energy (or mass, thanks to relativity) creates a gravitational attraction—including gravity itself. To oversimplify somewhat, the gravitational interaction between two particles has a certain amount of energy, which produces an additional gravitational interaction with its own energy, and so on, spiraling to higher energies with each extra piece.
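
One way to make that spiral concrete: the dimensionless strength of gravity between two particles grows with the square of their energy, and only reaches order one near the Planck energy. A quick numerical sketch with rounded constants:

```python
# Dimensionless gravitational coupling alpha_G(E) = G * E^2 / (hbar * c^5),
# which reaches ~1 near the Planck energy (~1.2e19 GeV). Constants are rounded.
G, HBAR, C = 6.674e-11, 1.055e-34, 2.998e8     # SI units
GEV_IN_J = 1.602e-10

def alpha_gravity(energy_gev):
    e_joule = energy_gev * GEV_IN_J
    return G * e_joule**2 / (HBAR * C**5)

for e in (1e2, 1e10, 1.2e19):                   # LHC-ish, intermediate, ~Planck energy (GeV)
    print(f"E = {e:.1e} GeV  ->  alpha_G ~ {alpha_gravity(e):.1e}")
```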

“One of the first things you learn about quantum gravity is that quantum field theory probably isn’t the answer,” says Robert McNees, a physicist at Loyola University Chicago. “Quantum gravity is hard because we have to come up with something new.”

Illustration by Sandbox Studio, Chicago with Corinne Mucha

An evolution of theories

The best-known candidate for a theory of everything is string theory, in which the fundamental objects are not particles but strings that stretch out in one dimension.  

Strings were proposed in the 1970s to try to explain the strong force. This first string theory proved to be unnecessary, but physicists realized it could be joined to another theory called Kaluza-Klein theory as a possible explanation of quantum gravity.

String theory expresses quantum gravity in ten dimensions rather than the four, bypassing all the problems of the quantum field theory approach but introducing other complications, namely an extra six dimensions that must be curled up on a scale too small to detect.

Unfortunately, string theory has yet to reproduce the well-tested predictions of the Standard Model.

Another well-known idea is the sci-fi-sounding “loop quantum gravity,” in which space-time on the smallest scales is made of tiny loops in a flexible mesh that produces gravity as we know it.

The idea that space-time is made up of smaller objects, just as matter is made of particles, is not unique to that theory. There are many others with equally Jabberwockian names: twistors, causal set theory, quantum graphity and so on. Granular space-time might even explain why our universe has four dimensions rather than some other number.

Loop quantum gravity’s trouble is that it can’t replicate gravity at large scales, such as the size of the solar system, as described by general relativity.

None of these theories has yet succeeded in producing a theory of everything, in part because it's so hard to test them.

“Quantum gravity is expected to kick in only at energies higher than anything that we can currently produce in a lab,” says Lisa Glaser, who works on causal set quantum gravity at the University of Nottingham. “The hope in many theories is now to predict cumulative effects,” such as unexpected black hole behavior during collisions like the ones detected recently by LIGO.

Today, many of the theories first proposed as theories of everything have moved beyond unifying the forces. For example, much of the current research in string theory is potentially important for understanding the hot soup of particles known as the quark-gluon plasma, along with the complex behavior of electrons in very cold materials like superconductors—something seemingly as far removed from quantum gravity as could be. 

“On a day-to-day basis, I may not be doing a calculation that has anything directly to do with string theory,” Keeler says. “But it’s all about these ideas that came from string theory.”

Finding a theory of everything is unlikely to change the way most of us go about our business, even if our business is science. That’s the normal way of things: Chemists and electricians don't need to use quantum electrodynamics, even though that theory underlies their work. But finding such a theory could change the way we think of the universe on a fundamental level.

Even a successful theory of everything is unlikely to be a final theory. If we’ve learned anything from 150 years of unification, it’s that each step toward bringing theories together uncovers something new to learn.

by Matthew R. Francis at June 21, 2016 01:00 PM

Axel Maas - Looking Inside the Standard Model

How to search for dark, unknown things: A bachelor thesis
Today, I would like to write about a recently finished bachelor thesis on the topic of dark matter and the Higgs. Though I will also present the results, the main aim of this entry is to describe an example of such a bachelor thesis in my group. I will try to follow up with similar entries in the future, to give those interested in working in particle physics an idea of what one can already do at a very early stage in one's studies.

The framework of the thesis is the idea that dark matter could interact with the Higgs particle. This is a serious possibility, as both objects are somehow related to mass. There is also not yet any substantial reason why this should not be the case. The unfortunate problem is only: how strong is this effect? Can we measure it, e.g. in the experiments at CERN?

In a master thesis, we are looking into the dynamical features of this idea. This is ongoing, and something I will certainly write about later. Knowing the dynamics, however, is only the first step towards connecting the theory to experiment. To do so, we need the basic properties of the theory. This input is then put through a simulation of what happens in the experiment. Only this final result is really interesting for experimental physicists. They then look at what any imperfections of the experiments change, and can then conclude whether they will be able to detect something. Or not.

In the thesis, we did not yet have the results from the master student's work, so we parametrized the possible outcomes. This meant mainly having the mass of the dark matter particle and the strength of its interaction with the Higgs as parameters to play around with. This gave us what we call an effective theory. Such a theory does not describe every detail, but it is sufficiently close for studying a particular aspect of the full theory. In this case: how dark matter should interact with the Higgs at the CERN experiments.

With this effective theory, it was then possible to run simulations of what happens in the experiment. Since dark matter cannot, as the name says, be directly seen, we needed a marker to say that it has been there. For that purpose we chose the so-called associated production mode.

We knew that the dark matter would escape the experiment undetected. In jargon, this is called missing energy, since we miss the energy of the dark matter particles when we account for all we see. Since we knew what went in, and know that what goes in must come out, anything not accounted for must have been carried away by something we could not directly see. To make sure that this came from an interaction with the Higgs, we needed a tracer that a Higgs had been involved. The simplest solution was to require that there is still a Higgs in the end. There are also deeper reasons why dark matter in this theory should not only arrive with a Higgs particle, but should also be produced from a Higgs particle before the emission of the dark matter particles. The simplest way to check for this is to require, for technical reasons, that besides the Higgs there is also a so-called Z boson in the end. Thus, we had what we call a signature: look for a Higgs, a Z boson, and missing energy.
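
The "what goes in must come out" bookkeeping has a simple computational core: the missing transverse momentum is the negative vector sum of everything the detector does see. A toy sketch with invented visible particles:

```python
import math

# Toy missing-transverse-momentum bookkeeping: whatever visible momentum does not
# balance must have been carried off by something unseen (dark matter or neutrinos).
visible = [                       # invented (px, py) of reconstructed objects, in GeV
    (45.0, 10.0),                 # e.g. a lepton from the Z
    (-20.0, 35.0),                # another lepton
    (30.0, -60.0),                # a jet from the Higgs candidate
]

px_miss = -sum(px for px, _ in visible)
py_miss = -sum(py for _, py in visible)
print(f"missing transverse momentum: {math.hypot(px_miss, py_miss):.1f} GeV")
```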

There is, however, one unfortunate thing in known particle physics which makes this more complicated: neutrinos. These particles are also essentially undetectable for an experiment at the LHC. Thus, when produced, they will also escape undetected as missing energy. Since we detect neither dark matter nor neutrinos, we cannot decide what actually escaped. Unfortunately, tagging with the Higgs and the Z does not help, as neutrinos can also be produced together with them. This is what we call a background to our signal. Thus, it was necessary to account for this background.

Fortunately, there are experiments which can, with a lot of patience, detect neutrinos. They are very different from the ones we have at the LHC, but they have given us a lot of information on neutrinos. Hence, we knew how often neutrinos would be produced in the experiment. So we would only need to remove this known background from what the simulation gives. Whatever is left would then be the signal of dark matter. If the remainder is large enough, we would be able to see the dark matter in the experiment. Of course, there are many subtleties involved in this process, which I will skip.

So the student simulated both cases and determined the signal strength. From that she could deduce that the signal grows quickly with the strength of the interaction. She also found that the signal becomes stronger if the dark matter particles become lighter. That is because there is only a finite amount of energy available to produce them, and the more energy is left over to make the dark matter particles move, the easier it gets to produce them, an effect known in physics as phase space. In addition, she found that if the dark matter particles have half the mass of the Higgs, their production also becomes very efficient. The reason is a resonance: just like two noises amplify each other if they are at the same frequency, such amplifications can happen in particle physics.
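
To give a flavour of what such a scan produces, here is a deliberately crude toy: a signal that grows with the coupling squared, shrinks as the dark-matter mass eats into the available phase space, gets a boost near half the Higgs mass, and is compared against an assumed neutrino background. All functional forms and numbers are invented for illustration; they are not the thesis results.

```python
import math

# Crude toy of the parameter scan: invented functional forms, not the real model.
M_HIGGS = 125.0          # GeV
LUMI_FB = 100.0          # assumed integrated luminosity, fb^-1
BKG_EVENTS = 400.0       # assumed neutrino ("missing energy") background events

def toy_signal_events(m_dark, coupling):
    """Invented signal yield: coupling^2, a phase-space suppression, a resonance bump."""
    if 2 * m_dark >= M_HIGGS + 40.0:            # crude kinematic cutoff
        return 0.0
    phase_space = max(0.0, 1.0 - (2 * m_dark / (M_HIGGS + 40.0)) ** 2) ** 1.5
    resonance = 1.0 + 4.0 * math.exp(-((m_dark - M_HIGGS / 2) / 5.0) ** 2)
    return coupling**2 * 50.0 * phase_space * resonance * LUMI_FB / 100.0

for m_dark in (20.0, 62.5, 80.0):
    s = toy_signal_events(m_dark, coupling=0.5)
    print(f"m_dark = {m_dark:5.1f} GeV  ->  S ~ {s:6.1f},  S/sqrt(B) ~ {s / math.sqrt(BKG_EVENTS):.2f}")
```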

The final outcome of the bachelor thesis was thus to tell us, for given values of the two parameters of the effective theory, how strong our signal would be. Once we know these values from our microscopic theory in the master project, we will know whether we have a chance to see these particles in this type of experiment.

by Axel Maas (noreply@blogger.com) at June 21, 2016 07:35 AM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
July 27, 2016 03:06 PM
All times are UTC.

Suggest a blog:
planet@teilchen.at