Particle Physics Planet

April 20, 2015

Christian P. Robert - xi'an's og

simulating correlated Binomials [another Bernoulli factory]

This early morning, just before going out for my daily run around The Parc, I checked X validated for new questions and came upon that one. Namely, how to simulate X a Bin(8,2/3) variate and Y a Bin(18,2/3) variate such that corr(X,Y)=0.5. (No reason or motivation is provided for this constraint.) I thought of the following (presumably well-known) resolution: break the two binomials into sums of 8 and 18 Bernoulli variates, respectively, and make some of those Bernoulli variates common to both sums. For this specific set of values (8,18,0.5), since 8×18=144=12², the solution is to share 0.5×12=6 variates. (The probability of success does not matter.) While running, I first thought this was a very artificial problem because of this occurrence of 8×18 being a perfect square, 12², and of corr(X,Y)×12 being an integer. A wee bit later I realised that all positive values of corr(X,Y) can be achieved by randomisation, i.e., by identifying a Bernoulli variate in X with a Bernoulli variate in Y with a certain probability ϖ. For negative correlations, one can use the (U,1-U) trick, namely to write both Bernoulli variates as

X_1=\mathbb{I}(U\le p)\quad Y_1=\mathbb{I}(U\ge 1-p)

in order to minimise the probability they coincide.

I also checked this result with an R simulation

> z=rbinom(10^8,6,.66)
> y=z+rbinom(10^8,12,.66)
> x=z+rbinom(10^8,2,.66)
> cor(x,y)
[1] 0.5000539
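For readers outside R, the same shared-Bernoulli construction, together with the (U,1-U) trick for negative correlation, can be sketched in Python with NumPy (a sketch, not the original R code; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 10**6, 2 / 3

# Positive correlation: share Z ~ Bin(6, p) between X and Y, so that
# corr(X, Y) = 6 / sqrt(8 * 18) = 0.5 regardless of p.
z = rng.binomial(6, p, n)
x = z + rng.binomial(2, p, n)    # X ~ Bin(8, p)
y = z + rng.binomial(12, p, n)   # Y ~ Bin(18, p)
c_pos = np.corrcoef(x, y)[0, 1]

# Negative correlation for one Bernoulli pair via the (U, 1-U) trick:
# X1 = I(U <= p), Y1 = I(U >= 1-p), giving corr = -(1-p)/p = -0.5 for p = 2/3.
u = rng.random(n)
x1 = (u <= p).astype(float)
y1 = (u >= 1 - p).astype(float)
c_neg = np.corrcoef(x1, y1)[0, 1]

print(c_pos, c_neg)  # approximately 0.5 and -0.5
```

Note that the (U,1-U) trick cannot reach arbitrarily negative correlations: for a pair of Bernoulli(p) variates with p=2/3, the minimum achievable correlation is -(1-p)/p = -1/2.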

Searching on Google immediately gave me a link to Stack Overflow with an earlier solution based on the same idea, and smarter R code.

Filed under: Books, Kids, pictures, R, Running, Statistics, University life Tagged: binomial distribution, cross validated, inverse cdf, Jacob Bernoulli, Parc de Sceaux, R, random simulation, stackoverflow

by xi'an at April 20, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

What Color Does the Internet Think Pluto Is?
Astronomers have known for a long time that Pluto’s surface is reddish, so where did the common idea that Pluto is blue come from?

April 20, 2015 08:08 PM

astrobites - astro-ph reader's digest

The Lives of the Longest Lived Stars
  • Title: The End of the Main Sequence
  • Authors: Gregory Laughlin, Peter Bodenheimer, and Fred C. Adams
  • First Author’s Institution: University of Michigan (when published), University of California at Santa Cruz (current)
  • Publication year: 1997

Heavy stars live like rock stars: they live fast, grow big, and die young. Low-mass stars, on the other hand, are more persistent and live longer. The ages of the former are measured in millions to billions of years; the expected lifetimes of the latter are measured in trillions. Low-mass stars are the tortoise that beats the hare.


Figure 1: An artist’s impression of a low-mass dwarf star. Figure from here.

But why do we want to study the evolution of low-mass stars, and their less than imminent demise? There are various good reasons. First, galaxies are composed of stars —and other things, but here we focus on the stars. Second, low-mass stars are by far the most numerous stars in the galaxy: about 70% of stars in the Milky Way are less than 0.3 solar masses (also denoted 0.3M). Third, low-mass stars provide useful insights into stellar evolution: if you want to understand why higher-mass stars evolve in a certain way —e.g. develop into red giants— it is helpful to take a careful look at why the lowest-mass stars do not.

Today's paper was published in 1997, and marked the first time the evolution and long-term fate of the lowest-mass stars were calculated. It still gives a great overview of their lifespans, which we look at in this astrobite.

Stellar evolution: The life of a 0.1M star

The authors use numerical methods to evolve the lowest mass stars. The chart below summarizes the lifespan of a 0.1M star on the Hertzsprung-Russell diagram, which plots a star’s luminosity as a function of effective temperature. The diagram is the star’s Facebook wall; it gives insight into events in the star’s life. Let’s dive in and follow the star’s lifespan, starting from the beginning.

The star starts out as a protostar, a condensing molecular cloud that descends down the Hayashi track. As the protostar condenses, it releases gravitational energy, gets hotter, and its internal pressure increases. After about 2 billion years of contraction, hydrogen fusion starts in the core. We have reached the Zero Age Main Sequence (ZAMS), where the star will spend most of its life, fusing hydrogen to helium.
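The ~2-billion-year contraction phase is essentially the Kelvin-Helmholtz timescale, the time needed to radiate away the released gravitational energy. A rough Python estimate (the radius and luminosity values below are illustrative assumptions, not numbers from the paper):

```python
# Kelvin-Helmholtz (thermal) timescale t_KH ~ G M^2 / (R L):
# the time for a contracting protostar to radiate its gravitational energy.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
L_SUN = 3.828e26    # solar luminosity, W
YEAR = 3.156e7      # seconds per year

M = 0.1 * M_SUN     # a 0.1 solar-mass protostar
R = 0.1 * R_SUN     # assumed radius near the end of contraction (illustrative)
L = 1e-3 * L_SUN    # assumed average luminosity (illustrative)

t_kh = G * M**2 / (R * L) / YEAR
print(f"{t_kh:.1e} yr")  # of order a few billion years
```

The result is of the same order as the contraction time quoted above; the exact number depends on how the radius and luminosity evolve during the descent.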

Figure 2: The life a 0.1M star shown on the Hertzsprung-Russell diagram, where temperature increases to the left. Interesting life-events labelled. Figure 1 from the paper, with an annotated arrow.


The fusion process creates two isotopes of helium: 3He, an intermediate product, and 4He, the end product. The inset chart plots the core composition of H, 3He, and 4He. We see that for the first trillion (note trillion) years hydrogen drops, while 4He increases. 3He reaches a maximum, and then tapers off. As the star’s average molecular weight increases, the star grows hotter and more luminous. It moves to the upper left on the diagram. The star has now been evolving for roughly 5.7 trillion years, slowly turning into a hot helium dwarf.

The red arrow on the diagram marks a critical juncture in the star's life. Until now, the energy created by fusion has been transported by convection, which heats up the stellar material, causing it to move and mix with colder parts of the star, much the same way a conventional radiator heats your room. This has kept the star well mixed, maintaining a homogeneous chemical composition throughout. Now the physics behind the energy transport changes. The increasing amount of helium lowers the opacity of the star, a measure of its impenetrability to radiation. Lower opacity makes it easier for photons to travel large distances inside the star, making radiation more effective than convection at transporting energy. We say that the stellar core becomes radiative. This causes the entire star to contract and produces a sudden decline in luminosity (see red arrow).


Figure 3: The interior of a 0.1M star. The red arrow in Figure 2 marks the point where the star’s core changes from being convective to radiative. Figure from here.

Now the evolutionary timescale accelerates. The core, now pure helium, continues to grow in mass as hydrogen is burned in a shell around it. On the Hertzsprung-Russell diagram the star moves rapidly to higher temperatures, eventually growing hotter than the current Sun, though only 1% as bright. Then the star turns a corner: it starts to cool off, the shell source is slowly extinguished, and the luminosity decreases. The star is on the cooling curve, moving towards Florida on the Hertzsprung-Russell diagram, on its way to becoming a low-mass helium white dwarf.

The total nuclear burning lifetime of the star is somewhat more than 6 trillion years, and during that time the star used up 99% of its initial hydrogen; the Sun will only burn about 10%. Incredible efficiency.
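That lifetime is consistent with a back-of-envelope nuclear timescale. A rough Python estimate (the average luminosity of 10⁻³ L☉ is an illustrative assumption; the 0.7% mass-to-energy efficiency of hydrogen fusion and the 99% burned fraction come from standard physics and the paper, respectively):

```python
# Nuclear burning lifetime t ~ f * eps * M * c^2 / L, where f is the fraction
# of the star's hydrogen burned and eps the H -> He mass-to-energy efficiency.
M_SUN = 1.989e30    # solar mass, kg
L_SUN = 3.828e26    # solar luminosity, W
C = 2.998e8         # speed of light, m/s
YEAR = 3.156e7      # seconds per year

M = 0.1 * M_SUN     # a 0.1 solar-mass star
L = 1e-3 * L_SUN    # assumed average luminosity (illustrative)
eps = 0.007         # mass fraction converted to energy by H fusion
f = 0.99            # fraction of hydrogen burned (from the paper)

t = f * eps * M * C**2 / L / YEAR
print(f"{t:.1e} yr")  # of order ten trillion years
```

The estimate lands within a factor of a few of the ~6 trillion years found by the full simulation, which tracks the luminosity as it changes over the star's life.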

The lifespans of 0.06M – 0.20M stars

Additionally, the authors compare the lifespans of stars with masses around that of the 0.1M star. Their results are shown in Figure 4. The lightest object, a 0.06M star, never starts fusing. Instead, it rapidly cools and fades away as a brown dwarf. Stars with masses between 0.08M and 0.16M have lives similar to the star in Figure 2. All of them move increasingly to the left on the Hertzsprung-Russell diagram after developing a radiative core, and the radiative cores appear progressively earlier in the evolution as the masses increase. Stars in the mass range 0.16M-0.20M behave differently, and the authors mark them as an important transition group: these stars show a growing ability to swell, compared to the lighter stars. This property is what ultimately drives even higher-mass stars to become red giants.


Figure 4: The evolution of stars with masses between 0.06M and 0.25M shown on a Hertzsprung-Russell diagram. The inset chart shows that stellar lifetimes drop with increasing mass. Figure 2 from the paper.


Fusing hydrogen slow and steady wins the stellar age-race. We see that the lowest-mass stars can reach ages that greatly exceed the current age of the universe — by a whopping factor of 100-1000! These stars are both the longest lived and the most numerous in the galaxy and the universe. Most of the stellar evolution that will ever occur is yet to come.

by Gudmundur Stefansson at April 20, 2015 07:44 PM

ZapperZ - Physics and Physicists

Cyclotron Radiation From One Electron
It is a freakingly cool experiment!

We now can see the cyclotron radiation from a single electron, folks!

The researchers plotted the detected radiation power as a function of time and frequency (Fig. 2). The bright, upward-angled streaks of radiation indicate the radiation emitted by a single electron. It is well known theoretically that a circling electron continuously emits radiation. As a result, it gradually loses energy and orbits at a rate that increases linearly in time. The detected radiation streaks have the same predicted linear dependence, which is what allowed the researchers to associate them with a single electron. 

Of course, we have seen such effects from many electrons in synchrotron rings all over the world, but to see it not only for one electron, but also to watch how the electron loses energy as it orbits, is rather neat. It reinforces the fact that we can't really imagine electrons "orbiting" a nucleus in an atom in the classical way, because if they did, we would detect such cyclotron radiation and they would eventually crash into the nucleus.
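The upward-drifting frequency streaks have a simple origin: the relativistic cyclotron frequency is f = eB/(2πγmₑ), so as the electron radiates away kinetic energy, γ falls and f climbs. A toy Python illustration (the 1 T field and 30 keV starting energy are illustrative assumptions, not the experiment's actual parameters):

```python
import math

# Relativistic cyclotron frequency f = e B / (2 pi gamma m_e).
# As the electron radiates and loses kinetic energy, gamma decreases,
# so the emitted frequency drifts upward -- the observed chirp.
E_CHARGE = 1.602e-19   # elementary charge, C
M_E = 9.109e-31        # electron mass, kg
ME_KEV = 511.0         # electron rest energy, keV
B = 1.0                # magnetic field, tesla (illustrative)

def cyclotron_freq_ghz(kinetic_kev):
    gamma = 1.0 + kinetic_kev / ME_KEV
    return E_CHARGE * B / (2 * math.pi * gamma * M_E) / 1e9

f_start = cyclotron_freq_ghz(30.0)   # before radiating
f_later = cyclotron_freq_ghz(29.0)   # after losing 1 keV
print(f_start, f_later)              # frequency increases as energy is lost
```

The few-per-mille frequency shift per keV is what lets a measurement of the emitted frequency translate into a precise electron energy.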

But I also find it interesting that this has more to do with the effort to determine the mass of the neutrino, independently of neutrino mass oscillations, by measuring the electron's energy to high accuracy in beta decay.


by ZapperZ at April 20, 2015 07:13 PM

Peter Coles - In the Dark

SEPnet Awayday

Here I am in Easthampstead Park Conference Centre after a hard day being away at an awayday. In fact we’ve been so busy that I’ve only just checked into my room (actually it’s a suite) and shall very soon be attempting to find the bar so I can have a drink. I’m parched.

The place is very nice. Here’s a picture from outside:


I’m told it is very close to Broadmoor, the famous high-security psychiatric hospital, although I’m sure that wasn’t one of the reasons for choice of venue.

I have to attend quite a few of these things for one reason or another. This one is on the Future and Sustainability of the South East Physics Network, known as SEPnet for short, which is a consortium of physics departments across the South East of England working together to deliver excellence in both teaching and research. I am here deputising for a Pro Vice Chancellor who can’t be here. I’ve enjoyed pretending to be important, but I’m sure nobody has been taken in.

Although it’s been quite tiring, it has been an interesting day. Lots of ideas and discussion, but we do have to distil all that down into some more specific detail over dinner tonight and during the course of tomorrow morning. Anyway, better begin the search for the bar so I can refresh the old brain cells.


by telescoper at April 20, 2015 05:42 PM

Clifford V. Johnson - Asymptotia

Festivities (I)
Love this picture posted by USC's Facebook page*. (I really hope that we did not go over the heads of our - very patient** - audience during the Festival of Books panel...) -cvj *They don't give a photo credit, so I'm pointing you back to the posting here until I work it out. [...]

by Clifford at April 20, 2015 04:12 PM

Jester - Resonaances

2014 Mad Hat awards
New Year is traditionally the time of recaps and best-ofs. This blog is focused on particle physics beyond the standard model, where compiling such lists is challenging, given the dearth of discoveries or even plausible signals pointing to new physics. Therefore I thought I could somehow honor those who struggle to promote our discipline by finding new signals against all odds, and sometimes against all logic. Every year from now on, the Mad Hat will be awarded to the researchers who make the most outlandish claim of a particle-physics-related discovery, on the condition that it gets enough public attention.

The 2014 Mad Hat award unanimously goes to Andy Read, Steve Sembay, Jenny Carter, Emile Schyns, and, posthumously, to George Fraser, for the paper Potential solar axion signatures in X-ray observations with the XMM–Newton observatory. Although the original arXiv paper sadly went unnoticed, this remarkable work was publicized several months later by a Royal Astronomical Society press release and by an article in the Guardian.

The crucial point in this kind of endeavor is to choose an observable that is noisy enough to easily accommodate a new physics signal. In this particular case the observable is the X-ray emission from Earth's magnetosphere, which could include a component from axion dark matter emitted from the Sun and converting to photons. A naive axion hunter might expect that the conversion signal should be observed by looking at the Sun (since the photon inherits the momentum of the incoming axion), something that XMM cannot do due to technical constraints. The authors thoroughly address this point in a sentence in the Introduction, concluding that it would be nice if the X-rays could scatter afterwards at the right angle. The signal searched for is then an annual modulation of the X-ray emission, as the magnetic field strength in XMM's field of view is on average larger in summer than in winter. A seasonal dependence of the X-ray flux is indeed observed, for which axion dark matter is clearly the most plausible explanation.

Congratulations to all involved. Nominations for the 2015 Mad Hat award are open as of today ;) Happy New Year everyone!

by Jester at April 20, 2015 04:03 PM

Jester - Resonaances

Weekend Plot: Fermi and more dwarfs
This weekend's plot comes from the recent paper of the Fermi collaboration:

It shows the limits on the cross section for dark matter annihilation into tau lepton pairs. The limits are obtained from gamma-ray observations of 15 dwarf galaxies over 6 years. Dwarf galaxies are satellites of the Milky Way made mostly of dark matter, with few stars in them, which makes them a clean environment to search for dark matter signals. This study is particularly interesting because it is sensitive to dark matter models that could explain the gamma-ray excess detected from the center of the Milky Way. Similar limits for annihilation into b-quarks have already been shown before at conferences; in that case, the region favored by the Galactic center excess seems entirely excluded. Annihilation of 10 GeV dark matter into tau leptons could also explain the excess. As can be seen in the plot, in this case there is also a large tension with the dwarf limits, although astrophysical uncertainties help to keep hopes alive.

Gamma-ray observations by Fermi will continue for another few years, and the limits will get stronger. But a faster way to increase the statistics may be to find more observation targets. Numerical simulations with vanilla WIMP dark matter predict a few hundred dwarfs around the Milky Way. Interestingly, a discovery of several new dwarf candidates was reported last week. This is an important development, as the total number of known dwarf galaxies now exceeds the number of dwarf characters in Peter Jackson movies. One of the candidates, known provisionally as DES J0335.6-5403 or Reticulum-2, has a large J-factor (the larger the better, much like the h-index). In fact, some gamma-ray excess around 1-10 GeV is observed from this source, and one paper last week even quantified its significance as ~4 astrosigma (or ~3 astrosigma in an alternative, more conservative analysis). However, in the Fermi analysis using the more recent Pass-8 photon reconstruction, the quoted significance is only 1.5 sigma. Moreover, the dark matter annihilation cross section required to fit the excess is excluded by an order of magnitude by the combined dwarf limits. Therefore, for the moment, the excess should not be taken seriously.

by Jester at April 20, 2015 04:03 PM

Jester - Resonaances

LHCb: B-meson anomaly persists
Today LHCb released a new analysis of the angular distribution in the B0 → K*0(892) (→K+π-) μ+μ- decays. In this 4-body decay process, the angles between the directions of flight of all the different particles can be measured as a function of the invariant mass q^2 of the di-muon pair. The results are summarized in terms of several form factors with imaginative names like P5', FL, etc. The interest in this particular decay comes from the fact that 2 years ago LHCb reported a large deviation from the standard model prediction in one q^2 region of one form factor, called P5'. That measurement was based on 1 inverse femtobarn of data; today it was updated to the full 3 fb-1 of run-1 data. The news is that the anomaly persists in the q^2 region 4-8 GeV², see the plot. The measurement moved a bit toward the standard model, but the statistical errors have shrunk as well. All in all, the significance of the anomaly is quoted as 3.7 sigma, the same as in the previous LHCb analysis. New physics that effectively induces new contributions to the 4-fermion operator (\bar b_L \gamma_\rho s_L)(\bar \mu \gamma_\rho \mu) can significantly improve agreement with the data, see the blue line in the plot. The preference for new physics remains high, at the 4 sigma level, when this measurement is combined with other B-meson observables.

So how excited should we be? One thing we learned today is that the anomaly is unlikely to be a statistical fluctuation. However, the observable is not of the clean kind, as the measured angular distributions are susceptible to poorly known QCD effects. The significance depends a lot on what is assumed about these uncertainties, and experts wage ferocious battles about the numbers. See for example this paper where larger uncertainties are advocated, in which case the significance becomes negligible. Therefore, the deviation from the standard model is not yet convincing at this point. Other observables may tip the scale. If a consistent pattern of deviations in several B-physics observables emerges, only then can we trumpet victory.

Plots borrowed from David Straub's talk in Moriond; see also the talk of Joaquim Matias with similar conclusions. David has a post with more details about the process and uncertainties. For a more popular write-up, see this article on Quanta Magazine. 

by Jester at April 20, 2015 04:01 PM

Matt Strassler - Of Particular Significance

Completed Final Section of Article on Dark Matter and LHC

As promised, I’ve completed the third section, as well as a short addendum to the second section, of my article on how experimenters at the Large Hadron Collider [LHC] can try to discover dark matter particles.   The article is here; if you’ve already read what I wrote as of last Wednesday, you can pick up where you left off by clicking here.

Meanwhile, in the last week there were several dark-matter related stories that hit the press.

A map has been made by the Dark Energy Survey of dark matter’s location across a swathe of the universe, based on the assumption that weak signals of gravitational lensing (bending of light by gravity) that cannot be explained by observed stars and dust are due to dark matter. This will be useful down the line as we test simulations of the universe such as the one I referred you to on Wednesday.

There’s been a claim that dark matter interacts with itself, which got a lot of billing in the BBC; however one should be extremely cautious with this one, and the BBC editor should have put the word “perhaps” in the headline! It’s certainly possible that dark matter interacts with itself much more strongly than it interacts with ordinary matter, and many scientists (including myself) have considered this possibility over the years.  However, the claim reported by the BBC is considered somewhat dubious even by the authors of the study, because the little group of four galaxies they are studying is complicated and has to be modeled carefully.  The effect they observed may well be due to ordinary astrophysical effects, and in any case it is less than 3 Standard Deviations away from zero, which makes it more a hint than evidence.  We will need many more examples, or a far more compelling one, before anyone will get too excited about this.

Finally, the AMS experiment (whose early results I reported on here; you can find their September update here) has released some new results, but not yet in papers, so there’s limited information. The most important result is the one whose details will apparently take longest to come out: this is the discovery (see the figure below) that the ratio of anti-protons to protons in cosmic rays at energies above 100 GeV is not decreasing as was expected. (Note this is a real discovery by AMS alone — in contrast to the excess positron-to-electron ratio at similar energies, which was discovered by PAMELA and confirmed by AMS.) The only problem is that they’ve made the discovery seem very exciting and dramatic by comparing their work to expectations from a model that is out of date and that no one seems to believe. This model (the brown swathe in the Figure below) tries to predict how high-energy anti-protons are produced (“secondary production”) from even higher energy protons in cosmic rays. Newer versions of this model are apparently significantly higher than the brown curve. Moreover, some scientists also claim that the uncertainty band (the width of the brown curve) on these types of models is wider than shown in the Figure. At best, the modeling needs a lot more study before we can say that this discovery is really in stark conflict with expectations. So stay tuned; but again, this is not yet something in which one can have confidence. The experts will be busy.

Figure 1. Antiproton to proton ratio (red data points, with uncertainties given by vertical bars) as measured by AMS. AMS claims that the measured ratio cannot be explained by existing models of secondary production, but the model shown (brown swathe, with uncertainties given by the width of the swathe) is an old one; newer ones lie closer to the data. Also, the uncertainties in the models are probably larger than shown. Whether this is a true discrepancy with expectations is now a matter of healthy debate among the experts.

Filed under: Uncategorized

by Matt Strassler at April 20, 2015 12:38 PM

Lubos Motl - string vacua and pheno

ATLAS: 2.5-sigma four-top-quark excess
ATLAS has posted a new preprint
Analysis of events with \(b\)-jets and a pair of leptons of the same charge in \(pp\)-collisions at \(\sqrt s = 8\TeV\) with the ATLAS detector
boasting numerous near-2-sigma excesses (which could be explained by vector-like quarks and chiral \(b'\) quarks, but are too small to deserve much space here) and a more intriguing 2.5-sigma excess in various final states with four top quarks.

This four-top excess is most clearly expressed in Figure 11.

This picture contains four graphs called (a), (b), (c), (d) – for ATLAS to celebrate the first four letters of the alphabet ;-) – and they look as follows:

You may see that the solid black curve (which is sometimes the boundary of the red excluded region) sits strictly outside the yellow-green Brazil one-or-two-sigma band. The magnitude of the excess is about 2.5 sigma in all cases.

The excess is interpreted in four different ways. The graph (a) interprets the extra four-top events in terms of some contact interaction linking four tops at the same point. The horizontal axis shows the scale \(\Lambda\) of new physics from which this contact interaction arises. The vertical axis is the coefficient of the quartic interaction.

The graph (b) assumes that the four tops come from the decay of two pair-produced sgluons whose mass is on the horizontal axis. On the vertical axis, there is some cross section times the branching ratio to four tops.

And the remaining graphs (c) and (d) assume that the four tops arise from two universal extra dimensions (2UED) of the real projective plane (RPP) geometry. The Kaluza-Klein mass scale is on the horizontal axis. The vertical axis depicts the cross section times the branching ratio again. The subgraphs (c) and (d) differ by using the tier \((1,1)\) and \((2,0)+(0,2)\), respectively.

Extra dimensions are cool but I still tend to bet that they will probably be too small and thus inaccessible to the LHC. Moreover, the RPP geometry is probably naive. But it's fun to see something that could be interpreted as positive evidence in favor of some extra dimensions.

I find the sgluons more realistic and truly exciting. They are colored scalar fields ("s" in "sgluon" stands for "scalar") in the adjoint representation of \(SU(3)_{QCD}\), much like gluons, and may be marketed as additional superpartners of gluinos under "another" supersymmetry in theories where the gauge bosons hide the extended, \(\NNN=2\) supersymmetry. Such models predict that the gluinos are Dirac particles, not just Majorana particles as they are in the normal \(\NNN=1\). This possibility has been discussed on this blog many times in recent years because I consider it elegant and clever – and naturally consistent with some aspects of the superstring model building.

Their graph (b) shows that sgluons may be as light as \(830\GeV\) or so.

Previously, CMS only saw a 1-sigma quadruple-top-quark "excess".

Finally, I also want to mention another preprint with light superpartners, ATLAS Z-peaked excess in MSSM with a light sbottom or stop, by Kobakhidze plus three pals which offers a possible explanation for the recent ATLAS Z-peaked 3-sigma excess. They envision something like an \(800\GeV\) gluino and a \(660\GeV\) sbottom.

by Luboš Motl at April 20, 2015 06:31 AM

April 19, 2015

Christian P. Robert - xi'an's og

Bayesian propaganda?

“The question is about frequentist approach. Bayesian is admissable [sic] only by wrong definition as it starts with the assumption that the prior is the correct pre-information. James-Stein beats OLS without assumptions. If there is an admissable [sic] frequentist estimator then it will correspond to a true objective prior.”

I had a wee bit of a (minor, very minor!) communication problem on X validated, about a question on the existence of admissible estimators of the linear regression coefficient in multiple dimensions, under squared error loss. When I first replied that all Bayes estimators with finite risk were de facto admissible, I got the above reply, which clearly misses the point, and as I had edited the OP's question to include more tags, the edited version was reverted with a comment about Bayesian propaganda! This is rather funny, if not hilarious, as (a) Bayes estimators are indeed admissible in the classical or frequentist sense—I actually fail to see a definition of admissibility in the Bayesian sense—and (b) the complete class theorems of Wald, Stein, and others (like Jack Kiefer, Larry Brown, and Jim Berger) come from the frequentist quest for best estimators. To make my point clearer, I also reproduced in my answer Stein's necessary and sufficient condition for admissibility from my book, but it did not help, as the theorem was "too complex for [the OP] to understand", which shows in fine the point of reading textbooks!
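The James-Stein claim in the quoted comment is easy to check numerically. A quick Python Monte Carlo (a sketch I am adding, not from the post) comparing the frequentist risk of the James-Stein estimator with that of the MLE for a 10-dimensional normal mean under squared error loss, which is precisely the kind of (in)admissibility statement at stake:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sim = 10, 100_000
theta = np.ones(p)                   # true mean (any fixed value works)

# X ~ N(theta, I_p), one p-dimensional observation per simulation
x = rng.normal(theta, 1.0, size=(n_sim, p))

# James-Stein: shrink X toward the origin by a data-dependent factor
norms2 = np.sum(x**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norms2) * x

# Frequentist risk = E ||estimator - theta||^2, estimated by Monte Carlo
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))  # theoretical value: p
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(risk_mle, risk_js)  # JS risk falls strictly below p for p >= 3
```

This illustrates Stein's classical result: for p ≥ 3 the MLE is inadmissible, while shrinkage estimators such as James-Stein (themselves close to formal Bayes rules) dominate it uniformly in the frequentist sense.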

Filed under: Books, Kids, pictures, Statistics, University life Tagged: Abraham Wald, admissibility, Bayesian Analysis, Bayesian decision theory, Charles Stein, James-Stein estimator, least squares, objective Bayes, shrinkage estimation, The Bayesian Choice

by xi'an at April 19, 2015 10:15 PM

Peter Coles - In the Dark

Mental Health at Work – to Declare or not to Declare?

I couldn’t resist a comment on a recent article in the Times Higher (written in response to an earlier piece expressing the opposite view). The question addressed by these articles is whether a member of university staff should be open about mental health issues or not. The latest comes down firmly on the “no” side. Although I understand the argument, I disagree very strongly with this conclusion.

In fact I’ve taken this a bit further than just disclosing my problems to my employer; I’ve even blogged about them, both here and elsewhere. I also stood up in the University of Sussex Senate about two years ago and spoke about them there. That latter episode was in response to attempts by some members of Senate to play down the extent of the violence and intimidation associated with a protest on campus that erupted into a full-scale riot in March 2013, accompanied by theft, vandalism and arson. Since violence is the root cause of my longstanding troubles, I was incensed by the casual attitude some academics displayed about something that should never be tolerated. I don’t know whether my intervention had any effect on the discussion, but I felt I had to make my point. It still troubles me, in fact, that the culprits have still not been brought to justice, and that some of them undoubtedly remain at large on campus even today.

Anyway, two full years have passed since then and I have received nothing but supportive comments from colleagues both in the School and among senior managers in the University.

When I applied for my current job at Sussex, it was just after I’d recovered from a serious breakdown. When I was offered the position, paperwork arrived that included a form on which to declare any health issues (including mental health). I have moved around several times in my career and had never made a declaration on such a form before, but this time I felt that I should, especially because I was still taking medication then. I did wonder whether I might be declared unfit to take up a job that promised to have a fair share of stress associated with it. In the end, though, what happened was that I was put in touch with the Occupational Health department, who offered their services if there was anything they could do to help. All these discussions were confidential.

I think it is very important that staff do declare problems with depression or other mental health issues. That’s the only way to be sure of getting the support you need. It’s also important for your colleagues to be able to put arrangements in place if you need to take some time off. On top of all that, employers need to learn how widespread such problems can be so they can try to deal with any factors that might exacerbate existing conditions, such as work-related stress.

Going back to the article in the Times Higher, though. I should say that I can understand the author’s reluctance. It took me twenty-five years so I am hardly in a position to criticise anyone! I was particularly struck by this section:

To disguise my illness, I try my best to be the very opposite of what depressed people are. I become the funniest, the smiliest and the most supportive colleague at work. At times, the performance succeeds and I feel a fleeting sense of being invincible. However, this feeling quickly dissipates and I am left feeling utterly alone, dark and lost. A colleague once said to me that she thought I was the most positive person she had ever met and that everyone enjoyed working with me. I couldn’t say anything to her in that moment. But if I was to speak my truth, it would have been to tell her that I was probably the darkest and saddest of her colleagues. That darkness frightens the hell out of me – so I keep it to myself.

That will ring very true to anyone who is living with mental illness; it becomes part of who you are, and it means that you find some things very difficult or impossible that other people take for granted, no matter how effective your medication is. Putting on a brave face is one way to avoid dealing with it, but it is only a form of denial. Another common avoidance strategy is to make up fake excuses for absence from events that fill you with dread. I’ve done that a number of times over the years, and although it provides short-term relief, it leaves you with a sense of shame at your own dishonesty that damages your sense of self-worth in the long run and will only earn you a reputation for unreliability. The darkness can indeed be frightening, but it does not follow that you should keep it to yourself. You should share it – not necessarily with friends and colleagues, who may not know how to help – but with compassionate and highly trained professional counsellors who really can help. It will also help your institution provide more and better assistance.

This is not to say that there isn’t a downside to being open about mental health issues. Now that my own genie is not only out of the bottle but all over the internet I do wonder what the future holds in store for my career beyond my current position. Then again I’m not at all sure what I want to happen. Only time will tell.

by telescoper at April 19, 2015 02:37 PM


Tommaso Dorigo - Scientificblogging

The Era Of The Atom
"The era of the atom" is a new book by Piero Martin and Alessandra Viola - for now the book is only printed in Italian (by Il Mulino), but I hope it will soon be translated into English. Piero Martin is a professor of Physics at the University of Padova and a member of the RFX collaboration, a big experiment that studies the confinement of plasma with the aim of constructing a fusion reactor - a real, working one like ITER; and Alessandra Viola is a well-known journalist and popular-science writer.


by Tommaso Dorigo at April 19, 2015 11:06 AM

April 18, 2015

Peter Coles - In the Dark

End of Term Balls

I haven’t had time to post for the last couple of days because I’ve been too busy with end-of-term business (and pleasure). Yesterday (Friday) was the last day of teaching term, and this week I had to get a lot of things finished because of various deadlines, as well as attending numerous meetings. It’s been quite an exhausting week, not just because of that but also because by tradition the two departments within the School of Mathematical and Physical Sciences at the University of Sussex, the Department of Mathematics and the Department of Physics & Astronomy, hold their annual Staff-Student Balls on consecutive days. When I arrived here just over two years ago I decided that I should attend both or neither, as attending only one would look like favouritism. In fact this is the third time I’ve attended both of them. Let no-one say I don’t take my obligations seriously. It’s a tough job, but someone has to do it. Holding both balls so close together poses some problems for a person of my age, but I coped, and also tried to weigh them up relative to each other to see which was the most impressive.

Actually, both were really well organized. The Mathematics Ball was held in the elegant Hilton Metropole hotel and the Physics one in the Holiday Inn, both on the seafront. As in previous years, the Mathematics Ball was a bit more refined and sedate, the Physics one a little more raucous. Also, this year there was a very large difference in the number of people going, with over 200 at the Physics Ball and only just over half that number at the Mathematics one. In terms of all-round fun I have to declare the Physics Ball the winner this year, but both occasions were very enjoyable. I’d like to say a very public thank you to the organizers of both events, especially Sinem and Jordan for Mathematics and Francis for Physics. Very well done.

The highlight of the Physics Ball was an after-dinner speech by particle physicist Jon Butterworth, who has an excellent blog called Life and Physics on the Guardian website. I’ve been in contact with Jon many times through social media (especially Twitter) over more than six years, but we never actually met in person until last night. I think he was a bit nervous beforehand because he had never given an after-dinner speech before; in the end, though, his talk was funny and wise, and extremely well received. Mind you, I did make it easy for him by giving a short speech to introduce him, and after a speech by me almost anyone would look good!

Thereafter the evening continued with drinking and dancing. After a while most people present were rather tired and emotional. I think some might even have been drunk. I eventually got home about 2am, after declining an invitation to go to the after-party. I’m far too old for that sort of thing. Social events like this can be a little bit difficult, for a number of reasons. One is that there’s an inevitable “distance” between students and staff, not just in terms of age but also in the sense that the staff have positions of responsibility for the students. Students are not children, of course, so we’re not legally in loco parentis, but something of that kind of relationship is definitely there. Although it doesn’t stop either side letting their hair down once in a while, I always find there’s a little bit of tension, especially if the revels get a bit out of hand. To help occasions like this run smoothly, I think it’s the responsibility of the staff members present to drink heavily in order to put the students at ease. United by a common bond of inebriation, the staff-student divide crumbles and a good time is had by all.

There’s another thing I find a bit strange. Chatting to students last night was the first time I had spoken to many of my students like that, i.e. outside the lecture  or tutorial. I see the same faces in my lectures day in, day out but all I do is talk to them about physics. I really don’t know much about them at all. But it is especially nice when on occasions like this students come up, as several did last night, and say that they enjoyed my lectures. Actually, it’s more than just nice. Amid all the bureaucracy and committee meetings, it’s very valuable to be reminded what the job is really all about.


P.S. Apologies for not having any pictures. I left my phone in the office on Friday when I went home to get changed. I will post some if anyone can supply appropriate images. Or, better still, inappropriate ones!



by telescoper at April 18, 2015 02:45 PM

ZapperZ - Physics and Physicists

Complex Dark Matter
Don Lincoln has another video on Dark Matter, for those of you who can't get enough of these things.


by ZapperZ at April 18, 2015 01:13 PM

Clifford V. Johnson - Asymptotia

Festival Panel
Don't forget that this weekend is the fantastic LA Times Festival of Books! See my earlier post. Actually, I'll be on a panel at 3:00pm in Wallis Annenberg Hall entitled "Grasping the Ineffable: On Science and Health", with Pat Levitt and Elyn Saks, chaired by the science writer KC Cole. I've no idea where the conversation is going to go, but I hope it'll be fun and interesting! (See the whole schedule here.) Maybe see you there! -cvj

by Clifford at April 18, 2015 06:07 AM

April 17, 2015

Emily Lakdawalla - The Planetary Society Blog

JPL Will Present their Mars Program Concept at the 2015 Humans to Mars Summit
JPL will present its humans-to-Mars program concept at the Humans to Mars Summit and publish it as a peer-reviewed article in the New Space Journal.

April 17, 2015 10:37 PM

Quantum Diaries

In Defense of Scientism and the Joys of Self-Publishing.

As long-time readers of Quantum Diaries know, I have been publishing here for a number of years and this is my 85th and last post[1]. A couple of years ago, I collected the then-current posts, titled the collection “In Defense of Scientism” after one of the essays, and sent it off to a commercial publisher. Six months later, I got an e-mail from the editor complaining that he had lost the file and only found it by accident, somehow implying that it was my fault. After that experience, it was no surprise he did not publish it.

With all the talk of self-publishing these days, I thought I would give it a try. It is easy, at least compared to finding the Higgs boson! There are a variety of options that give different levels of control, so one can pick and choose preferences – like off an à la carte menu. The simplest form of self-publishing is to go to a large commercial publisher. The one I found would, for US$50.00 up front and US$12.00 a year, supply print-on-demand and e-books to a number of suppliers. Not sure that I could recover the costs from the revenue – and being a cheapskate – I decided not to go that route. There are also commissioned alternatives with no upfront costs, but I decided to interact directly with three (maybe four, if I can jump over the humps the fourth has put up) companies. One of the companies treated their print-on-demand and digital distribution arms as distinct, even to the point of requiring different reimbursement methods. That is the disadvantage of doing it yourself: sorting it all out. The advantage of working directly with the suppliers is more control over the detailed formatting and distribution.

From then on, things got fiddly[2]: reimbursement, for example. Some companies would only allow payment by electronic fund transfer, others only by check. The weirdest example was one company that did electronic fund transfers unless the book was sold in Brazil or Mexico; in those cases, payment is by check, but only after $100.00 has accumulated. One company verified, during account setup, that the fund transfer worked by transferring a small amount, in my case 16 cents. And then of course there are special rules if you earn any money in the USA. For USA earnings there is a 30% withholding tax unless you can document that there is a tax treaty that allows you to get around it. The USA is the only country that requires this. Fine; being an academic, I am used to jumping through hoops.

Next was the question of an International Standard Book Number (ISBN). They are not required but are recommended. That is fine, since in Canada you can get them for free. Just as well, since each version of the book needs a different number: the paperback needs a different number from the electronic version, and each electronic format requires its own number. As I said, it is a good thing they are free. Along with the ISBN, I got a reminder that the Library of Canada requires one copy of each book that sells more than four copies, two copies if it goes over a hundred, and of course a separate electronic copy if you publish electronically. Fun, fun, fun[3]. There are other implications of getting your own ISBN. Some of the publishers would supply an ISBN free of charge but would then put the book out under their own imprint and, in some cases, give wider distribution to those books. But again, getting your own number ultimately gives you more control.

With all this research in hand, it was time to create and format the content. I had the content from the four years’ worth of Quantum Diary posts and all I had to do was put it together and edit for consistency. Actually, Microsoft Word worked quite well with various formatting features to help. I then gave it to my wife to proofread. That was a mistake; she is still laughing at some of the typos. At least there is now an order of magnitude fewer errors. I should also acknowledge the many editorial comments from successive members of the TRIUMF communications team.

The next step was to design the book cover. There comes a point in every researcher’s career when they need support and talent outside of themselves. Originally, I had wanted to superimpose a picture of a model boat on a blackboard of equations. With that vision in mind, I set about the hallways to seek and enroll the talent of a few staff members who could make it happen. After normal working hours, of course. A co-op communication student suggested that the boat be drawn on the blackboard rather than a picture superimposed. The equations were already on a blackboard and are legitimate. The boat was hand drawn by a talented lady in accounting, drawing it first onto an overhead transparency[4] and then projecting it onto a blackboard. A co-op student in the communications team produced the final cover layout according to the various colour codes and margin bleeds dictated by each publisher. For both my own and your sanity, I won’t go into all the details. In the end, I rather like how the cover turned out.

For print-on-demand, they wanted separate pdfs for the cover and the interior. They sent very detailed instructions, so that was no problem. It only took about three tries to get it correct. The electronic version was much more problematic. I wonder if the companies that produce both paper and digital get it right. I suspect not. There is a free version of a program that converts from Word to epub format, but the results have some rather subtle errors, like messing up the table of contents. I ended up using one digital publisher’s conversion service, provided free of charge. If you buy a copy and it looks messed up, I do not want to hear about it.[5] One company (the fourth mentioned above) added a novel twist. I jumped through all the hoops related to banking information for wire transfers, did the USA tax stuff, and then went to upload the content. Ah, but I needed to download a program to upload the content. That should not have been a problem, but it ONLY runs on their hardware. The last few times I used their hardware it died prematurely, so they can stuff it.

Now, several months after I started the publishing process, I have jumped through all the hoops! All I have to do is lie back and let the money roll in so I can take early retirement. Well, at my age, early retirement is no longer a priori possible, but at least I hope to earn enough to buy the people who helped me prepare the book a coffee. So everyone, please rush out and buy a copy. Come on, at least one of you.

As a final point, you may wonder why there is a drawing of a boat on the cover of a book about the scientific method. Sorry, to find out you will have to read the book. But I will give you a hint. It is not that I like to go sailing. I get seasick.

To receive a notice of my blogs and other writing follow me on Twitter: @musquod.

[1] I know, I have promised this before, but this time trust me. I am not like Lucy in the Charlie Brown cartoons pulling the football away.

[2] Epicurus, who made the lack of hassle the greatest good, would not have approved.

[3] Reminds me of an old Beach Boys song.

[4] An old overhead projector was found in a closet.

[5] Hey! We got through an entire conversation about formatting and word processing software without mentioning LaTeX despite me having been the TRIUMF LaTeX guru before I went over to the dark side and administration.

by Byron at April 17, 2015 09:30 PM

Emily Lakdawalla - The Planetary Society Blog

The Cosmic Microwave Oven Background
Over the past couple of decades the Parkes Radio Telescope in Australia has been picking up two types of mysterious signals, each lasting just a few milliseconds. The source of one of these signals may have finally been found—and an unexpected source at that.

April 17, 2015 08:03 PM

arXiv blog

The Chances of Another Chernobyl Before 2050? 50%, Say Safety Specialists

And there’s a 50:50 chance of a Three Mile Island-scale disaster in the next 10 years, according to the largest statistical analysis of nuclear accidents ever undertaken.

The catastrophic disasters at Chernobyl and Fukushima are among the worst humankind has had to deal with. Both were the result of the inability of scientists and engineers to foresee how seemingly small problems can snowball into disasters of almost unimaginable scale.
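Back-of-the-envelope, the headline figures can be related through a Poisson model of accident occurrence. This sketch is my own illustration of that arithmetic, not the method used in the analysis the article describes; the implied rate is a toy value derived purely from the quoted 50% figure:

```python
import math

# Assume Chernobyl-scale accidents occur as a Poisson process (an
# illustrative assumption, not the paper's actual methodology).
# Then P(at least one event in T years) = 1 - exp(-rate * T).
# A 50% chance by 2050 (~35 years after 2015) pins down the rate:
T = 35
rate = math.log(2) / T               # ~0.02 events per year
# At that rate, the chance of at least one such event in a decade:
p_decade = 1 - math.exp(-rate * 10)
print(f"rate = {rate:.4f}/yr, P(10 yr) = {p_decade:.2f}")
```

At the implied rate this gives roughly a 20% chance per decade for a Chernobyl-scale event; the article's 50:50-per-decade figure applies to the far more frequent Three Mile Island scale of accident, i.e. a higher rate.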

April 17, 2015 06:30 PM

Jester - Resonaances

Antiprotons from AMS
This week the AMS collaboration released the long-expected measurement of the cosmic-ray antiproton spectrum. Antiprotons are produced in our galaxy in collisions of high-energy cosmic rays with interstellar matter, the so-called secondary production. Annihilation of dark matter could add more antiprotons on top of that background, which would modify the shape of the spectrum with respect to the prediction from secondary production alone. Unlike for cosmic-ray positrons, in this case there should be no significant primary production in astrophysical sources such as pulsars or supernovae. Thanks to this, antiprotons could in principle be a smoking gun of dark matter annihilation, or at least a powerful tool to constrain models of WIMP dark matter.

The new data from the AMS-02 detector extend the previous measurements from PAMELA up to 450 GeV and significantly reduce experimental errors at high energies. Now, if you look at the promotional material, you may get the impression that a clear signal of dark matter has been observed. However, experts unanimously agree that the brown smudge in the plot above is just shit, rather than a range of predictions from the secondary production. At this point, there are certainly no serious hints of a dark matter contribution to the antiproton flux. A quantitative analysis of this issue appeared in a paper today. Predicting the antiproton spectrum is subject to large experimental uncertainties about the flux of cosmic-ray protons and about the nuclear cross sections, as well as theoretical uncertainties inherent in models of cosmic-ray propagation. The data and the predictions are compared in this Jamaican band plot. Apparently, the new AMS-02 data are situated near the upper end of the predicted range.

Thus, there is currently no hint of dark matter detection. However, the new data are extremely useful for constraining models of dark matter. New constraints on the annihilation cross section of dark matter are shown in the plot to the right. The most stringent limits apply to annihilation into b-quarks or into W bosons, which yield many antiprotons after decay and hadronization. The thermal production cross section - theoretically preferred in a large class of WIMP dark matter models - is, in the case of b-quarks, excluded for dark matter particle masses below 150 GeV. These results provide further constraints on models addressing the hooperon excess in the gamma-ray emission from the galactic center.

More experimental input will allow us to tune the models of cosmic ray propagation to better predict the background. That, in turn, should lead to  more stringent limits on dark matter. Who knows... maybe a hint for dark matter annihilation will emerge one day from this data; although, given the uncertainties,  it's unlikely to ever be a smoking gun.

Thanks to Marco for comments and plots. 

by Jester at April 17, 2015 05:10 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty pictures of the Cosmos: Life and death in the Universe
Astrophotographer Adam Block brings us images showcasing the evolutionary cycles in our universe.

April 17, 2015 04:28 PM

astrobites - astro-ph reader's digest

The Milky Way’s Alien Disk and Quiet Past

Title: The Gaia-ESO Survey: A Quiescent Milky Way with no Significant Dark/Stellar Accreted Disk
Authors: G. R. Ruchti, J. I. Read, S. Feltzing, A. M. Serenelli, P. McMillan, K. Lind, T. Bensby, M. Bergemann, M. Asplund, A. Vallenari, E. Flaccomio, E. Pancino, A. J. Korn, A. Recio-Blanco, A. Bayo, G. Carraro, M. T. Costado, F. Damiani, U. Heiter, A. Hourihane, P. Jofre, G. Kordopatis, C. Lardo, P. de Laverny, L. Monaco, L. Morbidelli, L. Sbordone, C. C. Worley, S. Zaggia
First Author’s Institution: Lund Observatory, Department of Astronomy and Theoretical Physics, Lund, Sweden
Status: Accepted for publication in MNRAS



Galaxy-galaxy collisions can be quite spectacular. The most spectacular occur among galaxies of similar mass, where each galaxy’s competing gravitational forces and comparable reserves of star-forming gas are strong and vast enough to contort the other into bright rings, triply-armed leviathans, long-tailed mice, and cosmic tadpoles. Such collisions, as well as their tamer counterparts between galaxies with large differences in mass—perhaps better described as an accretion event rather than a collision—comprise the inescapable growing pains for adolescent galaxies destined to become the large galaxies adored by generations of space enthusiasts, a privileged group of galaxies to which our home galaxy, the Milky Way, belongs.

What’s happened to the hapless galaxies thus consumed by the Milky Way?  The less massive among these unfortunate interlopers take a while to fall irreversibly deep into the Milky Way’s gravitational clasp, and thus dally, largely unscathed, in the Milky Way’s stellar halo during their long but inevitable journey in.  More massive galaxies feel the gravitational tug of the Milky Way more strongly, which shortens the time it takes them to orbit and eventually merge with the Milky Way and makes them more vulnerable to being gravitationally ripped apart.  But this is not the only gruesome process the interlopers undergo as they speed towards their deaths.  Galaxies whose orbits cause them to approach the dense disk of the Milky Way are forced to plow through the increasing amounts of gas, dust, stars, and dark matter they encounter.  The disk produces a drag-like force that slows the galaxy down—and the more massive and/or dense the galaxy, the more it is slowed as it passes through.  Not only that, the disk gradually strips the unfortunate galaxy of the microcosm of stars, gas, and dark matter it nurtured within.  The most massive accreted galaxies—those at least a tenth of the mass of the Milky Way, the instigators of major mergers—are therefore dragged towards the disk and forced to deposit their stars, gas, and dark matter preferentially in the disk every time their orbits bring them through it.  The stars deposited in the disk in such a manner are called “accreted disk stars,” and the dark matter deposited forms a “dark disk.”

The assimilated stars are thought to compose only a small fraction of the stars in the Milky Way disk. However, they carry the distinct markings of the foreign worlds in which they originated.  The accreted galaxies, lower in mass than the Milky Way, are typically less efficient at forming stars, and thus contain fewer metals and alpha elements produced by supernovae, the winds of some old red stars, and other enrichment processes instigated by stars.  Some stars born in the Milky Way, however, are also low in metals and alpha elements (either holdovers formed in the early, less metal- and alpha-element-rich days of the Milky Way’s adolescence, or stars formed in regions where gas was not readily available to form stars).  There is one key difference between native and alien stars that provides the final means to identify which of the low-metallicity, low-alpha-enriched stars were accreted: stars native to the Milky Way typically form in the disk and thus have nearly circular orbits that lie within the disk, while the orbits of accreted stars are more randomly oriented and/or more elliptical (see Figure 1).  Thus, armed with the metallicity, alpha abundance, and kinematics of a sample of stars in the Milky Way, one could potentially pick out the stars among us that have likely fallen from a foreign world.
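To make the kinematic criterion concrete, here is a minimal sketch of how an orbital-circularity parameter Jz/Jc can be computed, assuming a toy logarithmic (flat-rotation-curve) halo potential; the potential and the values of V0 and R0 are illustrative assumptions of mine, not the Galactic model actually used by the survey:

```python
import math

V0, R0 = 220.0, 8.0  # flat circular speed (km/s) and reference radius (kpc); assumed values

def phi(r):
    """Logarithmic halo potential with a flat rotation curve, in km^2/s^2."""
    return V0 ** 2 * math.log(r / R0)

def circularity(x, y, z, vx, vy, vz):
    """Jz/Jc: the star's z angular momentum divided by that of a circular
    in-plane orbit with the same energy. Near 1 for disk-like (in-situ)
    orbits; smaller or negative for accreted candidates."""
    r = math.sqrt(x * x + y * y + z * z)
    lz = x * vy - y * vx                        # z angular momentum (kpc km/s)
    e = 0.5 * (vx * vx + vy * vy + vz * vz) + phi(r)
    rc = R0 * math.exp(e / V0 ** 2 - 0.5)       # radius of the circular orbit with energy e
    return lz / (rc * V0)

# A star on a circular in-plane orbit at 8 kpc scores exactly 1:
print(circularity(8.0, 0.0, 0.0, 0.0, 220.0, 0.0))
# A star with a large vertical velocity scores well below 1:
print(circularity(8.0, 0.0, 0.0, 0.0, 110.0, 150.0))
```

In a logarithmic potential the circular speed is V0 at every radius, so the circular orbit with energy e sits at the radius rc solving e = V0²/2 + phi(rc), which gives the closed-form expression above.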


A search for the accreted disk allows us to peer into the Milky Way’s past and provides clues as to the existence of a dark disk—a quest the authors of today’s paper set out on.  Their forensic tool of choice?  The Gaia-ESO survey, an ambitious ground-based spectroscopic survey to complement Gaia, a space-based mission designed to measure the positions and motions of an astounding 1 billion stars with high precision, from which a 3D map of our galaxy can be constructed and our galaxy’s history untangled.  The authors derived metallicities, alpha abundances, and kinematics for about 7,700 stars from the survey.  Previous work by the authors informed them that the most promising accreted disk candidates would have metallicities no more than about 60% that of the Sun, an alpha abundance less than double that of the Sun, and orbits that are sufficiently non-circular and/or out of the plane of the disk.  The authors found about 4,700 of them, confirming the existence of an accreted stellar disk in the Milky Way.

Were any of these stars deposited in spectacular mergers with high-mass galaxies?  It turns out that one can predict the mass of a dwarf galaxy from its average metallicity.  The authors estimated two bounds on the masses of the accreted galaxies: one by assuming that all the stars matching their accreted-disk criteria were bona fide accreted stars, and the other by throwing out stars that might belong to the disk—those with metallicities greater than 15% of the Sun’s.  The average metallicity of the first subset of accreted stars was about 10 times less than the Sun’s, implying that they came from galaxies with a stellar mass of 10^8.2 solar masses.  Throwing out possible disk stars lowered the average metallicity to about 5% of the Sun’s, implying that they originated in galaxies with a stellar mass of 10^7.4 solar masses.  In comparison, the Milky Way’s stellar halo is about 10^10 solar masses.  Thus it appears that the Milky Way has, unusually, suffered no recent major mergers, at least since it formed its disk about 9 billion years ago.  This agrees with many studies that have used alternative methods to probe the formation and accretion history of the Milky Way.
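The metallicity-to-mass step can be sketched by inverting a linear dwarf-galaxy mass-metallicity relation; the coefficients below follow the Kirby et al. (2013) calibration, which is my assumption, since the post does not name the relation the authors actually used:

```python
import math

def stellar_mass_from_feh(mean_feh, a=-1.69, b=0.30):
    """Invert a dwarf-galaxy mass-metallicity relation of the form
    <[Fe/H]> = a + b * log10(M* / 1e6 Msun); the coefficients here
    follow Kirby et al. (2013). Returns stellar mass in solar masses."""
    log_m6 = (mean_feh - a) / b
    return 10.0 ** (log_m6 + 6.0)

# Mean [Fe/H] of -1.0 (10% solar; all candidates) and -1.3 (5% solar;
# possible disk stars removed) bracket the progenitor masses:
for feh in (-1.0, -1.3):
    print(f"[Fe/H] = {feh}: log10(M*/Msun) = {math.log10(stellar_mass_from_feh(feh)):.1f}")
```

With these coefficients the two average metallicities give roughly 10^8.3 and 10^7.3 solar masses, close to the 10^8.2 and 10^7.4 quoted above, so the authors evidently used a similar calibration.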

The lack of major mergers also implies that the Milky Way likely does not have a disk of dark matter.  This is an important finding for those searching for dark matter signals in the Milky Way, and one which implies that the Milky Way’s dark matter halo is oblate (flattened at the poles) if there is more dark matter than we’ve estimated based on simplistic models that assumed the halos to be perfectly spherical.



Figure 1. Evidence of a foreign population of stars.  The Milky Way’s major mergers (in which the Milky Way accretes a smaller galaxy with mass greater than a tenth of the Milky Way’s) can deposit stars in our galaxy’s disk.  These plots demonstrate one method to determine which stars may have originated in such a merger: how far a star’s orbit departs from an in-plane circular orbit, as described by the Jz/Jc parameter.  Stars born in the disk (or “in situ”) typically have circular orbits that lie in the disk plane—these have Jz/Jc close to one, whereas those that were accreted have lower Jz/Jc.  The plots above were computed for a major merger like that between the Milky Way and its dwarf companion the Large Magellanic Cloud, which has about a tenth the mass of the Milky Way.  If the dwarf galaxy initially has a highly inclined orbit (from left to right: 20, 40, and 60 degree inclinations), the Jz/Jc of stars deposited in the disk by the galaxy becomes increasingly distinct.


Cover image: The Milky Way, LMC, SMC from Cerro Paranal in the Atacama Desert, Chile. [ESO / Y. Beletsky]


by Stacy Kim at April 17, 2015 04:18 PM

Clifford V. Johnson - Asymptotia

Southern California Strings Seminar
There's an SCSS today, at USC! (Should have mentioned it earlier, but I've been snowed under... I hope that the appropriate research groups have been contacted and so forth.) The schedule can be found here along with maps. -cvj

by Clifford at April 17, 2015 03:22 PM

Tommaso Dorigo - Scientificblogging

Guess the Plot
I used to post very abstruse graphs on this blog from time to time, asking readers to guess what they represented. I don't know why I stopped - it is fun. So here is a very colourful graph for you today. You are asked to guess what it represents.

I am reluctant to provide any hints, as I do not want to constrain your imagination. But if you really want to try and guess something close to the truth: this graph represents a slice of a multi-dimensional space, and the information in the lines and in the coloured map is not directly related. Have a shot in the comments thread! (One further hint: you stand no chance of figuring this out.)

by Tommaso Dorigo at April 17, 2015 01:33 PM

CERN Bulletin

General assembly
Tuesday 5 May at 11:00 a.m., Room 13-2-005. In accordance with the Statutes of the Staff Association, an Ordinary General Assembly is held once a year (Article IV.2.1). Draft agenda: adoption of the agenda; approval of the minutes of the Ordinary General Assembly of 22 May 2014; presentation and approval of the 2014 activity report; presentation and approval of the 2014 financial report; presentation and approval of the auditors' report for 2014; the 2015 programme; presentation and approval of the draft 2015 budget and the 2015 membership fee; no amendments to the Statutes of the Staff Association are proposed; election of the members of the Electoral Commission; election of the auditors; any other business. We remind you of Article IV.3.4 of the Statutes of the Association, which states: "After the agenda has been dealt with, members may raise other matters for discussion with the consent of the Assembly, but only items on the agenda may be the subject of decisions. The Assembly may, however, instruct the Council to examine any question it deems useful."

by Staff Association at April 17, 2015 06:40 AM

Quantum Diaries

Life Underground: Anything Anyone Would Teach Me

Going underground most days for work is probably the weirdest-sounding thing about this job. At Laboratori Nazionali del Gran Sasso, we work underground because of the protection it affords us from cosmic rays, weather, and other disruptions, and with it we get a shorthand description for all the weirdness of lab life. It’s all just “underground.”


The last kilometer of road before reaching the above-ground labs of LNGS

Some labs for low background physics are in mines, like SURF where fellow Quantum Diariest Sally Shaw works. One of the great things about LNGS is that we’re located off a highway tunnel, so it’s relatively easy to reach the lab: we just drive in. There’s a regular shuttle schedule every day, even weekends. When there are snowstorms that close parts of the highway, the shuttle still goes, it just takes a longer route all the way to the next easy exit. The ride is a particularly good time to start drafting blog posts. On days when the shuttle schedule is inconvenient or our work is unpredictable, we can drive individual cars, provided they’ve passed emissions standards.

The guards keep a running list of all the people underground at any time, just like in a mine. So, each time I enter or leave, I give my name to the guards. This leads to some fun interactions where Italian speakers try to pronounce names from all over. I didn’t think too much of it before I got here, but in retrospect I had expected that any name of European etymology would be easy, and others somewhat more difficult. In fact, the difficult names are those that don’t end in vowels: “GladStone” becomes “Glad-eh-Stone-eh”. But longer vowel-filled names are fine and easy to pronounce, even though they’re sometimes just waved off as “the long one” with a gesture.

There’s constantly water dripping in the tunnel. Every experiment has to be housed in something waterproof, and gutters line all the hallways, usually with algae growing in them. The walls are coated with waterproofing, more to keep any potential chemical spill from us from getting into the local groundwater than to keep the water off our experiments. When we walk from the tunnel entrance to the experimental halls, the cue for me to don a hardhat is the first drip on my head from the ceiling. Somehow, it’s always right next to the shuttle stop, no matter where the shuttle parks.

And, because this is Italy, the side room for emergencies has a bathroom and a coffee machine. There’s probably emergency air tanks too, but the important thing is the coffee machine, to stave off epic caffeine withdrawal headaches. And of course, “coffee” means “espresso” unless otherwise stated– but that’s another whole post right there.

When I meet people in the neighboring villages, at the gym or buying groceries or whatever, they always ask what an “American girl” is doing so far away from the cities, and “lavoro a Laboratorio Gran Sasso” (“I work at Gran Sasso Laboratory”) is immediately understood. The lab is even the economic engine that’s kept the nearest village alive: it has restaurants, hotels, and rental apartments all catering to people from the lab (and the local ski lift), but no grocery stores, ATMs, gyms, or post offices that would make life more convenient for long-term residents.

Every once in a while, when someone mentions going underground, I can’t help thinking back to the song “Underground” from the movie Labyrinth that I saw too many times growing up. Labyrinth and The Princess Bride were the “Frozen” of my childhood (despite not passing the Bechdel test).

Just like Sarah, my adventures underground are alternately shocking and exactly what I expected from the stories, and filled with logic puzzles and funny characters. Even my first night here, when I was delirious with jetlag, I saw a black cat scamper across a deserted medieval street, and heard the clock tower strike 13 times. And just like Westley, “it was a fine time for me, I was learning to fence, to fight–anything anyone would teach me–” (except that in my case it’s more soldering, cryogenics plumbing, and ping-pong, and less fighting). The day hasn’t arrived when the Dread Pirate Roberts calls me to his office and gives me a professorship.

And now the shuttle has arrived back to the office, so we’re done. Ciao, a dopo.

(ps the clock striking 13 times was because it has separate tones for the hour and the 15-minute chunks. The 13 was really 11+2 for 11:30.)

by Laura Gladstone at April 17, 2015 05:00 AM

April 16, 2015

Clifford V. Johnson - Asymptotia

Beyond the Battling Babes
The recent Babe War (Food Babe vs Science Babe) that probably touched your inbox or news feed is a great opportunity to think about a broader issue: the changing faces of science communication. I spoke about this with LA Times science writer Eryn Brown, who wrote an excellent article about it that appears today. (Picture (Mark Boster/Los Angeles Times) and headline are from the article's online version.) (By the way, due to space issues, a lot of what we spoke about did not make it to the article (at least not in the form of quotes), including: [...] Click to continue reading this post

by Clifford at April 16, 2015 09:25 PM

Emily Lakdawalla - The Planetary Society Blog

New views of three worlds: Ceres, Pluto, and Charon
New Horizons took its first color photo of Pluto and Charon, while Dawn obtained a 20-frame animation looking down on the north pole of a crescent Ceres.

April 16, 2015 08:17 PM

Clifford V. Johnson - Asymptotia

Oh Naan, I have the measure of thee... (Well, more or less. Ought to drop the level in the oven down a touch so that the broiler browns them less quickly.) Very tasty! (The Murphy's stout was not a part of the recipe...) -cvj Click to continue reading this post

by Clifford at April 16, 2015 04:00 PM

ZapperZ - Physics and Physicists

Tevatron Data Reveals No Exotic, Non-Standard Model Higgs
She may be long gone, but the old gal still has something to say.

A new paper that combined the data from CDF and D0, the two old Tevatron detectors at Fermilab, has revealed that the Higgs that has been found is indeed consistent with the Standard Model Higgs. It strengthens the much-heralded discovery made at CERN a while back.

...... the two Tevatron-based experiments, CDF and D0, uncovered evidence in 2012 of a Higgs boson decaying into fermions, specifically, a pair of bottom quarks. The two collaborations have again combined their data to check for exoticness in this fermion decay channel. The Tevatron data show no signal consistent with a Higgs boson having spin zero and odd parity (a so-called pseudoscalar) or spin 2 and even parity (gravitonlike). The results are important for building the case that the Higgs boson seen in particle colliders is indeed the standard model Higgs.


by ZapperZ at April 16, 2015 03:52 PM

Quantum Diaries

Building a Neutrino Detector

Ever wanted to see all the steps necessary for building a neutrino detector? Well, now you can: check out this awesome video of the construction of the near detector for the Double Chooz reactor neutrino experiment in France.

This is the second of two identical detectors near the Chooz nuclear power station in northern France. The experiment, along with competing experiments, already showed that the neutrino mixing angle, Theta_13, was non-zero. A second detector measuring the same flux of neutrinos from the two reactor cores will drastically reduce the final measurement uncertainty.
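The reason a second, identical detector helps so much is that the dominant reactor-flux normalization uncertainty is shared by both detectors and largely cancels in their ratio. A minimal numerical sketch of that cancellation — the uncertainty values below are illustrative assumptions, not Double Chooz's actual numbers:

```python
# Illustrative numbers only: how a correlated reactor-flux normalization
# uncertainty cancels when two detectors measure the same neutrino flux.
flux_unc = 0.020    # 2% flux-normalization uncertainty, common to both (assumed)
stat_near = 0.005   # uncorrelated near-detector uncertainty (assumed)
stat_far = 0.010    # uncorrelated far-detector uncertainty (assumed)

# Far detector alone: flux and detector errors add in quadrature.
single = (flux_unc**2 + stat_far**2) ** 0.5

# Far/near ratio: the correlated flux term drops out of the ratio,
# leaving only the uncorrelated per-detector pieces.
ratio = (stat_near**2 + stat_far**2) ** 0.5

print(f"far detector alone: {single:.3f}, far/near ratio: {ratio:.3f}")
```

With these inputs the ratio measurement's uncertainty is roughly half that of the far detector alone, which is the sense in which the second detector "drastically" reduces the final uncertainty.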

by jfelde at April 16, 2015 02:04 PM

ZapperZ - Physics and Physicists

More Quantum Physics In Your Daily Lives
I pointed to an article a while back about the stuff we use everyday that came into being because of our understanding of quantum mechanics (basically, all of our modern electronics). Now, Chad Orzel has done the same thing in his article on Forbes, telling you how you actually start your mornings by relying on the validity of QM.

The tiny scale of all the best demonstrations of quantum physics can lead people to think that this is all basically meaningless, arcane technical stuff that only nerds in white lab coats need to worry about. This is deeply wrong, partly because I don’t know any physicists who wear white lab coats, but more importantly because quantum phenomena are at the heart of many basic technologies that we use every day.

In fact, I can’t start my morning without quantum mechanics, in the form of my bedside alarm clock.

You may read the rest of his arguments in the article.

I will also add something that I've mentioned before. The presence of quantum effects may be more prominent than what most are aware of, if we go by the evidence for the existence of superconductivity. As stated by Carver Mead, it is the clearest demonstration of QM effects at the macro scale. Yet, a lot of people simply do not recognize it for what it is.


by ZapperZ at April 16, 2015 01:02 PM

Symmetrybreaking - Fermilab/SLAC

Seeing the CMS experiment with new eyes

The wonders of particle physics serve as a springboard for a community-building arts initiative at Fermilab.

For many, the aspects of research at the Large Hadron Collider that inspire wonder are the very same that cast it as intellectually remote: ambitious aims about understanding our universe, a giant circular machine in the European underground, mammoth detectors that tower over us like cathedrals.

The power of art lies in the way it bridges the gap between wonder and understanding, says particle physicist and artist Michael Hoch, founder and driving force behind the outreach initiative Art@CMS. Through the creation and consumption of art inspired by the CMS experiment at the LHC, the public and scientific community approach each other in novel ways, allowing one party to better relate to the other and demystifying the science in the process.

“Art can transport information, but it has an additional layer—a way of allowing human beings to get in touch with each other,” says Hoch, who has worked as a scientist on CMS since 2007. “It can reach people who might not be interested in a typical science presentation. They might not feel smart enough, they might be afraid to be wrong. But with art, you cannot be wrong. It’s a personal reflection.”

As the hub for the United States’ participation in the CMS experiment, Fermilab, located outside Chicago, is currently showing the Art@CMS exhibit in the Fermilab Art Gallery. Organized by Fermilab Art Gallery curator Georgia Schwender, the exhibit coincides with the restart of the LHC, which recently fired up again after a two-year break for upgrades. The exhibit is not only a celebration of the LHC restart, it also aims to create connections between artists and CMS physicists in the United States.

Each artist in the Fermilab exhibit collaborated with a CMS scientist in researching his or her work. Emphasizing the collaborative nature of the exhibit, the artwork title cards display both the name of the artist and the collaborating scientist. Drawing on their interactions, the artists created pieces that invite the viewer to see the experiment—the science, the instruments and the people behind it—with new eyes.

Likewise, scientists get a chance to see how others view their search through unfathomably tiny shards of matter to solve the universe’s mysteries.  

“We work with creative people who come up with creative products that as scientists we may never have thought of, expressing our topic in a new way,” Hoch says. “If we can work with people to create different viewpoints on our topic, then we gain a lot.”

That spirit extends to young art students in Fermilab’s backyard. During one intense day at Fermilab in February, Hoch interacted with students from four local high schools. As part of this student outreach effort, called Science&Art@School, the students also learned from Fermilab scientists about what it’s like to work in the world of particle physics and about their own paintings and photographs. And with artist participants in the exhibit, students discussed translating hard-to-picture phenomena into something tangible.

The students were then given an assignment: Create a piece of art based on what they learned about Fermilab and CMS.

Through Science&Art@School, Fermilab caught hold of the imaginations of students who don’t typically visit the laboratory: non-science students, says Fermilab docent Anne Mary Teichert, who organized the effort.

“It was an amazing opportunity. Students were able to push themselves in ways they hadn’t before due to CMS’s generous contribution of funds for art supplies,” she says. “It was intense from the word ‘go.’”

The students’ work sessions resulted in a display of artwork at Water Street Studios in the nearby town of Batavia, Illinois.

“Their artwork reveals not just an abstract understanding—there’s a human dimension,” Teichert says. “They portray how physics resonates with their lives. There’s warmth and thoughtfulness there, and the connections they made were very interesting.”

Student Brandon Shimkus created a cube-shaped sculpture in which each side represents a different area of particle physics, from those we understand well to those for which we have some information but don’t fully grasp, such as dark matter.

“We were given so much information about things we never think about in that kind of way—and then we had to get our information together and make or paint something,” Shimkus says. “It was a challenge to create these things based on ideas you could barely understand on your own—but a fun challenge. If I could do it again, I would, hundreds of times.”

Science&Art@School has hosted 10 workshops, and the Art@CMS exhibit has shown at 25 venues around the world. The programs benefit from the fact that the CMS detector itself—a four-story-high device of intricate symmetry—is as visually fetching as it is a technological masterpiece. A life-size picture of the instrument by Hoch and CERN photographer Maximilien Brice is the centerpiece of Fermilab’s Art@CMS exhibit.

“Enlarging our CMS collaboration with art institutions and engaging artists, we gain a few points for free,” Hoch says. “And we want to fascinate the students with what we’re doing because this concerns them. They can open their eyes and see what we’re doing here is not just something far away—it’s taking place here in their neighborhood. And they are the next generation of us.”


Like what you see? Sign up for a free subscription to symmetry!

by Leah Hesla at April 16, 2015 01:00 PM

Matt Strassler - Of Particular Significance

Science Festival About to Start in Cambridge, MA

It’s a busy time here in Cambridge, Massachusetts, as the US’s oldest urban Science Festival opens tomorrow for its 2015 edition.  It has been 100 years since Einstein wrote his equations for gravity, known as his Theory of General Relativity, and so this year a significant part of the festival involves Celebrating Einstein.  The festival kicks off tomorrow with a panel discussion of Einstein and his legacy near Harvard University — and I hope some of you can go!   Here are more details:


First Parish in Cambridge, 1446 Massachusetts Avenue, Harvard Square, Cambridge
Friday, April 17; 7:30pm-9:30pm

Officially kicking off the Cambridge Science Festival, five influential physicists will sit down to discuss how Einstein’s work shaped the world we live in today and where his influence will continue to push the frontiers of science in the future!

Our esteemed panelists include:
Lisa Randall | Professor of Physics, Harvard University
Priyamvada Natarajan | Professor of Astronomy & Physics, Yale University
Clifford Will | Professor of Physics, University of Florida
Peter Galison | Professor of History of Science, Harvard University
David Kaiser | Professor of the History of Science, MIT

Cost: $10 per person, $5 per student, Tickets available now at

Filed under: History of Science, Public Outreach Tagged: Einstein, PublicOutreach, relativity

by Matt Strassler at April 16, 2015 12:48 PM

Lubos Motl - string vacua and pheno

LHC: chance to find SUSY quickly
This linker-not-thinker blog post will largely show materials from ATLAS. To be balanced, let me begin with a recommendation for a UCSB article Once More Unto the Breach about the CMS' excitement before the 13 TeV run. Note that the CMS (former?) boss Incandela is from UCSB. They consider the top squark to be their main target.

ATLAS is more into gluinos and sbottoms, it may seem. On March 25th, ATLAS released interesting graphs
Expected sensitivity studies for gluino and squark searches using the early LHC 13 TeV Run-2 dataset with the ATLAS experiment (see also PDF paper)
There are various graphs but let's repost six graphs using the same template.

These six graphs show the expected \(p_0\) (the probability of a false positive under the background-only hypothesis; see the left vertical axis), or equivalently the significance in sigmas (see the dashed red lines explained on the right vertical axis), with which a new superpartner could be discovered after 1, 2, 5, and 10 inverse femtobarns of collisions.
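For readers translating between the two axes of such plots: under the usual one-sided Gaussian convention, \(p_0\) and the significance in sigmas are related through the normal distribution. A minimal sketch of that convention (not ATLAS code):

```python
from statistics import NormalDist

def p0_to_sigma(p0):
    """Convert a background-only p-value to a one-sided Gaussian significance."""
    return NormalDist().inv_cdf(1.0 - p0)

def sigma_to_p0(z):
    """Inverse conversion: significance in sigmas back to a p-value."""
    return 1.0 - NormalDist().cdf(z)

# The famous 5-sigma discovery threshold corresponds to p0 of about 2.9e-7,
# while a 3-sigma "evidence" hint corresponds to p0 of about 1.35e-3.
print(f"{sigma_to_p0(5.0):.2e}")
print(f"{p0_to_sigma(0.00135):.1f}")
```

This is why the dashed red sigma lines in the plots sit at fixed heights on the logarithmic \(p_0\) axis.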

First, the bottom squark production. The sbottom decays to the neutralino and the bottom quark. If the uncertainty of the Standard Model backgrounds is at 40 percent, the graph looks like this:

You see that if the sbottom is at least 700 GeV heavy, even 10/fb will only get you to 3 sigma. Things improve if the uncertainty in the Standard Model backgrounds is only 20%. Then you get to 4.5 sigma:

Now, the production of gluino pairs. Each gluino decays to a neutralino and two quarks. With the background uncertainty 40%, we get this:

With the background uncertainty 20%, things improve:

You see that even a 1350 GeV gluino may be discovered at 5 sigma after 10 inverse femtobarns. I do think that I should win a bet against Adam Falkowski after 10/fb of new data because only 20/fb of the old data has been used in the searches and the "total deadline" of the bet is 30/fb.

Things look similar if there is an extra W-boson among the decay products of each gluino. With the 50% uncertainty of the Standard Model backgrounds, the chances are captured by this graph:

If the uncertainty of the Standard Model backgrounds is reduced to 25%, the discovery could be faster:

If you're happy with 3-sigma hints, they may appear after 10/fb even if the gluino is slightly above 1500 GeV.

The probability is small but nonzero that the gluino or especially the sbottom may be discovered even with 5/fb (if not 2/fb and perhaps 1/fb) of the data.

After 300/fb of collisions, one may see a wider and safer region of the parameter space, see e.g. this CMS study.

by Luboš Motl at April 16, 2015 09:17 AM

ATLAS Experiment

From ATLAS around the world: A view from Down Under

While ATLAS members at CERN were preparing for Run 2 during ATLAS week, eagerly awaiting beams to circulate in the LHC once again, colleagues “down under” in Australia were having a meeting of their own. The ARC Centre of Excellence for Particle Physics at the Terascale (CoEPP) is the hub of all things ATLAS in Australia. Supported by a strong cohort of expert theorists, we represent almost the entirety of particle physics in the nation. It certainly felt that way at our meeting: more than 120 people participated over five days of presentations, discussions and workshops. The meeting commenced at Monash University, where our youngest researchers attended a one-and-a-half-day summer school. They then joined their lecturers on planes across the Bass Strait to Tasmania, where we held our annual CoEPP general meeting.

CoEPP comprises ATLAS collaborators from the University of Adelaide, University of Melbourne and University of Sydney, augmented by theory groups, and joined by theory colleagues from Monash University. CoEPP is enhanced further by international partnerships with investigators in Cambridge, Duke, Freiburg, Geneva, Milano and UPenn to help add a global feel to the strong national impact.


Larry Lee of the University of Adelaide talks about his ideas for ATLAS Run 2 physics analyses.


Ongoing work was presented on precision studies of the Higgs boson, with a primary focus on the process where the Higgs is produced in association with a top-antitop quark pair (ttH) in the multilepton final state and the process where the Higgs decays into two tau leptons (H->tautau). Published results were shared along with some thoughts on how these analyses may proceed looking forward to Run 2. Novel techniques to search for beyond Standard Model processes in Supersymmetry and Exotica were discussed along with analysis results from Run 1 and prospects for discovery for various new physics scenarios. CoEPP physicists are also involved in precision measurements of the top-antitop (ttbar) cross-section and studies of the production and decay of Quarkonia, “flavourless” mesons comprised of a quark and its own anti-quark (Charmonium for instance is made up of charm and anti-charm quarks). It wasn’t just ATLAS physics being discussed though, with time set aside to talk about growing involvement in the plans to upgrade ATLAS (including the trigger system and inner detector) and how we can best leverage national expertise to have a telling impact.

A dedicated talk to outline our national research computing support for ATLAS proved very helpful to many people new to the Australian ATLAS landscape.

CoEPP director, Professor Geoffrey Taylor of the University of Melbourne, in deep discussion during the poster session.


I was happy to spend time with colleagues from our collaborating institutes and also to meet the new cohort of students/postdocs and researchers who have joined us over the past year. It dawns on me how the Australian particle physics effort is growing, and how we are attracting some of the brightest minds to the country. It is exciting to see the expansion and to be able to play a part in growing an effort nationally. The breadth of Australia’s particle physics involvement was demonstrated with a discussion of national involvement in Belle-II and the exciting development of a potential direct dark matter experiment to be situated in Australia at the Stawell Underground Physics Laboratory. The talks rounded out a complete week of interesting physics, good food, a few drinks and a lot of laughs.

As this was the first visit to Hobart for many of us it was particularly pleasing that the meeting dinner was held at the iconic Museum of Old and New Art (MONA), just outside the centre of the city. It proved a fitting setting to frame the exciting discussion, new and innovative ideas, and mixture of reflection and progression that the week contained. Although Australia’s ATLAS members are some of the farthest from CERN there is considerable activity and excitement down under as we plan to partake in a journey of rediscovery of the Standard Model at a new energy, and to see what else nature may have in store for us.

All the CoEPP workshop attendees outside MONA, Hobart.



by Paul Jackson at April 16, 2015 04:36 AM

John Baez - Azimuth

Kinetic Networks: From Topology to Design

Here’s an interesting conference for those of you who like networks and biology:

Kinetic networks: from topology to design, Santa Fe Institute, 17–19 September, 2015. Organized by Yoav Kallus, Pablo Damasceno, and Sidney Redner.

Proteins, self-assembled materials, virus capsids, and self-replicating biomolecules go through a variety of states on the way to or in the process of serving their function. The network of possible states and possible transitions between states plays a central role in determining whether they do so reliably. The goal of this workshop is to bring together researchers who study the kinetic networks of a variety of self-assembling, self-replicating, and programmable systems to exchange ideas about, methods for, and insights into the construction of kinetic networks from first principles or simulation data, the analysis of behavior resulting from kinetic network structure, and the algorithmic or heuristic design of kinetic networks with desirable properties.

by John Baez at April 16, 2015 01:00 AM

April 15, 2015

arXiv blog

First Quantum Music Composition Unveiled

Physicists have mapped out how to create quantum music, an experience that will be profoundly different for every member of the audience, they say.

One of the features of 20th-century art is its increasing level of abstraction, from cubism and surrealism in the early years to abstract expressionism and mathematical photography later. So an interesting question is: what further abstractions can we look forward to in the 21st century?

April 15, 2015 04:55 PM

astrobites - astro-ph reader's digest

C-3PO, PhD: Machine Learning in Astronomy

The problem of Big Data in Astronomy

Astronomers work with a lot of data. A serious ton of data. And the rate at which telescopes and simulations pour out data is increasing rapidly. Take the upcoming Large Synoptic Survey Telescope, or LSST. Each image taken by the telescope will be several GBs in size. Between the 2000 nightly images, processing over 10 million sources in each image, and then sending up to 100,000 transient alerts, the survey will result in more than 10 Terabytes of data… EVERY NIGHT! Throughout the entire project, more than 60 Petabytes of data will be generated. At one GB per episode, it would take 6 million seasons of Game of Thrones to amount to that much data. That’s a lot of science coming out of LSST.
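As a quick sanity check of the quoted figures (the 5 GB per image and 10 episodes per season below are assumed round numbers for illustration, not LSST specifications):

```python
# Back-of-envelope check of the LSST data-rate numbers quoted above.
images_per_night = 2000
gb_per_image = 5                      # "several GBs" per image; 5 GB assumed
nightly_tb = images_per_night * gb_per_image / 1024
print(f"~{nightly_tb:.0f} TB per night")

project_pb = 60                       # ~60 PB over the whole survey
episodes = project_pb * 1024**2       # in GB, at ~1 GB per episode
seasons = episodes / 10               # assume ~10 episodes per season
print(f"~{seasons / 1e6:.0f} million seasons of Game of Thrones")
```

The round numbers in the article check out: roughly 10 TB a night and about 6 million seasons' worth of data over the project.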

One of the largest problems with handling such large amounts of astronomical data is how to efficiently search for transient objects: things that appear and disappear. A major challenge of transient astronomy is how to distinguish something that truly became brighter (like a supernova) from a mechanical artifact (such as a faulty pixel, cosmic ray, or other defect). With so many images coming in, you can’t retake them every time to check if a source is real.

Before the era of Big Data, astronomers could often check images by eye. With this process, known as “scanning”, a trained scientist could often distinguish between a real source and an artifact. As more images have come streaming in, citizen science projects have arisen to harness the power of eager members of the public in categorizing data. But with LSST on the horizon, astronomers are in desperate need of better methods for classifying images.

Bring in Machine Learning

Fig. 1 – A visual example of a machine learning classification problem. Here, trees are sorted by two features: leaf size and number of leaves per twig. The training set (open points) have known classifications (Species 1, 2, or 3). Once the training set has been processed, the algorithm can generate classification rules (the dashed lines). Then, new trees (filled points) can be classified based on their features. Image adapted from

Today’s paper makes use of a computational technique known as machine learning to solve this problem. Specifically, they use a technique known as “supervised machine learning classification”. The goal of this method is to derive a classification of an object (here, an artifact or a real source) based on particular features that can be quantified about the object. The method is “supervised” because it requires a training set: a series of objects and their features along with known classifications. The training set is used to teach the algorithm how to classify objects. Rather than having a scientist elaborate rules that define a classification, this technique develops these rules as it learns. After this training set is processed, the algorithm can classify new objects based on their features (see Fig. 1).

To better understand supervised machine learning, imagine you are trying to identify species of trees. A knowledgeable friend tells you to study the color of the bark, the shape of the leaves, and the number of leaves on a twig — these are the features you’ll use to classify. Your friend shows you many trees, and tells you their species name (this is your training set), and you learn to identify each species based on their features. With a large enough training set, you should be able to classify the next tree you come to, without needing a classification from your friend. You are now ready to apply a “supervised learning” method to new data!
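To make the tree-species analogy concrete, here is a toy supervised classifier in the spirit of Fig. 1: it "learns" by summarizing each labeled class as the centroid of its training examples, then assigns new trees to the nearest centroid. All features and labels are invented for illustration; real pipelines use richer algorithms (random forests, neural networks) but the same train-then-classify structure.

```python
from collections import defaultdict
from math import dist

# Training set: (leaf_size_cm, leaves_per_twig) -> known species label.
# These values are made up purely to mirror the analogy in the text.
training = [
    ((2.0, 9), "species-1"), ((2.5, 8), "species-1"),
    ((6.0, 3), "species-2"), ((5.5, 4), "species-2"),
    ((9.0, 1), "species-3"), ((8.5, 1), "species-3"),
]

# "Learning": accumulate feature sums per class, then take centroids.
sums = defaultdict(lambda: [0.0, 0.0, 0])
for (size, n_leaves), label in training:
    s = sums[label]
    s[0] += size; s[1] += n_leaves; s[2] += 1
centroids = {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def classify(features):
    """Assign a new, unlabeled tree to the class with the nearest centroid."""
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

print(classify((2.2, 9)))   # close to the species-1 training examples
print(classify((8.8, 1)))   # close to the species-3 training examples
```

Once trained, `classify` needs no further input from the "knowledgeable friend" — exactly the property that makes supervised learning attractive for high-volume surveys.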

Using machine learning to improve transient searches

Fig. 2 – The performance of the autoScan algorithm. The false detection rate is how often an artifact is labeled a true source. The missed detection rate (or false-negative rate) is how often real sources are labeled as artifacts. For a given tolerance level (tau), the authors can select how willing they are to accept false positives in exchange for lower risk of missing true sources. The authors adopted a tolerance of 0.5 for their final algorithm. This level correctly identifies real sources 96% of the time, with only a 2.5% rate of false positives. Fig. 7 from Goldstein et al. 2015.

The authors of today’s paper developed a machine learning algorithm called autoScan, which classifies possible transient objects as artifacts or real sources. They apply this technique to imaging data from the Dark Energy Survey, or DES. The DES Supernova program is designed to measure the acceleration of the universe by imaging over 3000 supernovae and obtaining spectra for each. Housed in the Chilean Andes mountains, DES will be somewhat akin to a practice run for LSST, in terms of data output.

The autoScan algorithm uses a long list of features (such as the flux of the object and its shape) and a training set of almost 900,000 sources and artifacts. After this training set was processed, the authors tested the algorithm’s classification abilities against another validation set: more objects with known classifications that were not used in the training set. AutoScan was able to correctly identify real sources in the validation set 96% of the time, with a false detection (claiming an artifact to be a source) rate of only 2.5% (see Fig. 2).
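The two rates plotted in Fig. 2 are straightforward to compute from a labeled validation set. A toy sketch of those definitions — the truth labels and predictions below are invented, not autoScan's actual output:

```python
def detection_rates(truths, preds):
    """truths/preds: 1 = real source, 0 = artifact, element-by-element."""
    real = [(t, p) for t, p in zip(truths, preds) if t == 1]
    fake = [(t, p) for t, p in zip(truths, preds) if t == 0]
    # Missed detection rate: real sources wrongly flagged as artifacts.
    missed = sum(1 for t, p in real if p == 0) / len(real)
    # False detection rate: artifacts wrongly flagged as real sources.
    false_det = sum(1 for t, p in fake if p == 1) / len(fake)
    return missed, false_det

truths = [1, 1, 1, 1, 0, 0, 0, 0]   # invented validation labels
preds  = [1, 1, 1, 0, 0, 0, 0, 1]   # invented classifier decisions
missed, false_det = detection_rates(truths, preds)
print(missed, false_det)
```

Varying the tolerance tau mentioned in the Fig. 2 caption trades one rate against the other: a looser threshold catches more real sources at the cost of more false detections.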

With autoScan, the authors are prepared to analyze new data coming live from the Dark Energy Survey. They can greatly improve the efficiency of detecting transient sources like supernovae, by easily distinguishing them from instrumental artifacts. But better techniques, such as more clever development of training sets, will continue to beat down the rate of false positives.

Machine learning algorithms will become critical to the success of future large surveys like LSST, where person-power alone will be entirely insufficient to manage the incoming data. The same can be said for Gaia, TESS, the Zwicky Transient Facility, and pretty much any other upcoming astronomical survey.  Citizen science projects will still have many practical uses, and the trained eye of a professional astronomer will always be essential. But in the age of Big Astro, computers will continue to become more and more integral parts of managing the daily operations of research and discovery.

by Ben Cook at April 15, 2015 03:56 PM

Symmetrybreaking - Fermilab/SLAC

AMS results create cosmic ray puzzle

New results from the Alpha Magnetic Spectrometer experiment defy our current understanding of cosmic rays.

New results from the Alpha Magnetic Spectrometer experiment disagree with current models that describe the origin and movement of the high-energy particles called cosmic rays.

These deviations from the predictions might be caused by dark matter, a form of matter that neither emits nor absorbs light. But, according to Mike Capell, a senior researcher at the Massachusetts Institute of Technology working on the AMS experiment, it’s too soon to tell.

“It’s a real head scratcher,” Capell says. “We cannot say we are seeing dark matter, but we are seeing results that cannot be explained by the conventional wisdom about where cosmic rays come from and how they get here. All we can say right now is that our results are consistently confusing.”

The AMS experiment is located on the International Space Station and consists of several layers of sensitive detectors that record the type, energy, momentum and charge of cosmic rays. One of AMS’s scientific goals is to search for signs of dark matter.

Dark matter is almost completely invisible—except for the gravitational pull it exerts on galaxies scattered throughout the visible universe. Scientists suspect that dark matter is about five times as prevalent as regular matter, but so far have observed it only indirectly.

If dark matter particles collide with one another, they could produce offspring such as protons, electrons, antiprotons and positrons. These new particles would look and act like the cosmic rays that AMS usually detects, but they would appear at higher energies and with different relative abundances than the standard cosmological models forecast.

“The conventional models predict that at higher energies, the amount of antimatter cosmic rays will decrease faster than the amount of matter cosmic rays,” Capell says. “But because dark matter is its own antiparticle, when two dark matter particles collide, they are just as likely to produce matter particles as they are to produce antimatter particles, so we would see an excess of antiparticles.”

This new result compares the ratio of antiprotons to protons across a wide energy range and finds that this proportion does not drop down at higher energies as predicted, but stays almost constant. The scientists also found that the momentum-to-charge ratio for protons and helium nuclei is higher than predicted at greater energies.

“These new results are very exciting,” says CERN theorist John Ellis. “They’re much more precise than previous data and they are really going to enable us to pin down our models of antiproton and proton production in the cosmos.”

In 2013 and 2014 AMS found a similar result for the proportion of positrons to electrons—with a steep climb in the relative abundance of positrons at about 8 billion electronvolts followed by the possible start of a slow decline around 275 billion electronvolts. Those results could be explained by pulsars spitting out more positrons than expected, or by accelerating supernova remnants, Capell says.

“But antiprotons are so much heavier than positrons and electrons that they can’t be generated in pulsars,” he says. “Likewise, supernova remnants would not propagate antiprotons in the way we are observing.”

If this antimatter excess is the result of colliding dark matter particles, physicists should see a definitive bump in the relative abundance of antimatter particles with a particular energy followed by a decline back to the predicted value. Thus far, AMS has not collected enough data to see this full picture.

“This is an important new piece of the puzzle,” Capell says. “It’s like looking at the world with a really good new microscope—if you take a careful look, you might find all sorts of things that you don’t expect.”

Theorists are now left with the task of developing better models that can explain AMS’s unexpected results. “I think AMS’s data is taking the whole analysis of cosmic rays in this energy range to a whole new level,” Ellis says. “It’s revolutionizing the field.”



by Sarah Charley at April 15, 2015 03:03 PM

Lubos Motl - string vacua and pheno

Dark matter self-interaction detected?
Off-topic: My Facebook friend Vít Jedlička (SSO, Party of Free Citizens) established a new libertarian country, Liberland (To Live And Let Live), where the government doesn't get on your nerves. Before he elected himself the president, he had to carefully choose a territory where no one will bother him, where no one would ever start a war; he picked seven square kilometers in between Serbia and Croatia because these two nations wouldn't dare to damage one another. ;-) There's a catch for new citizens, however: the official language is Czech.
Lots of mainstream writers including BBC, The Telegraph, IBTimes, and Science Daily promote a preprint claiming that they see the non-gravitational forces between the particles that dark matter is composed of:
The behaviour of dark matter associated with 4 bright cluster galaxies in the 10kpc core of Abell 3827 (published in MNRAS)
Richard Massey (Durham) and 22 co-authors have analyzed the galaxy cluster Abell 3827 – which is composed of four very similar galaxies (unusual: they probably got clumped recently) – using new Hubble Space Telescope imaging and ESO's VLT/MUSE integral field spectroscopy.

The radius of the whole core – which is 1.3 billion light years from us – is 10 kpc. They show that each of the four galaxies has a dark matter halo. But at least one of those halos is offset by 1.62 kpc (plus minus 0.48 kpc, which includes all contributions to the errors, so that it's a 3.4 sigma "certainty").

Such offsets aren't seen in "free" galaxies but when galaxies collide, they may be expected due to the dark matter's collision with itself. With the most straightforward interpretation, the cross section is\[

\frac{\sigma}{m}=(1.7 \pm 0.7)\times 10^{-4}{\rm cm}^2/{\rm g} \times \left(\frac{t}{10^9\,{\rm yrs}}\right)^{-2},

\] where \(t\) is the infall duration. Well, if written in this way, it's only a 2.5-sigma certainty that \(\sigma\neq 0\), but that's probably considered enough for big claims by astrophysicists. (Astrophysicists apparently aren't cosmologists – only cosmology has turned into a hard science in the last 20 years.)

If that claim is right and dark matter interacts not only gravitationally (which is why it was introduced) but also by this additional interaction, it can not only rule out tons of models but also isolate some of the good ones (perhaps with WIMPs such as LSPs such as neutralinos).

The cross section cited above is safely below the recently published upper bound if \(t\) is at least comparable to billions (or tens of millions) of years. The \(t\)-dependence of the new result makes it a bit vague – and one could say that similar parameter- and model-dependent claims about the cross section already exist in the literature.
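To see how much that \(t^{-2}\) factor matters, one can simply evaluate the quoted formula for a few infall durations. A quick sketch of my own (the central value 1.7×10⁻⁴ cm²/g is from the paper; the sample durations are arbitrary):

```python
# Evaluate sigma/m = 1.7e-4 cm^2/g * (t / 1 Gyr)^-2 for a few infall durations t.
# The quadratic dependence means an order of magnitude in t moves the
# inferred cross section by two orders of magnitude.
for t_gyr in (0.05, 0.1, 1.0, 5.0):
    sigma_over_m = 1.7e-4 * t_gyr ** -2   # cm^2/g
    print(f"t = {t_gyr:5.2f} Gyr  ->  sigma/m = {sigma_over_m:.2e} cm^2/g")
```

So for infall times of tens of millions of years the implied cross section is thousands of times larger than for billion-year infall, which is why the result is hard to compare with existing bounds.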

Because of some recent thinking of mine, I should also mention that I think that it's also possible that an adequate MOND theory, with some specific form of nonlinear addition of the forces, could conceivably predict such an offset, too. A week ago, I independently rediscovered Milgrom's justification of MOND using the Unruh effect, after some exchanges with an enthusiastic young Czech female astrophysicist who liked some of my MOND/HOND remarks. For a while, my belief that "MOND and not dark matter" is basically right went above 10%, but it dropped below 10% again when I was reminded that there are no MOND theories that are really successful with the clusters.

Another dark-matter topic. Today's AMS press conference didn't seem to change the picture much.

Off-topic: If you need to be reminded of the distances inside (our) galaxy, then be aware that the British rapper Stephen Hawking has recorded a cover version of the Monty Python Galaxy Song for you to learn from. Hawking has even hired someone to represent all the stupid, obnoxious, and daft people – you feel that you've had enough – namely Brian Cocks (I have to write it in this way to avoid the copyright traps).

by Luboš Motl at April 15, 2015 02:01 PM

ZapperZ - Physics and Physicists

Use "i,j,k" notation instead of "arrow" representation for vectors in Intro Physics?
That is what the authors of this study have found to be more effective in analyzing students' understanding and ability to comprehend vector problems. (The paper is available for free.)

First, we replicated a number of previous findings of student difficulties in the arrow format and discovered several additional difficulties, including the finding that different relative arrow orientations can prompt different solution paths and different kinds of mistakes, which suggests that students need to practice with a variety of relative orientations. Most importantly, we found that average performance in the ijk format was typically excellent and often much better than performance in the arrow format in either the generic or physics contexts.

My question is, is this the result of an inherent conceptual problem in the arrow representation, or simply a matter of correcting some of the ways we teach vectors to students?


by ZapperZ at April 15, 2015 01:54 PM

Peter Coles - In the Dark

Albert, Bernard and Bell’s Theorem

You’ve probably all heard of the little logic problem involving the mysterious Cheryl and her friends Albert and Bernard that went viral on the internet recently. I decided not to post about it directly because it’s already been done to death. It did however make me think that if people struggle so much with “ordinary” logic problems of this type, it’s no wonder they are so puzzled by the kind of logical issues raised by quantum mechanics. Hence my motivation for updating a post I wrote quite a while ago. The question we’ll explore does not concern the date of Cheryl’s birthday but the spin of an electron.

To begin with, let me give a bit of physics background. Spin is a concept of fundamental importance in quantum mechanics, not least because it underlies our most basic theoretical understanding of matter. The standard model of particle physics divides elementary particles into two types, fermions and bosons, according to their spin.  One is tempted to think of  these elementary particles as little cricket balls that can be rotating clockwise or anti-clockwise as they approach an elementary batsman. But, as I hope to explain, quantum spin is not really like classical spin.

Take the electron, for example. The amount of spin an electron carries is quantized, so that its component along any given axis always has magnitude 1/2 (in units of Planck’s constant; all fermions have half-integer spin). In addition, according to quantum mechanics, the orientation of the spin is indeterminate until it is measured. Any particular measurement can only determine the component of spin in one direction. Let’s take as an example the case where the measuring device is sensitive to the z-component, i.e. spin in the vertical direction. The outcome of an experiment on a single electron will lead to a definite outcome which might be either “up” or “down” relative to this axis.

However, until one makes a measurement the state of the system is not specified and the outcome is consequently not predictable with certainty; there will be a probability of 50% for each possible outcome. We could write the state of the system (expressed by the spin part of its wavefunction ψ) prior to measurement in the form

|ψ> = (|↑> + |↓>)/√2

This gives me an excuse to use the rather beautiful “bra-ket” notation for the state of a quantum system, originally due to Paul Dirac. The two possibilities are “up” (↑) and “down” (↓) and they are contained within a “ket” (written |>), which is really just a shorthand for a wavefunction describing that particular aspect of the system. A “bra” would be of the form <|; for the mathematicians, this represents the Hermitian conjugate of a ket. The √2 is there to ensure that the total probability of the spin being either up or down is 1, remembering that the probability is the squared modulus of the wavefunction. When we make a measurement we will get one of these two outcomes, with a 50% probability of each.

At the point of measurement the state changes: if we get “up” it becomes purely |↑>  and if the result is  “down” it becomes |↓>. Either way, the quantum state of the system has changed from a “superposition” state described by the equation above to an “eigenstate” which must be either up or down. This means that all subsequent measurements of the spin in this direction will give the same result: the wave-function has “collapsed” into one particular state. Incidentally, the general term for a two-state quantum system like this is a qubit, and it is the basis of the tentative steps that have been taken towards the construction of a quantum computer.
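The collapse rule just described is easy to sketch numerically. The following toy Python snippet (my own illustration, not from the post) applies the Born rule to the superposition state above and checks that once the wavefunction has collapsed to an eigenstate, every subsequent measurement in the same direction repeats the first outcome:

```python
import numpy as np

rng = np.random.default_rng(0)

# |psi> = (|up> + |down>)/sqrt(2) as a 2-component vector; index 0 = "up".
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

def measure_z(psi):
    """Born rule: outcome probability is the squared modulus of the amplitude.
    Returns the outcome and the collapsed (eigenstate) wavefunction."""
    p_up = abs(psi[0]) ** 2
    if rng.random() < p_up:
        return "up", np.array([1, 0], dtype=complex)
    return "down", np.array([0, 1], dtype=complex)

outcome, psi = measure_z(psi)                      # 50-50 the first time
repeats = [measure_z(psi)[0] for _ in range(10)]   # wavefunction has collapsed
print(outcome, repeats)                            # every repeat matches the first outcome
```

Whatever the first (random) result is, the ten repeats all agree with it, because the collapsed state assigns probability 1 to that outcome.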

Notice that what is essential about this is the role of measurement. The collapse of  ψ seems to be an irreversible process, but the wavefunction itself evolves according to the Schrödinger equation, which describes reversible, Hamiltonian changes.  To understand what happens when the state of the wavefunction changes we need an extra level of interpretation beyond what the mathematics of quantum theory itself provides,  because we are generally unable to write down a wave-function that sensibly describes the system plus the measuring apparatus in a single form.

So far this all seems rather similar to the state of a fair coin: it has a 50-50 chance of being heads or tails, but the doubt is resolved when its state is actually observed. Thereafter we know for sure what it is. But this resemblance is only superficial. A coin only has heads or tails, but the spin of an electron doesn’t have to be just up or down. We could rotate our measuring apparatus by 90° and measure the spin to the left (←) or the right (→). In this case we still have to get a result which is a half-integer times Planck’s constant. It will have a 50-50 chance of being left or right that “becomes” one or the other when a measurement is made.

Now comes the real fun. Suppose we do a series of measurements on the same electron. First we start with an electron whose spin we know nothing about. In other words it is in a superposition state like that shown above. We then make a measurement in the vertical direction. Suppose we get the answer “up”. The electron is now in the eigenstate with spin “up”.

We then pass it through another measurement, but this time it measures the spin to the left or the right. The process of selecting the electron to be one with  spin in the “up” direction tells us nothing about whether the horizontal component of its spin is to the left or to the right. Theory thus predicts a 50-50 outcome of this measurement, as is observed experimentally.

Suppose we do such an experiment and establish that the electron’s spin vector is pointing to the left. Now our long-suffering electron passes into a third measurement which this time is again in the vertical direction. You might imagine that since we have already measured this component to be in the up direction, it would be in that direction again this time. In fact, this is not the case. The intervening measurement seems to “reset” the up-down component of the spin; the results of the third measurement are back at square one, with a 50-50 chance of getting up or down.
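This "resetting" behaviour can be reproduced in a toy simulation. The sketch below (my own, using two-component state vectors and the Born rule; the sign convention for the horizontal states is an arbitrary choice) measures the vertical component, then the horizontal one, then the vertical one again, and the third result comes out 50-50 regardless of the first:

```python
import numpy as np

rng = np.random.default_rng(1)

UP, DOWN = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
RIGHT = (UP + DOWN) / np.sqrt(2)   # horizontal-basis states; the sign
LEFT  = (UP - DOWN) / np.sqrt(2)   # convention here is my own choice

def measure(psi, basis):
    """Born-rule measurement in a two-state basis; returns index and collapsed state."""
    a, b = basis
    p_a = abs(np.vdot(a, psi)) ** 2
    return (0, a) if rng.random() < p_a else (1, b)

third = []
for _ in range(100_000):
    psi = (UP + DOWN) / np.sqrt(2)        # spin initially indeterminate
    _, psi = measure(psi, (UP, DOWN))     # 1st: vertical
    _, psi = measure(psi, (LEFT, RIGHT))  # 2nd: horizontal, scrambles the z-component
    k, _ = measure(psi, (UP, DOWN))       # 3rd: vertical again
    third.append(k)

print(np.mean(third))   # close to 0.5: the first result has been "reset"
```

The intervening horizontal measurement leaves the electron in a left or right eigenstate, each of which overlaps up and down equally, so the third measurement is back at square one.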

This is just one example of the kind of irreducible “randomness” that seems to be inherent in quantum theory. However, if you think this is what people mean when they say quantum mechanics is weird, you’re quite mistaken. It gets much weirder than this! So far I have focussed on what happens to the description of single particles when quantum measurements are made. Although there seem to be subtle things going on, it is not really obvious that anything happening is very different from systems in which we simply lack the microscopic information needed to make a prediction with absolute certainty.

At the simplest level, the difference is that quantum mechanics gives us a theory for the wave-function which somehow lies at a more fundamental level of description than the usual way we think of probabilities. Probabilities can be derived mathematically from the wave-function,  but there is more information in ψ than there is in |ψ|²; the wave-function is a complex entity whereas the square of its amplitude is entirely real. If one can construct a system of two particles, for example, the resulting wave-function is obtained by superimposing the wave-functions of the individual particles, and probabilities are then obtained by squaring this joint wave-function. This will not, in general, give the same probability distribution as one would get by adding the one-particle probabilities because, for complex entities A and B,

A² + B² ≠ (A + B)²

in general. To put this another way, one can write any complex number in the form a+ib (real part plus imaginary part) or, generally more usefully in physics, as Re^(iθ), where R is the amplitude and θ is called the phase. The square of the amplitude gives the probability associated with the wavefunction of a single particle, but in this case the phase information disappears; the truly unique character of quantum physics and how it impacts on probabilities of measurements only reveals itself when the phase information is retained. This generally requires two or more particles to be involved, as the absolute phase of a single-particle state is essentially impossible to measure.
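A quick numerical illustration of the point (my own sketch): for two equal-magnitude complex amplitudes, the sum of the squared moduli never changes, while the squared modulus of the sum sweeps from constructive to destructive interference as the relative phase varies:

```python
import numpy as np

# Two complex amplitudes of equal magnitude but varying relative phase theta.
# |A|^2 + |B|^2 is phase-independent; |A + B|^2 is not.
for theta in (0.0, np.pi / 2, np.pi):
    A = 1 / np.sqrt(2)
    B = np.exp(1j * theta) / np.sqrt(2)
    separate = abs(A) ** 2 + abs(B) ** 2   # always 1.0
    combined = abs(A + B) ** 2             # 1 + cos(theta): 2.0, 1.0, then ~0.0
    print(f"theta = {theta:.2f}: sum of squares = {separate:.2f}, square of sum = {combined:.2f}")
```

The cross term 2·Re(A*B) carries the phase information, and it is exactly this term that vanishes when one only ever looks at single-particle probabilities.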

Finding situations where the quantum phase of a wave-function is important is not easy. It seems to be quite easy to disturb quantum systems in such a way that the phase information becomes scrambled, so testing the fundamental aspects of quantum theory requires considerable experimental ingenuity. But it has been done, and the results are astonishing.

Let us think about a very simple example of a two-component system: a pair of electrons. All we care about for the purpose of this experiment is the spin of the electrons, so let us write the state of this system in terms of states such as |↑↓>, which I take to mean that the first particle has spin up and the second one has spin down. Suppose we can create this pair of electrons in a state where we know the total spin is zero. The electrons are indistinguishable from each other so until we make a measurement we don’t know which one is spinning up and which one is spinning down. The state of the two-particle system might be this:

|ψ> = (|↑↓> – |↓↑>)/√2

Squaring this up would give a 50% probability of “particle one” being up and “particle two” being down, and 50% for the contrary arrangement. This doesn’t look too different from the example I discussed above, but this duplex state exhibits a bizarre phenomenon known as quantum entanglement.
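As a sanity check, those probabilities can be read straight off the amplitudes. A minimal sketch of my own (the basis ordering is my own convention):

```python
import numpy as np

# The duplex state (|up,down> - |down,up>)/sqrt(2) as a 4-component vector
# in the basis (uu, ud, du, dd).
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

probs = np.abs(psi) ** 2   # Born rule applied to the joint wavefunction
print(dict(zip(["uu", "ud", "du", "dd"], probs)))
# ud and du each get probability 1/2; uu and dd are impossible,
# so the two spins always come out opposite.
```

Note that the relative minus sign between the two amplitudes is invisible here; it only matters when measurements are made along other axes, which is where entanglement shows its teeth.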

Suppose we start the system out in this state and then separate the two electrons without disturbing their spin states. Before making a measurement we really can’t say what the spins of the individual particles are: they are in a mixed state that is neither up nor down but a combination of the two possibilities. When they’re up, they’re up. When they’re down, they’re down. But when they’re only half-way up they’re in an entangled state.

If one of them passes through a vertical spin-measuring device we will then know that particle is definitely spin-up or definitely spin-down. Since we know the total spin of the pair is zero, then we can immediately deduce that the other one must be spinning in the opposite direction because we’re not allowed to violate the law of conservation of angular momentum: if Particle 1 turns out to be spin-up, Particle 2  must be spin-down, and vice versa. It is known experimentally that passing two electrons through identical spin-measuring gadgets gives  results consistent with this reasoning. So far there’s nothing so very strange in this.

The problem with entanglement lies in understanding what happens in reality when a measurement is done. Suppose we have two observers, Albert and Bernard, who are bored with Cheryl’s little games and have decided to do something interesting with their lives by becoming physicists. Each is equipped with a device that can measure the spin of an electron in any direction they choose. Particle 1 emerges from the source and travels towards Albert whereas particle 2 travels in Bernard’s direction. Before any measurement, the system is in an entangled superposition state. Suppose Albert decides to measure the spin of electron 1 in the z-direction and finds it spinning up. Immediately, the wave-function for electron 2 collapses into the down direction. If Albert had instead decided to measure spin in the left-right direction and found it “left”, a similar collapse would have occurred for particle 2, but this time putting it in the “right” direction.

Whatever Albert does, the result of any corresponding measurement made by Bernard has a definite outcome – the opposite to Albert’s result. So Albert’s decision whether to make a measurement up-down or left-right instantaneously transmits itself to Bernard, who will find a consistent answer if he makes the same measurement as Albert.

If, on the other hand, Albert makes an up-down measurement but Bernard measures left-right, then Albert’s answer has no effect on Bernard, who has a 50% chance of getting “left” and 50% chance of getting “right”. The point is that whatever Albert decides to do, it has an immediate effect on the wave-function at Bernard’s position; the collapse of the wave-function induced by Albert immediately collapses the state measured by Bernard. How can particle 1 and particle 2 communicate in this way?

This riddle is the core of a thought experiment by Einstein, Podolsky and Rosen in 1935 which has deep implications for the nature of the information that is supplied by quantum mechanics. The essence of the EPR paradox is that each of the two particles – even if they are separated by huge distances – seems to know exactly what the other one is doing. Einstein called this “spooky action at a distance” and went on to point out that this type of thing simply could not happen in the usual calculus of random variables. His argument was later tightened considerably by John Bell in a form now known as Bell’s theorem.

To see how Bell’s theorem works, consider the following roughly analogous situation. Suppose we have two suspects in prison, say Albert and Bernard (presumably Cheryl grassed them up and has been granted immunity from prosecution). The two are taken to separate cells for individual questioning. We can allow them to use notes, electronic organizers, tablets of stone or anything else to help them remember any agreed strategy they have concocted, but they are not allowed to communicate with each other once the interrogation has started. Each question they are asked has only two possible answers – “yes” or “no” – and there are only three possible questions. We can assume the questions are asked independently and in a random order to the two suspects.

When the questioning is over, the interrogators find that whenever they asked the same question, Albert and Bernard always gave the same answer, but when the question was different they only gave the same answer 25% of the time. What can the interrogators conclude?

The answer is that Albert and Bernard must be cheating. Either they have seen the question list ahead of time or are able to communicate with each other without the interrogator’s knowledge. If they always give the same answer when asked the same question, they must have agreed on answers to all three questions in advance. But when they are asked different questions then, because each question has only two possible responses, by following this strategy it must turn out that at least two of the three prepared answers – and possibly all of them – must be the same for both Albert and Bernard. This puts a lower limit on the probability of them giving the same answer to different questions. I’ll leave it as an exercise to the reader to show that the probability of coincident answers to different questions in this case must be at least 1/3.
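The exercise can be checked by brute force. Since always matching on identical questions forces the pair to share one fixed answer sheet, a strategy is just a triple of Y/N answers; enumerating all eight triples (a sketch of my own) confirms the one-third lower bound:

```python
from itertools import product

# To always agree on identical questions, Albert and Bernard must share one
# fixed answer sheet, so a strategy is just a triple of Y/N answers.
# For each triple, compute the chance of agreement when the two suspects
# are asked two *different* questions chosen at random.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

best = 1.0
for answers in product("YN", repeat=3):
    agree = sum(answers[i] == answers[j] for i, j in pairs) / len(pairs)
    best = min(best, agree)

print(best)   # 1/3: no honest strategy gets below one-third agreement
```

With only two possible answers to three questions, at least two prepared answers must coincide, so at least one of the three question-pairs always agrees; the best any strategy can do is exactly 1/3.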

This is a simple illustration of what in quantum mechanics is known as a Bell inequality. Albert and Bernard can only keep the number of such false agreements down to the measured level of 25% by cheating.

This example is directly analogous to the behaviour of the entangled quantum state described above under repeated interrogations about its spin in three different directions. The result of each measurement can only be either “yes” or “no”. Each individual answer (for each particle) is equally probable in this case; the same question always produces the same answer for both particles, but the probability of agreement for two different questions is indeed ¼ and not the ⅓ or more that would be expected if the answers had been agreed in advance. For example, one could ask particle 1 “are you spinning up?” and particle 2 “are you spinning to the right?”. The probability of both producing the answer “yes” is 25% according to quantum theory, but would be higher if the particles weren’t cheating in some way.
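The quantum 25% can be verified directly from the singlet state. The sketch below is my own: measurement axes are taken in the x-z plane, with "up" at angle 0 and "right" at 90°, and the joint probability is computed from the two-particle projector:

```python
import numpy as np

def up_projector(theta):
    """Projector onto 'spin up' along an axis at angle theta in the x-z plane:
    (I + n.sigma)/2 with n = (sin theta, 0, cos theta)."""
    n_dot_sigma = np.array([[np.cos(theta), np.sin(theta)],
                            [np.sin(theta), -np.cos(theta)]])
    return (np.eye(2) + n_dot_sigma) / 2

# Singlet state (|up,down> - |down,up>)/sqrt(2) in the (uu, ud, du, dd) basis.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def p_both_yes(theta1, theta2):
    """Probability that particle 1 is 'up' along theta1 AND particle 2 is 'up' along theta2."""
    P = np.kron(up_projector(theta1), up_projector(theta2))
    return float(np.real(singlet.conj() @ P @ singlet))

print(p_both_yes(0.0, 0.0))         # ~0: same axis, the spins are always opposite
print(p_both_yes(0.0, np.pi / 2))   # ~0.25: the 25% quoted in the text
```

Asking both particles the same question (the same axis) gives perfect anti-correlation, while the perpendicular pair of questions yields joint "yes" probability ¼, just as the interrogation analogy requires.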

Probably the most famous experiment of this type was done in the 1980s, by Alain Aspect and collaborators, involving entangled pairs of polarized photons (which are bosons), rather than electrons, primarily because these are easier to prepare.

The implications of quantum entanglement greatly troubled Einstein long before the EPR paradox. Indeed the interpretation of single-particle quantum measurement (which has no entanglement) was already troublesome. Just exactly how does the wave-function relate to the particle? What can one really say about the state of the particle before a measurement is made? What really happens when a wave-function collapses? These questions take us into philosophical territory that I have set foot in already; the difficult relationship between epistemological and ontological uses of probability theory.

Thanks largely to the influence of Niels Bohr, in the relatively early stages of quantum theory a standard approach to this question was adopted. In what became known as the  Copenhagen interpretation of quantum mechanics, the collapse of the wave-function as a result of measurement represents a real change in the physical state of the system. Before the measurement, an electron really is neither spinning up nor spinning down but in a kind of quantum purgatory. After a measurement it is released from limbo and becomes definitely something. What collapses the wave-function is something unspecified to do with the interaction of the particle with the measuring apparatus or, in some extreme versions of this doctrine, the intervention of human consciousness.

I find it amazing that such a view could have been held so seriously by so many highly intelligent people. Schrödinger hated this concept so much that he invented a thought-experiment of his own to poke fun at it. This is the famous “Schrödinger’s cat” paradox.

In a closed box there is a cat. Attached to the box is a device which releases poison into the box when triggered by a quantum-mechanical event, such as radiation produced by the decay of a radioactive substance. One can’t tell from the outside whether the poison has been released or not, so one doesn’t know whether the cat is alive or dead. When one opens the box, one learns the truth. Whether the cat has collapsed or not, the wave-function certainly does. At this point one is effectively making a quantum measurement so the wave-function of the cat is either “dead” or “alive” but before opening the box it must be in a superposition state. But do we really think the cat is neither dead nor alive? Isn’t it certainly one or the other, but that our lack of information prevents us from knowing which? And if this is true for a macroscopic object such as a cat, why can’t it be true for a microscopic system, such as that involving just a pair of electrons?

As I learned at a talk a while ago by the Nobel prize-winning physicist Tony Leggett – who has been collecting data on this  – most physicists think Schrödinger’s cat is definitely alive or dead before the box is opened. However, most physicists don’t believe that an electron definitely spins either up or down before a measurement is made. But where does one draw the line between the microscopic and macroscopic descriptions of reality? If quantum mechanics works for 1 particle, does it work also for 10, 1000? Or, for that matter, 10²³?

Most modern physicists eschew the Copenhagen interpretation in favour of one or other of two modern interpretations. One involves the concept of quantum decoherence, which is basically the idea that the phase information that is crucial to the underlying logic of quantum theory can be destroyed by the interaction of a microscopic system with one of larger size. In effect, this hides the quantum nature of macroscopic systems and allows us to use a more classical description for complicated objects. This certainly happens in practice, but this idea seems to me merely to defer the problem of interpretation rather than solve it. The fact that a large and complex system tends to hide its quantum nature from us does not in itself give us the right to have different interpretations of the wave-function for big things and for small things.

Another trendy way to think about quantum theory is the so-called Many-Worlds interpretation. This asserts that our Universe comprises an ensemble – sometimes called a multiverse – and  probabilities are defined over this ensemble. In effect when an electron leaves its source it travels through infinitely many paths in this ensemble of possible worlds, interfering with itself on the way. We live in just one slice of the multiverse so at the end we perceive the electron winding up at just one point on our screen. Part of this is to some extent excusable, because many scientists still believe that one has to have an ensemble in order to have a well-defined probability theory. If one adopts a more sensible interpretation of probability then this is not actually necessary; probability does not have to be interpreted in terms of frequencies. But the many-worlds brigade goes even further than this. They assert that these parallel universes are real. What this means is not completely clear, as one can never visit parallel universes other than our own …

It seems to me that none of these interpretations is at all satisfactory and, in the gap left by the failure to find a sensible way to understand “quantum reality”, there has grown a pathological industry of pseudo-scientific gobbledegook. Claims that entanglement is consistent with telepathy, that parallel universes are scientific truths, and that consciousness is a quantum phenomenon abound in the New Age sections of bookshops but have no rational foundation. Physicists may complain about this, but they have only themselves to blame.

But there is one remaining possibility for an interpretation of quantum mechanics that has been unfairly neglected by quantum theorists despite – or perhaps because of – the fact that it is the closest of all to common sense. This is the view that quantum mechanics is just an incomplete theory, and the reason it produces only a probabilistic description is that it does not provide sufficient information to make definite predictions. This line of reasoning has a distinguished pedigree, but fell out of favour after the arrival of Bell’s theorem and related issues. Early ideas on this theme revolved around the idea that particles could carry “hidden variables” whose behaviour we could not predict because our fundamental description is inadequate. In other words, two apparently identical electrons are not really identical; something we cannot directly measure marks them apart. If this works then we can simply use probability theory to deal with inferences made on the basis of information that’s not sufficient for absolute certainty.

After Bell’s work, however, it became clear that these hidden variables must possess a very peculiar property if they are to describe our quantum world. The property of entanglement requires the hidden variables to be non-local. In other words, two electrons must be able to communicate their values faster than the speed of light. Putting this conclusion together with relativity leads one to deduce that the chain of cause and effect must break down: hidden variables are therefore acausal. This is such an unpalatable idea that it seems to many physicists to be even worse than the alternatives, but to me it seems entirely plausible that the causal structure of space-time must break down at some level. On the other hand, not all “incomplete” interpretations of quantum theory involve hidden variables.

One can think of this category of interpretation as involving an epistemological view of quantum mechanics. The probabilistic nature of the theory has, in some sense, a subjective origin. It represents deficiencies in our state of knowledge. The alternative Copenhagen and Many-Worlds views I discussed above differ greatly from each other, but each is characterized by the mistaken desire to put quantum mechanics – and, therefore, probability –  in the realm of ontology.

The idea that quantum mechanics might be incomplete  (or even just fundamentally “wrong”) does not seem to me to be all that radical. Although it has been very successful, there are sufficiently many problems of interpretation associated with it that perhaps it will eventually be replaced by something more fundamental, or at least different. Surprisingly, this is a somewhat heretical view among physicists: most, including several Nobel laureates, seem to think that quantum theory is unquestionably the most complete description of nature we will ever obtain. That may be true, of course. But if we never look any deeper we will certainly never know…

With the gradual re-emergence of Bayesian approaches in other branches of physics a number of important steps have been taken towards the construction of a truly inductive interpretation of quantum mechanics. This programme sets out to understand  probability in terms of the “degree of belief” that characterizes Bayesian probabilities. Recently, Christopher Fuchs, amongst others, has shown that, contrary to popular myth, the role of probability in quantum mechanics can indeed be understood in this way and, moreover, that a theory in which quantum states are states of knowledge rather than states of reality is complete and well-defined. I am not claiming that this argument is settled, but this approach seems to me by far the most compelling and it is a pity more people aren’t following it up…

by telescoper at April 15, 2015 12:48 PM

Matt Strassler - Of Particular Significance

More on Dark Matter and the Large Hadron Collider

As promised in my last post, I’ve now written the answer to the second of the three questions I posed about how the Large Hadron Collider [LHC] can search for dark matter.  You can read the answers to the first two questions here. The first question was about how scientists can possibly look for something that passes through a detector without leaving any trace!  The second question is how scientists can tell the difference between ordinary production of neutrinos — which also leave no trace — and production of something else. [The answer to the third question — how one could determine this “something else” really is what makes up dark matter — will be added to the article later this week.]

In the meantime, after Monday’s post, I got a number of interesting questions about dark matter, why most experts are confident it exists, etc.  There are many reasons to be confident; it’s not just one argument, but a set of interlocking arguments.  One of the most powerful comes from simulations of the universe’s history.  These simulations

  • start with what we think we know about the early universe from the cosmic microwave background [CMB], including the amount of ordinary and dark matter inferred from the CMB (assuming Einstein’s gravity theory is right), and also including the degree of non-uniformity of the local temperature and density;
  • and use equations for known physics, including Einstein’s gravity, the behavior of gas and dust when compressed and heated, the effects of various forms of electromagnetic radiation on matter, etc.

The output of these simulations is a prediction for the universe today — and indeed, it roughly has the properties of the one we inhabit.
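
In spirit these are N-body calculations started from nearly uniform initial conditions. The toy sketch below (2D, Newtonian gravity with G = 1, softened forces, leapfrog integration; none of Illustris’s gas physics or scale, and all numbers invented for illustration) shows the basic mechanism: gravity amplifies tiny density fluctuations into clumps.

```python
import numpy as np

def accel(p, mass, soft=0.05):
    """Softened pairwise Newtonian accelerations (G = 1)."""
    d = p[:, None, :] - p[None, :, :]              # separations, shape (N, N, 2)
    inv_r3 = ((d ** 2).sum(-1) + soft ** 2) ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-force
    return -(d * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

def evolve(pos, vel, mass, dt, steps):
    """Leapfrog (kick-drift-kick) time integration."""
    a = accel(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos, mass)
        vel += 0.5 * dt * a
    return pos

def median_nn(p):
    """Median nearest-neighbour distance: a crude clumpiness measure."""
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.median(d.min(axis=1))

rng = np.random.default_rng(0)
N = 200
pos0 = rng.uniform(-1.0, 1.0, (N, 2))              # near-uniform "early universe"
mass = np.full(N, 1.0 / N)

final = evolve(pos0.copy(), np.zeros((N, 2)), mass, dt=0.01, steps=400)
print(f"median NN distance: {median_nn(pos0):.3f} -> {median_nn(final):.3f}")
```

The median nearest-neighbour distance shrinks as the initially near-uniform particles collapse into clumps, a crude stand-in for the structure formation that the real simulations follow in detail.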

Here’s a video from the Illustris collaboration, which has done the most detailed simulation of the universe so far.  Note the age of the universe listed at the bottom as the video proceeds.  On the left side of the video you see dark matter.  It quickly clumps under the force of gravity, forming a wispy, filamentary structure with dense knots, which then becomes rather stable; moderately dense regions are blue, highly dense regions are pink.  On the right side is shown gas.  You see that after the dark matter structure begins to form, that structure attracts gas, also through gravity, which then forms galaxies (blue knots) around the dense knots of dark matter.  The galaxies then form black holes with energetic disks and jets, and stars, many of which explode.   These much more complicated astrophysical effects blow clouds of heated gas (red) into intergalactic space.

Meanwhile, the distribution of galaxies in the real universe, as measured by astronomers, is illustrated in this video from the Sloan Digital Sky Survey.   You can see by eye that the galaxies in our universe show a filamentary structure, with big nearly-empty spaces, and loose strings of galaxies ending in big clusters.  That’s consistent with what is seen in the Illustris simulation.

Now if you’d like to drop the dark matter idea, the question you have to ask is this: could the simulations still give a universe similar to ours if you took dark matter out and instead modified Einstein’s gravity somehow?  [Usually this type of change goes under the name of MOND.]

In the simulation, gravity causes the dark matter, which is “cold” (cosmo-speak for “made from objects traveling much slower than light speed”), to form filamentary structures that then serve as the seeds for gas to clump and form galaxies.  So if you want to take the dark matter out, and instead change gravity to explain other features that are normally explained by dark matter, you have a challenge.   You are in danger of not creating the filamentary structure seen in our universe.  Somehow your change in the equations for gravity has to cause the gas to form galaxies along filaments, and do so in the time allotted.  Otherwise it won’t lead to the type of universe that we actually live in.

Challenging, yes.  Challenging is not the same as impossible. But everyone should understand that the arguments in favor of dark matter are by no means limited to the questions of how stars move in galaxies and how galaxies move in galaxy clusters.  Any implementation of MOND has to explain a lot of other things that, in most experts’ eyes, are efficiently taken care of by cold dark matter.

Filed under: Dark Matter, LHC Background Info Tagged: atlas, cms, DarkMatter, LHC, neutrinos

by Matt Strassler at April 15, 2015 12:35 PM

Peter Coles - In the Dark

Spring – Edna St Vincent Millay

To what purpose, April, do you return again?
Beauty is not enough.
You can no longer quiet me with the redness
Of little leaves opening stickily.
I know what I know.
The sun is hot on my neck as I observe
The spikes of the crocus.
The smell of the earth is good.
It is apparent that there is no death.
But what does that signify?
Not only under ground are the brains of men
Eaten by maggots.
Life in itself
Is nothing,
An empty cup, a flight of uncarpeted stairs.
It is not enough that yearly, down this hill,
Comes like an idiot, babbling and strewing flowers.

by Edna St Vincent Millay (1892-1950)

by telescoper at April 15, 2015 12:11 PM

CERN Bulletin

CERN Bulletin Issue No. 16-17/2015
Link to e-Bulletin Issue No. 16-17/2015 | Link to all articles in this issue

April 15, 2015 08:45 AM

April 14, 2015

Symmetrybreaking - Fermilab/SLAC

LSST construction begins

The Large Synoptic Survey Telescope will take the most thorough survey ever of the Southern sky.

Today a group will gather in northern Chile to participate in a traditional stone-laying ceremony. The ceremony marks the beginning of construction for a telescope that will use the world’s largest digital camera to take the most thorough survey ever of the Southern sky.

The 8-meter Large Synoptic Survey Telescope will image the entire visible sky a few times each week for 10 years. It is expected to see first light in 2019 and begin full operation in 2022.

Collaborators from the US National Science Foundation, the US Department of Energy, Chile’s Ministry of Foreign Affairs and Comisión Nacional de Investigación Científica y Tecnológica, along with several other international public-private partners, will participate in the ceremony.

“Today, we embark on an exciting moment in astronomical history,” says NSF Director France A. Córdova, an astrophysicist, in a press release. “NSF is thrilled to lead the way in funding a unique facility that has the potential to transform our knowledge of the universe.”

Equipped with a 3-billion-pixel digital camera, LSST will observe objects as they change or move, providing insight into short-lived transient events such as astronomical explosions and the orbital paths of potentially hazardous asteroids. LSST will take more than 800 panoramic images of the sky each night, allowing for detailed maps of the Milky Way and of our own solar system and charting billions of remote galaxies. Its observations will also probe the imprints of dark matter and dark energy on the evolution of the universe.

“We are very excited to see the start of the summit construction of the LSST facility,” says James Siegrist, DOE associate director of science for high-energy physics. “By collecting a unique dataset of billions of galaxies, LSST will provide multiple probes of dark energy, helping to tackle one of science’s greatest mysteries.”

NSF and DOE will share responsibilities over the lifetime of the project. The NSF, through its partnership with the Association of Universities for Research in Astronomy, will develop the site and telescope, along with the extensive data management system. It will also coordinate education and outreach efforts. DOE, through a collaboration led by its SLAC National Accelerator Laboratory, will develop the large-format camera.

In addition, the Republic of Chile will serve as project host, providing (and protecting) access to some of the darkest and clearest skies in the world over the LSST site on Cerro Pachón, a mountain peak in northern Chile. The site was chosen through an international competition due to the pristine skies, low levels of light pollution, dry climate and the robust and reliable infrastructure available in Chile.

“Chile has extraordinary natural conditions for astronomical observation, and this is once again demonstrated by the decision to build this unique telescope in Cerro Pachón,” says CONICYT President Francisco Brieva. “We are convinced that the LSST will bring important benefits for science in Chile and worldwide by opening up a new window of observation that will lead to new discoveries.”

By 2020, 70 percent of the world’s astronomical infrastructure is expected to be concentrated in Chile.


Like what you see? Sign up for a free subscription to symmetry!

April 14, 2015 04:39 PM

Tommaso Dorigo - Scientificblogging

Fun With A Dobson
It is galaxy season in the northern hemisphere, with Ursa Major at the zenith during the night and the Virgo cluster as high as it gets. And if you have ever put your eye on the eyepiece of a large telescope aimed at a far galaxy, you will agree it is quite an experience: you get to see light that traveled for tens or even hundreds of millions of years before reaching your pupil, crossing sizable portions of the universe to make a quite improbable rendez-vous with your photoreceptors.

read more

by Tommaso Dorigo at April 14, 2015 09:37 AM

astrobites - astro-ph reader's digest

The Cosmic Microwave Oven Background

Title: Identifying the source of perytons at the Parkes radio telescope
Authors: E. Petroff, E. F. Keane, E. D. Barr, J. E. Reynolds, J. Sarkissian, P. G. Edwards, J. Stevens, C. Brem, A. Jameson, S. Burke-Spolaor, S. Johnston, N. D. R. Bhat, P. Chandra, S. Kudale, S. Bhandari
First Author’s institution:  Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Australia

Over the past couple of decades the Parkes Radio Telescope in Australia has been picking up two types of mysterious signals, each lasting just a few milliseconds. One kind, the Fast Radio Bursts (FRBs), have come from seemingly random points in the sky at unpredictable times, and are thought to have a (thus far unknown) astronomical origin. The other kind of signal, perytons, which were named after the mythical winged creatures that cast the shadow of a human, have been found by this paper to have an origin much closer to home.

Although the 25 detected perytons are somewhat similar to FRBs, with a comparable spread in frequencies and durations, the authors’ suspicions were raised when they noticed that the perytons all happened during office hours, and mostly on weekdays. When they corrected for daylight saving time, they found that the perytons were even more tightly distributed: they mostly came at lunch time. Mostly.

Arrival times of perytons (pink) compared with FRBs (blue) at the Parkes Radio Telescope. Real astronomical signals probably don’t all come at lunchtime.
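
The daylight-saving clue is simple to reproduce. The sketch below uses invented UTC timestamps (not the real Parkes detections) and Python’s standard `zoneinfo` database: events tied to the observatory’s lunch hour look scattered in UTC, because the UTC offset changes when daylight saving starts, but line up once converted to local time.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # handles the AEST/AEDT daylight-saving shift

utc = ZoneInfo("UTC")
parkes = ZoneInfo("Australia/Sydney")  # timezone covering the Parkes site

# Illustrative UTC detection times (made up, not the actual events):
events_utc = [
    datetime(2011, 6, 15, 2, 10, tzinfo=utc),   # winter: 12:10 local (AEST)
    datetime(2011, 7, 3, 2, 40, tzinfo=utc),    # winter: 12:40 local (AEST)
    datetime(2012, 1, 20, 1, 20, tzinfo=utc),   # summer: 12:20 local (AEDT)
    datetime(2012, 12, 5, 1, 50, tzinfo=utc),   # summer: 12:50 local (AEDT)
]

local_hours = [t.astimezone(parkes).hour for t in events_utc]
utc_hours = [t.hour for t in events_utc]

print("UTC hours:  ", sorted(set(utc_hours)))    # spread over two hours
print("local hours:", sorted(set(local_hours)))  # all at lunchtime
```

This is essentially the correction the authors applied: a terrestrial, human-driven source clusters in local clock time, while a genuinely astronomical one should not care about our clocks.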

To search for the true origin of the perytons, Petroff et al. took advantage of the fact that the Parkes has just been fitted with a Radio Frequency Interference (RFI) monitor, which continuously scanned the whole sky to detect any background radio sources that might interfere with the astronomical observations.

In the week beginning 19th January 2015 the Parkes radio telescope detected three new perytons. Searching through the RFI data, the authors found that each peryton, with a radio frequency of 1.4GHz, was accompanied by another signal at 2.4GHz. Crucially, they could then compare their results with those from an identical RFI monitor at the nearby ATCA observatory. The 2.4GHz signal was nowhere to be seen in the ATCA data. Not only were the perytons not from space, they had to be coming from somewhere near the telescope.

Another clue came when the authors found that, although they had only observed three perytons, there were plenty of 2.4GHz signals in the RFI data that didn’t have an associated peryton. Petroff et al. decided to search for anything that would normally give off 2.4GHz signals, but occasionally emit a 1.4 GHz burst. Suspicion fell on the on-site microwave ovens—not only do they operate at 2.4GHz, the telescope had been pointing in the direction of at least one microwave every time a peryton had been seen.


A suspicious-looking microwave. Image Source: Wikimedia Commons

With the suspects cornered, the authors set about trying to create their own perytons. They found that the magnetrons in microwaves naturally emit a 1.4GHz pulse when powering down. Normally this signal is absorbed by the walls of the microwave, but if someone were to open the microwave door before it finished its cycle then the 1.4GHz pulse could escape. Using this technique, the authors were able to generate perytons with a 50 percent success rate. After decades of searching, the source of these mysterious signals had been found.

What about the FRBs? With the perytons confirmed as coming from Earth and not space, doubt was cast on the origin of the FRBs. The authors suggest that the FRBs are astronomical sources, and not linked with the perytons,  for two reasons:

  • The FRBS come at random times and random locations, whereas the perytons were all detected during the day in the general direction of microwaves.
  • The signal from FRBs is consistent with them having traveled through space, with indicators of interaction with interstellar plasma not seen in the perytons.

The authors finish by suggesting that a final test can be made the next time an FRB is observed: if no simultaneous 2.4GHz signal is seen, it would conclusively disprove any link between the FRBs and the perytons. What the FRBs really are remains unknown.


by David Wilson at April 14, 2015 09:36 AM

April 13, 2015

Quantum Diaries

Mapping the cosmos: Dark Energy Survey creates detailed guide to spotting dark matter

This Fermilab press release came out on April 13, 2015.

This is the first Dark Energy Survey map to trace the detailed distribution of dark matter across a large area of sky. The color scale represents projected mass density: red and yellow represent regions with more dense matter. The dark matter maps reflect the current picture of mass distribution in the universe where large filaments of matter align with galaxies and clusters of galaxies. Clusters of galaxies are represented by gray dots on the map – bigger dots represent larger clusters. This map covers three percent of the area of sky that DES will eventually document over its five-year mission. Image: Dark Energy Survey

Scientists on the Dark Energy Survey have released the first in a series of dark matter maps of the cosmos. These maps, created with one of the world’s most powerful digital cameras, are the largest contiguous maps created at this level of detail and will improve our understanding of dark matter’s role in the formation of galaxies. Analysis of the clumpiness of the dark matter in the maps will also allow scientists to probe the nature of the mysterious dark energy, believed to be causing the expansion of the universe to speed up.

The new maps were released today at the April meeting of the American Physical Society in Baltimore, Maryland. They were created using data captured by the Dark Energy Camera, a 570-megapixel imaging device that is the primary instrument for the Dark Energy Survey (DES).

Dark matter, the mysterious substance that makes up roughly a quarter of the universe, is invisible to even the most sensitive astronomical instruments because it does not emit or block light. But its effects can be seen by studying a phenomenon called gravitational lensing – the distortion that occurs when the gravitational pull of dark matter bends light around distant galaxies. Understanding the role of dark matter is part of the research program to quantify the role of dark energy, which is the ultimate goal of the survey.

This analysis was led by Vinu Vikram of Argonne National Laboratory (then at the University of Pennsylvania) and Chihway Chang of ETH Zurich. Vikram, Chang and their collaborators at Penn, ETH Zurich, the University of Portsmouth, the University of Manchester and other DES institutions worked for more than a year to carefully validate the lensing maps.

“We measured the barely perceptible distortions in the shapes of about 2 million galaxies to construct these new maps,” Vikram said. “They are a testament not only to the sensitivity of the Dark Energy Camera, but also to the rigorous work by our lensing team to understand its sensitivity so well that we can get exacting results from it.”

The camera was constructed and tested at the U.S. Department of Energy’s Fermi National Accelerator Laboratory and is now mounted on the 4-meter Victor M. Blanco telescope at the National Optical Astronomy Observatory’s Cerro Tololo Inter-American Observatory in Chile. The data were processed at the National Center for Supercomputing Applications at the University of Illinois in Urbana-Champaign.

The dark matter map released today makes use of early DES observations and covers only about three percent of the area of sky DES will document over its five-year mission. The survey has just completed its second year. As scientists expand their search, they will be able to better test current cosmological theories by comparing the amounts of dark and visible matter.

Those theories suggest that, since there is much more dark matter in the universe than visible matter, galaxies will form where large concentrations of dark matter (and hence stronger gravity) are present. So far, the DES analysis backs this up: The maps show large filaments of matter along which visible galaxies and galaxy clusters lie and cosmic voids where very few galaxies reside. Follow-up studies of some of the enormous filaments and voids, and the enormous volume of data, collected throughout the survey will reveal more about this interplay of mass and light.

“Our analysis so far is in line with what the current picture of the universe predicts,” Chang said. “Zooming into the maps, we have measured how dark matter envelops galaxies of different types and how together they evolve over cosmic time. We are eager to use the new data coming in to make much stricter tests of theoretical models.”

View the Dark Energy Survey analysis.

The Dark Energy Survey is a collaboration of more than 300 scientists from 25 institutions in six countries. Its primary instrument, the Dark Energy Camera, is mounted on the 4-meter Blanco telescope at the National Optical Astronomy Observatory’s Cerro Tololo Inter-American Observatory in Chile, and its data is processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Funding for the DES Projects has been provided by the U.S. Department of Energy Office of Science, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, ETH Zurich for Switzerland, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência e Tecnologia, the Deutsche Forschungsgemeinschaft and the collaborating institutions in the Dark Energy Survey. The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2012-39559, ESP2013-48274, FPA2013-47986 and Centro de Excelencia Severo Ochoa SEV-2012-0234, some of which include ERDF funds from the European Union.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at and follow us on Twitter at @Fermilab.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit

by Fermilab at April 13, 2015 07:35 PM

astrobites - astro-ph reader's digest

Discovering the mysterious companions of Cepheids

Title: Discovery of blue companions to two southern Cepheids: WW Car and FN Vel

Authors: V. Kovtyukh, L. Szabados, F. Chekhonadskikh, B. Lemasle, and S. Belik

First Author’s Institution: Astronomical Observatory, Odessa National University

Status: Published in MNRAS


Figure 1: This shows the spectra of a Cepheid without a companion (bottom) and the spectrum of a B-star (top spectrum). As we can see, the Balmer line from the B-star nearly overlaps with the Ca II H line of the Cepheid. This is also Figure 1 from the paper.

Cepheid variable stars are perhaps most famous for serving as standard candles in the cosmic distance ladder. The periods of their variability are related to their luminosities by the period-luminosity relationship (now sometimes also called Leavitt’s law). This was first discovered by Henrietta Swan Leavitt over a hundred years ago but this relationship is something astronomers are still working to improve today. Astrobites has also discussed Cepheids and other variable stars and the cosmic distance ladder in previous posts.
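
As a reminder of how the relationship is used in practice, here is a toy distance estimate. The V-band coefficients are roughly those of one published calibration and are used purely for illustration; real applications fit the zero-point carefully and must correct for extinction and reddening.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    """Estimate a Cepheid's distance in parsecs via Leavitt's law.

    The coefficients are approximate V-band values for classical
    Cepheids, for illustration only.
    """
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # distance modulus: m - M = 5 log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# a 10-day Cepheid observed at m_V = 10 comes out at roughly 6.5 kpc
print(f"{cepheid_distance_pc(10.0, 10.0):.0f} pc")
```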

Though the physics behind Cepheid variability is well-understood, we still have significant difficulties to overcome in order to improve the zero-point of Leavitt’s law. Cepheids are supergiants. They are stars several times the mass of our Sun that have evolved off of the main sequence of the stellar color-magnitude diagram (in other words they’re in the stellar ‘afterlife’). Because bigger stars burn their fuel faster than smaller stars, Cepheids are also young stars. Thus they are often found in the dusty regions of galaxies so we have to deal with absorption, reddening, and dust scattering when we observe them. The period-luminosity relationship may also have a dependence on metallicity (the fraction of atoms in the star that are heavier than helium) that we still don’t fully understand.

Another common problem that we face when using Cepheids—and the focus of today’s paper!—is the presence of a binary companion. Cepheids, like most stars, are often found in binary systems; in fact, more than 50% of Galactic Cepheids are expected to have at least one companion. The number of Cepheids with binary companions is so high that we can’t deal with them by simply throwing out the ones that have companions. Separating the luminosity of the Cepheid from that of its companion is important if we want to use the period-luminosity relationship.

Fortunately, since binary systems are incredibly useful in astronomy in their own right, astronomers have many ways of detecting them using spectroscopy, photometry, and astrometry. For one thing, observations of binary stars are pretty much the only way we can directly determine the masses of stars other than the Sun. Today’s astrobite, however, focuses on a method of detecting binaries that is useful specifically in finding the companions of Cepheids.


Figure 2: Figure 3 from the paper, showing the spectra of the two Cepheids for which they discovered blue companions. We can see that the Ca II H line on the right is “deeper” than the Ca II K line on the left.

The authors of today’s paper used the “calcium-line method” (we’ll elaborate more on this in a minute) to study 103 southern Cepheids. This allowed them to identify a new blue binary companion to the Cepheid WW Car and independently confirm another recent discovery of a blue binary companion to the Cepheid FN Vel. They were also able to extract the known blue companions of eight other Cepheids.

To do this, they used spectra taken with the MPG telescope and FEROS spectrograph and focused on the 3900-4000 angstrom range to study the depths of the calcium lines Ca II H (3968.47 angstroms) and Ca II K (3933.68 angstroms) and also the Hϵ line (3970.07 angstroms) of the Balmer series. In Cepheids without blue companions, the depths of the two calcium lines will be equal (see Figure 1). However, if the Cepheid has a blue (hot) companion, then the Hϵ line of the blue companion will be superimposed on the Ca II H line, causing it to be deeper than the Ca II K line. Thus, by studying the relative depths of these two lines, they were able to identify Cepheids with hot companions. This method works for companion stars hotter than spectral type A3V (hence the “blue” moniker in the title), so they were also able to see the increased line depth in Cepheids with previously discovered hot companion stars. To make sure that they were not affected by lines from Earth’s atmosphere, the authors also made observations of blue stars at the same time as their Cepheid observations. Spectra for WW Car and FN Vel can be seen in Figure 2.
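
A cartoon version of the calcium-line comparison, with entirely synthetic Gaussian line profiles (the depths, widths, and decision threshold below are invented for illustration):

```python
import numpy as np

CA_K, CA_H, H_EPS = 3933.68, 3968.47, 3970.07  # line centers in angstroms

def gaussian_absorption(wl, center, depth, sigma=1.0):
    return depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def spectrum(wl, companion_h_eps_depth=0.0):
    """Toy normalized Cepheid spectrum: equal Ca II H and K lines, plus
    an optional Balmer H-epsilon line from a hot companion that blends
    into Ca II H."""
    flux = np.ones_like(wl)
    flux -= gaussian_absorption(wl, CA_K, 0.5)
    flux -= gaussian_absorption(wl, CA_H, 0.5)
    flux -= gaussian_absorption(wl, H_EPS, companion_h_eps_depth, sigma=2.0)
    return flux

def line_depth(wl, flux, center, window=3.0):
    sel = np.abs(wl - center) < window
    return 1.0 - flux[sel].min()

wl = np.linspace(3900.0, 4000.0, 2000)

for d in (0.0, 0.3):
    f = spectrum(wl, companion_h_eps_depth=d)
    k, h = line_depth(wl, f, CA_K), line_depth(wl, f, CA_H)
    verdict = 'blue companion suspected' if h > k + 0.05 else 'no companion'
    print(f"companion depth {d}: K={k:.2f} H={h:.2f} -> {verdict}")
```

With no companion the H and K depths match; adding an Hϵ contribution blended into Ca II H makes the H line measurably deeper, which is the signature the authors searched for.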

They then searched the literature for other evidence of binary companions to WW Car and FN Vel. FN Vel’s binary companion had been discovered independently, and WW Car had previously been suspected of having a binary companion because of its photometry. WW Car’s pulsation period also seemed to display a long-term sinusoidal variation, which the authors speculate could be attributed to the light-time effect that is sometimes seen in binary systems. This is caused by the light traveling different distances to reach us as the Cepheid orbits the system’s center of mass. Danish astronomer Ole Rømer actually used this effect to calculate the speed of light in the 1600s!

Figure 3: The O – C Diagram for WW Car (Figure 4 from the paper). We can see the sinusoidal variations in period that the authors speculate may be caused by the light-time effect of a binary system.

WW Car’s varying periodicity can be seen in its O – C diagram (observed minus calculated) in Figure 3.  An O – C diagram shows the difference in time between when we expect to see a certain phase of the Cepheid (the ‘calculated’ part of the diagram) and when we actually see it (the ‘observed’ part of the diagram). If the period isn’t changing, we’d see points best fit by a horizontal line, because the dates at which we expect the Cepheid to be at a certain part of its phase – say the maximum – would be about the same as when we actually observe it. In WW Car’s case, they see a wave-like structure in the O – C diagram, so they suggest that this could be further evidence for the existence of the blue companion they detected in their analysis of its calcium lines.
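
The construction of such a diagram, and the wave-like signature of the light-time effect, can be sketched with invented numbers (these are not WW Car’s actual elements):

```python
import numpy as np

# Toy O - C construction: a Cepheid pulsating with a fixed period P,
# observed while it orbits a companion.  The changing light-travel
# distance adds a sinusoidal delay to the observed times of maximum.
P = 4.68            # pulsation period in days (illustrative)
P_orb = 4000.0      # orbital period in days (illustrative)
amp = 0.02          # light-time semi-amplitude in days (~29 minutes)

cycles = np.arange(0, 3000)
calculated = cycles * P                           # constant-period ephemeris
observed = calculated + amp * np.sin(2 * np.pi * calculated / P_orb)

o_minus_c = observed - calculated                 # the O - C curve

# A flat O - C would mean a constant period; a sinusoid hints at a binary.
print(f"O - C range: {o_minus_c.min():+.3f} to {o_minus_c.max():+.3f} days")
```

The residual swings between ±amp with the orbital period, whereas a genuinely constant pulsation period would give a flat O – C curve.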

There are, however, a few limitations to the calcium-line method. For example, the Cepheid observations were generally made at the peaks of their light curves to give the highest possible signal-to-noise ratio, but the best way to search for Cepheid companions is to look when the Cepheid is at the minimum of its light curve. This particular method, as noted earlier, is also only effective for companion stars hotter than spectral type A3V, so it wouldn’t work on less massive companions. Nevertheless, as the authors of today’s paper have shown, looking at the relative depths of a Cepheid’s calcium lines can be a very effective way of uncovering the secret companions of Cepheids.

by Caroline Huang at April 13, 2015 05:18 PM

Symmetrybreaking - Fermilab/SLAC

DES releases dark matter map

The Dark Energy Survey's detailed maps may help scientists better understand galaxy formation.

Scientists on the Dark Energy Survey have released the first in a series of dark matter maps of the cosmos. These maps, created with one of the world's most powerful digital cameras, are the largest contiguous maps created at this level of detail and will improve our understanding of dark matter's role in the formation of galaxies. Analysis of the clumpiness of the dark matter in the maps will also allow scientists to probe the nature of the mysterious dark energy, believed to be causing the expansion of the universe to speed up.

The new maps were released today at the April meeting of the American Physical Society in Baltimore, Maryland. They were created using data captured by the Dark Energy Camera, a 570-megapixel imaging device that is the primary instrument for the Dark Energy Survey.

Dark matter, the mysterious substance that makes up roughly a quarter of the universe, is invisible to even the most sensitive astronomical instruments because it does not emit or block light. But its effects can be seen by studying a phenomenon called gravitational lensing – the distortion that occurs when the gravitational pull of dark matter bends light around distant galaxies. Understanding the role of dark matter is part of the research program to quantify the role of dark energy, which is the ultimate goal of the survey.

This analysis was led by Vinu Vikram of Argonne National Laboratory (then at the University of Pennsylvania) and Chihway Chang of ETH Zurich. Vikram, Chang and their collaborators at Penn, ETH Zurich, the University of Portsmouth, the University of Manchester and other DES institutions worked for more than a year to carefully validate the lensing maps.

"We measured the barely perceptible distortions in the shapes of about 2 million galaxies to construct these new maps," Vikram says. "They are a testament not only to the sensitivity of the Dark Energy Camera, but also to the rigorous work by our lensing team to understand its sensitivity so well that we can get exacting results from it."

The camera was constructed and tested at the US Department of Energy's Fermi National Accelerator Laboratory and is now mounted on the 4-meter Victor M. Blanco telescope at the National Optical Astronomy Observatory's Cerro Tololo Inter-American Observatory in Chile. The data were processed at the National Center for Supercomputing Applications at the University of Illinois in Urbana-Champaign.

The dark matter map released today makes use of early DES observations and covers only about three percent of the area of sky DES will document over its five-year mission. The survey has just completed its second year. As scientists expand their search, they will be able to better test current cosmological theories by comparing the amounts of dark and visible matter.

Those theories suggest that, since there is much more dark matter in the universe than visible matter, galaxies will form where large concentrations of dark matter (and hence stronger gravity) are present. So far, the DES analysis backs this up: The maps show large filaments of matter along which visible galaxies and galaxy clusters lie, and cosmic voids where very few galaxies reside. Follow-up studies of some of the enormous filaments and voids, together with the enormous volume of data collected throughout the survey, will reveal more about this interplay of mass and light.

"Our analysis so far is in line with what the current picture of the universe predicts," Chang says. "Zooming into the maps, we have measured how dark matter envelops galaxies of different types and how together they evolve over cosmic time. We are eager to use the new data coming in to make much stricter tests of theoretical models."

View the Dark Energy Survey analysis.


Fermilab published a version of this article as a press release.


Like what you see? Sign up for a free subscription to symmetry!

April 13, 2015 04:39 PM

Axel Maas - Looking Inside the Standard Model

A partner for every particle?
A master's student has started a thesis with me on a new topic, one I have not worked on before. Therefore, before going into the details of the thesis topic itself, I would like to introduce the basic physics underlying it.

The topic is the rather famous concept of supersymmetry. What this means I will explain in a minute. Supersymmetry is related to two general topics we are working on. One is the quest for what comes after the standard model. It is in this respect that it has become famous. There are many excellent introductions to why it is relevant, and why it could be within the LHC's reach to discover it. I will not just point to one of these, but write a new text on it here. Why? Because of the relation to the second research area involved in the master's thesis, the foundational groundwork on the theory. This gives our investigation a quite different perspective on the topic, and requires a different kind of introduction.

So what is supersymmetry all about? I have written before about the fact that there are two very different types of particles: bosons and fermions. Both types have very distinct features, and every particle we know belongs to one of these two types. E.g., the famous Higgs is a boson, while the electron is a fermion.

One question to pose is whether these two categories are really distinct, or whether they are just two sides of a single coin. Supersymmetry is what you get if you try to realize the latter option. Supersymmetry - or SUSY for short - introduces a relation between bosons and fermions. A consequence of SUSY is that for every boson there is a fermion partner, and for every fermion there is a boson partner.

A quick counting of the particles in the standard model shows that it cannot be supersymmetric. Moreover, SUSY also dictates that all other properties of a boson and its fermion partner must be the same. This includes the mass and the electric charge. Hence, if SUSY were real, there should be a boson which otherwise acts like an electron. Experiments tell us that this is not the case. So is SUSY doomed? Well, not necessarily. There is a weaker version of SUSY where it is only approximately true - a so-called broken symmetry. This allows the partners to have different masses, and then they can escape detection. For now.

SUSY, even in its approximate form, has many neat features. It is therefore a possibility desired by many to be true. But only experiment (and nature) will tell eventually.

But the reason why we are interested in SUSY is quite different.

As you see, SUSY puts tight constraints on what kinds of particles are in a theory. But it does even more. It also restricts the way these particles can interact. The constraints on the interactions are a little more flexible than those on the kinds of particles. You can realize different amounts of SUSY by relaxing or enforcing relations between the interactions. What does 'more or less' SUSY mean? The details are somewhat subtle, but a hand-waving statement is that more SUSY not only relates bosons and fermions, but in addition increasingly relates the partner particles of different particles to each other. There is an ultimate limit to the amount of SUSY you can have, essentially when everything is related to everything else and every interaction has the same strength. That is what is called a maximally supersymmetric theory. For technical reasons it goes by the fancy name N=4 SUSY, should you come across it somewhere on the web.

And it is this theory which is interesting to us. Such tight constraints enforce a largely predetermined behavior. Many things are fixed, so calculations are simpler. At the same time, many of the more subtle questions we are working on are nonetheless still there. Using the additional constraints, we hope to understand this stuff better. With these insights, we may have a better chance of understanding the same stuff in a less rigid theory, like the standard model.

by Axel Maas at April 13, 2015 03:31 PM

Matt Strassler - Of Particular Significance

Dark Matter: How Could the Large Hadron Collider Discover It?

Dark Matter. Its existence is still not 100% certain, but if it exists, it is exceedingly dark, both in the usual sense — it doesn’t emit light or reflect light or scatter light — and in a more general sense — it doesn’t interact much, in any way, with ordinary stuff, like tables or floors or planets or  humans. So not only is it invisible (air is too, after all, so that’s not so remarkable), it’s actually extremely difficult to detect, even with the best scientific instruments. How difficult? We don’t even know, but certainly more difficult than neutrinos, the most elusive of the known particles. The only way we’ve been able to detect dark matter so far is through the pull it exerts via gravity, which is big only because there’s so much dark matter out there, and because it has slow but inexorable and remarkable effects on things that we can see, such as stars, interstellar gas, and even light itself.

About a week ago, the mainstream press was reporting, inaccurately, that the leading aim of the Large Hadron Collider [LHC], after its two-year upgrade, is to discover dark matter. [By the way, on Friday the LHC operators made the first beams with energy-per-proton of 6.5 TeV, a new record and a major milestone in the LHC’s restart.]  There are many problems with such a statement, as I commented in my last post, but let’s leave all that aside today… because it is true that the LHC can look for dark matter.   How?

When people suggest that the LHC can discover dark matter, they are implicitly assuming

  • that dark matter exists (very likely, but perhaps still with some loopholes),
  • that dark matter is made from particles (which isn’t established yet) and
  • that dark matter particles can be commonly produced by the LHC’s proton-proton collisions (which need not be the case).

You can question these assumptions, but let’s accept them for now.  The question for today is this: since dark matter barely interacts with ordinary matter, how can scientists at an LHC experiment like ATLAS or CMS, which is made from ordinary matter of course, have any hope of figuring out that they’ve made dark matter particles?  What would have to happen before we could see a BBC or New York Times headline that reads, “Large Hadron Collider Scientists Claim Discovery of Dark Matter”?

Well, to address this issue, I’m writing an article in three stages. Each stage answers one of the following questions:

  1. How can scientists working at ATLAS or CMS be confident that an LHC proton-proton collision has produced an undetected particle — whether this be simply a neutrino or something unfamiliar?
  2. How can ATLAS or CMS scientists tell whether they are making something new and Nobel-Prizeworthy, such as dark matter particles, as opposed to making neutrinos, which they do every day, many times a second?
  3. How can we be sure, if ATLAS or CMS discovers they are making undetected particles through a new and unknown process, that they are actually making dark matter particles?

My answer to the first question is finished; you can read it now if you like.  The second and third answers will be posted later during the week.

But if you’re impatient, here are highly compressed versions of the answers, in a form which is accurate, but admittedly not very clear or precise.

  1. Dark matter particles, like neutrinos, would not be observed directly. Instead their presence would be indirectly inferred, by observing the behavior of other particles that are produced alongside them.
  2. It is impossible to directly distinguish dark matter particles from neutrinos or from any other new, equally undetectable particle. But the equations used to describe the known elementary particles (the “Standard Model”) predict how often neutrinos are produced at the LHC. If the number of neutrino-like objects is larger than the predictions, that will mean something new is being produced.
  3. To confirm that dark matter is made from LHC’s new undetectable particles will require many steps and possibly many decades. Detailed study of LHC data can allow properties of the new particles to be inferred. Then, if other types of experiments (e.g. LUX or COGENT or Fermi) detect dark matter itself, they can check whether it shares the same properties as LHC’s new particles. Only then can we know if LHC discovered dark matter.
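The "indirect inference" in answer 1 is, at bottom, momentum conservation in the plane transverse to the beams: the visible particles' transverse momenta must sum to zero, so any imbalance is attributed to undetected particles. Here is a minimal sketch of that balancing logic (toy numbers invented for illustration, not real detector data or any experiment's actual software):

```python
import math

def missing_transverse_momentum(particles):
    """Given detected particles as (pt, phi) pairs -- transverse momentum in
    GeV and azimuthal angle -- return the magnitude of the momentum needed to
    balance the event, attributed to invisible particles."""
    px = sum(pt * math.cos(phi) for pt, phi in particles)
    py = sum(pt * math.sin(phi) for pt, phi in particles)
    # The missing transverse momentum is minus the visible vector sum.
    return math.hypot(px, py)

# Toy event: two jets back to back -- everything balances, nothing missing.
balanced = [(100.0, 0.0), (100.0, math.pi)]
# Toy event: one 100 GeV jet recoiling against something undetected.
unbalanced = [(100.0, 0.0)]

print(round(missing_transverse_momentum(balanced), 6))    # 0.0
print(round(missing_transverse_momentum(unbalanced), 6))  # 100.0
```

Real analyses use calibrated energy deposits rather than idealized particle lists, but the balancing idea is the same.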

I realize these brief answers are cryptic at best, so if you want to learn more, please check out my new article.

Filed under: Dark Matter, LHC Background Info, LHC News, Particle Physics Tagged: atlas, cms, DarkMatter, LHC

by Matt Strassler at April 13, 2015 12:33 PM

John Baez - Azimuth

Resource Convertibility (Part 3)

guest post by Tobias Fritz

In Part 1 and Part 2, we learnt about ordered commutative monoids and how they formalize theories of resource convertibility and combinability. In this post, I would like to say a bit about the applications that have been explored so far. First, the study of resource theories has become a popular subject in quantum information theory, and many of the ideas in my paper actually originate there. I’ll list some references at the end. So I hope that the toolbox of ordered commutative monoids will turn out to be useful for this. But here I would like to talk about an example application that is much easier to understand, but no less difficult to analyze: graph theory and the resource theory of zero-error communication.

A graph consists of a bunch of nodes connected by a bunch of edges, for example like this:

This particular graph is the pentagon graph or 5-cycle. To give it some resource-theoretic interpretation, think of it as the distinguishability graph of a communication channel, where the nodes are the symbols that can be sent across the channel, and two symbols share an edge if and only if they can be unambiguously decoded. For example, the pentagon graph roughly corresponds to the distinguishability graph of my handwriting, when restricted to five letters only:

So my ‘w’ is distinguishable from my ‘u’, but it may be confused for my ‘m’. In order to communicate unambiguously, it looks like I should restrict myself to using only two of those letters in writing, since any third letter may be mistaken for one of the other two. But alternatively, I could use a block code to create context around each letter which allows for perfect disambiguation. This is what happens in practice: I write in natural language, where an entire word is usually not ambiguous.

One can now also consider graph homomorphisms, which are maps like this:

The numbers on the nodes indicate where each node on the left gets mapped to. Formally, a graph homomorphism is a function taking nodes to nodes such that adjacent nodes get mapped to adjacent nodes. If a homomorphism G\to H exists between graphs G and H, then we also write H\geq G; in terms of communication channels, we can interpret this as saying that H simulates G, since the homomorphism provides a map between the symbols which preserves distinguishability. A ‘code’ for a communication channel is then just a homomorphism from the complete graph, in which every two nodes share an edge, to the graph which describes the channel. With this ordering structure, the collection of all finite graphs forms an ordered set. This ordered set has an intricate structure which is intimately related to some big open problems in graph theory.
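These definitions can be checked by brute force on small graphs. The following sketch (illustrative only; exhaustive search blows up quickly) verifies that K_3 \geq C_5 — the pentagon maps homomorphically into the triangle, since it is 3-colourable — while C_5 \not\geq K_3, since the pentagon is triangle-free:

```python
from itertools import product

def has_homomorphism(G_edges, G_nodes, H_edges, H_nodes):
    """Brute-force search for a graph homomorphism G -> H: a map on nodes
    sending every edge of G to an edge of H."""
    H_adj = {frozenset(e) for e in H_edges}
    for image in product(H_nodes, repeat=len(G_nodes)):
        f = dict(zip(G_nodes, image))
        # Every edge of G must land on an edge of H (singletons never qualify).
        if all(frozenset((f[u], f[v])) in H_adj for u, v in G_edges):
            return True
    return False

C5 = [(i, (i + 1) % 5) for i in range(5)]  # the pentagon
K3 = [(0, 1), (1, 2), (0, 2)]              # the triangle K_3

print(has_homomorphism(C5, list(range(5)), K3, list(range(3))))  # True:  K3 >= C5
print(has_homomorphism(K3, list(range(3)), C5, list(range(5))))  # False: C5 has no triangle
```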

We can also combine two communication channels to form a compound one. Going back to the handwriting example, we can consider the new channel in which the symbols are pairs of letters. Two such pairs are distinguishable if and only if either the first letters of each pair are distinguishable or the second letters are,

(a,b) \sim (a',b') \:\Leftrightarrow\: a\sim a' \:\lor\: b\sim b'

When generalized to arbitrary graphs, this yields the definition of the disjunctive product of graphs. It is not hard to show that this equips the ordered set of graphs with a binary operation compatible with the ordering, so that we obtain an ordered commutative monoid denoted Grph. It mathematically formalizes the resource theory of zero-error communication.

Using the toolbox of ordered commutative monoids combined with some concrete computations on graphs, one can show that Grph is not cancellative: if K_{11} is the complete graph on 11 nodes, then 3C_5\not\geq K_{11}, but there exists a graph G such that

3 C_5 + G \geq K_{11} + G

The graph G turns out to have 136 nodes. This result seems to be new. But if you happen to have seen something like this before, please let me know!

Last time, we also talked about rates of conversion. In Grph, it turns out that some of these correspond to famous graph invariants! For example, the rate of conversion from a graph G to the single-edge graph K_2 is the Shannon capacity \Theta(\overline{G}), where \overline{G} is the complement graph. This is no surprise, since \Theta was originally defined by Shannon with precisely this rate in mind, although he did not use the language of ordered commutative monoids. In any case, the Shannon capacity \Theta(\overline{G}) is a graph invariant notorious for its complexity: it is not known whether there exists an algorithm to compute it! But an application of the Rate Theorem from Part 2 gives us a formula for the Shannon capacity:

\Theta(\overline{G}) = \inf_f f(G)

where f ranges over all graph invariants which are monotone under graph homomorphisms, multiplicative under the disjunctive product, and normalized such that f(K_2) = 2. Unfortunately, this formula still does not produce an algorithm for computing \Theta. But it nonconstructively proves the existence of many new graph invariants f which approximate the Shannon capacity from above.
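For the pentagon this is concrete enough to compute by machine: the disjunctive square of C_5 contains a 5-clique, which gives the classical lower bound \Theta(\overline{C_5}) = \Theta(C_5) \geq \sqrt{5} (in fact an equality, by Lovász's famous result). A self-contained brute-force sketch, fine at this size but hopeless in general:

```python
from itertools import combinations, product

def disjunctive_product(edges1, n1, edges2, n2):
    """Disjunctive product: (a,b) ~ (a',b') iff a ~ a' or b ~ b'."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    nodes = list(product(range(n1), range(n2)))
    edges = [(u, v) for u, v in combinations(nodes, 2)
             if frozenset((u[0], v[0])) in e1 or frozenset((u[1], v[1])) in e2]
    return edges, nodes

def clique_number(edges, nodes):
    """Brute-force clique number (only sensible for tiny graphs)."""
    adj = {frozenset(e) for e in edges}
    best = 1
    for k in range(2, len(nodes) + 1):
        if any(all(frozenset(p) in adj for p in combinations(c, 2))
               for c in combinations(nodes, k)):
            best = k
        else:
            break
    return best

C5 = [(i, (i + 1) % 5) for i in range(5)]
sq_edges, sq_nodes = disjunctive_product(C5, 5, C5, 5)
print(clique_number(sq_edges, sq_nodes))  # 5, hence Theta(C5) >= sqrt(5)
```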

Although my story ends here, I also feel that the whole project has barely started. There are lots of directions to explore! For example, it would be great to fit Shannon’s noisy channel coding theorem into this framework, but this has turned out to be technically challenging. If you happen to be familiar with rate-distortion theory and you want to help out, please get in touch!


Here is a haphazard selection of references on resource theories in quantum information theory and related fields:

• Igor Devetak, Aram Harrow and Andreas Winter, A resource framework for quantum Shannon theory.

• Gilad Gour, Markus P. Müller, Varun Narasimhachar, Robert W. Spekkens and Nicole Yunger Halpern, The resource theory of informational nonequilibrium in thermodynamics.

• Fernando G.S.L. Brandão, Michał Horodecki, Nelly Huei Ying Ng, Jonathan Oppenheim and Stephanie Wehner, The second laws of quantum thermodynamics.

• Iman Marvian and Robert W. Spekkens, The theory of manipulations of pure state asymmetry: basic tools and equivalence classes of states under symmetric operations.

• Elliott H. Lieb and Jakob Yngvason, The physics and mathematics of the second law of thermodynamics.

by John Baez at April 13, 2015 01:00 AM

April 12, 2015

Jaques Distler - Musings

Just Wow!

I was recently invited to appear on a panel at the World Science Festival this May. For scheduling reasons, I had to decline. But since I had been told that the other panelists were to include Carlo Rovelli and Lee Smolin, I had the sense that, even had I been able to make it, the event would be a colossal waste of time.

But then this came across my desk (the original passage from Smolin’s book is here), and I am even more glad that I said, “No.”

by distler at April 12, 2015 08:02 PM

The n-Category Cafe

The Structure of A

I attended a workshop last week down in Bristol organised by James Ladyman and Stuart Presnell, as part of their Homotopy Type Theory project.

Urs was there, showing everyone his magical conjuring trick where the world emerges out of the opposition between \emptyset and \ast in Modern Physics formalized in Modal Homotopy Type Theory.

Jamie Vicary spoke on the Categorified Heisenberg Algebra. (See also John’s page.) After the talk, interesting connections were discussed with dependent linear type theory and tangent (infinity, 1)-toposes. It seems that André Joyal and colleagues are working on the latter. This should link up with Urs’s Quantization via Linear homotopy types at some stage.

As for me, I was speaking on the subject of my chapter for the book that Mike’s Introduction to Synthetic Mathematics and John’s Concepts of Sameness will appear in. It’s on reviving the philosophy of geometry through the (synthetic) approach of cohesion.

In the talk I mentioned the outcome of some further thinking about how to treat the phrase ‘the structure of A’ for a mathematical entity. It occurred to me to combine what I wrote in that discussion we once had on The covariance of coloured balls with the analysis of ‘the’ from The King of France thread. After the event I thought I’d write out a note explaining this point of view, and it can be found here. Thanks to Mike and Urs for suggestions and comments.

The long and the short of it is that there’s no great need for the word ‘structure’ when using homotopy type theory. If anyone has any thoughts, I’d like to hear them.

by david at April 12, 2015 04:08 PM

April 11, 2015

Lubos Motl - string vacua and pheno

New wave of LHC alarmism
Nina Beety is a community organizer.

She has previously written a 170-page-long rant (plus a hilarious song) against "smart meters". But now, while the LHC is waking up again (protons have already circulated at 6,500 GeV; this time the news is for real), she wrote a detailed rant for the far left-wing website,
Harnessing “Black Holes”: The Large Hadron Collider – ultimate Weapon of Mass Destruction.
You may want to read this stuff and laugh – or cry – because this loon may be a role model for what other similar folks – including male community organizers – actually believe. And it ain't pretty.

It's not a cathedral of science, she argues, but Genesis 2.0 because it will destroy all the creation from the Genesis 1.0 event.

The project was built in order to be completely useless. It doesn't produce anything practical – no new tax or subsidy for Centre for Research and Globalization. But to call it "useless" is far too generous. Instead, the LHC is a tool to make rich even richer; to give them more power; to produce new weapons for them; and to give them new assets, like other planets and galaxies, when Earth is destroyed.

One of the LHC sponsors, the U.S., that has contributed 5 percent, is responsible for even more death than the LEP collider, she argues. Alarm bells should be going off everywhere because the Great Satan has contributed 5% of the money.

The popular myth is that scientists are objective but their real goal is to cut the throats of others. Funding usually comes from industry or governments, and is therefore dirty. Scientists act as nuclear weapons who tell you to "publish or perish".

Our planet is a network of ecosystems and the fact is that the LHC is going to destroy them. One of the LHC magnets is 100,000 times stronger than the Earth's magnetic field. What does it do with the Earth and the whole Milky Way when such a strong magnet appears somewhere? It's even stronger than the magnet on your fridge. How does the LHC magnet destroy the Earth's magnetic fields? What happens to the Earth when the LHC magnet is actually turned on?

This LHC press conference turned into a bummer. In the rest of the 15 minutes, Alex Jones and Bill Gates didn't improve it much. ;-) The former discovered that the LHC was built after the extraterrestrials bribed our elites to build a gateway to extra dimensions. Also starring: Morgan Freeman and Rolf Heuer (a hostile reptilian shapeshifter).

And what if the LHC magnet is not warmed up gently? If something goes wrong, how many seconds are needed to detonate the whole LHC to save the Earth? If there are so many collisions per second, isn't it already too late to stop it?

Moreover, the force inside each dipole is equal to the power of a 747 taking off. So the LHC contains 1232 big airplanes that are taking off. Where will Switzerland be flying? We could see what one or two airplanes did with the twin towers. What about the 1232 airplanes from the LHC? The 2008 LHC accident is a template for the destruction of the whole planet. What would have happened to the land above (including two villages) if the pressure wave from the 2008 catastrophe didn't press the "stop button"?

It's a perfect military weapon. Also, it will create black holes. Some people like John Ellis said that those would be friendly black holes. But what about the common sense? In the Universe, black holes eat planets and stars. Why should it be any different for the LHC black holes? Prominent Hawaii high school teacher Walter Wagner has sued the LHC and proved that the risk of Earth's destruction was 50% because there were only two possibilities – destroyed or not destroyed.

What if the scientists create dark matter? It will increase the weight of the Cosmos by a factor of five. And what if they create the extra dimensions? Everything we love will escape to those new corners.

When some "physicists" said that the catastrophe hasn't occurred for 14 billion years, they are missing the whole point: it would be like the creation of children and indigenous people could inform them how it works!

Spouses of LHC members, janitors, energy companies pumping electrons to that facility: all of us have to shut down this devilish project!

The excerpt above is a tiny portion of the text. You may want to read it in its entirety. Almost every sentence sounds like a joke but I would bet that this lady is damn serious. It's insane partly because the completely unrestricted stupidity of the writer is combined with lots of factoids – she has probably accumulated more factoids about some magnets or events at the LHC than what an average high-energy theorist knows!

And self-confident hardcore morons resembling Ms Nina Beety are literally everywhere.

Incidentally, two months ago, an article claimed that Stephen Hawking and Neil Tyson warned CERN against people's playing to be God Shiva and launching the new Big Bang. ;-) The appearance of Shiva is no coincidence. The best minds in the world know that the LHC has been exposed as the stargate of Shiva (see also a documentary).

by Luboš Motl at April 11, 2015 04:26 PM

April 10, 2015

Symmetrybreaking - Fermilab/SLAC

LHC breaks energy record

The Large Hadron Collider has beaten its own world record for accelerating particles.

Around midnight last night, engineers at CERN broke a world record when they accelerated a beam of particles to 6.5 trillion electronvolts—60 percent higher than the previous world record of 4 TeV.

On April 5 engineers sent beams of protons all the way around the improved Large Hadron Collider for the first time in more than two years. On April 9, they ramped up the power of the LHC and maintained the 6.5 TeV beam for more than 30 minutes before dumping the high-energy protons into a thick graphite block.
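As a back-of-the-envelope check (my own arithmetic, not from CERN): the Lorentz factor of a 6.5 TeV proton follows from E = γmc² with a proton rest energy of about 0.938 GeV, putting the beam within roughly one part in 10^8 of the speed of light:

```python
import math

E = 6500.0      # beam energy per proton, in GeV
m = 0.938272    # proton rest energy, in GeV
gamma = E / m                            # Lorentz factor, from E = gamma * m * c^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)   # speed as a fraction of c

print(f"gamma   ~ {gamma:.0f}")       # ~6928
print(f"1 - v/c ~ {1.0 - beta:.1e}")  # ~1.0e-08
```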

“This is an important milestone, but it is really just a stepping stone to generating 13 TeV collisions, which is our real goal,” says Giulia Papotti, a lead engineer at the LHC. “If you want to find something new in particle physics, you have to search where no one’s searched before. That’s the point of going to higher energy.”

Now engineers are focused on preparing the machine to maintain two safe and stable beams—one in each direction—for several consecutive hours.

“During collisions we will run the beam continuously for six to 10 hours, until it is no longer efficient to collide the particles,” Papotti says. “Then we will dump the beam and start the process over again.”

Over the next few weeks, engineers will ensure that all the hardware and software systems work together and will finish defining the machine and beam parameters. Papotti says there might be a few surprises along the way to bringing the world’s largest machine back to life.

“There may be, for example, problems keeping the beam stable, and we also don’t know how many times the magnets will quench—or suddenly lose their superconducting state—while the beam is running,” Papotti says. “It’s experimental work. It’s partly about knowing what we have to do and partly about solving unexpected problems. But everything we do is to make the experiments happy and let them take data safely.”

With the energy record achieved and the first proton-proton collisions on the horizon, physicists are one step closer to exploring a new energy frontier.

“At this new higher energy, the collisions have the potential to create particles that have never been seen before in a laboratory,” says Greg Rakness, a Fermilab applications physicist and the run coordinator for the CMS experiment at the LHC. “The physicists at CMS are incredibly excited about this because discovering new physical phenomena is every physicist's dream. Discovering new physics is what we are here to do.”


LHC restart timeline

February 2015
• LHC filled with liquid helium: The Large Hadron Collider is now cooled to nearly its operational temperature.
• First LHC magnets prepped for restart: A first set of superconducting magnets has passed the test and is ready for the Large Hadron Collider to restart in spring.
• LHC experiments prep for restart: Engineers and technicians have begun to close experiments in preparation for the next run.

March 2015
• LHC restart back on track: The Large Hadron Collider has overcome a technical hurdle and could restart as early as next week.

April 2015
• LHC sees first beams: The Large Hadron Collider has circulated the first protons, ending a two-year shutdown.
• LHC breaks energy record: The Large Hadron Collider accelerated protons to the fastest speed ever attained on Earth.

(Info-graphics by Sandbox Studio, Chicago.)


by Sarah Charley at April 10, 2015 04:26 PM

Georg von Hippel - Life on the lattice

Workshop "Fundamental Parameters from Lattice QCD" at MITP (upcoming deadline)
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

The scientific programme "Fundamental Parameters from Lattice QCD" at the Mainz Institute of Theoretical Physics (MITP) is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

The deadline for registration is Wednesday, 15 April 2015. Please register at this link.

by Georg v. Hippel at April 10, 2015 02:59 PM

CERN Bulletin

Diversity in Action workshop | 5 May - 8:30 a.m. - 12 p.m. | Technoparc Business Centre
Get an insight into diversity, develop greater sensitivity to differences, and acquire new tools to recognise and overcome unconscious biases.

Diversity in Action workshop, 5th edition (in English)
Tuesday, 5 May 2015, 8:30 a.m. – 12 p.m.
Technoparc Business Centre, Saint-Genis-Pouilly

Registration and more information on the workshop:

On 5 May at 2 p.m., Alan Richter will also lead a forum for discussion on the topic of “Respect in the workplace”. More information here.

by CERN Diversity Office at April 10, 2015 02:40 PM

arXiv blog

How a Troll-Spotting Algorithm Learned Its Anti-antisocial Trade

Antisocial behavior online can make people’s lives miserable. So an algorithm that can spot trolls more quickly should be a boon, say the computer scientists who developed it.

April 10, 2015 04:26 AM

John Baez - Azimuth

Resource Convertibility (Part 2)

guest post by Tobias Fritz

In Part 1, I introduced ordered commutative monoids as a mathematical formalization of resources and their convertibility. Today I’m going to say something about what to do with this formalization. Let’s start with a quick recap!

Definition: An ordered commutative monoid is a set A equipped with a binary relation \geq, a binary operation +, and a distinguished element 0 such that the following hold:

• + and 0 equip A with the structure of a commutative monoid;

• \geq equips A with the structure of a partially ordered set;

• addition is monotone: if x\geq y, then also x + z \geq y + z.

Recall also that we think of the x,y\in A as resource objects such that x+y represents the object consisting of x and y together, and x\geq y means that the resource object x can be converted into y.
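As a quick sanity check of my own (not from the paper), the simplest example is the natural numbers under addition with the usual ordering, a toy theory of interchangeable copies of a single resource, and the axioms can be spot-checked mechanically:

```python
import random

# The natural numbers with +, 0 and >= form an ordered commutative monoid;
# spot-check the axioms on random samples.
random.seed(0)
samples = [random.randrange(100) for _ in range(30)]

for x in samples:
    for y in samples:
        # commutative monoid axioms
        assert x + y == y + x
        assert x + 0 == x
        for z in samples[:10]:
            assert (x + y) + z == x + (y + z)
            # monotonicity of addition: x >= y implies x + z >= y + z
            if x >= y:
                assert x + z >= y + z
print("ordered-commutative-monoid axioms hold on all samples")
```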

When confronted with an abstract definition like this, many people ask: so what is it useful for? The answer to this is twofold: first, it provides a language which we can use to guide our thoughts in any application context. Second, the definition itself is just the very start: we can now also prove theorems about ordered commutative monoids, which can be instantiated in any particular application context. So the theory of ordered commutative monoids will provide a useful toolbox for talking about concrete resource theories and studying them. In the remainder of this post, I’d like to say a bit about what this toolbox contains. For more, you’ll have to read the paper!

To start, let’s consider catalysis as one of the resource-theoretic phenomena neatly captured by ordered commutative monoids. Catalysis is the phenomenon that certain conversions become possible only due to the presence of a catalyst, which is an additional resource object which does not get consumed in the process of the conversion. For example, we have

\text{timber + nails}\not\geq \text{table},

\text{timber + nails + saw + hammer} \geq \text{table + saw + hammer}

because making a table from timber and nails requires a saw and a hammer as tools. So in this example, ‘saw + hammer’ is a catalyst for the conversion of ‘timber + nails’ into ‘table’. In mathematical language, catalysis occurs precisely when the ordered commutative monoid is not cancellative, which means that x + z\geq y + z sometimes holds even though x\geq y does not. So, the notion of catalysis perfectly matches up with a very natural and familiar notion from algebra.
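As a toy illustration of catalysis (the multiset encoding and the rule set below are my own, not from the paper), one can model resource objects as multisets of basic resources and check convertibility with and without the catalyst present:

```python
from collections import Counter

# Toy sketch: resource objects are multisets of basic resources; x >= y
# holds if x equals y, or if a single rewrite rule (possibly requiring
# catalysts, which are returned intact) turns x into y.
RULES = [
    # (consumed, catalysts, produced)
    (Counter(timber=1, nails=1), Counter(saw=1, hammer=1), Counter(table=1)),
]

def convertible(x, y):
    """Check x >= y via at most one rule application."""
    if x == y:
        return True
    for consumed, catalysts, produced in RULES:
        needed = consumed + catalysts
        if all(x[k] >= n for k, n in needed.items()):
            if x - consumed + produced == y:
                return True
    return False

timber_nails = Counter(timber=1, nails=1)
tools = Counter(saw=1, hammer=1)
table = Counter(table=1)

print(convertible(timber_nails, table))                  # False: no tools
print(convertible(timber_nails + tools, table + tools))  # True: catalysed
```

The order defined this way is not cancellative, exactly as in the text: adding the tools on both sides enables a conversion that is impossible without them.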

One can continue along these lines and study those ordered commutative monoids which are cancellative. It turns out that every ordered commutative monoid can be made cancellative in a universal way; in the resource-theoretic interpretation, this boils down to replacing the convertibility relation by catalytic convertibility, in which x is declared to be convertible into y as soon as there exists a catalyst which achieves this conversion. Making an ordered commutative monoid cancellative like this is a kind of ‘regularization': it leads to a mathematically more well-behaved structure. As it turns out, there are several additional steps of regularization that can be performed, and all of these are both mathematically natural and have an appealing resource-theoretic interpretation. These regularizations successively take us from the world of ordered commutative monoids to the realm of linear algebra and functional analysis, where powerful theorems are available. For now, let me not go into the details, but only try to summarize one of the consequences of this development. This requires a bit of preparation.

In many situations, it is not just of interest to convert a single copy of some resource object x into a single copy of some y; instead, one may be interested in converting many copies of x into many copies of y all together, and thereby maximizing (or minimizing) the ratio of the resulting number of y's compared to the number of x's that get consumed. This ratio is measured by the maximal rate:

\displaystyle{ R_{\mathrm{max}}(x\to y) = \sup \left\{ \frac{m}{n} \:|\: nx \geq my \right\} }

Here, m and n are natural numbers, and nx stands for the n-fold sum x+\cdots+x, and similarly for my. So this maximal rate quantifies how many y's we can get out of one copy of x, when working in a ‘mass production’ setting. There is also a notion of regularized rate, which has a slightly more complicated definition that I don’t want to spell out here, but is similar in spirit. The toolbox of ordered commutative monoids now provides the following result:
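The supremum can be approximated by brute force for small toy monoids. In the sketch below, the monoid and its single conversion rule (2 X → 1 Y, with surplus resources discardable) are made up for illustration, not taken from the paper:

```python
from fractions import Fraction

# Toy ordered commutative monoid: an element is (units of X, units of Y).
# Hypothetical rule: 2 X -> 1 Y, applied any number of times, and surplus
# may be discarded.  So (a, b) >= (c, d) iff there is some k >= 0 with
# a - 2*k >= c and b + k >= d.
def geq(p, q):
    (a, b), (c, d) = p, q
    k = max(0, d - b)          # minimum number of rule applications needed
    return a - 2 * k >= c

def max_rate(x, y, n_max=50):
    """Brute-force R_max(x -> y) = sup{ m/n : n*x >= m*y } up to n copies.

    Assumes the rate is finite, so the inner loop terminates."""
    def times(n, p):
        return (n * p[0], n * p[1])
    best = Fraction(0)
    for n in range(1, n_max + 1):
        m = 0
        while geq(times(n, x), times(m + 1, y)):
            m += 1
        best = max(best, Fraction(m, n))
    return best

x, y = (1, 0), (0, 1)          # x = one unit of X, y = one unit of Y
print(max_rate(x, y))          # prints 1/2: two X's are needed per Y
```

Here the rate 1/2 is exact: n copies of X yield at most ⌊n/2⌋ copies of Y under the rule.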

Rate Theorem: If x\geq 0 and y\geq 0 in an ordered commutative monoid A which satisfies a mild technical assumption, then the maximal regularized rate from x to y can be computed like this:

\displaystyle{ R_{\mathrm{max}}^{\mathrm{reg}}(x\to y) = \inf_f \frac{f(x)}{f(y)} }

where f ranges over all functionals on A with f(y)\neq 0.

Wait a minute, what’s a ‘functional’? It’s defined to be a map f:A\to\mathbb{R} which is monotone,

x\geq y \:\Rightarrow\: f(x)\geq f(y)

and additive,

f(x+y) = f(x) + f(y)

In economic terms, we can think of a functional as a consistent assignment of prices to all resource objects. If x is at least as useful as y, then the price of x should be at least as high as the price of y; and the price of two objects together should be the sum of their individual prices. So the f in the rate formula above ranges over all ‘markets’ on which resource objects can be ‘traded’ at consistent prices. The term ‘functional’ is supposed to hint at a relation to functional analysis. In fact, the proof of the theorem crucially relies on the Hahn–Banach Theorem.
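The pricing intuition can be checked numerically. Sticking with a made-up single-rule monoid (2 X → 1 Y, with discarding allowed; both the encoding and the prices are mine, for illustration only), pricing X at 1 and Y at 2 gives a monotone additive functional, and since nx ≥ my implies n·f(x) ≥ m·f(y), every achievable rate m/n is bounded by f(x)/f(y):

```python
import itertools

# Hypothetical order: (a, b) >= (c, d) iff some k >= 0 has
# a - 2*k >= c and b + k >= d  (rule: 2 X -> 1 Y, surplus discardable).
def geq(p, q):
    (a, b), (c, d) = p, q
    k = max(0, d - b)
    return a - 2 * k >= c

# Candidate functional: price X at 1 and Y at 2; additivity is automatic
# for any linear price assignment.
def f(p):
    return p[0] + 2 * p[1]

# Spot-check monotonicity: whenever p >= q we must have f(p) >= f(q).
points = list(itertools.product(range(5), repeat=2))
assert all(f(p) >= f(q) for p in points for q in points if geq(p, q))

# The functional then bounds every achievable rate from x = (1,0) to
# y = (0,1) by f(x)/f(y).
print(f((1, 0)) / f((0, 1)))   # prints 0.5
```

For this toy monoid the bound 1/2 is attained, in line with the Rate Theorem: this particular functional is the one achieving the infimum.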

The mild technical assumption mentioned in the Rate Theorem is that the ordered commutative monoid needs to have a generating pair. This turns out to hold in the applications that I have considered so far, and I hope that it will turn out to hold in most others as well. For the full gory details, see the paper.

So this provides some idea of what kinds of gadgets one can find in the toolbox of ordered commutative monoids. Next time, I’ll show some applications to graph theory and zero-error communication and say a bit about where this project might be going next.

by John Baez at April 10, 2015 01:00 AM

April 09, 2015

Quantum Diaries

CUORE-0 Results Tour Kicks Off

The CUORE-0 collaboration just announced a result: a new limit of 2.7×10^24 years (90% C.L.) on the half-life of neutrinoless double beta decay in 130Te. Or, if you combine it with the data from Cuoricino, 4.0×10^24 years. A paper has been posted to the arXiv preprint server and submitted to the journal Physical Review Letters.


Bottom: Energy spectrum of 0νββ decay candidates in CUORE-0 (data points) and the best-fit model from the UEML analysis (solid blue line). The peak at ∼2507 keV is attributed to 60Co; the dotted black line shows the continuum background component of the best-fit model. Top: The normalized residuals of the best-fit model and the binned data. The vertical dot-dashed black line indicates the position of Qββ. From arXiv.

CUORE-0 is an intermediate step between the upcoming full CUORE detector and its prototype, Cuoricino. The limit from Cuoricino was 2.8×10^24 years**, but it was limited by background contamination in the detector, and it took a long time to reach. For CUORE, the collaboration developed new and better methods (described in detail in an upcoming detector paper) for keeping everything clean and uniform, and increased the amount of tellurium by a factor of 19. The results coming out now test and verify all of that except the increased mass: CUORE-0 uses all the same cleaning and assembly procedures as CUORE, but with only the first of the 19 towers of crystals. It took data while the rest of the towers were being built. We stopped taking CUORE-0 data when the sensitivity was slightly better than Cuoricino's, which required only half the exposure time of the Cuoricino run. The resulting background was 6 times lower in the continuum parts of the spectrum, and all the energy resolutions (which were calibrated individually for each crystal each month) were more uniform. So this is a result to be proud of: even before the CUORE detector starts taking data, we have this result to herald its success.

The energy spectra measured in both Cuoricino and CUORE-0, displaying the factor of 6 improvement in the background rates. From the seminar slides of L. Canonica.


The result was announced in the first seminar of a grand tour of talks about the new result. I got to see the announcement at Gran Sasso today; perhaps you, dear reader, can see one of the talks too! (If not, there's video available from today's seminar.) Statistically speaking, of these presentations you're probably closest to the April APS meeting if you're reading this, but any of them would be worth the effort to see. There was also a press release today and coverage in the Yale News and Berkeley Lab news, because of which I'm making this post pretty short.


The Upcoming Talks:

There are also two more papers in preparation, which I’ll post about when they’re submitted. One describes the background model, and the other describes the technical details of the detector. The most comprehensive coverage of this result will be in a handful of PhD theses that are currently being written.

(post has been revised to include links with the arXiv post number: 1504.02454)

**Comparing the two limits to each other is not as straightforward as one might hope, because there were different statistical methods used to obtain them, which will be covered in detail in the papers. The two limits are roughly similar no matter how you look, and still the new result has better (=lower) backgrounds and took less time to achieve. A rigorous, apples-to-apples comparison of the two datasets would require me to quote internal collaboration numbers.

by Laura Gladstone at April 09, 2015 09:41 PM

Sean Carroll - Preposterous Universe

A Personal Narrative

I was very pleased to learn that I’m among this year’s recipients of a Guggenheim Fellowship. The Fellowships are mid-career awards, meant “to further the development of scholars and artists by assisting them to engage in research in any field of knowledge and creation in any of the arts, under the freest possible conditions and irrespective of race, color, or creed.” This year 173 Fellowships were awarded, chosen from 3,100 applications. About half of the winners are in the creative arts, and the majority of those remaining are in the humanities and social sciences, leaving eighteen slots for natural scientists. Only two physicists were chosen, so it’s up to Philip Phillips and me to uphold the honor of our discipline.

The Guggenheim application includes a “Career Narrative” as well as a separate research proposal. I don’t like to share my research proposals around, mostly because I’m a theoretical physicist and what I actually end up doing rarely bears much resemblance to what I had previously planned to do. But I thought I could post my career narrative, if only on the chance that it might be useful to future fellowship applicants (or young students embarking on their own research careers). Be warned that it’s more personal than most things I write on the blog here, not to mention that it’s beastly long. Also, keep in mind that the purpose of the document was to convince people to give me money — as such, it falls pretty heavily on the side of grandiosity and self-justification. Be assured that in real life I remain meek and humble.

Sean M. Carroll: Career Narrative

Reading over applications for graduate school in theoretical physics, one cannot help but be struck by a certain common theme: everyone wants to discover the fundamental laws of nature, quantize gravity, and find a unified theory of everything. That was certainly what interested me, ever since I first became enamored with physics when I was about ten years old. It’s an ambitious goal, worthy of pursuing, and I’ve been fortunate enough to contribute to the quest in my own small way over the course of my research career, especially in gravitational physics and cosmology.

But when a goal is this far-reaching, it’s important to keep in mind different routes to the ultimate end. In recent years I have become increasingly convinced that there is important progress to be made by focusing on emergence: how the deepest levels of reality are connected to the many higher levels of behavior we observe. How do spacetime and classical reality arise from an underlying quantum description? What is complexity, and how does it evolve over time, and how is that evolution driven by the increase of entropy? What do we mean when we talk about “causes” and “purposes” if the underlying laws are perfectly reversible? What role does information play in the structure of reality? All of these questions are thoroughly interdisciplinary in nature, and can be addressed with a wide variety of different techniques. I strongly believe that the time is right for groundbreaking work in this area, and a Guggenheim fellowship would help me develop the relevant expertise and start stimulating new collaborations.

University, Villanova and Harvard: 1984-1993

There is no question I am a physicist. The topics that first sparked my interest in science – the Big Bang, black holes, elementary particles – are the ones that I think about today, and they lie squarely within the purview of physics. So it is somewhat curious that I have no degrees in physics. For a variety of reasons (including questionable guidance), both my undergraduate degree from Villanova and my Ph.D. from Harvard are in astronomy and astrophysics. I would like to say that this was a clever choice based on a desire for interdisciplinary engagement, but it was more of an accident of history (and a seeming insistence on doing things the hard way). Villanova offered me a full-tuition academic scholarship (rare at the time), and I financed my graduate education through fellowships from NASA and the National Science Foundation.

Nevertheless, my education was extremely rewarding. As an undergraduate at a very small but research-oriented department, I got a start in doing real science at an early age, taking photometric data on variable stars and building models based on their light curves [Carroll, Guinan, McCook and Donahue, 1991]. In graduate school I was surrounded by incredible resources in the Cambridge area, and made an effort to take advantage of them. My advisor, George Field, was a well-established theoretical astrophysicist, specializing in magnetohydrodynamics and the interstellar medium. He wasn’t an expert in the area that I wanted to study, the particle physics/cosmology connection, but he was curious about it. So we essentially learned things together, writing papers on alternatives to general relativity, the origin of intergalactic magnetic fields, and inflationary cosmology, including one of the first studies of a non-Lorentz-invariant modification of electromagnetism [Carroll, Field, and Jackiw 1990]. George also encouraged me to work with others, and I collaborated with fellow graduate students on topics in mathematical physics and topological defects, as well as with Edward Farhi and Alan Guth from MIT on closed timelike curves (what people on the street call “time machines”) in general relativity [Carroll, Farhi, and Guth 1992].

Setting a pattern that would continue to be followed down the line, I didn’t limit my studies to physics alone. In particular, my time at Villanova ignited an interest in philosophy that remains strong to this day. I received a B.A. degree in “General Honors” as well as my B.S. in Astronomy and Astrophysics, and also picked up a philosophy minor. At Harvard, I sat in on courses with John Rawls, Robert Nozick, and Barbara Johnson. While science was my first love and remains my primary passion, the philosophical desire to dig deep and ask fundamental questions continues to resonate strongly with me, and I’m convinced that familiarity with modern philosophy of science can be invaluable to physicists trying to tackle questions at the foundations of the discipline.

Postdoctoral, MIT and ITP: 1993-1999

For my first postdoctoral fellowship, in 1993 I moved just a bit down the road, from Harvard to MIT; three years later I would fly across the country to the prestigious Institute for Theoretical Physics at UC Santa Barbara. At both places I continued to do research in a somewhat scattershot fashion, working on a potpourri of topics in gravitation and field theory, usually in collaboration with other physicists my age rather than with the senior professors. I had great fun, writing papers on supergravity (the supersymmetric version of general relativity), topological defects, perturbations of the cosmic microwave background radiation, two-dimensional quantum gravity, interacting dark matter, and tests of the large-scale isotropy of the universe.

Although I was slow to catch on, the academic ground was shifting beneath me. The late 80’s and early 90’s, when I was a graduate student, were a sluggish time in particle physics and cosmology. There were few new experimental results; the string theory revolution, which generated so much excitement in the early 80’s, had not lived up to its initial promise; and astronomers continued to grapple with the difficulties in measuring properties of the universe with any precision. In such an environment, my disjointed research style was enough to get by. But as I was graduating with my Ph.D., things were changing. In 1992, results from the COBE satellite showed us for the first time the tiny temperature variations in the cosmic background radiation, representing primordial density fluctuations that gradually grew into galaxies and large-scale structure. In 1994-95, a series of theoretical breakthroughs launched the second superstring revolution. Suddenly, it was no longer good enough just to be considered smart and do random interesting things. Theoretical cosmologists dived into work on the microwave background, or at least models of inflation that made predictions for it; field theorists and string theorists were concentrating on dualities, D-branes, and the other shiny new toys that the latest revolution had brought them. In 1993 I was a hot property on the postdoctoral job market, with multiple offers from the very best places; by 1996 those offers had largely dried up, and I was very fortunate to be offered a position at a place as good as ITP.

Of course, nobody actually told me this in so many words, and it took me a while to figure it out. It’s a valuable lesson that I still take to heart – it’s not good enough to work on things you think are interesting; you have to make real contributions that others recognize as interesting, as well. I don’t see this as merely a cynical strategy for academic career success. As enjoyable and stimulating as it may be to bounce from topic to topic, the chances of making a true and lasting contribution are larger for people who focus on an area with sufficient intensity to master it in all of its nuance.

What I needed was a topic that I personally found fascinating enough to investigate in real detail, and which the rest of the community recognized as being of central importance. Happily, the universe obligingly provided just the thing. In 1998, two teams of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt and Adam Riess, announced an amazing result: our universe is not only expanding, it’s accelerating. Although in retrospect there were clues that this might have been the case, it took most of the community by complete surprise, and certainly stands as the most important discovery that has happened during my own career. Perlmutter, Schmidt, and Riess shared the Nobel Prize in 2011.

Like many other physicists, my imagination was immediately captured by the question of why the universe is accelerating. Through no planning of my own, I was perfectly placed to dive into the problem. Schmidt and Riess had both been fellow graduate students of mine while I was at Harvard (Brian was my officemate), and I had consulted with Perlmutter’s group early on in their investigations, so I was very familiar with the observations of Type Ia supernovae on which the discovery was based. The most obvious explanation for universal acceleration is that empty space itself carries a fixed energy density, what Einstein had labeled the “cosmological constant”; I happened to be a co-author, with Bill Press and Ed Turner, on a 1992 review article on the subject that had become a standard reference in the field [Carroll, Press, and Turner 1992], and which hundreds of scientists were now hurriedly re-reading. In 1997 Greg Anderson and I had proposed a model in which dark-matter particles would interact with an ambient field, growing in mass as the universe expands [Anderson and Carroll 1997]; this kind of model naturally leads to cosmic acceleration, and was an early idea for what is now known as “dark energy” (as well as for the more intriguing possibility that there may be a variety of interactions within a rich “dark sector”).

With that serendipitous preparation, I was able to throw myself into the questions of dark energy and the acceleration of the universe. After the discovery was announced, models were quickly proposed in which the dark energy was a dynamically-evolving field, rather than a constant energy density. I realized that most such models were subject to severe experimental constraints, because they would lead to new long-range forces and cause particle-physics parameters to slowly vary with time. I wrote a paper [Carroll 1998] pointing out these features, as well as suggesting symmetries that could help avoid them. I also collaborated with the Schmidt/Riess group on a pioneering paper [Garnavich et al. 1998] that placed limits on the rate at which the density of dark energy could change as the universe expands. With this expertise and these papers, I was suddenly a hot property on the job market once again; in 1999 I accepted a junior-faculty position at the University of Chicago.

University of Chicago: 1999-2006

While I was a postdoc, for the most part my intellectual energies were devoted completely to research. As a new faculty member, I had the responsibility and opportunity to expand my reach in a variety of ways. I had always loved teaching, and took to it with gusto, pioneering new courses (undergraduate general relativity, graduate cosmology), and winning a “Spherical Cow” teaching award from the physics graduate students. I developed my lecture notes for a graduate course in general relativity into a textbook, Spacetime and Geometry, which is now used widely in universities around the world. I helped organize a major international conference (Cosmo-02), served on a number of national committees (including the roadmap team for NASA’s Beyond Einstein program), and was a founding member and leader of the theory group at Chicago’s Kavli Institute for Cosmological Physics. I was successful at bringing in money, including fellowships from the Sloan and Packard Foundations. I made connections with professors in other departments, and started to work with Project Exploration, an outreach nonprofit led by Gabrielle Lyon and Chicago paleontologist Paul Sereno. With Classics professor Shadi Bartsch, I taught an undergraduate humanities course on the history of atheism. I became involved in the local theatre community, helping advise companies that were performing plays with scientific themes (Arcadia, Proof, Humble Boy). And in 2004 I took up blogging at my site Preposterous Universe, a fun and stimulating pastime that I continue to this day.

Research, of course, was still central, and I continued to concentrate on the challenge posed by the accelerating universe, especially in a series of papers with Mark Trodden (then at Syracuse, now at U. Penn.) and other collaborators. Among the more speculative ideas that had been proposed was “phantom energy,” a form of dark energy whose density actually increases as the universe expands. In one paper [Carroll, Hoffman, and Trodden 2003] we showed that such theories tended to be catastrophically unstable, and in another [Carroll, De Felice, and Trodden 2004] we showed that more complex models could nevertheless trick observers into concluding that the dark energy was phantom-like.

Our most influential work proposed a simple idea: that there isn’t any dark energy at all, but rather that general relativity breaks down on cosmological scales, where new dynamics can kick in [Carroll, Duvvuri, Trodden, and Turner 2004]. This became an extremely popular scenario within the theoretical cosmology community, launching a great deal of work devoted to investigating these “f(R) theories.” (The name refers to the fact that the dynamical equations are based on an arbitrary function of R, a quantity that measures the curvature of spacetime.) This work included papers by our group looking at long-term cosmological evolution in such models [Carroll et al. 2004], and studying the formation of structure in theories designed to be compatible with observational constraints on modified gravity [Carroll, Sawicki, Silvestri, and Trodden 2006].

Being of restless temperament, I couldn’t confine myself to only thinking about dark energy and modified gravity. I published on a number of topics at the interface of cosmology, field theory, and gravitation: observational constraints on alternative cosmologies, large extra dimensions of spacetime, supersymmetric topological defects, violations of fundamental symmetries, the origin of the matter/antimatter asymmetry, the connection between cosmology and the arrow of time. I found the last of these especially intriguing. To physicists, all of the manifold ways in which the past is different from the future (we age toward the future, we can remember the past, we can make choices toward the future) ultimately come back to the celebrated Second Law of Thermodynamics: in closed systems, entropy tends to increase over time. Back in the 19th century, Ludwig Boltzmann and others explained why entropy increases toward the future; what remains as a problem is why the entropy was ever so low in the past. That’s a question for cosmology, and presents a significant challenge to current models of the early universe. With graduate student Jennifer Chen, I proposed a novel scenario in which the Big Bang is not the beginning of the universe, but simply one event among many; in the larger multiverse, entropy increases without bound both toward the distant future and also in the very distant past [Carroll and Chen 2004, 2005]. Our picture was speculative, to say the least, but it serves as a paradigmatic example of attempts to find a purely dynamical basis for the Second Law, and continues to attract attention from both physicists and philosophers.

In May, 2005, I was informed that I had been denied tenure. This came as a complete shock, in part because I had been given no warning that any trouble was brewing. I will never know precisely what was said at the relevant faculty meetings, and the explanations I received from different colleagues were notable mostly for the lack of any consistent narrative. But one thing that came through clearly was that my interest in doing things other than research had counted substantially against me. I was told that I came across as “more interested in writing textbooks,” and that perhaps I would be happier at a university that placed a “greater emphasis on pedagogy.”

An experience like that cannot help but inspire some self-examination, and I thought hard about what my next steps should be. I recognized that, if I wanted to continue in academia, my best chance of being considered successful would be to focus my energies as intently as possible in a single area of research, and cut down non-research activities to a minimum.

After a great deal of contemplation, I decided that such a strategy was exactly what I didn’t want to do. I would remain true to my own intellectual passions, and let the chips fall where they may.

Caltech and Beyond: 2006-

After the Chicago decision I was again very fortunate, when the physics department at Caltech quickly offered me a position as a research faculty member. It was a great opportunity, offering both a topflight research environment and an extraordinary amount of personal freedom. I took the job with two goals in mind: to expand my outreach and non-academic efforts even further, and to do innovative interdisciplinary research that would represent a true and lasting contribution.

To be brutally honest, since I arrived here in 2006 I have been much more successful at the former than at the latter (although I feel this is beginning to change). I’ve written two popular-level books: From Eternity to Here, on cosmology and the arrow of time, and The Particle at the End of the Universe, on the search for the Higgs boson at the Large Hadron Collider. Both were well-received, with Particle winning the Winton Prize from the Royal Society, the world’s most prestigious award for popular science books. I have produced two lecture courses for The Teaching Company, given countless public talks, and appeared on numerous TV programs, up to and including The Colbert Report. Living in Los Angeles, I’ve had the pleasure of serving as a science consultant on various films and TV shows, working with people such as Ron Howard, Kenneth Branagh, and Ridley Scott. My talk from TEDxCaltech, “Distant time and the hint of a multiverse,” recently passed a million total views. I helped organize a major interdisciplinary conference on the nature of time, as well as a much smaller workshop on philosophical naturalism that attracted some of the best people in the field (such as Steven Weinberg, Daniel Dennett, and Richard Dawkins). I was elected a Fellow of the American Physical Society and won the Gemant Award from the American Institute of Physics.

More substantively, I’ve developed my longstanding interest in philosophy in productive directions. Some of the physics questions that I find most interesting, such as the arrow of time or the measurement problem in quantum mechanics, are ones where philosophers have made a significant impact, and I have begun interacting and collaborating with several of the best in the business. In recent years the subject called “philosophy of cosmology” has become a new and exciting field, and I’ve had the pleasure of being at the center of many activities in the area; a conference next month has set aside a discussion session to examine the implications of the approach to the arrow of time that Jennifer Chen and I put forward a decade ago. My first major work in philosophy of science, a paper with graduate student Charles Sebens on how to derive the Born Rule in the many-worlds approach to quantum mechanics, was recently accepted into one of the leading journals in the field [Sebens and Carroll 2014]. I’ve also published invited articles on the implications of modern cosmology for religion, and participated in a number of popular debates on naturalism vs. theism.

At the same time, my research efforts have been productive but somewhat meandering. As usual, I have worked on a variety of interesting topics, including the use of effective field theory to understand the growth of large-scale structure, the dynamics of Lorentz-violating “aether” fields, how new forces can interact with dark matter, black hole entropy, novel approaches to dark-matter abundance, cosmological implications of a decaying Higgs field, and the role of rare fluctuations in the long-term evolution of the universe. Some of my work over these years includes papers of which I am quite proud; these include investigations of dynamical compactification of dimensions of space [Carroll, Johnson, and Randall 2009], possible preferred directions in the universe [Ackerman, Carroll, and Wise 2007; Erickcek, Kamionkowski, and Carroll 2008a, b], the prospect of a force similar to electromagnetism interacting with dark matter [Ackerman et al. 2008], and quantitative investigations of fine-tuning of cosmological evolution [Carroll and Tam 2010; Remmen and Carroll 2013, 2014; Carroll 2014]. Almost none of this work has been on my previous specialty, dark energy and the accelerating universe. After having put a great amount of effort into thinking about this (undoubtedly important) problem, I have become pessimistic about the prospect for an imminent theoretical breakthrough, at least until we have a better understanding of the basic principles of quantum gravity. This helps explain the disjointed nature of my research over the past few years, but has also driven home to me the need to find a new direction and tackle it with determination.

Very recently I’ve found such a focus, and in some sense I have finally started to do the research I was born to do. It has resulted from a confluence of my interests in cosmology, quantum mechanics, and philosophy, along with a curiosity about complexity theory that I have long nurtured but never really acted upon. This is the turn toward “emergence” that I mentioned at the beginning of this narrative, and elaborate on in my research plan. I go into greater detail there, but the basic point is that we need to construct a more reliable framework in which to connect the very foundations of physics – quantum mechanics, field theory, spacetime – to a multitude of higher-level phenomena, from statistical mechanics to organized structures. A substantial amount of work has already been put into such issues, but a number of very basic questions remain unanswered.

This represents an evolution of my research focus rather than a sudden break with my earlier work; many topics in cosmology and quantum gravity are intimately tied to issues of emergence, and I’ve already begun investigating some of these questions in different ways. One prominent theme is the emergence of the classical world out of an underlying quantum description. My papers with Sebens on the many-worlds approach are complementary to a recent paper I wrote with two graduate students on the nature of quantum fluctuations [Boddy, Carroll, and Pollack 2014]. There, we argued that configurations don’t actually “fluctuate into existence” in stationary quantum states, since there is no process of decoherence; this has important implications for cosmology in both the early and the late universe. In another paper [Aaronson, Carroll, and Ouellette 2014], my collaborators and I investigated the relationship between entropy (which always increases in closed systems) and complexity (which first increases, then decreases as the system approaches equilibrium). Since the very notion of complexity does not have a universally agreed-upon definition, any progress we can make in understanding its basic features is potentially very important.

I am optimistic that this new research direction will continue to expand and flourish, and that there is a substantial possibility of making important breakthroughs in the field. (My papers on the Born Rule and quantum fluctuations have already attracted considerable attention from influential physicists and philosophers – they don’t always agree with our unconventional conclusions, but I choose to believe that it’s just a matter of time.) I am diving into these new waters headfirst, including taking online courses (complexity theory from Santa Fe, programming and computer science from MIT) that will help me add skills that weren’t part of my education as a cosmologist. A Guggenheim Fellowship will be invaluable in aiding me in this effort.

My ten-year-old self was right: there is nothing more exciting than trying to figure out how nature works at a deep level. Having hit upon a promising new way of doing it, I can’t wait to see where it goes.

by Sean Carroll at April 09, 2015 08:41 PM


