Particle Physics Planet


September 23, 2014

Clifford V. Johnson - Asymptotia

STEM Keynote
As I mentioned, a couple of Saturdays ago I gave the keynote address at a one-day conference designed to introduce STEM careers to underrepresented students from various neighboring schools. The event* was co-sponsored by the Level Playing Field Institute, though sadly the details seem to have vanished from their site now that the event has passed. It was good to see a room full of enthusiastic students wanting to learn more about such careers (STEM = Science, Technology, Engineering and Mathematics), and I tried to offer some thoughts on why there is such poor representation of people of color in these fields (the group I was asked to focus on, although I mentioned that many of my remarks also extend, to some extent, to women), and what can be done about it. Much of my focus, as you can guess from the issues I bring up here from time to time, was on battling the Culture: the perception people have of who "belongs" and who does not, how that perception makes people act, consciously or otherwise, the images we as a society present and perpetuate in our media and in our conversations and conventions throughout everyday life, and so on. I used my own experience as an example at various points, which may or may not have been helpful - I don't know. My experience, in part and in brief, is this: I went a long way into being excited [...] Click to continue reading this post

by Clifford at September 23, 2014 01:33 AM

September 22, 2014

Christian P. Robert - xi'an's og

BAYSM’14 recollection

Poster for the meeting, found everywhere on the WU campus, the Wien business and economics university

When I got invited to BAYSM’14 last December, I was quite excited to be part of the event. (And to have the opportunity to be in Austria, in Wien, and on the new WU campus!) And most definitely, a posteriori, I have not been disappointed, given the high expectations I had for that meeting…! The organisation was seamless, even by Austrian [high] standards, the program diverse and innovative, if somewhat brutal for older Bayesians, and the organising committee (Angela Bitto, Gregor Kastner, and Alexandra Posekany) deserves an ISBA recognition award [yet to be created!] for their hard work and dedication. Thanks also to Sylvia Frühwirth-Schnatter for hosting the meeting in her university. They set the standard very high for the next BAYSM organising team. (To be held in Firenze/Florence, on June 19-21, 2016, just prior to the ISBA World meeting not taking place in Banff. A great idea to associate with a major meeting, in order to save on travel costs. Maybe the following BAYSM will take place in Edinburgh! Young, local, and interested Bayesians just have to contact the board of BAYS with proposals.)

So, very exciting and diverse. A lot of talks in applied domains, esp. economics and finance, in connection with the themes of the guest institution, WU. Among the talks most related to my areas of interest, I was pleased to see Matthew Simpson working on interweaving MCMC with Vivek Roy and Jarad Niemi, Madhura Killedar constructing her own kind of experimental ABC on galaxy clusters, Kathrin Plankensteiner using Gaussian processes on accelerated test data, Julyan Arbel explaining modelling by completely random measures for hazard mixtures [and showing his filiation with me by (a) adapting my pun title to his talk, (b) adding an unrelated mountain picture to the title page, (c) including a picture of a famous probabilist, Paul Lévy, in his introduction of Lévy processes and (d) using xkcd strips], Ewan Cameron considering future ABC for malaria modelling, Konstantinos Perrakis working on generic importance functions in data augmentation settings, Markus Hainy presenting his likelihood-free design (which I commented on a while ago), and Kees Mulder explaining how to work with the circular von Mises distribution. Not to mention the numerous posters I enjoyed over the first evening. And my student Clara Grazian, who talked about our joint and current work on Jeffreys priors for mixtures of distributions and whose talk led me to think of several extensions…

Besides my trek through past and current works of mine dealing with mixtures, the plenary sessions for mature Bayesians were given by Mike West and Chris Holmes, who gave very different talks but with the similar message that data was catching up with modelling with a vengeance and that we [or rather young Bayesians] needed to deal with this difficulty. And use approximate or proxy models. Somewhat in connection with my last part on an alternative to Bayes factors, Mike also mentioned a modification of the factor in order to attenuate the absorbing impact of long time series. And Chris re-set Bayesian analysis within decision theory, constructing approximate models by incorporating the loss function as a substitute to the likelihood.

Once again, a terrific meeting in a fantastic place with a highly unusual warm spell. Plus enough time to run around Vienna and its castles and churches. And enjoy local wines (great conference evening at a Heuriger, where we did indeed experience Gemütlichkeit.) And museums. Wunderbar!


Filed under: Books, Kids, pictures, Statistics, Travel, University life, Wines Tagged: ABC, approximate likelihood, architecture, Austria, BAYSM 2014, Donau, econometrics, Heuriger, interweaving, MCMC, Vienna, Wien, WU Wien, young Bayesians

by xi'an at September 22, 2014 10:14 PM

Symmetrybreaking - Fermilab/SLAC

Cosmic dust proves prevalent

Space dust accounts for at least some of the possible signal of cosmic inflation the BICEP2 experiment announced in March. How much remains to be seen.

Space is full of dust, according to a new analysis from the European Space Agency’s Planck experiment.

That includes the area of space studied by the BICEP2 experiment, which in March announced seeing a faint pattern left over from the big bang that could tell us about the first moments after the birth of the universe.

The Planck analysis, which started before March, was not meant as a direct check of the BICEP2 result. It does, however, reveal that the level of dust in the area BICEP2 scientists studied is both significant and higher than they thought.

“There is still a wide range of possibilities left open,” writes astronomer Jan Tauber, ESA project scientist for Planck, in an email. “It could be that all of the signal is due to dust; but part of the signal could certainly be due to primordial gravitational waves.”

BICEP2 scientists study the cosmic microwave background, a uniform bath of radiation permeating the universe that formed when the universe first cooled enough after the big bang to be transparent to light. BICEP2 scientists found a pattern within the cosmic microwave background, one that would indicate that not long after the big bang, the universe went through a period of exponential expansion called cosmic inflation. The BICEP2 result was announced as the first direct evidence of this process.

The problem is that the same pattern, called B-mode polarization, also appears in space dust. The BICEP2 team subtracted the then known influence of the dust from their result. But based on today’s Planck result, they didn’t manage to scrub all of it.

How much the dust influenced the BICEP2 result remains to be seen.

In November, Planck scientists will release their own analysis of B-mode polarization in the cosmic microwave background, in addition to a joint analysis with BICEP2 specifically intended to check the BICEP2 result. These results could answer the question of whether BICEP2 really saw evidence of cosmic inflation.

“While we can say the dust level is significant,” writes BICEP2 co-leader Jamie Bock of Caltech and NASA’s Jet Propulsion Laboratory, “we really need to wait for the joint BICEP2-Planck paper that is coming out in the fall to get the full answer.”

 


by Kathryn Jepsen at September 22, 2014 06:03 PM

Quantum Diaries

Breakthrough: nanotube cathode creates more electron beam than large laser system

This article appeared in Fermilab Today on Sept. 22, 2014.

Harsha Panuganti of Northern Illinois University works on the laser system (turned off here) normally used to create electron beams from a photocathode. Photo: Reidar Hahn

Lasers are cool, except when they’re clunky, expensive and delicate.

So a collaboration led by RadiaBeam Technologies, a California-based technology firm actively involved in accelerator R&D, is designing an electron beam source that doesn’t need a laser. The team led by Luigi Faillace, a scientist at RadiaBeam, is testing a carbon nanotube cathode — about the size of a nickel — at Fermilab’s High-Brightness Electron Source Lab (HBESL); it completely eliminates the need for the room-sized laser system currently used to generate the electron beam.

Fermilab, one of relatively few labs that can support this project, was sought out to test the experimental cathode because of its capability and expertise in handling intense electron beams.

Tests have shown that the vastly smaller cathode does a better job than the laser. Philippe Piot, a staff scientist in the Fermilab Accelerator Division and a joint appointee at Northern Illinois University, says tests have produced beam currents a thousand to a million times greater than the one generated with a laser. This remarkable result means that electron beam equipment used in industry may become not only less expensive and more compact, but also more efficient. A laser like the one in HBESL runs close to half a million dollars, Piot said, about a hundred times more than RadiaBeam’s cathode.

The technology has extensive applications in medical equipment and national security, as an electron beam is a critical component in generating X-rays. And while carbon nanotube cathodes have been studied extensively in academia, Fermilab is the first facility to test the technology within a full-scale setting.

“People have talked about it for years,” said Piot, “but what was missing was a partnership between people that have the know-how at a lab, a university and a company.”

The dark carbon-nanotube-coated area of this field emission cathode is made of millions of nanotubes that function like little lightning rods. At Fermilab’s High-Brightness Electron Source Lab, scientists have tested this cathode in the front end of an accelerator, where a strong electric field siphons electrons off the nanotubes to create an intense electron beam. Photo: Reidar Hahn

Piot and Fermilab scientist Charles Thangaraj are partnering with RadiaBeam Technologies staff Luigi Faillace and Josiah Hartzell and Northern Illinois University student Harsha Panuganti and researcher Daniel Mihalcea. A U.S. Department of Energy Small Business Innovation Research grant, a federal endowment designed to bridge the R&D gap between basic research and commercial products, funds the project. The work represents the kind of research that will be enabled in the future at the Illinois Accelerator Research Center — a facility that brings together Fermilab expertise and industry.

The new cathode appears at first glance like a smooth black button, but at the nanoscale it resembles, in Piot’s words, “millions of lightning rods.”

“When you apply an electric field, the field lines organize themselves around the rods’ extremities and enhance the field,” Piot said. The electric field at the peaks is so intense that it pulls streams of electrons off the cathode, creating the beam.

Traditionally, lasers strike cathodes in order to eject electrons through photoemission. Those electrons form a beam by piggybacking onto a radio-frequency wave, synchronized to the laser pulses and formed in a resonance cavity. Powerful magnets focus the beam. The tested nanotube cathode requires no laser as it needs only the electric field already generated by the accelerator to siphon the electrons off, a process dubbed field emission.

The intense electric field, though, has been a tremendous liability. Piot said critics thought the cathode “was just going to explode and ruin the electron source, and we would be crying because it would be dead.”

One of the first discoveries Piot’s team made when they began testing in May was that the cathode did not, in fact, explode and ruin everything. The exceptional strength of carbon nanotubes makes the project feasible.

Still, Piot continues to study ways to optimize the design of the cathode to prevent any smaller, adverse effects that may take place within the beam assembly. Future research also may focus on redesigning an accelerator that natively incorporates the carbon nanotube cathode to avoid any compatibility issues.

Troy Rummler

by Fermilab at September 22, 2014 02:50 PM

Emily Lakdawalla - The Planetary Society Blog

SHARAD: Delving Deep at Mars
Some of Mars' most important secrets are hiding beneath the surface.

September 22, 2014 02:41 PM

Emily Lakdawalla - The Planetary Society Blog

Mars Orbiter Mission test firing successful; all ready for orbit insertion
There was celebration in the Mars mission control room in Bangalore on Monday following the success of the crucial four-second test firing of the Mars Orbiter Mission’s (MOM) 440-Newton liquid apogee motor. MOM will now go ahead with the nominal plan for the Mars orbit insertion on September 24 at 07:30 IST (02:00 UT / September 23 19:00 PDT).

September 22, 2014 02:00 PM

Lubos Motl - string vacua and pheno

A simple explanation behind AMS' electron+positron flux power law?
Aside from tweets about the latest, not so interesting, and inconclusive Planck paper on the dust and polarized CMB, Francis Emulenews Villatoro tweeted the following suggestive graphs to his 7,000+ Twitter followers:



The newest data from the Alpha Magnetic Spectrometer are fully compatible with the positron flux curve resulting from an annihilating lighter than \(1\TeV\) dark matter particle. But the steep drop itself hasn't been seen yet (the AMS' dark matter discovery is one seminar away but it may always be so in the future LOL) and the power-law description seems really accurate and attractive.

What if neither dirty pulsars nor dark matter is the cause of these curves? All of those who claim to love simple explanations and who sometimes feel annoyed that physics has gotten too complicated are invited to think about the question.




The fact at this moment seems to be that above the energy \(30\GeV\) and perhaps up to \(430\GeV\) or much higher, the positrons represent \(0.15\) of the total electron+positron flux. Moreover, this flux itself depends on the energy via a simple power law:\[

\Phi(e^- + e^+) = C \cdot E^\gamma

\] where the exponent \(\gamma\) has a pretty well-defined value.




Apparently, things work very well in that rather long interval if the exponent (spectral index) is\[

\gamma= -3.170 \pm 0.008 \pm 0.008

\] The first part of the error unifies the systematic and statistical error; the other one is from the energy scale errors. At any rate, the exponent is literally between \(-3.18\) and \(-3.16\), quite some lunch for numerologists.
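Just to see what such a steep spectral index means numerically, here is a trivial sketch (the normalization \(C\) is arbitrary, so only ratios matter):

```python
# Toy evaluation of the power-law flux Phi(E) = C * E**gamma quoted above.
# The normalization C is arbitrary here; only flux ratios are meaningful.
gamma = -3.170

def flux(E_GeV, C=1.0):
    """Combined electron + positron flux, up to an arbitrary normalization."""
    return C * E_GeV**gamma

print(f"flux(430 GeV) / flux(30 GeV) = {flux(430.0) / flux(30.0):.2e}")  # roughly 2e-4

# Sensitivity to the quoted uncertainty on the spectral index:
for g in (-3.18, -3.17, -3.16):
    print(f"gamma = {g}: drop over 30-430 GeV = {(430.0 / 30.0) ** g:.2e}")
```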

My question for the numerologist and simple ingenious armchair physicist (and others!) reading this blog is: what in the Universe may produce such a power law for the positron and electron flux, with this bizarre negative exponent?

The thermal radiation is no good if the temperature \(kT\) is below those \(30\GeV\): you would get an exponential decrease. You may think about the thermal radiation in some decoupled component of the Universe whose temperature is huge, above \(430\GeV\), but then you will get something like \(\gamma=0\) or a nearby integer instead of the strange, large, and fractional negative constant.

You may continue by thinking about some sources distributed according to this power law, for example microscopic (but I mean much heavier than the Planck mass!) black holes. Such Hawking-radiating black holes might emit as many positrons as electrons so it doesn't look great but ignore this problem – there may be selective conversion to electrons because of some extra dirty effects, or enhanced annihilation of positrons.

If you want the Hawking radiation to have energy between \(30\) and \(430\GeV\), what is the radius and size of the black hole? How many black holes like that do you need to get the right power law? What will be their mass density needed to obtain the observed flux? Is this mass density compatible with the basic data about the energy density that we know?
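As a starting point for the first of these questions, here is a minimal numerical sketch using the textbook Hawking temperature and Schwarzschild radius formulas; it says nothing about the population of black holes or their mass density, only about a single hole radiating at the quoted energies:

```python
import math

# Constants (SI units)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
GeV  = 1.602176634e-10   # J

def mass_for_hawking_temperature(kT_joule):
    """Black hole mass whose Hawking temperature k_B T = hbar c^3 / (8 pi G M) equals kT_joule."""
    return hbar * c**3 / (8 * math.pi * G * kT_joule)

def schwarzschild_radius(mass_kg):
    """r_s = 2 G M / c^2."""
    return 2 * G * mass_kg / c**2

for kT_GeV in (30, 430):
    M = mass_for_hawking_temperature(kT_GeV * GeV)
    print(f"k_B T = {kT_GeV:4d} GeV -> M ~ {M:.2e} kg, r_s ~ {schwarzschild_radius(M):.2e} m")

# Roughly 3.5e8 kg (r_s ~ 5e-19 m) at 30 GeV down to ~2.5e7 kg (r_s ~ 4e-20 m) at 430 GeV:
# far above the Planck mass ~2e-8 kg, as required above, but absurdly compact objects.
```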

Now, if that theory can pass all your tests, you also need the number of smaller i.e. lighter i.e. hotter microscopic black holes (those emitting higher-energy radiation) to be larger. Can you explain why the small black holes should dominate in this way? May the exponent \(-3.17\) appear in this way? Can you get this dominance of smaller black holes in the process of their gradual merger? Or thanks to the reduction of their sizes during evaporation?

I am looking forward to your solutions – with numbers and somewhat solid arguments. You can do it! ;-)

A completely different explanation: the high-energy electrons and positrons could arise from some form of "multiple decoupling events" in the very high-energy sector of the world that isn't in thermal equilibrium with everything else. Can you propose a convincing model about the moments of decoupling and the corresponding temperature that would produce such high-energy particles?

by Luboš Motl (noreply@blogger.com) at September 22, 2014 01:49 PM

Lubos Motl - string vacua and pheno

A pro-BICEP2 paper
Update Sep 22nd: a Planck paper on polarization is out, suggesting dust could explain the BICEP2 signal – or just 1/2 of it – but lacking the resolution to settle anything. A joint Planck-BICEP2 paper should be out in November but it seems predetermined that they only want to impose an upper bound on \(r\) so it won't be too strong or interesting, either.
It’s generally expected that the Planck collaboration should present their new results on the CMB polarization data within days, weeks, or a month. Will they be capable of confirming the BICEP2 discovery – or refuting it with convincing data?

Ten days ago, Planck published a paper on dust modelling:
Planck intermediate results. XXIX. All-sky dust modelling with Planck, IRAS, and WISE observations
I am not able to decide whether this paper has anything to say about the discovery of the primordial gravitational waves. It could be relevant but note that the paper doesn't discuss the polarization of the radiation at all.

Perhaps more interestingly, Wesley Colley and Richard Gott released their preprint
Genus Topology and Cross-Correlation of BICEP2 and Planck 353 GHz B-Modes: Further Evidence Favoring Gravity Wave Detection
that seems to claim that the data are powerful enough to confirm some influence of the dust yet defend the notion that the primordial gravitational waves have to represent a big part of the BICEP2 observation, too.




What did they do? Well, they took some new publicly available maps by Planck – those at the frequency 353 GHz (wavelength 849 microns). Recall that the claimed BICEP2 discovery appeared at the frequency 150 GHz (wavelength 2 millimeters).




They assume, hopefully for good reasons, that the dust's contribution to the data should be pretty much the same for these two frequencies, up to an overall normalization. Planck sees a lot of radiation at 353 GHz – if all of it were due to dust, the amount of dust would be enough to account for the whole BICEP2 signal.

However, if this were the case, the signals in the BICEP2 patch of the sky at these two frequencies would have to be almost perfectly correlated with each other. Instead, Colley and Gott see the correlation coefficient to be\[

15\% \pm 4\%

\] (does someone understand why the \(\rm\LaTeX\) percent sign has a tilde connecting the upper circle with the slash?) which is "significantly" (four-sigma) different from zero but it is still decidedly smaller than 100 percent. The fact that this correlation is much smaller than 100% implies that most of the BICEP2 signal is uncorrelated to what is classified as dust by the Planck maps or, almost equivalently, that most of the observations at 353 GHz in the BICEP2 region are due to noise, not dust.

When they quantify all this logic, they conclude that about one-half of the BICEP2 signal is due to dust and the remaining one-half has to be due to the primordial gravitational waves; that's why their preferred \(r\), the tensor-to-scalar ratio, drops from BICEP2's most ambitious \(r=0.2\) to \(r=0.11\pm 0.04\), a value very nicely compatible with chaotic inflation. The values "one-half" aren't known quite accurately but with the error margins they seem to work with, they still seem to see that the value \(r=0\) – i.e. non-discovery of the primordial gravitational waves – may be excluded at the 2.5-sigma level.
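A back-of-the-envelope check on where that number comes from, assuming only that the tensor-to-scalar ratio scales linearly with the B-mode power attributed to gravitational waves (this is just arithmetic, not a substitute for their genus-topology analysis):

```python
# If a fraction f_dust of the BICEP2 B-mode power is dust, the gravitational-wave
# part of the power, and hence r, shrinks by (1 - f_dust) relative to the original claim.
r_bicep2 = 0.2     # BICEP2's headline value, assuming no dust
f_dust   = 0.5     # roughly what Colley and Gott attribute to dust

r_corrected = r_bicep2 * (1 - f_dust)
print(f"r ~ {r_corrected:.2f}")   # ~0.10, in the ballpark of their 0.11 +/- 0.04
```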



"Engineers with a diploma" vs "The Big Bang Theory"

by Luboš Motl (noreply@blogger.com) at September 22, 2014 01:17 PM

Tommaso Dorigo - Scientificblogging

Sam Ting On AMS Results: Dark Matter Might Be One Seminar Away
Last Friday Samuel Ting, the winner of the 1975 Nobel prize in Physics for the co-discovery of the J/ψ particle, gave a seminar in the packed CERN main auditorium on the latest results from AMS, the Alpha Magnetic Spectrometer installed on the international space station.

read more

by Tommaso Dorigo at September 22, 2014 12:14 PM

The Great Beyond - Nature blog

Ahead of UN summit, chances dwindle to keep warming at bay
smokestacks

Credit: Martin Muránsky/Shutterstock.com

Despite a slowdown in recent years in the rate of global warming, the world remains on a path to substantial and potentially disruptive climate change.

Global carbon dioxide emissions from the burning of fossil fuels and the production of cement reached a record high of 36.1 billion tonnes in 2013, and are now more than 60% above the level of 1990, when the Intergovernmental Panel on Climate Change (IPCC)  released its first report. Compared to 2012, emissions grew by 2.3% last year and are likely to increase by a further 2.5% in 2014.
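Translating those percentages into rough absolute numbers (simple arithmetic based only on the figures quoted above):

```python
# Back-of-the-envelope numbers implied by the Global Carbon Budget figures quoted above.
emissions_2013 = 36.1                           # Gt CO2 from fossil fuels and cement

emissions_2014 = emissions_2013 * 1.025         # +2.5% projected growth
emissions_2012 = emissions_2013 / 1.023         # 2013 was +2.3% over 2012
emissions_1990 = emissions_2013 / 1.60          # "more than 60% above the level of 1990"

print(f"2012: ~{emissions_2012:.1f} Gt, 2014 (projected): ~{emissions_2014:.1f} Gt")
print(f"Implied 1990 level: ~{emissions_1990:.1f} Gt (an upper estimate, since growth is 'more than 60%')")
```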

The new figures were released on 21 September by the Global Carbon Budget, a group that regularly analyses changes in carbon sources and sinks.

CO2 emissions continue to track the high-end scenarios used by the IPCC in its latest report to project the magnitude of global warming. Without sustained mitigation measures — including capturing and storing the carbon produced by power stations — the world is likely to warm by 3.2–5.4 °C above pre-industrial levels by the end of the century.

“It is getting increasingly unlikely that global warming can be kept below 2 °C,” says Glen Peters, a climate scientist at the Center for International Climate and Environmental Research in Oslo. “In any case, the challenge is getting bigger every year and might be unachievable without our betting on negative emissions.”

The dire outlook — detailed in a package of research articles and commentaries in Nature Geoscience and Nature Climate Change — comes on the eve of a climate summit convened by the United Nations on 23 September in New York. At the meeting, world leaders aim to prepare the ground for an international greenhouse-gas reduction agreement to be signed next year.

“Governments say they agree with the 2 °C target but the urgency of action hasn’t really sunk in,” says Corinne Le Quéré, a climate scientist at the University of East Anglia in Norwich, UK, and a co-author of the studies. “We have already used up two-thirds of the fossil fuels we can afford to burn if we want to have a reasonable chance to stay below 2 °C warming. At the rate at which CO2 currently accumulates in the atmosphere, the remaining emissions budget will be exhausted in 30 years.”

When the latest set of IPCC emissions scenarios was developed about ten years ago, many experts expected the ‘carbon intensity’ of the world economy to decrease by 2% to 4.5% per year. But that has not happened: mainly owing to China’s continued reliance on coal as the main energy source for its growing economy, the actual decline in the amount of fossil fuel used to produce a unit of global gross domestic product was merely about 1% per year. Given current projections of global economic growth, emissions are unlikely to peak and reverse any time soon in the absence of more stringent energy policies.

Despite its increased efforts to reduce pollution, China surpassed the United States as the world’s largest emitter of CO2 in 2007 and is now emitting more than the US and the European Union combined. China’s per capita emissions are still not as high as those in the US, but in 2013 they were higher than the EU’s. Together, the three regions account for more than half of worldwide emissions.

 

by Quirin Schiermeier at September 22, 2014 11:52 AM

astrobites - astro-ph reader's digest

Void cosmology

The distribution of matter in the Universe has much to say about its constituents and evolution. The authors of this paper challenge our view of the large-scale structure of the Universe by studying the counterpart of the distribution of matter – the distribution of cosmic voids.

The matter and energy content of the Universe, and its evolution, are encoded in the spatial distribution of galaxies. Primordial fluctuations in the gas that made up the Universe 380,000 years after the Big Bang collapsed under gravity to form the first stars and galaxies. Galaxies clustered, forming a network of walls, filaments, clusters and voids that we know as the large-scale structure. This clustering is usually measured through the correlation function of galaxies, which gives the excess number of galaxy pairs as a function of separation, relative to the expected number if galaxies were distributed at random. Alternatively, one can think of the correlation function as giving a characteristic scale for the separation of galaxies.

In 1979, Alcock and Paczynski devised a test that would give information on the properties of the Universe. They proposed that any spherically symmetric structure in space should have the same dimension on the plane of the sky and along the line of sight. This applies in particular to the characteristic separation of galaxies. This scale should be the same if measured from the angular separation of galaxies in the sky, or from their separation along the line of sight (or redshift). This test is useful for constraining the expansion history of the Universe and the nature of the mysterious dark energy. One drawback of the Alcock and Paczynski test is the fact that galaxies are not at rest. The measurement of their position along the line of sight is actually subject to distortion due to the component of their velocity in that direction. The novelty of this paper is the idea that there is a characteristic scale of separation between cosmic voids to which the Alcock-Paczynski test can be applied and that is not subject to the velocity uncertainty.
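To make the Alcock-Paczynski idea concrete, here is a minimal flat-ΛCDM sketch (an illustration of the general test, not a calculation from the paper): a structure of fixed comoving size subtends an angle set by the comoving angular diameter distance and a redshift interval set by the Hubble rate, so analysing data with the wrong cosmology makes an intrinsically spherical object appear squashed or stretched along the line of sight.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def hubble(z, om, h=0.7):
    """H(z) in km/s/Mpc for a flat LambdaCDM cosmology."""
    return 100.0 * h * np.sqrt(om * (1.0 + z)**3 + (1.0 - om))

def comoving_distance(z, om, h=0.7, n=20000):
    """Comoving distance in Mpc (equal to the comoving angular diameter
    distance in a flat universe), via a simple trapezoidal integration."""
    zz = np.linspace(0.0, z, n + 1)
    f = C_KM_S / hubble(zz, om, h)
    return (z / n) * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def ap_parameter(z, om):
    """F_AP(z) = D_M(z) H(z) / c: ratio of the apparent line-of-sight (redshift)
    extent to the transverse (angular) extent of a sphere of fixed comoving size."""
    return comoving_distance(z, om) * hubble(z, om) / C_KM_S

z = 1.0
true_om, wrong_om = 0.3, 0.4
stretch = ap_parameter(z, wrong_om) / ap_parameter(z, true_om)
print(f"Apparent line-of-sight/transverse distortion at z = {z}: {stretch:.3f}")
# An intrinsically spherical stack of voids, analysed with the wrong Omega_m,
# would look elongated along the line of sight by roughly this factor.
```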


Figure 1. The correlation function of galaxies (left), voids and galaxies (middle) and between voids (right) from numerical simulations of the Universe. The x-axis is the separation on the plane of the sky and the y-axis gives the separation along the line of sight. The correlation between voids shows more symmetry between the two directions because voids are not affected by velocity distortions. Adapted from Figure 1 of Hamaus et al.

The authors use simulations of the Universe to demonstrate this in Figure 1, where they compare the correlation function of galaxies, the cross-correlation of voids and galaxies, and the correlation between voids. The correlation function is shown color-coded and contoured as a function of separation on the sky (x-axis) and along the line of sight (y-axis). The contours look more symmetrical for the correlation between voids (right panel), while the correlations between galaxies or between galaxies and voids are subject to distortions due to the velocities of galaxies. On large scales, galaxies tend to fall towards overdensities, squashing the correlation function along the line of sight. On small scales, the velocities become much larger and harder to predict, smearing the correlation (although this effect is harder to see in Figure 1 because of its resolution).

The authors study the sensitivity of the ellipticity of the contours shown in Figure 1 to changing the properties of the Universe. They find that void correlations can provide a cleaner result for the Alcock and Paczynski test at a redshift of ~1, and only if compared to the distribution of galaxies on large scales. At lower redshifts and on smaller scales, galaxies continue to provide better constraints on the expansion history of the Universe. A combination of both methods should be explored. With the advent of deeper and larger galaxy redshift surveys in the upcoming decade (such as Euclid and WFIRST), voids might claim a role in precision cosmology.

 

by Elisa Chisari at September 22, 2014 11:13 AM

Jester - Resonaances

Dark matter or pulsars? AMS hints it's neither.
Yesterday AMS-02 updated their measurement of cosmic-ray positron and electron fluxes. The newly published data extend to positron energies of 500 GeV, compared to 350 GeV in the previous release. The central value of the positron fraction in the highest energy bin is one third of the error bar lower than the central value of the next-to-highest bin. This allows the collaboration to conclude that the positron fraction has a maximum and starts to decrease at high energies :] The sloppy presentation and unnecessary hype obscure the fact that AMS actually found something non-trivial. Namely, it is interesting that the positron fraction, after a sharp rise between 10 and 200 GeV, seems to plateau at higher energies at a value of around 15%. This sort of behavior, although not expected by popular models of cosmic ray propagation, was actually predicted a few years ago, well before AMS was launched.

Before I get to the point, let's have a brief summary. In 2008 the PAMELA experiment observed a steep rise of the cosmic ray positron fraction between 10 and 100 GeV. Positrons are routinely produced by scattering of high energy cosmic rays (secondary production), but the rise was not predicted by models of cosmic ray propagation. This prompted speculations about another (primary) source of positrons, ranging from pulsars, supernovae, and other astrophysical objects to dark matter annihilation. The dark matter explanation is unlikely for many reasons. On the theoretical side, the large annihilation cross section required is difficult to achieve, and it is difficult to produce a large flux of positrons without producing an excess of antiprotons at the same time. In particular, the MSSM neutralino entertained in the last AMS paper certainly cannot fit the cosmic-ray data for these reasons. When theoretical obstacles are overcome by skillful model building, constraints from gamma ray and radio observations disfavor the relevant parameter space. Even if these constraints are dismissed due to large astrophysical uncertainties, the models poorly fit the shape of the electron and positron spectrum observed by PAMELA, AMS, and FERMI (see the addendum of this paper for a recent discussion). Pulsars, on the other hand, are a plausible but handwaving explanation: we know they are all around and we know they produce electron-positron pairs in the magnetosphere, but we cannot calculate the spectrum from first principles.

But maybe primary positron sources are not needed at all? The old paper by Katz et al. proposes a different approach. Rather than starting with a particular propagation model, it assumes the high-energy positrons observed by PAMELA are secondary, and attempts to deduce from the data the parameters controlling the propagation of cosmic rays. The logic is based on two premises. Firstly, while production of cosmic rays in our galaxy contains many unknowns, the production of different particles is strongly correlated, with the relative ratios depending on nuclear cross sections that are measurable in laboratories. Secondly, different particles propagate in the magnetic field of the galaxy in the same way, depending only on their rigidity (momentum divided by charge). Thus, from an observed flux of one particle, one can predict the production rate of other particles. This approach is quite successful in predicting the cosmic antiproton flux based on the observed boron flux. For positrons, the story is more complicated because of large energy losses (cooling) due to synchrotron and inverse-Compton processes. However, in this case one can do the exercise of computing the positron flux assuming no losses at all. The result corresponds to roughly a 20% positron fraction above 100 GeV. Since in the real world cooling can only suppress the positron flux, the value computed assuming no cooling represents an upper bound on the positron fraction.

Now, at lower energies, the observed positron flux is a factor of a few below the upper bound. This is already intriguing, as hypothetical primary positrons could in principle have an arbitrary flux, orders of magnitude larger or smaller than this upper bound. The rise observed by PAMELA can be interpreted as meaning that the suppression due to cooling decreases as positron energy increases. This is not implausible: the suppression depends on the interplay of the cooling time and mean propagation time of positrons, both of which are unknown functions of energy. Once the cooling time exceeds the propagation time the suppression factor is completely gone. In such a case the positron fraction should saturate the upper limit. This is what seems to be happening at the energies 200-500 GeV probed by AMS, as can be seen in the plot. Already the previous AMS data were consistent with this picture, and the latest update only strengthens it.
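A toy parametrization of this argument (purely illustrative and not the Katz et al. calculation; the energy dependence of the cooling suppression below is an arbitrary assumption) shows how a rising positron fraction naturally flattens once the suppression switches off:

```python
import numpy as np

F_NO_COOLING = 0.20   # rough upper bound on the positron fraction with no cooling losses

def positron_fraction(E_GeV, E_sat=200.0, slope=0.5):
    """Toy model: secondary positron fraction = upper bound times a cooling
    suppression that fades away (reaches 1) above E_sat. Purely illustrative."""
    suppression = np.minimum(1.0, (E_GeV / E_sat)**slope)
    return F_NO_COOLING * suppression

for E in (10, 30, 100, 200, 350, 500):
    print(f"E = {E:4d} GeV -> fraction ~ {positron_fraction(E):.3f}")
# Below ~E_sat the fraction rises; above it the curve saturates at the
# no-cooling bound, qualitatively like the plateau seen by AMS.
```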

So, it may be that the mystery of cosmic ray positrons has a simple down-to-galactic-disc explanation. If further observations show the positron flux climbing above the upper limit or dropping suddenly, then the secondary production hypothesis would be invalidated. But, for the moment, the AMS data seems to be consistent with no primary sources, just assuming that the cooling time of positrons is shorter than predicted by the state-of-the-art propagation models. So, instead of dark matter, AMS might have discovered that models of cosmic-ray propagation need a fix. That's less spectacular, but still worthwhile.

Thanks to Kfir for the plot and explanations. 

by Jester (noreply@blogger.com) at September 22, 2014 09:27 AM

Peter Coles - In the Dark

BICEP2 bites the dust.. or does it?

Well, it’s come about three weeks later than I suggested – you should know that you can never trust anything you read in a blog – but the long-awaited Planck analysis of polarized dust emission from our Galaxy has now hit the arXiv. Here is the abstract, which you can click on to make it larger:

[Image: abstract of the Planck paper on polarized dust emission]

My twitter feed was already alive with reactions to the paper when I woke up at 6am, so I’m already a bit late on the story, but I couldn’t resist a quick comment or two.

The bottom line is of course that the polarized emission from Galactic dust is much larger in the BICEP2 field than had been anticipated in the BICEP2 analysis of their data (now published in Physical Review Letters after being refereed). Indeed, as the abstract states, the actual dust contamination in the BICEP2 field is subject to considerable statistical and systematic uncertainties, but seems to be around the same level as BICEP2’s claimed detection. In other words the Planck analysis shows that the BICEP2 result is completely consistent with what is now known about polarized dust emission. To put it bluntly, the Planck analysis shows that the claim that primordial gravitational waves had been detected was premature, to say the least. I remind you that the original BICEP2 result was spun as a ‘7σ’ detection of a primordial polarization signal associated with gravitational waves. This level of confidence is now known to have been false. I’m going to resist (for the time being) another rant about p-values…

Although it is consistent with being entirely dust, the Planck analysis does not entirely kill off the idea that there might be a primordial contribution to the BICEP2 measurement, which could be of similar amplitude to the dust signal. However, identifying and extracting that signal will require the much more sophisticated joint analysis alluded to in the final sentence of the abstract above. Planck and BICEP2 have differing strengths and weaknesses and a joint analysis will benefit from considerable complementarity. Planck has wider spectral coverage, and has mapped the entire sky; BICEP2 is more sensitive, but works at only one frequency and covers only a relatively small field of view. Between them they may be able to identify an excess source of polarization over and above the foreground, so it is not impossible that a gravitational wave component may be isolated. That will be a tough job, however, and there’s by no means any guarantee that it will work. We will just have to wait and see.

In the mean time let’s see how big an effect this paper has on my poll:

[Take the poll at http://polldaddy.com/poll/8258371]

 

Note also that the abstract states:

We show that even in the faintest dust-emitting regions there are no “clean” windows where primordial CMB B-mode polarization could be measured without subtraction of dust emission.

It is as I always thought. Our Galaxy is a rather grubby place to live. Even the windows are filthy. It’s far too dusty for fussy cosmologists, who need to have everything just so, but probably fine for astrophysicists who generally like mucking about and getting their hands dirty…

This discussion suggests that a confident detection of B-modes from primordial gravitational waves (if there is one to detect) may have to wait for a sensitive all-sky experiment, which would have to be done in space. On the other hand, Planck has identified some regions which appear to be significantly less contaminated than the BICEP2 field (which is outlined in black):

[Image: Planck map of polarized dust emission, with the BICEP2 field outlined in black and less contaminated regions in dark blue]

Could it be possible to direct some of the ongoing ground- or balloon-based CMB polarization experiments towards the cleaner region (the dark blue area in the right-hand panel) just south of the BICEP2 field?

From a theorist’s perspective, I think this result means that all the models of the early Universe that we thought were dead because they couldn’t produce the high level of primordial gravitational waves detected by BICEP2 have now come back to life, and those that came to life to explain the BICEP2 result may soon be read the last rites if the signal turns out to be predominantly dust.

Another important thing that remains to be seen is the extent to which the extraordinary media hype surrounding the announcement back in March will affect the credibility of the BICEP2 team itself and indeed the cosmological community as a whole. On the one hand, there’s nothing wrong with what has happened from a scientific point of view: results get scrutinized, tested, and sometimes refuted. To that extent, all this episode demonstrates is that science works. On the other hand most of this stuff usually goes on behind the scenes as far as the public are concerned. The BICEP2 team decided to announce their results by press conference before they had been subjected to proper peer review. I’m sure they made that decision because they were confident in their results, but it now looks like it may have backfired rather badly. I think the public needs to understand more about how science functions as a process, often very messily, but how much of this mess should be out in the open?

 

UPDATE: Here’s a piece by Jonathan Amos on the BBC Website about the story.


by telescoper at September 22, 2014 09:19 AM

Emily Lakdawalla - The Planetary Society Blog

Last Chance to Fly Your Name to Asteroid Bennu
You have just until September 30, 2014 at 23:59 Pacific time, to submit your name, and to tell your friends and family to submit their names, to fly to asteroid Bennu and back on board NASA’s OSIRIS-REx mission.

September 22, 2014 09:00 AM

Sean Carroll - Preposterous Universe

Planck Speaks: Bad News for Primordial Gravitational Waves?

Ever since we all heard the exciting news that the BICEP2 experiment had detected “B-mode” polarization in the cosmic microwave background — just the kind we would expect to be produced by cosmic inflation at a high energy scale — the scientific community has been waiting on pins and needles for some kind of independent confirmation, so that we could stop adding “if it holds up” every time we waxed enthusiastic about the result. And we all knew that there was just such an independent check looming, from the Planck satellite. The need for some kind of check became especially pressing when some cosmologists made a good case that the BICEP2 signal may very well have been dust in our galaxy, rather than gravitational waves from inflation (Mortonson and Seljak; Flauger, Hill, and Spergel).

Now some initial results from Planck are in … and it doesn’t look good for gravitational waves. (Warning: I am not a CMB experimentalist or data analyst, so take the below with a grain of salt, though I tried to stick close to the paper itself.)

Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes
Planck Collaboration: R. Adam, et al.

The polarized thermal emission from Galactic dust is the main foreground present in measurements of the polarization of the cosmic microwave background (CMB) at frequencies above 100 GHz. We exploit the Planck HFI polarization data from 100 to 353 GHz to measure the dust angular power spectra \(C_\ell^{EE,BB}\) over the range \(40 < \ell < 600\). These will bring new insights into interstellar dust physics and a precise determination of the level of contamination for CMB polarization experiments. We show that statistical properties of the emission can be characterized over large fractions of the sky using \(C_\ell\). For the dust, they are well described by power laws in \(\ell\) with exponents \(\alpha^{EE,BB} = -2.42 \pm 0.02\). The amplitudes of the polarization \(C_\ell\) vary with the average brightness in a way similar to the intensity ones. The dust polarization frequency dependence is consistent with modified blackbody emission with \(\beta_d = 1.59\) and \(T_d = 19.6\) K. We find a systematic ratio between the amplitudes of the Galactic B- and E-modes of 0.5. We show that even in the faintest dust-emitting regions there are no "clean" windows where primordial CMB B-mode polarization could be measured without subtraction of dust emission. Finally, we investigate the level of dust polarization in the BICEP2 experiment field. Extrapolation of the Planck 353 GHz data to 150 GHz gives a dust power \(\ell(\ell+1)C_\ell^{BB}/(2\pi)\) of \(1.32 \times 10^{-2}\,\mu\mathrm{K}^2_\mathrm{CMB}\) over the \(40 < \ell < 120\) range; the statistical uncertainty is \(\pm 0.29\) and there is an additional uncertainty (+0.28, -0.24) from the extrapolation, both in the same units. This is the same magnitude as reported by BICEP2 over this \(\ell\) range, which highlights the need for assessment of the polarized dust signal. The present uncertainties will be reduced through an ongoing, joint analysis of the Planck and BICEP2 data sets.

We can unpack that a bit, but the upshot is pretty simple: Planck has observed the whole sky, including the BICEP2 region, although not in precisely the same wavelengths. With a bit of extrapolation, however, they can use their data to estimate how big a signal should be generated by dust in our galaxy. The result fits very well with what BICEP2 actually measured. It’s not completely definitive — the Planck paper stresses over and over the need to do more analysis, especially in collaboration with the BICEP2 team — but the simplest interpretation is that BICEP2’s B-modes were caused by local contamination, not by early-universe inflation.
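To get a feel for that extrapolation step, here is a rough sketch of the modified-blackbody frequency scaling using the dust spectral index and temperature quoted in the abstract (β_d = 1.59, T_d = 19.6 K); this is my own illustration, and the actual Planck analysis also folds in bandpass integration and the uncertainties on these parameters:

```python
import numpy as np

h, k = 6.62607015e-34, 1.380649e-23   # Planck and Boltzmann constants (SI)
T_CMB = 2.7255                         # CMB temperature [K]

def planck_law(nu, T):
    """Planck spectral radiance B_nu(T), up to constants that cancel in ratios."""
    return nu**3 / np.expm1(h * nu / (k * T))

def dBnu_dT_cmb(nu):
    """dB_nu/dT at T_CMB (up to constants); converts intensity to CMB thermodynamic temperature."""
    x = h * nu / (k * T_CMB)
    return nu**4 * np.exp(x) / np.expm1(x)**2

def dust_scaling(nu_from, nu_to, beta_d=1.59, T_d=19.6):
    """Ratio of dust brightness in CMB temperature units at nu_to vs nu_from,
    for a modified blackbody nu**beta_d * B_nu(T_d)."""
    sed = lambda nu: nu**beta_d * planck_law(nu, T_d) / dBnu_dT_cmb(nu)
    return sed(nu_to) / sed(nu_from)

amp = dust_scaling(353e9, 150e9)
print(f"amplitude scaling 353 -> 150 GHz: {amp:.3f}")
print(f"power-spectrum scaling (amplitude squared): {amp**2:.4f}")
# The dust power spectrum measured at 353 GHz is suppressed by roughly this
# squared factor when extrapolated down to BICEP2's 150 GHz band.
```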

Here’s the Planck sky, color-coded by amount of B-mode polarization generated by dust, with the BICEP2 field indicated at bottom left of the right-hand circle:

[Image: Planck all-sky map of B-mode polarization from dust, with the BICEP2 field indicated]

Every experiment is different, so the Planck team had to do some work to take their measurements and turn them into a prediction for what BICEP2 should have seen. Here is the sobering result, expressed (roughly) as the expected amount of B-mode polarization as a function of angular size, with large angles on the left. (Really, the BB correlation function as a function of multipole moment.)

[Image: Planck's measured dust B-mode power in the BICEP2 field compared with the gravitational-wave prediction at BICEP2's claimed amplitude]

The light-blue rectangles are what Planck actually sees and attributes to dust. The black line is the theoretical prediction for what you would see from gravitational waves with the amplitude claimed by BICEP2. As you see, they match very well. That is: the BICEP2 signal is apparently well-explained by dust.

Of course, just because it could be dust doesn’t mean that it is. As one last check, the Planck team looked at how the amount of signal they saw varied as a function of the frequency of the microwaves they were observing. (BICEP2 was only able to observe at one frequency, 150 GHz.) Here’s the result, compared to a theoretical prediction for what dust should look like:

[Image: frequency dependence of the measured signal compared with the theoretical expectation for dust]

Again, the data seem to be lining right up with what you would expect from dust.

It’s not completely definitive — but it’s pretty powerful. BICEP2 did indeed observe the signal that they said they observed; but the smart money right now is betting that the signal didn’t come from the early universe. There’s still work to be done, and the universe has plenty of capacity for surprising us, but for the moment we can’t claim to have gathered information from quite as early in the history of the universe as we had hoped.

by Sean Carroll at September 22, 2014 01:13 AM

The n-Category Cafe

A Packing Pessimization Problem

In a pessimization problem, you try to minimize the maximum value of some function:

\[ \min_x \max_y f(x,y) \]

For example: which convex body in Euclidean space is the worst for packing? That is, which has the smallest maximum packing density?

(In case you’re wondering, we restrict attention to convex bodies because without this restriction the maximum packing density can be made as close to \(0\) as we want.)

Of course the answer depends on the dimension. According to Martin Gardner, Stanislaw Ulam guessed that in 3 dimensions, the worst is the round ball. This is called Ulam’s packing conjecture.

In 3 dimensions, congruent balls can be packed with a density of

\[ \frac{\pi}{\sqrt{18}} = 0.74048048 \dots \]

and Kepler’s conjecture, now a theorem, says that’s their maximum packing density. So, Ulam’s packing conjecture says we can pack congruent copies of any other convex body in \(\mathbb{R}^3\) with a density above \(\pi/\sqrt{18}\).

Ulam’s packing conjecture is still open. We know that the ball is a local pessimum for ‘centrally symmetric’ convex bodies in 3 dimensions. But that’s just a small first step.

Geometry is often easier in 2 dimensions… but in some ways, the packing pessimization problem for convex bodies in 2 dimensions is even more mysterious than in 3.

You see, in 2 dimensions the disk is not the worst. The disk has maximum packing density of

\[ \frac{\pi}{\sqrt{12}} = 0.9068996 \dots \]

However, the densest packing of regular octagons has density

\[ \frac{4 + 4 \sqrt{2}}{5 + 4 \sqrt{2}} = 0.9061636 \dots \]

This is a tiny bit less! So, the obvious 2-dimensional analogue of Ulam’s packing conjecture is false.

By the way, the densest packing of regular octagons is not the one with square symmetry:

It’s this:

which is closer in spirit to the densest packing of discs:

The regular octagon is not the worst convex shape for packing the plane! Regular heptagons might be even worse. As far as I know, the densest known packing of regular heptagons is this ‘double lattice’ packing:

studied by Greg Kuperberg and his father. They showed this was the densest packing of regular heptagons where they are arranged in two lattices, each consisting of translates of a given heptagon. And this packing has density

\[ \frac{2}{97}\left(-111 + 492 \cos\left(\frac{\pi}{7}\right) - 356 \cos^2 \left(\frac{\pi}{7}\right)\right) = 0.89269 \dots \]

But I don’t know if this is the densest possible packing of regular heptagons.
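For the record, here is a quick numerical check of the three densities quoted so far (just evaluating the formulas above):

```python
from math import pi, sqrt, cos

disk     = pi / sqrt(12)                                   # densest disc packing
octagon  = (4 + 4 * sqrt(2)) / (5 + 4 * sqrt(2))           # densest regular-octagon packing
c7 = cos(pi / 7)
heptagon = (2 / 97) * (-111 + 492 * c7 - 356 * c7**2)      # Kuperbergs' double-lattice packing

for name, d in [("disk", disk),
                ("regular octagon", octagon),
                ("regular heptagon (double lattice)", heptagon)]:
    print(f"{name:35s} {d:.7f}")
# disk ~ 0.9068996, octagon ~ 0.9061636 (a tiny bit less), heptagon ~ 0.89269 (lower still)
```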

Unlike the heptagon, the regular octagon is centrally symmetric: if we put its center at the origin, a point \(x\) is in this region iff \(-x\) is in the region.

The great thing about convex centrally symmetric regions of the plane is that their densest packing is always a lattice packing: you take your region \(R\) and form a packing by using all its translates \(R + L\) where \(L \subseteq \mathbb{R}^2\) is a lattice. This is an old result of László Fejes Tóth and C. A. Rogers. For convex centrally symmetric regions, it reduces the search for the densest packing to a finite-dimensional maximization problem!

I said the regular octagon wasn’t the worst convex shape for densely tiling the plane. Then I said the regular heptagon might be worse, but I didn’t know. So what’s worse?

A certain smoothed octagon is worse:

Since it’s centrally symmetric, we know its densest packing is a lattice packing, so it’s not miraculous that someone was able to work out its density:

\[ \frac{ 8-4\sqrt{2}-\ln{2} }{2\sqrt{2}-1} = 0.902414 \dots \]

and the way it looks:

In fact this shape is believed to be the worst centrally symmetric convex region for densely packing the plane! I don’t really know why. But Thomas Hales, who proved the Kepler conjecture, has an NSF grant based on a proposal where he says he’ll prove this:

In 1934, Reinhardt considered the problem of determining the shape of the centrally symmetric convex disk in the plane whose densest packing has the lowest density. In informal terms, if a contract requires a miser to make payment with a tray of identical gold coins filling the tray as densely as possible, and if the contract stipulates the coins to be convex and centrally symmetric, then what shape of coin should the miser choose in order to part with as little gold as possible? Reinhardt conjectured that the shape of the coin should be a smoothed octagon. The smoothed octagon is constructed by taking a regular octagon and clipping the corners with hyperbolic arcs. The density of the smoothed octagon is approximately 90 per cent. Work by previous researchers on this conjecture has tended to focus on special cases. Research of the PI gives a general analysis of the problem. It introduces a variational problem on the special linear group in two variables that captures the structure of the Reinhardt conjecture. An interesting feature of this problem is that the conjectured solution is not analytic, but only satisfies a Lipschitz condition. A second noteworthy feature of this problem is the presence of a nonlinear optimization problem in a finite number of variables, relating smoothed polygons to the conjecturally optimal smoothed octagon. The PI has previously completed many calculations related to the proof of the Reinhardt conjecture and proposes to complete the proof of the Reinhardt conjecture.

This research will solve a conjecture made in 1934 by Reinhardt about the convex shape in the plane whose optimal packing density is as small as possible. The significance of this proposal is found in its broader context. Here, three important fields of mathematical inquiry are brought to bear on a single problem: discrete geometry, nonsmooth variational analysis, and global nonlinear optimization. Problems concerning packings and density lie at the heart of discrete geometry and are closely connected with problems of the same nature that routinely arise in materials science. Variational problems and more generally control theory have become indispensable tools in many disciplines, ranging from mathematical finance to robotic control. However, research that gives an exact nonsmooth solution is relatively rare, and this feature gives this project special interest among variational problems. This research is also expected to further develop methods that use computers to obtain exact global solutions to nonlinear optimization problems. Applications of nonlinear optimization are abundant throughout science and arise naturally whenever a best choice is sought among a system with finitely many parameters. Methods that use computers to find exact solutions thus have the potential of finding widespread use. Thus, by studying this particular packing problem, mathematical tools may be further developed with promising prospects of broad application throughout the sciences.

I don’t have the know-how to guess what Hales will do. I haven’t even read the proof of that theorem by László Fejes Tóth and C. A. Rogers! It seems like a miracle to me.

But here are some interesting things that it implies.

Let’s say a region is a relatively compact open set. Just for now, let’s say a shape is a nonempty convex centrally symmetric region in the plane, centered at the origin. Let \(Shapes\) be the set of shapes. Let \(Lattices\) be the set of lattices in the plane, where a lattice is a discrete subgroup isomorphic to \(\mathbb{Z}^2\).

We can define a function

<semantics>density:Shapes×Lattices[0,1]<annotation encoding="application/x-tex"> density: Shapes \times Lattices \to [0,1] </annotation></semantics>

as follows. For each shape <semantics>SShapes<annotation encoding="application/x-tex">S \in Shapes</annotation></semantics> and lattice <semantics>LLattices<annotation encoding="application/x-tex">L \in Lattices</annotation></semantics>, if we rescale <semantics>S<annotation encoding="application/x-tex">S</annotation></semantics> by a sufficiently small constant, the resulting shape <semantics>αS<annotation encoding="application/x-tex">\alpha S</annotation></semantics> will have the property that the regions <semantics>αS+<annotation encoding="application/x-tex">\alpha S + \ell</annotation></semantics> are disjoint as <semantics><annotation encoding="application/x-tex">\ell</annotation></semantics> ranges over <semantics>L 2<annotation encoding="application/x-tex">L \subseteq \mathbb{R}^2</annotation></semantics>. So, for small enough <semantics>α<annotation encoding="application/x-tex">\alpha</annotation></semantics>, <semantics>αS+L<annotation encoding="application/x-tex">\alpha S + L</annotation></semantics> will be a way of packing the plane by rescaled copies of <semantics>S<annotation encoding="application/x-tex">S</annotation></semantics>. We can take the supremum of the density of <semantics>αS+L<annotation encoding="application/x-tex">\alpha S + L</annotation></semantics> over such <semantics>α<annotation encoding="application/x-tex">\alpha</annotation></semantics>, and call it

$$ density(S,L) $$

Thanks to the theorem of László Fejes Tóth and C. A. Rogers, the maximum packing density of the shape $S$ is

$$ \sup_{L \in Lattices} density(S,L) $$

Here I’m taking advantage of the obvious fact that the maximum packing density of $S$ equals that of any rescaled version $\alpha S$. And in using the word ‘maximum’, I’m also taking for granted that the supremum is actually attained.
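As a sanity check, here is a small numerical sketch (mine, not from the post, in Python) of this quantity for lattice packings. For a centrally symmetric convex $S$, the translates $\alpha S + \ell$ are pairwise disjoint exactly when every nonzero $\ell \in L$ satisfies $\|\ell\|_S \ge 2\alpha$, where $\|\cdot\|_S$ is the gauge whose unit ball is $S$; so the optimal $\alpha$ is half the shortest lattice vector in that norm, and $density(S,L)$ is the area of $\alpha S$ divided by the determinant of $L$. The function names and the finite search window below are just illustrative assumptions.

import itertools
import numpy as np

def density(gauge, area_S, basis, search=10):
    """gauge(v) = ||v||_S, the norm whose unit ball is S; area_S = area of S;
    basis = 2x2 matrix whose columns generate the lattice L."""
    # shortest nonzero lattice vector measured in the S-norm
    # (a finite window of integer coefficients suffices for a reasonably reduced basis)
    shortest = min(
        gauge(basis @ np.array([m, n]))
        for m, n in itertools.product(range(-search, search + 1), repeat=2)
        if (m, n) != (0, 0)
    )
    alpha = shortest / 2.0                       # largest rescaling keeping translates disjoint
    return alpha**2 * area_S / abs(np.linalg.det(basis))

# Example: the unit disk on the hexagonal lattice gives pi/sqrt(12) ≈ 0.9069,
# the familiar optimal circle-packing density.
disk_gauge = lambda v: np.linalg.norm(v)
hex_basis = np.array([[1.0, 0.5],
                      [0.0, np.sqrt(3) / 2]])
print(density(disk_gauge, np.pi, hex_basis))     # ≈ 0.9069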

Given all this, the pessimization problem for packing centrally symmetric convex regions is all about finding

$$ \inf_{S \in Shapes} \; \sup_{L \in Lattices} density(S,L) $$

But there’s more nice symmetry at work here. Linear transformations of the plane act on shapes, and lattices, and packings… and the concept of density is invariant under linear transformations!

One thing this instantly implies is that the maximum packing density for a centrally symmetric convex region doesn’t change if we apply a linear transformation to that region.

This is quite surprising. You might think that stretching or shearing a region could give a radically new way to pack it as densely as possible. And indeed that’s probably true in general. But for centrally symmetric convex regions, the densest packings are all lattice packings. So if we stretch or shear the region, we can just stretch or shear the lattice packing that works best, and get the lattice packing that works best for the stretched or sheared region. The packing density is unchanged!

We can say this using jargon. The group of linear transformations of the plane is $GL(2, \mathbb{R})$. This acts on $Shapes$ and $Lattices$, and for any $g \in GL(2, \mathbb{R})$ we have

$$ density(g S, g L) = density(S,L) $$

So, the function

$$ density : Shapes \times Lattices \to [0,1] $$

is $GL(2,\mathbb{R})$-invariant. And thus, the maximum packing density is invariant:

$$ \sup_{L \in Lattices} density(S,L) = \sup_{L \in Lattices} density(g S,L) $$

for all $g \in GL(2,\mathbb{R})$.
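Continuing the little sketch above (again, my own illustration, not anything from the post), one can check this invariance numerically: transform the gauge of $S$ and the lattice basis by the same invertible matrix $g$ (the transformed shape $g(S)$ has gauge $v \mapsto \|g^{-1}v\|_S$ and area $|\det g|$ times the area of $S$) and the computed density is unchanged.

import numpy as np
# reuses density, disk_gauge and hex_basis from the previous snippet

g = np.array([[2.0, 0.3],
              [0.1, 0.7]])                          # any invertible matrix
g_inv = np.linalg.inv(g)
ellipse_gauge = lambda v: disk_gauge(g_inv @ v)     # gauge of the transformed shape g(S)
print(density(ellipse_gauge, np.pi * abs(np.linalg.det(g)), g @ hex_basis))
# prints ≈ 0.9069 again, matching density(S, L) for the untransformed disk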

As mentioned before, we also have

$$ density(\alpha S, L) = density(S, L) $$

where $\alpha$ is any nonzero scalar multiple of the identity matrix (and thus a rescaling if $\alpha > 0$). So, we can replace $Shapes$ by the quotient space $Shapes/\mathbb{R}^*$, and work with

$$ density : Shapes/\mathbb{R}^* \times Lattices \to [0,1] $$

$GL(2,\mathbb{R})$ still acts on the first factor here, with scalar multiples of the identity acting trivially, and this map is still $GL(2,\mathbb{R})$-invariant.

I think there should be a topology on $Shapes$ that makes the quotient space $Shapes/\mathbb{R}^*$ compact and makes

$$ density : Shapes \times Lattices \to [0,1] $$

continuous. Something like the Hausdorff metric, maybe. Can anyone help me here?

None of this goes far in solving the packing pessimization problem for convex centrally symmetric regions in the plane. We’ve reduced the number of degrees of freedom, but they’re still infinite.

But still, it’s fun. I like how it’s vaguely reminiscent of the theory of modular functions, which can be seen as $SL(2,\mathbb{R})$-invariant functions of a lattice together with an ellipse centered at the origin.

References

For more on packing pessimization problems, see:


by john (baez@math.ucr.edu) at September 22, 2014 12:17 AM

September 21, 2014

Christian P. Robert - xi'an's og

new kids on the block

La Defense, Dec. 10, 2010

This summer, for the first time, I took three Dauphine undergraduate students into research projects, thinking they had had enough R training (with me!) and several stats classes to undertake such projects. In all cases, the concept was pre-defined and “all they had to do” was to run a massive flow of simulations in R (or whatever language suited them best!) to check whether or not the idea was sound. Unfortunately, for two of the projects, by the end of the summer, we had not made any progress in any of the directions I wanted to explore, despite a fairly regular round of meetings and emails with those students. In one case the student had not even managed to reproduce the (fairly innocuous) method I wanted to improve upon. In the other case, despite programming inputs from me, the outcome was impossible to trust. A mostly failed experiment, which makes me wonder why it went that way. Granted, those students had no earlier training in research, either in exploiting the literature or in pushing experiments towards logical extensions. But I gave them entry points, discussed possible new pathways with them, and kept updating schedules and work-charts. And the students were volunteers with no other incentive than discovering research (I even had two more candidates in the queue). So it may be (based on this sample of 3!) that our local training system is lacking in this respect, somewhat failing to promote critical thinking and innovation by imposing overly long contact hours and by evaluating students only through standard formalised tests. I do wonder, as I regularly see undergraduate internships and seminars advertised [abroad] in the stats journals, or even at conferences.


Filed under: Kids, R, Statistics, University life Tagged: academic research, research internships, training of researchers, undergraduates

by xi'an at September 21, 2014 10:14 PM

Marco Frasca - The Gauge Connection

DICE 2014

ResearchBlogging.org

I have spent this week in Castiglioncello participating in the Conference DICE 2014. This Conference is organized every two years, with the main effort due to Thomas Elze.


Castello Pasquini at Castiglioncello  (DICE 2014)

I participated in the 2006 edition, where I gave a talk about decoherence and the thermodynamic limit (see here and here). This is one of the main conferences where foundational questions can be discussed with the participation of some of the major physicists. This year there were 5 keynote lectures from famous researchers. The opening lecture was given by Tom Kibble, one of the founding fathers of the Higgs mechanism. I met him at the registration desk and had the luck of a handshake and a few words with him. His lecture was a recollection of the epic of the Standard Model. The second notable lecturer was Mario Rasetti. Rasetti is working on the question of big data, that is, the huge amount of information currently exchanged on the web, which is difficult to manage and not only as a matter of quantity. What Rasetti and his group showed is that topological field theory yields striking results when applied to such a case. An application to NMRI of the brain exemplified this in a blatant manner.

The third day featured the lectures by Avshalom Elitzur and Alain Connes, the Fields medallist. Elitzur is widely known for the concept of weak measurement, a key idea of quantum optics. Connes presented his recent introduction of the quanta of geometry, which should make loop quantum gravity researchers happy. You can find the main concepts here. Connes explained how the question of the mass of the Higgs got fixed and said that, since his proposal for the geometry of the Standard Model, he has been able to overcome all the setbacks that appeared along the way; this was just another one. From my side, his approach appears really interesting, as the Brownian motion I introduced in quantum mechanics could be understood through the quanta of volumes that Connes and collaborators uncovered.

Gerard ‘t Hooft talked on Thursday. The question he addressed was about cellular automata and quantum mechanics (see here). For several years ‘t Hooft has been looking for a classical substrate to quantum mechanics, and this was also the point of other speakers at the Conference. Indeed, he has had some clashes with people working on quantum computation as ‘t Hooft, following his views, is somewhat sceptical about it. I intervened on this question based on the theorem of Lieb and Simon, generally overlooked in such discussions, defending ‘t Hooft’s ideas and so generating some fuss (see here and the discussion I have had with Peter Shor and Aram Harrow). Indeed, we finally agreed that some configurations can evade the Lieb and Simon theorem, granting quantum behaviour at the macroscopic level.

My talk at DICE 2014 was given the same day as ‘t Hooft’s (he was there listening). I was able to prove the existence of fractional powers of Brownian motion and presented new results with the derivation of the Dirac equation from a stochastic process.

The Conference was excellent and I really enjoyed it. I have to thank the organizers for the beautiful atmosphere and the really pleasant stay, with a full immersion in wonderful science. All the speakers gave stimulating and enjoyable talks. For my part, I will keep on working on foundational questions and look forward to the next edition.

Marco Frasca (2006). Thermodynamic Limit and Decoherence: Rigorous Results. Journal of Physics: Conference Series 67 (2007) 012026. arXiv: quant-ph/0611024v1

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Quanta of Geometry. arXiv: 1409.2471v3

Gerard ‘t Hooft (2014). The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible? arXiv: 1405.1548v2


Filed under: Conference, Quantum mechanics Tagged: Alain Connes, Avshalom Elitzur, DICE 2014, Foundations of quantum mechanics, Gerard 't Hooft, Mario Rasetti, Tom Kibble

by mfrasca at September 21, 2014 08:40 PM

Emily Lakdawalla - The Planetary Society Blog

MAVEN orbit insertion timeline
Today's the day that MAVEN enters orbit at Mars, bringing the number of Mars orbiters up to four. So far everything looks good. The orbit insertion burn should begin tonight at 18:50 PDT / 01:50 UTC. I'll be on stage with Mat Kaplan and Rich Zurek at Planetary Radio Live, keeping up to date with the latest news from the spacecraft; here is a timeline in PDT, UTC, CEST, and IST to help you follow along.

September 21, 2014 05:55 PM

ZapperZ - Physics and Physicists

Antimatter Explained
Another short and sweet video from MinutePhysics, this time an explanation of what antimatter is.



However, I think the explanation given earlier on the same subject is a bit more in-depth and less "manic" than this one.

Zz.

by ZapperZ (noreply@blogger.com) at September 21, 2014 04:27 PM

Peter Coles - In the Dark

Life, Work and Postgraduate Research

A very busy Freshers’ Week at the University of Sussex is now behind us and lectures proper start tomorrow morning. As far as I was concerned all the Freshers’ events were superimposed on a week that was already filled with other things, some good (of which more anon), and some not so good (of which I will say nothing further).

After welcome receptions at the weekend, Freshers’ Week for me began with an induction lecture with all the new students in the School of Mathematical and Physical Sciences (MPS), or at least as many as could rouse themselves for a 10am start the day after a big welcome party. In the event, the turnout was good. I then gave another little speech at a much less formal event in the Creativity Zone (which is situated in the building occupied by MPS). I then had to dash off to a couple of meetings but when I returned a couple of hours later the party was still going, so I helped myself to a beer and rejoined the socializing.

IMG-20140915-00402

Welcome to the new students in MPS!

And so it was for the rest of the week, dominated by meetings of one sort or another including one in London, until Friday and my last formal induction task in the form of a session for new postgraduate students in MPS. Since this happened at the end of Induction Week there wasn’t much of a practical nature to say to the students that they hadn’t already heard during the School-based induction sessions that preceded it, so I decided to scrap the PowerPoint I had planned to use and just give a general pep talk. Doing so was quite an interesting experience because it reminded me of the time I started my own postgraduate education, here at Sussex.

As a matter of fact it was on the corresponding day in 1985 (Sunday 22nd September) that I moved down to Brighton in advance of starting my DPhil (as Sussex doctorates were called in those days). It’s hard to believe that was 29 years ago. As it turned out, I finished my thesis within three years and stayed on here at Sussex as a postdoctoral research fellow in the Astronomy Centre until 1990, whereupon I left to take up a teaching and research position at what is now Queen Mary, University of London. That was the start of a mini-tour of UK universities that ended up with me returning to Sussex last year as Head of the same school in which I started my research career.

This morning I noticed a story in the Times Higher about the loneliness and sense of isolation often faced by postgraduate research students, which can lead to a crisis of confidence. I can certainly attest to that, for reasons I will try to explain below, so I tried to reassure the students about it in the induction session on Friday.

The point is that a postgraduate research degree is very different from a programme of undergraduate study. For one thing, as a research student you are expected to work on your own a great deal of the time. That’s because nobody else will be doing precisely the same project so, although other students will help you out with some things, you’re not trying to solve the same problems as your peers as is the case with an undergraduate. Your supervisor will help you of course and make suggestions (of varying degrees of helpfulness), but a PhD is still a challenge that you have to meet on your own. I don’t think it is good supervisory practice to look over a research student’s shoulder all the time. It’s part of the purpose of a PhD that the student learns to go it alone. There is a balance of course, but my own supervisor was rather “hands off” and I regard that as the right way to supervise. I’ve always encouraged my own students to do things their own way rather than try to direct them too much.

That loneliness is tough in itself, but there’s also the scary fact that you do not usually know whether your problem has a solution, let alone whether you yourself can find it. There is no answer at the back of the book; if there were you would not be doing research. A good supervisor will suggest a project that he or she thinks is both interesting and feasible, but the expectation is that you will very quickly be in a position where you know more about that topic than your supervisor.

I think almost every research student goes through a phase in which they feel out of their depth. There are times when you get thoroughly stuck and you begin to think you will never crack it. Self-doubt, crisis of confidence, call it what you will, I think everyone who has done a postgraduate degree has experienced it. I certainly did. A year into my PhD I felt I was getting nowhere with the first problem I had been given to solve. All the other research students seemed much cleverer and more confident than me. Had I made a big mistake in thinking I could do this? I started to panic and began to think about what kind of job I should go into if I abandoned the idea of pursuing a career in research.

So why didn’t I quit? There were a number of factors, including the support and encouragement of my supervisor, staff and fellow students in the Astronomy Centre, and the fact that I loved living in Brighton, but above all it was because I knew that I would feel frustrated for the rest of my life if I didn’t see it through. I’m a bit obsessive about things like that. I can never leave a crossword unfinished either.

What happened was that after some discussion with my supervisor I shelved that first troublesome problem and tried another, much easier one. I cracked that fairly quickly and it became my first proper publication. Moreover, thinking about that other problem revealed that there was a way to finesse the difficulty I had failed to overcome in the first project. I returned to the first project and this time saw it through to completion. With my supervisor’s help that became my second paper, published in 1987.

I know it’s wrong to draw inferences about other people from one’s own particular experiences, but I do feel that there are general lessons. One is that if you are going to complete a research degree you have to have a sense of determination that borders on obsession. I was talking to a well-known physicist at a meeting not long ago and he told me that when he interviews prospective physics students he asks them “Can you live without physics?”. If the answer is “yes” then he tells them not to do a PhD. It’s not just a take-it-or-leave-it kind of job, being a scientist. You have to immerse yourself in it and be prepared to put long hours in. When things are going well you will be so excited that you will find it as hard to stop as it is when you’re struggling. I’d imagine it is just the same for other disciplines.

The other, equally important, lesson to be learned is that it is essential to do other things as well. Being “stuck” on a problem is part-and-parcel of mathematics or physics research, but sometimes battering your head against the same thing for days on end just makes it less and less likely you will crack it. The human brain is a wonderful thing, but it can get stuck in a rut. One way to avoid this happening is to have more than one thing to think about.

I’ve lost count of the number of times I’ve been stuck on the last clue in a crossword. What I always do in that situation is put it down and do something else for a bit. It could even be something as trivial as making a cup of tea, just as long as I don’t think about the clue at all while I’m doing it. Nearly always when I come back to it and look at it afresh I can solve it. I have a large stack of prize dictionaries to prove that this works!

It can be difficult to force yourself to pause in this way. I’m sure that I’m not the only physicist who has been unable to sleep for thinking about their research. I do think however that it is essential to learn how to effect your own mental reboot. In the context of my PhD research this involved simply turning to a different research problem, but I think the same purpose can be served in many other ways: taking a break, going for a walk, playing sport, listening to or playing music, reading poetry, doing a crossword, or even just taking time out to socialize with your friends. Time spent sitting at your desk isn’t guaranteed to be productive.

So, for what it’s worth here is my advice to new postgraduate students. Work hard. Enjoy the challenge. Listen to advice from your supervisor, but remember that the PhD is your opportunity to establish your own identity as a researcher. Above all, in the words of the Desiderata:

Beyond a wholesome discipline,
be gentle with yourself.

Never feel guilty about establishing a proper work-life balance. Having more than one dimension to your life will not only improve your well-being but also make you a better researcher.


by telescoper at September 21, 2014 02:06 PM

Lubos Motl - string vacua and pheno

A conversation with Nima Arkani-Hamed
On behalf of the Science Museum in London, science historian Graham Farmelo hosted a conversation with a top particle physicist of his generation, Nima Arkani-Hamed, on November 14th, 2013.



A 55-minute video of excerpts from the event was posted just two months ago. You may speed the video up by a factor of 1.25 or 1.5, if you wish ("options" wheel).

Nima has said lots of interesting and important things about theoretical physics of the 20th century (it's easy to highlight the breakthroughs of the 20th century in 3 minutes: relativity, quanta, and their cooperative applications: as a team, relativity and QM are hugely constraining), the recent past, the present, and the future; the LHC and the Higgs boson, and lots of related things. He also discussed what the fundamental laws can and can't explain (the theories are effective and hierarchical).

We're at a rather special era because we're beginning to ask a new type of questions that are deeper and more structured, Nima said.




Spacetime is doomed, doesn't exist, and has to be replaced. Farmelo wanted to call the psychiatrists at that point but they would conclude that Nima is sane. Arkani-Hamed would also explain why the largest machine and experiment (the LHC) is needed to study the shortest distance scales – it's really due to the Heisenberg uncertainty principle. The vacuum is exciting, and antimatter (Dirac...) may be produced. Some fine-tuning is needed to get a large Universe – and even to protect us from being black holes etc. (After this comment by Nima, I found Farmelo's frequent laughter both distracting and really really stupid for the first time; it was just "really stupid" before that.)

He also clarified that it's not true that we don't know how to combine QM and GR at all. We can calculate the first quantum corrections at long distances etc. Nima would explain why the physicists have mostly believed in a natural explanation of the Higgs' lightness etc. and this belief is starting to conflict with the experiments.

The folks in the audience have also asked some questions. Some of the questions were funny. You could expect that people who get seats in this small room where Arkani-Hamed speaks have a much-higher-than-average interest in and awareness of particle physics. But even after 40 minutes, one could hear questions like "So why don't you tell us, Nima, how you do your experiments?".

LOL – it's catastrophically hopeless but still funny enough. They calmly explained that Nima isn't an experimenter.




Nima would discuss what it meant to add new physics, that most proposed hypotheses may be immediately ruled out and it's a big achievement to construct a theory that isn't immediately dead. New physics shouldn't be just junk – it should better play some role and stabilize some instabilities and solve some hierarchy problems, and so on.

A guy in the audience didn't want to accept Nima's (obviously right) comments that the precise position and momentum of a particle (among other concepts that people used to believe) is meaningless in our Universe etc. An "argument" was that most people would probably disagree with Nima – what an "argument", holy crap. This claim about the meaninglessness must be an artifact of our current ignorance only, the guy would argue, and it will surely become meaningful again as the current stupid physicists are replaced by saner ones in the future. Nima would say that it's very unlikely that we would ever return to the conceptually simpler, classical underpinnings of quantum mechanics.

But even if there exists some more sophisticated miraculous loophole, we will have to radically change the meaning of all the words in the question (much like the meaning of many if not all words used by physicists has undergone lots of gradual as well as abrupt modifications in the past) before we get there, so it makes no sense to use our current language to attack those speculative future developments. Those comments by Nima are very important and often unappreciated by the laymen.

More generally, Nima would also say that the straightforward laymen's picture of the scientific method – prepare another clearcut theory of Nature, test it, rule it out, return back, prepare another one, and so on – is nothing like the actual theoretical physics as experts know it.

by Luboš Motl (noreply@blogger.com) at September 21, 2014 09:49 AM

September 20, 2014

Christian P. Robert - xi'an's og

Le Monde puzzle [#879]

Here is last week’s puzzle posted in Le Monde:

Given an alphabet with 26 symbols, is it possible to create 27 different three-symbol words such that

  1. all symbols within a word are different
  2. all triplets of symbols are different
  3. there is no pair of words with a single common symbol

Since there are

26×25×24/(3×2)=2600

such three-symbol words, it could be feasible to write an R code that builds the 27-uplet, assuming it exists. However, by breaking those words into primary words [that share no common symbols] and secondary words [that share two symbols with one primary word], it seems to me that there can be a maximum of 26 words under those three rules…
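For what it’s worth, here is a small brute-force sketch of the kind of search hinted at above (written in Python rather than R, and purely illustrative): greedily grow a set of three-symbol words, i.e. 3-element subsets of a 26-symbol alphabet, such that any two chosen words share either 0 or 2 symbols, never exactly one. A greedy pass only gives a lower bound on the best achievable set size, of course.

import itertools, random

def compatible(w1, w2):
    return len(w1 & w2) != 1      # rule 3: no pair of words with a single common symbol

def greedy_search(n_symbols=26, trials=20, seed=0):
    rng = random.Random(seed)
    # rules 1 and 2 hold automatically: words are distinct 3-element subsets
    words = [frozenset(t) for t in itertools.combinations(range(n_symbols), 3)]
    best = []
    for _ in range(trials):
        rng.shuffle(words)
        chosen = []
        for w in words:
            if all(compatible(w, c) for c in chosen):
                chosen.append(w)
        best = max(best, chosen, key=len)
    return best

print(len(greedy_search()))       # reaching 27 would settle the puzzle positively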


Filed under: Books, Kids Tagged: combinatorics, Le Monde, mathematical puzzle

by xi'an at September 20, 2014 10:14 PM

Michael Schmitt - Collider Blog

New AMS Results – hints of TeV Dark Matter?

Yesterday the AMS Collaboration released updated results on the positron excess. The press release is available at the CERN press release site. (Unfortunately, the AMS web site is down due to a syntax error – I’m sure this will be fixed very soon.)

The Alpha Magnetic Spectrometer was installed three years ago on the International Space Station. As the name implies, it can measure the charge and momenta of charged particles. It can also identify them thanks to a suite of detectors providing redundant and robust information. The project was designed and developed by Prof. Sam Ting (MIT) and his team. An international team including scientists at CERN coordinates the analysis of data.

AMS installed on the ISS. Photo from bowshooter blog.

There are more electrons than positrons striking the earth’s atmosphere. Scientists can predict the expected rate of positrons relative to the rate of electrons in the absence of any new phenomena. It is well known that the observed positron rate does not agree with this prediction. This plot shows the deviation of the AMS positron fraction from the prediction. Already at an energy of a couple of GeV, the data have taken off.


AMS positron fraction compared to prediction.

The positron fraction unexpectedly increases starting around 8 GeV. At first it increases rapidly, with a slower increase above 10 GeV until 250 GeV or so. AMS reports the turn-over to a decrease to occur at 275 ± 32 GeV though it is difficult to see from the data:


AMS positron fraction. The upper plot shows the slope.


This turnover, or edge, would correspond notionally to a Jacobian peak — i.e., it might indirectly indicate the mass of a decaying particle. The AMS press release mentions dark matter particles with a mass at the TeV scale. It also notes that no sharp structures are observed – the positron fraction may be anomalous but it is smooth with no peaks or shoulders. On the other hand, the observed excess is too high for most models of new physics, so one has to be skeptical of such a claim and think carefully about an astrophysical origin of the “excess” positrons — see the nice discussion in Resonaances.

As an experimenter, it is a pleasure to see this nice event display for a positron with a measured energy of 369 GeV:


AMS event display: a high-energy positron

Finally, AMS reports that there is no preferred direction for the positron excess — the distribution is isotropic at the 3% level.

There is no preprint for this article. It was published two days ago in PRL 113 (2014) 121101.


by Michael Schmitt at September 20, 2014 09:16 PM

Peter Coles - In the Dark

No referenda, please..

One of the most interesting topics under discussion after the announcement of the results of Thursday’s referendum on Scottish independence is whether there will be another one, which, in turn, leads to the question: what is the proper plural of “referendum”?

Regular readers of this blog know that I’m never pedantic about such matters. Well, maybe a little bit, sometimes. Latin was my best subject at O-level, though, so I can’t resist making a comment.

Any dictionary will tell you that “referendum” is obtained from the Latin verb referre, which is itself formed as re- (prefix meaning “back”) + ferre (to carry); thus its literal meaning is “carry back” or, more relevantly, “to refer”. Ferre is actually an irregular verb, so I’ll use simpler examples of regular verbs below.

Latin grammar includes two related concepts derived from a verb, the gerund and the gerundive. The gerund is a verbal noun; such things exist in English in forms that mean the act of something, eg running, eating, loving. In the last case the relevant Latin verb is the first conjugation amare and the gerund is amandum. You can find a similar construction surviving in such English words as “graduand”. Note however that a gerund has no plural form because that would make no sense.

Related to the gerund is the gerundive which, as its name suggests, is an adjectival form related to the gerund, specifically expressing necessity.

In Latin, an adjective takes an ending that depends on the gender of the noun it describes; the gerundive also follows this pattern. In the example given above, the gerundive form is amandus in a masculine case or, if referring to a female entity, amanda, hence the name, which means “deserving or requiring love”, or amandum for a neuter noun. In cases where the noun is plural the forms would be amandi, amandae, and amanda. Endings for other verbs are formed in a similar fashion depending on their conjugation.

From this example you can see that in Latin amandum could mean either “loving” (gerund) or “a thing to be loved” (gerundive). Latin grammar is sufficiently clear, however, that the actual meaning will be clear from the context.

Now, to referendum. It seems clear to me that this is a gerundive and thus means “a thing to be referred” (the thing concerned being of no gender, as is normal in such cases in Latin). So what should be the word for more than one referendum?

Think about it and you’ll realise that referenda would imply “more than one thing to be referred”. The familiar word agenda is formed precisely this way and it means “(a list of things) to be done”. But this is not the meaning we want, ie “more than one example of a thing being referred”.

I would therefore argue that referenda is clearly wrong, in that it means something quite different from that needed to describe more than one of what a referendum is.

So what should we use? This is a situation where there isn’t a precise correspondence between Latin and English grammatical forms so it seems to me that we should just treat referendum as an English noun and give it the corresponding English plural. So “referendums” it is.

Any questions?


by telescoper at September 20, 2014 05:30 PM

September 19, 2014

arXiv blog

The Hidden World of Facebook "Like Farms"

If you pay a “like farm” to generate likes for your Facebook pages, what do you get?

September 19, 2014 10:34 PM

CERN Bulletin

Midsummer mysteries: Criminal masterminds? Not really…

In the summer, when offices are empty and the library is full of new faces, it may seem like a perfect opportunity to steal IT equipment. However, as we know, stealing never pays and thieves always get caught. Just like the person who stole several bikes parked in front of Reception…

 

Image: Katarina Anthony.

 As we have said many times: security affects us all. It would seem that the crafty little devil who stole four computers from the library (three privately owned and one belonging to CERN) in July hadn’t read our article. This individual naïvely thought that it would be possible to commit the thefts, sell his ill-gotten gains on the CERN Market and still get away with it.

But he was wrong, as the CERN security service and the IT security service were able to identify the guilty party within just a few days.  “The computers had been stolen over a period of four days but it was obvious to us that the same person was responsible,” explains Didier Constant, Head of the Security Service. “Thanks to the IT security service, we could see that the stolen computers had been connected to the CERN network after they were taken and that they had been put up for sale on the CERN Market.”

The thief’s strategic error was blatantly obvious in this case. However, even when the intentions are clear, it is not always so easy to find proof, especially if the thief tries to defend himself with explanations and alibis like a professional criminal. “The Geneva police helped us a lot,” says Didier Constant. “The person eventually admitted to three of the four thefts. He had probably sold the fourth computer outside CERN.”

Fortunately, the security service is never on holiday: also in July, another person thought he could come to CERN on the tram, help himself to a bike parked near Reception and use it to get away, repeating this process several times. “In total, over three weeks, this person stole about 10 bikes,” explains Didier Constant. “In this case we were able to identify the guilty party from our security cameras and the police had a criminal record for him.”

So there you have two very interesting stories. In both cases, it was thanks to tickets created on the CERN Portal that these crimes could be dealt with by experts in the services concerned and by the police. If you see unusual behaviour or if you are the victim of theft, don’t hesitate to report it.

September 19, 2014 09:09 PM

The Great Beyond - Nature blog

Ephemeral superheavy atoms coaxed into exotic molecules

Posted on behalf of Katharine Sanderson.

If you were ever to get excited about a chemical reaction, now might be the time.

An international team led by Christoph Düllmann at the Johannes Gutenberg University in Mainz, Germany, has managed to make a chemical compound containing the superheavy element seaborgium (Sg) — which has 106 protons in its nucleus — and six carbon monoxide groups.

The resulting molecule, reported on 18 September in Science, could be the start of a new chemical repertoire for the manmade superheavy elements, which do not exist in nature.

These elements are interesting not only to nuclear physicists — who use them to test how many protons they can pack into one nucleus before mutual electrostatic repulsion makes it explode — but also to chemists. The protons’ electrostatic pull on the electrons orbiting the nucleus is stronger in these elements than it is in lighter ones. This means that the electrons whiz around the nucleus at almost 80% the speed of light, a regime where Einstein’s special theory of relativity — which makes particles more massive the faster they get — begins to have a measurable effect. “It changes the whole electronic structure,” says Düllmann, making it different from those of elements that sit directly above the superheavy elements on the periodic table (see ‘Cracks in the periodic table‘).

Some chemists therefore expect superheavy elements to violate the general rule that elements in the same column should have similar electron structures and thus be chemically similar.

It is a brave chemist who attempts chemical reactions with superheavy elements. These cannot be studied with normal ‘wet chemistry’ methods and ordinary bunsen burners because they are made in very small numbers by smashing lighter atoms together, and tend to be extremely unstable, quickly ‘transmuting’ into other elements via radioactive decay. But it can, and has, been done, and researchers have identified fluorides, chlorides and oxides of these elements.

The difference this time is that the chemical reaction was done in a relatively cool environment, and a different kind of chemical bond was formed. Rather than a simple covalent bond, where the metal and the other element share electrons, Düllmann made a compound with a much more sophisticated sharing of electrons in the bond, called a coordination bond.


Nuclei of the superheavy element seaborgium were created from a beam of neon ions (top right) and slowed down in a gas-filled chamber (RTC), where they reacted with carbon monoxide to produce a new kind of molecule.
Credit: P. Huey/Science

Düllmann’s team used the RIKEN Linear Accelerator (RILAC) in Japan to make seaborgium by firing a beam of neon ions (atomic number 10) at a foil of curium (96). This process yielded nuclei of seaborgium-265 — an isotope with a half-life of less than 20 seconds — at a rate of just one every few hours.

The beam also produced nuclei of molybdenum and tungsten, which are in the same column of the periodic table as seaborgium. The team separated the resulting seaborgium, molybdenum and tungsten from the neon using a magnetic field, and sent them into a gas-filled chamber to cool off and react with carbon monoxide. Molybdenum and tungsten are known to form carbonyls (Mo(CO)₆ and W(CO)₆).

Using a technique called gas chromatography, the team found that the seaborgium formed a compound that was volatile and tended to react with silica, the way its molybdenum- and tungsten-based siblings would. This indirect evidence was enough to convince Düllmann that he had made the first superheavy metal carbonyl (Sg(CO)₆). “It was a fantastic feeling,” he says.

In this case the prediction — which the experiment confirmed — was that special relativity would make the molecule behave more like its lighter counterparts than might analogous compounds of different superheavy elements.

In an accompanying commentary, nuclear chemist Walter Loveland of Oregon State University in Corvallis writes that similar techniques could be applied to other superheavy elements from 104 to 109. In particular, the chemistry of element 109 (meitnerium) has never been studied before, he notes.

by Davide Castelvecchi at September 19, 2014 06:34 PM

Lubos Motl - string vacua and pheno

AMS in PRL: the positrons do stop increasing
...but the evidence for an actual drop remains underwhelming...



In April 2013, the Alpha Magnetic Spectrometer (AMS-02), a gadget carried by the International Space Station that looks for dark matter and other things and whose data are being evaluated by Nobel prize winner Sam Ting (MIT) and his folks, reported intriguing observations that were supposed to grow into a smoking gun proving that dark matter exists and is composed of heavy elementary particles:
AMS-02 seems to overcautiously censor solid evidence for dark matter

AMS: the steep drop is very likely there
I had various reasons for these speculative optimistic prophecies – including Sam Ting's body language. It just seemed that he knew more than he was saying and was only presenting a very small, underwhelming part of the observations.




Recall that among these high energy particles, there are both electrons and positrons. The positrons are more exotic and may be produced by pulsars – which is a boring explanation. However, they may also originate from the annihilation of dark matter "WIMP" particles. If that's so, dark matter particle physics predicts that the positron fraction increases as you increase the energy of the electrons and positrons. But at some moment, when the energy reaches a few hundreds of \(\GeV\) or so, the positron fraction should stop growing and steeply drop afterwards.




Was that observed in 2013? Has it been observed by now? Finally, today, AMS-02 published a new paper in prestigious PRL:
High Statistics Measurement of the Positron Fraction in Primary Cosmic Rays of \(0.5\)–\(500\GeV\) with the Alpha Magnetic Spectrometer on the International Space Station (PRL)

CERN story, CERN press release + PDF supplement, copy at interactions.org, APS, NBC, Symmetry Magazine
The PRL abstract says that, for the first time, they observe the positron fraction's increase with energy to stop somewhere, approximately at \(200\GeV\), although e.g. interactions.org puts the place at \(275\pm 32\GeV\). Moreover, the derivative of the number of positrons with respect to the energy exceeds the same derivative for electrons around dozens of \(\GeV\), which makes it more likely that these lower but high-energy positrons indeed directly originate from a high-energy source and not from deceleration.



While the claim about the end of the increase of the positron fraction agrees with the graphs like those above (other graphs show the positron fraction stabilized at \(0.15\) between \(190\) and \(430\GeV\) or so), I find the "end of the increase" or a "potentially emerging decrease" tantalizing but still unspectacularly weak and inconclusive. Indeed, the "straight decrease" itself still seems to be unsupported. Even if we were shown these graphs in April 2013, and we were shown a bit less than that, I would have thought that Sam Ting's hype was probably a bit excessive.

Just to be sure, the behavior in the graphs is compatible with a (below) \(1\TeV\) dark matter particle like a neutralino (supersymmetry's most convincing dark matter candidate), and indeed, I tend to think that this is what actually exists and will emerge at the LHC, too. Incidentally, some sources tell us that the LHC is back to business after the upgrade. It's a bit of an exaggeration of the actual ongoing "business" but let's hope that in April 2015, the \(13\TeV\) and perhaps \(14\TeV\) collisions will start smoothly and abruptly.



SUSY and related scenarios predict a positron fraction as a function of energy that looks like the red graph above (two values of the neutralino mass are depicted). Up to a certain energy, it looks just like what AMS-02 has already shown us – which is good news for WIMP and/or SUSY – but we still haven't seen the dramatic drop yet. Of course, it's conceivable that Ting et al. are still hiding something they already have – and maybe have had already in April 2013. Maybe the hiding game is needed for the continued funding of their experiment. But this is just another speculation.



The neverending story that takes place at the ISS is described in this musical video clip.

by Luboš Motl (noreply@blogger.com) at September 19, 2014 06:08 PM

Emily Lakdawalla - The Planetary Society Blog

More jets from Rosetta's comet!
Another lovely view of comet Churyumov-Gerasimenko contains jets. Bonus: Emily explains how to use a flat field to rid these glorious Rosetta NavCam images of faint stripes and specks.

September 19, 2014 06:07 PM

CERN Bulletin

Administrative Circular No. 11 (Rev. 3) - Categories of members of the personnel

Administrative Circular No. 11 (Rev. 3) entitled “Categories of members of the personnel”, approved by the Director-General following discussion at the Standing Concertation Committee meeting of 3 July 2014 and entering into force on 1 September 2014, is available on the intranet site of the Human Resources Department:

This circular is applicable to all members of the personnel.

It cancels and replaces Administrative Circular No. 11 (Rev. 2) entitled “Categories of members of the personnel” of January 2013.

The circular was revised in order to include a minor adjustment to the determination of the required period of break in the payment of subsistence allowance to certain categories of associated members of the personnel (taking account of possible technical means of control). Furthermore, the possibility of traineeships of long duration was restricted to cases in which the traineeship is awarded pursuant to an agreement between CERN and a funding agency on a national or international level.

Department Head Office
HR Department 

September 19, 2014 04:09 PM

astrobites - astro-ph reader's digest

Apply to be an Astronomy Ambassador


Do you think outreach is an important part of our jobs as scientists? We hope so! Do you want to learn more about how to effectively communicate science with the public and work with people who can teach you how to have a real impact in your community?

The AAS Astronomy Ambassadors program is now accepting applications for a workshop at this winter’s AAS meeting! We’ve discussed the Astronomy Ambassadors program before: check out Zack’s argument about why this program is so important, and if you’d like to learn more about what to expect, you can read Meredith’s and Allison’s accounts of the things they learned when they participated in the inaugural workshop.

The Ambassadors program is particularly seeking early-career astronomers who are interested in outreach but not necessarily experienced. So, don’t be discouraged if you haven’t had many opportunities to share science with public audiences. This workshop is perfect for you.

Read the official invitation from the AAS below, and click here to apply!

 

Astronomy Ambassadors Workshop for Early-Career AAS Members

The AAS Astronomy Ambassadors program supports early-career AAS members with training in resources and techniques for effective outreach to K-12 students, families, and the public. The next AAS Astronomy Ambassadors workshop will be held 3-4 January 2015 at the 225th AAS meeting in Seattle, Washington. Workshop participants will learn to communicate more effectively with public and school audiences; find outreach opportunities and establish ongoing partnerships with local schools, museums, parks, and/or community centers; reach audiences with personal stories, hands-on activities, and jargon-free language; identify strategies and techniques to improve their presentation skills; gain access to a menu of outreach resources that work in a variety of settings; and become part of an active community of astronomers who do outreach.

Participation in the program includes a few hours of pre-workshop online activities to help us get to know your needs; the two-day workshop, for which lunches and up to 2 nights’ lodging will be provided; and certification as an AAS Astronomy Ambassador, once you have logged three successful outreach events. The workshop includes presenters from the American Astronomical Society, the Astronomical Society of the Pacific, and the Pacific Science Center.

The number of participants is limited, and the application requires consent from your department chair. We invite applications from graduate students, postdocs and new faculty in their first two years after receipt of their PhD, and advanced undergraduates doing research and committed to continuing in astronomy. Early-career astronomers who are interested in doing outreach, but who haven’t done much yet, are encouraged to apply; we will have sessions appropriate for both those who have done some outreach already and those just starting their outreach adventures. We especially encourage applications from members of groups that are presently underrepresented in science.

Please complete the online application form by Monday, 20 October 2014.

by Astrobites at September 19, 2014 03:41 PM

Sean Carroll - Preposterous Universe

How Much Cosmic Inflation Probably Occurred?

Nothing focuses the mind like a hanging, and nothing focuses the science like an unexpected experimental result. BICEP2’s claimed discovery of gravitational waves in the cosmic microwave background — although we still don’t know whether it will hold up — has prompted cosmologists to think hard about the possibility that inflation happened at a very high energy scale. The BICEP2 paper has over 600 citations already, or more than 3/day since it was released. And hey, I’m a cosmologist! (At times.) So I am as susceptible to being prompted as anyone.

Cosmic inflation, a period of super-fast accelerated expansion in the early universe, was initially invented to help explain why the universe is so flat and smooth. (Whether this is a good motivation is another issue, one I hope to talk about soon.) In order to address these problems, the universe has to inflate by a sufficiently large amount. In particular, we have to have enough inflation so that the universe expands by a factor of more than 10^22, which is about e^50. Since physicists think in exponentials and logarithms, we usually just say “inflation needs to last for over 50 e-folds” for short.

So Grant Remmen, a grad student here at Caltech, and I have been considering a pretty obvious question to ask: if we assume that there was cosmic inflation, how much inflation do we actually expect to have occurred? In other words, given a certain inflationary model (some set of fields and potential energies), is it most likely that we get lots and lots of inflation, or would it be more likely to get just a little bit? Everyone who invents inflationary models is careful enough to check that their proposals allow for sufficient inflation, but that’s a bit different from asking whether it’s likely.

The result of our cogitations appeared on arxiv recently:

How Many e-Folds Should We Expect from High-Scale Inflation?
Grant N. Remmen, Sean M. Carroll

We address the issue of how many e-folds we would naturally expect if inflation occurred at an energy scale of order 10^16 GeV. We use the canonical measure on trajectories in classical phase space, specialized to the case of flat universes with a single scalar field. While there is no exact analytic expression for the measure, we are able to derive conditions that determine its behavior. For a quadratic potential V(ϕ) = m^2 ϕ^2/2 with m = 2×10^13 GeV and cutoff at M_Pl = 2.4×10^18 GeV, we find an expectation value of 2×10^10 e-folds on the set of FRW trajectories. For cosine inflation V(ϕ) = Λ^4 [1 − cos(ϕ/f)] with f = 1.5×10^19 GeV, we find that the expected total number of e-folds is 50, which would just satisfy the observed requirements of our own Universe; if f is larger, more than 50 e-folds are generically attained. We conclude that one should expect a large amount of inflation in large-field models and more limited inflation in small-field (hilltop) scenarios.

As should be evident, this builds on the previous paper Grant and I wrote about cosmological attractors. We have a technique for finding a measure on the space of cosmological histories, so it is natural to apply that measure to different versions of inflation. The result tells us — at least as far as the classical dynamics of inflation are concerned — how much inflation one would “naturally” expect in a given model.

The results were interesting. For definiteness we looked at two specific simple models: quadratic inflation, where the potential for the inflaton ϕ is simply a parabola, and cosine (or “natural“) inflation, where the potential is — wait for it — a cosine. There are many models one might consider (one recent paper looks at 193 possible versions of inflation), but we weren’t trying to be comprehensive, merely illustrative. And these are two nice examples of two different kinds of potentials: “large-field” models where the potential grows without bound (or at least until you reach the Planck scale), and “small-field” models where the inflaton can sit near the top of a hill.


Think for a moment about how much inflation can occur (rather than “probably does”) in these models. Remember that the inflaton field acts just like a ball rolling down a hill, except that there is an effective “friction” from the expansion of the universe. That friction becomes greater as the expansion rate is higher, which for inflation happens when the field is high on the potential. So in the quadratic case, even though the slope of the potential (and therefore the force pushing the field downwards) grows quite large when the field is high up, the friction is also very large, and it’s actually the increased friction that dominates in this case. So the field rolls slowly (and the universe inflates) at large values, while inflation stops at smaller values. But there is a cutoff, since we can’t let the potential grow larger than the Planck scale. So the quadratic model allows for a large but finite amount of inflation.
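To get a rough feel for how large “large but finite” is, here is a back-of-the-envelope slow-roll estimate (my own sketch, not the measure-based calculation in the paper): the number of e-folds is N ≈ (1/M_Pl^2) ∫ (V/V′) dϕ, which for the quadratic potential gives N = ϕ^2/(4 M_Pl^2), and capping the energy density at the Planck scale fixes the largest allowed starting value ϕ_max = √2 M_Pl^2/m. Plugging in the numbers quoted in the abstract lands in the same ballpark as the 2×10^10 expectation value quoted there.

import numpy as np

m = 2e13        # inflaton mass in GeV (value quoted in the abstract above)
M_pl = 2.4e18   # reduced Planck mass in GeV

phi_max = np.sqrt(2) * M_pl**2 / m       # field value where V = m^2 phi^2 / 2 reaches M_Pl^4
N_max = phi_max**2 / (4 * M_pl**2)       # maximum number of slow-roll e-folds
print(f"{N_max:.1e}")                    # ≈ 7e9 e-folds, i.e. billions of e-folds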

The cosine model, on the other hand, allows for a potentially infinite amount of inflation. That’s because the potential has a maximum at the top of the hill. In principle, there is a trajectory where the field simply sits there and inflates forever. Slightly more realistically, there are other trajectories that start (with zero velocity) very close to the top of the hill, and take an arbitrarily long time to roll down. There are also, of course, trajectories that have a substantial velocity near the top of the hill, which would inflate for a relatively short period of time. (Inflation only happens when the energy density is mostly in the form of the potential itself, with a minimal contribution from the kinetic energy caused by the field velocity.) So there is an interesting tradeoff, and we would like to know which effect wins.
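For the cosine potential the same slow-roll estimate (again just a standard back-of-the-envelope formula, not the paper’s measure computation) can be done in closed form: writing θ = ϕ/f, the number of e-folds accumulated while rolling from θ_start down to θ_end is approximately

N ≈ (2 f^2 / M_Pl^2) ln[ cos(θ_end/2) / cos(θ_start/2) ],

which diverges as θ_start → π (the hilltop) but is modest for generic starting angles — exactly the tradeoff described above.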

The answer that Grant and I derived is: in the quadratic potential, we generically expect a huge amount of inflation, while in the cosine potential, we expect barely enough (fairly close to the hoped-for 50 e-folds). Although you can in principle inflate forever near a hilltop, such behavior is non-generic; you really need to fine-tune the initial conditions to make it happen. In the quadratic potential, by contrast, getting a buttload of inflation is no problem at all.

Of course, this entire analysis is done using the classical measure on trajectories through phase space. The actual world is not classical, nor is there any strong reason to expect that the initial conditions for the universe are randomly chosen with respect to the Liouville measure. (Indeed, we’re pretty sure that they are not.)

So this study has a certain aspect of looking-under-the-lamppost. We consider the naturalness of cosmological histories with respect to the conserved measure on classical phase space because that’s what we can do. If we had a finished theory of quantum gravity and knew the wave function of the universe, we would just look at that.

We’re not looking for blatant contradictions with data, we’re looking for clues that can help us move forward. The way to interpret our result is to say that, if the universe has a field with a potential like the quadratic inflation model (and the initial conditions are sufficiently smooth to allow inflation at all), then it’s completely natural to get more than enough inflation. If we have the cosine potential, on the other hand, then getting enough inflation is possible, but far from a sure thing. Which might be very good news — it’s generally thought that getting precisely “just enough” inflation is annoyingly finely-tuned, but here it seems quite plausible. That might suggest that we could observe remnants of the pre-inflationary universe at very large scales today.

by Sean Carroll at September 19, 2014 02:51 PM

CERN Bulletin

CERN Road Race | 1 October
The 2014 edition of the annual CERN Road Race will be held on Wednesday 1 October at 18:15.   The 5.5 km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8 km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter for free and each child will receive a medal. More information, and the online entry form, can be found here.
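The handicap arithmetic behind the staggered start is simple: each runner sets off after a delay equal to the gap between their predicted time and the slowest prediction. A toy sketch, with invented names and times:

```python
# Toy handicap-start calculation: slower runners start first, offset so that in theory
# everyone finishes together. The names and predicted times are invented.

predicted_minutes = {"Ana": 17.5, "Ben": 22.0, "Chiara": 28.0, "Dev": 34.0}

slowest = max(predicted_minutes.values())
start_offset = {name: slowest - t for name, t in predicted_minutes.items()}

for name, offset in sorted(start_offset.items(), key=lambda kv: kv[1]):
    print(f"{name:7s} starts {offset:4.1f} minutes after the first starter")
```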

by Klaus Hanke at September 19, 2014 01:45 PM

CERN Bulletin

Exhibition | CERN Micro Club | 1-30 September
The CERN Micro Club (CMC) is organising an exhibition looking back on the origins of the personal computer, also known as the micro-computer, to mark the 60th anniversary of CERN and the club’s own 30th anniversary.
CERN, Building 567, R-021 and R-029
01.09.2014 - 30.09.2014, from 4.00 to 6.00 p.m.
The exhibition will be held in the club’s premises (Building 567, rooms R-021 and R-029) and will be open Mondays to Thursdays from 1 to 30 September 2014. Come and admire, touch and use makes and models that disappeared from the market many years ago, such as Atari, Commodore, Olivetti, DEC, IBM and Apple II and III, all in good working order and installed with applications and games from the period. Club members will be on hand to tell you about these early computers, which had memories of just a few kilobytes, whereas those of modern computers can reach several gigabytes or even terabytes.

September 19, 2014 01:25 PM

Peter Coles - In the Dark

The Athenian Option Revisited

I have to admit that I didn’t stay up to watch the results come in from the referendum on Scottish independence, primarily because I knew I had a very busy morning ahead of me and needed an early night. Not eligible to vote myself, I did toy with the idea of having a bet on the outcome, but the odds on the “no” outcome I thought more likely were 9-1 on, so it was hardly worth a flutter at all. The opinion polls may have had difficulty getting this one right, but I generally trust the bookies’ assessment.

Anyway, to summarize the outcome:

  • “No” obtained a mark of 55%, which corresponds to a solid II.2 with no need for resits.
  • “Yes” obtained a mark of 45%, which is a Third Class result, but may claim extenuating circumstances or request another attempt.

Sorry about that. I guess I’ve been doing too many examination boards these days…

On balance, I’m glad that Scotland voted “no” but I don’t think it would have been that much of a big deal in the long run had they decided otherwise. There might have been some short-term difficulties but we’d all have survived. In the end what matters is that this whole exercise was run democratically and the issue was settled by voting rather than fighting, which is what would have happened in the not-too-distant past.

The aftermath of the vote against Scottish secession has been dominated by talk of greater devolution of powers not only to Scotland but also to Wales and even the English regions. One striking thing about the referendum was the high turnout (by British standards) of around 85 per cent, which contrasts strongly with the dismal rate of participation in, e.g., the recent European elections. In the light of all this I thought I’d resurrect an idea I’ve blogged about before.

Some time ago I read a very interesting and provocative little book called The Athenian Option, which offers a radical vision of how to renew Britain’s democracy.

The context within which this book was written was the need to reform Britain’s unelected second chamber, the House of Lords. The authors of the book, Anthony Barnett and Peter Carty, were proposing a way to do this even before Tony Blair’s New Labour party came to power in 1997, promising to reform the House of Lords in its manifesto. Despite being well into its third Parliament, New Labour hasn’t done much about it yet, and has even failed to offer any real proposals. Although it has removed voting rights from the hereditary peers, the result of this is that the House of Lords is still stuffed full of people appointed by the government.

The need for reform is now greater than ever. In recent times, we have seen dramatically increasing disillusionment with the political establishment, which has handed out billions of pounds of tax payers’ money to the profligate banking sector, causing a ballooning public debt, followed by savage cuts in public spending with consequent reductions in jobs and services.

Meanwhile, starting under New Labour, the culture of cronyism led to the creation of a myriad of pointless quangos doing their best to strangle the entire country with red tape. Although Gordon Brown stated in 2004 that he was going to reduce bureaucracy, the number of civil servants in the UK grew by about 12% (from 465,700 to 522,930) between 2004 and 2009. If the amount of bureaucracy within the British university system is anything to go by, the burden of the constant processes of evaluation, assessment and justification is out of all proportion to what useful stuff actually gets done. This started in the Thatcher era with Conservative governments who viewed the public services as a kind of enemy within, to be suspected, regulated and subdued. However, there’s no denying that it has got worse in recent years.

There is an even more sinister side to all this, in the steady erosion of civil liberties through increased clandestine surveillance, detention without trial and the rest of the paraphernalia of paranoid government. Big Brother isn’t as far off as we’d all like to think.

The furore over MPs’ expenses led to further disgust with the behaviour of our elected representatives, many of whom seem to be more interested in lining their own pockets than in carrying out their duties.

The fact is that the political establishment has become so remote from its original goal of serving the people that it is now regarded with near-total contempt by a large fraction of the population. Politics now primarily serves itself and, of course, big business. It needs to be forced to become more accountable to ordinary people. This is why I think the suggestion of radical reform along the lines suggested by Barnett and Carty is not only interesting, but something like it is absolutely essential if we are to survive as a democracy.

What they propose is to abolish the House of Lords as the Second Chamber, and replace it with a kind of jury selected by lottery from the population in much the same way that juries are selected for the crown courts, except that they would be much larger, of order a thousand people or so. This is called the Athenian Option because in ancient Athens all citizens could vote (although I should add that Athens had about 5000 citizens and about 100,000 slaves, and women couldn’t vote even if they weren’t slaves, so the name isn’t all that appropriate).

Selection of representatives from the electoral roll would be quite straightforward to achieve. Service should be mandatory, but the composition of the Second Chamber could be refreshed sufficiently frequently that participation would not be too onerous for any individual. It may not even be necessary for the jury to attend a physical `house' at all: they could vote by telephone or internet, although safeguards would be needed to prevent fraud or coercion. It would probably be better if each member of the panel voted independently and in secret anyway.
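As a rough illustration of how mechanically simple such sortition would be, here is a toy sketch; the electoral roll, panel size and replacement rate are all invented for the example.

```python
import random

# A minimal sketch of selection by lot from an electoral roll, with rolling replacement
# so that no individual serves for long. Roll, panel size and turnover are invented.

electoral_roll = [f"citizen_{i:06d}" for i in range(100_000)]   # placeholder roll
PANEL_SIZE = 1000
REPLACED_EACH_MONTH = 84            # roughly full turnover in about a year

panel = random.sample(electoral_roll, PANEL_SIZE)

def monthly_refresh(panel, roll):
    """Retire a random slice of the panel and draw replacements from citizens not serving."""
    retiring = set(random.sample(panel, REPLACED_EACH_MONTH))
    remaining = [p for p in panel if p not in retiring]
    eligible = list(set(roll) - set(panel))
    return remaining + random.sample(eligible, REPLACED_EACH_MONTH)

panel = monthly_refresh(panel, electoral_roll)
print(len(panel), "jurors currently empanelled")
```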

The central body of government would continue to be a representative Parliament similar to the current House of Commons. The role of the jury would be limited to voting on legislation sent to it by the House of Commons, which would continue to be elected by a General Election as it is at present. Bills passed by the Commons could not become law unless approved by the juries.

Turnout at British general elections has been falling steadily over the past two decades. Apathy has increased  because the parliamentary machine has become detached from its roots. If nothing is done to bring it back under popular control, extremist parties like the British National Party will thrive and the threat to our democracy will grow further.

The creation of regional assemblies in Wales, Scotland and Northern Ireland has not been as successful as it might have been because it has resulted not in more democracy, but in more politicians. The Welsh Assembly, for example, has little real power, but it has fancy offices and big salaries for its members, and we have it on top of Westminster and the local Councils.

We also have a European Parliament, again with very little real power but with its own stock of overpaid and self-important politicians elected by the tiny fraction of the electorate that bothers to vote.

My solution to this mess would be to disband the regional assemblies and create regional juries in their place. No legislation would be enacted in Wales unless passed by the Welsh jury, likewise elsewhere.

To be consistent, the replacement House of Lords should be an English jury, although perhaps there could be regional structures within England too. We would therefore have one representative house, The House of Commons, and regional juries for Wales, Scotland, England (possibly more than one) and Northern Ireland. This would create a much more symmetrical structure for the governance of the United Kingdom, putting an end to such idiocies as the West Lothian Question.

Of course many details would need to be worked out, but it seems to me that this proposal makes a lot of sense. It retains the political party system in the House of Commons where legislation would be debated and amended before being sent to the popular juries. The new system would, however, be vastly cheaper than our current system. It would be much fairer and more democratic. It would make the system of government more accountable, and it would give citizens a greater sense of participation in and responsibility for the United Kingdom’s political culture. Politics is too important to be left to politicians.

On the other hand, in order to set it up we would need entire sections of the current political structure to vote themselves out of existence. Since they’re doing very nicely out of the current arrangements, I think change is unlikely to be forthcoming through the usual channels. Turkeys won’t vote for Christmas.

Anyone care for a revolution?

 


by telescoper at September 19, 2014 01:00 PM

CERN Bulletin

Women’s rugby tournament | 27 September
Women’s rugby tournament
Saint-Genis rugby pitch - Golf des Serves
27 September 2014 - 10 a.m.
For the third consecutive year, the women's rugby club of CERN Meyrin St Genis, The Wildcats, are organising a women’s 7's rugby tournament. With the support of the Office Municipal des Sports of St Genis-Pouilly and various other sponsors, we will be welcoming 10 teams ready to fight it out for victory! Bring your family and friends for a great day of rugby! Come and discover the values of team spirit in rugby and support your local team (RC CMSG). An initiation for kids between 4 and 10 years old will be organised by school rugby trainers. There will also be a live music concert. Food and drink will be available all day.
Concert schedule:
  • 6 p.m.: Bad spirits out of the boot
  • 7 p.m.: SoundHazard
  • 8 p.m.: Miss Proper & the Moving Targets
  • 9 p.m.: Fuzzy Dunlop
More information on: http://www.facebook.com/events/509236532536269/

September 19, 2014 12:58 PM

The Great Beyond - Nature blog

UN Security Council says Ebola is security threat

The Ebola outbreak in West Africa is “a threat to international peace and security”, the United Nations (UN) Security Council said on 18 September, in a resolution calling for a massive increase in the resources devoted to stemming the virus’s spread.


Centers for Disease Control and Prevention

The council is asking countries to send supplies and medical personnel to Liberia, Guinea and Sierra Leone, and seeks to loosen travel restrictions that have hampered outbreak response in those countries. The unusual resolution was co-sponsored by 131 nations and approved at the first emergency council meeting organized in response to a health crisis.

More than 5,300 people are thought to have been infected with Ebola during the current epidemic, and more than 2,600 have died, according to the World Health Organization (WHO) in Geneva, Switzerland.

The pace of the disease’s spread seems to be increasing, with the number of Ebola cases now doubling every three weeks, UN secretary general Ban Ki-moon told the council. “The gravity and scale of the situation now require a level of international action unprecedented for a health emergency,” he said.
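To get a feel for what a three-week doubling time implies, here is a naive constant-doubling projection using only the figures quoted above (about 5,300 cases as of mid-September). Real epidemics do not follow a clean exponential, so this is purely illustrative of the pace, not a forecast.

```python
from datetime import date, timedelta

# Naive constant-doubling projection from the quoted figures: ~5,300 cases in
# mid-September 2014, doubling every three weeks. Illustration only, not a forecast.

cases = 5300
doubling_time = timedelta(weeks=3)
t = date(2014, 9, 18)

for _ in range(4):
    t += doubling_time
    cases *= 2
    print(t, f"~{cases:,} cases if the doubling pace continued unchecked")
```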

WHO director-general Margaret Chan sounded a similarly dire warning. “This is likely the greatest peace-time challenge that the United Nations and its agencies have ever faced,” Chan told the security council.

The UN estimates that an effective response to the Ebola outbreak will cost nearly US$1 billion, double the $490 million figure put forth by the WHO on 28 August. The United States has promised a major influx of resources, with US President Barack Obama announcing on 16 September that he would send 3,000 military personnel and spend roughly $750 million to aid the Ebola fight.

by Lauren Morello at September 19, 2014 01:56 AM

September 18, 2014

The Great Beyond - Nature blog

US Congress approves stopgap funding bill

The US Congress may consider approving a final 2015 budget in November.

Architect of the Capitol

The US Senate passed a stopgap spending bill on 18 September that includes US$88 million to fight the Ebola outbreak in West Africa.

The bill, endorsed by the House of Representatives on 17 September, now heads to US President Barack Obama, who is expected to sign it into law. The legislation would fund government operations from 1 October — when the 2015 fiscal year begins — until 11 December.

Under the plan, the US Centers for Disease Control and Prevention would receive $30 million to send more health workers and resources to countries affected by the Ebola outbreak; the agency said earlier this week that it has roughly 100 personnel in Africa working on Ebola response. The Biomedical Advanced Research and Development Authority would receive $58 million to fund the development of the promising antibody cocktail known as ZMapp, made by Mapp Pharmaceutical in San Diego, California, and two vaccines against Ebola produced by the US National Institutes of Health and NewLink Genetics of Ames, Iowa.

The funding is a small fraction of the 3,000 military personnel and roughly $750 million that Obama has committed to the Ebola fight. The disease is thought to have infected more than 5,300 people and has killed more than 2,600, according to the World Health Organization in Geneva, Switzerland.

The temporary funding measure would essentially hold US agencies’ budgets flat at 2014 levels. A more permanent 2015 spending plan will have to wait until Congress returns to work after the federal election on 4 November.

Below are the funding levels that key US science agencies received in 2014, plus the funding levels proposed in 2015 House and Senate spending bills, and the estimated fiscal 2015 funding included in the stopgap measure approved on 18 September.

Agency | 2014 funding level | 2015 House proposal | 2015 Senate proposal | 2015 stopgap measure (estimated)
National Institutes of Health | 30,003 | N/A | N/A | 30,003
Centers for Disease Control and Prevention | 5,882 | N/A | N/A | 5,912
Food and Drug Administration | 2,640* | 2,574 | 2,588 | 2,640
National Science Foundation | 7,172 | 7,404 | 7,255 | 7,172
NASA (science) | 5,151 | 5,193 | 5,200 | 5,151
Department of Energy Office of Science | 5,066 | 5,071 | N/A | 5,066
Environmental Protection Agency | 8,200 | 7,483 | N/A | 8,200
National Oceanic and Atmospheric Administration | 5,314 | 5,325 | 5,420 | 5,314
US Geological Survey | 1,032 | 1,035 | N/A | 1,032

(All figures in US$ millions.)

* Includes one-time transfer of $79 million in user fees.

Additional reporting by Sara Reardon. 

by Lauren Morello at September 18, 2014 10:37 PM

Symmetrybreaking - Fermilab/SLAC

Pursuit of dark matter progresses at AMS

A possible sign of dark matter will eventually become clear, according to promising results from the Alpha Magnetic Spectrometer experiment.

New results from the Alpha Magnetic Spectrometer experiment show that a possible sign of dark matter is within scientists’ reach.

Dark matter is a form of matter that neither emits nor absorbs light. Scientists think it is about five times as prevalent as regular matter, but so far have observed it only indirectly.

The AMS experiment, which is secured to the side of the International Space Station 250 miles above Earth, studies cosmic rays, high-energy particles in space. A small fraction of these particles may have their origin in the collisions of dark matter particles that permeate our galaxy. Thus it may be possible that dark matter can be detected through measurements of cosmic rays.

AMS scientists—based at the AMS control center at CERN research center in Europe and at collaborating institutions worldwide—compare the amount of matter and antimatter cosmic rays of different energies their detector picks up in space. AMS has collected information about 54 billion cosmic ray events, of which scientists have analyzed 41 billion.

Theorists predict that at higher and higher energies, the proportion of antimatter particles called positrons should drop in comparison to the proportion of electrons. AMS found this to be true.

However, in 2013 it also found that beyond a certain energy—8 billion electronvolts—the proportion of positrons begins to climb steeply.

“This means there’s something new there,” says AMS leader and Nobel Laureate Sam Ting of the Massachusetts Institute of Technology and CERN. “It’s totally unexpected.”

The excess was a clear sign of an additional source of positrons. That source might be an astronomical object we already know about, such as a pulsar. But the positrons could also be produced in collisions of particles of dark matter.

Today, Ting announced AMS had discovered the other end of this uptick in positrons—an indication that the experiment will eventually be able to discern what likely caused it.

“Scientists have been measuring this ratio since 1964,” says Jim Siegrist, associate director of the US Department of Energy’s Office of High-Energy Physics, which funded the construction of AMS. “This is the first time anyone has observed this turning point.”

The AMS experiment found that the proportion of positrons begins to drop off again at around 275 billion electronvolts.

The energy that comes out of a particle collision must be equal to the amount that goes into it, and mass is related to energy. The energies of positrons made in dark matter particle collisions would therefore be limited by the mass of dark matter particles. If dark matter particles of a certain mass are responsible for the excess positrons, those extra positrons should drop off rather suddenly at an energy corresponding to the dark matter particle mass.

If the numbers of positrons at higher energies do decrease suddenly, the rate at which they do it can give scientists more clues as to what kind of particles caused the increase in the first place. “Different particles give you different curves,” Ting says. “With more statistics in a few years, we will know how quickly it goes down.”

If they decrease gradually instead, it is more likely they were produced by something else, such as pulsars.
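A cartoon helps to see the distinction being drawn here. The snippet below is an invented toy, not the AMS measurement or any real model: it compares an extra positron component that cuts off sharply near a hypothetical dark matter mass with one that fades away gradually, as a pulsar might produce; all shapes and numbers are made up to show the qualitative difference.

```python
import numpy as np

# Invented cartoon of the two scenarios: a positron excess with a sharp edge near a
# hypothetical dark matter mass versus one that declines gradually (pulsar-like).
# None of these curves is a real model or a fit to AMS data.

E = np.logspace(1, 3, 200)                      # positron energy in GeV
M_DM = 275.0                                    # hypothetical dark matter mass, GeV

secondary   = 0.05 * (E / 10.0) ** -0.3         # made-up "ordinary" positron fraction
dm_like     = 0.10 * np.exp(-(E / M_DM) ** 4)   # extra component with a sharp edge at ~M_DM
pulsar_like = 0.10 * (1.0 + E / 500.0) ** -1    # extra component with no edge at all

for name, extra in [("dark-matter-like", dm_like), ("pulsar-like", pulsar_like)]:
    frac = secondary + extra
    samples = [round(float(np.interp(x, E, frac)), 3) for x in (100, 275, 600)]
    print(f"{name:17s} positron fraction at 100/275/600 GeV: {samples}")
```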

To gain a clearer picture, AMS scientists have begun to collect data about another matter-antimatter pair—protons and antiprotons—which pulsars do not produce.

The 7.5-ton AMS experiment was able to make these unprecedented measurements due to its location on the International Space Station, above the interference of Earth’s atmosphere.

“It’s really profound to me, the fact that we’re getting this fundamental data,” says NASA Chief Scientist Ellen Stofan, who recently visited the AMS control center. “Once we understand it, it could change how we see the universe.”

AMS scientists also announced today that the way that the positrons increased within the area of interest, between 8 and 257 GeV, was steady, with no sudden peaks. Such jolts could have indicated that the positron proliferation was caused by sources other than, or in addition to, dark matter.

In addition, AMS discovered that positrons and electrons act very differently at different energies, but that, when combined, the fluxes of the two together unexpectedly seem to fit into a single, straight slope.

“This just shows how little we know about space,” Ting says.

Fifteen countries from Europe, Asia and America participated in the construction of AMS. The collaboration works closely with a management team at NASA’s Johnson Space Center. NASA carried AMS to the International Space Station on the final mission of the space shuttle Endeavour in 2011.

 


by Kathryn Jepsen at September 18, 2014 02:20 PM

arXiv blog

Nanotechnologists Discover How to Carve Tunnels Beneath the Surface of Silicon Chips

A new technique for creating pipes and tunnels deep inside silicon chips could change the way engineers make microfluidic machines and optoelectronic devices.

One of the enabling technologies of the modern world is the ability to construct ever smaller devices out of silicon. At first, these devices were purely electronic—diodes, transistors, capacitors and the like. But more recently, engineers have carved light pipes, fluid-pumping networks, and entire chemistry laboratories out of silicon.

September 18, 2014 02:00 PM

Peter Coles - In the Dark

Scotland Small?

Scotland small? Our multiform, our infinite Scotland _small_?
Only as a patch of hillside may be a cliche corner
To a fool who cries “Nothing but heather!” Where in September another
Sitting there and resting and gazing around
Sees not only heather but blaeberries
With bright green leaves and leaves already turned scarlet,
Hiding ripe blue berries; and amongst the sage-green leaves
Of the bog-myrtle the golden flowers of the tormentil shining;
And on the small bare places, where the little Blackface sheep
Found grazing, milkworts blue as summer skies;
And down in neglected peat-hags, not worked
In living memory, sphagnum moss in pastel shades
Of yellow, green and pink; sundew and butterwort
And nodding harebells vying in their colour
With the blue butterflies that poise themselves delicately upon them,
And stunted rowans with harsh dry leaves of glorious colour
“Nothing but heather!” — How marvellously descriptive! And incomplete!

 

by Hugh MacDiarmid (1892-1978)


by telescoper at September 18, 2014 12:01 PM

Jacques Distler - Musings

Mrs. Adler Lies!

Such a sweet looking old lady…

Mrs. Adler's Gefilte Fish label: 21 piece - institutional pack

Above is the label from a can of gefilte fish that we bought for the holidays. A large can, to be sure, but not (as I discovered, upon opening it) a can containing 21 pieces of gefilte fish. Nor 20 pieces. 14 pieces of gefilte fish … in a can labeled as containing 21.

Wars have started over more trivial affronts.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:15 AM

Jacques Distler - Musings

Halloween 2013

It’s Halloween, again. Time for another pumpkin.

Wendy Davis pumpkin
Run, Wendy, run!

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:14 AM

Jacques Distler - Musings

Audiophilia

Humans are hard-wired to find patterns.

Even when there are none.

Explaining those patterns (at least, the ones which are real) is what science is all about. But, even there, lie pitfalls. Have you really controlled for all of the variables which might have led to the result?

A certain audiophile and journalist posted a pair of files, containing a ~43sec clip of music, and challenged his readers to see if they could hear the difference between them. Sure enough, “File A” sounds a touch brighter than “File B.” A lively discussion ensued, before he revealed the “reason” for the difference:

  • File B was recorded, from his turntable, via a straight-through cable.
  • File A was recorded, from his turntable, via a cable that passed through a switchbox.

Somehow or other, the switchbox (or the associated cabling) was responsible for the added brightness. An even more lively discussion ensued.

Now, if you open up the two files in Audacity, you discover something interesting: File A gets from the beginning of the musical clip to the same point at the end of the clip 43 milliseconds faster than File B does. That’s a 0.1% difference in speed (and hence pitch) of the recording. Such a difference, while too small to be directly discernible as a change in pitch, ought to be clearly perceptible as “brighter.”

On the other hand, I would contend that putting a switchbox in the signal path cannot possibly cause the information, traveling down the wire, to have shorter duration. The far more likely explanation was that there was a 0.1% variation in the speed of Mr. Fremer’s turntable between the two recordings.

I used Audacity’s “Change speed …” effect to slow down File A (by -0.099 percent), clipped the result (so that the two files have exactly the same duration), and posted them below.

See if you can tell which one is Mr. Fremer’s File A, and which is his File B, and, more importantly, whether you can detect a difference in the brightness (or edginess or whatever other audiophiliac descriptions you’d like to attach to them), now that the speed difference has been corrected.

Update: (2/9/2014)

As I said in the comments, and as anyone who opens up the files in Audacity immediately discovers, it is easy to tell which file is which, by examining their waveforms. Here are Files “B” (upper) and “D” (lower)

Waveforms of Files B and D

(click on any image for a larger view). It’s not obvious, from this picture, that they’re the same (bear with me, about that). But, if we think they are, it’s easy enough to check.

  1. First, zoom in as far as you can.
  2. Then use the time-shift tool, to align the two waveforms precisely, at some point in the clip. It’s best to focus on a segment with some high frequency (rapidly varying) content. Fortunately, the pops and clicks on Mr. Fremer’s vinyl record give us plenty to choose from.
  3. Now select one of the wave-forms, and choose “Effect”→“Invert”.
  4. Finally, select both and choose “Tracks”→“Mix and Render”.

If we’re correct, the two traces should precisely cancel each other out.

Exact cancellation Files B and D

And they do (except for the little bits at the beginning and end that I trimmed in creating File “D”). Note that I magnified the vertical scale by a factor of 5, so that you can really see the perfect cancellation.
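For anyone who would rather script the null test than click through Audacity, here is a rough numpy equivalent of the steps above. The file names are placeholders, and the alignment step uses a brute cross-correlation in place of the time-shift tool.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# A rough scripted equivalent of the Audacity null test described above.
# "file_B.wav" and "file_D.wav" are placeholders for whatever pair you want to compare.

def load_mono(path):
    rate, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim == 2:                 # average stereo down to mono
        data = data.mean(axis=1)
    return rate, data

rate_b, b = load_mono("file_B.wav")
rate_d, d = load_mono("file_D.wav")
assert rate_b == rate_d
n = min(len(b), len(d))
b, d = b[:n], d[:n]

# Step 2: align the clips at the lag that maximizes their cross-correlation.
lag = int(np.argmax(correlate(b, d, mode="full", method="fft"))) - (n - 1)

# Steps 3 and 4: invert one track and mix (i.e. subtract), then inspect the residual.
if lag >= 0:
    residual = b[lag:] - d[:n - lag]
else:
    residual = b[:n + lag] - d[-lag:]

rms = lambda x: np.sqrt(np.mean(x**2))
print("best alignment lag (samples):", lag)
print("residual RMS as a fraction of the signal RMS:", rms(residual) / rms(b))
```

A residual close to zero means the two files are the same recording up to a constant shift; a residual that refuses to go away, no matter the shift, is the signature of the wow-and-flutter discussed below.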

You can repeat the same procedure for Files “B” and “C” but, no matter how carefully you perform step 2, you can never get anything close to perfect cancellation. So, without even listening to the files, Mr. Fremer (after he got over his initial misapprehension that “C” and “D” were the same) was able to confidently determine which was which.

In a way, that’s rather disappointing, because it doesn’t really tell us anything about how different “C” and “D” are, and whether fixing the speed of the former diminished, in any way, the perceptible differences between them. As Mr. Fremer says,

As to how the two files sound, I didn’t have time last night to listen but will do so today. Of course I know which is which so I’m not sure what my result might prove.

But, since we have Audacity fired up, let’s see what the story is.

Though I said that, for the clip as a whole, there’s no way to line up the tracks so as to achieve cancellation, on a short-enough timescale you can get very good (though, of course, not perfect) cancellation. The cancellation doesn’t persist – the tracks wander in and out of phase with each other, due (at least in part, if not in toto) to the wow-and-flutter of Mr. Fremer’s turntable.

Here’s an example (chosen because Mr. Fremer seemed to think that there’s a flagrant disparity in the waveforms, right in the middle of this excerpt).

1/4 second excerpt from Files C and D.
File C (upper) and D (lower), from 32.750s to 33.000s.

Following the procedure outlined above, we align the two clips at the center, and attempt to cancel the waveforms:

Attempted cancellation of the same 1/4-second excerpt from Files C and D.

They cancel very well at the center, but progressively poorly as you move to either end, where the two tracks wander out-of-phase. Notice the sharp spikes. If you zoom in, you’ll notice that these are actually S-shaped: they’re the result of superposing two musical peaks (one of which we inverted, of course) that have gotten slightly out-of-phase with each other. They cancel at the center, but not at either end, where they have ceased to overlap. Of course, not just the peaks but everything else has also gone out-of-phase, so these S-shaped spikes sit on top of an incomprehensible hash.

You can repeat the process for other short excerpts, with similar-looking results.

Now, “Andy”, below, says he heard a systematic difference between the files, similar to what others reported for Fremer’s Files “A” and “B.” That bears further investigation. But I’ve said to Mr. Fremer that, if he really wants to get to the bottom of what differences, if any, the cables contributed to these recordings, it would be best to eliminate the wow-and-flutter that is clearly responsible for most (if not all) of the visible differences displayed here.

He should start with a digital source (like, say, one of the 24/96 FLAC files we’ve been discussing), played back through the two different cables he wants to test. That source, at least, won’t vary (in a random and uncontrollable fashion) from one playback to the next.

Repeatability is another one of those things that we strive for in Science.


In fact, it is that wandering in and out of phase that is the most glaring difference between the files, and it (rather than the more sophisticated procedure that I outlined above) is what makes it trivial, for even a casual observer, to pick out which file is which.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:14 AM

Jacques Distler - Musings

Golem V

For nearly 20 years, Golem has been the machine on my desk. It’s been my mail server, web server, file server, … ; it’s run Mathematica and TeX and compiled software for me. Of course, it hasn’t been the same physical machine all these years. Like Doctor Who, it’s gone through several reincarnations.

Alas, word came down from the Provost that all “servers” must move (physically or virtually) to the University Data Center. And, bewilderingly, the machine on my desk counted as a “server.”

Obviously, a 27” iMac wasn’t going to make such a move. And, equally obvious, it would have been rather difficult to replace/migrate all of the stuff I have running on the current Golem. So we had to go out shopping for Golem V. The iMac stayed on my desk; the machine that moved to the Data Center is a new Mac Mini

The new Mac Mini (front and side views): Golem V, all labeled and ready to go
  • 2.3 GHz quad-core Intel Core i7 (8 logical cores, via hyperthreading)
  • 16 GB RAM
  • 480 GB SSD (main drive)
  • 1 TB HD (Time Machine backup)
  • 1 TB external HD (CCC clone of the main drive)
  • Dual 1 Gigabit Ethernet Adapters, bonded via LACP

In addition to the dual network interface, it (along with, I gather, a rack full of other Mac Minis) is plugged into an ATS (automatic transfer switch), to take advantage of the dual redundant power supply at the Data Center.

Not as convenient, for me, as having it on my desk, but I’m sure the new Golem will enjoy the austere hum of the Data Center much better than the messy cacophony of my office.


I did get a tour of the Data Center out of the deal. Two things stood out for me.

  1. Most UPSs involve large banks of lead-acid batteries. The UPSs at the University Data Center use flywheels. They comprise a long row of refrigerator-sized cabinets which give off a persistent hum due to the humongous flywheels rotating in vacuum within.
  2. The server cabinets are painted the standard generic white. But, for the networking cabinets, the University went to some expense to get them custom-painted … burnt orange.
Custom paint job on the networking cabinets.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 01:10 AM

Jacques Distler - Musings

Bringing the Web to America

It has long been my conviction that anything appearing on the Wall Street Journal’s Editorial/Op-Ed pages is a lie. In fact, if there’s a paragraph appearing on those pages, in which you can’t spot an evident falsehood or obfuscation, then the problem is that you haven’t studied the topic, at hand, in sufficient depth.

On that note, it comes as no surprise that we “learn” [via Kevin Drum] that the Internet was the creation of private industry (specifically, Xerox PARC), not some nasty Government agency (DARPA). Nor is it surprising that the author of the book about PARC, on which the claims of the WSJ Op-Ed were based, promptly took to the pages of the LA Times to debunk each and every paragraph. (See also Vint Cerf: “I would happily fertilize my tomatoes with Crovitz’ assertion.”)

Which leaves me little to do, but post a copy of this lecture, from 1999, by Paul Kunz of SLAC. The video quality is really bad, but this is (to my knowledge) the only extant copy. He tells a bit of the pre-history of the internet, and the role high energy physicists played.

As Michael Hiltzik alluded to, in his LA Times piece, AT&T (and, more relevant for Kunz’s story, the European telecoms) were dead-set against the internet, and did everything they could to smother it in its cradle. High energy physicists (who were, in turn, funded by …) played a surprising role in defeating them. (And yes, unsurprisingly, Al Gore makes a significant appearance towards the end.)

Enjoy ….


Paul Kunz: Bringing the Web to America

And now you know the answer to the trivia question: “What was the first website outside of Europe?”

Update:

For those unfamiliar with how this all works, Gordon Crovitz, the author of the hilariously wrong column in question, is the former publisher of the Wall Street Journal. And the column, itself, is now endlessly echoed and repeated in the wingnutosphere.

by distler (distler@golem.ph.utexas.edu) at September 18, 2014 12:57 AM

September 17, 2014

Symmetrybreaking - Fermilab/SLAC

XKCD creator answers ‘What if?’

Randall Munroe, author of the webcomic xkcd, has found another outlet for his inquisitive nature.

Relentless curiosity is the driving force behind Internet phenomenon Randall Munroe’s new book, What If? Serious Scientific Answers To Absurd Hypothetical Questions.

Munroe, a former NASA roboticist with an undergraduate degree in physics, is known for drawing xkcd, stick figure comics that cover a range of topics including math, coding and physics. The illustrated book, released on September 2, answers reader-submitted questions about imaginary scenarios that spark Munroe’s interest.

“When people ask me these questions, I get really curious about the answer,” Munroe said during a Google Hangout with fans on September 12. “At the end of the day, the thing that really drives me is when someone asks the question and I can’t stop wondering about the answer.”

Munroe said he receives more inquiries than he can read individually. This torrent of far-fetched wonderings also feeds a weekly series on the xkcd website.

Since its release earlier this month, What If? has reached number one on Amazon’s bestseller list and topped the New York Times combined print and e-book nonfiction category.

During an hour-long video chat from Google’s headquarters in Mountain View, California, Munroe answered questions from online fans and host Hank Green of the “vlogbrothers” YouTube channel.

One example of a typical What If? question: How close would you need to be to a supernova to get a lethal dose of neutrino radiation? Munroe pointed out that although supernovae emit a large number of neutrinos that could interact with your DNA, if you got close enough to a supernova you would be vaporized before neutrinos became an issue.

It’s hard to give a real sense of how ghostly neutrinos are to someone who is unfamiliar with the topic, Munroe said. “The idea of having them interact with you at all is unlikely.”

Other topics covered during the discussion ranged from how large a mole of moles would be (they would form a planet, then heat up and erupt in volcanoes) to constraints on skyscraper size (primarily elevators, wind and money).

“I’ve found that some of the very best questions are definitely the ones from little kids,” Munroe said. “I think adults try to make the question really clever and try to bake in a bunch of crazy consequences.”

He also added that readers sometimes submit questions simply to try to get Munroe to do their homework. But such unimaginative queries are unlikely to find a response.

“That’s my basic gauge: Do I want to know the answer to this?” Munroe said. “Is it something that I don’t know already but I would like to know?”

 


by Amanda Solliday at September 17, 2014 05:00 PM

The Great Beyond - Nature blog

Ebola economic impacts to hit US$359 million in 2014

The Ebola outbreak in West Africa is not only devastating the lives of thousands of people in Liberia, Guinea and Sierra Leone, but it is devastating the economies in those countries as well.

The outbreak is expected to halve economic growth this year in Guinea and Liberia, and reduce growth by 30% in Sierra Leone, according to a 17 September report from the World Bank. It estimates that economic damage across the three countries will total US$359 million in 2014. If the world does not respond quickly with money and resources to halt Ebola’s spread, this impact could grow eight-fold next year, warns the report — the first quantitative estimate of the outbreak’s economic impact.

The United Nations’ 16 September Ebola response plan estimates that the cost of immediate response to the crisis will be close to $1 billion, double the $495 million called for by the World Health Organization on 28 August. This estimate will only continue to increase if other countries do not contribute to the response soon, World Bank Group President Jim Yong Kim said in a phone conference with reporters.

In the long term, the World Bank imagines two scenarios for Ebola’s economic impact: a ‘low Ebola’ scenario in which the outbreak is rapidly contained within the three affected countries, and a ‘high Ebola’ scenario in which it goes unchecked until well into 2015. Under the latter scenario, the economies of Guinea, Liberia and Sierra Leone would suffer significantly; Liberia could lose as much as 12% of its gross domestic product in 2015, the analysis says, thus reducing the country’s growth rate from 6.8% to –4.9%.

Agriculture and mining are the sectors worst hit, along with manufacturing and construction.

“There are two kinds of contagion,” Kim said. “One is related to the virus itself and the other is related to the spread of fear about the virus.” Health-care costs and illness from the virus itself contribute little to the economic impact, the report found. Rather, 80–90% of the economic effects are due to the “fear factor” that shuts down transportation systems, including ports and airports, and keeps people away from their jobs.

The exact number of Ebola cases, for which estimates are constantly changing, is not relevant to the economic model that the World Bank developed, Kim said. “What really matters is how quickly we scale up the response so that we can address the entire number of cases. If we get an effective response on the ground in the next few months, we can blunt the vast majority, 80–90%, of the economic impact,” he added. If this does not happen and the epidemic spreads to other countries such as Nigeria, Ghana and Senegal, Kim cautioned, the ultimate economic hit from this outbreak could reach “many billions”.

Kim also announced a new effort to develop a “universal protocol” for Ebola treatment. Paul Farmer, a physician and global-health expert at Harvard University in Cambridge, Massachusetts; Anthony Fauci, director of the US National Institute for Allergy and Infectious Disease in Bethesda, Maryland; and several non-governmental organizations such as Médecins Sans Frontières will work on the protocol for the World Health Organization to adopt to ensure that all health-care workers will be trained to treat Ebola in the same way.

Such protocols have been crucial in improving management of diseases such as tuberculosis, Kim said. For Ebola, measures that are likely to be part of the protocol include simple steps such as isolation of patients and hydration, which can greatly improve survival.

by Sara Reardon at September 17, 2014 04:55 PM

Quantum Diaries

Calm before the storm: Preparing for LHC Run2

It’s been a relatively quiet summer here at CERN, but now, as the leaves begin changing color and the next data-taking period draws nearer, physicists on the LHC experiments are wrapping up their first-run analyses and turning their attention to what lies ahead. “Run2”, expected to start in the spring of 2015, will be the biggest achievement yet for particle physics, with the LHC reaching a higher collision energy than has ever been produced in a laboratory before.

As someone who was here before the start of Run1, the vibe around CERN feels subtly different. In 2008, while the ambitious first-year physics program of ATLAS and CMS was quite broad in scope, the Higgs prospects were certainly the focus. Debates (and even some bets) about when we would find the Higgs boson – or even if we would find it – cropped up all over CERN, and the buzz of excitement could be felt from meeting rooms to cafeteria lunch tables.

Countless hours were also spent in speculation about what it would mean for the field if we *didn’t* find the elusive particle that had evaded discovery for so long, but it was Higgs-centric discussion nonetheless. If the Higgs boson did exist, the LHC was designed to find this missing piece of the Standard Model, so we knew we were eventually going to get our answer one way or another.


Slowly but surely, the Higgs boson emerged in Run1 data. (via CERN)

Now, more than two years after the Higgs discovery and armed with a more complete picture of the Standard Model, attention is turning to the new physics that may lie beyond it. The LHC is a discovery machine, and was built with the hope of finding much more than predicted Standard Model processes. Big questions are being asked with more tenacity in the wake of the Higgs discovery: Will we find supersymmetry? Will we understand the nature of dark matter? Is the lack of “naturalness” in the Standard Model a fundamental problem or just the way things are?

The feeling of preparedness is different this time around as well. In 2008, besides the data collected in preliminary cosmic muon runs used to commission the detector, we could only rely on simulation to prepare the early analyses, inducing a bit of skepticism in how much we could trust our pre-run physics and performance expectations. Compounded with the LHC quenching incident after the first week of beam on September 19, 2008 that destroyed over 30 superconducting magnets and delayed collisions until the end of 2009, no one knew what to expect.


Expect the unexpected…unless it’s a cat.

Fast forward to 2014: we have an increased sense of confidence stemming from our Run1 experience, having put our experiments to the test all the way from data acquisition to event reconstruction to physics analysis to publication…done at a speed which surpassed even our own expectations. We know to what extent we can rely on the simulation, and know how to measure the performance of our detectors.

We also have a better idea of what our current analysis limitations are, and have been spending this LHC shutdown period working to improve them. Working meeting agendas, usually with the words “Run2 Kick-off” or “Task Force” in the title, have been filled with discussions of how we will handle data in 2015, with what precision can we measure objects in the detector, and what our early analysis priorities should be.

The Run1 dataset was also used as a dress rehearsal for future runs, where for example, many searches employed novel techniques to reconstruct highly boosted final states often predicted in new physics scenarios. The aptly-named BOOST conference recently held at UCL this past August highlighted some of the most state-of-the-art tools currently being developed by both theorists and experimentalists in order to extend the discovery reach for new fundamental particles further into the multi-TeV region.

Even prior to Run1, we knew that such new techniques would have to be validated in data in order to convince ourselves they would work, especially in the presence of extreme pileup (i.e. multiple, less-interesting interactions in the proton bunches we send around the LHC ring…a side effect of increased luminosity). While the pileup conditions in 7 and 8 TeV data were only a taste of what we’ll see in Run2 and beyond, Run1 gave us the opportunity to try out these new techniques in data.


One of the first ever boosted hadronic top candidate events recorded in the ATLAS detector, where all three decay products (denoted by red circles) can be found inside a single large jet, denoted by a green circle. (via ATLAS)
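A back-of-the-envelope estimate (mine, not ATLAS code) shows why such events collapse into one fat jet: the decay products of a top quark with mass m and transverse momentum pT are spread over an angle of roughly 2m/pT, which falls below a typical large-jet radius of about 1.0 once pT reaches a few hundred GeV.

```python
# Rough rule of thumb for boosted decays: the decay products of a particle of mass m
# and transverse momentum pT span an angular region Delta R ~ 2*m/pT.
# Numbers below are just the rule of thumb, not an ATLAS result.

M_TOP = 173.0   # GeV

def opening_angle(pt_gev, m=M_TOP):
    """Approximate angular spread (in Delta R) of the decay products."""
    return 2.0 * m / pt_gev

for pt in (200.0, 350.0, 500.0, 800.0):
    dr = opening_angle(pt)
    print(f"top pT = {pt:4.0f} GeV -> Delta R ~ {dr:.2f}, fits in one R = 1.0 jet: {dr < 1.0}")
```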

Conversations around CERN these days sound similar to those we heard before the start of Run1…what if we discover something new, or what if we don’t, and what will that mean for the field of particle physics? Except this time, the prospect of not finding anything is less exciting. The Standard Model Higgs boson was expected to be in a certain energy range accessible at the LHC, and if it was excluded it would have been a major revelation.

There are plenty of well-motivated theoretical models (such as supersymmetry) that predict new interactions to emerge around the TeV scale, but in principle there may not be anything new to discover at all until the GUT scale. This dearth of any known physics processes spanning a range of orders of magnitude in energy is often referred to as the “electroweak desert.”


Physicists taking first steps out into the electroweak desert will still need their caffeine. (via Dan Piraro)

Particle physics is entering a new era. Was the discovery of the Higgs just the beginning, and is there something unexpected to find in the new data? Or will we be left disappointed? Either way, the LHC and its experiments struggled through the growing pains of Run1 to produce one of the greatest discoveries of the 21st century, and if new physics is produced in the collisions of Run2, we’ll be ready to find it.

by Emily Thompson at September 17, 2014 03:29 PM

Lubos Motl - string vacua and pheno

Ambulance-chasing Large Hadron Collider collisions
Guest blog by Ben Allanach on the impure fun of rapid-response physics
B.A. is a professor of theoretical physics at the University of Cambridge. He is a supersymmetry enthusiast, and is always looking for ways to interpret data using it. You can watch his TEDx talk giving some background to the LHC, supersymmetry and dark matter, or (for experts) look at the paper that this blog refers to.
“Ambulance chasing” refers to the morally dubious practice of lawyers chasing down accident victims in order to help them sue. In a physics context, when some recent data disagrees with the Standard Model of particle physics and researchers come up with an interpretation in terms of new physics, they are called ambulance chasers too. This is probably because some view the practice as a little glory-grabbing and somehow impure: you’re not solving problems purely using your mind (you’re using data as well), and even worse than that, you’ve had to be quick or other researchers might have been able to produce something similar before you. It’s not that the complainers get really upset, more that they can be a bit sniffy (and others are just taking the piss in a fun way). I’ve been ambulance chasing some data just recently with collaborators, and we’ve been having a great time. These projects are short, snappy and intense. You work long hours for a short period, playing ping-pong with the draft in the final stages while you quickly write the work up as a short scientific paper.



A couple of weeks ago, the CMS experiment released an analysis of some data (TRF) that piqued our interest because it had a small disagreement with Standard Model predictions. In order to look for interesting effects, CMS sieved the data in the following way: they required either an electron and an anti-electron or a muon and an anti-muon. Electrons and muons are called `leptons’ collectively. They also required two jets (sprays of strongly interacting particles) and some apparent missing energy. We’ve known for years that maybe you could find supersymmetry with this kind of sieving. The jets and leptons could come from the production of supersymmetric particles which decay into them and a supersymmetric dark matter particle. So if you find too many of these type of collisions compared to Standard Model predictions, it could be due to supersymmetric particle production.




The ‘missing energy’ under the supersymmetry hypothesis would be due to a supersymmetric dark matter particle that does not interact with the detector, and steals off momentum and energy from the collision. Some ordinary Standard Model type physics can produce collisions that pass through the sieve: for example top, anti-top production. But top anti-top production will give electron anti-muon production with the same probability as electron anti-electron production. So, to account for the dominant background (i.e. ordinary collisions that we are less interested in but that get through the sieve still), the experiment does something clever: they subtract off the electron anti-muon collisions from the electron anti-electron collisions.
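The bookkeeping behind that subtraction is worth seeing explicitly. The toy numbers below are invented (and efficiency corrections between channels are ignored); they only show how the opposite-flavour count serves as the background estimate under the same-flavour sample.

```python
# Invented toy yields (not the CMS numbers) illustrating the flavour-symmetric
# background subtraction: top pair and similar backgrounds give e-mu pairs at the same
# rate as ee + mumu pairs, so the e-mu count estimates the background.
# Relative efficiency corrections between channels are ignored for simplicity.

n_same_flavour     = 920   # ee + mumu events passing the selection (made up)
n_opposite_flavour = 810   # e-mu events passing the same selection (made up)

background_estimate = n_opposite_flavour
excess = n_same_flavour - background_estimate
stat_error = (n_same_flavour + n_opposite_flavour) ** 0.5   # Poisson error on the difference

print(f"excess = {excess} +/- {stat_error:.0f} events "
      f"({excess / stat_error:.1f} sigma, statistics only)")
```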




The picture below shows the number of collisions that passed through the sieve depending upon the invariant mass of the lepton pair. The big peak is expected and is due to production of a Z-boson. But toward the left-hand side of the plot, you can see that there are a few too many observed events with low invariant masses, compared to the “background” prediction. We’ve interpreted this excess with our supersymmetric particle production hypothesis.



Plot of the dilepton mass distribution from CMS

For those in the know, this is a “2.6 sigma” discrepancy in the rate of production of the type of collisions that CMS had sieved. The number of sigma tells you how unlikely the data is to have come from your model (in this case, the Standard Model). The greater the number of sigma, the more unlikely. 2.6 sigma means that, if you had performed a hundred LHC experiments with identical conditions, the measurement would only have such a large discrepancy once, on average, assuming that the Standard Model is the correct theory of nature. At this point, it’s easy to make it sound like the signal is definitely a discovery. The trouble is, though, that the experiments look at hundreds upon hundreds of measurements, so some of them will come up as discrepant as 2.6 sigma and of course those are the ones you notice. So no one can claim that this is a discovery. Perhaps it will just disappear in new data, because it was one of those chance fluctuations (we’ve seen several like this disappear before). But perhaps it will stick around and get stronger, and that’s the possibility that we are excited about.
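The sigma-to-probability conversion used here is the standard Gaussian one; for concreteness:

```python
from scipy.stats import norm

# Standard conversion from "number of sigma" to a probability; nothing here is specific
# to this particular CMS analysis.

sigma = 2.6
p = 2 * (1 - norm.cdf(sigma))      # chance of a fluctuation at least this large, either way
print(f"{sigma} sigma -> p = {p:.4f}, i.e. about 1 in {1 / p:.0f} identical experiments")
```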

When you do this kind of project, the first thing is to check and see if your hypothesis is ruled out by other data, in which case it’s dead in the water before it can get swimming. After that, the question is: does your hypothesis make any other predictions that can be tested? For instance, we’ve been suggesting how the experiment can take another look at their own data to check our hypothesis (there should also be an obvious excess in the events if you plot them against another variable: `jet di-lepton invariant mass’). And we’ve been making predictions of our hypothesis for the prospects of detecting supersymmetry in Run II next year.

You can be sniffy about our kind of ambulance chasing for a variety of reasons - one of them is that it might be a waste of time because it’s “only a 2.6 sigma effect”. There is an obvious response to this: it’s better to work on a 2.6 sigma signal than a 0.0 sigma one.

by Luboš Motl (noreply@blogger.com) at September 17, 2014 01:22 PM

Tommaso Dorigo - Scientificblogging

John Ellis On The Ascent Of The Standard Model
Being at CERN for a couple of weeks, I could not refrain from following yesterday's talks in the Main Auditorium, which celebrated the 90th birthday of Herwig Schopper, who directed CERN in the crucial years of the LEP construction.

A talk I found most enjoyable was John Ellis'. He gave an overview of the historical context preceding the decision to build LEP, and then a summary of the incredible bounty of knowledge that the machine produced in the 1990s.


by Tommaso Dorigo at September 17, 2014 09:27 AM

Clifford V. Johnson - Asymptotia

Baby Mothra!!!
So I discovered a terrifying (but also kind of fascinating and beautiful at the same time) new element to the garden this morning. We're having a heat wave here, and so this morning before leaving for work I thought I'd give the tomato plants a spot of moisture. I passed one of the tomato clusters and noticed that one of the (still green) tomatoes had a large bite taken out of it. I assumed it was an experimental bite from a squirrel (my nemesis - or one of them), and muttered dark things under my breath and then prepared to move away the strange coiled leaf that seemed to be on top of it. Then I noticed. It wasn't a leaf. It was a HUGE caterpillar! Enormous! Giant and green with spots and even a red horn at one end! There's a moment when you're unexpectedly close to a creature like that where your skin crawls for a bit. Well, mine did for a while [...] Click to continue reading this post

by Clifford at September 17, 2014 04:14 AM

astrobites - astro-ph reader's digest

Our Moon, the Cosmic Ray Detector

Title: Lunar Detection of Ultra-High-Energy Cosmic Rays and Neutrinos
Authors: J.D. Bray et al.
First Author’s Institution: University of Southampton

A great mystery in particle astrophysics today is the production of the so-called ultra-high-energy cosmic rays. In general, cosmic rays are produced in a variety of contexts (see this recent Astrobite for more on that), but astronomers have measured a few to have almost unbelievable energies. The first observation came in 1962 at the Volcano Ranch experiment in New Mexico where Dr. John D. Linsley measured a cosmic ray to have an energy of 10^20 eV or 16 J. Another ultra-high-energy cosmic ray, discovered in October of 1991, was dubbed the “Oh-My-God Particle”, which had an energy of 3 x 10^20 eV (50 J). To put that into context, 50 J is the kinetic energy of a baseball traveling at 60 miles per hour.
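Those unit conversions are easy to check with standard constants:

```python
# Quick check of the energies quoted above, using standard constants only.

eV = 1.602e-19                         # joules per electronvolt
print("1e20 eV =", 1e20 * eV, "J")     # ~16 J, the Volcano Ranch event
print("3e20 eV =", 3e20 * eV, "J")     # ~48 J, the "Oh-My-God" particle

m_baseball = 0.145                     # kg
v = 60 * 0.447                         # 60 mph in metres per second
print("baseball at 60 mph:", round(0.5 * m_baseball * v**2, 1), "J")   # ~52 J
```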

The name ‘cosmic ray‘ is something of a misnomer. The word ‘ray’ makes it sound like it’s some sort of light, like gamma rays. But, that is not the case. Cosmic rays are simply protons that have been accelerated to high energies by some astrophysical mechanism. The mystery of ultra-high-energy cosmic rays lies in that acceleration. Nobody is sure exactly what is accelerating these cosmic rays to such high velocities.

Unfortunately, they are also very difficult to study because they have a relatively low flux here at Earth. The arrival rate of ultra-high-energy cosmic rays is approximately one per square kilometer per century. The Pierre Auger Observatory in Argentina is an array of cosmic ray detectors that spans an area of 3,000 square kilometers (roughly the size of Rhode Island or Luxembourg), but despite its impressive size it still detects only about 15 ultra-high-energy cosmic rays per year. Today’s paper by J.D. Bray et al. explains how to use the Moon as a cosmic ray detector to increase the collection area far beyond that of the Pierre Auger.

When cosmic rays smash into things, a shower of various particles is produced. At Earth, this usually happens in the upper atmosphere, and it is actually those secondary particles present in the shower that are detected by observatories like the Pierre Auger. Those particles are often charged, and their rapid movement through a dense medium, like the atmosphere, causes them to emit a very brief pulse of electromagnetic radiation in the form of radio waves (Figure 1). The authors hope to use these characteristic radio emissions that occur when cosmic rays strike the Moon’s surface to study the cosmic rays.

Figure 1. Left: Radio waves are produced at an angle relative to the direction of the particle shower. The radiation pattern depends on the frequency of the produced radio waves (two examples are shown in blue and green). Right: The red line represents the radio pulse that would be detected at Earth. The pulse is very short, lasting only a few nanoseconds. [Source: Bray et al., Figure 1]

Radio waves cannot travel very far through the Moon’s interior, which means only cosmic rays skimming the Moon’s surface will produce radio pulses that can be detected here on Earth. That notwithstanding, the Moon will still make a truly impressive cosmic ray detector. The authors estimate that it will be equivalent to a 33,000 square kilometer (roughly the size of Maryland or Belgium) cosmic ray detector on Earth. That’s 10 Pierre Auger observatories! With a detector of that size, the authors expect to see up to 165 ultra-high-energy cosmic rays per year.
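The 165-per-year estimate follows from simple area scaling of the quoted Auger rate. The sketch below reproduces that arithmetic; it deliberately ignores complications such as energy thresholds and observing duty cycle:

```python
# Rough event-rate scaling with collecting area, using the figures quoted above.
auger_area_km2    = 3_000
auger_rate_per_yr = 15

moon_equiv_area_km2 = 33_000
moon_rate_per_yr = auger_rate_per_yr * moon_equiv_area_km2 / auger_area_km2
print(moon_rate_per_yr)   # ~165 events per year, matching the authors' estimate
```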

There is still one issue for the team, though: a radio telescope array that is sensitive enough to detect the faint pulses produced by these cosmic rays does not exist yet. Fortunately, astronomers are about to build the highly anticipated Square Kilometer Array in South Africa and Australia, which will provide the sensitivity necessary to put this lunar technique to use. Once the array is complete, we may finally learn something about the origins of these ultra-energetic cosmic rays.

by Justin Vasel at September 17, 2014 01:10 AM

September 16, 2014

astrobites - astro-ph reader's digest

Apply to Write for Astrobites

Astrobites is seeking new graduate students to join the Astrobites collaboration.

Please share the information below with your graduate student colleagues. Applicants must be current graduate students. The deadline for applications is 15 October. Please email write4astrobites@gmail.com if you have any questions.

Application Details

The application consists of a sample Astrobite and two short essays. The application can be found and submitted at http://astrobites.org/apply-to-write-for-astrobites/

Goal: Your sample astrobite should discuss the motivation, methods, results, and conclusions of a published paper that has not been featured on Astrobites. Please do not summarize a paper of which you are an author, as this could give you an unfair advantage over applicants who are not involved with the papers they cover. If there are any concerns or ambiguities regarding this point, do not hesitate to seek guidance: write4astrobites@gmail.com.

Age of paper: We suggest you choose a paper that is at least 3 months old. Astrobites articles published during the author selection process will focus on newer papers, so you do not need to worry that your chosen article will be covered on astrobites.

Style: Please write at a level appropriate for undergraduate physics or astronomy majors and remember to explain jargon. We encourage you to provide links to previous astrobites or other science websites where appropriate. Links may either be provided as hyperlinks or as parenthetical citations. We suggest you read a few Astrobites posts to get a sense for how posts are typically written. You might use them as a guide for your sample post.

Figures: Your sample post should include at least one figure from the paper with an appropriate caption (not just the original caption). Figures may either be embedded in the text or placed at the end of the sample, and need to include appropriate citations to their source as well.

Length: Please keep your submission under 1,000 words, including the figure caption(s). As we have received numerous applications in previous cycles, we unfortunately do not expect to be able to read beyond this limit. Importantly, typical astrobites run between 500 and 800 words, so successful applications will demonstrate the ability to explain their chosen papers concisely. “Brevity is the soul of wit.”

Dates and Decision Process

The deadline for applications is 15 October. The Astrobites hiring committee will then review the applications and invite new authors to join based on the quality of their sample Astrobite and two short essays as well as on our needs for number of new authors. All applications will be reviewed anonymously in the interests of fairness. Applicants can expect to be notified by the end of November. If you have questions about the application process or responsibilities of Astrobites authors, don’t hesitate to get in touch at write4astrobites@gmail.com.

 

by Astrobites at September 16, 2014 06:01 PM

Quantum Diaries

Summer intern studies physics for self, family

This article appeared in Fermilab Today on Sept. 16, 2014.

Summer intern Sheri Lopez, here with son Dominic, pursues her love of physics as a student at the University of New Mexico-Los Alamos. She spent this summer at Fermilab as a summer intern. Photo courtesy of Sheri Lopez

Dominic is two. He is obsessed with “Despicable Me” and choo-choos. His mom Sheri Lopez is 29, obsessed with physics, and always wanted to be an astronaut.

But while Dominic’s future is full of possibilities, his mom’s options are narrower. Lopez is a single mother and a sophomore at the University of New Mexico-Los Alamos, where she is double majoring in physics and mechanical engineering. Her future is focused on providing for her son, and that plan recently included 10 weeks spent at Fermilab for a Summer Undergraduate Laboratories Internship (SULI).

“Being at Fermilab was beautiful, and it really made me realize how much I love physics,” Lopez said. “On the other end of the spectrum, it made me realize that I have to think of my future in a tangible way.”

Instead of being an astronaut, now she plans on building the next generation of particle detectors. Lopez is reaching that goal by coupling her love of physics with practical trade skills such as coding, which she picked up at Fermilab as part of her research developing new ways to visualize data for the MINERvA neutrino experiment.

“The main goal of it was to try to make the data that the MINERvA project was getting a lot easier to read and more presentable for a web-based format,” Lopez said. Interactive, user-friendly data may be one way to generate interest in particle physics from a more diverse audience. Lopez had no previous coding experience but quickly realized at Fermilab that it would allow her to make a bigger difference in the field.

Dominic, meanwhile, spent the summer with his grandparents in New Mexico. That was hard, Lopez said, but she received a lot of support from Internship Program Administrator Tanja Waltrip.

“I was determined to not let her miss this opportunity, which she worked so hard to acquire,” Waltrip said. Waltrip coordinates support services for interns like Lopez in 11 different programs hosted by Fermilab.

Less than 10 percent of applicants were accepted into Fermilab’s summer program. SULI is funded by the U.S. Department of Energy, so many national labs host these internships, and applicants choose which labs to apply to.

“There was never a moment when anyone doubted or said I couldn’t do it,” Lopez said. Dominic doesn’t understand why his mom was gone this summer, but he made sure to give her the longest hug of her life when she came back. For her part, Lopez was happy to bring back a brighter future for her son.

Troy Rummler

by Fermilab at September 16, 2014 04:21 PM

Symmetrybreaking - Fermilab/SLAC

Science gets social

If you like your science with a cup of coffee, a pint of beer or a raucous crowd, these events may be for you.

With an explosion of informal science events popping up around the world, it’s easier than ever to find ways to connect with scientists and fellow science enthusiasts.

Can’t find an event near you? Start your own! There are plenty of ways to reach out to fellow organizers for support.

 

Science Slam

At a Science Slam, performers compete for the affection of an audience—usually registered by clap-o-meter—by giving their best short, simple explanations of their research.

The first Science Slam took place in 2004 at a festival in Darmstadt, Germany, home of the GSI Centre for Heavy Ion Research and mission control for the European Space Agency. Alex Deppert, a city employee and poet with a PhD related to science communication, adapted the idea from the competitive Poetry Slams that started in Chicago in the 1980s. Science Slams now take place across the globe.

 

Science Festivals

Festivals offer a variety of activities for adults, from live tapings of “You’re the Expert,” a podcast in which comedians attempt to guess the obscure specialty of a scientist guest; to science pub crawls; to after-hours events at local museums; to the Bad Ad Hoc Hypothesis Festival—an event created by cartoonist Zach Weiner of the online comic Saturday Morning Breakfast Cereal at which scientists attempt to sincerely explain and defend fundamentally ridiculous theories before a panel of judges.

The modern science festival began in the late 1980s in Edinburgh, Scotland, and Cambridge, England. It spread across Europe and Asia and, in 2007, arrived at a different Cambridge, the home of MIT and Harvard University.

In 2009, the handful of US-based festival organizers formed the Science Festival Alliance. According to an annual report, in 2013 almost 300 events associated with the Science Festival Alliance drew more than 1000 visitors. About 30 of them drew more than 10,000 visitors each.

 

Nerd Nite

At Nerd Nite, a few people give short talks on their research or other topics of geeky interest in front of a potentially boozy crowd.

The first Nerd Nite took place in 2004 at The Midway Café bar in the Jamaica Plain neighborhood of Boston. Regular patron Christopher Balakrishnan, then a PhD candidate in evolutionary biology at Boston University, often found himself there telling tales from his three-month fieldwork stints in Africa. The bartenders suggested that he call his friends together and put on a slideshow.

Balakrishnan took the concept to the next level, inviting three other BU grad students to join him in explaining their own areas of research. The event drew enough of a crowd that, for the next couple of years, he continued to find researchers and organize talks. He eventually convinced his friend Matt Wasowski, who ran a series of trivia nights in New York, to try it out, too. The two have since helped spread Nerd Nite to more than 75 cities around the world.

 

Science Café

The Science Café is the salon of the informal science-learning world. For the price of a cup of coffee or a glass of wine, Science Café participants receive a short talk on science or technology, and then the floor opens for discussion and debate.

The Science Café is an offshoot of the Café Scientifique, created in 1998 in Leeds, England, by British television producer Duncan Dallas. The Café Scientifique, in turn, is a spinoff of the Café Philosophique, a philosophy-themed café that began in France in 1992.

In 2006, the producers of the public television science program NOVA gathered under one umbrella the few dozen Science Cafés that had popped up in the United States and began to offer resources to organizers, speakers and attendees through the site www.sciencecafes.org.

Today Science Cafés exist in at least 49 US states and 15 countries, operating under names such as Science on Tap, Science Pub, Ask a Scientist and Café Sci.

 

Science Tourism

You can also take science learning on the road—or out to sea—with science tourism companies such as Science Getaways, started in 2011 by astronomer Phil Plait and his wife, Marcella Setter, or Insight Cruises, which since 2008 have taken experts on board to offer lectures, discussions and tours.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at September 16, 2014 03:22 PM

ZapperZ - Physics and Physicists

"What keeps girls from studying physics and STEM" - An Important Article That Did Not Answer Its Own Question
Anyone following this blog for any considerable period of time would have seen my keen involvement in trying to engage more girls and women in physics. So this is a subject that I've followed and participated in for many years.

So when I came across this opinion piece, I read it in its entirety, because even though it is a first-hand account of one person's experience (the author is a female physicist), it is still another "data point" in trying to figure out what kinds of hurdles a female student like her faced during her academic years.

Unfortunately, after reading the article, I am no closer to understanding the unique challenges that a female student faces, or what a female scientist faces, in the field of physics. She describes what can be done to improve education and open opportunities, but these are NOT specific to female students!

My advanced placement (AP) physics class, unfortunately, was about memorizing equations and applying them to specific contrived examples. I did not perform well on the midterm exam. The teacher advised me to drop the course, along with all the other girls in the class. 

This would be a turn-off for male students as well! So if that is the case, why are overwhelmingly more female students leaving the subject? She didn't say.

I stayed despite the teacher’s pressure, as the only girl in the class, and did well in the long run. I learned to love physics again in college, conducting original research with inspiring science professors who valued my presence in the scientific community. Physics professor Mary James at Reed College helped a lot by creating an active learning environment in her courses and teaching me that physics also needs “B” students.

Again, any student of any gender would benefit from that. This is not unique only to female students. So it still does not address the imbalance.

But there is so much more work to do. One key factor is federal funding for research. Federal funding is the main source of support for the kind of high-risk, high-reward investigations that sparked innovations such as the Internet, the MRI and GPS.

U.S. Sen. Patty Murray, D-Wash., serves on the U.S. Senate Appropriations Committee and understands the connection. In her recently released report “Opportunity Outlook: A Path For Tackling All Our Deficits Responsibly” she states, “By supporting early stage basic research that the private sector might not otherwise undertake, federal investment in R&D [research and development] has played a critical role in encouraging innovation across a swath of industries.” 

Again, this doesn't address the lack of women in physics. Increasing opportunities and funding merely increases the overall number of people in the field, but will probably not change the percentage of women in this area. There's nothing here that reveals the unique and unforeseen hurdles that only women face and that keep their participation down.

In the end, she simply argued for more funding to increase opportunities for people in physics. There's nothing here whatsoever that addresses the issue of why there are very few women, both in absolute numbers and in relative percentage, in physics. I think there are other, better articles and research that have addressed this issue.

Zz.

by ZapperZ (noreply@blogger.com) at September 16, 2014 12:42 PM

Tommaso Dorigo - Scientificblogging

ATLAS Higgs Challenge Results
After four months of frenzy by over 1500 teams, the very successful Higgs Challenge launched by the ATLAS collaboration ended yesterday, and the "private leaderboard" with the final standings has been revealed. You can see the top 20 scorers below.


read more

by Tommaso Dorigo at September 16, 2014 09:58 AM

John Baez - Azimuth

Exploring Climate Data (Part 2)

guest post by Blake Pollard

I have been learning to make animations using R. This is an animation of the profile of the surface air temperature at the equator. So, the x axis here is the longitude, approximately from 120° E to 280° E. I pulled the data from the region that Graham Jones specified in his code on github: it’s the equatorial line in the region that Ludescher et al. used:

For this animation I tried to show the 1997-1998 El Niño. Typically the Pacific is much cooler near South America, due to the upwelling of deep cold water:

That part of the Pacific gets even cooler during La Niña:

But it warms up during El Niños:

You can see that in the surface air temperature during the 1997-1998 El Niño, although by summer of 1998 things seem to be getting back to normal:

I want to practice making animations like this. I could make a much prettier and better-labelled animation that ran all the way from 1948 to today, but I wanted to think a little about what exactly is best to plot if we want to use it as an aid to understanding some of this El Niño business.
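Blake works in R, but the same idea is easy to prototype in other tools. The sketch below is a rough Python/matplotlib analogue with synthetic numbers standing in for the real surface air temperature data, included only to show the animation mechanics:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Synthetic stand-in for equatorial surface air temperature, longitude 120E-280E.
lon = np.linspace(120, 280, 81)
months = 24

def temperature(frame):
    # Warm pool in the west, cool tongue in the east, plus a slow fake oscillation.
    warming = 2.0 * np.sin(2 * np.pi * frame / months)
    return 29 - (4 - warming) * (lon - 120) / 160

fig, ax = plt.subplots()
line, = ax.plot(lon, temperature(0))
ax.set_xlabel("longitude (degrees E)")
ax.set_ylabel("surface air temperature (C)")
ax.set_ylim(22, 32)

def update(frame):
    line.set_ydata(temperature(frame))
    ax.set_title(f"month {frame}")
    return line,

anim = FuncAnimation(fig, update, frames=months, interval=200)
anim.save("equator_profile.gif", writer="pillow")   # requires the pillow package
```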


by John Baez at September 16, 2014 02:40 AM

September 15, 2014

astrobites - astro-ph reader's digest

Rethinking the Planet Formation Deadline

Stars start their lives surrounded by a disk of gas and dust. After several million years, the gas and most of this dust will dissipate, marking the end of the opportunity for gas giant planets to form. This timescale places strong constraints on our theories for how gas giants must form. The authors of today’s paper take a closer look at how the observations of disks around young stars have been carried out, and they find that many stars may actually retain their disks for far longer than previously thought.

Stars are not born on their own; rather, they are born in clusters. The typical lifetime of protoplanetary disks is measured by first looking for disks around the stars in a cluster, and then comparing the fraction of stars with disks among clusters of different ages. Such results are presented in Figure 1, which shows that 50% of stars lose their disks after 2 or 3 million years and 90% have lost their disks by the time they are 6 million years old. The authors of today’s paper critique these results by identifying several selection effects that plague cluster observations.

The standard picture showing how quickly the fraction of stars with protoplanetary disks decreases with age. The lines show linear and exponential fits to the data.
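The exponential fit in that figure is easy to reproduce in spirit. The sketch below fits an exponential disk-fraction decay to made-up numbers that roughly follow the trend described above (about 50% at 2-3 Myr and 10% at 6 Myr); it uses neither the paper's data nor its fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical disk fractions vs. cluster age (Myr), invented for illustration.
age  = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
frac = np.array([0.85, 0.70, 0.55, 0.45, 0.30, 0.18, 0.10])

def exp_decay(t, tau):
    return np.exp(-t / tau)

(tau,), _ = curve_fit(exp_decay, age, frac, p0=[3.0])
print(f"best-fit disk e-folding time: {tau:.1f} Myr")
```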

The core problem is that, shortly after they form, clusters begin to expand and dissolve as the stars near their edges are expelled. By the time clusters reach an age of 10 Myr, only 10-20% of their original stars remain. Clusters quickly expand beyond the field of view of most telescopes, so observations of older clusters are dominated by the stars that started in the cluster’s dense central region. The disks around stars in the central region are easily disrupted by gravitational perturbations from nearby stars as well as photoevaporated by the light from nearby bright massive stars. (For more on how cluster environments affect protoplanetary disks, see these astrobites.) In older clusters, the bias towards sampling these central stars may severely under-predict the disk fraction, resulting in an inferred disk dispersal timescale that is too short.

Additionally, star clusters vary considerably in their total mass. Because clusters expand and become more diffuse, low-mass clusters cannot even be detected after a certain age, so observations of older clusters are biased towards massive clusters. This is evident in Figure 1, where massive clusters (>1,000 solar masses) are shown as filled circles and less massive clusters as open circles. The disk dispersal process may vary with cluster mass. Basing the disk fraction of older stars only on those in massive clusters is not ideal, especially considering that the majority of all stars may actually form in less massive clusters.

The authors attempt to correct some of these effects, and in Figure 2 they present an updated version of Figure 1. They added measurements of stars from the outer parts of clusters, and they added disk fractions for sparser clusters. These adjustments increase the disk fraction for older stars and lengthen the average disk lifetime. However, these corrections are not easy to make. The ages of sparse clusters are difficult to determine, and including stars at the outer reaches of clusters runs the risk of including stars that are not actually members of the clusters at all. These interlopers are likely much older and disk-free, so including them would artificially lower the measured disk fraction. For these reasons, the authors note that their updated values are still lower-limits on the true disk fractions.

In the end, the authors estimate that between a third and a half of stars may still host disks when they turn 10 Myr old. Perhaps we need to extend the deadlines we’ve placed on planet formation.

An updated version of Figure 1, now showing a longer protoplanetary disk lifetime (red line) when sparse clusters and stars at the edges of clusters are included. The authors note that this is still an underestimate of the disk fractions and of the average disk lifetime.

by Nick Ballering at September 15, 2014 05:16 PM

Symmetrybreaking - Fermilab/SLAC

Sci-fi writers, scientists imagine the future

A new project pairs science fiction authors with scientists to envision worlds that are both inspiring and achievable.

A few years ago, structural engineering professor Keith Hjelmstad received an unusual phone call. On the line was Neal Stephenson, author of futuristic thrillers such as Snow Crash and Cryptonomicon. He wanted to know whether it was possible to build a tower 20 kilometers high.

“Actually, I think he low-balled it at the time and said 15 kilometers,” Hjelmstad says. That’s still more than 15 times the height of the world’s tallest building—the Burj Khalifa in Dubai—and about 5 kilometers higher than the cruising altitude of a commercial aircraft.

Stephenson was working on a short story with a goal. He wanted to describe something fantastic—and feasible—for humans to consider striving for in the future.

Michael Crow, president of Arizona State University, had recently criticized the bleak, dystopian futures presented in much of today’s science fiction—arguably not a great source of inspiration for today’s scientists and engineers.

In response, Stephenson challenged himself to write a story that was both optimistic and realistic. And then he challenged other authors to work with experts to do the same.

With the help of Kathryn Cramer, editor of Year’s Best Science Fiction for the last decade, and Ed Finn, founding director of Arizona State University’s Center for Science and the Imagination, the challenge turned into Project Hieroglyph.

As Cramer says, “What we’re trying to do is envision a survivable future where the human race makes it.”

Cramer and Finn recruited authors including Elizabeth Bear, author of the Eternal Sky fantasy trilogy; Cory Doctorow, co-editor of the site BoingBoing and author of young adult novel Little Brother; Karl Schroeder, author of hard science fiction young adult novel Lockstep; and Bruce Sterling, author of cyberpunk novel Schismatrix. They wrote the short stories collected in the anthology Hieroglyph: Stories & Visions for a Better Future, released this month. The project also lives online as a series of ongoing discussions.

Stephenson, a self-professed “failed physics major,” says the idea for his tall tower story came from a 2003 proposal by Geoffrey Landis of NASA’s John Glenn Research Center and Vincent Denis of the International Space University in France. They argued that the lower atmospheric drag on objects launched from such a height would allow them to carry heavier loads into space.
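The payoff of launching from 20 kilometers up can be estimated with a simple isothermal-atmosphere model. The sketch below is a back-of-the-envelope illustration using a textbook scale height of about 8 km; it is not a calculation from the Landis and Denis proposal:

```python
import math

scale_height_km   = 8.0     # approximate atmospheric scale height
sea_level_density = 1.225   # kg/m^3

def density(altitude_km):
    # Isothermal (exponential) atmosphere: rho(h) = rho0 * exp(-h / H)
    return sea_level_density * math.exp(-altitude_km / scale_height_km)

for h in (0, 10, 20):
    print(f"{h:2d} km: {density(h):.3f} kg/m^3 ({density(h)/sea_level_density:.1%} of sea level)")
```

At 20 km the air density is down to roughly 8 percent of its sea-level value, which is the sense in which drag on a launch from the top of such a tower would be much reduced.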

As Stephenson learned from Hjelmstad, one of the biggest challenges in constructing such a tower would be dealing with wind—in the worst case, a blast from the jet stream. Hjelmstad came up with a way to deal with that by designing part of the tower to act as a sail, which could take advantage of the wind instead of fighting against it.

Hjelmstad says that the tall tower was the most ambitious engineering idea he’d ever been asked to consider, even in school. “I live in a fairly narrow world governed by codes and specifications and lawyers,” he says. “It was refreshing to think about a problem that was completely outside that.”

Hjelmstad says the project also made him think about how to inspire his own students, who continue to email him with new ideas for the tower even though the science fiction anthology is already published.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at September 15, 2014 03:50 PM

arXiv blog

How Network Theory Is Revealing Previously Unknown Patterns in Sport

Analysing the network of passes between soccer players reveals that one of the world’s most successful teams plays an entirely different type of football to every other soccer team on the planet.


If you’ve ever watched soccer, you’ll know of the subtle differences in tactics and formation between different teams. There is the long ball game, the pressing game, the zone defence and so on. Many teams have particular styles of play that fans admire and hate.

September 15, 2014 02:30 PM

Matt Strassler - Of Particular Significance

Why did so few people see Auroras on Friday night?

Why did so few people see auroras on Friday night, after all the media hype? You can see one of two reasons in the data. As I explained in my last post, you can read what happened in the data shown in the Satellite Environment Plot from this website (warning — they’re going to make a new version of the website soon, so you might have to modify this info a bit). Here’s what the plot looked like Sunday morning.

What the “Satellite Environment Plot” on swpc.noaa.gov looked like on Sunday. Friday is at left. Time shown is “Universal” time (UTC); New York time is 4 hours behind at this time of year. There were two storms, shown as the red bars in the Kp index chart (fourth line); one occurred very early Friday morning and one later on Friday. You can see the start of the second storm in the “GOES Hp” chart (third line), where the magnetic field goes wild very suddenly. The storm was subsiding by midnight Universal time, so it was mostly over by midnight New York time.

What the figure shows is that after a first geomagnetic storm very early Friday, a strong geomagnetic storm started (as shown by the sharp jump in the GOES Hp chart) later on Friday, a little after noon New York time ["UTC" is currently New York + 4/5 hours], and that it was short — mostly over before midnight. Those of you out west never had a chance; it was all over before the sun set. Only people in far western Europe had good timing. Whatever the media was saying about later Friday night and Saturday night was somewhere between uninformed and out of date.  Your best bet was to be looking at this chart, which would have shown you that (despite predictions, which for auroras are always quite uncertain) there was nothing going on after Friday midnight New York time.

But the second reason is something that the figure doesn’t show. Even though this was a strong geomagnetic storm (the Kp index reached 7, the strongest in quite some time), the auroras didn’t migrate particularly far south. They were seen in the northern skies of Maine, Vermont and New Hampshire, but not (as far as I know) in Massachusetts. Certainly I didn’t see them. That just goes to show you (AccuWeather, and other media, are you listening?) that predicting the precise timing and extent of auroras is educated guesswork, and will remain so until current knowledge, methods and information are enhanced. One simply can’t know for sure how far south the auroras will extend, even if the impact on the geomagnetic field is strong.

For those who did see the auroras on Friday night, it was quite a sight. And for the rest of us who didn’t see them this time, there’s no reason for us to give up. Solar maximum is not over, and even though this is a rather weak sunspot cycle, the chances for more auroras over the next year or so are still pretty good.

Finally, a lesson for those who went out and stared at the sky for hours after the storm was long over — get your scientific information from the source!  There’s no need, in the modern world, to rely on out-of-date media reports.


Filed under: Astronomy, Science and Modern Society Tagged: auroras, press

by Matt Strassler at September 15, 2014 01:38 PM

September 13, 2014

Tommaso Dorigo - Scientificblogging

Life After The 125 GeV Higgs: What Is Left Of Two-Higgs Doublet Models
I just read with interest the new paper on the arxiv by my INFN-Padova colleague Massimo Passera and collaborators, titled "Limiting Two-Higgs Doublet Models", and I thought I would explain to you here why I consider it very interesting and what are its conclusions.

read more

by Tommaso Dorigo at September 13, 2014 03:16 PM

ZapperZ - Physics and Physicists

When Stephen Hawking Burps, The World Media Goes Crazy!
Yes, I categorize this as a burp, which reveals how uninteresting and how little importance I put on this piece of news that has somehow garnered such widespread attention.

Whenever the name Stephen Hawking and the phrase "destruction of our universe" appear in the same sentence, that is just an incendiary combination that usually causes a world-wide explosion (pun intended). That's what happened when Hawking said that the Higgs boson that was discovered a couple of years ago at the LHC will result in the destruction of our universe.

My first reaction when I read this was: YAWN!

But of course, the public and the popular media ran away with it. After all, what more eye-catching headline can one make than something like "Higgs boson destroys the universe - Hawking"? However, I think the strangelets from LHC collisions that were going to form micro black holes and swallow our universe were here first, and they demand to be the first to destroy it.

There is an opinion piece on the CNN webpage that addresses this issue. When CNN has to invite someone to write an opinion piece about a physics news story, you know it has gotten way too much attention!

So, the simplified argument goes something like this -- the Higgs particle pervades space roughly uniformly, with a relatively high mass -- about 126 times that of the proton (a basic building block of atoms). Theoretical physicists noted even before the Higgs discovery that its relatively high mass would mean lower energy states exist. Just as gravity makes a ball roll downhill, to the lowest point, so the universe (or any system) tends toward its lowest energy state. If the present universe could one day transition to that lower energy state, then it is unstable now, and the transition to a new state would destroy all the particles that exist today.

This would happen spontaneously at one point in space and time and then expand throughout the universe at the speed of light. There would be no warning, because the fastest any warning signal could travel is the speed of light, so the disaster and the warning would arrive at the same time.

That was the pedestrian description of what Hawking is talking about. But don't just stop there or you'll miss the CONTEXT of the probability of this happening.
 
Back to the universe. Whether the existence of the Higgs boson means we're doomed depends on the mass of another fundamental particle, the top quark. It's the combination of the Higgs and top quark masses that determines whether our universe is stable.

Experiments like those at the Large Hadron Collider allow us to measure these masses. But you don't need to hold your breath waiting for the answer. The good news is that such an event is very unlikely and should not occur until the universe is many times its present age.
[...]
So don't lose any sleep over possible danger from the Higgs boson, even if the most famous physicist in the world likes to speculate about it. You're far more likely to be hit by lightning than taken out by the Higgs boson.

 See what I mean when I said that I yawned when I first read about Hawking's speculation?

Zz.

by ZapperZ (noreply@blogger.com) at September 13, 2014 01:28 PM

September 12, 2014

Matt Strassler - Of Particular Significance

Auroras — Quantum Physics in the Sky — Tonight?

Maybe. If we collectively, and you personally, are lucky, then maybe you might see auroras — quantum physics in the sky — tonight.

Before I tell you about the science, I’m going to tell you where to get accurate information, and where not to get it; and then I’m going to give you a rough idea of what auroras are. It will be rough because it’s complicated and it would take more time than I have today, and it also will be rough because auroras are still only partly understood.

Bad Information

First though — as usual, do NOT get your information from the mainstream media, or even the media that ought to be scientifically literate but isn’t. I’ve seen a ton of misinformation already about timing, location, and where to look. For instance, here’s a map from AccuWeather, telling you who is likely to be able to see the auroras.

Don’t believe this map by AccuWeather. Oh, sure, they know something about clouds. But auroras, not much.

See that line below which it says “not visible”? This implies that there’s a nice sharp geographical line between those who can’t possibly see it and those who will definitely see it if the sky is clear. Nothing could be further from the truth. No one knows where that line will lie tonight, and besides, it won’t be a nice smooth curve. There could be auroras visible in New Mexico, and none in Maine… not because it’s cloudy, but because the start time of the aurora can’t be predicted, and because its strength and location will change over time. If you’re north of that line, you may see nothing, and if you’re south of it you still might see something. (Accuweather also says that you’ll see it first in the northeast and then in the midwest. Not necessarily. It may become visible across the U.S. all at the same time. Or it may be seen out west but not in the east, or vice versa.)

Auroras aren’t like solar or lunar eclipses, absolutely predictable as to when they’ll happen and who can see them. They aren’t even like comets, which behave unpredictably but at least have predictable orbits. (Remember Comet ISON? It arrived exactly when expected, but evaporated and disintegrated under the Sun’s intense stare.) Auroras are more like weather — and predictions of auroras are more like predictions of rain, only in some ways worse. An aurora is a dynamic, ever-changing phenomenon, and to predict where and when it can be seen is not much more than educated guesswork. No prediction of an aurora sighting is EVER a guarantee. Nor is the absence of an aurora prediction a guarantee one can’t be seen; occasionally they appear unexpectedly.  That said, the best chance of seeing one further away from the poles than usual is a couple of days after a major solar flare — and we had one a couple of days ago.

Good Information and How to Use it

If you want accurate information about auroras, you want to get it from the Space Weather Prediction Center, click here for their main webpage. Look at the colorful graph on the lower left of that webpage, the “Satellite Environment Plot”. Here’s an example of that plot taken from earlier today:

The “Satellite Environment Plot” from earlier today; focus your attention on the two lower charts, the one with the red and blue wiggly lines (GOES Hp) and on the one with the bars (Kp Index). How to use them is explained in the text.

There’s a LOT of data on that plot, but for lack of time let me cut to the chase. The most important information is on the bottom two charts.

The bottom row, the “Estimated Kp index”, tells you, roughly, how much “geomagnetic activity” there is (i.e., how disturbed is the earth’s magnetic field). If the most recent bars are red, then the activity index is 5 or above, and there’s a decent chance of auroras. The higher the index, the more likely are auroras and the further away from the earth’s poles they will be seen. That is, if you live in the northern hemisphere, the larger is the Kp index, the further south the auroras are likely to be visible. [If it's more than 5, you've got a good shot well down into the bulk of the United States.]

The only problem with the Kp index is that it is a 3-hour average, so it may not go red until the auroras have already been going for a couple of hours! So that’s why the row above it, “GOES Hp”, is important and helpful. This plot gives you much more up-to-date information about what the magnetic field of the earth is up to. Notice, in the plot above, that the magnetic field goes crazy (i.e. the lines get all wiggly) just around the time that the Kp index starts to be yellow or starts to be red.

Therefore, keep an eye on the GOES Hp chart. If you see it start to go crazy sometime in the next 48 hours, that’s a strong indication that the blast of electrically-charged particles from the Sun, thrown out in that recent solar flare, has arrived at the Earth, and auroras are potentially imminent.  It won’t tell you how strong they are though.  Still, this is your signal, if skies near you are dark and sufficiently clear, to go out and look for auroras. If you don’t see them, try again later; they’re changeable. If you don’t see them over the coming hour or so, keep an eye on the Kp index chart. If you’re in the mid-to-northern part of the U.S. and you see that index jump higher than 5, there’s a significant geomagnetic storm going on, so keep trying. And if you see it reach 8 or so, definitely try even if you’re living quite far south.
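If you want to automate that vigil, the thresholds described above reduce to a few lines of code. The helper below is purely hypothetical (it is not something provided by the Space Weather Prediction Center), and you would still have to supply the current Kp value yourself, read off the plot above:

```python
def aurora_advice(kp):
    """Rough viewing advice based on the Kp thresholds described in the text."""
    if kp >= 8:
        return "Major storm: worth a look even from quite far south."
    if kp >= 5:
        return "Geomagnetic storm: decent chance across the mid-to-northern U.S."
    return "Quiet: auroras unlikely far from the poles."

for kp in (3, 5, 7, 8):
    print(kp, aurora_advice(kp))
```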

Of course, don’t forget Twitter and other real-time feeds.  These can tell you whether and where people are seeing auroras. Keeping an eye on Twitter and hashtags like #aurora, #auroras, #northernlights is probably a good idea.

One more thing before getting into the science. We call these things the “northern lights” in the northern hemisphere, but clearly, since they can be seen in different places, they’re not always or necessarily north of any particular place. Looking north is a good idea — most of us who can see these things tonight or tomorrow night will probably be south of them — but the auroras can be overhead or even south of you. So don’t immediately give up if your northern sky is blocked by clouds or trees. Look around the sky.

Auroras: Quantum Physics in the Sky

Now, what are you seeing if you are lucky enough to see an aurora? Most likely what you’ll see is green, though red, blue and purple are common (and sometimes combinations which give other colors, but these are the basic ones.)  Why?

The typical sequence of events preceding a bright aurora is this:

  1. A sunspot — an area of intense magnetic activity on the Sun, where the sun’s apparent surface looks dark — becomes unstable and suffers an explosion of sorts, a solar flare.
  2. Associated with the solar flare may be a “coronal mass ejection” — the expulsion of huge numbers of charged (and neutral) particles out into space. These charged particles include both electrons and ions (i.e. atoms which have lost one or more electrons). (Coronal mass ejections, which are not well understood, can occur in other ways, but the strongest are from big flares.)
  3. These charged particles travel at high speeds (much faster than any current human spaceship, but much slower than the speed of light) across space. If the sunspot that flared happens to be facing Earth, then some of those particles will arrive at Earth after as little as a day and as much as three days. Powerful flares typically make faster particles which therefore arrive sooner.
  4. When these charged particles arrive near Earth, it may happen (depending on what the Sun’s magnetic field and the Earth’s magnetic field and the magnetic fields near the particles are all doing) that many of the particles may spiral down the Earth’s magnetic field, which draws them to the Earth’s north and south magnetic poles (which lie close to the Earth’s north and south geographic poles.)
  5. When these high-energy particles (electrons and ions) rain down onto the Earth, they typically will hit atoms in the Earth’s upper atmosphere, 40 to 200 miles up. The ensuing collisions kick electrons in the struck atoms into “orbits” that they don’t normally occupy, as though they were suddenly moved from an inner superhighway ring road around a city to an outer ring road highway. We call these outer orbits “excited orbits”, and an atom of this type an “excited atom”.
  6. Eventually the electrons fall from these “excited orbits” back down to their usual orbits. This is often referred to as a “quantum transition” or, colloquially, a “quantum jump”, as the electron is never really found between the starting outer orbit and the final inner one; it almost instantaneously transfers from one to the other.
  7. In doing so, the jumping electron will emit a particle of electromagnetic radiation, called a “photon”. The energy of that photon, thanks to the wonderful properties of quantum mechanics, is always the same for any particular quantum transition.
  8. Visible light is a form of electromagnetic radiation, and photons of visible light are, by definition, ones that our eyes can see. The reason we can see auroras is that for particular quantum transitions of oxygen and nitrogen, the photons emitted are indeed those of visible light. Moreover, because the energy for each photon from a given transition is always the same, the color of the light that our eyes see, for that particular transition, is always the same. There is a transition in oxygen that always gives green light; that’s why auroras are often green. There is a more fragile transition that always gives red light; powerful auroras, which can excite oxygen atoms even higher in the atmosphere, where they are more diffuse and less likely to hit something before they emit light, can give red auroras. Similarly, nitrogen molecules have a transition that can give blue light. (Other transitions give light that our eyes can’t see.)  Combinations of these can give yellows, pinks, purples, whites, etc. But the basic colors are typically green and red, occasionally blue, etc.

So if you are lucky enough to see an aurora tonight or tomorrow night, consider what you are seeing.  Huge energies involving magnetic fields on the Sun have blown particles — the same particles that are of particular significance to this website — into space.  Particle physics and atomic physics at the top of the atmosphere lead to the emission of light many miles above the Earth.  And the remarkable surprises of quantum mechanics make that light not a bland grey, with all possible colors blended crudely together, but instead a magical display of specific and gorgeous hues, reminding us that the world is far more subtle than our daily lives would lead us to believe.
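For the quantitatively inclined, the link between a transition and a color is just E = hc/λ. The short sketch below works out the photon energies for the two best-known auroral oxygen lines; the wavelengths used (557.7 nm for the green line, 630.0 nm for the red) are standard reference values rather than numbers taken from this post:

```python
# Photon energy for the two best-known auroral oxygen lines.
h  = 6.62607015e-34     # Planck constant, J s
c  = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

for name, wavelength_nm in [("oxygen green", 557.7), ("oxygen red", 630.0)]:
    E = h * c / (wavelength_nm * 1e-9)
    print(f"{name}: {wavelength_nm} nm -> {E/eV:.2f} eV per photon")
```

Every photon from a given transition carries the same energy, which is why the colors are so pure.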


Filed under: Astronomy, Particle Physics Tagged: astronomy, atoms, auroras, press

by Matt Strassler at September 12, 2014 03:33 PM

ZapperZ - Physics and Physicists

The New Physical Review Journals Website - It Sucks!
Yeah, so from the title, you can already tell how I feel about it.

I look at the Physical Review Journals webpage quite often, at least a few times a week. After all, PRL is a journal that I scan pretty often, and I'm sure most physicists do as well. They changed the look and feel of the webpage several months ago, and right off the bat, there were a few annoying things.

First of all, one used to be able to see immediately the current list of new papers appearing that week (for PRL, for example). Now, you need to click through a few links to find it.

The page places heavy emphasis on "highlighted" papers, as if they are desperately trying to push to everyone how important these are. I don't mind reading them, but I'd like to see the entire listing of papers for that week first and foremost. This somehow has been pushed back.

Lastly, and this is what annoys me the most, it does not seem to be optimized for tablet viewing, at least not for me. I often read these journals on my iPad. I have an iPad 3, and I use the Safari browser that came with it. I had no problem with the old webpage, or with other journals' webpages. But the new Phys. Rev. webpage is downright annoying! Part of the table of contents "floats" with the page as one scrolls down! I've uploaded a video of what I'm seeing so that you can see it for yourself.

I've e-mailed my complaints to the Feedback link. I had given it a few months in case this was a glitch or if they were still trying to sort out the kinks. But this seems to have persisted. I can't believe I'm the only one having this problem.

It is too bad. They had a nice, simple design before, and I could find things very quickly. Now, in trying to make it more sophisticated and slick, they've ruined the usability for those of us who care more about getting the information than about the bells and whistles.

Zz.

by ZapperZ (noreply@blogger.com) at September 12, 2014 01:47 PM

Symmetrybreaking - Fermilab/SLAC

Astrophysics at the edge of the Earth

Conducting research at the South Pole takes a unique level of commitment.

The sun sets but once a year at the South Pole, and it is a prolonged process. During a recent stay at the Amundsen-Scott South Pole Station, postdoctoral researcher Jason Gallicchio saw it hover along the horizon for about a week before dropping out of sight for six months. The station’s chefs prepared an eight-course meal to mark the occasion.

The South Pole is not the easiest place to mark the passage of time, but the spectacular view it offers of the night sky has made it one of the best places to study astrophysics—and a unique place to work.

Gallicchio, an associate fellow at the Kavli Institute for Cosmological Physics at the University of Chicago, is part of an astrophysics experiment at the South Pole Telescope. Last year he spent nearly a full year at the station, including a “winterover,” during which the crew for his experiment dwindled to just him and a colleague.

Life at the station

Making it to the South Pole took interviews and a rigorous training program that included emergency medical and firefighting training and a mandatory psychological evaluation, Gallicchio says.

“People were bombarding me with information—which grease to grease which parts with, how to analyze data in certain ways,” Gallicchio says. “It was totally intense every day. It was one of the best, most educational things in my life.”

Gallicchio landed at the South Pole in early January 2013 and remained there until mid-November.

“When you get off the plane at the South Pole, there is a feeling like you’re out in the ocean,” says University of Chicago physicist John Carlstrom, the principal investigator for the South Pole Telescope team, who has logged 15 round trips to the South Pole over the past two decades. “It’s just a featureless horizon. The snow is so dry it feels like Styrofoam.”

The living quarters at the South Pole station are comfortable and dorm-like, Gallicchio says. A brightly lit greenhouse provides an escape from the constant night and nosebleed-inducing dryness. It also supplies some greens to the daily menu, but no fresh fruit or nuts.

“One of the most popular things after dinner, ironically, is ice cream,” he says.

Gallicchio fashioned his sleeping schedule around his duties at the telescope. “I was always on-call,” he says. “A lot of people there were in that situation. It was totally acceptable to be eating or watching a movie and then to go off to work and come back.”

The living quarters are about a half-mile hike from the telescope building.

Because welding is challenging in the Antarctic chill, and because the long winter season limits construction time, the telescope was designed in pieces that could be quickly fastened together with thousands of structural bolts. The ski-equipped cargo planes that carry supplies to the South Pole station are limited to carrying 26,000 pounds per trip; the bolts practically required their own dedicated flight.

Much of the telescope’s instrumentation is tucked away in a heated building beneath the exposed dish. Its panels, machined to hair’s-width precision, are slightly warmed to keep them free of frost.

Earlier astrophysics experiments at the South Pole provided important lessons for how to best build, maintain and operate the South Pole Telescope, Carlstrom says. “Everything left out and exposed to the cold will fail in a way you probably hadn’t thought of.”

Measuring 10 meters across and weighing 280 tons, the South Pole Telescope precisely maps temperature variations in the cosmic microwave background, a kind of faint static left in the sky from the moment that light first escaped the chaos that followed the big bang.

The telescope was installed in 2007 and upgraded in 2012 to be sensitive to a type of pattern, called polarization, in the CMB. Studying the polarization of the CMB could tell scientists about the early universe.

Another South Pole experiment, BICEP2, recently reported finding a pattern that could be the first proof that our universe underwent a period of rapid expansion the likes of which we haven’t seen since just after the big bang. One of the goals of the South Pole Telescope is to further investigate and refine this result.

The South Pole Telescope’s next upgrade, which will grow its array from 1,500 to 15,000 detectors, is set for late 2015.

Icy isolation

During the winter months, the average temperature is negative 72 degrees, but “a lot of people find the altitude much worse than the cold,” Gallicchio says. No flights are scheduled in or out.

Gallicchio says he had never experienced such isolation. Even if you’re aboard the International Space Station, if something goes wrong you’re only a few hours away from civilization, he says. During a winter at the South Pole, there’s no quick return trip.

During the warmer months, up to about 200 scientists and support staff can occupy the South Pole station at any given time. During the winter, the group is cut to about 50.

During Gallicchio’s winterover, he was generally responsible for the telescope’s data acquisition and software systems, though he occasionally assisted with “crawling around fixing things.” Gallicchio could work on some of the telescope’s computer and electronics systems from the main station, while his more seasoned colleague Dana Hrubes often spent at least eight hours a day at the telescope. “He really taught me a lot and was a great partner,” Gallicchio says.

At one point, the power went out, and his emergency training kicked in. Gallicchio and Hrubes began the steps needed to dock the telescope to protect it from the elements in case its heating elements ran out of backup power.

“Power going out is a big deal, as all of the heat from the station comes from waste heat in the generators, and eventually there’s going to be no heat,” he says. “The circuit-breaker kept tripping and it took [the staff] a while to figure out that a control cable had frayed and shorted itself. It got a little scary.”

Once the power plant mechanics found the problem, they repaired it and got all systems back online. “They did a great job.”

A view like no other

Gallicchio recalls the appearance of the first star after the weeklong sunset. Gradually, more and more stars appeared. Eventually, the sky was aglow.

On some nights, the southern lights, the aurora australis, took over. “When the auroras are active they are by far the brightest thing,” he says. “Everything has a green tint to it, including the snow and the buildings.”

Besides missing family, friends, bike rides and working in coffee shops, Gallicchio says he did enjoy his time at the South Pole. “Nothing about the experience itself would keep me from doing it again.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. at September 12, 2014 01:00 PM