Particle Physics Planet


January 31, 2015

Emily Lakdawalla - The Planetary Society Blog

Camera now measuring even fainter Near-Earth Objects
A camera purchased with the support of a 2009 Shoemaker NEO Grant is now on a new telescope, providing follow-up measurements for even fainter near-Earth objects.

January 31, 2015 01:03 AM

January 30, 2015

Christian P. Robert - xi'an's og

icefalls on Ben Nevis

 

The seminar invitation to Edinburgh gave me the opportunity and the excuse for a quick dash to Fort William for a day of ice-climbing on Ben Nevis. The ice conditions were perfect but there was alas too much snowdrift to attempt Point Five Gully, one of the mythical routes on the Ben. (Last time, the ice was not in good condition.) Instead, we did three pitches on three different routes: an iced rock-face near the CIC hut, the first pitch of Waterfall Gully on Carn Dearg Buttress, and the first pitch of The Curtain, again on Carn Dearg Buttress.

The most difficult climb was the first one, grading about V.5 in Scottish grade, maybe above that as the ice was rather rotten, forcing my guide Ali to place many screws. And forcing me to unscrew them! Then the difficulty got much lower, except for the V.5 start of the Waterfall, where I had to climb an ice pillar with my hands as the ice-picks would not get a good grip. I broke another large pillar in the process, fortunately mostly avoiding being hit. The final climb was quite easy, more of a steep snow slope than a true ice-climb. Too bad the second part of the route was blocked by two fellows who could not move! Anyway, it was another of those rare days on the ice, with enough choice not to have to worry about sharing with other teams, and a terrific guide! And a reasonable day for Scotland, with little snow, no rain, plenty of wind and not that cold (except when belaying!).


Filed under: Mountains, pictures, Travel Tagged: Ben Nevis, Carn Dearg Buttress, Highlands, ice climbing, point five gully, Scotland, Scottish climbing grade, waterfall

by xi'an at January 30, 2015 11:15 PM

Symmetrybreaking - Fermilab/SLAC

Cosmic inflation remains undiscovered

A new study puts earlier discovery claims into perspective.

A previous study claiming the discovery of gravitational waves as cosmic inflation’s fingerprint has most likely been over-interpreted, scientists found in a joint analysis between the Planck and BICEP2 experiments.

The new study, whose key results were released today in statements from the European Space Agency and the National Science Foundation, did not find conclusive evidence of cosmic inflation. Cosmic inflation is the exponential growth of the universe within the first few fractions of a second after the big bang almost 14 billion years ago.

“This joint work has shown that [the earlier claims are] no longer robust once the emission from galactic dust is removed,” says Jean-Loup Puget, principal investigator of Planck’s High Frequency Instrument at the Institut d’Astrophysique Spatiale in Orsay, France, in the statements. “So, unfortunately, we have not been able to confirm that the signal is an imprint of cosmic inflation.”

“These results have important consequences for the entire research field,” says Planck project scientist Jan Tauber from ESA. “They will impact how future experiments searching for cosmic inflation will be designed.”

Controversial cosmic pattern

In the 1980s, physicists Alan Guth and Andrei Linde developed the theory of cosmic inflation. This rapid expansion would have left its mark in the form of a pattern in the cosmic microwave background—faint light left behind from just after the big bang. The BICEP2 experiment was designed to search for this pattern.

Last March, BICEP2 scientists announced that they had found the characteristic pattern, and that it was even more pronounced than expected. If this interpretation turned out to be confirmed, it would be direct evidence of cosmic inflation.

However, scientists began to raise the concern that the pattern found by the BICEP2 study could have been caused by something else, such as dust in our own galaxy.

The BICEP2 researchers were aware that dust might give them a false signal. To minimize this possibility, they located their experiment at the South Pole and pointed their telescope at a part of the sky that was considered particularly “clean.” Then, in their analysis, the researchers carefully subtracted possible dust signals based on various theoretical models and earlier dust measurements.

“However, we did not have a precise dust map of the sky at the time,” says Chao-Lin Kuo, a BICEP2 lead scientist at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory.

Six months later, the Planck collaboration published the missing map, which showed that at least a significant portion of the BICEP2 signal came from dust.

“The Planck results demonstrated that no place in the sky is free of dust and that you have to deal with it accordingly," Tauber says.

Joining forces

To find out whether the BICEP2 signal was more than just background, the two experiments combined forces.

“We gave our data to the Planck team and vice versa,” says KIPAC’s Jaime Tolan, a member of the BICEP2 analysis crew. “Each team then analyzed the other group’s data using its own analysis tools.”

The Planck satellite, which surveyed the entire sky from 2009 to 2013, had analyzed cosmic light at different colors, or wavelengths. Because the signal components—dust and primordial gravitational waves—have different color spectra, scientists could compare the measurements at different wavelengths to determine how much of the signal came from dust.
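
To make the idea concrete, here is a hedged toy sketch (not the collaborations' actual pipeline): if two components scale differently with frequency, measurements at two frequencies can be combined to solve for the two amplitudes. The scaling factors and amplitudes below are made-up illustrative numbers.

```python
import numpy as np

# Toy two-frequency component separation (illustrative only; not the
# Planck/BICEP2 analysis). Assume the measured polarized signal at each
# frequency is a linear mix of a CMB-like component and a dust component,
# each with its own (assumed, made-up) frequency scaling.
cmb_scaling = np.array([1.0, 1.0])    # CMB amplitude is the same in CMB units
dust_scaling = np.array([1.0, 12.0])  # dust much brighter at the higher frequency (toy value)

true_cmb, true_dust = 0.2, 1.5        # arbitrary toy amplitudes
measured = cmb_scaling * true_cmb + dust_scaling * true_dust  # toy signal at the two frequencies

# Solve the 2x2 linear system for the two component amplitudes.
mixing_matrix = np.column_stack([cmb_scaling, dust_scaling])
est_cmb, est_dust = np.linalg.solve(mixing_matrix, measured)
print(f"estimated CMB-like amplitude: {est_cmb:.2f}, dust amplitude: {est_dust:.2f}")
```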

The key result of the new study is a measurement of how likely it is that the BICEP2 pattern is caused by cosmic inflation. 

“The amount of gravitational waves can probably be no more than about half the level claimed in our earlier study,” says Clem Pryke, a principal investigator of BICEP2 at the University of Minnesota, in the statements.

The results do not completely rule out that the gravitational wave signal could still be there; however, it is not very likely that most of the BICEP2 signal was caused by it.

Future experiments will be able to use the information from Planck to subtract dust backgrounds from their signal and to adjust their observation strategies.

“BICEP3, for instance, will be using a wavelength that is much less sensitive to dust than the one used by BICEP2,” says KIPAC researcher Walter Ogburn, a member of the BICEP2 team.

Deployed last November, the successor experiment of BICEP2 will start looking for signs of cosmic inflation during the next Antarctic winter.

The BICEP2/Planck collaboration has submitted a paper to the journal Physical Review Letters, and a preprint will be available on the arXiv next week.


by Manuel Gnida at January 30, 2015 07:02 PM

CERN Bulletin

CERN Bulletin Issue No. 03-04/2015
Link to e-Bulletin Issue No. 03-04/2015 | Link to all articles in this issue

January 30, 2015 05:37 PM

Tommaso Dorigo - Scientificblogging

The ATLAS Top Production Asymmetry And One Thing I Do Not Like Of It
ATLAS today sent to the Cornell arXiv and to the journal JHEP their latest measurement of the top-antitop production asymmetry, and having five free minutes this afternoon I gave a look at the paper, as the measurement is of some interest. The analysis is done generally quite well, but I found out there is one thing I do not particularly like in it... It does not affect the result in this case, but the procedure used is error-prone. 

But let's go in order. First I owe you a quick-and-dirty explanation of what the top asymmetry is and why you might care about it.
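
For readers who want a formula up front (a hedged aside, since the precise observable is defined in the paper itself): the top charge asymmetry commonly quoted at the LHC is built from the difference of the absolute rapidities of the top and antitop quarks,

$$ A_C \;=\; \frac{N(\Delta|y| > 0) \,-\, N(\Delta|y| < 0)}{N(\Delta|y| > 0) \,+\, N(\Delta|y| < 0)}, \qquad \Delta|y| = |y_t| - |y_{\bar t}|, $$

where $y_t$ and $y_{\bar t}$ are the top and antitop rapidities; whether ATLAS uses exactly this definition here is for the paper to say.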

read more

by Tommaso Dorigo at January 30, 2015 05:11 PM

Emily Lakdawalla - The Planetary Society Blog

Talking to Pluto is hard! Why it takes so long to get data back from New Horizons
As I write this post, New Horizons is nearing the end of a weeklong optical navigation campaign. The last optical navigation images in the weeklong series will be taken tomorrow, but it will likely take two weeks or more for all the data to get to Earth. Two weeks! Why does it take so long?

January 30, 2015 03:53 PM

CERN Bulletin

Seminar | "Managing Italian research stations at the Poles" by Roberto Sparapani | 19 February
Polar areas are an ideal place to study climate change and other research fields. However, living and working at the Poles is a challenge for all the researchers involved. This presentation by Roberto Sparapani, who led the Italian research station Dirigibile Italia at Ny-Ålesund from 1997 to 2014, will take a short trip through the research and history of polar science - with a focus on the human factor, which makes a difference in a natural environment that leaves no room for improvisation.

The seminar will be held on 19 February at 4.30 p.m. in the Main Auditorium. It will be followed by a screening of Paola Catapano’s documentary for RAIWORLD “A Nord di Capo nord” (North of Cape North), in Italian with English subtitles. The documentary was given the "Artistic Direction Special Award" at the Rome Scientific Documentary Festival in December 2014.

Ny-Ålesund is a small international research village located on the northwest coast of the Svalbard archipelago (79° N). Similar to a small "CERN", Ny-Ålesund is a community of 11 research stations and more than 20 countries, working in sunlight for six months in a row and under the reflections of the aurora for the rest of the year.

Roberto Sparapani has worked for CNR (the Italian Research Council) since 1983, initially with the Institute of Atmospheric Pollution Research as a laboratory technician. He soon started taking part in research expeditions to extreme areas as a logistics manager: on oceanographic ships from 1987 to 1990, in Nepal in 1991, Spitsbergen in 1992, Antarctica in 1994, Greenland in 1999 and the Canadian Arctic (Alert) in 2000. He was base leader of the Antarctic Zucchelli station from 2004 to 2009 and of the Arctic research station "Dirigibile Italia" at Ny-Ålesund from its opening in 1997 until 2014.

January 30, 2015 03:36 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: Closing in on Ceres
Dawn's chief engineer Marc Rayman gives an update on the mission as it gets ever closer to its next target: The dwarf planet Ceres.

January 30, 2015 02:31 PM

astrobites - astro-ph reader's digest

The Sun and its Iron Fist
Title: A higher-than-predicted measurement of iron opacity at solar interior temperatures

Authors: J. E. Bailey et al

First author’s institution: Sandia National Laboratories

The Sun and the Solar Discrepancy

The Sun’s interior is made up of three regions, depending on which type of energy transport mechanism is dominant. The innermost layer is the core (out to r ≈ 0.25 solar radii), where all of the Sun’s energy is generated through nuclear fusion of hydrogen. This is followed by a radiative zone stretching outward from the core to ~75% of the Sun’s radius, where energy is transported by photons, i.e. radiation. The convection zone makes up the outermost layer of the Sun’s interior and extends the rest of the way from the radiative zone up to the Sun’s photosphere (where all the light we see from the Sun comes from); energy is mainly transported through the movement of fluid/plasma in the convection zone. Sandwiched between the radiative and convection zones is the radiation/convection boundary (also known as the overshoot layer or the tachocline), where the transition between radiation and convection takes place.

Fig 1 – Solar interior [Wikipedia]

The change of temperature as a function of depth (known as the temperature profile) of a star is affected by absorption of radiation by intervening matter. The degree to which radiation is attenuated is measured by a quantity known as opacity (also known as the absorption coefficient). The greater a star’s opacity, the more opaque the star is, and the less we see into its interior. Chemical composition affects a star’s opacity, as different elements absorb/scatter light differently. A star with a higher content of light absorbing/scattering elements has a higher opacity and vice versa. The Sun’s opacity is thus determined by its chemical composition.
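
As a hedged back-of-the-envelope illustration of what opacity does (toy numbers, not values from the paper): radiation crossing a slab of density ρ and thickness Δx with opacity κ is attenuated roughly as exp(−κρΔx).

```python
import numpy as np

# Toy illustration of attenuation by opacity (numbers are made up, not from the paper):
# the transmitted intensity after a slab falls off as exp(-kappa * rho * dx),
# with kappa the opacity (cm^2/g), rho the density (g/cm^3), dx the path length (cm).
kappa = 1.0        # opacity in cm^2/g (toy value)
rho = 0.2          # density in g/cm^3 (toy value)
for dx in [1e2, 1e3, 1e4]:                 # path lengths in cm
    transmitted = np.exp(-kappa * rho * dx)
    print(f"dx = {dx:.0e} cm -> transmitted fraction {transmitted:.3e}")

# A higher kappa (a more opaque material) suppresses the transmitted fraction further,
# which is the sense in which we "see less into" a star with higher opacity.
```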

The Standard Solar Model (SSM) is a mathematical model that tries to encapsulate the various facets of the Sun, such as its energy transport, chemical composition, temperature profile, and many more. An accurate treatment of the Sun is crucial, as the SSM is used to test stellar evolution theory — a substantial correction to the SSM might entail a significant revamping of our current understanding of how stars work. Solar modelers took a hit when a re-analysis of the Sun’s spectrum forced them to reduce the inferred amounts of solar carbon, nitrogen, and oxygen by 30-50%. However, predictions of solar models using the revised elemental abundances disagree with observations from helioseismology, which is a technique that probes the Sun’s internal structure using acoustic waves. If both the spectral analysis and the helioseismic observations are correct, then there must be something wrong with our current understanding of the Sun. How do we resolve this? Hint: opacity.

Other researchers have found that as long as the true mean opacity of the Sun’s interior is ~15% higher than predicted by solar models, this discrepancy can be resolved, as increased opacity makes up for the reduced elemental abundances. This means we have to re-evaluate the solar opacity in a model-independent way: direct measurements at conditions mimicking the Sun’s interior, where temperatures soar to millions of kelvin. But millions of kelvin… isn’t that extremely difficult, you ask? This is where today’s paper comes in.

This paper

The authors of this paper used a facility known as the Z-machine to simulate the solar interior. The Z-machine, located at Sandia National Laboratories, is the world’s most powerful pulsed-power X-ray facility (it is called the Z-machine because particles shot from the accelerator collide with one another along the z-axis). Using the Z-machine, the authors measured the iron opacity at conditions very similar to those at the Sun’s radiation/convection zone boundary. This boundary layer is where the discrepancy described above is the most significant. Iron is chosen for the opacity measurement as it is the main opacity player at the boundary, contributing up to 25% of the total opacity. This is because iron has the largest number of stably-bound electrons, and electrons are the main sources of opacity in the solar interior.

In order to better understand the physical processes that affect opacity, the authors performed measurements over a range of Te/ne values, where Te is the electron temperature (the temperature at which the velocities of the electrons follow a Maxwell-Boltzmann distribution) and ne is the electron number density. Figure 2 compares their measurements (red lines) with calculations from a solar opacity model (blue lines). As Te/ne increases, opacity increases, line features decrease, and the lines become broader. Although the model also varies as Te/ne changes, the changes are smaller compared to those of the direct measurements. The model and measurement agree best at the lowest Te/ne, but they diverge as Te/ne increases. The authors compared their opacity spectrum at a specific Te/ne condition (2.11×10^6 K / 3.1×10^22 cm^-3, deemed to represent the radiation/convection boundary most accurately) with calculations from more models, as shown in Figure 3. None of the models reproduces the measured opacity spectra — as a matter of fact, the measured opacity is 30-400% higher than predicted.

Fig 2 – Measured iron opacity spectra versus wavelength at four Te/ne values compared with model calculations. Measurements are the red lines while model calculations are the blue lines. Te/ne increases as one goes up the panels. [Figure 2 from paper]

Recalling our earlier discussion of the disagreement between spectral and helioseismic observations, we need the mean solar opacity to be ~15% higher than predicted. The measured wavelength-dependent iron opacity is 30-400% higher than predicted, which translates to a mean opacity ~7 (+/-3)% higher than predicted, i.e. roughly half of the change needed to resolve the solar discrepancy. Bearing in mind that iron is only one of the elements that contribute to opacity (albeit a main one), the mean opacity is expected to be much higher when all the elements present in the solar interior are considered.
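
A hedged aside on why a 30-400% change at individual wavelengths moves the mean by only ~7%: stellar-interior calculations conventionally use a harmonically weighted (Rosseland) mean opacity, in which iron's contribution is diluted by the other absorbers and by the weighting over frequency,

$$ \frac{1}{\kappa_R} \;=\; \frac{\displaystyle\int_0^\infty \kappa_\nu^{-1}\,\frac{\partial B_\nu}{\partial T}\,d\nu}{\displaystyle\int_0^\infty \frac{\partial B_\nu}{\partial T}\,d\nu}, $$

where $B_\nu$ is the Planck function. Whether this is exactly the mean quoted in the paper is not spelled out here, but it is the standard choice in solar models.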

Fig 3 – Comparisons of measured iron opacity spectra with multiple models at Te = 2.11×10^6 K and ne = 3.1×10^22 cm^-3, which were deemed to be most representative of the solar radiation/convection zone conditions. Data are the black lines while the other colored lines are outputs from various models. It can clearly be seen that the models don’t quite match the data. [Figure 3 from paper]

To conclude, predictions of current solar models disagree with results from helioseismic observations. This discrepancy can potentially be resolved if the true mean opacity of the Sun is ~15% higher than solar model predictions. The authors of today’s paper measured iron opacity at conditions similar to the Sun’s radiation/convection boundary layer and found it to be 30-400% higher than model predictions, translating to roughly 7% higher for the mean opacity. As such, there are missing ingredients in current solar models and corrections need to be made. It is sobering (and humbling) to ponder how much we still don’t understand about how the Sun works, given that it is the closest star to us and that most of our understanding about other stars is based on what we understand about our Sun.

 

by Suk Sien Tie at January 30, 2015 01:45 PM

CERN Bulletin

LHC Report: superconducting circuit powering tests

After the long maintenance and consolidation campaign carried out during LS1, the machine is getting ready to start operation with beam at 6.5 TeV… the physics community can’t wait! Prior to this, all hardware and software systems have to be tested to assess their correct and safe operation.

 

LHC operation relies on 1232 superconducting dipoles with a field of up to 8.33 T operating in superfluid helium at 1.9 K, along with more than 500 superconducting quadrupoles operating at 4.2 or 1.9 K. In addition, many other superconducting and normal-conducting magnets are used to guarantee the possibility of correcting all beam parameters, for a total of more than 10,000 magnets. About 1700 power converters are necessary to feed these superconducting circuits. Most of the cold circuits (those with high current/stored energy) possess a sophisticated magnet protection system that is crucial to detect a transition of the coil from the superconducting to the normal state (a quench) and to safely extract the energy stored in the circuit (about 1 GJ per dipole circuit at nominal current).
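
For a rough sense of where the "about 1 GJ" figure comes from, here is a hedged back-of-the-envelope estimate (the inductance and current values below are approximate assumptions for illustration, not numbers from this report): the energy stored in a circuit of inductance L carrying current I is E = ½LI².

```python
# Back-of-the-envelope estimate of the energy stored in one LHC main dipole
# circuit, E = 0.5 * L * I^2. The inductance and current are approximate
# assumed values for illustration, not figures quoted in this report.
L_circuit = 15.0      # total inductance of one sector's dipole circuit, in henries (assumed)
I_nominal = 11_000.0  # current for ~6.5 TeV operation, in amperes (assumed)

energy_joules = 0.5 * L_circuit * I_nominal**2
print(f"stored energy ~ {energy_joules / 1e9:.2f} GJ")  # of order 1 GJ, consistent with the text
```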

The commissioning of the superconducting circuits is a lengthy process. All interlock and protection systems have to be tested, before and while ramping up the current in steps. The typical time needed to commission a dipole circuit is on the order of 3 to 5 weeks. A total of more than 10,000 test steps have to be performed on the LHC’s circuits and analysed by the protection experts.

An additional challenge for this year’s commissioning of the superconducting circuits is the increase in energy (and therefore of the current feeding each circuit) up to near the design values. In fact, when a superconducting magnet approaches its maximum performance, it experiences repetitive quenches (a phenomenon also known as training) before reaching the target magnetic field. The quenches are caused by the sudden release of electromechanical stresses and the local increase in temperature above the transition level. The entire coil is then warmed up and needs to be cooled down again - for the LHC dipoles, this might take several hours. The LHC now has two sectors where the dipole magnets have already been trained: 20 and 7 quenches per sector respectively were necessary to reach the equivalent operational energy of 6.5 TeV, with a net time of 10 and 4 days spent training the magnets.

As concerns the general preparation of the machine, the powering tests have now started in five of the eight LHC sectors: about 30% of the total number of test steps have been executed and the main dipoles and quadrupoles have been prepared for tests in half of the machine; all preparation activities should be completed in about three weeks’ time and the commissioning of all circuits is expected to finish sometime in March.

January 30, 2015 12:01 PM

Peter Coles - In the Dark

The BICEP2 Bubble Bursts…

I think it’s time to break the worst-kept secret in cosmology, concerning the claimed detection of primordial gravitational waves by the BICEP2 collaboration that caused so much excitement last year; see this blog, passim. If you recall, the biggest uncertainty in this result derived from the fact that it was made at a single frequency, 150 GHz, so it was impossible to determine the spectrum of the signal. Since dust in our own galaxy emits polarized light in the far-infrared there was no direct evidence to refute the possibility that this is what BICEP2 had detected. The indirect arguments presented by the BICEP2 team (that there should be very little dust emission in the region of the sky they studied) were challenged, but the need for further measurements was clear.

Over the rest of last year, the BICEP2 team collaborated with the consortium working on the Planck satellite, which has measurements over the whole sky at a wide range of frequencies. Of particular relevance to the BICEP2 controversy are the Planck measurements at such high frequency that they are known to be dominated by dust emission, specifically the 353 GHz channel. Cross-correlating these data with the BICEP2 measurements (and also data from the Keck Array, which is run by the same team) should allow the part of the BICEP2 signal that is due to dust emission to be isolated and subtracted. What’s left would be the bit that’s interesting for cosmology. This is the work that has been going on, the results of which will officially hit the arXiv next week.
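
Schematically (a hedged sketch of the idea in my own notation, not the collaborations' actual likelihood): write the 150 GHz map as $m_{150} = s + \alpha\, d + n_{150}$ and the 353 GHz map as $m_{353} = s + d + n_{353}$, where $s$ is the common CMB sky signal in CMB temperature units, $d$ is the dust pattern as it appears at 353 GHz, $\alpha$ is the dust extrapolation factor from 353 GHz down to 150 GHz, and the $n$'s are noise. Then

$$ \frac{m_{150} - \alpha\, m_{353}}{1 - \alpha} \;=\; s \;+\; \frac{n_{150} - \alpha\, n_{353}}{1 - \alpha}, $$

so the dust drops out and what remains is the common signal plus somewhat inflated noise. The real analysis cross-correlates the data sets rather than differencing maps, as described above, but the algebra conveys why the 353 GHz channel is the lever.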

However, news has been leaking out over the last few weeks about what the paper will say. Being the soul of discretion I decided not to blog about these rumours. However, yesterday I saw the killer graph had been posted so I’ve decided to share it here:

cross-correlation

The black dots with error bars show the original BICEP/Keck “detection” of B-mode polarization which they assumed was due to primordial gravitational waves. The blue dots with error bars show the results after subtracting the correlated dust component. There is clearly a detection of B-mode polarization. However, the red curve shows the B-mode polarization that’s expected to be generated not by primordial gravitational waves but by gravitational lensing; this signal is already known. There’s a slight hint of an excess over the red curve at multipoles of order 200, but it is not statistically significant. Note that the error bars are larger when proper uncertainties are folded in.

Here’s a quasi-official statement of the result (originally issued in French) that has been floating around on Twitter:

BICEP_null

To be blunt, therefore, the BICEP2 measurement is a null result for primordial gravitational waves. It’s by no means a proof that there are no gravitational waves at all, but it isn’t a detection. In fact, for the experts, the upper limit on the tensor-to-scalar ratio R from this analysis is R<0.13 at 95% confidence, so there’s actually still room for a sizeable contribution from gravitational waves, but we haven’t found it yet.

The search goes on…


by telescoper at January 30, 2015 11:35 AM

Tommaso Dorigo - Scientificblogging

Me
This blog - which, on different sites, has been online since 2005, hence for over 10 years now - enjoys a core of faithful readers, who over the years have learnt more details about my personal life than they probably thought they'd need. But it also occasionally attracts larger crowds of web surfers with an interest in particle physics. They have every right not to know who I am and whether I am a 20-year-old geek or a retired professor, male or female, etcetera. 

read more

by Tommaso Dorigo at January 30, 2015 09:41 AM

The n-Category Cafe

The Univalent Perspective on Classifying Spaces

I feel like I should apologize for not being more active at the Cafe recently. I’ve been busy, of course, and also most of my recent blog posts have been going to the HoTT blog, since I felt most of them were of interest only to the HoTT crowd (by which I mean, “people interested enough in HoTT to follow the HoTT blog” — which may of course include many Cafe readers as well). But today’s post, while also inspired by HoTT, is less technical and (I hope) of interest even to “classical” higher category theorists.

In general, a classifying space for bundles of $X$'s is a space $B$ such that maps $Y\to B$ are equivalent to bundles of $X$'s over $Y$. In classical algebraic topology, such spaces are generally constructed as the geometric realization of the nerve of a category of $X$'s, and as such they may be hard to visualize geometrically. However, it’s generally useful to think of $B$ as a space whose points are $X$'s, so that the classifying map $Y\to B$ of a bundle of $X$'s assigns to each $y\in Y$ the corresponding fiber (which is an $X$). For instance, the classifying space $B O$ of vector bundles can be thought of as a space whose points are vector spaces, where the classifying map of a vector bundle assigns to each point the fiber over that point (which is a vector space).

In classical algebraic topology, this point of view can’t be taken quite literally, although we can make some use of it by identifying a classifying space with its representable functor. For instance, if we want to define a map $f : B O\to B O$, we’d like to say “a point $v\in B O$ is a vector space, so let’s do blah to it and get another vector space $f(v)\in B O$”. We can’t do that, but we can do the next best thing: if blah is something that can be done fiberwise to a vector bundle in a natural way, then since $Hom(Y, B O)$ is naturally equivalent to the collection of vector bundles over $Y$, our blah defines a natural transformation $Hom(-, B O) \to Hom(-, B O)$, and hence a map $f : B O \to B O$ by the Yoneda lemma.

However, in higher category theory and homotopy type theory, we can really take this perspective literally. That is, if by “space” we choose to mean “$\infty$-groupoid” rather than “topological space up to homotopy”, then we can really define the classifying space to be the $\infty$-groupoid of $X$'s, whose points (objects) are $X$'s, whose morphisms are equivalences between $X$'s, and so on. Now, in defining a map such as our $f$, we can actually just give a map from $X$'s to $X$'s, as long as we check that it’s functorial on equivalences — and if we’re working in HoTT, we don’t even have to do the second part, since everything we can write down in HoTT is automatically functorial/natural.

This gives a different perspective on some classifying-space constructions that can be more illuminating than a classical one. Below the fold I’ll discuss some examples that have come to my attention recently.

All of these examples have to do with the classifying space of “types equivalent to $X$” for some fixed $X$. Such a classifying space, often denoted $B Aut(X)$, has the property that maps $Y \to B Aut(X)$ are equivalent to maps (perhaps “fibrations” or “bundles”) $Z\to Y$ all of whose fibers are equivalent (a homotopy type theorist might say “merely equivalent”) to $X$. The notation $B Aut(X)$ accords with the classical notation $B G$ for the delooping of a (perhaps $\infty$-) group: in fact this is a delooping of the group of automorphisms of $X$.

Categorically (and homotopy-type-theoretically), we simply define $B Aut(X)$ to be the full sub-$\infty$-groupoid of $\infty Gpd$ (the $\infty$-groupoid of $\infty$-groupoids) whose objects are those equivalent to $X$. You might have thought I was going to say the full sub-$\infty$-groupoid on the single object $X$, and that would indeed give us an equivalent result, but the examples I’m about to discuss really do rely on having all the other equivalent objects in there. In particular, note that an arbitrary object of $B Aut(X)$ is an $\infty$-groupoid that admits some equivalence to $X$, but no such equivalence has been specified.

Example 1: $B Aut(2)$

As the first example, let $X = 2 = \{0,1\}$, the standard discrete space with two points. Then $Aut(2) = C_2$, the cyclic group on 2 elements, and so $B Aut(2) = B C_2 = K(C_2,1)$. Since $C_2$ is an abelian group, $B C_2$ again has a (2-)group structure, i.e. we should have a multiplication operation $B C_2 \times B C_2 \to B C_2$, an identity, inversion, etc.

Using the equivalence $B C_2 \simeq B Aut(2)$, we can describe all of these operations directly. A point $Z \in B Aut(2)$ is a space that’s equivalent to $2$, but without a specified equivalence. Thus, $Z$ is a set with two elements, but we haven’t chosen either of those elements to call “$0$” or “$1$”. As long as we perform constructions on $Z$ without making such an unnatural choice, we’ll get maps that act on $B Aut(2)$ and hence $B C_2$ as well.

The identity element of $B Aut(2)$ is fairly obvious: there’s only one canonical element of $B Aut(2)$, namely $2$ itself. The multiplication is not as obvious, and there may be more than one way to do it, but after messing around with it a bit you may come to the same conclusion I did: the product of $Z, W\in B Aut(2)$ should be $Iso(Z,W)$, the set of isomorphisms between $Z$ and $W$. Note that when $Z$ and $W$ are 2-element sets, so is $Iso(Z,W)$, but in general there’s no way to distinguish either of those isomorphisms from the other one, nor is $Iso(Z,W)$ naturally isomorphic to $Z$ or $W$. It is, however, obviously commutative: $Iso(Z,W) \cong Iso(W,Z)$.

Moreover, if $Z = 2$ is the identity element, then $Iso(2,W)$ is naturally isomorphic to $W$: we can define $Iso(2,W) \to W$ by evaluating at $0\in 2$. Similarly, $Iso(Z,2) \cong Z$, so our “identity element” has the desired property.

Furthermore, if $Z = W$, then $Iso(Z,Z)$ does have a distinguished element, namely the identity. Thus, it is naturally equivalent to $2$ by sending the identity to $0\in 2$. So every element of $B Aut(2)$ is its own inverse. The trickiest part is proving that this operation is associative. I’ll leave that to the reader (or you can try to decipher my Coq code).

(We did have to make some choices about whether to use $0\in 2$ or $1\in 2$. I expect that as long as we make those choices consistently, making them differently will result in equivalent 2-groups.)
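
For readers who like to poke at small examples concretely, here is a hedged finite-set sanity check in Python (a toy script of my own devising, not the Coq development mentioned above): it enumerates the bijections between two 2-element sets, confirms that $Iso(Z,W)$ again has two elements, and checks the unit law by evaluating at $0$.

```python
from itertools import permutations

def iso(Z, W):
    """All bijections from Z to W, each represented as a dict."""
    Z, W = sorted(Z), sorted(W)
    return [dict(zip(Z, p)) for p in permutations(W)]

two = (0, 1)                     # the "standard" 2-element set
Z, W = ("a", "b"), ("x", "y")    # two anonymous 2-element sets

# The product Iso(Z, W) is again a 2-element set.
assert len(iso(Z, W)) == 2

# Unit law: evaluating at 0 identifies Iso(2, W) with W.
evaluate_at_zero = [f[0] for f in iso(two, W)]
assert sorted(evaluate_at_zero) == sorted(W)

# Iso(Z, Z) is pointed by the identity, giving its identification with 2.
identity = {z: z for z in Z}
assert identity in iso(Z, Z) and len(iso(Z, Z)) == 2
print("all checks passed")
```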

Example 2: An incoherent idempotent

In 1-category theory, an idempotent is a map $f : A\to A$ such that $f \circ f = f$. In higher category theory, the equality $f \circ f = f$ must be weakened to an isomorphism or equivalence, and then treated as extra data on which we ought to ask for additional axioms, such as that the two induced equivalences $f \circ f \circ f \simeq f$ coincide (up to an equivalence, of course, which then satisfies its own higher laws, etc.).

A natural question is whether, if we have only an equivalence $f \circ f \simeq f$, it can be “improved” to a “fully coherent” idempotent in this sense. Jacob Lurie gave the following counterexample in Warning 1.2.4.8 of Higher Algebra:

let $G$ denote the group of homeomorphisms of the unit interval $[0,1]$ which fix the endpoints (which we regard as a discrete group), and let $\lambda : G \to G$ denote the group homomorphism given by the formula

$$ \lambda(g)(t) = \begin{cases} \tfrac{1}{2}\, g(2t) & \text{if } 0\le t \le \tfrac{1}{2}, \\ t & \text{if } \tfrac{1}{2}\le t \le 1. \end{cases} $$

Choose an element $h\in G$ such that $h(t) = 2t$ for $0\le t\le \tfrac{1}{4}$. Then $\lambda(g)\circ h = h\circ \lambda(\lambda(g))$ for each $g\in G$, so that the group homomorphisms $\lambda, \lambda^2 : G\to G$ are conjugate to one another. It follows that the induced map of classifying spaces $e : B G \to B G$ is homotopic to $e^2$, and therefore idempotent in the homotopy category of spaces. However… $e$ cannot be lifted to a [coherent] idempotent in the $\infty$-category of spaces.

Let’s describe this map $e$ in the more direct way I suggested above. Actually, let’s do something easier and just as good: let’s replace $[0,1]$ by Cantor space $2^{\mathbb{N}}$. It’s reasonable to guess that this should work, since the essential property of $[0,1]$ being used in the above construction is that it can be decomposed into two pieces (namely $[0,\tfrac{1}{2}]$ and $[\tfrac{1}{2},1]$) which are both equivalent to itself, and $2^{\mathbb{N}}$ has this property as well:

$$ 2^{\mathbb{N}} \cong 2^{\mathbb{N}+1} \cong 2^{\mathbb{N}} \times 2^1 \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}. $$

Moreover, $2^{\mathbb{N}}$ has the advantage that this decomposition is disjoint, i.e. a coproduct. Thus, we can also get rid of the assumption that our automorphisms preserve endpoints, which was just there in order to allow us to glue two different automorphisms on the two copies in the decomposition.

Therefore, our goal is now to construct an endomap of $B Aut(2^{\mathbb{N}})$ which is incoherently, but not coherently, idempotent. As discussed above, the elements of $B Aut(2^{\mathbb{N}})$ are spaces that are equivalent to $2^{\mathbb{N}}$, but without any such specified equivalence. Looking at the definition of Lurie’s $\lambda$, we can see that intuitively, what it does is shrink the interval to half of itself, acting functorially, and add a new copy of the interval at the end. Thus, it’s reasonable to define $e : B Aut(2^{\mathbb{N}}) \to B Aut(2^{\mathbb{N}})$ by

$$ e(Z) = Z + 2^{\mathbb{N}}. $$

Here $Z$ is some space equivalent to $2^{\mathbb{N}}$, and in order for this map to be well-defined, we need to show that if $Z$ is equivalent to $2^{\mathbb{N}}$, then so is $Z + 2^{\mathbb{N}}$. However, the decomposition $2^{\mathbb{N}} \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}$ ensures this. Moreover, since our definition didn’t involve making any unnatural choices, it’s “obviously” (and in HoTT, automatically) functorial.

Now, is $e$ incoherently idempotent, i.e. do we have $e(e(Z)) \cong e(Z)$? Well, that is just asking whether

$$ (Z + 2^{\mathbb{N}}) + 2^{\mathbb{N}} \quad\text{is equivalent to}\quad Z + 2^{\mathbb{N}} $$

but this again follows from $2^{\mathbb{N}} \cong 2^{\mathbb{N}} + 2^{\mathbb{N}}$! Showing that $e$ is not coherent is a bit harder, but still fairly straightforward using our description; I’ll leave it as an exercise, or you can try to decipher the Coq code.
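
Spelling the step out (just unfolding the equivalences already stated above):

$$ e(e(Z)) \;=\; (Z + 2^{\mathbb{N}}) + 2^{\mathbb{N}} \;\simeq\; Z + (2^{\mathbb{N}} + 2^{\mathbb{N}}) \;\simeq\; Z + 2^{\mathbb{N}} \;=\; e(Z), $$

using associativity of the coproduct and the self-duplication equivalence of Cantor space.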

Example 3: Natural pointed sets

Let’s end by considering the following question: in what cases does the natural map $B S_{n-1} \to B S_{n}$ have a retraction, where $S_n$ is the symmetric group on $n$ elements? Looking at homotopy groups, this would imply that $S_{n-1} \hookrightarrow S_n$ has a retraction, which is true for $n < 5$ but not otherwise. But let’s look instead at the map on classifying spaces.

The obvious way to think about this map is to identify $B S_n$ with $B Aut(\mathbf{n})$, where $\mathbf{n}$ is the discrete set with $n$ elements, and similarly $B S_{n-1}$ with $B Aut(\mathbf{n-1})$. Then an element of $B Aut(\mathbf{n-1})$ is a set $Z$ with $n-1$ elements, and the map $B S_{n-1} \to B S_{n}$ takes it to $Z+1$, which has $n$ elements.

However, another possibility is to identify $B S_{n-1}$ instead with the classifying space of pointed sets with $n$ elements. Since an isomorphism of pointed sets must respect the basepoint, this gives an equivalent groupoid, and now the map $B S_{n-1} \to B S_{n}$ is just forgetting the basepoint. With this identification, a putative retraction $B S_{n} \to B S_{n-1}$ would assign, to any set $Z$ with $n$ elements, a pointed set $(r(Z), r_0)$ with $n$ elements. Note that the underlying set $r(Z)$ need not be $Z$ itself; they will of course be isomorphic (since both have $n$ elements), but there is no specified or natural isomorphism. However, to say that $r$ is a retraction of our given map says that if $Z$ started out pointed, then $(r(Z), r_0)$ is isomorphic to $(Z, z_0)$ as pointed sets.

Let’s do some small examples. When $n=1$, our map $r$ has to take a set with 1 element and assign to it a pointed set with 1 element. There’s obviously a unique way to do that, and just as obviously, if we started out with a pointed set we get the same set back again.

The case $n=2$ is a bit more interesting: our map $r$ has to take a set $Z$ with 2 elements and assign to it a pointed set with 2 elements. One option, of course, is to define $r(Z) = 2$ for all $Z$. Since every pointed 2-element set is uniquely isomorphic to every other, this satisfies the requirement. Another option motivated by example 1, which is perhaps a little more satisfying, would be to define $r(Z) = Iso(Z,Z)$, which is pointed by the identity.

The case $n=3$ is more interesting still, since now it is not true that any two pointed 3-element sets are naturally isomorphic. Given a 3-element set $Z$, how do we assign to it functorially a pointed 3-element set? The best way I’ve thought of is to let $r(Z)$ be the set of automorphisms $f\in Iso(Z,Z)$ such that $f^3 = id$. This has 3 elements, the identity and two 3-cycles, and we can take the identity as a basepoint. And if $Z$ came with a point $z_0$, then we can define an isomorphism $Z \cong r(Z)$ by sending $z\in Z$ to the unique $f\in r(Z)$ having the property that $f(z_0) = z$.

The case $n=4$ is somewhat similar: given a 4-element set $Z$, define $r(Z)$ to be the set of automorphisms $f\in Iso(Z,Z)$ such that $f^2 = id$ and whose set of fixed points is either empty or all of $Z$. This has 4 elements and is pointed by the identity; in fact, it is the permutation representation of the Klein four-group. And once again, if $Z$ came with a point $z_0$, we can define $Z \cong r(Z)$ by sending $z\in Z$ to the unique $f\in r(Z)$ such that $f(z_0) = z$.
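
If you want to convince yourself that these recipes really do produce pointed $n$-element sets, here is a hedged brute-force check in Python (a toy script of mine, not part of the Coq code mentioned earlier): it builds $r(Z)$ for the $n=3$ and $n=4$ constructions above and verifies the element counts and the bijection sending $z$ to the unique $f$ with $f(z_0) = z$.

```python
from itertools import permutations

def automorphisms(Z):
    """All bijections Z -> Z, as dicts."""
    Z = list(Z)
    return [dict(zip(Z, p)) for p in permutations(Z)]

def compose(f, g):
    """The composite f o g."""
    return {z: f[g[z]] for z in g}

def r3(Z):
    """n = 3 recipe: automorphisms f with f^3 = id."""
    ident = {z: z for z in Z}
    return [f for f in automorphisms(Z) if compose(f, compose(f, f)) == ident]

def r4(Z):
    """n = 4 recipe: involutions whose fixed-point set is empty or all of Z."""
    ident = {z: z for z in Z}
    return [f for f in automorphisms(Z)
            if compose(f, f) == ident
            and (all(f[z] == z for z in Z) or all(f[z] != z for z in Z))]

for Z, r in [(("a", "b", "c"), r3), (("a", "b", "c", "d"), r4)]:
    RZ = r(Z)
    assert len(RZ) == len(Z)      # r(Z) has n elements, pointed by the identity
    z0 = Z[0]                     # pretend Z came equipped with this basepoint
    # z |-> the unique f in r(Z) with f(z0) = z is a bijection Z -> r(Z)
    assert all(sum(1 for f in RZ if f[z0] == z) == 1 for z in Z)
print("all checks passed")
```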

I will end with a question that I don’t know the answer to: is there any way to see from this perspective on classifying spaces that such a retraction doesn’t exist in the case <semantics>n=5<annotation encoding="application/x-tex">n=5</annotation></semantics>?

by shulman (viritrilbia@gmail.com) at January 30, 2015 05:48 AM

January 29, 2015

Christian P. Robert - xi'an's og

relabelling mixtures

Another short paper about relabelling in mixtures was arXived last week by Pauli and Torelli. They refer rather extensively to a previous paper by Puolamäki and Kaski (2009), of which I was not aware, a paper attempting to get an unswitching sampler, i.e. one that does not exhibit any label switching, a concept I find most curious as I see no rigorous way to state that a sampler is not switching! This would imply spotting low posterior probability regions that the chain would cross. But I should check the paper nonetheless.

Because the G-component mixture posterior is invariant under the G! possible permutations of the labels, I am somewhat undecided as to what the authors of the current paper mean by estimating the difference between two means, like μ1−μ2, since they object to using the output of a perfectly mixing MCMC algorithm and seem to prefer the one associated with a non-switching chain. Or what they mean by estimating the probability that a given observation is from a given component, since this is exactly 1/G by the permutation invariance property. In order to identify a partition of the data, they introduce a loss function on the joint allocations of pairs of observations, a loss function that sounds quite similar to the one we used in our 2000 JASA paper on the label switching deficiencies of MCMC algorithms. (And makes me wonder why this work of ours is not deemed relevant for the approach advocated in the paper!) Still, having read this paper, which I find rather poorly written, I have no clear understanding of how the authors give a precise meaning to a specific component of the mixture distribution. Or of how the relabelling has to be conducted to avoid switching. That is, of how the authors define their parameter space. Or their loss function. Unless one falls back on the ordering of the means or of the weights, which has the drawback of not connecting with the level sets of a particular mode of the posterior distribution, meaning that imposing the constraints results in a region that contains bits of several modes.
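
A hedged toy illustration of the 1/G point (my own sketch, not from the paper): if posterior draws are symmetrized over the G! relabellings, which is what a perfectly mixing sampler effectively does, the marginal probability that any observation is allocated to any given component collapses to 1/G.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
G, n_draws = 3, 2000

# Pretend these are posterior draws of the allocation of one observation,
# produced by a sampler stuck in a single labelling (toy probabilities):
stuck_draws = rng.choice(G, size=n_draws, p=[0.7, 0.2, 0.1])

# A perfectly mixing sampler visits all G! relabellings equally often;
# emulate that by applying a random permutation of the labels to each draw.
perms = list(permutations(range(G)))
mixed_draws = np.array([perms[rng.integers(len(perms))][z] for z in stuck_draws])

for name, draws in [("stuck chain", stuck_draws), ("label-switching chain", mixed_draws)]:
    freqs = np.bincount(draws, minlength=G) / n_draws
    print(name, np.round(freqs, 2))   # the switching chain gives ~1/G for every component
```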

At some point the authors assume the data can be partitioned into K≤G groups such that there is a representative observation within each group never sharing a component (across MCMC iterations) with any of the other representatives. While this notion is label invariant, I wonder (a) whether this is possible on any MCMC outcome; (b) whether it indicates a positive or a negative feature of the MCMC sampler; and (c) what prevents the representatives from switching in harmony from one component to the next while preserving their perfect mutual exclusion… This however constitutes the advance in the paper, namely that component-dependent quantities are estimated as those associated with a particular representative. Note that the paper contains no illustration, hence the method may prove hard or even impossible to implement!
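To make the ordering-of-the-means constraint mentioned above concrete, here is a minimal generic sketch (my own illustration, in no way the method of Pauli and Torelli, with made-up draws): relabelling MCMC output by sorting the component means within each iteration recovers component-wise averages, at the price of the very drawback discussed above.

import numpy as np

# mu_draws[t, g]: draw t of the mean of component g; label switching scrambles the columns.
def relabel_by_ordering(mu_draws):
    order = np.argsort(mu_draws, axis=1)                 # permutation sorting the means in each draw
    return np.take_along_axis(mu_draws, order, axis=1)

# toy two-component chain with artificial label switching
rng = np.random.default_rng(0)
mu = np.array([-1.0, 2.0]) + 0.1 * rng.standard_normal((1000, 2))
swap = rng.random(1000) < 0.5
mu[swap] = mu[swap][:, ::-1]                             # randomly permute the labels

print(mu.mean(axis=0))                                   # about (0.5, 0.5): raw averages are meaningless
print(relabel_by_ordering(mu).mean(axis=0))              # about (-1.0, 2.0): identified by the ordering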


Filed under: Books, Statistics Tagged: arXiv, Bayesian estimation, finite mixtures, label switching, Matthew Stephens, pivot, University of Warwick

by xi'an at January 29, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

How NASA's Yearly Budget Request Comes Together
It takes a year to make, and is the starting point for all coming debate by Congress. It's the President's Budget Request, and understanding how it comes together is an important part of being an effective space advocate.

January 29, 2015 05:48 PM

CERN Bulletin

Where students turn into teachers: the eighth Inverted CERN School of Computing
For the eighth time since 2005, the CERN School of Computing (CSC) has organised its inverted school, which will take place at CERN on 23 and 24 February 2015, in the IT Auditorium (Room 31/3-004).

The idea for inverted CSCs stemmed from the observation that at regular CSCs it is common to find students in the room who know more on a particular (advanced) topic than the lecturer. So why not try and exploit this and turn the students into teachers? CSC2014 students made proposals via an electronic discussion forum, from which a programme was designed. This year’s programme focuses on challenging and innovative topics, including: the evolution of processor architectures, the growing complexity of CPUs and its impact on the software landscape, exploring clustering and data processing, the importance of message passing in high-performance computing, and the development of applications across heterogeneous systems. There will also be lectures on applied computing used in the simulation of longitudinal beam dynamics problems typical of the accelerator sector.

Attendance is free and open to everyone. Though most of the lectures are part of a series, the programme is designed so that lectures can be followed independently. Registration is not mandatory, but will allow you to obtain a copy of the full printed booklet (first registered, first served). The inverted schools are one key step in a process that's been in place for several years to identify and train young new lecturers for the main School. This year’s main school will take place in September in Kavala, Greece, and the thematic school in Split, Croatia, next May. For further information on the CERN School of Computing, see http://cern.ch/csc or contact computing.school@cern.ch.

Lecturers

Vincent Croft, National Institute for Subatomic Physics (NIKHEF), RU-Nijmegen, Netherlands
Helvi Hartmann, Frankfurt Institute for Advanced Studies, Germany
André Pereira, LIP-Minho, Braga, Portugal
Pawel Szostek, CERN, Geneva, Switzerland
Helga Timko, CERN, Geneva, Switzerland

Programme overview

Monday 23 February 2015
09:00-09:15  Welcome
09:15-09:30  Introduction to the inverted CSC
09:30-10:30  Basic concepts in computer architectures - Pawel Szostek
11:00-12:00  Numerical Methods of Longitudinal Beam Dynamics - Helga Timko
13:30-14:30  Exploring EDA - Vincent Alexander Croft
14:30-15:30  Multi-core processors and multithreading - Pawel Szostek
16:00-17:00  Numerical Challenges & Limitations of Longitudinal Beam Dynamics - Helga Timko

Tuesday 24 February 2015
10:00-11:00  Taking Raw Data Towards Analysis - Vincent Alexander Croft
11:00-12:00  Challenges of Modern High Performance Computing - Helvi Hartmann
13:30-14:30  Scalable Parallel Computing - Andre Pereira
14:30-15:30  Message Passing - Helvi Hartmann
16:00-17:00  Frameworks to Aid Code Development and Performance Portability - Andre Pereira

Alberto Pace, Director, CERN School of Computing

January 29, 2015 04:48 PM

arXiv blog

How Network Science Is Changing Our Understanding of Law

The first network analysis of the entire body of European Community legislation reveals the pattern of links between laws and their resilience to change.

January 29, 2015 03:47 PM

Symmetrybreaking - Fermilab/SLAC

Real scientists borrow ‘Big Bang’ costumes

Parkas from The Big Bang Theory recently wound up in Greenland on an actual scientific expedition.

In the season two finale of the show The Big Bang Theory, four of the show’s main characters decide to head on an expedition to the North Pole. To get a taste of the arctic cold before leaving Pasadena, they venture into a restaurant’s walk-in freezer wearing bright red parkas.

Four years later, those same parkas headed to Greenland for the real deal.

In 2013, Abigail Vieregg, a physicist at the Kavli Institute for Cosmological Physics at the University of Chicago, was charged with leading a group of four scientists on a trip to the Arctic Circle to survey sites for a potential future neutrino experiment. But there was a snag. The scientists needed to provide their own gear, and appropriate cold-weather clothing doesn’t come cheap.

Luckily, Vieregg remembered the episode of The Big Bang Theory and, even more luckily, her fellow researcher and former thesis advisor was University of California, Los Angeles, physicist David Saltzberg—who just happens to be the science advisor for the show.

The parkas used in the show were real Canada Goose coats that offered full protection from the elements. A few inquiries later and Saltzberg had the gear.

This was the only time, he says, that the show has loaned out science equipment for a real project.

“I’ve certainly lent them a lot of equipment over the years, so it only seemed fair,” he says. This has included oscilloscopes, a globe of the cosmic microwave background, and all the parts for an integrated ion trap and mass spectrometer.

“They come over here with a big truck sometimes to pick stuff up.”

It turned out that two of the coats were the right size for two of the researchers, so on they went to Greenland.

Researchers had scouted Greenland in the 1990s to check it for optical transparency when they were planning to build AMANDA, a neutrino-hunting experiment that later became IceCube and depends on picking up signals in the form of light.

However, no one had checked Greenland for radio transparency—and ultra-high energy neutrinos can be found through their radio emissions as well.

Vieregg, Saltzberg, John Kovac, and Christian Miki essentially lived in the coats as they surveyed Summit Station to determine if it would be a good place to look for ultra-high energy neutrinos.

“That means you’re eating in [the parkas], you’re doing work in them, you’re trying to fix generators in them, you’re digging holes in them,” Vieregg says. “So they don’t look as pristine as they once did.”

The team took two antennas and transmitted data from one to the other by bouncing a signal off the ground, passing it through 6 kilometers of ice. The change in brightness from one to the other indicated how radio-transparent the ice was.

The team published their paper last fall—with a special nod to their parka benefactors.

“There aren’t many physics papers that get to thank Warner Brothers,” Saltzberg says.

As for the parkas? They’re back in storage somewhere, slightly worse for the wear, waiting for their next adventure.


by Lauren Biron at January 29, 2015 02:00 PM

astrobites - astro-ph reader's digest

Two Winds

Short gamma ray bursts are rapidly flickering flashes of energetic photons (gamma rays) lasting about a second. The total energy released in photons by one of these creatures (10^49 erg) is only a little less than that of a supernova (10^51 erg), and supernovas last weeks! Often short gamma ray bursts are followed by steady afterglows of lower-energy photons (x-rays) lasting up to a few hours. There’s a good working model for what’s causing them: extremely rapid black hole accretion powering a jet. But the standard model’s got a problem: it can’t explain the x-ray afterglows. The authors of today’s paper present a solution to the x-ray problem. Since their work builds on the standard model, let’s start there.

The Standard Model

Standard model for gamma ray bursts

Figure 1: the standard model for a short gamma ray burst. 1) Two neutron stars merge (not in the diagram), 2) the super heavy neutron star collapses to a black hole (lower left corner), 3) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge in the lower left corner), 4) a jet is launched (cyan diagonal swath), 5) gamma rays are emitted when the jet slams into the interstellar medium. (Figure from me.)

The engine of the standard model is a black hole accreting a lot of matter very, very fast: about a solar mass in one second. This sounds crazy, but black hole accretion is very efficient at liberating mass energy; it’s a good way to get 10^49 erg out of a stellar mass system over 1 second. This rapid accretion launches a jet of matter at more than 0.999 times the speed of light. Okay, that sounds crazy too, but these extremely relativistic speeds are needed to explain the rapid flickering of the gamma rays. Far away from the central engine, the jet slams into some dense gas, maybe the interstellar medium. This shock produces the gamma rays we see from Earth.
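(A rough back-of-the-envelope number, mine rather than the paper’s: with a radiative efficiency of order 10%, accreting a solar mass gives E ~ 0.1 × (2×10^33 g) × (3×10^10 cm/s)^2 ≈ 2×10^53 erg, so even if only a tiny fraction of that power ends up in the jet, the ~10^49 erg of the burst is easily covered.)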

I’ve drawn a spacetime diagram of the model in Fig 1. Recall, objects moving a lot slower than light (like the observer on the far right of the diagram) trace vertical paths in spacetime diagrams; light rays trace diagonal paths.

A double neutron star merger could potentially do all of this. When Nature puts two neutron stars together you might expect a really large neutron star. But neutron stars much larger than 2 solar masses (most isolated ones are about 1.5 solar masses) can’t support themselves: they collapse to form black holes. A black hole – neutron star merger could also do all of this, but these types of mergers are probably much rarer.

The Standard Model + A Supramassive Neutron Star

Gamma ray burst model with an x-ray emitting supramassive neutron star.

Figure 2: the standard model for a short gamma ray burst, including a supramassive neutron star remnant. 1) Two neutron stars merge (not shown in the diagram), 2) the long-lasting supramassive neutron star (wiggly red blob at the lower left) emits x-rays (cyan diagonal squigglies), 3) when its rotation slows down too much, it collapses to a black hole, 4) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge in the middle of the time axis), 5) a jet is launched (cyan diagonal swath), 6) gamma rays are emitted when the jet slams into the interstellar medium. But in this model, the observer sees the x-rays before the gamma rays! (Figure from me.)

Okay, black hole formation leading to rapid accretion leading to a jet leading to gamma rays. So how do we get x-rays out of this thing? Here’s a promising hint: “really massive neutron stars can’t support their own gravity.” “Can’t” as in “not even for a little while”? Well, actually, if they’re spinning super fast, they can survive a little while. In a neutron star merger, this is exactly what happens. The final orbits of the two stars are extremely fast, and this orbital motion carries over into the rapid spin of the merged remnant. (Here’s a movie from Stephan Rosswog.) This wobbling, throbbing, spinning blob is called a supramassive neutron star, because if it weren’t spinning it would collapse to a black hole.

Furthermore, the churning and mixing of the supramassive neutron star will amplify its magnetic field so that it begins to emit light like a pulsar, only at much higher energy: x-ray light, actually. Great! So this is where the x-rays come from.

Not so fast; the timing is wrong! The x-rays we’ve just described would actually precede the burst of gamma rays. Here, I’ve added the supramassive neutron star in Fig 2. In that picture the observer doesn’t see any x-rays after the gamma rays. And we know this is wrong: we’ve seen lots of x-ray afterglows.

The Two Winds Model

Spacetime diagram of the two-winds model.

Figure 3: the two-winds model. 1) Two neutron stars merge (not shown in the diagram), 2) the supramassive neutron star (wiggly red blob at the lower left) rotates faster on the inside than on the outside causing it to heat, and it emits a slow, dense wind (light orange wobbly diagonal swath), 3) when it begins to rotate uniformly, the magnetic field built up by the earlier differential rotation drives a fast, diffuse wind (green wobbly diagonal swath), 4) when its rotation slows down too much, it collapses to a black hole, 5) the black hole (red squiggly along the time axis) accretes the remaining matter (maroon wedge toward the top of the time axis), 6) a jet is launched (cyan diagonal swath), 7) gamma rays are emitted when the jet slams into the interstellar medium, 8) x-rays are emitted when they diffuse through the slow wind, or when the shock due to the fast wind breaks through the slow wind. (Figure from Rezzolla and Kumar.)

This brings us to the new model. Rezzolla and Kumar have discovered a simple explanation for x-rays after gamma rays by turning their attention away from the central engine. Their model uses all the same bits and pieces we’ve already examined, but it also introduces faraway winds. Their knowledge of winds is based on computer simulations of supramassive neutron stars.

When the supramassive neutron star forms, right after merger, its inside is rotating faster than its outside. This differential rotation heats the supramassive neutron star by friction. While the differential rotation lasts (about 10 seconds), the hot star blows off a shell of slow, dense wind. After about 10 seconds, the supramassive neutron star begins rotating uniformly, and again emits x-rays and a fast, diffuse wind. The fast wind is driven by the same mechanism that emits x-rays, a spinning magnetic field.

When the fast wind slams into the shell of slow wind, it shocks and generates x-rays. But the x-rays are trapped. The slow wind is so dense that it’s opaque to x-rays. Just like photons generated in the Sun, the x-rays scatter and diffuse slowly through the slow wind, slower than the speed of light, slower than the shock wave that produces them. Much of the x-ray light reaches the outer surface, where it can stream freely toward Earth after the jet breaks through and forms the gamma ray burst.

In Fig 3, to the right, Rezzolla and Kumar present their model just like our spacetime diagrams above. Notice that in this model, the gamma rays are formed when the jet shocks into the slow wind, not when it shocks into the interstellar medium.

One key observation could strengthen Rezzolla and Kumar’s two winds model: an x-ray precursor to a short gamma ray burst. The spacetime diagram for their model reveals how x-rays could be visible after and before the main burst. In fact, such precursors have already been seen! But don’t get hasty; there are other ways for neutron star mergers to emit x-rays before they merge (for example shattering their crusts).

by Brett Deaton at January 29, 2015 01:14 AM

January 28, 2015

Emily Lakdawalla - The Planetary Society Blog

Commercial Crew Rivalries: Fun to Watch, Everybody Wins
Now that Boeing and SpaceX have won the high-profile privilege of carrying astronauts to the ISS, they must start making public appearances as reluctant equals.

January 28, 2015 11:30 PM

Christian P. Robert - xi'an's og

Bayesian optimization for likelihood-free inference of simulator-based statistical models

Michael Gutmann and Jukka Corander arXived this paper two weeks ago. I read part of it (mostly the extended introduction) on the flight from Edinburgh to Birmingham this morning. I find the reflection it contains on the nature of the ABC approximation quite deep and thought-provoking. Indeed, the major theme of the paper is to visualise ABC (which is admittedly shorter than “likelihood-free inference of simulator-based statistical models”!) as a regular computational method based on an approximation of the likelihood function at the observed value y_obs. This includes for example Simon Wood’s synthetic likelihood (and Wood incidentally gave a talk on his method while I was in Oxford). As well as non-parametric versions. In both cases, the approximations are based on repeated simulations of pseudo-datasets for a given value of the parameter θ, either to produce an estimate of the mean and covariance of the sampling model as a function of θ or to construct genuine estimates of the likelihood function. As assumed by the authors, this calls for a small dimension of θ. This approach actually allows for the inclusion of the synthetic approach as a lower bound on a non-parametric version.
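To make the synthetic-likelihood idea concrete, here is a minimal generic sketch (my own toy illustration, not code from Wood or from Gutmann and Corander; the simulator and summaries are placeholders): for a given θ, simulate pseudo-datasets, compute their summary statistics, fit a Gaussian to those summaries, and evaluate that Gaussian at the observed summary.

import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, observed_summary, simulate, summarize, n_sim=200, rng=None):
    # Wood-style synthetic log-likelihood at theta, built from n_sim pseudo-datasets.
    if rng is None:
        rng = np.random.default_rng()
    S = np.array([summarize(simulate(theta, rng)) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])   # small ridge for stability
    return multivariate_normal.logpdf(observed_summary, mean=mu, cov=Sigma)

# toy example: normal data summarised by (mean, log sd), theta = (location, scale)
simulate = lambda theta, rng: rng.normal(theta[0], theta[1], size=100)
summarize = lambda x: np.array([x.mean(), np.log(x.std())])

rng = np.random.default_rng(1)
s_obs = summarize(rng.normal(0.0, 1.0, size=100))
print(synthetic_loglik((0.0, 1.0), s_obs, simulate, summarize, rng=rng))
print(synthetic_loglik((3.0, 1.0), s_obs, simulate, summarize, rng=rng))    # much lower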

In the case of Wood’s synthetic likelihood, two questions came to me:

  • the estimation of the mean and covariance functions is usually not smooth because new simulations are required for each new value of θ. I wonder how frequent the case is where one can use the same basic random variates for all values of θ, because it would then give a smooth version of the above. In the other cases, provided the dimension is manageable, a Gaussian process could first be fitted before using the approximation. Or any other form of regularization.
  • no mention is made [in the current paper] of the impact of the parametrization of the summary statistics. Once again, a Box-Cox transform could be applied to each component of the summary for better proximity to the normal distribution.

When reading about a non-parametric approximation to the likelihood (based on the summaries), the questions I scribbled on the paper were:

  • estimating a complete density when using this estimate at the single point yobs could possibly be superseded by a more efficient approach.
  • the authors study a kernel that is a function of the difference or distance between the summaries and which is maximal at zero. This is indeed rather frequent in the ABC literature, but does it impact the convergence properties of the kernel estimator?
  • the estimation of the tolerance, which happens to be a bandwidth in that case, does not appear to be addressed in this paper, which could explain the very low acceptance probabilities mentioned in the paper.
  • I am lost as to why lower bounds on likelihoods are relevant here. Unless this is intended for ABC maximum likelihood estimation.

Gutmann and Corander also comment on the first point, namely the cost of producing a likelihood estimator. They therefore suggest resorting to regression and avoiding regions of low estimated likelihood. And relying on Bayesian optimisation. (Hopefully to be commented on later.)


Filed under: Books, Statistics, University life Tagged: ABC, ABC validation, Bayesian optimisation, non-parametrics, synthetic likelihood

by xi'an at January 28, 2015 11:15 PM

ZapperZ - Physics and Physicists

Charles Townes
As reported almost everywhere, we lost Nobel Laureate Charles Townes at the age of 99. Oh, how we all are standing on the shoulders of this giant of physics.

Zz.

by ZapperZ (noreply@blogger.com) at January 28, 2015 07:02 PM

Peter Coles - In the Dark

R.I.P. Charles Townes, the physicist whose work touched all our lives

Just a short post to mark the passing of a truly great physicist, Charles H. Townes, who died yesterday at the age of 99.

Charles Townes, pictured in 2013

Charles Townes, pictured in 2013

Townes came to fame for his pioneering work on the theory and applications of the maser, which he then followed up by designing the first laser. Lasers are used in many common consumer devices such as optical disk drives, laser printers, barcode scanners and fibre-optic cables. They are also used in medicine for laser surgery and various skin treatments, and in industry for cutting and welding materials.

The work of Charles Townes in physics has thus had an enormous impact on everyday life; he was awarded the Nobel Prize for his work on quantum electronics, especially lasers and masers.

It’s very sad that he didn’t quite make his century, especially because this year is the International Year of Light, which will involve many activities and celebrations relating to his work on lasers. Much of our experimental work in Physics here in the Department of Physics and Astronomy at the University of Sussex involves lasers in various ways, and we will find an appropriate occasion to celebrate the life and achievements of a truly great physicist. Until then let me just express my condolences to the friends, family and colleagues of Charles Townes on the loss not only of an eminent scientist but of an extremely nice man.

R.I.P. Charles Townes, physicist and gentleman (1915-2015).


by telescoper at January 28, 2015 06:01 PM

Christian P. Robert - xi'an's og

Glibc GHOST vulnerability

Just heard about a security vulnerability on Linux machines running Red Hat versions 5 to 7, Ubuntu 10.04 and 12.04, Debian version 7, Fedora versions 19 and older, and SUSE versions 11 and older. The vulnerability occurs through a buffer overflow in some functions of the C library Glibc, which allows remote code execution, and the fix to the problem is indicated on that NixCRaft webpage. (It is also possible to run the GHOST C code if you want to live dangerously!)
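If you only want a rough indication of which glibc your Python interpreter is linked against, the standard library can tell you (a heuristic sketch of mine, not the official test: the bug, CVE-2015-0235, was fixed upstream in glibc 2.18, but distributions backport patches, so your vendor's advisory is the only authoritative source).

import platform

lib, version = platform.libc_ver()        # e.g. ('glibc', '2.17') for the running interpreter

def as_tuple(v):
    return tuple(int(p) for p in v.split('.') if p.isdigit())

if lib == 'glibc' and as_tuple(version) < (2, 18):
    print(lib, version, '- possibly affected by GHOST, check your distribution advisory')
else:
    print(lib, version, '- upstream fix present (or not glibc at all)')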

 

 


Filed under: Linux Tagged: C, Glibc, Kubuntu 12.04, Linux, security vulnerability, Ubuntu 10.10

by xi'an at January 28, 2015 03:04 PM

Peter Coles - In the Dark

A whole lotta cheatin’ going on? REF stats revisited

telescoper:

Here’s a scathing analysis of the Research Excellence Framework. I don’t agree with many of the points raised and will explain why in a subsequent post (if and when I get the time), but I am reblogging it here in the hope that it will provoke some comments, either here or on the original post (also a wordpress site).

Originally posted on coastsofbohemia:

 

1.

The rankings produced by Times Higher Education and others on the basis of the UK’s Research Assessment Exercises (RAEs) have always been contentious, but accusations of universities’ gaming submissions and spinning results have been more widespread in REF2014 than any earlier RAE. Laurie Taylor’s jibe in The Poppletonian that “a grand total of 32 vice-chancellors have reportedly boasted in internal emails that their university has become a top 10 UK university based on the recent results of the REF”[1] rings true in a world in which Cardiff University can truthfully[2] claim that it “has leapt to 5th in the Research Excellence Framework (REF) based on the quality of our research, a meteoric rise” from 22nd in RAE2008. Cardiff ranks 5th among universities in the REF2014 “Table of Excellence,” which is based on the GPA of the scores assigned by the REF’s “expert panels” to the three…



by telescoper at January 28, 2015 01:35 PM

Lubos Motl - string vacua and pheno

The holographic Da Vinci code and quantum error correction
Since the wrong 2012 papers, I have been encouraging
Dr Polchinski, tear down this firewall!
He's an exceptional physicist who has done cool things and can do many more so the earlier he can escape from the firewall, or extinguish it, the better. Finally, there is a fun paper about the holographic code co-authored by Polchinski that doesn't mention the firewall and it is pretty cool and, in my opinion, at least morally correct:
Bulk-Boundary Duality, Gauge Invariance, and Quantum Error Correction
by Mintun, Polchinski, Rosenhaus (Santa Barbara). Their primary question is how to reconstruct bulk operators from the boundary operators in AdS/CFT. But I actually think that closely analogous claims are valid for the microstates of the black holes and the black hole interior, too.



They try to clarify some previous ideas about the relationship between the holographic dictionary in quantum gravity on one side; and quantum error correction in quantum information science on the other side.




The paper is only four two-column pages long, which is why you may want to read it in its entirety; I won't try to reproduce the technicalities here.




But a broader point is that the gauge symmetries of the boundary theory used to be considered “completely irrelevant” for the dual bulk description, but it is becoming very likely that they actually do play a role in the bulk physics, too. And the precise mechanism that allows these gauge symmetries to play a role resembles “quantum erasure correction” and “quantum secret sharing”, concepts that are known and studied by the folks in quantum information science, except that quantum gravity automatically seems to do these things in a smarter way than anything they have been able to find so far!
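For readers who have not met these quantum-information notions, here is a minimal numerical sketch of the textbook three-qutrit erasure code (a standard toy example, emphatically not something taken from the Mintun-Polchinski-Rosenhaus paper): every single qutrit of the encoded state is maximally mixed, so it carries no information about the logical state, and losing any one qutrit is correctable from the remaining two.

import numpy as np

# Three-qutrit code: |k_L> = (1/sqrt(3)) * sum_a |a, a+k, a+2k>, indices mod 3.
def logical_state(k):
    psi = np.zeros(27, dtype=complex)
    for a in range(3):
        psi[9 * a + 3 * ((a + k) % 3) + ((a + 2 * k) % 3)] = 1 / np.sqrt(3)
    return psi

def single_qutrit_density_matrix(psi, keep):
    # move the kept qutrit's index to the front, then trace out the other two
    t = np.moveaxis(psi.reshape(3, 3, 3), keep, 0).reshape(3, 9)
    return t @ t.conj().T

# an arbitrary (normalised) encoded qutrit
alpha = np.array([0.6, 0.8j, 0.0])
encoded = sum(a * logical_state(k) for k, a in enumerate(alpha))

for qutrit in range(3):
    rho = single_qutrit_density_matrix(encoded, qutrit)
    print(qutrit, np.allclose(rho, np.eye(3) / 3))   # True: maximally mixed, independent of alpha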

In particular, Mintun et al. say that the bulk operator \(\Phi(0)\) at the center of the AdS (or any point) has to commute with the spatially separated operators, thanks to the bulk locality. Via the AdS/CFT dictionary, it means that \(\Phi(0)\) has to commute with all boundary local operators \({\mathcal O} (x_i)\), those restricted to a single site.

That condition may sound weird – it looks as if the bulk had nothing to do with the boundary and required "new" degrees of freedom. Except that this troublesome conclusion isn't right. No one is saying that the bulk operators have to be constructed from the local, one-site operators on the boundary. If you allow yourself to combine operators from several sites at the boundary, you will find out that solutions exist – and there are actually many of them. Those solutions are referred to as the "precursor" operators.

Note that Polchinski has been intrigued by precursors in AdS/CFT for more than 15 years.

In the usual picture of AdS/CFT, both sides have some local or gauge symmetries. The boundary theory has an \(SU(N)\) Yang-Mills symmetry. The SUGRA description of the bulk has diffeomorphisms, gauge symmetries for the \(p\)-form gauge fields, and so on. Normally, we say that only the physical states on both sides – all the kinematical states modulo all the gauge symmetries – are supposed to match. And the unphysical states are some junk that depends on the description.

However, this paper says that even the unphysical states, especially the gauge-non-invariant states on the boundary, have some important application on the other side, especially in the bulk, because the bulk operators may be more naturally written using unphysical operators on the boundary etc. Those non-gauge-invariant operators would carry slightly more information but if you have just slightly gauge-non-invariant operators, you may find the corresponding right gauge-invariant operator and the procedure is analogous to quantum error correction.

This new paper has already made me think about various connections with ideas in my research or research of others but most of the details remain confusing. So let me sketch just two of these relationships and do so briefly.

Connections between the paper and some seemingly inequivalent ideas

First, two weeks ago, I wrote a blog post about the monster group in quantum gravity and I also mentioned a relationship between black hole microstates and the monodromies around the Euclideanized horizon. Some people think that I was completely kidding but I was not. ;-)

The strongest claim is that the volume of the whole (infinite-dimensional etc.) gauge group reduced to the event horizon will be nothing else than \(\exp(S_{BH})\), the exponentiated black hole entropy – essentially because the trip around the thermal circle is allowed to return you to the same state, up to any gauge transformation. (Well, the monodromies actually don't label all pure microstates but some particular mixed or ER-EPR states but I don't want to go into details here.) So every theory with gravitational degrees of freedom has to have a certain amount of symmetries. There are no black hole horizons in the paper by Mintun et al. but I do think that the degree of ambiguity of the precursor operators they acknowledge is mathematically analogous to the microstates of a black hole – or an event horizon, to be more general – and their work also implies something about the black hole code, too.

Second, I believe that there are intense overlooked relationships between the work by Mintun et al. (and older papers they built upon) and the formalism of Papadodimas and Raju. To be slightly more specific, look at and below equation 5 in Mintun et al. A commutator isn't zero but it is "effectively" zero – in this case, when acting on gauge-invariant states, for some reason. This is similar to comments in Papadodimas-Raju in and below equation 9 in which the commutator of \({\mathcal O}\) and \(\tilde{\mathcal O}\) "effectively" vanishes – when included in a product of operators with a small enough number of factors.

In both cases, some commutators are claimed to be "much smaller than generic" because of certain special circumstances. If the claims are basically analogous, there is a relationship between the gauge invariance of (a priori more general, gauge-non-invariant) boundary operators; and between operators that keep the pre-existing bulk (black hole...) microstate the same (because they only allow a limited number of local field excitations).

Another general point. Both your humble correspondent and (later) Polchinski have been nervous about the main ambiguity in the Papadodimas-Raju construction. There are many ways how the black hole interior operators may be represented in terms of the boundary CFT operators. There's this state dependence (which I am not anxious about but Polchinski is) but it is not the only source of ambiguity: one has to pick a convention on ensembles and related things, too.

Now, Mintun, Polchinski, and Rosenhaus see something similar in their construction. In the boundary CFT (the same side), they also see many precursor operators to represent the bulk fields. They're equivalent when acting on gauge-invariant operators. However, it could be an important unexpected point that
the black hole interior operators are dual to gauge-non-invariant operators in the boundary CFT
and if that's the case, much of the new content in the papers by Mintun et al. and Papadodimas-Raju could actually be the same, at least morally! If the quote above is right, we face a potential "intermediate" answer to the question whether the AdS/CFT correspondence contains predictions for the observations performed by the infalling observer. It may be the case that the precise predictions for the black hole interior may be extracted from the boundary CFT – but only if you add something like a "gauge choice", too.

Many details remain confusing, as you can see, but I do think that we are getting closer to a picture that teaches us a great deal about the character of the “holographic code” in quantum gravity, as well as about the way the black hole interior (and its local operators) is interpreted in terms of the boundary CFT (a non-gravitational or bulk-non-local description of the microstates). Operators that vanish effectively or approximately thanks to gauge symmetries, and the boundary gauge symmetries themselves, seem to play a vital role in this final picture, which is going to be found rather soon.

And that's my prophesy.

by Luboš Motl (noreply@blogger.com) at January 28, 2015 12:28 PM

January 27, 2015

John Baez - Azimuth

Trends in Reaction Network Theory

For those who have been following the posts on reaction networks, this workshop should be interesting! I hope to see you there.

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

Description

This workshop focuses on current and new trends in mathematical reaction network theory, which we consider broadly to be the theory describing the behaviour of systems of (bio)chemical reactions. In recent years, new interesting approaches using theory from dynamical systems, stochastics, algebra and beyond, have appeared. We aim to provide a forum for discussion of these new approaches and to bring together researchers from different communities.

Structure

The workshop starts in the morning of Wednesday, July 1st, and finishes at lunchtime on Friday, July 3rd. In the morning there will be invited talks, followed by contributed talks in the afternoon. There will be a reception and poster session Wednesday in the afternoon, and a conference dinner Thursday. For those participants staying Friday afternoon, a sightseeing event will be arranged.

Organization

The workshop is organized by the research group on Mathematics of Reaction Networks at the Department of Mathematical Sciences, University of Copenhagen. The event is sponsored by the Danish Research Council, the Department of Mathematical Sciences and the Dynamical Systems Interdisciplinary Network, which is part of the UCPH Excellence Programme for Interdisciplinary Research.

Confirmed invited speakers

• Nikki Meskhat (North Carolina State University, US)

• Alan D. Rendall (Johannes Gutenberg Universität Mainz, Germany)

• János Tóth (Budapest University of Technology and Economics, Hungary)

• Sebastian Walcher (RWTH Aachen, Germany)

• Gheorghe Craciun (University of Wisconsin, Madison, US)

• David Doty (California Institute of Technology, US)

• Manoj Gopalkrishnan (Tata Institute of Fundamental Research, India)

• Michal Komorowski (Institute of Fundamental Technological Research, Polish Academy of Sciences, Poland)

• John Baez (University of California, Riverside, US)

Important dates

Abstract submission for posters and contributed talks: March 15, 2015.

Notification of acceptance: March 26, 2015.

Registration deadline: May 15, 2015.

Conference: July 1-3, 2015.

The organizers

The organizers are Elisenda Feliu and Carsten Wiuf at the Department of Mathematical Sciences of the University of Copenhagen.

They’ve written some interesting papers on reaction networks, including some that discuss chemical reactions with more than one stationary state. This is a highly nonlinear regime that’s very important in biology:

• Elisenda Feliu and Carsten Wiuf, A computational method to preclude multistationarity in networks of interacting species, Bioinformatics 29 (2013), 2327-2334.

Motivation. Modeling and analysis of complex systems are important aspects of understanding systemic behavior. In the lack of detailed knowledge about a system, we often choose modeling equations out of convenience and search the (high-dimensional) parameter space randomly to learn about model properties. Qualitative modeling sidesteps the issue of choosing specific modeling equations and frees the inference from specific properties of the equations. We consider classes of ordinary differential equation (ODE) models arising from interactions of species/entities, such as (bio)chemical reaction networks or ecosystems. A class is defined by imposing mild assumptions on the interaction rates. In this framework, we investigate whether there can be multiple positive steady states in some ODE models in a given class.

Results. We have developed and implemented a method to decide whether any ODE model in a given class cannot have multiple steady states. The method runs efficiently on models of moderate size. We tested the method on a large set of models for gene silencing by sRNA interference and on two publicly available databases of biological models, KEGG and Biomodels. We recommend that this method is used as (i) a pre-screening step for selecting an appropriate model and (ii) for investigating the robustness of non-existence of multiple steady state for a given ODE model with respect to variation in interaction rates.

Availability and Implementation. Scripts and examples in Maple are available in the Supplementary Information.

• Elisenda Feliu, Injectivity, multiple zeros, and multistationarity in reaction networks, Proceedings of the Royal Society A.

Abstract. Polynomial dynamical systems are widely used to model and study real phenomena. In biochemistry, they are the preferred choice for modelling the concentration of chemical species in reaction networks with mass-action kinetics. These systems are typically parameterised by many (unknown) parameters. A goal is to understand how properties of the dynamical systems depend on the parameters. Qualitative properties relating to the behaviour of a dynamical system are locally inferred from the system at steady state. Here we focus on steady states that are the positive solutions to a parameterised system of generalised polynomial equations. In recent years, methods from computational algebra have been developed to understand these solutions, but our knowledge is limited: for example, we cannot efficiently decide how many positive solutions the system has as a function of the parameters. Even deciding whether there is one or more solutions is non-trivial. We present a new method, based on so-called injectivity, to preclude or assert that multiple positive solutions exist. The results apply to generalised polynomials and variables can be restricted to the linear, parameter-independent first integrals of the dynamical system. The method has been tested in a wide range of systems.

You can see more of their papers on their webpages.
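As a purely illustrative aside of mine (not taken from the papers above): the multistationarity these methods analyse already appears in the textbook Schlögl network A + 2X ⇌ 3X, X ⇌ B. Under mass-action kinetics, and with rate constants I have chosen purely for convenience, the concentration x obeys a cubic ODE with three positive steady states, two of them stable:

import numpy as np
from scipy.integrate import solve_ivp

# Schlögl model with A and B held at unit concentration:
#   dx/dt = k1*x**2 - k2*x**3 - k4*x + k3
# Rates chosen so that dx/dt = -(x - 1)(x - 2)(x - 3): steady states at 1, 2, 3,
# with 1 and 3 stable and 2 unstable.
k1, k2, k3, k4 = 6.0, 1.0, 6.0, 11.0

def rhs(t, x):
    return k1 * x**2 - k2 * x**3 - k4 * x + k3

for x0 in (1.8, 2.2):
    sol = solve_ivp(rhs, (0.0, 50.0), [x0], rtol=1e-8)
    print('x(0) =', x0, '->  x(50) ~', round(float(sol.y[0, -1]), 3))   # ~1.0 and ~3.0

Nearby initial conditions thus end up at different positive steady states, which is the kind of behaviour the methods above aim to preclude or assert.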


by John Baez at January 27, 2015 10:44 PM

Quantum Diaries

Looking Forward to 2015: Analysis Techniques

With 2015 a few weeks old, it seems like a fine time to review what happened in 2014 and to look forward to the new year and the restart of data taking. Among many interesting physics results, to name just one, LHCb saw its 200th publication, a test of lepton universality. With protons about to enter the LHC, and the ALICE and LHCb detectors recording muon data from transfer line tests between the SPS and LHC (see also here), the start of data taking is almost upon us. For some implications, see Ken Bloom’s post here. Will we find supersymmetry? Split Higgs? Nothing at all? I’m not going to speculate on that, but I would like to review two recent LHCb results and a few of the analysis techniques which enabled them.

The first result I want to discuss is the \(Z(4430)^{-}\). The first evidence for this state came from the Belle Collaboration in 2007, with subsequent studies in 2009 and in 2013. BaBar also searched for the state, and while they did not see it, they did not rule it out.

The LHCb collaboration searched for this state using the specific decay mode \(B^0\to \psi' K^{+} \pi^{-}\), with the \(\psi'\) decaying to two muons. For more reading, see the nice writeup from earlier in 2014. As in the Belle analyses, which reconstructed the final-state \(\psi'\) using muons or electrons, the trick here is to look for bumps in the \(\psi'\pi^{-}\) mass distribution. If a peak appears which is not described by the conventional two- and three-quark states, the mesons and baryons we know and love, it must be from a state involving a \(c \overline{c}d\overline{u}\) quark combination. The search is performed in two ways: a model-dependent search, which looks at the \(K\pi\) and \(\psi'\pi\) invariant mass and decay angle distributions, and a “model-independent” search, which looks for structure in the \(K\pi\) system induced by a resonance in the \(\psi'\pi\) system and does not invoke any exotic resonances.

At the end of the day, it is found in both cases that the data are not described without including a resonance for the \(Z(4430)^-\).

Now, it appears that we have a resonance on our hands, but how can we be sure? In the context of the aforementioned model-dependent analysis, the amplitude for the \(Z(4430)^{-}\) is modeled as a Breit-Wigner amplitude, which is a complex number. If this amplitude is plotted in the complex plane as a function of the invariant mass of the resonance, a circular shape is traced out. This is characteristic of a resonance. Therefore, by fitting the real and imaginary parts of the amplitude in six bins of \(\psi'\pi\) invariant mass, the shape can be directly compared to that expected of a resonance. That's exactly what's done in the plot below:

The argand plane for the Z(4430)- search. Units are arbitrary.

What is seen is that the data (black points) roughly follow the outlined circular shape given by the Breit-Wigner resonance (red); the outliers are pulled away by detector effects. The shape quite clearly follows the circular behaviour characteristic of a resonance. Such a diagram is called an Argand diagram.
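To see why a Breit-Wigner amplitude traces out a circle, here is a small stand-alone sketch with generic numbers (my own illustration, not the LHCb fit): evaluate \(A(m) = 1/(m_0^2 - m^2 - i m_0 \Gamma)\) on a grid of masses and look at its real and imaginary parts; the points all lie on a circle through the origin, traversed counter-clockwise as the mass sweeps through the resonance.

import numpy as np

m0, gamma = 4.43, 0.18          # illustrative mass and width in GeV, roughly the Z(4430) ballpark

def bw_amplitude(m):
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)

m = np.linspace(m0 - 0.4, m0 + 0.4, 9)
A = bw_amplitude(m)
for mi, ai in zip(m, A):
    print('m = %.3f GeV   Re A = %+.3f   Im A = %+.3f' % (mi, ai.real, ai.imag))

# the points sit on a circle of centre c and radius |c|, with c = i/(2*m0*gamma)
c = 1j / (2 * m0 * gamma)
print(np.allclose(np.abs(A - c), np.abs(c)))    # True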

 

Another analysis technique to identify resonances was used to find the two newest particles by LHCb:

Depiction of the two Xi_b resonances found by the LHCb Collaboration. Credit to Italic Pig (http://italicpig.com/blog/)

Or perhaps seen as

 

Xi_b resonances, depicted by Lison Bernet.

Any way that you draw them, the two new particles, the \(\Xi_b'^-\) and \(\Xi_b^{*-}\), were seen by the LHCb collaboration a few months ago. Notably, the paper was released almost 40 years to the day after the discovery of the \(J/\psi\) was announced, sparking the November Revolution and the understanding that mesons and baryons are composed of quarks. The \(\Xi_b'^-\) and \(\Xi_b^{*-}\) baryons are yet another example of the quark model at work. The two particles are shown in \(\delta m \equiv m_{candidate}(\Xi_b^0\pi_s^-)-m_{candidate}(\Xi_b^0)-m(\pi)\) space below.

\(\Xi_b’^-\) and \(\Xi_b^{*-}\) mass peaks shown in \(\delta(m_{candidate})\) space.

Here, the search is performed by reconstructing \(\Xi_b^0 \pi^-_s\) decays, where the \(\Xi_b^0\) decays to \(\Xi_c^+\pi^-\), and \(\Xi_c^+\to p K^- \pi^+\). The label \(\pi_s\) is used only to distinguish this pion from the others in the decay chain. The peaks are clearly visible. Now, we know that there are two resonances, but how do we determine whether or not the particles are the \(\Xi_b'^-\) and \(\Xi_b^{*-}\)? The answer is to fit what are called the helicity distributions of the two particles.

To understand the concept, let’s consider a toy example. First, let’s say that particle A decays to B and C, as \(A\to B C\). Now, let’s let particle C also decay, to particles D and F, as \(C\to D F\). In the frame where A decays at rest, the decay looks something like the following picture.

Simple Model of \(A\to BC\), \(C\to DF\)

There should be no preferential direction for B and C to fly off in if A decays at rest, and they will emerge back to back by conservation of momentum. Likewise, the same would be true if we jump to the frame where C is at rest; D and F would have no preferential decay direction. Therefore, we can play a trick: take the picture above and, exactly at the point where C decays, jump to its rest frame. We can then measure the directions of the outgoing particles and define a helicity angle \(\theta_h\) as the angle between C’s flight direction in A’s rest frame and D’s flight direction in C’s rest frame. I’ve shown this in the picture below.

Helicity Angle Definition for a simple model
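To make this concrete, here is a rough stand-alone sketch of how such an angle can be computed from four-momenta (a generic illustration of mine, with made-up vectors, not the LHCb analysis code): working in A’s rest frame, boost D into C’s rest frame and take the angle between the boosted D momentum and C’s momentum.

import numpy as np

def boost_to_rest_frame(p, frame):
    # boost four-vector p = (E, px, py, pz) into the rest frame of `frame` (same starting frame)
    E, vec = frame[0], frame[1:]
    m = np.sqrt(E**2 - vec @ vec)                  # invariant mass of `frame`
    beta, gamma = vec / E, E / m
    bp = beta @ p[1:]
    E_new = gamma * (p[0] - bp)
    vec_new = p[1:] + ((gamma - 1.0) * bp / (beta @ beta) - gamma * p[0]) * beta
    return np.concatenate(([E_new], vec_new))

def cos_helicity_angle(p_C, p_D):
    # both four-momenta given in A's rest frame
    d = boost_to_rest_frame(p_D, p_C)[1:]          # D's momentum in C's rest frame
    c = p_C[1:]                                    # C's flight direction in A's rest frame
    return (c @ d) / (np.linalg.norm(c) * np.linalg.norm(d))

p_C = np.array([5.0, 0.0, 0.0, 3.0])               # made-up C, flying along +z, mass 4 GeV
p_D = np.array([3.0, 0.5, 0.2, 2.0])               # made-up daughter D
print(cos_helicity_angle(p_C, p_D))                # about 0.42 for these numbers

Filling a histogram of this quantity over many candidates gives the helicity distributions fitted below.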

If there is no preferential direction of the decay, we would expect a flat distribution of \(\theta_h\). The important caveat here is that I’m not including anything about angular momentum, spin or otherwise, in this argument. We’ll come back to that later. Now, we can identify A as the \(\Xi_b'^-\) or \(\Xi_b^{*-}\) candidate, C as the \(\Xi_b^0\), and D as the \(\Xi_c^+\) used in the analysis. The actual data are shown below.

Helicity angle distributions for the \(\Xi_b'\) and \(\Xi_b^*\) candidates (upper and lower, respectively).

While it appears that the lower-mass distribution may show variations, it is statistically consistent with being flat. The extra power of such an analysis is that once we consider the angular momentum of the particles themselves, there are implied selection rules which alter the distributions above, and which allow particle spin hypotheses to be excluded or validated simply from the distribution shape. This is the rationale for the extra fit in the plot above. As it turns out, both distributions being flat allows for the identification of the \(\Xi_b'^-\) and the \(\Xi_b^{*-}\), but does not allow other spins to be conclusively ruled out.

With the restart of data taking at the LHC almost upon us (go look on Twitter for #restartLHC), if you see a claim for a new resonance, keep an eye out for Argand Diagrams or Helicity Distributions.

by Adam Davis at January 27, 2015 07:11 PM

astrobites - astro-ph reader's digest

Shedding Light on Galaxy Formation
Title: Galaxies that Shine: radiation hydrodynamical simulations of disk galaxies

Authors: Joakim Rosdahl, Joop Schaye, Romain Teyssier, Oscar Agertz

First Author’s Institution: Leiden Observatory, Leiden University, Leiden, The Netherlands

Paper Status: Submitted to MNRAS

Computational simulations have proven invaluable in understanding the formation and evolution of galaxies. When the first galaxies were made in simulations, they formed… too well. Gas cooled too much and too fast, and these galaxies formed way too many stars. These first simulations, however, missed a whole host of physics that today fall under the umbrella of “feedback” processes. Feedback encompasses a wide range of really interesting astrophysics, including radiation from stars heating and ionizing surrounding gas, thermal and kinetic energy injection from supernova explosions,  heating from active galactic nuclei (AGN), and the impact of AGN jets. Among other things, these processes can drive galactic winds, blowing gas out of galaxies, and slowing down star formation.

Including every type of feedback in a simulation is a great way to produce realistic galaxies, but is unfortunately computationally expensive and impossible to do perfectly. In fact, much of the relevant physics occurs on scales far smaller than the simulation resolution, and must be addressed with what is called “sub-grid” models. Today, the game of producing realistic galaxies in simulations boils down to figuring out the right physics to include, while minimizing computational costs. Much progress has been made in this field, with one giant exception: radiation. Properly accounting for radiation is expensive and complex; the nearly universal solution is to make assumptions about how photons propagate through gas, so it doesn’t have to be computed directly. The authors of this paper present the first galaxy-scale simulations of “radiation hydrodynamics”, or hydrodynamic simulations that directly compute the radiative transfer of photons, and their feedback onto the galaxy.

Feedback, Feedback, Feedback

The authors produce galaxy simulations using RAMSES-RT, an adaptive mesh refinement (AMR) code, based upon the RAMSES code, that includes a nearly first-principles treatment of radiation. Their treatment of radiation breaks photons into five energy bins: infrared, optical, and three ultraviolet bins separated by the hydrogen and helium ionization energies. These act on the gas through three primary physical processes: the ionization and heating of the gas through interactions with hydrogen and helium, momentum transfer between the photons and the gas (radiation pressure), and pressure through interactions with dust (including the effects of light scattering off of dust). In addition, they include prescriptions for star formation and supernova feedback, radiative heating and cooling, and chemistry to accurately track the abundances of hydrogen and helium and their ionization states. The photons are produced every timestep in the simulation from “star particles” (representations of groups of stars in the simulated galaxy); the number of photons and their energies are determined by the given star particle’s mass and size.
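As a rough idea of what “separated by hydrogen and helium ionization energies” means (my own sketch; the exact bin edges used in RAMSES-RT may differ), the ultraviolet bins would be delimited by the HI, HeI and HeII ionization thresholds at 13.60, 24.59 and 54.42 eV:

# illustrative ultraviolet bin edges in eV (HI, HeI, HeII ionization thresholds)
UV_BIN_EDGES_EV = [13.60, 24.59, 54.42, float('inf')]

def uv_bin(photon_energy_ev):
    # return which UV bin a photon falls into, or None if it belongs to the IR/optical bins
    if photon_energy_ev < UV_BIN_EDGES_EV[0]:
        return None
    for i in range(3):
        if photon_energy_ev < UV_BIN_EDGES_EV[i + 1]:
            return i

print(uv_bin(15.0), uv_bin(30.0), uv_bin(60.0))    # 0 1 2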


Fig.1: The radiation flux from the G9 galaxy for all five photon energy bins. Shown is the face-on (top) and edge-on (bottom) views of the galaxy. (Source: Fig. 1 of Rosdahl et. al. 2015)

 


Fig. 2: Table of each type of simulation run, showing which feedback types were included in each. These are (in order) supernova feedback, heating from radiation, momentum transfer between photons and gas (radiation pressure), and radiation pressure on dust (Source: Table 3 of Rosdahl et. al. 2015)

The authors include all of this physics in simulations of three disk galaxies (labelled G8, G9, G10), containing roughly 10^8, 10^9, and 10^10 solar masses of gas + stars, embedded in dark matter haloes of about 10^10, 10^11, and 10^12 solar masses, respectively. The heaviest of the three is comparable to the Milky Way. Fig. 1 above shows face-on and edge-on views of the radiation flux from the G9 galaxy in all five photon energy bins. They evolve each galaxy for 500 Myr, examining how turning on/off various feedback processes (namely supernova, radiation heating, and radiation pressure) affects its evolution. Fig. 2 gives the combination of physics in each simulation type and their labels.

Star Formation and Galactic Winds

Although they produce a thorough investigation into many of the details of their radiation feedback, this astrobite will focus on only two effects: how the radiation affects the formation of stars, and how it drives galactic winds. Fig. 3 presents the total star formation (in stellar masses) and star formation rate for the G9 galaxy in six different simulations. The labels in Fig. 3 are given in Fig. 2. The dashed lines give the mass outflow rate from the galactic winds as measured outside the galaxy. On one extreme, the simulation with no feedback converts gas into stars too efficiently, and drives no galactic winds. On the other, the full feedback simulation (dark red) produces the fewest stars but, interestingly, has weaker galactic winds than some of the other simulations. The three thick lines in the top of Fig. 3 give the supernova + radiation feedback simulations. Compared to the supernova-only simulation (blue), the radiative heating feedback provides the dominant change, while including radiation pressure and dust pressure makes only small changes to the total star formation.


Fig. 3: For galaxy G9, shown is the total mass of stars (top), the star formation rate (bottom), and the galactic wind outflow rate (bottom, dashed) for each of the simulations listed in Fig. 2. (Source: FIg. 4 of Rosdahl et. al. 2015)

 


Fig. 4: Galactic winds from the G9 galaxy with supernova feedback only (left) and with supernova and radiation feedback (right). The images show the surface (column) density of hydrogen. (Source: Fig. 5 of Rosdahl et. al. 2015)

Fig. 4 shows the outflows of the G9 galaxy at the end of the simulation run, with SN feedback only on the left, and full feedback on the right. Although the two are morphologically quite different, the authors show that the difference in total mass loss from galactic winds between the two simulations is minimal (see Fig. 3). In fact, they show that the full radiation feedback model produces slightly weaker winds, a byproduct of slowing down star formation in the galaxy.

The Future of Galaxy Evolution

The authors have shown that radiative feedback plays an important role in galaxy formation and evolution. In this work, they sought to characterize the effects of supernova plus radiative feedback versus supernova feedback alone. In future work, the study of radiation feedback on various scales, from small slices of the galactic disk to larger galaxies, and the inclusion of AGN feedback in these simulations, will be important in piecing together a complete understanding of galaxy formation.

by Andrew Emerick at January 27, 2015 06:32 PM

Peter Coles - In the Dark

The Map is not the Territory

I came across this charming historical map while following one of my favourite Twitter feeds “@Libroantiguo” which publishes fascinating material about books of all kinds, especially old ones. It shows the location of London coffee houses and is itself constructed in the shape of a coffee pot:

Coffee
Although this one is obviously just a bit of fun, maps like this are quite fascinating, not only as practical guides to navigating a transport system but also because they often stand up very well as works of art. It’s also interesting how they evolve with time  because of changes to the network and also changing ideas about stylistic matters.

A familiar example is the London Underground or Tube map. There is a fascinating website depicting the evolutionary history of this famous piece of graphic design. Early versions simply portrayed the railway lines inset into a normal geographical map which made them rather complicated, as the real layout of the lines is far from regular. A geographically accurate depiction of the modern tube network is shown here which makes the point:

tubegeo

A revolution occurred in 1933 when Harry Beck compiled the first “modern” version of the map. His great idea was to simplify the representation of the network around a single unifying feature. To this end he turned the Central Line (in red) into a straight line travelling left to right across the centre of the page, only changing direction at the extremities. All other lines were also distorted to run basically either North-South or East-West and produce a regular pattern, abandoning any attempt to represent the “real” geometry of the system but preserving its topology (i.e. its connectivity).  Here is an early version of his beautiful construction:

Note that although this a “modern” map in terms of how it represents the layout, it does look rather dated in terms of other design elements such as the border and typefaces used. We tend not to notice how much we surround the essential things, which tend to last, with embellishments that date very quickly.

More modern versions of this map that you can get at tube stations and the like rather spoil the idea by introducing a kink in the central line to accommodate the complexity of the interchange between Bank and Monument stations as well as generally buggering about with the predominantly  rectilinear arrangement of the previous design:

I quite often use this map when I’m giving popular talks about physics. I think it illustrates quite nicely some of the philosophical issues related to theoretical representations of nature. I think of theories as being like maps, i.e. as attempts to make a useful representation of some aspects of external reality. By useful, I mean the things we can use to make tests. However, there is a persistent tendency for some scientists to confuse the theory and the reality it is supposed to describe, especially a tendency to assert there is a one-to-one relationship between all elements of reality and the corresponding elements in the theoretical picture. This confusion was stated most succinctly by the Polish scientist Alfred Korzybski in his memorable aphorism:

The map is not the territory.

I see this problem written particularly large with those physicists who persistently identify the landscape of string-theoretical possibilities with a multiverse of physically existing domains in which all these are realised. Of course, the Universe might be like that but it’s by no means clear to me that it has to be. I think we just don’t know what we’re doing well enough to know as much as we like to think we do.

A theory is also surrounded by a penumbra of non-testable elements, including those concepts that we use to translate the mathematical language of physics into everyday words. We shouldn’t forget that many equations of physics have survived for a long time, but their interpretation has changed radically over the years.

The inevitable gap that lies between theory and reality does not mean that physics is a useless waste of time; it just means that its scope is limited. The Tube map is not complete or accurate in all respects, but it’s excellent for what it was made for. Physics goes down the tubes when it loses sight of its key requirement: to be testable.

In any case, an attempt to make a grand unified theory of the London Underground system would no doubt produce a monstrous thing so unwieldy that it would be useless in practice. I think there’s a lesson there for string theorists too…

Now, anyone for a game of Mornington Crescent?

 


by telescoper at January 27, 2015 06:22 PM

Symmetrybreaking - Fermilab/SLAC

Of symmetries, the strong force and Helen Quinn

Scientist Helen Quinn has had a significant impact on the field of theoretical physics.

Modern theoretical physicists spend much of their time examining the symmetries governing particles and their interactions. Researchers describe these principles mathematically and test them with sophisticated experiments, leading to profound insights about how the universe works.

For example, understanding symmetries in nature allowed physicists to predict the flow of electricity through materials and the shape of protons. Spotting imperfect symmetries led to the discovery of the Higgs boson.

One researcher who has used an understanding of symmetry in nature to make great strides in theoretical physics is Helen Quinn. Over the course of her career, she has helped shape the modern Standard Model of particles and interactions—and outlined some of its limitations. With various collaborators, she has worked to establish the deep mathematical connection between the fundamental forces of nature, pondered solutions to the mysterious asymmetry between matter and antimatter in the cosmos and helped describe properties of the particle known as the charm quark before it was discovered experimentally.

“Helen's contributions to physics are legendary,” says Stanford University professor of physics Eva Silverstein. Silverstein first met Quinn as an undergraduate in 1989, then became her colleague at SLAC in 1997.

Quinn’s best-known paper is one she wrote with fellow theorist Roberto Peccei in 1977. In it, they showed how to solve a major problem with the strong force, which governs the structure of protons and other particles. The theory continues to find application across particle physics. “That's an amazing thing: that an idea you had almost 40 years ago is still alive and well,” says Peccei, now a professor emeritus of physics at the University of California, Los Angeles.

GUTs, glory, and broken symmetries

Quinn was born in Australia in 1943 and emigrated with her family to the United States while she was still a university student. For that reason, she says, “I had a funny path through undergraduate school.”

When she moved to Stanford University, she had already spent two years studying at the University of Melbourne to become a meteorologist with support from the Australian Weather Bureau, and needed to select an academic major that wouldn’t force her to start over again. That program happened to be physics.

With the longest linear accelerator in the world nearing completion next door at what is now called SLAC National Accelerator Laboratory, Stanford was an auspicious place to study particle physics, so Quinn stayed on to finish her PhD. “Really, the beginning was the fact that particle physics was bubbling at that time at Stanford, and that's where I got hooked on it,” she says. She entered the graduate program when women comprised only about 2 percent of all physics students in American PhD programs.

After finishing her PhD, Quinn traveled to Germany for postdoctoral research at the DESY laboratory before returning to the United States. She taught high school in Boston briefly before landing a position at Harvard University. While there, she collaborated with theorists Steven Weinberg and Howard Georgi to work on something known as “grand unified theories,” whimsically nicknamed GUTs. GUT models were attempts to bring together the three forces described by quantum physics: electromagnetism, which holds together atoms, and the weak and strong forces, which govern nuclear structure. (There still is no quantum theory of gravity, the fourth fundamental force.)

“Her paper with Howard Georgi and Steve Weinberg on grand unified theories was the first paper that made sense of grand unified theories,” Peccei says.

Quinn returned to SLAC during a leave of absence from Harvard, where she connected with Peccei. The two of them had frequent conversations with Weinberg and Gerard ’t Hooft, both of whom were visiting SLAC at that time. (Both Weinberg and ’t Hooft later won Nobel Prizes for their work on symmetries in particle physics.)

At that time, many theorists were engaged in understanding the strong force, which governs the structure of particles such as protons, using a theory called quantum chromodynamics, or QCD.  (The name “chromodynamics” refers to the “color charge” of quarks, which is analogous to electric charge.)

The problem: QCD predicted some results at odds with experiment, including an electrical property of the neutron known as its electric dipole moment.

Quinn and Peccei realized that they could make that problem go away if one type of quark had no mass. While that was at odds with reality, it hadn’t always been so, Quinn says: “That led me to think, well, in the very early universe when it's hot …quarks are massless.”

By adding a new symmetry once quarks acquired their masses from the Higgs field, they could resolve the problem with QCD. As soon as their paper came out, Weinberg realized the theory also made a prediction that Quinn and Peccei had not noticed: the axion, which might comprise some or all of the mysterious dark matter binding galaxies together. (Independently, Frank Wilczek also found the axion implicit in the Peccei-Quinn theory.) Quinn laughs now over how obvious she says it seems in retrospect.

Experiments and education

After her collaboration with Peccei, Quinn worked extensively with experimentalists and other theorists at SLAC to understand the interactions involving the bottom quark. Studying particles containing bottom quarks is one of the best ways to investigate the symmetries built into QCD, which in turn may offer clues as to why there’s a lot more matter than antimatter in the cosmos.

Along the way, Quinn was elected a member of the National Academy of Sciences, and has received a number of prestigious prizes, including the J.J. Sakurai Prize for theoretical physics and the Dirac Medal from the International Center for Theoretical Physics. She also served as president of the American Physical Society, the premier professional organization for physicists in the United States.

Since retiring in 2010, Quinn has turned her attention full-time to one of her long-time passions: science education at the kindergarten through high-school level. As part of the board on science education at the National Academy of Sciences, she headed the committee that produced the document “A Framework for K-12 Science Education” in 2011.

“The overarching goal is that most students should have the experience of learning and understanding, not just a bunch of disconnected facts,” she says.

Instead of enduring perpetual tests as required under current policy, she wants students to focus on learning “the way science works: how to think about problems as scientists do and analyze data and evidence and draw conclusions based on evidence.” Peccei calls her “unique among very well-known physicists” for this later work.

“She's devoted a tremendous amount of time to physics education, and has been really a champion of that at a national level,” he says.

On top of that, the Peccei-Quinn model remains a powerful tool for theorists and “a good candidate to solve some of the outstanding problems in particle physics and cosmology,” Silverstein says. Along with dark matter, these include Silverstein’s own research in string theory and early universe inflation.

As with her efforts on behalf of education, the impact of Quinn’s physics research is in how it lays the foundation for others to build on. There’s a certain symmetry in that.

 


 

by Matthew R. Francis at January 27, 2015 06:03 PM

Clifford V. Johnson - Asymptotia

The Visitors
[Image: Jim Gates giving his CAMS lecture at USC, 26 January 2015]

Yesterday I sneaked on to campus for a few hours. I'm on family leave (as I mentioned earlier) and so I've not been going to campus unless I more or less have to. Yesterday was one of those days that I decided was a visit day and so visit I did. I went to say hi to a visitor to the Mathematics Department, Sylvester James Gates Jr., an old friend who I've known for many years. He was giving the CAMS (Center for Applied Mathematical Sciences) distinguished lecture with the title "How Attempting To Answer A Physics Question Led Me to Graph Theory, Error-Correcting Codes, Coxeter Algebras, and Algebraic Geometry". You can see him in action in the picture above. I was able to visit with Jim for a while (lunch with him and CAMS director Susan Friedlander), and then hear the talk, which was very interesting. I wish he'd had time to say more on all the connections he mentioned in the title, but what he did explain sounded rather interesting. It is all about the long unsolved problem of finding certain kinds of (unconstrained, off-shell) representations of extended supersymmetry. (Supersymmetry is, you may know, a symmetry that [...] Click to continue reading this post

by Clifford at January 27, 2015 05:34 PM

January 26, 2015

arXiv blog

How a Box Could Solve the Personal Data Conundrum

Software known as a Databox could one day both safeguard your personal data and sell it, say computer scientists.

 

January 26, 2015 11:31 PM

astrobites - astro-ph reader's digest

Evryscope, Greek for “wide-seeing”
Title: Evryscope Science: Exploring the Potential of All-Sky Gigapixel-Scale Telescopes

Authors: Nicholas M. Law et al.

First Author’s Institution: University of North Carolina at Chapel Hill

How fantastic would it be to image the entire sky, every few minutes, every night, for a series of years? The science cases for such surveys —in today’s paper they are called All-Sky Gigapixel Scale Surveys— are numerous, and span a huge range of astronomical topics. Just to begin with, such surveys could detect transiting giant planets, sample Gamma Ray Bursts and nearby Supernovae, and catch a wealth of other rare and/or unexpected transient events that are further described in the paper.

Evryscope is a telescope that sets out to take just such a minute-by-minute movie of the sky accessible to it. It is designed as an array of extremely wide-angle telescopes, in contrast to the traditional meaning of the word “tele-scope” (Greek for “far-seeing”): “Evryscope” is Greek for “wide-seeing”. The array is currently being constructed by the authors at the University of North Carolina at Chapel Hill, and is scheduled to be deployed at the Cerro Tololo Inter-American Observatory (CTIO) in Chile later this year.

But wait, aren’t there large sky surveys out there that are already patrolling the sky a few times a week? Yes, there are! But a bit differently. There is for example the tremendously successful Sloan Digital Sky Survey (SDSS— see Figure 1, and read more about SDSS-related Astrobites here, here, here), which has paved the way for numerous other surveys such as Pan-STARRS, and the upcoming Large Synoptic Survey Telescope (LSST). These surveys are all designed around a similar concept: they utilize a single large-aperture telescope that repeatedly observes few-degree-wide fields to achieve deep imaging. Then the observations are tiled together to cover large parts of the sky several times a week.


Figure 1: The Sloan Digital Sky Survey Telescope, a 2.5m telescope that surveys large areas of the available sky a few times a week. The Evryscope-survey concept is a bit different, valuing the continuous coverage of almost the whole available sky over being able to see faint far-away objects. Image from the SDSS homepage.

The authors of today’s paper note that surveys like the SDSS are largely optimized for finding day-or-longer-type events such as supernovae—and are extremely good at that—but are not sensitive to the very diverse class of even shorter-timescale transient events (remembering the list of example science cases above). Up until now, such short-timescale events have generally been studied with individual small telescopes staring at single, limited fields of view. Expanding on this idea, the authors then propose the Evryscope as an array of small telescopes arranged so that together they can survey the whole available sky minute-by-minute. In contrast to SDSS-like surveys, an Evryscope-like survey will not be able to detect targets nearly as faint, but instead focuses on the continuous monitoring of the brightest objects it can see.


Figure 2: The currently under-construction Evryscope, showing the 1.8m diameter custom-molded dome. The dome houses 27 individual 61mm aperture telescopes, each of which has its own CCD detector. Figure 1 from the paper.

Evryscope: A further description

Evryscope is designed as an array of 27 61mm optical telescopes, arranged in a custom-molded fiberglass dome, which is mounted on an off-the-shelf German Equatorial mount (see Figure 2). Each telescope has its own 29 MPix CCD detector, adding up to a total detector size of 0.78 GPix! The authors refer to Evryscope’s observing strategy as a “ratcheting survey”, which goes like this: the dome follows the instantaneous field of view (see Figure 3, left) by rotating ever so slowly to compensate for Earth’s rotation, taking 2-minute exposures back-to-back for two hours, and then resetting and repeating (see Figure 3, right). This ratcheting approach enables Evryscope to image essentially every part of the visible sky for at least 2 hours every night!


Figure 3: Evryscope sky coverage (blue), for a mid-latitude Northern-hemisphere site (30°N), showing the SDSS DR7 photometric survey (red) for scale. Left: Instantaneous Evryscope coverage (8660 sq. deg.), including the individual camera fields-of-view (skewed boxes). Right: The Evryscope sky coverage over one 10-hour night. The intensity of the blue color corresponds to the length of continuous coverage (between 2 and 10 hours, in steps of 2 hours) provided by the ratcheting survey, covering a total of 18400 sq.deg. every night. Figures 3 (left) and 5 (right) from the paper.

With its Gigapixel-scale detector, Evryscope will gather large amounts of data, amounting to about 100 TB of compressed FITS images per year! The data will be stored and analyzed on site. The pipeline will be optimized to provide real-time detection of interesting transient events, with rapid retrieval of compressed associated images, allowing for rapid follow-up with other more powerful telescopes. Real-time analysis of the sheer amount of data that Gigapixel-scale systems like Evryscope create would have been largely unfeasible just a few years ago. The rise of consumer digital imaging, ever-increasing computing power, and decreasing storage costs have, however, made the overall cost manageable (much less than one million dollars!) with current technology.
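As a rough cross-check of those numbers (the bit depth, duty cycle and number of usable nights below are assumptions of mine, not values from the paper), here is a quick back-of-the-envelope estimate in Python:

n_cameras = 27
pixels_per_camera = 29e6                 # 29 MPix CCD per telescope
total_pixels = n_cameras * pixels_per_camera
print(f"total detector size ~ {total_pixels / 1e9:.2f} GPix")      # ~0.78 GPix

bytes_per_pixel = 2                      # assumed 16-bit images
exposures_per_night = (10 * 60) // 2     # assumed 10-hour night of 2-minute exposures
nights_per_year = 300                    # assumed usable nights per year

raw_tb = total_pixels * bytes_per_pixel * exposures_per_night * nights_per_year / 1e12
print(f"raw data volume ~ {raw_tb:.0f} TB/year")   # ~140 TB/year raw, so ~100 TB/year compressed is the right ballpark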

Outlook

The Evryscope array is scheduled to see first light later this year at CTIO in Chile, where it will start to produce a powerful minute-by-minute data-set on transient events happening in its field of view, which until now have not been feasible to capture. But it won’t see the whole sky, just the sky it can see from its location on Earth. So then, why stop there? Why not reuse the design and expand? Indeed, this is what the authors are already thinking; see the whitepaper about the “Antarctic-Evryscope” on the group’s website. And who knows, maybe soon after we will have an Evryscope at *evry* major observatory in the world working together to record a continuous movie of the whole sky?

by Gudmundur Stefansson at January 26, 2015 10:35 PM

Lubos Motl - string vacua and pheno

A reply to an anti-physics rant by Ms Hossenfelder
S.H. of Tokyo University sent me a link to another text about the "problems with physics". The write-up is one month old and for quite some time, I refused to read it in its entirety. Now I did so and the text I will respond to is really, really terrible. The author is Sabine Hossenfelder and the title reads
Does the scientific method need revision?

Does the prevalence of untestable theories in cosmology and quantum gravity require us to change what we mean by a scientific theory?
To answer this: No. Only people who have always misunderstood how science works – at least science since the discoveries by Albert Einstein – need to change their opinions about what a scientific theory is and how it is looked for. Let me immediately get to the propositions in the body of the write-up and respond.




Here we go:
Theoretical physics has problems.
Theoretical physics solves problems and organizes ideas about how Nature works. Anything may be substituted for "it" in the sentence "it has problems", but the only reason why someone would substitute "theoretical physics" into this sentence is that he or she hates science and especially the most remarkable insights that physics discovered in recent decades.




The third sentence says:
But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid 70s.
This is an absolutely preposterous claim. First, since the mid 1970s, there have been several important experimental discoveries – like the discoveries of the W-bosons, Z-bosons, Higgs boson, top quark, neutrino oscillations; non-uniformities of the cosmic microwave background, the cosmological constant, and so on, and so on.

But much more shockingly, there have been long sequences of profound and amazing theoretical discoveries, including supersymmetry, supergravity, superstring theory, its explanation for the black hole thermodynamics, D-branes, dualities, holography, AdS/CFT correspondence, AdS/mundane_physics correspondences, and so on, and so on. Many of these results deservedly boast O(10,000) citations – like AdS/CFT – which actually sometimes beats the figures of the Standard Model. Which of those discoveries are more important is debatable and the citation counts can't be treated dogmatically but some of the recent discoveries are unquestionably in the "same league" as the top papers that have led to the Standard Model.

It is silly not to consider these amazing advances "fully important" just because they're primarily theoretical in character. The W-bosons, Z-bosons, Higgs bosons etc. have been believed to exist since the 1960s even though they were also discovered in 1983 or 2012, respectively, and they were "just a theory" for several previous decades. The beta-decay was known by competent particle physicists to be mediated by the W-boson even though no W-boson had been seen by 1983. Exactly analogously, we know that the gravitational force (and other forces) is mediated by closed strings even though we haven't seen a fundamental string yet. The situations are absolutely analogous and people claiming that it is something "totally different" are hopelessly deluded.

One can become virtually certain about certain things long before the thing is directly observed – and that is true not only for particular species of bosons but also for the theoretical discoveries since the mid 1970s that I have mentioned.
Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data.
In the framework of quantum field theory, almost all discoveries can be reduced to the "discovery of a new particle". So if someone finds such a discovery unimpressive, he or she simply shows his or her disrespect for the whole discipline. But the discoveries were not just discoveries of new particles.
But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.
That's completely untrue. Exactly since the 1970s, state-of-the-art physics has switched from quantum field theory and Riemannian geometry to string theory as its foundational layer. People have learned that this more correct new framework is different from the previous approximate ones; but from other viewpoints, it is exactly equivalent thanks to previously overlooked relationships and dualities.

Laymen and physicists who are not up to their job may have failed to notice that a fundamental paradigm shift has taken place in physics since the mid 1970s but that can't change the fact that this paradigm shift has occurred.
Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low hanging fruits have been picked, [experiments become hard, relevant problems are harder...].

Still, it is a frustrating situation and this makes you wonder if not there are other reasons for lack of progress, reasons that we can do something about.
If Ms Hossenfelder finds physics this frustrating, she should leave it – and after all, her bosses should do this service for her, too. Institutionalized scientific research has also become a part of the Big Government and it is torturing lots of people who would love to be liberated but they still think that to pretend to be scientists means to be on a great welfare program. Niels Bohr didn't establish Nordita as another welfare program, however, so he is turning in his grave.

Ms Hossenfelder hasn't written one valuable paper in her life but her research has already cost the taxpayers something that isn't far from one million dollars. It is not shocking that she tries to pretend that there are no results in physics – in this way, she may argue that she is like "everyone else". But she is not. Some people have made amazing or at least pretty interesting and justifiable discoveries, she is just not one of those people. She prefers to play the game that no one has found anything and the taxpayers are apparently satisfied with this utterly dishonest demagogy.

If you have the feeling that the money paid to the research is not spent optimally, you may be right but you may want to realize that it's thanks to the likes of Hossenfelder, Smolin, and others who do nothing useful or intellectually valuable and who are not finding any new truths (and not even viable hypotheses) about Nature.
Especially in a time when we really need a game changer, some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.
We don't "need" a game changer now more than we needed it at pretty much any moment in the past (or we will need it in the future). People often dream about game changers and game changers sometimes arrive.

We don't really "need" any breakthrough technology and we certainly don't need "clean energy" because we have lots of clean energy, especially with the rise of fracking etc.

We may "need" warp drive but people have been expressing similar desires for decades and competent physicists know that warp drive is prohibited by the laws of relativity.

And we don't "need" transporters – perhaps the parties in the Ukrainian civil war need such things.

Finally, we are more resilient and further from self-destruction than we were at pretty much any point in the past. Also, we don't need to bash Facebook which is just another very useful pro-entertainment website. It is enough to ignore Facebook if you think it's a waste of time – I am largely doing so ;-) but I still take the credit for having brought lots of (more socially oriented) people who like it to the server.

So every single item that Hossenfelder enumerates in her list "what we need" is crap.
It is our lacking understanding of space, time, matter, and their quantum behavior that prevents us from better using what nature has given us.
This statement is almost certainly untrue, too. A better understanding of space, time, and matter – something that real physicists are actually working on, and not just bashing – will almost certainly confirm that warp drives and similar things don't exist. Better theories will give us clearer explanations why these things don't exist. There may be some "positive applications" of quantum gravity but right now, we don't know what they could be and they are surely not the primary reason why top quantum gravity people do the research they do.

The idea that the future research in quantum gravity will lead to practical applications similar to warp drive is a belief, a form of religion, and circumstantial evidence (and sometimes almost rigorous proofs) makes this belief extremely unlikely.
And it is this frustration that lead people inside and outside the community to argue we’re doing something wrong, ...
No, this is a lie, too. As I have already said, physics bashers are bashing physics not because of frustration that physics isn't making huge progress – it obviously *is* making huge progress. Physics bashers bash physics in order to find excuses for their own non-existent or almost non-existent results in science – something I know very well from some of the unproductive physicists in Czechia whom the institutions inherited from the socialist era. They try to hide that they are nowhere near the top physicists – and most of them are just useless parasites. And many listeners buy these excuses because the number of incredibly gullible people who love to listen to similar conspiracy theories (not so much to science) is huge. And if you combine this fact with many ordinary people's disdain for mathematics etc., it is not surprising that some of these physics bashers may literally make a living out of their physics bashing and nothing else.
The arxiv categories hep-th and gr-qc are full every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.
This is complete rubbish. The tens of thousands of papers are full of various kinds of evidence supporting this claim or another claim about the inner workings of Nature. In particular, the scientific case for string theory as the right framework underlying the Universe is completely comparable to the case for the Higgs boson in the 1960s. The Higgs boson was discovered in 2012, 50 years after the 1960s, but that doesn't mean that adequate physicists in the 1960s were saying that "there wasn't any evidence supporting that theory".

People who were not embarrassed haven't said such a thing and people who are not embarrassing themselves are not saying a similar thing about string theory – and other things – today.
To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.
None of these statements is right. We have paid more than enough attention to "experimental quantum gravity". It is a vastly overstudied and overfunded discipline. All sensible physicists realize that it is extremely unlikely that we will directly observe some characteristic effects of quantum gravity in the near future. The required temperatures are around \(10^{32}\) kelvins, the required distances are probably \(10^{-35}\) meters, and so on. Max Planck knew the values of these natural units already in the late 19th century.
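For the record, those Planck-scale figures follow from nothing more than the textbook definitions of the Planck length and the Planck temperature; a minimal check in Python, using the standard constants:

import math

hbar = 1.054571817e-34   # J*s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s
kB   = 1.380649e-23      # J/K

l_planck = math.sqrt(hbar * G / c**3)              # Planck length
T_planck = math.sqrt(hbar * c**5 / (G * kB**2))    # Planck temperature
print(f"Planck length      ~ {l_planck:.2e} m")    # ~1.6e-35 m
print(f"Planck temperature ~ {T_planck:.2e} K")    # ~1.4e32 K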

So we have paid more than enough attention to this strategy.

It is also untrue that the progress in theoretical physics since the mid 1970s has been done "without experimental input". The amount of data we know about many things is huge. To a large extent, the knowledge of one or two basic experiments showing quantum mechanics and one or two experiments testing gravity is enough to deduce a lot. General relativity, quantum mechanics, and string theory largely follow from (subsets of) these several elementary experiments.

On the other hand, it is not true that scientific progress cannot be made without (new) experimental input. Einstein found special relativity even though he wasn't actively aware of the Michelson-Morley experiment. He could have deduced the whole theory independently of any experiments. Experiments had previously been used to construct e.g. Maxwell's equations but Einstein didn't deal with them directly. Einstein only needed the equations themselves. More or less the same thing occurred 10 years later when he discovered general relativity. But the same approach based on "nearly pure thought" has also been victorious in the case of Bekenstein's and Hawking's black hole thermodynamics, string theory, and in some other important examples.

So the idea that one can't find important things without some new experiments – excluding experiments whose results are old and generally known – is obviously untrue. Science haters can say that this or that important part of science "is not science" or "is not physics" but that doesn't change anything about the fact that certain insights about Nature may be found and have been found and supported by highly convincing bodies of evidence in similar ways. Only simpletons may pay attention to a demagogue's proclamation that "something is not science". This emotional scream is not a technical argument for or against any scientifically meaningful proposition.

I will omit another repetitive paragraph where Hossenfelder advocates "experimental quantum gravity". She thinks that tons of effects are easily observable because she's incompetent.
Yes, experimental tests of quantum gravity are farfetched. But if you think that you can’t test it, you shouldn’t put money into the theory either.
This is totally wrong. It is perfectly sensible to pay almost all of the quantum gravity research money to the theorists because whether someone likes it or not, quantum gravity is predominantly a theoretical discipline. It is about people's careful arguments, logical thoughts, and calculations that make our existing knowledge fit together more seamlessly than before.

In particular, the goal of quantum gravity is to learn how space and time actually work in our Universe, a world governed by the postulates of quantum mechanics. Quantum gravity is not – and any discipline of legitimate science is not – a religious cult that trains its followers to believe in far-fetched theories. The idea that you may observe completely new effects of quantum gravity (unknown to the theorists) in your kitchen is far-fetched and that really means that it is extremely unlikely. And its being extremely unlikely is the rational reason why almost no money is going into this possibility. This justification can't be "beaten" by the ideological cliché that everything connected with experiments in the kitchen should have a priority because it's "more scientific".

It's not more scientific. A priori, it is equally scientific. A posteriori, it is less scientific because arguments rooted in science almost reliably show that such new quantum gravity effects in the kitchen are very unlikely – some of them are rather close to supernatural phenomena such as telekinesis. So everything that Ms Hossenfelder says is upside down once again.
And yes, that’s a community problem because funding agencies rely on experts’ opinion. And so the circle closes.
Quantum gravity theorists and string theorists are getting money because they do absolutely amazing research, sometimes make a medium-importance discovery, and sometimes a full-fledged breakthrough. And if or when they don't do such a thing for a few years, they are still exceptional people who are preserving and nurturing the mankind's cutting-edge portrait of the Universe. The folks in the funding agencies are usually less than full-fledged quantum gravity or string theorists. But as long as the system at least barely works, they still know enough – much more than an average human or Ms Hossenfelder knows – so they may see that something fantastic is going on here or there even though they can't quite join the research. That's true for various people making decisions in government agencies but that's true e.g. for Yuri Milner, too.

As Ms Hossenfelder indicated, the only way this logic may change – and yes, I think it is unfortunately changing to some extent – is that the funding decisions don't depend on expert opinion (and on any people connected with knowledge and progress in physics) at all. The decisions may be made by people who hate physics and who have no idea about contemporary physics. The decisions may depend on people who are would-be authorities and pick winners and losers by arbitrarily stating that "this is science" and "this is not science". I don't have to say how such decisions (would?) influence the research.
To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without experimental test whatsover, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”.
Dawid just wrote something that isn't usual among the prevailing self-appointed "critics and philosophers of physics" but he didn't really write anything that would be conceptually new. At least intuitively, physicists like Dirac or Einstein have known all these things for a century. Of course that "non-empirical facts" have played a role in the search for the deeper laws of physics and this role became dramatic about 100 years ago.
Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency — it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.
This was actually the only paragraph I fully read when I replied to S.H. in Tokyo for the first time – and this paragraph looked "marginally acceptable" to me from a certain point of view.

Well, the paragraph is only solving a terminological issue. Should the violation of unitarity, or an instability of the Universe that would manifest itself a Planck time after the Big Bang, or something like that, be counted as "empirical" or "non-empirical" input? I don't really care much. It's surely something that most experts, like Dawid, consider a consistency condition.

We may also say that we "observe" that the Universe isn't unstable and doesn't violate unitarity. But this is a really tricky assertion. Our interpretation of all the observations really assumes that probabilities are non-negative and add to 100%. Whatever our interpretation of any experiment is, it must be adjusted to this assumption. So it's a pre-empirical input. It follows from pure logic. Also, some instabilities and other violations of what we call "consistency conditions" (e.g. unitarity) may be claimed to be very small and therefore hard to observe. But some of these violations will be rejected by theorists, anyway, even if they are very tiny because they are violations of consistency conditions.

I don't really care about the terminology. What's important in practice is that these "consistency conditions" cannot be used as justifications for some new fancy yet meaningful experiments.
A huge problem with the lack of empirical fact is that theories remain axiomatically underconstrained.
The statement is surely not true in general. String theory is 100% constrained. It cannot be deformed at all. It has many solutions but its underlying laws are totally robust.
This already tells you that the idea of a theory for everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.
If the multiverse exists, and it is rather likely that it does, it doesn't mean that the laws of physics are ambiguous. It just means that the world is "larger" and perhaps has more "diverse subregions" than previously thought. But all these regions follow the same unambiguous laws of physics – laws of physics we want to understand as accurately as possible.

The comment about "stripping away axioms" is tendentious, too, because it suggests that there is some "a priori known" number of axioms that is right. But it's not the case. If someone randomly invents a set of axioms, it may be too large (overconstrained) or too small (underconstrained). In the first case, some axioms should be stripped away, in the latter case, some axioms should be added. But the very fact that a theory predicts or doesn't predict the multiverse doesn't imply that its set of axioms is underconstrained or overconstrained.

For example, some theories of inflation predict that inflation is not eternal and no multiverse is predicted; other, very analogous theories (that may sometimes differ only by the values of parameters!) predict that inflation is eternal and a multiverse emerges. So Hossenfelder's claim that the multiverse is linked with "underconstrained axioms" is demonstrably incorrect, too.
Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature.
There is nothing mystical about this important mode of thinking in theoretical physics. It's how special relativity was found, much like general relativity, the idea that atoms exist, the idea that the motion of atoms is linked to heat, not to mention the Dirac equation, gauge theories, and many other things. A large fraction of theoretical physicists have made their discovery by optimizing the "beauty" of the candidate laws of physics. People like Dirac have emphasized the importance of the mathematical beauty in the search for the laws of physics all the time, and for a good reason.



That's the most important thing Dirac needed to write on a Moscow blackboard.

And the more recent breakthroughs in physics we consider, the greater role such considerations have played (and will play). And the reason why this "mathematical beauty" works isn't supernatural – even though many of us love to be amazed by this power of beautiful mathematics and this meme is often sold to the laymen, too. One may give Bayesian explanations of why "more beautiful" laws are more likely to hold than generic, comparable, but "less beautiful" competitors. Bayesian inference dictates assigning comparable prior probabilities to competing hypotheses, and because the mathematically beautiful theories have a smaller number of truly independent assumptions and building blocks, and therefore a smaller number of ways to invent variations, their prior probability won't be split among so many "sub-hypotheses". Moreover, as we describe deeper levels of reality, the risk that an inconsistency emerges grows ever higher, and the "not beautiful" theories are increasingly likely to lead to one kind of an inconsistency or another.
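As a toy numerical illustration of that prior-splitting argument (the counts and likelihoods below are invented purely for illustration, not taken from any physics), consider:

n_variants = 100                 # assumed number of variants of the "ugly" framework
prior_beautiful = 0.5            # comparable priors for the two competing frameworks
prior_per_ugly_variant = 0.5 / n_variants

# Suppose the data turn out to be compatible with the beautiful theory and with
# exactly one of the ugly variants (likelihood 1 for each), ruling out the rest.
unnormalized = {"beautiful theory": prior_beautiful * 1.0,
                "surviving ugly variant": prior_per_ugly_variant * 1.0}
norm = sum(unnormalized.values())
for name, p in unnormalized.items():
    print(f"posterior({name}) = {p / norm:.3f}")   # ~0.990 vs ~0.010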

Sabine Hossenfelder's denial of this principle only shows her lack of familiarity with physics, its logic, and its history.
You can thus never arrive at a theory that describes our universe without taking into account observations, period.
Whether someone has ever found important things without "any observations" is questionable. But it is still true and important that a good theorist may need 1,000 times less empirical data than a worse theorist to find and write down a correct theory, and a bad theorist will not find the right theory with arbitrarily large amounts of data! And that's the real "period", that's why the mathematical beauty is important for good theoretical physicists – and the others have almost no chance to make progress these days.
The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.
I have already said that there is no relationship between the multiverse and the underdeterminedness of the sets of axioms.
(The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)
But these comments of Tegmark's are purely verbal philosophical remarks without any scientific content. They don't imply anything for observations, not even in principle. For this reason, they have nothing to do with physical models of eternal inflation or the multiverse or even specific compactifications of string/M-theory, which are completely specific theories about Nature and the observations of it.
Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions.
Most people writing papers about the multiverse – more precisely, papers evoking the anthropic principle – use the probability calculus incorrectly. But the general statement that invoking probabilities in deductions of properties of Nature is a "modification of the scientific method" is a total idiocy. The usage of probabilities has not merely been "allowed" in the scientific method for a long time; in fact, science could never have been done without probabilities at all! All of science is about looking at the body of our observations and saying which explanation is more likely and which explanation is less likely.



And of course, a theory with a "larger Universe than previously thought" and perhaps with some extra rules to pinpoint "our location" in this larger world is, a priori, an OK competitor to describe the Universe.

Every experimenter needs to do some calculations involving probabilities – probabilities that a slightly unexpected result is obtained by chance, and so on – all the time. Ms Hossenfelder just doesn't have a clue what science is.
In a Nature comment out today, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome.
Please not again.
I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability — and apparently don’t need to care because they are getting published and paid regardless.
I don't get paid a penny but I am still able to see that the people whose first obsession is "testability" are either crackpots or third-class physicists such as Ms Hossenfelder who don't have an idea what they are talking about.

The purpose of science is to find the truth about Nature. Easy testability (in practice) means that there exists a procedure, an experimental procedure, that may accelerate the process by which we decide whether the hypothesis is true or not. But the testability doesn't actually make the hypothesis true (or more true) and scientists are looking for correct theories, not falsifiable theories, and it's an entirely different thing.

One could say that the less falsifiable a theory is, the better. We are looking for theories that withstand tests. So they won't be falsified anytime soon! A theory that has already resisted some attempts to be falsified is in a better shape than a theory that has already been falsified. The only "philosophical" feature of this kind that is important is that the propositions made by the theory are scientifically meaningful – i.e. having some non-tautological observable consequences in principle. If this is satisfied, the hypothesis is perfectly scientific and its higher likelihood to be falsified soon may only hurt. If one "knows" that a hypothesis is likely to die after a soon-to-be-performed experiment, it's probably because he "knows" that the hypothesis is actually unlikely.
See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and live from outright bizarre speculations as long as you wrap them into enough math.
Maybe this is what Ms Hossenfelder has learned from her superiors such as Mr Smolin but no one is teaching these things at good places – like those I have been affiliated with.
I cringe every time a string theorist starts talking about beauty and elegance.
Because you are a stupid cringing crackpot.
Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?
The history of physics, especially 20th century physics, plus the Bayesian arguments showing that more beautiful theories are more likely. The sense of beauty used by these physicists – one that works so often – is very different from the sense of beauty used by average humans or average women in some respects. But it also has some similar features so it is similar in other respects.

Even more important is to point out that this extended discussion about "strings and beauty" is a straw man because almost no arguments referring to "beauty" can be found in papers on string theory. Many string theorists would actually disagree that "beauty" is a reason why they think that the theory is on the right track. Ms Hossenfelder is basically proposing illogical connections between her numerous claims, all of which happen to be incorrect.

I will omit one paragraph repeating content-free clichés that science describes Nature. Great, I agree that science describes Nature.
Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.
The only problem is that all theories that Ms Hossenfelder has targeted for her criticism do describe Nature and are excellent and sometimes paramount additions to science (sometimes nearly established ones, sometimes very promising ones), unlike everything that Ms Hossenfelder and similar "critics of physics" have ever written in their whole lives.

by Luboš Motl (noreply@blogger.com) at January 26, 2015 01:25 PM

Tommaso Dorigo - Scientificblogging

Reviews In Physics - A New Journal
The publishing giant Elsevier is about to launch a new journal, Reviews in Physics. This will be a fully open-access, peer-reviewed journal that aims to provide short reviews (15 pages maximum) on physics topics at the forefront of research. The web page of the journal is here, and a screenshot is shown below.

read more

by Tommaso Dorigo at January 26, 2015 01:20 PM

January 25, 2015

arXiv blog

First Videos Created of Whole Brain Neural Activity in an Unrestrained Animal

Neuroscientists have recorded the neural activity in the entire brains of freely moving nematode worms for the first time.


The fundamental challenge of neuroscience is to understand how the nervous system controls an animal’s behavior. In recent years, neuroscientists have made great strides in determining how the collective activity of many individual neurons is critical for controlling behaviors such as arm reaching in primates, song production in the zebra finch and the choice between swimming or crawling in leeches.

January 25, 2015 05:51 PM

astrobites - astro-ph reader's digest

Grad students: apply now for ComSciCon 2015!

ComSciCon 2015 will be the third in the annual series of Communicating Science workshops for graduate students

Applications are now open for the Communicating Science 2015 workshop, to be held in Cambridge, MA on June 18-20th, 2015!

Graduate students at US institutions in astronomy, and all fields of science and engineering, are encouraged to apply. The application will close on March 1st.

It’s been more than two years since we announced the first ComSciCon workshop here on Astrobites. Since then, we’ve received almost 2000 applications from graduate students across the country, and we’ve welcomed about 150 of them to three national and local workshops held in Cambridge, MA. You can read about last year’s workshop to get a sense for the activities and participants at ComSciCon events.

While acceptance to the workshop is competitive, attendance of the workshop is free of charge and travel support will be provided to accepted applicants.

Participants will build the communication skills that scientists and other technical professionals need to express complex ideas to their peers, experts in other fields, and the general public. There will be panel discussions on the following topics:

  • Communicating with Non-Scientific Audiences
  • Science Communication in Popular Culture
  • Communicating as a Science Advocate
  • Multimedia Communication for Scientists
  • Addressing Diversity through Communication

In addition to these discussions, ample time is allotted for interacting with the experts and with attendees from throughout the country to discuss science communication and develop science outreach collaborations. Workshop participants will produce an original piece of science writing and receive feedback from workshop attendees and professional science communicators, including journalists, authors, public policy advocates, educators, and more.

ComSciCon attendees have founded new science communication organizations in collaboration with other students at the event, published more than 25 articles written at the conference in popular publications with national impact, and formed lasting networks with our student alumni and invited experts. Visit the ComSciCon website to learn more about our past workshop programs and participants.


Group photo at the 2014 ComSciCon workshop

If you can’t make it to the national workshop in June, check to see whether one of our upcoming regional workshops would be a good fit for you.

This workshop is sponsored by Harvard University, the Massachusetts Institute of Technology, University of Colorado Boulder, the American Astronomical Society, the American Association for the Advancement of Science, the American Chemical Society, and Microsoft Research.

by Nathan Sanders at January 25, 2015 03:44 PM

Tommaso Dorigo - Scientificblogging

The Plot Of The Week: CMS Search For Majorana Neutrinos
The CMS collaboration yesterday released results of a search for Majorana neutrinos in dimuon data collected by the CMS detector in 8 TeV proton-proton collisions delivered by the LHC in 2012. If you are short of time and just need an executive summary, here it is: no such thing is seen, unfortunately, and limits are set on the production rate of heavy neutrinos N as a function of their mass. If you have five spare minutes, however, you might be interested in some more detail of the search and its results.

read more

by Tommaso Dorigo at January 25, 2015 12:39 PM

January 24, 2015

Geraint Lewis - Cosmic Horizons

The Constant Nature of the Speed of light in a vacuum
Wow! It has been a while, but I do have an excuse! I have been finishing up a book on the fine-tuning of the Universe and hopefully it will be published (and will become a really big best seller?? :) in 2015. But time to rebirth the blog, and what better way to start than with a gripe.

There's been some chatter on the interweb about a recent story about the speed of light in a vacuum being slowed down. Here's one. Here's another. Some of these squeak loudly about how the speed of light may not be "a constant", implying that something has gone horribly wrong with the Universe. Unfortunately, some of my physicsy colleagues were equally shocked by the result.

Why would one be shocked? Well, the speed of light being constant to all observers is central to Einstein's Special Theory of Relativity. Surely if these results are right, and Einstein is wrong, then science is a mess, etc etc etc.

Except there is nothing mysterious about this result. Nothing strange. In fact it was completely expected. The question boils down to what you mean by speed.
Now, you might be thinking that speed is simply related to the time it takes for a thing to travel from here to there. But we're dealing with light here, which, in classical physics, is represented by oscillations in an electromagnetic field, while in our quantum picture it's oscillations in the wave function; the difference is not important.

When you first encounter electromagnetic radiation (i.e. light) you are often given a simple example of a single wave propagating in a vacuum. Every student of physics will have seen this picture at some point:
The electric (and magnetic) fields oscillate as a sine wave and the speed at which bumps in the wave move forward is the speed of light. This was one of the great successes of James Clerk Maxwell, one of the greatest physicists who ever lived. In his work, he fully unified electricity and magnetism and showed that electromagnetic radiation, light, was the natural consequence.

Without going into too many specific details, this is known as the phase velocity. For light in a vacuum, the phase velocity is equal to c.

One of the coolest things I ever learnt was Fourier series, or the notion that you can construct arbitrary wave shapes by adding together sine and cosine waves. This still freaks me out a bit to this day, but instead of an electromagnetic wave being a simple sine or cosine, you can add waves to create a wave packet, basically a lump of light.

But when you add waves together, the resulting lump doesn't travel at the same speed as the waves that comprise the packet. The lump moves with what's known as the group velocity. Now, the group velocity and the phase velocity are, in general, different. In fact, they can be very different, as it is possible to construct a packet that does not move at all, while all the waves making up the packet are moving at c!
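If you want to see the difference with your own eyes, here is a minimal numerical sketch of the idea (the dispersion relation below is completely made up; the only thing that matters is that the wave speed depends on wavenumber). It builds a lump out of many waves and watches how fast the lump moves compared with the individual crests:

import numpy as np

c = 3.0e8                        # m/s, vacuum speed of light
a = 1.0e-7                       # m, made-up dispersion parameter

def omega(k):
    # Toy dispersion relation omega(k) = c*k / n(k), with n(k) = 1 + a*k
    return c * k / (1.0 + a * k)

k0 = 1.0e7                                               # rad/m, central wavenumber
v_phase = omega(k0) / k0                                 # speed of the individual crests
dk = 1.0
v_group = (omega(k0 + dk) - omega(k0 - dk)) / (2 * dk)   # d(omega)/dk, speed of the lump
print(f"phase velocity = {v_phase:.3e} m/s, group velocity = {v_group:.3e} m/s")

# Build a lump of light (a Gaussian wave packet) by adding many waves together,
# then measure how far its centre moves in a short time.
sigma_k = 1.0e5
ks = k0 + np.linspace(-3, 3, 61) * sigma_k
amps = np.exp(-(ks - k0) ** 2 / (2 * sigma_k ** 2))
x = np.linspace(-40e-6, 60e-6, 200001)                   # metres

def packet(t):
    return sum(A * np.cos(k * x - omega(k) * t) for A, k in zip(amps, ks))

def centre(field):
    weights = field ** 2                                 # intensity-weighted centroid
    return np.sum(x * weights) / np.sum(weights)

t = 2.0e-13                                              # seconds
speed = (centre(packet(t)) - centre(packet(0.0))) / t
print(f"the lump moved at ~{speed:.3e} m/s (the group velocity, not the phase velocity)")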

So, this result was achieved by manipulating the waves to produce a packet whose group velocity was measurably smaller than a simple wave. That's it! Now, this is not meant to diminish the work of the experimenters, as this is not easy to set up and measure, but it means nothing for the speed of light, relativity etc etc. And the researchers know that!

And as I mentioned, the difference between phase and group velocity has been understood for a long time, with Hamilton (of Hamiltonian fame) in 1839, and Rayleigh in 1877. These initial studies were of waves in general, mainly sound waves, not necessarily light, but the mathematics is basically the same.

Before I go, one of the best courses I took as an undergraduate was called vibrations and waves. At the time, I didn't really see the importance of what I was learning, but the mathematics was cool. I still love thinking about it. Over the years, I've come to realise that waves are everywhere, all throughout physics, science, and, well, everything. Want to model a flag? Make a ball and spring model. Want to make a model of matter? Ball and spring. And watch the vibrations!

Don't believe me? Watch this - waves are everywhere.





by Cusp (noreply@blogger.com) at January 24, 2015 11:25 PM

January 23, 2015

Symmetrybreaking - Fermilab/SLAC

Superconducting electromagnets of the LHC

You won't find these magnets in your kitchen.

Magnets are something most of us are familiar with, but you may not know that magnets are an integral part of almost all modern particle accelerators. These magnets aren’t the same as the ones that held your art to your parents’ refrigerator when you were a kid. Although they have a north and south pole just as your fridge magnets do, accelerator magnets require quite a bit of engineering.

When an electrically charged particle such as a proton moves through a constant magnetic field, it moves in a circular path. The size of the circle depends on both the strength of the magnets and the energy of the beam. Increase the energy, and the ring gets bigger; increase the strength of the magnets, and the ring gets smaller.

The Large Hadron Collider is an accelerator, a crucial word that reminds us that we use it to increase the energy of the beam particles. If the strength of the magnets remained the same, then as we increased the beam energy, the size of the ring would similarly have to increase. Since the size of the ring necessarily remains the same, we must increase the strength of the magnets as the beam energy is increased. For that reason, particle accelerators employ a special kind of magnet.
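To put rough numbers on that, the bending field, beam momentum and ring radius are tied together by p = qBr, which for a singly charged particle works out to p [GeV/c] ≈ 0.3 × B [tesla] × r [metres]. The sketch below uses an effective bending radius of about 2800 metres for the LHC; that figure and the beam energies are quoted from memory rather than from this article, so treat the numbers as illustrative:

def bending_field_tesla(p_gev_per_c, radius_m):
    # p [GeV/c] ~ 0.3 * B [T] * r [m] for a particle of unit charge
    return p_gev_per_c / (0.3 * radius_m)

print(f"450 GeV injection beam needs ~{bending_field_tesla(450.0, 2800.0):.2f} T")
print(f"7 TeV collision beam needs  ~{bending_field_tesla(7000.0, 2800.0):.2f} T")

Fields approaching 8 tesla are well beyond what ordinary iron-dominated, copper-wound magnets can sustain, which is why the special technology described below is needed.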

When you run an electric current through a wire, it creates a magnetic field; the strength of the magnetic field is proportional to the amount of electric current. Magnets created this way are called electromagnets. By controlling the amount of current, we can make electromagnets of any strength we want. We can even reverse the magnet’s polarity by reversing the direction of the current.
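The LHC's bending magnets are dipoles rather than simple solenoids, but the textbook long-solenoid formula B = μ0·n·I makes the proportionality between current and field, and the sign flip under current reversal, concrete. The turn density and currents below are purely illustrative numbers, not LHC specifications.

import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def solenoid_field(turns_per_metre, current_amps):
    # Field inside an ideal long solenoid: B = mu_0 * n * I.
    return MU_0 * turns_per_metre * current_amps

n = 1000  # turns per metre (illustrative)
for current in (1000, 2000, -2000):  # amperes
    print(f"I = {current:6d} A  ->  B = {solenoid_field(n, current):+.2f} T")

Doubling the current doubles the field, and reversing the current reverses the magnet's polarity, exactly as described above.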

Given the connection between electrical current and magnetic field strength, it is clear that we need huge currents in our accelerator magnets. To accomplish this, we use superconductors, materials that lose their resistance to electric current when they are cooled enough. And “cooled” is an understatement. At 1.9 Kelvin (about 450 degrees Fahrenheit below zero), the centers of the magnets at the LHC are one of the coldest places in the universe—colder than the temperature of space between galaxies.

Given the central role of magnets in modern accelerators, scientists and engineers at Fermilab and CERN are constantly working to make even stronger ones. Although the main LHC magnets can generate a magnetic field about 800,000 times that generated by the Earth, future accelerators will require even more. Electromagnet technology, first developed in the early 1800s, remains a vibrant and crucial part of the laboratories’ futures.


A version of this article was published in Fermilab Today.

 

Like what you see? Sign up for a free subscription to symmetry!

by Don Lincoln, Fermi National Accelerator Laboratory at January 23, 2015 02:00 PM

January 22, 2015

Symmetrybreaking - Fermilab/SLAC

DECam’s nearby discoveries

The Dark Energy Camera does more than its name would lead you to believe.

The Dark Energy Camera, or DECam, peers deep into space from its mount on the 4-meter Victor Blanco Telescope high in the Chilean Andes.

Thirty percent of the camera’s observing time—about 105 nights per year—goes to the team that built it: scientists working on the Dark Energy Survey.

Another small percentage of the year is spent on maintenance and upgrades to the telescope. So who else gets to use DECam? Dozens of other projects share its remaining time.

Many of them study objects far across the cosmos, but five of them investigate ones closer to home.

Overall, these five groups take up just 20 percent of the available time, but they’ve already taught us some interesting things about our planetary neighborhood and promise to tell us more in the future.

Far-out asteroids

Stony Brook University’s Aren Heinze and the University of Western Ontario’s Stanimir Metchev used DECam for four nights in early 2014 to search for unknown members of our solar system’s main asteroid belt, which sits between Mars and Jupiter.

To detect such faint objects, one needs a long exposure. However, these asteroids are close enough to Earth that they move appreciably across the sky, so an exposure longer than a few minutes results in blurred images. Heinze and Metchev’s fix was to stack more than 100 images, each exposed for less than two minutes.
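The idea behind this "shift-and-stack" trick can be sketched in a few lines of Python. Everything below is synthetic (a toy image, a made-up drift rate, a fake source) and is not the team's actual pipeline; it only illustrates why summing many short, shifted exposures reveals a source that no single frame shows.

import numpy as np

rng = np.random.default_rng(0)
n_frames, size, rate = 100, 64, 0.3            # rate: pixels of drift per frame (trial value)

frames = []
for i in range(n_frames):
    img = rng.normal(0.0, 1.0, (size, size))    # unit-variance background noise
    img[32, 10 + int(round(rate * i))] += 0.8   # faint moving source, below 1 sigma per frame
    frames.append(img)

# Undo the assumed motion frame by frame, then add everything up.
stacked = sum(np.roll(img, -int(round(rate * i)), axis=1)
              for i, img in enumerate(frames))

row, col = np.unravel_index(np.argmax(stacked), stacked.shape)
print(f"recovered source near row {row}, column {col}")   # expect roughly (32, 10)

In any single frame the source sits below the noise, but after stacking 100 aligned frames its signal grows a hundredfold while the noise grows only tenfold, so the brightest pixel of the stack lands on the asteroid. Real searches repeat this for many trial rates and directions.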

With this method, the team expects to measure the positions, motions and brightnesses of hundreds of main belt asteroids not seen before. They plan to release their survey results in late 2015, and an early partial analysis indicates they’ve already found hundreds of asteroids in a region smaller than DECam’s field of view—about 20 times the area of the full moon.

Whole new worlds

Scott Sheppard of the Carnegie Institution for Science in Washington DC and Chad Trujillo of Gemini Observatory in Hilo, Hawaii, use DECam to look for distant denizens of our solar system. The scientists have imaged the sky for two five-night stretches every year since November 2012.

Every night, the DECam’s sensitive 570-megapixel eye captures images of an area of sky totaling about 200 to 250 times the area of the full moon, returning to each field of view three times. Sheppard and Trujillo run the images from each night through software that tags everything that moves.

“We have to verify everything by eye,” Sheppard says. So they look through about 60 images a night, or 300 total from a perfect five-night observing run, a process that gives them a few dozen objects to study at Carnegie’s Magellan Telescope.

The scientists want to find worlds beyond Pluto and its brethren—a region called the Kuiper Belt, which lies some 30 to 50 astronomical units from the sun (compared to the Earth’s 1). On their first observing run, they caught one.

This new world, with the catalog name of 2012 VP113, comes as close as 80 astronomical units from the sun and journeys as far as 450. Along with Sedna, a minor planet discovered a decade ago, it is one of just two objects found in what was once thought of as a complete no man’s land.

Sheppard and Trujillo also have discovered another dwarf planet that is one of the top 10 brightest objects beyond Neptune, a new comet, and an asteroid that occasionally sprouts an unexpected tail of dust.

Mythical creatures

Northern Arizona University’s David Trilling and colleagues used the DECam for three nights in 2014 to look for “centaurs”—so called because they have characteristics of both asteroids and comets. Astronomers believe centaurs could be lost Kuiper Belt objects that now lie between Jupiter and Neptune.

Trilling’s team expects to find about 50 centaurs in a wide range of sizes. Because centaurs are nearer to the sun than Kuiper Belt objects, they are brighter and thus easier to observe. The scientists hope to learn more about the size distribution of Kuiper Belt objects by studying the sizes of centaurs. The group recently completed its observations and plans to report the results later in 2015.

Next-door neighbors

Lori Allen of the National Optical Astronomy Observatory outside Tucson, Arizona, and her colleagues are looking for objects closer than 1.3 astronomical units from the sun. These near-Earth objects have orbits that can cross Earth’s—creating the potential for collision.

Allen’s team specializes in some of the least-studied NEOs: ones smaller than 50 meters across. 

Even small NEOs can be destructive, as demonstrated by the February 2013 NEO that exploded above Chelyabinsk, Russia. The space rock was just 20 meters wide, but the shockwave from its blast shattered windows, injuring more than 1000 people.

In 2014, Allen’s team used the DECam for 10 nights. They have 20 more nights to use in 2015 and 2016.

They have yet to release specific findings from the survey’s first year, but the researchers say they have a handle on the distribution of NEOs down to just 10 meters wide. They also expect to discover about 100 NEOs the size of the one that exploded above Chelyabinsk.

Space waste

Most surveys looking for “space junk”—inactive satellites, parts of spacecraft and the like in orbit around the Earth—can see only pieces larger than about 20 centimeters. But there’s a lot more material out there.

How much is a question Patrick Seitzer of the University of Michigan and colleagues hope to answer. They used DECam to hunt for debris smaller than 10 centimeters, or the size of a smartphone, in geosynchronous orbit.

The astronomers need to capture at least four images of each piece of debris to determine its position, motion and brightness. This can tell them about the risk from small debris to satellites in geosynchronous orbit. Their results are scheduled for release in mid-2015.
 

 

Like what you see? Sign up for a free subscription to symmetry!

by Liz Kruesi at January 22, 2015 02:00 PM

January 21, 2015

Lubos Motl - string vacua and pheno

A new paper connecting heterotic strings with an LHC anomaly
Is the LHC going to experimentally support details of string theory in a few months?

Just one week ago, I discussed a paper that has presented a model capable of explaining three approximately 2.5-sigma anomalies seen by the LHC, including the \(\tau\mu\) decay of the Higgs boson \(h\), by using a doubled Higgs sector along with the gauged \(L_\mu-L_\tau\) symmetry.

I have mentioned a speculative addition of mine: those gauge groups could somewhat naturally appear in \(E_8\times E_8\) heterotic string models, my still preferred class of string/M-theory compactifications to describe the Universe around us.

Today, there is a new paper
Explaining the CMS \(eejj\) and \(e /\!\!\!\!{p}_T jj\) Excess and Leptogenesis in Superstring Inspired \(E_6\) Models
by Dhuria and 3 more Indian co-authors that apparently connects an emerging, so far small and inconclusive experimental anomaly at the LHC, with heterotic strings.




The authors consider superstring-inspired models with an \(E_6\) group and supersymmetry whose R-parity is unbroken. And the anomaly they are able to explain is the 2.8-sigma CMS excess that I wrote about in July 2014 and that was attributed to a \(2.1\TeV\) right-handed \(W^\pm_R\)-boson.




The new Indian paper shows that it is rather natural to explain the anomaly in terms of the heterotic models with gauge groups broken to\[

E_8\times E'_8 \to E_6\times SU(3)\times E'_8

\] but they are careful about identifying the precise new particles that create the excess. In fact, it seems that the right-handed gauge bosons are not ideal to play the role: they would lead to problems with baryogenesis. All the baryon asymmetry would disappear because \(B-L\) and \(B+L\) are violated, either at low energies or intensely at the electroweak scale. So this theory would apparently predict that all matter annihilates against antimatter.

Instead of the right-handed gauge bosons, they promote new exotic sleptons that result from the breaking of \(E_6\) down to a cutely symmetric maximal subgroup\[

E_6\to SU(3)_C \times SU(3)_L \times SU(3)_R

\] under which the fundamental representation decomposes as\[

{\bf 27} = ({\bf 3}, {\bf 3}, {\bf 1}) \oplus
({\bf \bar 3}, {\bf 1}, {\bf \bar 3}) \oplus
({\bf 1}, {\bf \bar 3}, {\bf 3})

\] which should look beautiful to all devout Catholics who love the Holy Trinity. The three \(SU(3)\) factors represent the QCD color, the left-handed extension of the electroweak \(SU(2)_W\), and its right-handed partner.
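As a trivial sanity check on the dimension counting (my own two-line sketch, not something from the paper), the three pieces really do add up to the 27 of \(E_6\), since a conjugate representation has the same dimension as the representation itself:

from math import prod

# Dimensions of the pieces (3,3,1), (3bar,1,3bar), (1,3bar,3) of the 27 of E6.
pieces = [(3, 3, 1), (3, 1, 3), (1, 3, 3)]
dims = [prod(p) for p in pieces]
print(dims, "->", sum(dims))   # [9, 9, 9] -> 27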

There are lots of additional technical features that you may want to study in the 8-page-long paper. But I want to emphasize some big-picture, emotional message. And it is the following.

The superpartners have been considered the most likely new particles that may emerge in particle physics experiments. They have the best motivation – the supersymmetric solution to the hierarchy problem (the lightness of the Higgs boson) – to appear at low energies. On the other hand, it's "sensible" to assume that all other new particles, e.g. those linked to grand unification or extra dimensions, are tied to very high energies and therefore unobservable in the near future.

But this expectation isn't rock-solid. In fact, just like the Standard Model fermions are light, there may be additional particles that naturally result from GUT or string theory model building that are light and accessible to the LHC, too. One could expect that "it is likely" that the gauge coupling unification miracle from minimal SUSY GUT ceases to work. But it may work, perhaps with some fixes, and although the fixes are disadvantages, the models may have some advantages that are even more irresistible than the gauge coupling unification.

The possibility that some other, non-SUSY aspects of string models will be found first is here and it is unbelievably attractive, indeed. I would bet that this particular ambitious scenario is "less likely than yes/not" (or whatever is the opposite to "more likely than not" LOL) but the probability isn't zero.

A lighter topic: intestines and thumbs on feet



By Don Lincoln. ;-)

by Luboš Motl (noreply@blogger.com) at January 21, 2015 10:11 PM

Quantum Diaries

How to build your own particle detector

This article ran in symmetry on Jan. 20, 2015

Make a cloud chamber and watch fundamental particles zip through your living room! Image: Sandbox Studio, Chicago

The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down to us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

Materials:

  • Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
  • Felt
  • Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
  • Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

Steps:

  1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
  2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
  3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
  4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
  5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.
Artwork by: Sandbox Studio, Chicago

What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated, which means that it is just below its atmospheric dew point. And just as water molecules cling to blades of grass on cool autumn mornings, the atmospheric alcohol will form cloud-like droplets on anything it can cling to.

Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The tracks left behind look like the contrails of an airplane—long, spindly lines marking the particle’s path through your cloud chamber.

What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!

 

 

Sarah Charley

by Fermilab at January 21, 2015 06:01 PM

ZapperZ - Physics and Physicists

GUTs and TOEs
Another informative video, for the general public, from Don Lincoln and Fermilab.



Of course, if you had read my take on the so-called "Theory of Everything", you would know my stand on this when we consider emergent phenomena.

Zz.

by ZapperZ (noreply@blogger.com) at January 21, 2015 05:23 PM

Quantum Diaries

Lepton Number Violation, Doubly Charged Higgs Bosons, and Vector Boson Fusion at the LHC

Doubly charged Higgs bosons and lepton number violation are wickedly cool.

Hi Folks,

The Standard Model (SM) of particle physics is presently the best description of matter and its interactions at small distances and high energies. It is constructed based on observed conservation laws of nature. However, not all conservation laws found in the SM are intentional; lepton number conservation is one example. New physics models, such as those that introduce singly and doubly charged Higgs bosons, are flexible enough to reproduce previously observed data but can either conserve or violate these accidental conservation laws. Therefore, some of the best tests of whether such laws are truly fundamental may come from new physics.

Observed Conservation Laws of Nature and the Standard Model

Conservation laws, like the conservation of energy or the conservation of linear momentum, have the most remarkable impact on life and the universe. Conservation of energy, for example, tells us that cars need fuel to operate and perpetual motion machines can never exist. A football sailing across a pitch does not suddenly jerk to the left at 90º, because of conservation of linear momentum, unless acted upon by a player (a force). This is Newton’s First Law of Motion. In particle physics, conservation laws are not taken lightly; they dictate how particles are allowed to behave and forbid some processes from occurring. To see this in action, let’s consider a top quark (t) decaying into a W boson and a bottom quark (b).


A top quark cannot radiate a W+ boson and remain a top quark because of conservation of electric charge. Top quarks have an electric charge of +2/3 e, whereas W+ bosons have an electric charge of +1e, and we know quite well that

(+2/3)e ≠ (+1)e + (+2/3)e.

For reference a proton has an electric charge of +1e and an electron has an electric charge of -1e. However, a top quark can radiate a W+ boson and become a bottom quark, which has electric charge of -1/3e. Since

(+2/3)e = (+1)e + (-1/3)e,

we see that electric charge is conserved.
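This kind of bookkeeping is easy to automate. Here is a toy check of my own (using exact fractions, so that 2/3 does not get mangled by floating point), which simply compares the total charge before and after a decay:

from fractions import Fraction as F

# Electric charges in units of e.
charge = {"t": F(2, 3), "b": F(-1, 3), "W+": F(1)}

def conserves_charge(initial, final):
    # True if the total electric charge is the same before and after.
    return sum(charge[p] for p in initial) == sum(charge[p] for p in final)

print(conserves_charge(["t"], ["W+", "t"]))   # False: the forbidden process above
print(conserves_charge(["t"], ["W+", "b"]))   # True: the allowed decay t -> W+ b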

Conservation of energy, angular momentum, electric charge, etc., are so well established that the SM is constructed to automatically obey these laws. If we pick any mathematical term in the SM that describes how two or more particles interact (for example, how the top quark, bottom quark, and W boson interact with each other) and then add up the electric charge of all the participating particles, we will find that the total electric charge is zero:

The top quark-bottom quark-W boson vertices in the Standard Model, and the net charge carried by each interaction.

The top quark-bottom quark-W boson interaction terms in the Standard Model. Bars above quarks indicate that the quark is an antiparticle and has opposite charges.

 

Accidental Conservation Laws

However, not all conservation laws that appear in the SM are intentional. Conservation of lepton number is an example of this. A lepton is any SM fermion that does not interact with the strong nuclear force. There are six leptons in total: the electron, muon, tau, electron-neutrino, muon-neutrino, and tau-neutrino. We assign lepton number

L=1 to all leptons (electron, muon, tau, and all three neutrinos),

L=-1 to all antileptons (positron, antimuon, antitau, and all three antineutrinos),

L=0 to all other particles.

With these quantum number assignments, we see that lepton number is conserved in the SM. To clarify this important point: we get lepton number conservation for free due to our very rigid requirements when constructing the SM, namely the correct conservation laws (e.g., electric and color charge) and particle content. Since lepton number conservation was not intentional, we say that lepton number is accidentally conserved. Just as we counted the electric charge for the top-bottom-W interaction, we can count the net lepton number for the electron-neutrino-W interaction in the SM and see that lepton number really is zero:


The W boson-neutrino-electron interaction terms in the Standard Model. Bars above leptons indicate that the lepton is an antiparticle and has opposite charges.

However, lepton number conservation is not required to explain data. At no point in constructing the SM did we require that it be conserved. Because of this, many physicists question whether lepton number is actually conserved. It may be, but we do not know. This is indeed one topic that is actively researched. An interesting example of a scenario in which lepton number conservation could be tested is the class of theories with singly and doubly charged Higgs bosons. That is right: there are theories containing additional Higgs bosons with an electric charge equal to or double the electric charge of the proton.


Models with scalar SU(2) triplets contain additional neutral Higgs bosons as well as singly and doubly charged Higgs bosons.

Doubly charged Higgs bosons have an electric charge that is twice as large as a proton (2e), which leads to rather peculiar properties. As discussed above, every interaction between two or more particles must respect the SM conservation laws, such as conservation of electric charge. Because of this, a doubly charged Higgs (+2e) cannot decay into a top quark (+2/3 e) and an antibottom quark (+1/3 e),

(+2)e ≠ (+2/3)e + (+1/3)e.

However, a doubly charged Higgs (+2e) can decay into two W bosons (+1e) or two antileptons (+1e) with the same electric charge,

(+2)e = (+1)e + (+1)e.

But that is it. A doubly charged Higgs boson cannot decay into any other pair of SM particles, because that would violate electric charge conservation. For these two types of interactions, we can also check whether or not lepton number is conserved:

For the decay into same-sign W boson pairs, the total lepton number is 0L + 0L + 0L = 0L. In this case, lepton number is conserved!

For the decay into same-sign lepton pairs, the total lepton number is 0L + (-1)L + (-1)L = -2L. In this case, lepton number is violated!


Doubly charged Higgs boson interactions for same-sign W boson pairs and same-sign electron pairs. Bars indicate antiparticles. C’s indicate charge flipping.

Therefore, if we observe a doubly charged Higgs decaying into a pair of same-sign leptons, then we have evidence that lepton number is violated. If we only observe doubly charged Higgs bosons decaying into same-sign W bosons, then one may speculate that lepton number is conserved in the SM.
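To tie the two kinds of bookkeeping together, here is a small sketch of my own toy accounting (not code from any analysis) that checks both electric charge and lepton number for the doubly charged Higgs decays discussed above:

from fractions import Fraction as F

# (electric charge in units of e, lepton number); "e+" is a positron (an antilepton),
# "H++" the hypothetical doubly charged Higgs, "b~" an antibottom quark.
props = {"H++": (F(2), 0), "W+": (F(1), 0), "e+": (F(1), -1),
         "t": (F(2, 3), 0), "b~": (F(1, 3), 0)}

def check(initial, final):
    # Change in total charge and in total lepton number from initial to final state.
    dq = sum(props[p][0] for p in final) - sum(props[p][0] for p in initial)
    dl = sum(props[p][1] for p in final) - sum(props[p][1] for p in initial)
    return {"charge conserved": dq == 0, "lepton number change": dl}

print("H++ -> W+ W+ :", check(["H++"], ["W+", "W+"]))   # allowed, lepton number unchanged
print("H++ -> e+ e+ :", check(["H++"], ["e+", "e+"]))   # allowed only if lepton number is violated
print("H++ -> t  b~ :", check(["H++"], ["t", "b~"]))    # forbidden: charge is not conserved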

Doubly Charged Higgs Factories

Doubly charged Higgs bosons do not interact with quarks (otherwise they would violate electric charge conservation), so we have to rely on vector boson fusion (VBF) to produce them. VBF occurs when two bosons radiated from oncoming quarks scatter off each other, as seen in the diagram below.

Diagram depicting the process known as WW Scattering, where two quarks from two protons each radiate a W boson that then elastically interact with one another.

If two down quarks, one from each oncoming proton, radiate a W- boson (-1e) and become up quarks, the two W- bosons can fuse into a doubly charged Higgs with negative charge (-2e). If lepton number is violated, the Higgs boson can decay into a pair of same-sign electrons (2 × -1e). Counting lepton number at the beginning of the process (L = 0) and at the end (L = +1 + 1 = +2 for the two electrons), we see that it changes by two units!

Same-sign W- pairs fusing into a doubly charged Higgs boson that decays into same-sign electrons.

If lepton number is not violated, we will never see this decay and will only see decays to two very, very energetic W- bosons (-1e). Searching for vector boson fusion as well as lepton number violation are important components of the overarching Large Hadron Collider (LHC) research program at CERN. Unfortunately, there is no evidence for the existence of doubly charged scalars. On the other hand, we do have evidence for vector boson scattering (VBS) of same-sign W bosons! Additional plots can be found on ATLAS’ website. Reaching this tremendous milestone is a triumph for the LHC experiments. Vector boson fusion is a very, very, very, very, very rare process in the Standard Model and difficult to separate from other SM processes. Finding evidence for it is a first step in using the VBF process as a probe of new physics.


Same-sign W boson scattering candidate event at the LHC ATLAS experiment. Slide credit: Junjie Zhu (Michigan)

We have observed that some quantities, like momentum and electric charge, are conserved in nature. Conservation laws are few and far between, but they are powerful. The modern framework of particle physics has these laws built into it, but it has also been found to accidentally conserve other quantities, like lepton number. However, as lepton number conservation is not required to reproduce data, it may be the case that these accidental laws are not, in fact, respected. Theories that introduce charged Higgs bosons can reproduce data but also predict new interactions, such as doubly charged Higgs bosons decaying to same-sign W boson pairs and, if lepton number is violated, to same-sign charged lepton pairs. These new, exotic particles can be produced through vector boson fusion of two same-sign W bosons. VBF is a rare process in the SM, and its rate can greatly increase if new particles exist. At last, there is evidence for vector boson scattering of same-sign W bosons, which may be the next step toward discovering new particles and new laws of nature!

Happy Colliding

- Richard (@BraveLittleMuon)

by Richard Ruiz at January 21, 2015 04:16 PM

Clifford V. Johnson - Asymptotia

Flowers of the Sky
Augsburger_Wunderzeichenbuch,_Folio_52 Here is a page of a lovely set of (public domain) images of comets and meteors, as depicted in various ways through the centuries. The above sample is from the famous [...] Click to continue reading this post

by Clifford at January 21, 2015 04:15 PM

Tommaso Dorigo - Scientificblogging

One Year In Pictures
A periodic backup of my mobile phone yesterday - mainly pictures and videos - was the occasion to give a look back at things I did and places I visited in 2014, for business and leisure. I thought it would be fun to share some of those pictures with you, with sparse comments. I know, Facebook does this for you automatically, but what does Facebook know of what is meaningful and what isn't ? So here we go.
The first pic was taken at Beaubourg, in Paris - it is a sculpture I absolutely love: "The king plays with the queen" by Max Ernst.



Still in Paris (for a vacation at the beginning of January), the grandiose interior of the Opera de Paris...

read more

by Tommaso Dorigo at January 21, 2015 02:12 PM

January 20, 2015

Jester - Resonaances

Planck: what's new
Slides from the recent Planck collaboration meeting are now available online. One can find there preliminary results that include input from Planck's measurements of the polarization of the Cosmic Microwave Background (some of which were previously available via the legendary press release in French). I already wrote about the new important limits on the dark matter annihilation cross section. Here I picked out a few more things that may be of interest for a garden variety particle physicist.








  • ΛCDM. 
    Here is a summary of Planck's best fit parameters of the standard cosmological model with and without the polarization info:

    Note that the temperature-only numbers are slightly different than in the 2013 release, because of improved calibration and foreground cleaning.  Frustratingly, ΛCDM remains  solid. The polarization data do not change the overall picture, but they shrink some errors considerably. The Hubble parameter remains at a low value; the previous tension with Ia supernovae observations seems to be partly resolved and blamed on systematics on the supernovae side.  For the large scale structure fans, the parameter σ8 characterizing matter fluctuations today remains at a high value, in some tension with weak lensing and cluster counts. 
  • Neff.
    There are also better limits on deviations from ΛCDM. One interesting result is the new improved constraint on the effective number of neutrinos, Neff in short. The way this result is presented may be confusing. We know perfectly well there are exactly 3 light active (interacting via the weak force) neutrinos; this was established in the 90s at the LEP collider, and Planck has little to add in this respect. Heavy neutrinos, whether active or sterile, would not show up in this measurement at all. For light sterile neutrinos, Neff implies an upper bound on the mixing angle with the active ones. The real importance of Neff lies in the fact that it counts any light particles (other than photons) contributing to the energy density of the universe at the time of CMB decoupling. Besides the standard model neutrinos, other theorized particles could contribute any real positive number to Neff, depending on their temperature and spin. A few years ago there were consistent hints of Neff much larger than 3, which would imply physics beyond the standard model. Alas, Planck has shot down these claims. The latest number combining Planck and Baryon Acoustic Oscillations is Neff = 3.04±0.18, spot on the 3.046 expected from the standard model neutrinos. This represents an important constraint on any new physics model with very light (less than an eV) particles. 
  • Σmν.
    The limit on the sum of the neutrino masses keeps improving and is getting into a really interesting regime. Recall that, from oscillation experiments, we can extract the neutrino mass differences: Δm32 ≈ 0.05 eV and Δm12 ≈ 0.009 eV up to a sign, but we don't know their absolute masses. Planck and others have already excluded the possibility that all 3 neutrinos have approximately the same mass. Now they are not far from probing the so-called inverted hierarchy, where two neutrinos have approximately the same mass and the 3rd is much lighter, in which case Σmν ≈ 0.1 eV (a quick numerical sketch follows after this list). Planck and Baryon Acoustic Oscillations set the limit Σmν < 0.16 eV at 95% CL; however, this result is not strongly advertised because it is sensitive to the value of the Hubble parameter. Including non-Planck measurements leads to a weaker, more conservative limit Σmν < 0.23 eV, the same as quoted in the 2013 release. 
  • CνB.
    For dessert, something cool. So far we could observe the cosmic neutrino background only through its contribution to the  energy density of radiation in the early universe. This affects observables that can be inferred from the CMB acoustic peaks, such as the Hubble expansion rate or the time of matter-radiation equality. Planck, for the first time, probes the properties of the CνB. Namely, it measures the  effective sound speed ceff and viscosity cvis parameters, which affect the growth of perturbations in the CνB. Free-streaming particles like the neutrinos should have ceff^2 =  cvis^2 = 1/3, while Planck measures ceff^2 = 0.3256±0.0063 and  cvis^2 = 0.336±0.039. The result is unsurprising, but it may help constraining some more exotic models of neutrino interactions. 
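As a rough illustration of why the inverted hierarchy sits near Σmν ≈ 0.1 eV (a back-of-the-envelope sketch using the splittings quoted above and taking the lightest neutrino to be massless):

from math import sqrt

# Splittings quoted above, in eV (square roots of the mass-squared differences).
dm32, dm21 = 0.05, 0.009

# Minimal mass sum with the lightest neutrino massless.
normal = 0 + dm21 + sqrt(dm21**2 + dm32**2)      # m1 = 0, then m2, m3
inverted = 0 + dm32 + sqrt(dm32**2 + dm21**2)    # m3 = 0, then m1, m2

print(f"minimal sum, normal hierarchy:   {normal:.3f} eV")    # about 0.06 eV
print(f"minimal sum, inverted hierarchy: {inverted:.3f} eV")  # about 0.10 eV

So pushing the bound below roughly 0.1 eV would start to disfavour the inverted ordering, which is what makes the current limits so interesting.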


To summarize, Planck continues to deliver disappointing results, and there's still more to follow ;)

by Jester (noreply@blogger.com) at January 20, 2015 11:48 PM

Symmetrybreaking - Fermilab/SLAC

How to build your own particle detector

Make a cloud chamber and watch fundamental particles zip through your living room!

The scale of the detectors at the Large Hadron Collider is almost incomprehensible: They weigh thousands of tons, contain millions of detecting elements and support a research program for an international community of thousands of scientists.

But particle detectors aren’t always so complicated. In fact, some particle detectors are so simple that you can make (and operate) them in your own home.

The Continuously Sensitive Diffusion Cloud Chamber is one such detector. Originally developed at UC Berkeley in 1938, this type of detector uses evaporated alcohol to make a ‘cloud’ that is extremely sensitive to passing particles.

Cosmic rays are particles that are constantly crashing into the Earth from space. When they hit Earth’s atmosphere, they release a shower of less massive particles, many of which invisibly rain down to us.

When a cosmic ray zips through a cloud, it creates ghostly particle tracks that are visible to the naked eye.

Building a cloud chamber is easy and requires only a few simple materials and steps:

Materials:

  • Clear plastic or glass tub (such as a fish tank) with a solid lid (plastic or metal)
  • Felt
  • Isopropyl alcohol (90% or more. You can find this at a pharmacy or special order from a chemical supply company. Wear safety goggles when handling the alcohol.)
  • Dry ice (frozen carbon dioxide. Often used at fish markets and grocery stores to keep products cool. Wear thick gloves when handling the dry ice.)

Steps:

  1. Cut the felt so that it is the size of the bottom of the fish tank. Glue it down inside the tank (on the bottom where the sand and fake treasure chests would normally go).
  2. Once the felt is secured, soak it in the isopropyl alcohol until it is saturated. Drain off any excess alcohol.
  3. Place the lid on top of dry ice so that it lies flat. You might want to have the dry ice in a container or box so that it is more stable.
  4. Flip the tank upside down, so that the felt-covered bottom of the tank is on top, and place the mouth of the tank on top of the lid.
  5. Wait about 10 minutes… then turn off the lights and shine a flashlight into your tank.
Artwork by: Sandbox Studio, Chicago

What is happening inside your cloud chamber?

The alcohol absorbed by the felt is at room temperature and is slowly evaporating into the air. But as the evaporated alcohol sinks toward the dry ice, it cools down and wants to turn back into a liquid.

The air near the bottom of the tank is now supersaturated, which means that it is just below its atmospheric dew point. And just as water molecules cling to blades of grass on cool autumn mornings, the atmospheric alcohol will form cloud-like droplets on anything it can cling to.

Particles, coming through!

When a particle zips through your cloud chamber, it bumps into atmospheric molecules and knocks off some of their electrons, turning the molecules into charged ions. The atmospheric alcohol is attracted to these ions and clings to them, forming tiny droplets.

The tracks left behind look like the contrails of an airplane—long, spindly lines marking the particle’s path through your cloud chamber.

What can you tell from your tracks?

Many different types of particles might pass through your cloud chamber. It might be hard to see, but you can actually differentiate between the types of particles based on the tracks they leave behind.

Short, fat tracks

Sorry—not a cosmic ray. When you see short, fat tracks, you’re seeing an atmospheric radon atom spitting out an alpha particle (a clump of two protons and two neutrons). Radon is a naturally occurring radioactive element, but it exists in such low concentrations in the air that it is less radioactive than peanut butter. Alpha particles spat out of radon atoms are bulky and low-energy, so they leave short, fat tracks.

Long, straight track

Congratulations! You’ve got muons! Muons are the heavier cousins of the electron and are produced when a cosmic ray bumps into an atmospheric molecule high up in the atmosphere. Because they are so massive, muons bludgeon their way through the air and leave clean, straight tracks.

Zig-zags and curly-cues

If your track looks like the path of a lost tourist in a foreign city, you’re looking at an electron or positron (the electron’s anti-matter twin). Electrons and positrons are created when a cosmic ray crashes into atmospheric molecules. Electrons and positrons are light particles and bounce around when they hit air molecules, leaving zig-zags and curly-cues.

Forked tracks

If your track splits, congratulations! You just saw a particle decay. Many particles are unstable and will decay into more stable particles. If your track suddenly forks, you are seeing physics in action!

Like what you see? Sign up for a free subscription to symmetry!

    by Sarah Charley at January 20, 2015 07:44 PM

    ZapperZ - Physics and Physicists

    Macrorealism Violated By Cs Atoms
    It is another example where the more they test QM, the more convincing it becomes.

    This latest experiment is to test whether superposition truly exist via a very stringent test and applying the Leggett-Garg criteria.

    In comparison with these earlier experiments, the atoms studied in the experiments by Robens et al.’s are the largest quantum objects with which the Leggett-Garg inequality has been tested using what is called a null measurement—a “noninvasive” measurement that allows the inequality to be confirmed in the most convincing way possible. In the researchers’ experiment, a cesium atom moves in one of two standing optical waves that have opposite electric-field polarizations, and the atom’s position is measured at various times. The two standing waves can be pictured as a tiny pair of overlapping one-dimensional egg-carton strips—one red, one blue (Fig. 1). The experiment consists of measuring correlation between the atom’s position at different times. Robens et al. first put the atom into a superposition of two internal hyperfine spin states; this corresponds to being in both cartons simultaneously. Next, the team slid the two optical waves past each other, which causes the atom to smear out over a distance of up to about 2 micrometers in a motion known as a quantum walk. Finally, the authors optically excited the atom, causing it to fluoresce and reveal its location at a single site. Knowing where the atom began allows them to calculate, on average, whether the atom moved left or right from its starting position. By repeating this experiment, they can obtain correlations between the atom’s position at different times, which are the inputs into the Leggett-Garg inequality.

    You may read the result they got in the report. Also note that you also get free access to the actual paper.

    But don't miss the importance of this work, as stated in this review.


    Almost a century after the quantum revolution in science, it’s perhaps surprising that physicists are still trying to prove the existence of superpositions. The real motivation lies in the future of theoretical physics. Fledgling theories of macrorealism may well form the basis of the next generation “upgrade” to quantum theory by setting the scale of the quantum-classical boundary. Thanks to the results of this experiment, we can be sure that the boundary cannot lie below the scale at which the cesium atom has been shown to behave like a wave. How high is this scale? A theoretical measure of macroscopicity [8] (see 18 April 2013 Synopsis) gives the cesium atom a modest ranking of 6.8, above the only other object tested with null measurements [5], but far below where most suspect the boundary lies. (Schrödinger’s cat is a 57.) In fact, matter-wave interferometry experiments have already shown interference fringes with Buckminsterfullerene molecules [9], boasting a rating as high as 12. In my opinion, however, we can be surer of the demonstration of the quantumness of the cesium atom because of the authors’ exclusion of macrorealism via null result measurements. The next step is to try these experiments with atoms of larger mass, superposed over longer time scales and separated by greater distances. This will push the envelope of macroscopicity further and reveal yet more about the nature of the relationship between the quantum and the macroworld.


    Zz.

    by ZapperZ (noreply@blogger.com) at January 20, 2015 06:31 PM

    Lubos Motl - string vacua and pheno

    Prof Collins explains string theory
    Prof Emeritus Walter Lewin has been an excellent physics instructor who loved to include truly physical demonstrations of certain principles, laws, and concepts.



    After you understand string theory, don't forget about inertia, either. ;-)

    When the SJWs fired him and tried to erase him from the history of the Universe, a vacuum was created at MIT.




    The sensible people at MIT were thinking about a way to fill this vacuum. After many meetings, the committee decided to hire a new string theory professor who is especially good at teaching, someone like Barton Zwiebach #2 but someone who can achieve an even more intimate contact with the students.




    At the end, it became clear that they had to hire Prof Collins and her mandatory physics class on string theory is shown above. It is not too demanding even though e.g. the readers of texts by Mr Smolin or Mr Woit – or these not so Gentlemen themselves – may still find the material too technical.

    But the rest will surely enjoy it. ;-)



    Someone could think that this affiliation with MIT is just a joke but I assure you that Dr Paige Hopewell from the Bikini Calculus lecture above has been an excellent nuclear physicist affiliated with the MIT. While at Purdue, she would win an award in 2007, and so on.

    See also: hot women banned in optics

    by Luboš Motl (noreply@blogger.com) at January 20, 2015 10:35 AM

    Clifford V. Johnson - Asymptotia

    In Print…!
    Here's the postcard they made to advertise the event of tomorrow (Tuesday)*. I'm pleased with how the design worked out, and I'm extra pleased about one important thing. This is the first time that any of my graphical work for the book has been printed professionally in any form on paper, and I am pleased to see that the pdf that I output actually properly gives the colours I've been working with on screen. There's always been this nagging background worry (especially after the struggles I had to do to get the right output from my home printers) that somehow it would all be terribly wrong... that the colours would [...] Click to continue reading this post

    by Clifford at January 20, 2015 02:55 AM

    January 19, 2015

    Jester - Resonaances

    Weekend plot: spin-dependent dark matter
    This weekend plot is borrowed from a nice recent review on dark matter detection:
    It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of spin vectors of a dark matter particle and a nucleon (proton or neutron). This is usually referred to as the spin-dependent scattering, although other kinds of spin-dependent forces that also depend on the relative velocity are possible.

    In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus.  One can, very roughly, think of this process as a scattering on an odd unpaired nucleon. For this reason, xenon target experiments such as Xenon100 or LUX are less sensitive to the spin-dependent scattering on protons because xenon nuclei have an even number of protons.  In this case,  experiments that contain fluorine in their target molecules have the best sensitivity. This is the case of the COUPP, Picasso, and SIMPLE experiments, who currently set the strongest limit on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, the limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun. But these limits depend on what dark matter annihilates into, therefore they are much more model-dependent than the direct detection limits.

    by Jester (noreply@blogger.com) at January 19, 2015 05:56 PM

    ZapperZ - Physics and Physicists

    I Win The Nobel Prize And All I Got Was A Parking Space
    I'm sure it is a slight exaggeration, but it is still amusing to read Shuji Nakamura's response on the benefits he got from UCSB after winning the physics Nobel Prize. On the benefits of winning a Nobel Prize:

     "I don't have to teach anymore and I get a parking space. That's all I got from the University of California." 

     Zz.

    by ZapperZ (noreply@blogger.com) at January 19, 2015 04:36 PM

    Georg von Hippel - Life on the lattice

    Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
    Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

    We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

    This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

    We would like to invite you to consider attending this and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

    Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants. The MITP team will arrange and book the accommodation individually for accepted participants.

    Please do not hesitate to contact us at coordinator@mitp.uni-mainz.de if you have any questions.

    We hope you will be able to join us in Mainz in 2015!

    With best regards,

    the organizers:
    Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

    by Georg v. Hippel (noreply@blogger.com) at January 19, 2015 04:22 PM

    Georg von Hippel - Life on the lattice

    Upcoming conference/workshop deadlines
    This is just a short reminder of some upcoming deadlines for conferences/workshops in the organization of which I am in some way involved.

    Abstract submission for QNP 2015 closes on 6th February 2015, and registration closes on 27th February 2015. Visit this link to submit an abstract, and this link to register.

    Applications for the Scientific Programme "Fundamental Parameters from Lattice QCD" at MITP close on 31st March 2015. Visit this link to apply.

    by Georg v. Hippel (noreply@blogger.com) at January 19, 2015 04:20 PM

    Clifford V. Johnson - Asymptotia

    Experiments with Colour
    Well, that was interesting! I got a hankering to experiment with pastels the other day. I am not sure why. Then I remembered that I had a similar urge some years ago but had not got past the phase of actually investing in a few bits of equipment. So I dug them out and found a bit of time to experiment. It is not a medium I've really done anything in before and I have a feeling it is a good additional way of exploring technique, and feeling out colour design for parts of the book later on. Who knows? Anyway, all I know is that without my [...] Click to continue reading this post

    by Clifford at January 19, 2015 02:45 PM

    January 18, 2015

    Clifford V. Johnson - Asymptotia

    LAIH Luncheon – Ramiro Gomez
    Yesterday's Luncheon at the Los Angeles Institute for the Humanities, the first of the year, was another excellent one (even though it was a bit more compact than I'd have liked). We caught up with each other and discussed what's been happening over the holiday season, and then had the artist Ramiro Gomez give a fantastic talk ("Luxury, Interrupted: Art Interventions for Social Change") about his work in highlighting the hidden people of Los Angeles - those cleaners, caregivers, gardeners and others who help make the city tick along, but who are treated as invisible by most. As someone who very regularly gets totally ignored (like I'm not even there!) while standing in front of my own house by many people in my neighbourhood who [...] Click to continue reading this post

    by Clifford at January 18, 2015 05:29 PM

    Quantum Diaries

    The Ties That Bind
    Cleaning the ATLAS Experiment

    Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

    A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

    The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 meters long, 26 meters high, and sits 80 metres underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies.  It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

    On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.

    ATLAS Big Wheel

    ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

    The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.

    To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

    Inside the ATLAS Detector

    Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

    The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

    There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

    All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

    ATLAS Collaboration Members

    ATLAS Collaboration members in discussion. What discoveries are in store this year? IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

    This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.

    by Steven Goldfarb at January 18, 2015 04:42 PM

    January 17, 2015

    Sean Carroll - Preposterous Universe

    We Are All Machines That Think

    My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”


    Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

    As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

    We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

    Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

    From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

    What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

    Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

    by Sean Carroll at January 17, 2015 07:48 PM

    Lubos Motl - string vacua and pheno

    Papers by BICEP2, Keck, and Planck out soon
    ...and other news from the CMB Minnesota conference...
    Off-topic: I won't post a new blog post on the "warmest 2014" measurements and claims. See an updated blog post on RSS AMSU for a few new comments and a graph on the GISS and NCDC results.
    The Twitter account of Kevork Abazajian of UC Irvine seems to be the most useful public source where you may learn about some of the most important announcements made at a recent CMB+Pol conference in Minnesota (January 14th-16th, 2015).



    Is BICEP2's more powerful successor still seeing the gravitational waves?




    Here are the tweets:

    Charles Lawrence (Planck): Planck ultimate results will be out in weeks. The CMB lensing potential was detected at 40σ by Planck. Planck's measurement of the \(1s\to 2s\) H transition from the CMB has uncertainties 5.5 times smaller than the laboratory measurement. Planck is not systematics limited on any angular scale. A future space mission needs 10-20x less noise. Try & find a foreground-free spot for polarization experiments (snark intended) - Planck 857 GHz map.

    100, 143, 217, 353 GHz polarization data won't be released in the 2015 @Planck data release.




    Anthony Challinor (Planck): Temperature-to-polarization leakage in the upcoming data release is not corrected for, so users beware. Planck finds that adding (light) massive sterile neutrinos does nothing to reduce the tension with the lensing+BAO data.

    Francois Boulanger (Planck): B-mode signal will not be detected without the removal of dust polarization from Planck with high accuracy and confidence. Dust SED does not vary strongly across the sky, which was surprising.

    Matthieu Tristram (Planck): Planck finds no region where the dust polarization can be neglected compared to primordial B-modes. (LM: This seems like a sloppy blanket statement to me: whether one is negligible depends on \(\ell\), doesn't it?)

    Sabino Matarrese (Planck): Starobinsky \(\varphi^2\) & exponential inflationary potential are most favored by the Planck primordial power spectrum reconstructions. No evidence of a primordial isocurvature non-Gaussianity is seen in Planck 2015. \(f_{NL} \sim 0.01\) non-Gaussianity of standard inflation will take LSS (halo bias & bispectrum) + 21 cm + CMB.

    Matias Zaldarriaga (theorist): if high \(r\) is detected, then something other than \(N\) \(e\)-folds is setting the inflationary dynamics. He is effectively giving \(r\lt 0.01\) as a theory-favored upper limit from inflation on the tensor amplitude.

    Abazajian @Kevaba: cosmology has the highest experimental sensitivity to neutrino mass and is forecast to maintain that position.

    Lorenzo Sorbo (theorist): non-boring tensors! Parity violation is detectable at 9σ. Parity violations can produce a differing amount of left- and right-handed gravitons, and produce non-zero TB and EB modes. The cosmological matter power spectrum gives neutrino mass constraints because neutrinos transition from radiation-like to matter-like. The shape and amplitude of the power spectrum give a handle on the neutrino mass. @Planck gives \(0.3\eV\) limits, the oscillation scale. \(dP_k/P_k\sim 1\%\) levels on the matter power spectrum give \(20\meV\) constraints on neutrino masses. CMB-S4 experiments alone should be able to get down to the \(34\meV\) level, and to the \(15\meV\) level with BAO measurements.

    Olivier Doré (SPHEREx): the SPHEREx mission will take all-sky spectra of every 6.2" pixel to \(R=40\) in the NIR. Quite a legacy! SPHEREx will detect with high significance single-field inflation non-Gaussianity. SPHEREx will detect *every* quasar in the Universe, approximately 1.5 million. SPHEREx on the arXiv: 1412.4872.

    The @SPTelescope polarization main survey patch of 500 square degrees is currently underway.

    Bradford Benson (SPTpol): preliminary SPTpol results show a detection of lensing-scale BB modes at the 5σ level. SPT-3G will have 16,000 3-band multichroic pixels with three 720 mm alumina lenses at 4 K and 3x the FOV. SPT-3G will have a 150σ detection of lensing B modes & a forecast \(\sigma(N_{eff})=0.06\).

    Suzanne Staggs (ACTpol): ACTpol has detected CMB lensing B modes at 4.5σ. neutrinos & dark energy forecasts for Advanced ACT. Exact numbers are available in de Bernardis poster.

    Nils Halverson (POLARBEAR): POLARBEAR rejects "no lensing B-modes" at 4.2σ. The Simons Array of 3x POLARBEAR-2 has forecast sensitivities \(\sigma(m_\nu)=40\meV\) and \(\sigma(r=0.1)=\sigma(n_s)=0.006\).

    Paolo de Bernardis poster: Advanced ACT plus BOSS ultimate sensitivity \(96\meV\) for the \(\nu\) mass.

    John Kováč (BICEP2): BICEP2 sees excess power at 1 degree scale in BB.
    BICEP2 + Planck + Keck Array analysis SOON. Cannot be shown yet.
    Keck Array is 2.5 times more sensitive than BICEP2. The analysis is underway. With the dataset we had back when we published, we were only able to exclude dust at 1.7 sigma. The absence of any departure of the SED from a simple scaling law is very good news.
    At end of Kováč's talk: BICEP2 + Planck out by end of month. Those + Keck Array 150 GHz by spring 2015. All of this + other Keck frequencies will be released by the end of 2015.
    Aurelien Fraisse (SPIDER): SPIDER's 6-detector, 2-frequency flight is under way; it is foreground limited, not systematics limited. \(r \lt 0.03\) at 3σ, low foreground.

    Al Kogut (PIPER): PIPER will measure B modes over almost all of the sky at multiple frequencies; 8 flights get to \(r \lt 0.007\) (2σ).

    CLASS will be able to measure \(r = 0.01\), even with galactic foregrounds. Site construction underway.

    Lloyd Knox (theorist): detecting relic neutrinos is possible via gravitational effects in the CMB. The dynamics of the phase shift in acoustic peaks results from variation in \(N_{eff}\).

    Uroš Seljak (theorist): multiple deflections in the weak lensing signal are important when the convergence sensitivity gets to the ~1% level. The effects are not at the 10% level in \(C_\ell^{\kappa\kappa}\) but more like 1%. Krause et al. is in preparation. Delensing of B-modes has a theoretical limit at 0.2 μK arcmin or \(r=2\times 10^{-5}\).

    Carlo Contaldi: little was known about the dust polarization before BICEP2 & Planck. SPIDER = 6 x BICEP2 - 30 km of atmosphere and less exposure time. Detailed modeling of the dust polarization took place. There is a large uncertainty in the Galactic B field; input was taken from starlight observations. Full 3D models for the "southern patch" including the BICEP2 window reproduce the WMAP 23 GHz channel: small & large scales agree well, intermediate scales do not.

    Raphael Flauger (theorist): BICEP2 BB + Planck 353 GHz give no evidence for primordial B modes. Plus, the sun sets outside.

    by Luboš Motl (noreply@blogger.com) at January 17, 2015 06:39 AM

    January 16, 2015

    Symmetrybreaking - Fermilab/SLAC

    20-ton magnet heads to New York

    A superconducting magnet begins its journey from SLAC National Accelerator Laboratory in California to Brookhaven Lab in New York.

    Imagine an MRI magnet with a central chamber spanning some 9 feet—massive enough to accommodate a standing African elephant. Physicists at the US Department of Energy’s Brookhaven National Laboratory need just such an extraordinary piece of equipment for an upcoming experiment. And, as luck would have it, physicists at SLAC National Accelerator Laboratory happen to have one on hand.

    Instead of looking at the world’s largest land animal, this magnet takes aim at the internal structure of something much smaller: the atomic nucleus.

    Researchers at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) specialize in subatomic investigations, smashing atoms and tracking the showers of fast-flying debris. RHIC scientists have been sifting through data from these nuclear collisions for 13 years, but to go even deeper they need to upgrade their detector technology. That’s where a massive cylindrical magnet comes in.

    “The technical difficulty in manufacturing such a magnet is staggering,” says Brookhaven Lab physicist David Morrison, co-spokesperson for PHENIX, one of RHIC’s two main experiments. “The technology may be similar to an MRI—also a superconducting solenoid with a hollow center—but many times larger and completely customized. These magnets look very simple from the outside, but the internal structure contains very sophisticated engineering. You can’t just order one of these beasts from a catalogue.”

    The proposed detector upgrade—called sPHENIX—launched the search for this elusive magnet. After assessing magnets at physics labs across the world, the PHENIX collaboration found an ideal candidate in storage across the country.

    At SLAC in California, a 40,000-pound beauty had recently finished a brilliant experimental run. This particular solenoid magnet—a thick, hollow pipe about 3.5 meters across and 3.9 meters long—once sat at the heart of a detector in SLAC’s BaBar experiment, which explored the asymmetry between matter and antimatter from 1999 to 2008.

    “We disassembled the detector and most of the parts have already gone to the scrap yard,” says Bill Wisniewski, who serves as the deputy to the SLAC Particle Physics and Astrophysics director and was closely involved with planning the move. “It’s just such a pleasure to see that there’s some hope that a major component of the detector—the solenoid—will be reused.”

    The magnet was loaded onto a truck and departed SLAC today, beginning its long and careful journey to Brookhaven’s campus in New York.

    “The particles that bind and constitute most of the visible matter in the universe remain quite mysterious,” says PHENIX co-spokesperson Jamie Nagle, a physicist at the University of Colorado. “We’ve made extraordinary strides at RHIC, but the BaBar magnet will take us even further. We’re grateful for this chance to give this one-of-a-kind equipment a second life, and I’m very excited to see how it shapes the future of nuclear physics.”

    Courtesy of: Brookhaven Lab

    The BaBar solenoid

    The BaBar magnet, a 30,865-pound solenoid housed in an 8250-pound frame, was built by the Italian company Ansaldo. Ansaldo’s superconducting magnets have found their way into many pioneering physics experiments, including the ATLAS and CMS detectors of the Large Hadron Collider. The inner ring of the BaBar magnet spans 2.8 meters with a total outer diameter of nearly 3.5 meters—about the width of the Statue of Liberty’s arm.

    During its run at SLAC, the BaBar experiment made many strides in fundamental physics, including contributions to the work awarded the 2008 Nobel Prize in Physics for the theory behind “charge-parity violation,” the idea that matter and antimatter behave in slightly different ways. This concept explains in part why the universe today is filled with matter and not antimatter.

    “BaBar was a seminal experiment in particle physics, and the magnet’s strength, size and uniform field proved essential to its discoveries,” says John Haggerty, the Brookhaven physicist leading the acquisition of the BaBar magnet. “It’s a remarkable piece of engineering, and it has potential beyond its original purpose.”

    In May 2013, Haggerty visited SLAC to meet with Wesley Craddock, the engineer who worked with the magnet since its installation, and Mike Racine, the technician who supervised its removal and storage. “It was immediately clear that this excellent solenoid was in very good condition and almost ready to move,” Haggerty says.

    Adds Morrison, “The BaBar magnet is larger than our initial plans called for, but using this incredible instrument will save considerable resources by repurposing existing national lab assets.”

    Brookhaven Lab was granted ownership of the BaBar solenoid in July 2013, but there was still the issue of the entire continent that sat between SLAC and the experimental hall of the PHENIX detector.

    Photo by: Andy Freeberg, SLAC National Accelerator Laboratory

    Moving the magnet

    The Department of Energy is no stranger to sharing massive magnets. In the summer of 2013, the 50-foot-wide Muon g-2 ring moved from Brookhaven Lab to Fermilab, where it will search for undiscovered particles hidden in the vacuum.

    “As you might imagine, shipping this magnet requires very careful consideration,” says Peter Wanderer, who heads Brookhaven’s Superconducting Magnet Division and worked with colleagues Michael Anerella and Paul Kovach on engineering for the big move. “You’re not only dealing with an oddly shaped and very heavy object, but also one that needs to be protected against even the slightest bit of damage. This kind of high-field, high-uniformity magnet can be surprisingly sensitive.”

    Preparations for the move required consulting with one of the solenoid’s original designers in Italy, Pasquale Fabbricatore, and designing special shipping fixtures to stabilize components of the magnet.

    After months of preparation at both SLAC and Brookhaven, the magnet—inside its custom packaging—was loaded onto a specialized truck this morning, and slowly began its journey to New York.

    “I’m sad to see it go,” Racine says. “It’s the only one like it in the world. But I’m happy to see it be reused.”

    After the magnet arrives, a team of experts will conduct mechanical, electrical, and cryogenic tests to prepare for its use in the sPHENIX upgrade.

    “We hope to have sPHENIX in action by 2021—including the BaBar magnet at its heart—but we have to remember that it is currently a proposal, and physics is full of surprises,” Morrison says.

    The BaBar magnet will be particularly helpful in identifying upsilons—the bound state of a very heavy bottom quark and an equally heavy anti-bottom quark. There are three closely related kinds of upsilons, each of which melts, or dissociates, at a different well-defined trillion-degree temperature. This happens in the state of matter known as quark-gluon plasma, or QGP, which was discovered at RHIC.

    “We can use these upsilons as a very precise thermometer for the QGP and understand its transition into normal matter,” Morrison says. “Something similar happened in the early universe as it began to cool microseconds after the big bang.”

     


    by Justin Eure at January 16, 2015 09:10 PM

    Symmetrybreaking - Fermilab/SLAC

    Scientists complete array on Mexican volcano

    An international team of astrophysicists has completed an advanced detector to map the most energetic phenomena in the universe.

    On Thursday, atop Volcán Sierra Negra, on a flat ledge near the highest point in Mexico, technicians filled the last of a collection of 300 cylindrical vats containing millions of gallons of ultrapure water.

    Together, the vats serve as the High-Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory, a vast particle detector covering an area larger than 5 acres. Scientists are using it to catch signs of some of the highest-energy astroparticles to reach the Earth.

    The vats sit at an altitude of 4100 meters (13,500 feet) on a rocky site within view of the nearby Large Millimeter Telescope Alfonso Serrano. The area remained undeveloped until construction of the LMT, which began in 1997, brought with it the first access road, along with electricity and data lines.

    Temperatures at the top of the mountain are usually just cool enough for snow year-round, even though the atmosphere at the bottom of the mountain is warm enough to host palm trees and agave.

    “The local atmosphere is part of the detector,” says Alberto Carramiñana, general director of INAOE, the National Institute of Astrophysics, Optics and Electronics.

    Scientists at HAWC are working to understand high-energy particles that come from space. High-energy gamma rays come from extreme environments such as supernova explosions, active galactic nuclei and gamma-ray bursts. They’re also associated with high-energy cosmic rays, the origins of which are still unknown.

    When incoming gamma rays and cosmic rays from space interact with Earth’s atmosphere, they produce a cascade of particles that shower the Earth. When these high-energy secondary particles reach the vats, they shoot through the water inside faster than light can travel through it, producing an optical shock wave called “Cherenkov radiation.” The boom looks like a glowing blue, violet or ultraviolet cone.
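
    As a rough illustration of the physics (my own numbers, not part of the HAWC analysis chain), the Cherenkov condition in water with refractive index n ≈ 1.33 requires a particle speed β = v/c above 1/n, and the light comes out on a cone whose half-angle satisfies cos θc = 1/(nβ). A minimal sketch:

        import math

        n_water = 1.33                              # refractive index of water (assumed value)
        beta_threshold = 1.0 / n_water              # minimum speed for Cherenkov emission
        print(f"threshold: beta > {beta_threshold:.3f}, i.e. v > {beta_threshold:.0%} of c")

        for beta in (0.70, 0.80, 0.999):            # shower particles are nearly ultra-relativistic
            if beta * n_water <= 1.0:
                print(f"beta = {beta:.3f}: below threshold, no Cherenkov light")
            else:
                theta_c = math.degrees(math.acos(1.0 / (n_water * beta)))
                print(f"beta = {beta:.3f}: Cherenkov cone half-angle ~ {theta_c:.1f} degrees")

    For nearly ultra-relativistic particles the cone opens to about 41 degrees in water.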

    The Pierre Auger Cosmic Ray Observatory in western Argentina, in operation since 2004, uses similar surface detector tanks to catch cosmic rays, but its focus is particles at higher energies—up to millions of giga-electronvolts. HAWC observes widely and deeply across the energy range from 100 giga-electronvolts to 100,000 giga-electronvolts.

    “HAWC is a unique water Cherenkov observatory, with no actual peer in the world,” Carramiñana says.

    Results from HAWC will complement the Fermi Gamma-ray Space Telescope, which observes at lower energy levels, as well as dozens of other tools across the electromagnetic spectrum.

    The vats at HAWC are made of corrugated steel, and each one holds a sealed, opaque bladder containing 50,000 gallons of liquid, according to Manuel Odilón de Rosas Sandoval, HAWC tank assembly coordinator. Each tank is 4 meters (13 feet) high and 7.3 meters (24 feet) in diameter and includes four light-reading photomultiplier tubes to detect the Cherenkov radiation.

    From its perch, HAWC sees the high-energy spectrum, in which particles have more energy in their motion than in their mass. The device is open to particles from about 15 percent of the sky at a time and, as the Earth rotates, is exposed to about 2/3 of the sky per day.
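
    Those two coverage figures are consistent with simple geometry. Assuming an effective field of view extending to roughly 45 degrees from the zenith (my assumption, not an official HAWC specification) and the site's latitude of about 19° N, a quick check:

        import math

        zenith_max = math.radians(45.0)   # assumed effective field-of-view half-angle
        latitude   = math.radians(19.0)   # approximate latitude of Volcán Sierra Negra

        # Instantaneous coverage: solid-angle fraction of a cone of half-angle 45 degrees.
        instantaneous = (1.0 - math.cos(zenith_max)) / 2.0

        # Daily coverage: the Earth's rotation sweeps the declination band
        # |dec - latitude| < 45 degrees through the zenith cone.
        daily = (math.sin(latitude + zenith_max) - math.sin(latitude - zenith_max)) / 2.0

        print(f"instantaneous ~ {instantaneous:.0%}, per day ~ {daily:.0%}")   # ~15% and ~67%

    Both numbers land close to the quoted 15 percent and two-thirds.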

    Combining data from the 1200 sensors, astrophysicists can piece together the precise origins of the particle shower. With tens of thousands of events hitting the vats every second, around a terabyte of data will arrive per day. The device will record half a trillion events per year.
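
    Those rates also hang together as back-of-the-envelope arithmetic. Taking 20,000 events per second as a representative figure for "tens of thousands" (my assumption):

        events_per_second = 20_000                       # assumed representative event rate
        seconds_per_day = 24 * 3600
        seconds_per_year = 365 * seconds_per_day

        events_per_year = events_per_second * seconds_per_year
        print(f"events per year ~ {events_per_year:.1e}")           # ~6e11, roughly half a trillion

        bytes_per_day = 1e12                                        # the quoted terabyte per day
        bytes_per_event = bytes_per_day / (events_per_second * seconds_per_day)
        print(f"implied raw event size ~ {bytes_per_event:.0f} bytes")

    That works out to roughly half a trillion events per year and a few hundred bytes per recorded event, consistent with the terabyte-per-day figure.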

    The observatory, which was proposed in 2006 and began construction in 2012, is scheduled to operate for 10 years. “I look forward to the operational lifetime of HAWC,” Carramiñana says. “We are not sure what we will find.”

    More than 100 researchers from 30 partner organizations in Mexico and the United States collaborate on HAWC, with two additional associated scientists in Poland and Costa Rica. Prominent American partners include the University of Maryland, NASA’s Goddard Space Flight Center and Los Alamos National Laboratory. Funding comes from the Department of Energy, the National Science Foundation and Mexico’s National Council of Science and Technology.


    by Eagle Gamma at January 16, 2015 02:00 PM

    Quantum Diaries

    Will Self’s CERN

    “It doesn’t look to me like the rose window of Notre Dame. It looks like a filthy big machine down a hole.” — Will Self

    Like any documentary, biography, or other educational program on the radio, Will Self’s five-part radio program Self Orbits CERN is partially a work of fiction. It is based, to be sure, on a real walk through the French countryside along the route of the Large Hadron Collider, on the quest for a promised “sense of wonder”. And it is based on real tours at CERN and real conversations. But editorial and narrative choices have to be made in producing a radio program, and in that sense it is exactly the story that Will Self wants to tell. He is, after all, a storyteller.

    It is a story of a vast scientific bureaucracy that promises “to steal fire from the gods” through an over-polished public relations team, with day-to-day work done by narrow, technically-minded savants who dodge the big philosophical questions suggested by their work. It is a story of big ugly new machines whose function is incomprehensible. It is the story of a walk through thunderstorms and countryside punctuated by awkward meetings with a cast of characters who are always asked the same questions, and apparently never give a satisfactory answer.

    Self’s CERN is not the CERN I recognize, but I can recognize the elements of his visit and how he might have put them together that way. Yes, CERN has secretariats and human resources and procurement, all the boring things that any big employer that builds on a vast scale has to have. And yes, many people working at CERN are specialists in the technical problems that define their jobs. Some of us are interested in the wider philosophical questions implied by trying to understand what the universe is made of and how it works, but some of us are simply really excited about the challenges of a tiny part of the overall project.

    “I think you understand more than you let on.” — Professor Akram Khan

    The central conflict of the program feels a bit like it was engineered by Self, or at least made inevitable by his deliberately-cultivated ignorance. Why, for example, does he wait until halfway through the walk to ask for the basic overview of particle physics that he feels he’s missing, unless it adds to the drama he wants to create? By the end of the program, he admits that asking for explanations when he hasn’t learned much background is a bit unfair. But the trouble is not whether he knows the mathematics. The trouble, rather, is that he’s listened to a typical, very short summary of why we care about particle physics, and taken it literally. He has decided in advance that CERN is a quasi-religious entity that’s somehow prepared to answer big philosophical questions, and never quite reconsiders the discussion based on what’s actually on offer.

    If his point is that particle physicists who speak to the public are sometimes careless, he’s absolutely right. We might say we are looking for how or why the universe was created, when really we mean we are learning what it’s made of and the rules for how that stuff interacts, which in turn lets us trace what happened in the past almost (but not quite) back to the moment of the Big Bang. When we say we’re replicating the conditions at that moment, we mean we’re creating particles so massive that they require the energy density that was present back then. We might say that the Higgs boson explains mass, when more precisely it’s part of the model that gives a mechanism for mass to exist in models whose symmetries forbid it. Usually a visit to CERN involves several different explanations from different people, from the high-level and media-savvy down to the technical details of particular systems. Most science journalists would put this information together to present the perspective they wanted, but Self apparently takes everything at face value, and asks everyone he meets for the big picture connections. His narrative is edited to literally cut off technical explanations, because he wants to hear about beauty and philosophy.

    Will Self wants the people searching for facts about the universe to also interpret them in the broadest sense, but this is much harder than he implies. As part of a meeting of the UK CMS Collaboration at the University of Bristol last week, I had the opportunity to attend a seminar by Professor James Ladyman, who discussed the philosophy of science and the relationship of working scientists to it. One of the major points he drove home was just how specialized the philosophy of science can be: that the tremendous existing body of work on, for example, interpreting Quantum Mechanics requires years of research and thought which is distinct from learning to do calculations. Very few people have had time to learn both, and their work is important, but great scientific or great philosophical work is usually done by people who have specialized in only one or the other. In fact, we usually specialize a great deal more, into specific kinds of quantum mechanical interactions (e.g. LHC collisions) and specific ways of studying them (particular detectors and interactions).

    Toward the end of the final episode, Self finds himself at Voltaire’s chateau near Ferney, France. Here, at last, is what he is looking for: a place where a polymath mused in beautiful surroundings on both philosophy and the natural world. Why have we lost that holistic approach to science? It turns out there are two very good reasons. First, we know an awful lot more than Voltaire did, which requires tremendous specialization discussed above. But second, science and philosophy are no longer the monopoly of rich European men with leisure time. It’s easy to do a bit of everything when you have very few peers and no obligation to complete any specific task. Scientists now have jobs that give them specific roles, working together as a part of a much wider task, in the case of CERN a literally global project. I might dabble in philosophy as an individual, but I recognize that my expertise is limited, and I really enjoy collaborating with my colleagues to cover together all the details we need to learn about the universe.

    In Self’s world, physicists should be able to explain their work to writers, artists, and philosophers, and I agree: we should be able to explain it to everyone. But he — or at least, the character he plays in his own story — goes further, implying that scientific work whose goals and methods have not been explained well, or that cannot be recast in aesthetic and moral terms, is intrinsically suspect and potentially valueless. This is a false dichotomy: it’s perfectly possible, even likely, to have important research that is often explained poorly! Ultimately, Self Orbits CERN asks the right questions, but it is too busy musing about what the answers should be to pay attention to what they really are.

    For all that, I recommend listening to the five 15-minute episodes. The music is lovely, the story engaging, and the description of the French countryside invigorating. The jokes were great, according to Miranda Sawyer (and you should probably trust her sense of humour rather than the woefully miscalibrated sense of humor that I brought from America). If you agree with me that Self has gone wrong in how he asks questions about science and which answers he expects, well, perhaps you will find some answers or new ideas for yourself.

    by Seth Zenz at January 16, 2015 01:48 PM

    Jon Butterworth - Life and Physics

    A follow up on research impact and the REF

    Anyone connected with UK academia, who follows news about it, or indeed who has met a UK academic socially over the last couple of years, will probably have heard about the Research Excellence Framework (REF). All UK universities had their research assessed in a long-drawn-out process which will influence how billions of pounds of research funding are distributed. Similar exercises take place every six or so years.

    The results are not a one-dimensional league table, which is good; so everyone has their favourite way of combining them to make their own league table, which is entertaining. My favourite is “research intensity” (see below, from the THE):

    REF results ranked by “research intensity” (chart from the THE).

    A new element in the REF this time was the inclusion of some assessment of “Impact”. This (like the REF itself) is far from universally popular. Personally I’m relatively supportive of this element in principle though, as I wrote here. Essentially, while I don’t think all academic research should be driven by predictions of its impact beyond academia, I do think that it should be part of the mix. The research activity of any major physics department should, even serendipitously, have some impact outside of the academic discipline (as well as lots in it), and it is worth collecting and assessing some evidence for this. Your mileage in other subjects may vary.

    I also considered whether my Guardian blog might constitute a form of impact-beyond-academia for the discovery of the Higgs boson and the other work of the Large Hadron Collider, and I even asked readers for evidence and help (thanks!). In the end we did submit a “case study” on this. There is a summary of the case that was submitted here. The studies generally have more hard evidence than is given in that précis, but you get the idea.

    Similar summaries of all UCL’s impact case studies are given here. Enjoy…


    Filed under: Physics, Politics, Science, Science Policy, Writing Tagged: Guardian, Higgs, LHC, UCL

    by Jon Butterworth at January 16, 2015 08:30 AM

    January 15, 2015

    Andrew Jaffe - Leaves on the Line

    Oscillators, Integrals, and Bugs

    [Update: The bug seems fixed in the latest version, 10.0.2.]

    I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

    It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

    Unfortunately, Wolfram Inc.’s Mathematica (the most recent version 10.0.1) disagrees:

    Screenshot: Mathematica 10.0.1 failing to return the finite (zero) answer for a Gauss–Hermite integral.

    The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
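
    For anyone who wants to check the mathematics independently of Mathematica, here is a minimal sketch using SymPy. The exact integrand is my guess at what the screenshot computes: the harmonic-oscillator position matrix element between the n = 7 and n = 16 states, which reduces to the integral of the two Hermite polynomials times y times exp(-y²) and should vanish because the indices differ by more than one. The second calculation mirrors the term-by-term workaround described above.

        from sympy import symbols, hermite, exp, integrate, expand, oo

        y = symbols('y', real=True)
        m, n = 7, 16

        # Product of two (physicists') Hermite polynomials, a factor of y,
        # and the Gaussian weight contributed by the two wavefunctions.
        integrand = hermite(m, y) * y * hermite(n, y) * exp(-y**2)

        # Direct integration of the whole product over the real line.
        direct = integrate(integrand, (y, -oo, oo))

        # Workaround: expand into monomials and integrate each a*y^k * exp(-y^2)
        # term separately, then add the pieces back up.
        terms = expand(hermite(m, y) * y * hermite(n, y)).as_ordered_terms()
        term_by_term = sum(integrate(t * exp(-y**2), (y, -oo, oo)) for t in terms)

        print(direct, term_by_term)   # both should print 0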

    I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?

    by Andrew at January 15, 2015 04:40 PM

    January 14, 2015

    ATLAS Experiment

    The Ties That Bind

    A few weeks ago, I found myself in one of the most beautiful places on earth: wedged between a metallic cable tray and a row of dusty cooling pipes at the bottom of Sector 13 of the ATLAS Detector at CERN. My wrists were scratched from hard plastic cable ties, I had an industrial vacuum strapped to my back, and my only light came from a battery powered LED fastened to the front of my helmet. It was beautiful.

    Cleaning the ATLAS detector

    Beneath the ATLAS detector – note the well-placed cable ties. IMAGE: Claudia Marcelloni, ATLAS Experiment © 2014 CERN.

    The ATLAS Detector is one of the largest, most complex scientific instruments ever constructed. It is 46 meters long, 26 meters high, and sits 80 metres underground, completely surrounding one of four points on the Large Hadron Collider (LHC), where proton beams are brought together to collide at high energies.  It is designed to capture remnants of the collisions, which appear in the form of particle tracks and energy deposits in its active components. Information from these remnants allows us to reconstruct properties of the collisions and, in doing so, to improve our understanding of the basic building blocks and forces of nature.

    On that particular day, a few dozen of my colleagues and I were weaving our way through the detector, removing dirt and stray objects that had accumulated during the previous two years. The LHC had been shut down during that time, in order to upgrade the accelerator and prepare its detectors for proton collisions at higher energy. ATLAS is constructed around a set of very large, powerful magnets, designed to curve charged particles coming from the collisions, allowing us to precisely measure their momenta. Any metallic objects left in the detector risk turning into fast-moving projectiles when the magnets are powered up, so it was important for us to do a good job.
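
    As a rough sketch of how that measurement works (my own toy numbers, not ATLAS software, and assuming a solenoid field of roughly 2 tesla), the transverse momentum of a singly charged particle follows from the measured radius of curvature of its track as pT [GeV/c] ≈ 0.3 · B [T] · R [m]:

        def pt_from_curvature(radius_m: float, b_field_t: float = 2.0) -> float:
            """Transverse momentum in GeV/c of a singly charged particle
            bending with radius radius_m in a field of b_field_t tesla."""
            return 0.3 * b_field_t * radius_m

        for radius in (1.0, 10.0, 100.0):   # metres
            print(f"R = {radius:6.1f} m  ->  pT ~ {pt_from_curvature(radius):6.1f} GeV/c")

    The straighter the track, the larger the radius and hence the momentum, which is one reason the micron-level positional precision mentioned below matters so much.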

    ATLAS Big Wheel

    ATLAS is divided into 16 phi sectors with #13 at the bottom. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN

    The significance of the task, however, did not prevent my eyes from taking in the wonder of the beauty around me. ATLAS is shaped somewhat like a large barrel. For reference in construction, software, and physics analysis, we divide the angle around the beam axis, phi, into 16 sectors. Sector 13 is the lucky sector at the very bottom of the detector, which is where I found myself that morning. And I was right at ground zero, directly under the point of collision.

    To get to that spot, I had to pass through a myriad of detector hardware, electronics, cables, and cooling pipes. One of the most striking aspects of the scenery is the ironic juxtaposition of construction-grade machinery, including built-in ladders and scaffolding, with delicate, highly sensitive detector components, some of which make positional measurements to micron (thousandth of a millimetre) precision. All of this is held in place by kilometres of cable trays, fixings, and what appear to be millions of plastic (sometimes sharp) cable ties.

    Inside the ATLAS detector

    Scaffolding and ladder mounted inside the precision muon spectrometer. IMAGE: Steven Goldfarb, ATLAS Experiment © 2014 CERN.

    The real beauty lies not in the parts themselves, but rather in the magnificent stories of international cooperation and collaboration that they tell. The cable tie that scratched my wrist secures a cable that was installed by an Iranian student from a Canadian university. Its purpose is to carry data from electronics designed in Germany, attached to a detector built in the USA and installed by a Russian technician.  On the other end, a Japanese readout system brings the data to a trigger designed in Australia, following the plans of a Moroccan scientist. The filtered data is processed by software written in Sweden following the plans of a French physicist at a Dutch laboratory, and then distributed by grid middleware designed by a Brazilian student at CERN. This allows the data to be analyzed by a Chinese physicist in Argentina working in a group chaired by an Israeli researcher and overseen by a British coordinator.  And what about the cable tie?  No idea, but that doesn’t take away from its beauty.

    There are 178 institutions from 38 different countries participating in the ATLAS Experiment, which is only the beginning.  When one considers the international make-up of each of the institutions, it would be safe to claim that well over 100 countries from all corners of the globe are represented in the collaboration.  While this rich diversity is a wonderful story, the real beauty lies in the commonality.

    All of the scientists, with their diverse social, cultural and linguistic backgrounds, share a common goal: a commitment to the success of the experiment. The plastic cable tie might scratch, but it is tight and well placed; its cable is held correctly and the data are delivered, as expected. This enormous, complex enterprise works because the researchers who built it are driven by the essential nature of the mission: to improve our understanding of the world we live in. We share a common dedication to the future, we know it depends on research like this, and we are thrilled to be a part of it.

    ATLAS Collaboration

    ATLAS Collaboration members in discussion. What discoveries are in store this year?  IMAGE: Claudia Marcelloni, ATLAS Experiment © 2008 CERN.

    This spring, the LHC will restart at an energy level higher than any accelerator has ever achieved before. This will allow the researchers from ATLAS, as well as the thousands of other physicists from partner experiments sharing the accelerator, to explore the fundamental components of our universe in more detail than ever before. These scientists share a common dream of discovery that will manifest itself in the excitement of the coming months. Whether or not that discovery comes this year or some time in the future, Sector 13 of the ATLAS detector reflects all the beauty of that dream.


    Steven Goldfarb is a physicist from the University of Michigan working on the ATLAS Experiment at CERN. He currently serves as the Outreach & Education Coordinator, a member of the ATLAS Muon Project, and an active host for ATLAS Virtual Visits. Send a note to info@atlas-live.ch and he will happily host a visit from your school.

    by Steve at January 14, 2015 05:10 PM



    Last updated:
    January 31, 2015 12:51 PM
    All times are UTC.

    Suggest a blog:
    planet@teilchen.at