Particle Physics Planet


November 27, 2014

Christian P. Robert - xi'an's og

Le Monde puzzle [#887quater]

And yet another resolution of this combinatorics Le Monde mathematical puzzle: that puzzle puzzled many more people than usual! This solution is by Marco F, using a travelling salesman representation and existing TSP software.

N is a golden number if the sequence {1,2,…,N} can be reordered so that the sum of any consecutive pair is a perfect square. What are the golden numbers between 1 and 25?

For instance, take n=199: first calculate the “friends” of each number, i.e. the other numbers it can be paired with so that the sum is a perfect square, and save them in a symmetric square matrix:

# friends[[i]]: the j's with i + j a perfect square (construction not shown in the original post)
friends <- lapply(1:199, function(i) setdiff(which(sqrt(i + 1:199) %% 1 == 0), i))
m1 <- matrix(Inf, nrow=199, ncol=199)
diag(m1) <- 0
for (i in 1:199) m1[i, friends[[i]]] <- 1

Export the distance matrix to a file (in TSPlib format):

library(TSP)
tsp <- TSP(m1)
tsp
image(tsp)
write_TSPLIB(tsp, "f199.TSPLIB")

And use a solver to obtain the results. The best TSP solver is Concorde, and there are online versions where you can submit jobs. The solver returns the tour as a list of edges:

0 2 1000000
2 96 1000000
96 191 1000000
191 168 1000000
  ...

The numbers of the solution are in the second column (2, 96, 191, 168…). And they are 0-indexed, so you have to add 1 to them:

3 97 192 169 155 101 188 136 120 49 176 148 108 181 143 113 112 84 37 63
18 31 33 88 168 193 96 160 129 127 162 199 90 79 177 147 78 22 122 167
194 130 39 157 99 190 134 91 198 58 23 41 128 196 60 21 100 189 172 152
73 183 106 38 131 125 164 197 59 110 146 178 111 145 80 20 61 135 121 75
6 94 195 166 123 133 156 69 52 144 81 40 9 72 184 12 24 57 87 82
62 19 45 76 180 109 116 173 151 74 26 95 161 163 126 43 153 171 54 27
117 139 30 70 11 89 107 118 138 186 103 66 159 165 124 132 93 28 8 17
32 4 5 44 77 179 182 142 83 86 14 50 175 114 55 141 115 29 92 104
185 71 10 15 34 2 7 42 154 170 191 98 158 67 102 187 137 119 25 56
65 35 46 150 174 51 13 68 53 47 149 140 85 36 64 105 16 48
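As a quick sanity check (a sketch of mine, not part of the original post), every consecutive pair in the sequence should sum to a perfect square; with the values read into a vector tour:

tour <- c(3, 97, 192, 169, 155)   # first few entries only, for illustration
all(sqrt(head(tour, -1) + tail(tour, -1)) %% 1 == 0)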

Filed under: Books, Kids, R, Statistics, University life Tagged: Le Monde, mathematical puzzle, travelling salesman Concorde

by xi'an at November 27, 2014 11:14 PM

Sean Carroll - Preposterous Universe

Thanksgiving

This year we give thanks for a technique that is central to both physics and mathematics: the Fourier transform. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, and Landauer’s Principle.)

Let’s say you want to locate a point in space — for simplicity, on a two-dimensional plane. You could choose a coordinate system (x, y), and then specify the values of those coordinates to pick out your point: (x, y) = (1, 3).

[Image: rotating the coordinate axes]

But someone else might want to locate the same point, but they want to use a different coordinate system. That’s fine; points are real, but coordinate systems are just convenient fictions. So your friend uses coordinates (u, v) instead of (x, y). Fortunately, you know the relationship between the two systems: in this case, it’s u = y+x, v = y-x. The new coordinates are rotated (and scaled) with respect to the old ones, and now the point is represented as (u, v) = (4, 2).

Fourier transforms are just a fancy version of changes of coordinates. The difference is that, instead of coordinates on a two-dimensional space, we’re talking about coordinates on an infinite-dimensional space: the space of all functions. (And for technical reasons, Fourier transforms naturally live in the world of complex functions, where the value of the function at any point is a complex number.)

Think of it this way. To specify some function f(x), we give the value of the function f for every value of the variable x. In principle, an infinite number of numbers. But deep down, it’s not that different from giving the location of our point in the plane, which was just two numbers. We can certainly imagine taking the information contained in f(x) and expressing it in a different way, by “rotating the axes.”

That’s what a Fourier transform is. It’s a way of specifying a function that, instead of telling you the value of the function at each point, tells you the amount of variation at each wavelength. Just as we have a formula for switching between (u, v) and (x, y), there are formulas for switching between a function f(x) and its Fourier transform f(ω):

f(\omega) = \frac{1}{\sqrt{2\pi}} \int dx f(x) e^{-i\omega x}
f(x) = \frac{1}{\sqrt{2\pi}} \int d\omega f(\omega) e^{i\omega x}.

Absorbing those formulas isn’t necessary to get the basic idea. If the function itself looks like a sine wave, it has a specific wavelength, and the Fourier transform is just a delta function (infinity at that particular wavelength, zero everywhere else). If the function is periodic but a bit more complicated, it might have just a few Fourier components.
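A standard worked example (mine, not from the post), using the convention above: a Gaussian transforms into another Gaussian,

f(x) = e^{-x^2/2\sigma^2} \quad\Longrightarrow\quad f(\omega) = \frac{1}{\sqrt{2\pi}} \int dx\, e^{-x^2/2\sigma^2} e^{-i\omega x} = \sigma\, e^{-\sigma^2\omega^2/2},

so a function that is narrow in x (small σ) is spread over a wide range of frequencies, and vice versa.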

MIT researchers showing how sine waves can combine to make a square-ish wave.

In general, the Fourier transform f(ω) gives you “the amount of the original function that is periodic with period 2π/ω.” This is sometimes called the “frequency domain,” since there are obvious applications to signal processing, where we might want to take a signal that has an intensity that varies with time and pick out the relative strength of different frequencies. (Your eyes and ears do this automatically, when they decompose light into colors and sound into pitches. They’re just taking Fourier transforms.) Frequency, of course, is the inverse of wavelength, so it’s equally good to think of the Fourier transform as describing the “length domain.” A cosmologist who studies the large-scale distribution of galaxies will naturally take the Fourier transform of their positions to construct the power spectrum, revealing how much structure there is at different scales.
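As a toy illustration (my own sketch, not from the post): in R, the discrete version of this frequency-domain picture is a single call to fft, and the squared modulus of the output is the power at each frequency.

t <- (0:511)/512                          # one second of "signal", sampled 512 times
x <- sin(2*pi*10*t) + 0.5*sin(2*pi*25*t)  # two tones, at 10 and 25 cycles per second
power <- Mod(fft(x))^2                    # power spectrum: peaks at the two input frequencies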

[Image: FFT example]

To my (biased) way of thinking, where Fourier transforms really come into their own is in quantum field theory. QFT tells us that the world is fundamentally made of waves, not particles, and it is extremely convenient to think about those waves by taking their Fourier transforms. (It is literally one of the first things one is told to do in any introduction to QFT.)

But it’s not just convenient, it’s a worldview-changing move. One way of characterizing Ken Wilson’s momentous achievement is to say “physics is organized by length scale.” Phenomena at high masses or energies are associated with short wavelengths, where our low-energy long-wavelength instruments cannot probe. (We need giant machines like the Large Hadron Collider to create high energies, because what we are really curious about are short distances.) But we can construct a perfectly good effective theory of just the wavelengths longer than a certain size — whatever size it is that our theoretical picture can describe. As physics progresses, we bring smaller and smaller length scales under the umbrella of our understanding.

Without Fourier transforms, this entire way of thinking would be inaccessible. We should be very thankful for them — as long as we use them wisely.

Credit: xkcd.

Note that Joseph Fourier, inventor of the transform, is not the same as Charles Fourier, utopian philosopher. Joseph, in addition to his work in math and physics, invented the idea of the greenhouse effect. Sadly that’s not something we should be thankful for right now.

by Sean Carroll at November 27, 2014 09:38 PM

Peter Coles - In the Dark

At a Lecture

Since mistakes are inevitable, I can easily be taken
for a man standing before you in this room filled
with yourselves. Yet in about an hour
this will be corrected, at your and at my expense,
and the place will be reclaimed by elemental particles
free from the rigidity of a particular human shape
or type of assembly. Some particles are still free. It’s not all dust.

So my unwillingness to admit it’s I
facing you now, or the other way around,
has less to do with my modesty or solipsism
than with my respect for the premises’ instant future,
for those afore-mentioned free-floating particles
settling upon the shining surface
of my brain. Inaccessible to a wet cloth eager to wipe them off.

The most interesting thing about emptiness
is that it is preceded by fullness.
The first to understand this were, I believe, the Greek
gods, whose forte indeed was absence.
Regard, then, yourselves as rehearsing perhaps for the divine encore,
with me playing obviously to the gallery.
We all act out of vanity. But I am in a hurry.

Once you know the future, you can make it come
earlier. The way it’s done by statues or by one’s furniture.
Self-effacement is not a virtue
but a necessity, recognised most often
toward evening. Though numerically it is easier
not to be me than not to be you. As the swan confessed
to the lake: I don’t like myself. But you are welcome to my reflection.

by Joseph Brodsky (1940-1996)

 

 


by telescoper at November 27, 2014 06:30 PM

Clifford V. Johnson - Asymptotia

Monday’s Quarry
[Sketch: man on the Red Line subway, 24 November 2014]

Monday's quick grab on the subway on the way to work. I claim that one of the most useful aspects of the smartphone is its facility for holding people in predictable poses in order to be sketched. He had a very elegant face and head, and was engrossed in his game, and I was done reviewing my lecture notes on scattering of light, so I went for it. I was able to get out my notepad and a pen and get a good fast [...] Click to continue reading this post

by Clifford at November 27, 2014 12:53 AM

November 26, 2014

Christian P. Robert - xi'an's og

Le Monde puzzle [#887ter]

Here is a graph solution, proposed by John Shonder, to the recent combinatorics Le Monde mathematical puzzle:

N is a golden number if the sequence {1,2,…,N} can be reordered so that the sum of any consecutive pair is a perfect square. What are the golden numbers between 1 and 25?

Consider an undirected graph GN with N vertices labelled 1 through N. Draw an edge between vertices i and j if and only if i + j is a perfect square. Then N is golden if GN contains a Hamiltonian path — that is, if there is a connected path that visits all of the vertices exactly once.

[Image: the graph G25]

I wrote a program (using Mathematica, though I’m sure there must be an R library with similar functionality) that builds up G sequentially and checks at each step whether the graph contains a Hamiltonian path. The program starts with G1 — a single vertex and no edges. Then it adds vertex 2. G2 has no edges, so 2 isn’t golden.

Adding vertex 3, there is an edge between 1 and 3. But vertex 2 is unconnected, so we’re still not golden.

The results are identical to yours, but I imagine my program runs a bit faster. Mathematica contains a built-in function to test for the existence of a Hamiltonian path.
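Since the post wonders about an R equivalent, here is a minimal sketch in base R (my own, not John Shonder’s Mathematica program; the helper names is.square, golden and extend are made up) that builds the same graph as adjacency lists and looks for a Hamiltonian path by backtracking:

is.square <- function(k) sqrt(k) %% 1 == 0
golden <- function(N) {
  # friends[[i]]: vertices j adjacent to i, i.e. those with i + j a perfect square
  friends <- lapply(1:N, function(i) setdiff(which(is.square(i + 1:N)), i))
  extend <- function(path) {
    if (length(path) == N) return(TRUE)
    for (j in setdiff(friends[[path[length(path)]]], path))
      if (extend(c(path, j))) return(TRUE)
    FALSE
  }
  any(sapply(1:N, function(v) extend(v)))
}
which(sapply(1:25, golden))  # golden numbers between 1 and 25 (N = 1 counts trivially here)

Backtracking is exponential in the worst case, but these graphs are sparse enough that small N, say up to a few dozen, poses no problem.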

[Image: the graph G36]

Some of the graphs are interesting. I include representations of G25 and G36. Note that G36 contains a Hamiltonian cycle, so you could arrange the integers 1 … 36 on a roulette wheel such that each consecutive pair adds to a perfect square.

A somewhat similar problem:

Call N a “leaden” number if the sequence {1,2, …, N} can be reordered so that the sum of any consecutive pair is a prime number. What are the leaden numbers between 1 and 100? What about an arrangement such that the absolute value of the difference between any two consecutive numbers is prime?

[The determination of the leaden numbers was discussed in a previous Le Monde puzzle post.]


Filed under: Books, Kids, Statistics, University life Tagged: graph theory, Hamiltonian path, Le Monde, Mathematica, mathematical puzzle

by xi'an at November 26, 2014 11:14 PM

Emily Lakdawalla - The Planetary Society Blog

Some Recent Views of Mars from Hubble
Ted Stryk showcases some of his processed versions of recent Hubble Space Telescope views of Mars.

November 26, 2014 10:50 PM

arXiv blog

The Same Name Puzzle: Twitter Users Are More Likely to Follow Others With The Same First Name But Nobody Knows Why

If you use social networks to follow other people who share your first name, you’re not alone. The question is why.

November 26, 2014 09:23 PM

Quantum Diaries

Graduating, part 2: Final Thesis Revisions

The doorway to the registrar’s office where the final thesis check takes place

I took an entire month between defending my thesis and depositing it with the grad school. During that month, I mostly revised my thesis, but also I took care of a bunch of logistical things I had been putting off until after the defense: subletting the apartment, selling the car, engaging movers, starting to pack… and of course putting comments into the thesis from the committee. I wrote back to my (now current) new boss who said we should chat again after I “come up for air” (which is a pretty accurate way of describing it). I went grocery shopping, and for the first time in months it was fun to walk around the store imagining and planning the things I could make in my kitchen. I had spare creative energy again!

Partly I needed a full month to revise the thesis because I was making changes to the analysis within the thesis right up to the day before I defended, and I changed the wording on the concluding sentences literally 20 minutes before I presented. I didn’t have time to polish the writing because the analysis was changing so much. The professor who gave me the most detailed comments was justifiably annoyed that he didn’t have sufficient time to read the whole dissertation before the defense. It worked out in the end, because the time he needed to finish reading was a time when I didn’t want to think about my thesis in any way. I even left town and visited friends in Chicago, just to break up the routine that had become so stressful. There’s nothing quite as nice as waking up to a cooked breakfast when you’ve forgotten that cooked breakfasts are an option.

There were still thesis revisions to implement. Some major comments reflected the fact that, while some chapters had been edited within a peer group, no one had read it cover-to-cover until after the defense. The professor who had the most detailed comments wrote a 12-page email detailing his suggestions, many of which were word substitutions and thus easy to implement. Apparently I have some tics in my formal writing style.

I use slightly too many (~1.2) semicolons per page of text; this reflects my inclination to use compound sentences but also avoid parentheses in formal writing. As my high school teacher, Perryman, taught me: if you have to use parentheses you’re not being confidently declarative, and if you ever want to use nested parentheses in a formal setting, figure out what you really want to say and just say it! (Subtext: or figure out why you don’t want to say it, and don’t say it. No amount of parentheses can make a statement disappear.) Anyway, I’d rather have too many semicolons than too many parentheses; I’d rather be seen as too formal than too tentative. It’s the same argument, to me, that I’d rather wear too much black than too much pink. So, many of the semicolons stayed in despite the comments. Somehow, in the thesis haze, I didn’t think of the option of many simple single-clause sentences. Single-clause sentences are hard.

I also used the word “setup” over 100 times as a catch-all word to encompass all of the following: apparatus, configuration, software, procedure, hypothesis. I hadn’t noticed that, and I have no good reason for it, so now my thesis doesn’t use the word “setup” at all. I think. And if it does, it’s too late to change it now!

And of course there was the matter of completing the concluding paragraph so it matched the conclusion I presented in my defense seminar. That took some work. I also tried to produce some numbers to complete the description of my analysis in more detail than I needed for the defense seminar, just for archival completeness. But by the time I had fixed everything else, it was only a few hours until my deposit margin-check appointment (and also 2:30am), so I gave up on getting those numbers.

The deposit appointment was all of 5 minutes long, but marked the line between “almost done” and “DONE!!!”. The reviewing administrator realized this. She shook my hand three times in those 5 minutes. When it was done, I went outside and there were birds singing. I bought celebratory coffee and a new Wisconsin shirt. And then started packing up my apartment for the movers arriving the next morning.

During that month of re-entering society,  I had some weird conversations which reminded me how isolated I had been during the thesis. A friend who used to work in our office had started her own business, but I’d only had time to ask her about it once or perhaps twice. When we had a bit of time to catch up more, I asked how it had been during the last few months, and she replied that it had been a year. A year. It just went by and I didn’t notice, without the regular office interactions.

I’d gotten into a groove of watching a couple of episodes each night of long-running TV shows with emotionally predictable episodic plot lines. Star Trek and various murder mysteries were big. The last series was “House, MD” with Hugh Laurie. By coincidence, when I defended my thesis and my stress level started deflating, I was almost exactly at the point in the series where they ran out of mysteries from the original book it was based on, and started going more into a soap-opera style character drama. By the time I wasn’t interested in the soap opera aspects anymore, it was time to start reengaging with my real-life friends.

A few days after I moved away from Madison, when I was staying with my parents, I picked up my high school routine of reading the local paper over breakfast, starting with the comics, then local editorials. I found (or rather, my dad found) myself criticizing the writing from the point of view of a dissertator. It takes more than a few days to get out of thesis-writing mode. The little nagging conscience doesn’t go away, still telling me that the difference between ok writing and great writing is important, more so now than at any point so far in my career. For the last edits of a PhD, it might be important to criticize at that level of detail. But for a local paper, pretty much anything is useful to the community.

At lunch Saturday in a little restaurant in the medieval part of the Italian village of Assergi, I found the antidote. When I can’t read any of the articles and posters on the walls, when I can’t carry on a conversation with more than 3-word sentences, it doesn’t matter anymore if the paragraphs have a clear and concise topic sentence. I need simple text. I’m happy if I can understand the general meaning. The humility of starting over again with Italian is the antidote for the anxiety of a thesis. It’s ok to look like a fool in some ways, because I am a certified non-fool in one small part of physics.

It’s not perfect of course: there’s still a lot of anxiety inherent in living in a country without speaking the language (well enough to get by without English-speaking help). I’ll write more about the cultural transition in another post, since I have so many posts to catch up on from while I was in the thesis-hole, and this post is definitely long enough. But for now, the thesis is over.

by Laura Gladstone at November 26, 2014 08:09 PM

Emily Lakdawalla - The Planetary Society Blog

Join me in Washington, D.C. for a post-Thanksgiving Celebration of Planetary Exploration
See Bill Nye, Europa scientist Kevin Hand, and Mars scientist Michael Meyer speak at a special event on Capitol Hill on December 2nd.

November 26, 2014 05:54 PM

Emily Lakdawalla - The Planetary Society Blog

United Launch Alliance Answers Burning Questions about Orion's Rocket
When Orion launches next week, you may notice something alarming: The spacecraft's rocket sort of catches itself on fire. But not to worry, says United Launch Alliance.

November 26, 2014 05:28 PM

Quantum Diaries

Scintillator extruded at Fermilab detects particles around the globe

This article appeared in Fermilab Today on Nov. 26, 2014

The plastic scintillator extrusion line, shown here, produces detector material for export to experiments around the world. Photo: Reidar Hahn

Small, clear pellets of polystyrene can do a lot. They can help measure cosmic muons at the Pierre Auger Observatory, search for CP violation at KEK in Japan or observe neutrino oscillation at Fermilab. But in order to do any of these they have to go through Lab 5, located in the Fermilab Village, where the Scintillation Detector Development Group, in collaboration with the Northern Illinois Center for Accelerator and Detector Design (NICADD), manufactures the exclusive source of extruded plastic scintillator.

Like vinyl siding on a house, long thin blocks of plastic scintillator cover the surfaces of certain particle detectors. The plastic absorbs energy from collisions and releases it as measurable flashes of light. Fermilab’s Alan Bross and Anna Pla-Dalmau first partnered with local vendors to develop the concept and produce cost-effective scintillator material for the MINOS neutrino oscillation experiment. Later, with NIU’s Gerald Blazey, they built the in-house facility that has now exported high-quality extruded scintillator to experiments worldwide.

“It was clear that extruded scintillator would have a big impact on large neutrino detectors,” Bross said, “but its widespread application was not foreseen.”

Industrially manufactured polystyrene scintillators can be costly — requiring a labor-intensive process of casting purified materials individually in molds that have to be cleaned constantly. Producing the number of pieces needed for large-scale projects such as MINOS through casting would have been prohibitively expensive.

Extrusion, in contrast, presses melted plastic pellets through a die to create a continuous noodle of scintillator (typically about four centimeters wide by two centimeters tall) at a much lower cost. The first step in the production line mixes into the melted plastic two additives that enhance polystyrene’s natural scintillating property. As the material reaches the die, it receives a white, highly reflective coating that holds in scintillation light. Two cold water tanks respectively bathe and shower the scintillator strip before it is cool enough to handle. A puller controls its speed, and a robotic saw finally cuts it to length. The final product contains either a groove or a hole meant for a wavelength-shifting fiber that captures the scintillation light and sends the signal to electronics in the most useful form possible.

Bross had been working on various aspects of the scintillator cost problem since 1989, and he and Pla-Dalmau successfully extruded experiment-quality plastic scintillator with their vendors just in time to make MINOS a reality. In 2003, NICADD purchased and located at Lab 5 many of the machines needed to form an in-house production line.

“The investment made by Blazey and NICADD opened extruded scintillators to numerous experiments,” Pla-Dalmau said. “Without this contribution from NIU, who knows if this equipment would have ever been available to Fermilab and the rest of the physics community?”

Blazey agreed that collaboration was an important part of the plastic scintillator development.

“Together the two institutions had the capacity to build the resources necessary to develop state-of-the-art scintillator detector elements for numerous experiments inside and outside high-energy physics,” Blazey said. “The two institutions remain strong collaborators.”

Between their other responsibilities at Fermilab, the SDD group continues to study ways to make their scintillator more efficient. One task ahead, according to Bross, is to work modern, glass wavelength-shifting fibers into their final product.

“Incorporation of the fibers into the extrusions has always been a tedious part of the process,” he said. “We would like to change that.”

Troy Rummler

by Fermilab at November 26, 2014 04:10 PM

Peter Coles - In the Dark

Quantum Technologies at Sussex

Some good news finally arrived today; we had been hoping to hear it since September. It involves several physicists from the Atomic, Molecular and Optical (AMO) Group of the Department of Physics & Astronomy in the School of Mathematical and Physical Sciences here at the University of Sussex who bid to participate in a major investment (of ~£270M) in quantum technology overseen by the Engineering and Physical Sciences Research Council (EPSRC). Today we learned that Sussex physicists were successful in their applications and in fact will participate in two of the four new Quantum Technology “hubs” now being set up. One of the hubs is led by the University of Oxford and the other by the University of Birmingham. We will be starting work on these projects on 1st December 2014 (i.e. next Monday) and the initial funding is for five years. Congratulations to all those involved, not just at Sussex but also in those other institutions participating in the new programme.

For a relatively small Department this is an outstanding achievement for Sussex, and the funding gained will help us enormously with our strategy of expanding laboratory-based experimental physics on the University of Sussex campus. Since I arrived here last year it has been a priority for the School to increase and diversify its research portfolio, both to enhance the range and quality of our research itself and to allow us to teach a wider range of specialist topics at both undergraduate and postgraduate level. This particular subject is also one in which we hope to work closely with local companies, as quantum technology is likely to be a key area for growth over the next few years.

I’m very excited by all this, because it represents a successful first step towards the ambitious goals the Department has set and it opens up a pathway for further exciting developments I hope to be able to post about very soon.

To celebrate, here’s a gratuitous picture of a laser experiment:

[Image: laser experiment]

You can find more information about all of the Quantum Technology hubs here.

The text of the official University of Sussex  press release follows:

Sussex scientists have been awarded £5.5 million to develop devices that could radically change how we measure time, navigate our world and solve seemingly impossible mathematical equations.

The grants, received by members of the University’s Atomic, Molecular and Optical Physics (AMO) research group, represent part of a £270 million UK government investment announced today (26 November) to convert quantum physics research into commercial products.

Quantum technology is the applied field of quantum theory. It includes such phenomena as “quantum entanglement”, the idea that objects are not independent if they have interacted with each other or come into being through the same process, and that changing one will also change the other, no matter how far apart they are.

Members of the AMO group have become part of two major national quantum centres: the UK Quantum Technology Hub on Networked Quantum Information Technologies and the UK Quantum Technology Hub for Sensors and Metrology. These centres bring together universities and industry to develop and construct quantum technologies.

The award from the Engineering and Physical Sciences Research Council (EPSRC) will help to fund several Sussex research projects:

  • Dr Jacob Dunningham will be developing a theory to understand how remote objects can be detected with exquisite precision by making use of a network of sensors linked by quantum entanglement.
  • Dr Winfried Hensinger, as part of one hub, will develop the quantum processor microchip architecture and a new technique of quantum processing using microwave radiation to enable the construction of a large-scale “super-fast” quantum computer. As part of the other hub, he will develop powerful portable sensors able to detect magnetic fields with unprecedented accuracy utilizing a new generation of microchips capable of holding arrays of individual charged atoms.
  • Dr Alessia Pasquazi will develop miniature, ultra-fast, photonic sources of light that form the heart of a new generation of quantum sensors and navigation devices.
  • Dr Marco Peccianti will shrink to the size of a shoe box an “optical frequency comb”, a highly accurate clock currently found only in state-of-the-art laboratories.
  • Prof Barry Garraway will design new rotation sensors for compact navigation devices using atom-chip technology.
  • Dr Matthias Keller will develop a network connecting several quantum processors through the exchange of single photons, resulting in a new version of the internet, the so-called ‘quantum internet’.

In response to the funding news, Professor Peter Coles, Head of the School of Mathematical and Physical Sciences, said: “Quantum sensors offer amazing possibilities for smaller and lighter devices with extraordinary precision. As a consequence, quantum theory promises revolutionary technological applications in computing, measurement, navigation, and security.”

Professor Michael Davies, Pro-Vice-Chancellor for Research, said: “This new research programme will consolidate the reputation of the University of Sussex as one of the world-leading centres for the development of ground-breaking quantum technologies.”

The research will be supplemented by a significant Sussex investment and will make use of the world-leading multi-million pound quantum technology laboratories located at the University.

Professor Coles added: “Our pioneering ‘MSc in Frontiers of Quantum Technology’ program along with numerous PhD positions will provide training for a new generation of researchers and developers to be employed in the emerging quantum technology sector.”

Greg Clark, Minister of State for Universities, Science and Cities, said: “This exciting new Quantum Hubs network will push the boundaries of knowledge and exploit new technologies, to the benefit of healthcare, communications and security.

“Today’s announcement is another example of the government’s recognition of the UK’s science base and its critical contribution to our sustained economic growth”.


by telescoper at November 26, 2014 01:25 PM

Christian P. Robert - xi'an's og

Methodological developments in evolutionary genomics [3-year postdoc in Montpellier]

[Here is a call for a post-doctoral position in Montpellier, South of France, not Montpelier, Vermont!, in a population genetics group with whom I am working. Highly recommended if you are currently looking for a postdoc!]

Three-year post-doctoral position at the Institute of Computational Biology (IBC), Montpellier (France) :
Methodological developments in evolutionary genomics.

One young investigator position opens immediately at the Institute for Computational Biology (IBC) of Montpellier (France) to work on the development of innovative inference methods and software in population genomics or phylogenetics to analyze large-scale genomic data in the fields of health, agronomy and environment (Work Package 2 « evolutionary genomics » of the IBC). The candidate will develop their own research on some of the following topics: selective processes, demographic history, spatial genetic processes, very large phylogeny reconstruction, gene/species tree reconciliation, using maximum likelihood, Bayesian and simulation-based inference. We are seeking a candidate with a strong background in mathematical and computational evolutionary biology, with an interest in applications and software development. The successful candidate will work on their own project, built in collaboration with researchers involved in the WP2 project and working at the IBC labs (AGAP, CBGP, ISEM, I3M, LIRMM, MIVEGEC).

IBC hires young investigators, typically with a PhD plus some post-doc experience, a strong publication record, strong communication abilities, and a taste for multidisciplinary research. Working full-time at IBC, these young researchers will play a key role in Institute life. Most of their time will be devoted to scientific projects. In addition, they are expected to actively participate in the coordination of workpackages, in the hosting of foreign researchers and in the organization of seminars and events (summer schools, conferences…). In exchange, these young researchers will benefit from an exceptional environment thanks to the presence of numerous leading international researchers, not to mention significant autonomy for their work. Montpellier hosts one of the most vibrant communities of biodiversity research in Europe, with several research centers of excellence in the field. This position is open for up to 3 years with a salary well above the French post-doc standards. The starting date is open to discussion.

 The application deadline is January 31, 2015.

Living in Montpellier: http://www.agropolis.org/english/guide/index.html

 

Contacts at WP2 « Evolutionary Genetics » :

 

Jean-Michel Marin : http://www.math.univ-montp2.fr/~marin/

François Rousset : http://www.isem.univ-montp2.fr/recherche/teams/evolutionary-genetics/staff/roussetfrancois/?lang=en

Vincent Ranwez : https://sites.google.com/site/ranwez/

Olivier Gascuel : http://www.lirmm.fr/~gascuel/

Submit my application : http://www.ibc-montpellier.fr/open-positions/young-investigators#wp2-evolution


Filed under: pictures, Statistics, Travel, University life, Wines Tagged: academic position, Bayesian statistics, biodiversity, computational biology, France, Institut de Biologie Computationelle, Montpellier, phylogenetic models, position, postdoctoral position

by xi'an at November 26, 2014 01:18 PM

Symmetrybreaking - Fermilab/SLAC

Needed: citizen scientists for Higgs hunt

A new project asks citizen scientists for help finding unknown Higgs boson decays in LHC data from the ATLAS experiment.

Just days after the CMS experiment at the Large Hadron Collider released a large batch of data to the public, the ATLAS experiment has launched its own citizen science initiative, making this one of the best weeks ever to be a fan of the LHC.

The new Higgs Hunters project enlists the help of online volunteers in searching for new information about the Higgs boson.

“Having found the Higgs boson particle, now we want to know how it works,” says Alan Barr, a professor of particle physics at the University of Oxford and lead scientist for the Higgs Hunters project. “We’d like you to look at these pictures of collisions and tell us what you see.”

Massive particles such as the Higgs boson decay into lighter particles after being created in particle collisions. The ATLAS experiment detects the tracks of these lighter particles—unless they have no charge. The ATLAS detector does not see neutral particles in its tracker until they decay into even lighter particles that do have a charge. 

The Higgs Hunters project provides data from the ATLAS experiment to the public in the form of collision event displays, pictures sometimes reminiscent of fireworks that show how particles moved through the detector after a collision. 

Some of these displays show particle tracks that seem to appear out of thin air, starting outside the center of a collision—the sign that an invisible, neutral particle escaped from the collision and decayed.

ATLAS scientists are trying to figure out if the Higgs boson can decay to these neutral particles. But they have to identify and compare many of these “thin air” events before they can do that. Computer algorithms struggle with this task, but people are excellent at pattern recognition. That’s why the scientists turned to citizen science.

“We’re excited that we found a way to present data to the public in a way that they can easily experience it,” says Andy Haas, an assistant professor of physics at New York University and one of the Higgs Hunters collaborators. “And at the same time, they will help us perform real science.”

The project is a collaboration between New York University, Oxford University, ATLAS and the online citizen science forum Zooniverse.

Courtesy of: Higgs Hunters

 


by Sarah Charley at November 26, 2014 12:00 PM

astrobites - astro-ph reader's digest

Chondrule formation by shocks?

Extraterrestrial rocks are important

Fig. 1: A chondrite from Gujba. Most of the small roundish pieces are chondrules. (Image adapted from J. Bollard)

What’s a chondrule? Never heard of that before. That’s probably your first reaction after reading this esoteric-sounding word. However, you’ve probably heard about meteorites before. Meteorites offer insight into the compositions at different locations in our solar system. In particular, unmodified meteorites, i.e. bodies with no differentiation or melting, can help to explain the origin of planetary systems. These unmodified meteorites are called chondrites (Fig. 1) and their most important components are small grains, a few mm to cm in size, namely chondrules. Measurements of radioactive elements (short-lived radionuclides) in chondrules constrain their formation to the first few million years of the solar system. Remember that the solar system itself is about 4.6 billion years old.
Thus, chondrules are the building blocks of our solar system. However, it is unknown how these grains of dust formed initially. Imprints on chondrules reveal that they must have been heated almost instantaneously before cooling within a few hours. These time-scales are very short compared to the millions of years of solar nebula evolution. One suggestion is that nebular shock waves allowed a fast formation of chondrules. The shock induces efficient heating, which melts the dust. Subsequently, the melted dust forms droplets, which cool and finally solidify into chondrules.

A simple model of shocks

Imagine a shock as a discontinuity travelling through a medium. Roughly speaking, the authors investigate the following sequence of events. The shock induces a quick increase of the dust temperature to a peak value T_peak. After the shock event the temperature falls off again and settles to a lower constant value, the so-called post-shock temperature T_post.
The aim of the paper is to test chondrule formation under different conditions. The authors carry out simulations, which solve a radiative transfer model in one dimension, to estimate the evolution of the temperature.
The authors distinguish between local shocks, such as planetesimal bow shocks, and global shocks, induced by gravitational instabilities for instance. You can imagine a local shock as a small hot bullet, which is put into a bucket of cold water. The hot bullet can radiate its heat in all directions, so the temperature drops very quickly. In contrast, a global shock can be described by a hot plate being put into the bucket of cold water. When one part of the plate radiates into the cold water, it radiates into regions that have already been heated by other parts of the plate. Hence the cooling process for the plate, i.e. for global shocks in general, is less efficient.

Results: Shock-induced formation of chondrules is difficult at best

First, the authors examine the temperatures for global shocks and high opacity. They do this by plotting the peak temperature of the shock and the post-shock temperature against the shock velocity for three different densities (Fig. 2). You can see there is no velocity for which the peak temperature exceeds the temperature required for melting (upper solid line) while the post-shock temperature stays low enough to avoid evaporation of volatiles (lower solid line). The authors conclude that chondrules cannot be formed by global shocks in a medium with high density, because emitted photons are likely to be reabsorbed by other particles, so it is difficult to get rid of the thermal energy in this region. Formally speaking, cooling is less efficient in an optically thick medium. The situation is different for local shocks in an optically thick medium: these might provide the right conditions because more rapid cooling is possible.

Fig. 2: The peak and post-shock temperatures as a function of shock velocity for three different densities (left: 10⁻⁹ g/cm³, middle: 10⁻⁸ g/cm³, right: 10⁻⁷ g/cm³). There is no velocity for which the peak temperature exceeds the minimum temperature required for melting (upper solid line) while the post-shock temperature stays beneath the maximum temperature required for solidification (lower solid line).

In the case of low opacity for global shocks, it is easier to radiate energy away because the radiation is blocked less by the surroundings. However, the escape of radiation turns out to be so efficient that the temperature drops below the evaporation temperature after only a few minutes. This temperature evolution also disagrees with the imprints on chondrules. Now you might think that if cooling is insufficient in one case and overly efficient in another, let’s look for the sweet spot in between! Unfortunately, such a spot does not exist for global shocks according to the authors.

Conclusion and Discussion: What comes next?

The conclusion of the paper is that formation of chondrules via global shocks is impossible because the induced cooling is either too weak (optically thick case) or too strong (optically thin case). However, local shocks in an optically thick medium could have the characteristic properties because the temperature can drop faster than for global shocks.
But are these simulations anything like a real protoplanetary disk? Well, the model is idealized in several ways. Most importantly, it only takes into account a one-dimensional model of radiative transfer, while protoplanetary disks are three-dimensional. Furthermore, processes such as nucleation and condensation are missing, and only hydrogen dissociation and recombination are taken into account. Nevertheless, it is rather unlikely that adding more physics would change the general conclusion of the paper drastically. However, the problems and difficulties in producing the appropriate conditions for chondrule formation via shocks raise doubts about the fundamental concept of shock-induced formation. Maybe chondrules are generally formed through a different mechanism instead?

by Michael Küffmeier at November 26, 2014 11:14 AM

Emily Lakdawalla - The Planetary Society Blog

A Rich Potpourri of Future Mission Concepts
The past few months have brought announcements for new missions from India and China as well as a wealth of creative ideas for future missions.

November 26, 2014 11:01 AM

Peter Coles - In the Dark

Research in Modelling Ocean Systems

Time to do a favour for an old friend of mine (who was in fact a graduate student at Sussex at the same time as me, back in the 80s, and is an occasional commenter on this blog), Adrian Burd. Adrian moved to the US of A some time ago and now works on Oceanography (that’s Wave Mechanics, I guess..). Anyway, he now has an opportunity for a PhD student which is suitable for a candidate with a background in Mathematics or Physics. Since I’m Head of the School of Mathematical and Physical Sciences, I thought I’d put the advertisement up on here and see if there are any takers. Looks like an interesting one to me!

[Image: graduate position flyer]

You can download a pdf of the flyer here.

Please direct any queries to Adrian!


by telescoper at November 26, 2014 09:05 AM

Emily Lakdawalla - The Planetary Society Blog

The Science of “Bennu’s Journey”
The OSIRIS-REx project released Bennu’s Journey, a movie describing one possible history of our target asteroid – Bennu. The animation is among the most highly detailed productions created by Goddard’s Conceptual Image Laboratory.

November 26, 2014 12:17 AM

Clifford V. Johnson - Asymptotia

Look Up In the Sky…!
[Image: the graduate electromagnetism class watching the sky]

Yesterday's graduate class in electromagnetism had a bit of extra fun. We did a particular computation in some detail, and arrived at a pair of results. We thought about the main features of the equations we'd derived and I then asked the class if they could think of an example. An example with those equations essentially written all over it. It was the sky. Not just the blueness of the sky (for which the result supplies a partial answer) but the pattern of blueness on the sky, especially when looking through your polarised sunglasses. (You know how you tilt your head when wearing them and you can darken or lighten the sky a bit? Well, that effect is way more effective if you are looking in a direction at right angles to the sun as opposed to either toward or away from the sun.) So I took the class outside to gaze upon the sky in person, rather than just sit and talk about it. Actually, a little bit of knowledge about the pattern of blue in the sky is useful in a lot of ways. For example it is amusing to me to see how often architects and their artist collaborators get the sky wrong in renderings of [...] Click to continue reading this post

by Clifford at November 26, 2014 12:07 AM

November 25, 2014

Christian P. Robert - xi'an's og

reflections on the probability space induced by moment conditions with implications for Bayesian Inference [refleXions]

“The main finding is that if the moment functions have one of the properties of a pivotal, then the assertion of a distribution on moment functions coupled with a proper prior does permit Bayesian inference. Without the semi-pivotal condition, the assertion of a distribution for moment functions either partially or completely specifies the prior.” (p.1)

Ron Gallant will present this paper at the conference in honour of Christian Gouriéroux held next week at Dauphine, and I have been asked to discuss it. What follows is a collection of notes I made while reading the paper, rather than a coherent discussion, which will come later, hopefully prior to the conference.

The difficulty I have with the approach presented therein stands as much with the presentation as with the contents. I find it difficult to grasp the assumptions behind the model(s) and the motivations for only considering a moment and its distribution. Does it all come down to linking fiducial distributions with Bayesian approaches? In which case I am as usual sceptical about the ability to impose an arbitrary distribution on an arbitrary transform of the pair (x,θ), where x denotes the data, rather than a genuine prior × likelihood construct. But I bet this is mostly linked with my lack of understanding of the notion of structural models.

“We are concerned with situations where the structural model does not imply exogeneity of θ, or one prefers not to rely on an assumption of exogeneity, or one cannot construct a likelihood at all due to the complexity of the model, or one does not trust the numerical approximations needed to construct a likelihood.” (p.4)

As often with econometrics papers, this notion of structural model sets me astray: does this mean any latent variable model or an incompletely defined model, and if so why is it incompletely defined? From a frequentist perspective anything random is not a parameter. The term exogeneity also hints at this notion of the parameter being not truly a parameter, but including latent variables and maybe random effects. Reading further (p.7) drives me to understand the structural model as defined by a moment condition, in the sense that

\mathbb{E}[m(\mathbf{x},\theta)]=0

has a unique solution in θ under the true model. However the focus then seems to make a major switch as Gallant considers the distribution of a pivotal quantity like

Z=\sqrt{n} W(\mathbf{x},\theta)^{-\frac{1}{2}} m(\mathbf{x},\theta)

as induced by the joint distribution on (x,θ), hence conversely inducing constraints on this joint, as well as an associated conditional. Which is something I have trouble understanding. First, where does this assumed distribution on Z stem from? And, second, exchanging the randomness of terms in a random variable as if it were a linear equation is a pretty sure way to produce paradoxes and measure-theoretic difficulties.

The purely mathematical problem itself is puzzling: if one knows the distribution of the transform Z=Z(X,Λ), what does that imply on the joint distribution of (X,Λ)? It seems unlikely this will induce a single prior and/or a single likelihood… It is actually more probable that the distribution one arbitrarily selects on m(x,θ) is incompatible with a joint on (x,θ), isn’t it?
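A toy illustration of this non-uniqueness (mine, not from the paper): declaring that Z = x−θ is a standard Normal is compatible with x|θ ~ N(θ,1) under an arbitrary prior on θ, but just as well with θ|x ~ N(x,1) under an arbitrary marginal on x; the pivotal statement alone pins down neither the prior nor the likelihood.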

“The usual computational method is MCMC (Markov chain Monte Carlo) for which the best known reference in econometrics is Chernozhukov and Hong (2003).” (p.6)

While I had never heard of this reference before, it looks like a 50-page survey and may be sufficient as an introduction to MCMC methods for econometricians. What I do not get, though, is the connection between this reference to MCMC and the overall discussion of constructing priors (or not) out of fiducial distributions. The author also suggests using MCMC to produce the MAP estimate, but this has always struck me as inefficient (unless one uses our SAME algorithm, of course).

“One can also compute the marginal likelihood from the chain (Newton and Raftery (1994)), which is used for Bayesian model comparison.” (p.22)

Not the best solution to rely on harmonic means for marginal likelihoods…. Definitely not. While the author actually uses the stabilised version (15) of Newton and Raftery (1994) estimator, which in retrospect looks much like a bridge sampling estimator of sorts, it remains dangerously close to the original [harmonic mean solution] especially for a vague prior. And it only works when the likelihood is available in closed form.

“The MCMC chains were comprised of 100,000 draws well past the point where transients died off.” (p.22)

I wonder if the second statement (with a very nice image of those dying transients!) is intended as a consequence of the first one or independently.

“A common situation that requires consideration of the notions that follow is that deriving the likelihood from a structural model is analytically intractable and one cannot verify that the numerical approximations one would have to make to circumvent the intractability are sufficiently accurate.” (p.7)

This then is a completely different business, namely that defining a joint distribution by means of moment equations prevents regular Bayesian inference because the likelihood is not available. This is more exciting because (i) there are alternatives available! From ABC to INLA (maybe) to EP to variational Bayes (maybe). And beyond. In particular, the moment equations are strongly and even insistently suggesting that empirical likelihood techniques could be well-suited to this setting. And (ii) it is no longer a mathematical worry: there exists a joint distribution on m(x,θ), induced by a (or many) joint distribution on (x,θ). So the question of finding whether or not it induces a single proper prior on θ becomes relevant. But, if I want to use ABC, being given the distribution of m(x,θ) seems to mean I can only generate new values of this transform while missing a natural distance between observations and pseudo-observations. Still, I entertain lingering doubts that this is the meaning of the study. Where does the joint distribution come from..?!

“Typically C is coarse in the sense that it does not contain all the Borel sets (…)  The probability space cannot be used for Bayesian inference”

My understanding of that part is that defining a joint on m(x,θ) is not always enough to deduce a (unique) posterior on θ, which is fine and correct, but rather anticlimactic. This sounds to be what Gallant calls a “partial specification of the prior” (p.9).

Overall, after this linear read, I remain very much puzzled by the statistical (or Bayesian) implications of the paper . The fact that the moment conditions are central to the approach would once again induce me to check the properties of an alternative approach like empirical likelihood.


Filed under: Statistics, University life Tagged: ABC, compatible conditional distributions, empirical likelihood, expectation-propagation, harmonic mean estimator, INLA, latent variable, MCMC, prior distributions, structural model, variational Bayes methods

by xi'an at November 25, 2014 11:14 PM

Quantum Diaries

Geometry and interactions

Or, how do we mathematically describe the interaction of particles?

In my previous post, I addressed some questions concerning the nature of the wavefunction, the most truthful mathematical representation of a particle. Now let us make this simple idea more complete, getting closer to the deep mathematical structure of particle physics. This post is a bit more “mathematical” than the last, and will likely make the most sense to those who have taken a calculus course. But if you bear with me, you may also come to discover that this makes particle interactions even more attractive!

The field theory approach considers wavefunctions as fields. In the same way as the temperature field \(T(x,t)\) gives the value of the temperature in a room at space \(x\) and time \(t\), the wavefunction \(\phi (x,t)\) quantifies the probability of presence of a particle at space point \(x\) and time \(t\).
Cool! But if this sounds too abstract to you, then you should remember what Max Planck said concerning the rise of quantum physics: “The increasing distance between the image of the physical world and our common-sense perception of it simply indicates that we are gradually getting closer to reality”.

Almost all current studies in particle physics focus on interactions and decays of particles. How does the concept of interaction fit into the mathematical scheme?

The mother of all the properties of particles is called the Lagrangian function. Through this object a lot of properties of the theory can be computed. Here let’s consider the Lagrangian function for a complex scalar field without mass (one of the simplest available), representing particles with electric charge and no spin:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) \).

Mmm… Is it just a bunch of derivatives of fields? Not really. What do we mean when we read \(\phi(x)\)? Mathematically, we are considering \(\phi\) as a vector living in a vector space “attached” to the space-time point \(x\). For the nerds of geometry, we are dealing with fiber bundles, structures that can be represented pictorially in this way:

[Image: fiber bundles attached to space-time points]

The important consequence is that, if \(x\) and \(y\) are two different space-time points, a field \(\phi(x)\) lives in a different vector space (fiber) with respect to \(\phi(y)\)! For this reason, we are not allowed to perform operations with them, like taking their sum or difference (it’s like comparing a pear with an apple… either sum two apples or two pears, please). This feature is highly non-trivial, because it changes the way we need to think about derivatives.

In the \(L\) function we have terms containing derivatives of the field \(\phi(x)\). Doing this, we are actually taking the difference of the value of the field at two different space-time points. But … we just outlined that we are not allowed to do it! How can we solve this issue?

If we want to compare fields pertaining to the same vector space, we need to slightly modify the notion of derivative introducing the covariant derivative \(D\):

\( D_\mu = \partial_\mu + ig A_\mu(x) \).

Here, on top of the derivative \(\partial\), there is the action of the “connection” \(A(x)\), a structure which takes care of “moving” all the fields into the same vector space, and finally allows us to compare apples with apples and pears with pears.
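A quick way to see that this works (a standard check, not spelled out in the post): if the field is redefined point by point as \(\phi(x) \to e^{i\alpha(x)}\phi(x)\) and the connection shifts as \(A_\mu \to A_\mu - \frac{1}{g}\partial_\mu \alpha(x)\), then the extra \(\partial_\mu \alpha\) terms cancel and \(D_\mu \phi \to e^{i\alpha(x)} D_\mu \phi\): the covariant derivative transforms exactly like the field itself, so quantities built from it, such as the Lagrangian below, are well defined.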
So, a better way to write down the Lagrangian function is:

\(L(x) = D_\mu \phi(x)^* D^\mu \phi(x) \).

If we expand \(D\) in terms of the derivative and the connection, \(L\) reads:

\(L(x) = \partial_\mu \phi(x)^* \partial^\mu \phi(x) + ig A_\mu (\partial^\mu \phi^* \phi - \phi^* \partial^\mu \phi) + g^2 A^2 \phi^* \phi \).

Do you recognize the role of these three terms? The first one represents the propagation of the field \(\phi\). The last two are responsible for the interactions between the fields \(\phi, \phi^*\) and the \(A\) field, referred to as the “photon” in this context.

interactions


This slightly hand-waving argument involving fields and space-time is a simple handle to understand how the interactions among particles emerge as a geometric feature of the theory.

If we consider more sophisticated fields with spin and color charges, the argument doesn’t change. We need to consider a more refined “connection” \(A\), and we could see the physical interactions among quarks and gluons (namely QCD, Quantum Chromo Dynamics) emerging just from the mathematics.

The professor of my undergraduate geometry course would probably call this explanation “Spaghetti Mathematics”, but I think it can give you a flavor of the mathematical subtleties involved in the theory of particle physics.

by Andrea Signori at November 25, 2014 08:27 PM

Quantum Diaries

Graduating, part 1: The Defense

It’s been a crazy 3 weeks since I officially finished my PhD. I’m in the transition from being a grad student slowly approaching insanity to a postdoc who has everything figured out, and it’s a rocky transition.

The end of the PhD at Wisconsin has two steps. The first is the defense, which is a formal presentation of my research to the professors and committee, our colleagues, and very few friends and family. The second is actually turning in the completed dissertation to the grad school, with the accompanying “margin check” appointment with the grad school. In between, the professors can send me comments about the thesis. I’ve heard so many stories of different universities setting up the end of a degree differently, it’s pretty much not worth going into the details. If you or someone you know is going through this process, you don’t need a comparison of how it works at different schools, you just need a lot of support and coping mechanisms. All the coping mechanisms you can think of, you need them. It’s ok, it’s a limited time, don’t feel guilty, just get through it. There is an end, and you will reach it.

The days surrounding the defense were planned out fairly carefully, including a practice talk with my colleagues, again with my parents (who visited for the defense), and delivery burritos. I ordered coffee and doughnuts for the defense from the places where you get those, and I realized why such an important day has such a surprisingly small variety of foods: because deviating from the traditional food is so very far down my list of priorities when there’s the physics to think about, and the committee, and the writing. The doughnuts just aren’t worth messing with. Plus, the traditional place to get doughnuts is already really good.

We even upheld a tradition the night before the defense. It’s not really a tradition per se, but I’ve seen it once and performed it once, so that makes it a tradition. If you find it useful, you can call it an even stronger tradition! We played an entire soundtrack and sang along, with laptops open working on defense slides. When my friend was defending, we watched “Chicago” the musical, and I was a little hoarse the next day. When I was defending, we listened to Leonard Bernstein’s version of Voltaire’s “Candide,” which has some wonderful wordplay and beautiful writing for choruses. The closing message was the comforting thought that it’s not going to be perfect, but life will go on.

“We’re neither wise nor pure nor good, we’ll do the best we know. We’ll build our house, and chop our wood, and make our garden grow.”

Hearing that at the apex of thesis stress, I think it will always make me cry. By contrast, there’s also a scene in Candide depicting the absurd juxtaposition of a fun-filled fair centered around a religious inquisition and hanging. Every time someone said they were looking forward to seeing my defense, I thought of this hanging-festival scene. I wonder if Pangloss had to provide his own doughnuts.

The defense itself went about as I expected it would. The arguments I presented had been polished over the last year, the slides over the last couple of weeks, and the wording over a few days. My outfit was chosen well in advance to be comfortable, professional, and otherwise unremarkable (and keep my hair out of my way). The seminar itself was scheduled for the time when we usually have lab group meetings, so the audience was the regular lab group albeit with a higher attendance-efficiency factor. The committee members were all present, even though one had to switch to a 6am flight into Madison to avoid impending flight cancellations. The questions from the committee mostly focused on understanding the implications of my results for other IceCube results, which I took to mean that my own work was presented well enough to not need further explanation.

It surprised me, in retrospect, how quickly the whole process went. The preparation took so long, but the defense itself went so quickly. From watching other people’s defenses, I knew to expect a few key moments: an introduction from my advisor, handshakes from many people at the end of the public session, the moment of walking out from the closed session to friends waiting in the hallway, and finally the first committee member coming out smiling to tell me they decided to pass me. I knew to look for these moments, and they went by so much faster in my own defense than I remember from my friends. Even though it went by so quickly, it still makes a difference having friends waiting in the hallway.

People asked me if it was a weight off my shoulders when I finally defended my thesis. It was, in a way, but even more it felt like cement shoes off my feet. Towards the end of the process, for the last year or so, a central part of myself felt professionally qualified, happy, and competent. I tried desperately to make that the main part. But until the PhD was finished, that part wasn’t the exterior truth. When I finished, I felt like the qualifications I had on paper matched how qualified I felt about myself. I’m still not an expert on many things, but I do know the dirty details of IceCube software and programming. I have my little corner of expertise, and no one can take that away. Degrees are different from job qualifications that way: if you stop working towards a PhD several years in, it doesn’t count as a fractional part of a degree; it’s just quitting. But if you work at almost any other job for a few years, you can more or less call it a few years of experience. A month before my defense, part of me knew I was so so so close to being done, but that didn’t mean I could take a break.

And now, I can take a break.

by Laura Gladstone at November 25, 2014 08:08 PM

Symmetrybreaking - Fermilab/SLAC

Students join the hunt for exotic new physics

Students will help the MoEDAL experiment at CERN seek evidence of magnetic monopoles, microscopic black holes and other phenomena.

For the first time, a high school has joined a high-energy physics experiment as a full member. Students from the Simon Langton Grammar School in Canterbury, England, have become participants in the newest experiment at the Large Hadron Collider at CERN.

The students will help with the search for new exotic particles such as magnetic monopoles, massive supersymmetric particles, microscopic black hole remnants, Q-balls and strangelets through an experiment called MoEDAL (Monopole and Exotics Detector at the LHC).

The students, who take part in a school-based research lab, will remotely monitor radiation backgrounds at the experiment.

The Simon Langton school has worked with the experiment’s chips, called Timepix, during previous projects that included a cosmic-ray detector the students helped to design. It was launched aboard a dishwasher-sized UK tech demonstration satellite by a commercial firm in July.

“I think it’s enormously exciting for these students to think about what they could find,” says Becky Parker, the physics teacher who oversees the Langton school’s involvement. “It’s empowering for them that they could be a part of these amazing discoveries… You can’t possibly teach them about particle physics unless you can teach them about discovery.”

The state-of-the-art array of Timepix chips that the Langton group will monitor is the only real-time component of the four detector systems that comprise the MoEDAL experiment, which is operated by a collaboration of 66 physicists from 23 institutes in 13 countries on 4 continents.

The MoEDAL detector acts as a giant camera, with 400 stacks of plastic detectors as its “film.” MoEDAL is also designed to capture the particle messengers of new physics for further study in a 1-ton pure-aluminum trapping system.

MoEDAL is sensitive to massive, long-lived particles predicted by a number of “beyond the Standard Model” theories that other LHC experiments may be incapable of detecting and measuring.

“It is very exciting to be on the forefront of groundbreaking physics, which includes such amazing insight into what the best physicists of the world are doing,” says 16-year-old Langton student Ellerey Ireland.

“MoEDAL has allowed me to see the passion and determination of physicists and opened my mind to where physics can lead,” says Langton student, Saskia Jamieson Bibb, also 16. “I am planning to study physics at university.”

One of the hypothetical particles MoEDAL is designed to detect is the magnetic monopole—essentially a magnet with only one pole. Blas Cabrera, a physics professor at Stanford who is also part of the Particle Physics and Astrophysics faculty at SLAC National Accelerator Laboratory, measured a possible magnetic monopole event in 1982. A group from Imperial College London found a similar possible event in 1986.

More recently analogues of magnetic monopoles have been created in laboratory experiments. But at MoEDAL, they’ll have a chance to catch the real thing, says University of Alberta physicist James Pinfold, the spokesperson for MoEDAL and a visiting professor at King’s College London.  

The theoretical base supporting the existence of magnetic monopoles is strong, he says. “Of all new physics scenarios out there today, magnetic monopoles are the most certain to actually exist.”

Confirmation of the existence of magnetic monopoles could clue researchers in to the nature of the big bang itself, as these particles are theorized to have emerged at the onset of our universe.  

“The discovery of the magnetic monopole or any other exotic physics by MoEDAL would have incredible ramifications that would revolutionize the way we see things,” Pinfold says. “Such a discovery would be as important as that of the electron.”

 


by Glenn Roberts Jr. at November 25, 2014 03:52 PM

Tommaso Dorigo - Scientificblogging

Volunteer-Based Peer Review: A Success
A week ago I invited readers of this blog to review a paper I had just written, as its publication process did not include any form of screening (as opposed to what is customary for articles in particle physics, which go through multiple stages of review). That's not the first time for me: in the past I did the same with other articles, and usually I received good feedback. So I knew this could work.

read more

by Tommaso Dorigo at November 25, 2014 01:34 PM

Peter Coles - In the Dark

Doomsday is Cancelled…

Last week I posted an item that included a discussion of the Doomsday Argument. A subsequent comment on that post mentioned a paper by Ken Olum, which I finally got around to reading over the weekend, so I thought I’d post a link here for those of you worrying that the world might come to an end before the Christmas holiday.

You can find Olum’s paper on the arXiv here. The abstract reads (my emphasis):

If the human race comes to an end relatively shortly, then we have been born at a fairly typical time in history of humanity. On the other hand, if humanity lasts for much longer and trillions of people eventually exist, then we have been born in the first surprisingly tiny fraction of all people. According to the Doomsday Argument of Carter, Leslie, Gott, and Nielsen, this means that the chance of a disaster which would obliterate humanity is much larger than usually thought. Here I argue that treating possible observers in the same way as those who actually exist avoids this conclusion. Under this treatment, it is more likely to exist at all in a race which is long-lived, as originally discussed by Dieks, and this cancels the Doomsday Argument, so that the chance of a disaster is only what one would ordinarily estimate. Treating possible and actual observers alike also allows sensible anthropic predictions from quantum cosmology, which would otherwise depend on one’s interpretation of quantum mechanics.

I think Olum does identify a logical flaw in the argument, but it’s by no means the only one. I wouldn’t find it at all surprising to be among the first “tiny fraction of all people”, as my genetic characteristics are such that I could not be otherwise. But even if you’re not all that interested in the Doomsday Argument I recommend you read this paper as it says some quite interesting things about the application of probabilistic reasoning elsewhere in cosmology, an area in which quite a lot is written that makes no sense to me whatsoever!
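
A schematic way to see the cancellation Olum invokes (my own sketch of the standard argument, not notation taken from the paper): write \(N\) for the total number of humans who will ever live and \(n\) for your birth rank. Reasoning only from those who actually exist gives

\[ P(N\mid n) \;\propto\; \frac{1}{N}\,P(N) \qquad (n\le N), \]

which shifts the posterior towards small \(N\), i.e. towards an early doom. Treating possible observers on the same footing as actual ones weights each hypothesis by the number of observers it contains, multiplying the right-hand side by \(N\) and leaving \(P(N\mid n)\propto P(N)\): the Doomsday shift cancels, which is the Dieks/Olum point.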

 


by telescoper at November 25, 2014 11:12 AM

November 24, 2014

arXiv blog

Yahoo Labs' Algorithm Identifies Creativity in 6-Second Vine Videos

Nobody knew how to automatically identify creativity until researchers at Yahoo Labs began studying the Vine livestream.

November 24, 2014 11:50 PM

Christian P. Robert - xi'an's og

prayers and chi-square

One study I spotted in Richard Dawkins’ The God Delusion this summer by the lake concerns the (im)possible impact of prayer on patients’ recovery. As a coincidence, my daughter got this problem in her statistics class last week (my translation):

1802 patients in 6 US hospitals were divided into three groups. Members of group A were told that unspecified religious communities would pray for them by name, while patients in groups B and C did not know whether anyone prayed for them. Those in group B had communities praying for them while those in group C did not. After 14 days of prayer, the conditions of the patients were as follows:

  • out of 604 patients in group A, the condition of 249 had significantly worsened;
  • out of 601 patients in group B, the condition of 289 had significantly worsened;
  • out of 597 patients in group C, the condition of 293 had significantly worsened.

Use a chi-square procedure to test for homogeneity between the three groups, for a significant impact of prayer, and for a placebo effect of prayer.

This may sound a wee bit weird for a school test, but she is in medical school after all so it is a good way to enforce rational thinking while learning about the chi-square test! (Answers: [even though the data is too sparse to clearly support a decision, esp. when using the chi-square test!] homogeneity and placebo effect are acceptable assumptions at level 5%, while the prayer effect is not [if barely].)
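
For readers who want to reproduce the computation, here is a minimal R sketch (my own reading of which comparisons address the prayer and placebo questions; correct=FALSE drops Yates' continuity correction so the statistic matches the textbook chi-square):

# counts from the exercise: rows = worsened / not worsened, columns = groups A, B, C
worse <- c(249, 289, 293)
n     <- c(604, 601, 597)
tab   <- rbind(worsened = worse, not_worsened = n - worse)
colnames(tab) <- c("A", "B", "C")

chisq.test(tab)                                  # homogeneity of the three groups
chisq.test(tab[, c("B", "C")], correct = FALSE)  # prayer effect: prayed for (B) vs not (C), both unaware
chisq.test(tab[, c("A", "B")], correct = FALSE)  # placebo effect: told (A) vs not told (B), both prayed for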


Filed under: Books, Kids, Statistics, University life Tagged: binomial distribution, chi-square test, exercises, medical school, prayer, Richard Dawkins, The God Delusion

by xi'an at November 24, 2014 11:14 PM

astrobites - astro-ph reader's digest

Gas to Black Holes: Direct formation of a supermassive black hole in galaxy mergers
Title: Direct Formation of Supermassive Black Holes in Metal-Enriched Gas at the Heart of High-Redshift Galaxy Mergers

Authors: L. Mayer, D. Fiacconi, S. Bonoli, T. Quinn, R. Roskar, S. Shen, J. Wadsley

First Author’s Institution: Center for Theoretical Astrophysics and Cosmology, Inst. for Comp. Sci., & Physik Institut, University of Zurich, Zurich, Switzerland

Paper Status: Submitted to The Astrophysical Journal

Massive galaxies like our Milky Way all contain a supermassive black hole (SMBH) at their center, with masses ranging from 10^6 to 10^9 solar masses. The SMBH suspected to sit in the center of our galaxy, known as Sgr A*, is estimated to be around four million solar masses. Although we know they exist, how they form is still an unanswered question in astronomy. The challenging question is how so much mass can collapse into such a small volume (about 100 AU for our SMBH) fast enough that we observe them in the early universe as the power source of quasars, less than a billion years after the Big Bang (z ~ 7).
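
To put a rough number on “fast enough” (a back-of-the-envelope estimate of mine, not taken from the paper): a black hole accreting at the Eddington limit with the usual 10% radiative efficiency grows with an e-folding (Salpeter) time of about 50 million years, so growing from a stellar seed to a quasar-powering monster takes

\[ t_{\rm grow} \simeq t_{\rm Salpeter}\,\ln\!\left(\frac{M_{\rm BH}}{M_{\rm seed}}\right) \approx 50\,{\rm Myr}\times \ln\!\left(\frac{10^9}{10^2}\right) \approx 0.8\,{\rm Gyr}, \]

comparable to the entire age of the universe at z ~ 7. A stellar-mass seed would therefore have to accrete at the Eddington limit essentially without interruption, which is one reason the heavier “direct collapse” seeds discussed below are attractive.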

There are three likely possibilities, all of which involve forming “seed” black holes that grow over time to SMBH size: 1) low mass seeds from the deaths of the first stars, 2) the direct collapse of massive regions of gas into a black hole, forming massive seeds, and 3)  mergers of stars in dense star clusters, forming a very massive star, and, in its death, a very massive black hole. The authors use hydrodynamic simulations to examine the direct collapse to a SMBH of a region of gas formed from the merger of two Milky Way mass galaxies.

Merging Galaxies: A Recipe for a SMBH

The authors use a simulation code called GASOLINE2, which, at its core, models the flow of gas as individual particles in what is called smoothed particle hydrodynamics (SPH). The biggest challenge in creating direct collapse SMBH seeds is keeping the gas cloud coherent throughout the process. These massive clouds can often break apart, or fragment, during collapse, forming stars or less massive black holes. The authors use a more efficient, lower resolution setup to simulate the merger of two galaxies of masses around 10^12 solar masses each, then “zoom in” with higher resolution in the final merger stages to observe the gas collapse at the core of the newly formed galaxy, exploring whether the cloud collapses directly or fragments over time. Fig. 1 gives a projection of the gas surface density of their two galaxies, and a zoom into the core of one, roughly three thousand years before the galaxies merge.


Fig. 1: The gas surface density of the merging galaxy pair around three thousand years before the final merger occurs. Each panel shows a successive zoom in the simulation, with the final panel showing the central few parsecs of one of the two galaxies. (Source: Fig. 1 from Mayer et al. 2014)

The Direct Gas Collapse

The new ingredient the authors add to the modeling of the direct gas collapse process is the inclusion of radiative cooling, together with a model that accounts for changes in opacity due to dust, dust heating and cooling, atomic and molecular heating and cooling, and cosmic ray heating. These processes together may stabilize the cloud against fragmentation, more easily forming a SMBH seed, or they may cause dramatic fragmentation of the cloud (bad news for the SMBH). Fig. 2 shows the central galaxy region, where the massive cloud ultimately forms, at five thousand years after the two galaxies merged. The panels show four different simulations, each of which tests the effects of including or removing different physical processes. In each case, the central region is a single, massive (roughly 10^9 solar masses) disk-like structure. The gas clumps around the core are examples of gas fragmentation that could ultimately form stars.


Fig. 2: Five thousand years after the merger of the two galaxies shown in Fig. 1, this image gives the gas surface density for the new galaxy and its core. The four panels give the results of four different simulations the authors used to test the importance of different physics. (Source: Fig. 3 of Mayer et al. 2014)

As the core evolves, it remains intact thanks to heating from shocks and turbulence and to its high opacity to radiation; these all prevent the cooling that can spawn fragmentation. Unfortunately, the authors can only follow the evolution of the central core for around 50 thousand years before they hit computational limits. By this, I mean that continuing to evolve the simulation would require a higher resolution than is computationally feasible. In addition, as the core collapses and shrinks in size, the minimum time step drops dramatically and the simulation slows to a crawl.


Fig. 3: Gas surface density for the core shown in Run 4 of Fig. 2, 30 thousand years after the galaxy merger. By this point, the core has fragmented into two massive gas clouds that may ultimately form two SMBHs. (Source: Fig. 10 of Mayer et al. 2014)

Fig. 3 shows the final evolved state of the core 30 thousand years after the merger for Run 4 shown in Fig. 2. As shown here, the core actually fragments into two massive gas clumps, one at the center, and one slightly off-center. These clumps are about 10^9 and 10^8 solar masses respectively, and may ultimately form two SMBHs that could eventually merge into a single SMBH as the galaxy evolves.

The Cloud’s Final Fate

Using analytic calculations and results from previous work, the authors make some simple arguments for how the final gas clouds in Fig. 3 can form black holes via direct collapse. They argue it is possible that these clouds can form SMBHs in a ten-thousand-year process through a collapse generated by general relativistic instabilities. This work provides new insight into how SMBHs may form in the early universe from the direct collapse of gas clouds. The authors conclude by suggesting that future simulations including general relativity, along with observations by the James Webb Space Telescope and the Atacama Large Millimeter Array, will be invaluable for better understanding how SMBHs can form from the direct collapse of gas clouds.

 

by Andrew Emerick at November 24, 2014 10:04 PM

Quantum Diaries

Neutrinos, claymation and ‘Doctor Who’ at this year’s physics slam

This article appeared in Fermilab Today on Nov. 24, 2014.

Wes Ketchum of the MicroBooNE collaboration is the Physics Slam III champion. Ketchum’s slam was on the detection of particles using liquid argon. Photo: Cindy Arnold

On Nov. 21, for the third year in a row, the Fermilab Lecture Series invited five scientists to battle it out in an event called a physics slam. And for the third year in a row, the slam proved wildly popular, selling out Ramsey Auditorium more than a month in advance.

More than 800 people braved the cold to watch this year’s contest, in which the participants took on large and intricate concepts such as dark energy, exploding supernovae, neutrino detection and the overwhelming tide of big data. Each scientist was given 10 minutes to discuss a chosen topic in the most engaging and entertaining way possible, with the winner decided by audience applause.

Michael Hildreth of the University of Notre Dame kicked things off by humorously illustrating the importance of preserving data — not just the results of experiments, but the processes used to obtain those results. Marcelle Soares-Santos of the Fermilab Center for Particle Astrophysics took the stage dressed as the Doctor from “Doctor Who,” complete with a sonic screwdriver and a model TARDIS, to explore the effects of dark energy through time.

Joseph Zennamo of the University of Chicago brought the audience along on a high-energy journey through the “Weird and Wonderful World of Neutrinos,” as his talk was called. And Vic Gehman of Los Alamos National Laboratory blew minds with a presentation about supernova bursts and the creation of everything and everyone in the universe.

The slammers at this year’s Fermilab Physics Slam were Michael Hildreth, University of Notre Dame (far left); Marcelle Soares-Santos, Fermilab (second from left); Vic Gehman, Los Alamos National Laboratory (third from left); Wes Ketchum, Fermilab (second from right); and Joseph Zennamo, University of Chicago. Fermilab Director Nigel Lockyer (third from right) congratulated all the participants. Photo: Cindy Arnold

The winner was Fermilab’s Wes Ketchum, a member of the MicroBooNE collaboration. Ketchum’s work-intensive presentation used claymation to show how different particles interact inside a liquid-argon particle detector, depicting them as multicolored monsters bumping into one another and creating electrons for the detector’s sensors to pick up. Audience members won’t soon forget the sight of a large oxygen monster eating red-blob electrons.

After the slam, the five scientists took questions from the audience, including one about dark matter and neutrinos from an eight-year-old boy, sparking much discussion. Chris Miller, speech professor at the College of DuPage, made his third appearance as master of ceremonies for the Physics Slam, and thanked the audience — particularly the younger attendees — for making the trek to Fermilab on a Friday night to learn more about science.

Video of this year’s Physics Slam is available on Fermilab’s YouTube channel.

Andre Salles

by Fermilab at November 24, 2014 05:25 PM

Peter Coles - In the Dark

Farewell to Blackberry…

I’m not really a great one for gadgets so I rarely post about technology. I just thought I’d do a quick post because the weekend saw the end of an era. I had been using a Blackberry smartphone for some time, the latest one being a Blackberry Curve, and even did a few posts on here using the WordPress App for Blackberry. I never found that particular bit of software very easy to use, however, so it was strictly for emergencies only (e.g. when stuck on a train). Other than that I got on pretty well with the old thing, except for the fact that there was no easy way to receive my work email from Sussex University on it. That has been a convenient excuse for me to ignore such communications while away from the internet, but recently it’s become clear that I need to be better connected to deal with pressing matters.

Anyway a few weeks ago I got a text message from Vodafone telling me I was due a free upgrade on my contract so I decided to bite the bullet, ditch the Blackberry and acquire an Android phone instead. I’m a bit allergic to those hideously overpriced Apple products, you see, which made an iPhone unthinkable.  On Saturday morning I paid a quick visit to the vodafone store in Cardiff and after a nice chat – mainly about Rugby (Wales were playing the All Blacks later that day) and the recent comet landing – I left with a new Sony Xperia Z2. I feel a bit sorry for turning my back on Blackberry; they really were innovators at one point, but they made some awful business decisions and have been left behind by the competition. Incidentally, the original company Research In Motion (RIM) was doing well enough 15 years ago to endow the PeRIMeter Institute for Theoretical Physics in Waterloo, Ontario, which was one of the reasons for my loyalty to date. The company is now called Blackberry Limited and has recently gone through major restructuring in its struggle for survival.

The Xperia Z2 is a nice phone, with a nice big display, generally very easy to find your way around, and with a lot more apps available than for Blackberry. I’ve got my Sussex email working and got Twitter, Facebook and WordPress installed; the latter is far better on Android than on Blackberry. The only thing I don’t like is the autocorrect/autocomplete, which is wretched, and which  I haven’t yet figured out how to switch off. The other thing is that it’s completely waterproof, but I haven’t taken it into the shower yet.

I feel quite modern for a change – my old Blackberry did make me feel like an old fogey sometimes – but since I’ve now signed up for another two years of contract before my next upgrade, there’s plenty of time for technology to overtake me again.

 

 


by telescoper at November 24, 2014 05:23 PM

Andrew Jaffe - Leaves on the Line

Oscillators, Integrals, and Bugs

I am in my third year teaching a course in Quantum Mechanics, and we spend a lot of time working with a very simple system known as the harmonic oscillator — the physics of a pendulum, or a spring. In fact, the simple harmonic oscillator (SHO) is ubiquitous in almost all of physics, because we can often represent the behaviour of some system as approximately the motion of an SHO, with some corrections that we can calculate using a technique called perturbation theory.

It turns out that in order to describe the state of a quantum SHO, we need to work with the Gaussian function, essentially the combination exp(-y²/2), multiplied by another set of functions called Hermite polynomials. These latter functions are just, as the name says, polynomials, which means that they are just sums of terms like ayⁿ where a is some constant and n is 0, 1, 2, 3, … Now, one of the properties of the Gaussian function is that it dives to zero really fast as y gets far from zero, so fast that multiplying by any polynomial still goes to zero quickly. This, in turn, means that we can integrate polynomials, or the product of polynomials (which are just other, more complicated polynomials) multiplied by our Gaussian, and get nice (not infinite) answers.

Unfortunately, Wolfram Inc.’s Mathematica (the most recent version 10.0.1) disagrees:

MathematicaGaussHermiteBug

The details depend on exactly which Hermite polynomials I pick — 7 and 16 fail, as shown, but some combinations give the correct answer, which is in fact zero unless the two numbers differ by just one. In fact, if you force Mathematica to split the calculation into separate integrals for each term, and add them up at the end, you get the right answer.
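
For the record, here is my reading of why that selection rule holds. Assuming the integrand contains a single power of y sandwiched between the two oscillator wavefunctions (i.e. a position matrix element, which is not spelled out above), the rule follows from the recursion and orthogonality relations of the (physicists’) Hermite polynomials,

\[ y\,H_n(y) = \tfrac{1}{2}H_{n+1}(y) + n\,H_{n-1}(y), \qquad \int_{-\infty}^{\infty} H_m(y)\,H_n(y)\,e^{-y^2}\,dy = \sqrt{\pi}\,2^n\,n!\,\delta_{mn}, \]

so the integral of \(H_m(y)\,y\,H_n(y)\,e^{-y^2}\) vanishes unless \(m = n \pm 1\).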

I’ve tried to report this to Wolfram, but haven’t heard back yet. Has anyone else experienced this?

by Andrew at November 24, 2014 04:15 PM

Symmetrybreaking - Fermilab/SLAC

Creating a spark

Science has a long history of creativity generated through collaboration between fields.

A principle of 18th century mechanics holds that if a physical system is symmetric in some way, then there is a conservation law associated with the symmetry. Mathematician Emmy Noether generalized this principle in a proof in 1918. Her theorem, in turn, has provided a very powerful tool in physics, helping to describe the conservation of energy and momentum.

Science has a long history of creativity generated through this kind of collaboration between fields.

In the process of sharing ideas, researchers expose assumptions, discern how to clearly express concepts and discover new connections between them. These connections can be the sparks of creativity that generate entirely new ideas.

In 1895, physicist Wilhelm Roentgen discovered X-rays while studying the effects of sending an electric current through low-pressure gas. Within a year, doctors made the first attempts to use them to treat cancer, first stomach cancer in France and later breast cancer in America. Today, millions of cancer patients’ lives are saved each year with clinical X-ray machines.

A more recent example of collaboration between fields is the Web, originally developed as a way for high-energy physicists to share data. It was itself a product of scientific connection, between hypertext and Internet technologies.

In only 20 years, it has transformed information flow, commerce, entertainment and telecommunication infrastructure.

This connection transformed all of science. Before the Web, learning about progress in other fields meant visiting the library, making a telephone call or traveling to a conference. While such modest impediments never stopped interdisciplinary collaboration, they often served to limit opportunity.

With the Web have come online journals and powerful tools that allow people to search for and instantly share information with anyone, anywhere, anytime. In less than a generation, a remarkable amount of the recorded history of scientific progress of the last roughly 3600 years has become instantly available to anyone with an Internet connection.

Connections provide not only a source of creativity in science but also a way to accelerate science, both by opening up entirely new ways of formulating and testing theory and by providing direct applications of the fruits of basic R&D. The former opens new avenues for understanding our world. The latter provides applications of technologies outside their fields of origin. Both are vital.

High-energy physics is actively working with other fields to jointly solve new problems. One example of this is the Accelerator Stewardship Program, which studies ways that particle accelerators can be used in energy and the environment, medicine, industry, national security and discovery science. Making accelerators that meet the cost, size and operating requirements of other applications requires pushing the technology in new directions. In the process we learn new ways to solve our own problems and produce benefits that are widely recognized and sought after. Other initiatives aim to strengthen intellectual connections between particle physics itself and other sciences.

Working in concert with other fields, we will gain new ways of understanding the world around us.

 


by Eric Colby, US Department of Energy, Office of High Energy Physics at November 24, 2014 01:00 PM

ZapperZ - Physics and Physicists

Fermilab Physics Slam 2014
A very entertaining video to watch if you were not at this year's Physics Slam.



Zz.

by ZapperZ (noreply@blogger.com) at November 24, 2014 01:03 AM

November 23, 2014

Lubos Motl - string vacua and pheno

Anton Kapustin: Quantum geometry, a reunion of math and physics
I think that this 79-page presentation by Caltech's Anton Kapustin is both insightful and entertaining.



If you are looking for the "previous slide" button, you may achieve this action simply by clicking 78 times. Click once for the "next slide".

If you have any problems with the embedded Flash version of the talk [click for full screen] above, download Anton's PowerPoint file which you may display using a Microsoft Office viewer or an OpenOffice or a LibreOffice or a Chrome extension or Google Docs or in many other ways.

Spoilers are below.




Anton describes the relationship between mathematics and physics, mathematicians and physicists, and so on. He focuses on the noncommutative character of algebras of observables in quantum mechanics. No mathematician really believed Feynman's path integral and no physicist was interested in the mathematics of people like Grothendieck.




However, some smart opportunists in the middle – for example, Maxim Kontsevich – were able to derive interesting results (from the mathematicians' viewpoint) using path integral methods applied to Poisson manifolds. And it wasn't just some lame undergraduate Feynman path integral that was needed. It was the stringy path integral that may be formulated using an associative product.
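
For context (my gloss, not Anton's slides): the “associative product” referred to here is the star product of deformation quantization on a Poisson manifold, schematically

\[ f \star g = fg + \frac{i\hbar}{2}\,\pi^{ij}\,\partial_i f\,\partial_j g + \mathcal{O}(\hbar^2), \qquad (f\star g)\star h = f\star (g\star h), \]

where \(\pi^{ij}\) is the Poisson bivector defining the classical bracket \(\{f,g\} = \pi^{ij}\partial_i f\,\partial_j g\); Kontsevich's formula supplies the higher orders in \(\hbar\).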

Hat tip: John Preskill, Twitter

by Luboš Motl (noreply@blogger.com) at November 23, 2014 06:57 PM

ZapperZ - Physics and Physicists

Research Gate
Anyone else here on Research Gate?

First of all, let me declare that I'm not on Facebook, don't have a Twitter account, etc. This blog is my only form of "social media" involvement in physics, if you discount online physics forums. So I'm not that into these social media activities. Still, I've been on Research Gate for several years after being invited into it by a colleague.

If you're not familiar with it, Research Gate is a social media platform for ... you guessed it ... researchers. You reveal as much about yourself as you wish in your profile, and you can list all your papers and upload them. The software also "trawls" journals and the web to find publications that you may have authored and periodically asks you to verify that they are yours. Most of mine that are currently listed were found by the software, so it is pretty good.

Of course, the other aspect of such a social media platform is that you can "follow" others. The software, like any good social media AI, will suggest people that you might know, such as your coauthors, people from the same institution as yours, or any other situation where your name and that person's name appear in the same document. It also keeps tabs on what the people who follow you, or the ones you follow, are doing, such as new publications being added, job changes, and so on. It also tells you how many people viewed your profile, how many read your publications, and how many times your publications have been downloaded from the Research Gate site.

Another part of Research Gate is that you can submit a question in a particular field, and if that is a field that you've designated as your area of expertise, it will alert you to it so that you have the option of responding. I think this is the most useful feature of this community because this is what makes it "science specific", rather than just any generic social media program.

I am still unsure of the overall usefulness and value of this thing. So far it has been "nice", but I have yet to see it as being a necessity. Although, I must say, I'm pleasantly surprised to see some prominent names in my field of study who are also on it, which is why I continued to be on it as well.

So, if you are also on it, what do you think of it? Do you think this will eventually evolve into something that almost all researchers will someday need?

Zz.

by ZapperZ (noreply@blogger.com) at November 23, 2014 02:09 PM

November 22, 2014

Georg von Hippel - Life on the lattice

Scientific Program "Fundamental Parameters of the Standard Model from Lattice QCD"
Recent years have seen a significant increase in the overall accuracy of lattice QCD calculations of various hadronic observables. Results for quark and hadron masses, decay constants, form factors, the strong coupling constant and many other quantities are becoming increasingly important for testing the validity of the Standard Model. Prominent examples include calculations of Standard Model parameters, such as quark masses and the strong coupling constant, as well as the determination of CKM matrix elements, which is based on a variety of input quantities from experiment and theory. In order to make lattice QCD calculations more accessible to the entire particle physics community, several initiatives and working groups have sprung up, which collect the available lattice results and produce global averages.

We are therefore happy to announce the scientific program "Fundamental Parameters of the Standard Model from Lattice QCD" to be held from August 31 to September 11, 2015 at the Mainz Institute for Theoretical Physics (MITP) at Johannes Gutenberg University Mainz, Germany.

This scientific programme is designed to bring together lattice practitioners with members of the phenomenological and experimental communities who are using lattice estimates as input for phenomenological studies. In addition to sharing the expertise among several communities, the aim of the programme is to identify key quantities which allow for tests of the CKM paradigm with greater accuracy and to discuss the procedures in order to arrive at more reliable global estimates.

We would like to invite you to consider attending this and to apply through our website. After the deadline (March 31, 2015), an admissions committee will evaluate all the applications.

Among other benefits, MITP offers all its participants office space and access to computing facilities during their stay. In addition, MITP will cover local housing expenses for accepted participants. The MITP team will arrange and book the accommodation individually for accepted participants.

Please do not hesitate to contact us at coordinator@mitp.uni-mainz.de if you have any questions.

We hope you will be able to join us in Mainz in 2015!

With best regards,

the organizers:
Gilberto Colangelo, Georg von Hippel, Heiko Lacker, Hartmut Wittig

by Georg v. Hippel (noreply@blogger.com) at November 22, 2014 11:02 PM

Clifford V. Johnson - Asymptotia

Luncheon Reflections
You know, I never got around to mentioning here that I am now Director (co-directing with Louise Steinman who runs the ALOUD series) of the Los Angeles Institute for the Humanities (LAIH), a wonderful organisation that I have mentioned here before. It is full of really fascinating people from a range of disciplines: writers, artists, historians, architects, musicians, critics, filmmakers, poets, curators, museum directors, journalists, playwrights, scientists, actors, and much more. These LAIH Fellows are drawn from all over the city, and equally from academic and non-academic sources. The thing is, you'll find us throughout the city involved in all sorts of aspects of its cultural and intellectual life, and LAIH is the one organisation in the city that tries to fully bring together this diverse range of individuals (all high-achievers in their respective fields) into a coherent force. One of the main things we do is simply sit together regularly and talk about whatever's on our minds, stimulating and shaping ideas, getting updates on works in progress, making suggestions, connections, and so forth. Finding time in one's schedule to just sit together and exchange ideas with no particular agenda is an important thing to do and we take it very seriously. We do this at [...] Click to continue reading this post

by Clifford at November 22, 2014 03:21 AM

November 21, 2014

The Great Beyond - Nature blog

Gates Foundation announces world’s strongest policy on open access research

The Bill & Melinda Gates Foundation has announced the world’s strongest policy in support of open research and open data. If strictly enforced, it would prevent Gates-funded researchers from publishing in well-known journals such as Nature and Science.

On 20 November, the medical charity, based in Seattle, Washington, announced that from January 2015, researchers it funds must make their resulting papers and underlying data sets open immediately upon publication — and must make that research available for commercial re-use. “We believe that published research resulting from our funding should be promptly and broadly disseminated,” the foundation states. It says it will pay the necessary publication fees (which often amount to thousands of dollars per article).

The Foundation is allowing two years’ grace: until 2017, researchers may apply a 12-month delay before their articles and data are made free. At first glance, this suggests that authors may still — for now — publish in journals that do not offer immediate open-access (OA) publishing, such as Science and Nature. These journals permit researchers to archive their peer-reviewed manuscripts elsewhere online, usually after a delay of 6-12 months following publication.

Allowing a year’s delay makes the charity’s open-access policy similar to those of other medical funders, such as the Wellcome Trust or the US National Institutes of Health (NIH). But the charity’s intention to close off this option by 2017 might put pressure on paywalled journals to create an open-access publishing route.

However, the Gates Foundation’s policy has a second, more onerous twist which appears to put it directly in conflict with many non-OA journals now, rather than in 2017. Once made open, papers must be published under a license that legally allows unrestricted re-use — including for commercial purposes. This might include ‘mining’ the text with computer software to draw conclusions and mix it with other work, distributing translations of the text, or selling republished versions.  In the parlance of Creative Commons, a non-profit organization based in Mountain View, California, this is the CC-BY licence (where BY indicates that credit must be given to the author of the original work).

This demand goes further than any other funding agency has dared. The UK’s Wellcome Trust, for example, demands a CC-BY license when it is paying for a paper’s publication — but does not require it for the archived version of a manuscript published in a paywalled journal. Indeed, many researchers actively dislike the thought of allowing such liberal re-use of their work, surveys have suggested. But Gates Foundation spokeswoman Amy Enright says that “author-archived articles (even those made available after a 12-month delay) will need to be available after the 12 month period on terms and conditions equivalent to those in a CC-BY license.”

Most non-OA publishers do not permit authors to apply a CC-BY license to their archived, open manuscripts. Nature, for example, states that openly archived manuscripts may not be re-used for commercial purposes. So do the American Association for the Advancement of Science, Elsevier, Wiley and many other publishers (in relation to their non-OA journals).

“It’s a major change. It would be major if publishers that didn’t previously use CC-BY start to use it, even for the subset of authors funded by the Gates Foundation. It would be major if publishers that didn’t previously allow immediate or unembargoed OA start to allow it, again even for that subset of authors. And of course it would be major if some publishers refused to publish Gates-funded authors,” says Peter Suber, director of the Office for Scholarly Communication at Harvard University in Cambridge, Massachusetts.

“You could say that Gates-funded authors can’t publish in journals that refuse to use CC-BY. Or you could say that those journals can’t publish Gates-funded authors. It may look like a stand-off but I think it’s the start of a negotiation,” Suber adds — noting that when the NIH’s policy was announced in 2008, many publishers did not want to accommodate all its terms, but now all do.

That said, the Gates Foundation does not leave as large a footprint in the research literature as the NIH. It only funded 2,802 research articles in 2012 and 2013, Enright notes; 30% of these were published in open access journals. (Much of the charity’s funding goes to development projects, rather than to research which will be published in journals).

The Gates Foundation also is not clear on how it will enforce its mandate; many researchers are still resistant to the idea of open data, for instance. (And most open access mandates are not in fact strictly enforced; only recently have the NIH and the Wellcome Trust begun to crack down). But Enright says the charity will be tracking what happens and will write to non-compliant researchers if needs be. “We believe that the foundation’s Open Access Policy is in alignment with current practice and trends in research funded in the public interest.  Hence, we expect that the policy will be readily understood, adopted and complied with by the researchers we fund,” she says.

by Richard Van Noorden at November 21, 2014 06:39 PM

Sean Carroll - Preposterous Universe

Guest Post by Alessandra Buonanno: Nobel Laureates Call for Release of Iranian Student Omid Kokabee

Usually I start guest posts by remarking on what a pleasure it is to host an article on the topic being discussed. Unfortunately this is a sadder occasion: protesting the unfair detention of Omid Kokabee, a physics graduate student at the University of Texas, who is being imprisoned by the government of Iran. Alessandra Buonanno, who wrote the post, is a distinguished gravitational theorist at the Max Planck Institute for Gravitational Physics and the University of Maryland, as well as a member of the Committee on International Freedom of Scientists of the American Physical Society. This case should be important to everyone, but it's especially important for physicists to work to protect the rights of students who travel from abroad to study our subject.


Omid Kokabee was arrested at the airport of Teheran in January 2011, just before taking a flight back to the University of Texas at Austin, after spending the winter break with his family. He was accused of communicating with a hostile government and after a trial, in which he was denied contact with a lawyer, he was sentenced to 10 years in Teheran’s Evin prison.

According to a letter written by Omid Kokabee, he was asked to work on classified research, and his arrest and detention was a consequence of his refusal. Since his detention, Kokabee has continued to assert his innocence, claiming that several human rights violations affected his interrogation and trial.

Since 2011, we, the Committee on International Freedom of Scientists (CIFS) of the American Physical Society, have protested the imprisonment of Omid Kokabee. Although this case has received continuous support from several scientific and international human rights organizations, the government of Iran has refused to release Kokabee.

Omid Kokabee

Omid Kokabee has received two prestigious awards:

  • The American Physical Society awarded him the Andrei Sakharov Prize “For his courage in refusing to use his physics knowledge to work on projects that he deemed harmful to humanity, in the face of extreme physical and psychological pressure.”
  • The American Association for the Advancement of Science awarded Kokabee the Scientific Freedom and Responsibility Prize.

Amnesty International (AI) considers Kokabee a prisoner of conscience and has requested his immediate release.

Recently, the Committee of Concerned Scientists (CCS), AI and CIFS, have prepared a letter addressed to the Iranian Supreme Leader Ali Khamenei asking that Omid Kokabee be released immediately. The letter was signed by 31 Nobel-prize laureates. (An additional 13 Nobel Laureates have signed this letter since the Nature blog post. See also this update from APS.)

Unfortunately, Kokabee's health deteriorated last month and he has been denied proper medical care. In response, the President of APS, Malcolm Beasley, has written a letter to the Iranian President Rouhani calling for a medical furlough for Omid Kokabee so that he can receive proper medical treatment. AI has also taken further steps and has requested urgent medical care for Kokabee.

Very recently, Iran's supreme court has nullified the original conviction of Omid Kokabee and has agreed to reconsider the case. Although this is positive news, it is not clear when the new trial will start. Considering Kokabee's health conditions, it is very important that he is granted a medical furlough as soon as possible.

More public engagement and awareness is needed to resolve this unacceptable case of violation of human rights and freedom of scientific research. You can help by tweeting/blogging about it and responding to this Urgent Action that AI has issued. Please note that the date on the Urgent Action is there to create an avalanche effect; it is not a deadline, nor is it the end of the action.

Alessandra Buonanno for the American Physical Society’s Committee on International Freedom of Scientists (CIFS).

by Sean Carroll at November 21, 2014 05:12 PM

Lubos Motl - string vacua and pheno

An evaporating landscape? Possible issues with the KKLT scenario
By Dr Thomas Van Riet, K.U. Leuven, Belgium

What is this blog post about?

In 2003, in a seminal paper by Kachru, Kallosh, Linde and Trivedi (KKLT) (2000+ cites!), a scenario for constructing a landscape of de Sitter vacua in string theory with small cosmological constant was found. This paper was (and is) conceived as the first evidence that the string theory landscape contains a tremendous amount of de Sitter vacua (not just anti-de Sitter vacua) which could account for the observed dark energy.

The importance of this discovery should not be underestimated since it profoundly changed the way we think about how a fundamental, UV-complete theory of all interactions addresses apparent fine-tuning and naturalness problems we are faced with in high energy physics and cosmology. It changed the way we think string theory makes predictions about the low-energy world that we observe.




It is fair to say that, since the KKLT paper, the multiverse scenario and all of its related emotions have been discussed at full intensity, have even been taken up by the media, and have sparked some (unsuccessful) attempts to classify string theory as non-scientific.

In this post I briefly outline the KKLT scenario and highlight certain aspects that are not often described in reviews but are crucial to the construction. Secondly, I describe research done since 2009 that casts doubt on the consistency of the KKLT scenario. I have tried to be as unbiased as possible, but near the end of this post I have taken the liberty of giving a personal view on the matter.




The KKLT construction

The main problem of string phenomenology at the time of the KKLT paper was the so-called moduli-stabilisation problem. The string theory vacua that were constructed before the flux-revolution were vacua that, at the classical level, contained hundreds of massless scalars. Massless scalars are a problem for many reasons that I will not go into. Let us stick to the observed fact that they are not there. Obviously quantum corrections will induce a mass, but the expected masses would still be too low to be consistent with observations and various issues in cosmology. Hence we needed to get rid of the massless scalars. This is where fluxes come into the story since they provide a classical mass to many (but typically not all) moduli.

The above argument that masses due to quantum corrections are too low is not entirely solid. What is really the problem is that vacua supported solely by quantum corrections are not calculable. This is called the Dine-Seiberg problem and it roughly goes as follows: if quantum corrections are strong enough to create a meta-stable vacuum we necessarily are in the strong coupling regime and hence out of computational control. Fluxes evade the argument because they induce a classical piece of energy that can stabilize the coupling at a small value. Fluxes are used mainly as a tool for computational control, to stay within the supergravity approximation.

Step 1: fluxes and orientifolds

Step 1 in the KKLT scenario is to start from the classical IIB solution often referred to as GKP (1400+ cites; see also this paper). What Giddings, Kachru and Polchinski did was to construct compactifications of IIB string theory (in the supergravity limit) down to 4-dimensional Minkowski space using fluxes and orientifolds. Orientifolds are specific boundary conditions for strings that are different from Dirichlet boundary conditions (which would be D-branes). The only thing that is required for understanding this post is to know that orientifolds are like D-branes but with negative tension and negative charge (anti D-brane charge). GKP understood that Minkowski solutions (SUSY and non-SUSY) can be built from balancing the negative energy of the orientifolds \(T_{{\rm O}p}\) against the positive energy of the 3-form fluxes \(F_3\) and \(H_3\):\[

V = H_3^2 + F_3^2 + T_{{\rm O}p} = 0

\] This scalar potential \(V\) is such that it does not depend on the sizes of the compact dimensions. Those sizes are then perceived as massless scalar fields in four dimensions. Many other moduli directions have gained a mass due to the fluxes and all those masses are positive such that the Minkowski space is classically stable.

The 3-form fluxes \(H_3\) and \(F_3\) carry D3 brane charges, as can be verified from the Bianchi identity for the five-form field strength \(F_5\)\[

\dd F_5 = H_3 \wedge F_3 + Q_3\delta

\] The delta-function on the right represents the D3/O3 branes that are really localised charge densities (points) in the internal dimensions, whereas the fluxes correspond to a smooth, spread-out charge distribution. Gauss' law tells us that a compact space cannot carry any net charge and consequently the charges in the fluxes have opposite sign to the charges in the localised sources.
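
To spell out the Gauss' law step schematically (normalisations suppressed): integrating the Bianchi identity over the compact internal space \(X_6\), the left-hand side vanishes because \(X_6\) has no boundary, leaving the tadpole condition\[

0 = \int_{X_6} \dd F_5 = \int_{X_6} H_3\wedge F_3 + Q_3^{\rm loc} \quad\Longrightarrow\quad Q_3^{\rm loc} = -\int_{X_6} H_3\wedge F_3 ,

\] so the localised D3/O3 charge must cancel the D3 charge dissolved in the fluxes.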

I want to stress the physics in the Bianchi identity. To a large extent one can think of the 3-form fluxes as a smeared configuration of actual D3 branes. Not only do they induce D3 charge, they also back-react on the metric because of their positive energy-momentum. We will see below that this is more than an analogy: the fluxes can even materialize into actual D3 branes.

This flux configuration is "BPS", in the sense that various ingredients exert no force on each other: the orientifolds have negative tension such that the gravitational repulsion between fluxes and orientifolds exactly cancels the Coulomb attraction. This will become an issue once we insert SUSY-breaking anti-branes (see below).

Step 2: Quantum corrections

One of the major breakthroughs of the KKLT paper (which I am not criticizing here) is a rather explicit realization of how the aforementioned quantum corrections stabilize all scalar fields in a stable Anti-de Sitter minimum that is furthermore SUSY. As expected, quantum corrections do give a mass to those scalar fields that were left massless at the classical level in the GKP solution. From that point of view it was not a surprise. The surprise was the simplicity, the level of explicitness, and most importantly, the fact that the quantum stabilization can be done in a regime where you can argue that other quantum corrections will not mess up the vacuum. Much of the original classical supergravity background is preserved by the quantum corrections since the stabilization occurs at weak coupling and large volume. Both coupling and volume are dynamical fields that need to be stabilized at self-consistent values, meaning small coupling and large (in string units) volume of the internal space. If this were not the case, then one would be too far outside the classical regime for the quantum perturbation to be the leading-order effect.
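
For concreteness (this is the standard textbook form of the KKLT model with a single Kähler modulus \(T\)), the quantum correction enters as a non-perturbative term in the superpotential,\[

W = W_0 + A\,e^{-aT}, \qquad K = -3\ln(T+\bar T),

\] where \(W_0\) is the constant left over from the fluxes and the exponential comes from gaugino condensation on D7 branes or from Euclidean D3 instantons. Solving \(D_T W=0\) fixes \(T\) at a large value provided \(|W_0|\) is small, and the resulting vacuum is a supersymmetric AdS minimum with \(V_{\rm AdS}=-3\,e^{K}|W|^2\).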

So what KKLT showed is exactly how the Dine-Seiberg problem can be circumvented using fluxes. But, in my opinion, something even more important was done at this step in the KKLT paper. Prior to KKLT one could not have claimed on solid grounds that string theory allows solutions that are perceived to an observer as four-dimensional. Probably the most crude phenomenological demand on a string theory vacuum remained questionable. Of course flux compactifications were known, for example the celebrated Freund-Rubin vacua like \(AdS_5\times S^5\) which were crucial for developing holography. But such vacua are not lower-dimensional in any phenomenological way. If we were to throw you inside the \(AdS_5\times S^5\) you would not see a five-dimensional space, but you would observe all ten dimensions.

KKLT had thus found the first vacua with all moduli fixed that have a Kaluza-Klein scale that is hierarchically smaller than the length-scale of the AdS vacuum. In other words, the cosmological constant in KKLT is really tiny.

But the cosmological constant was negative and the vacuum of KKLT was SUSY. This is where KKLT came with the second, and most vulnerable, insight of their paper: the anti-brane uplifting.

Step 3: Uplifting with anti-D3 branes

Let us go back to the Bianchi identity equation and the physics it entails. If one adds D3 branes to the KKLT background the cosmological constant does not change and SUSY remains unbroken. The reason is that D3 branes are BPS with respect to both the fluxes and the orientifold planes. Intuitively this is again clear from the no-force condition: D3 branes repel orientifolds gravitationally exactly as strongly as they attract them "electromagnetically", and vice versa for the fluxes (recall that the fluxes can be seen as a smooth D3 distribution). This also implies that D3 branes can be put at any position in the manifold without changing the vacuum energy: the energy in the brane tension is compensated by the decrease in flux energy required to satisfy the tadpole condition (Gauss' law).

Anti-D3 branes instead break SUSY. Heuristically that is straightforward since the no-force condition is violated. The anti-D3 branes can be drawn towards the non-dynamical O-planes without harm since they cannot annihilate with each other. The fluxes, however, are another story that I will get to shortly. The energy added by the anti-branes is twice the anti-brane tension \(T_{\overline{D3}}\): the gain in energy due to the addition of fluxes, required to cancel off the extra anti-D3 charges, equals the tension of the anti-brane. Hence we get\[

V_{\rm NEW} = V_{\rm SUSY} + 2 T_{\overline{D3}}

\] At first it seems that this new potential can never have a de Sitter critical point since \(T_{\overline{D3}}\) is of the order of the string scale (which is a huge amount of energy) whereas \(V_{\rm SUSY}\) was supposed to be a very tiny cosmological constant. One can verify that the potential has a runaway structure towards infinite volume. What comes to the rescue is space-time warping. Mathematically warping means that the space-time metric has the following form\[

\dd s_{10}^2 = e^{2A} \dd s_4^2 + \dd s_6^2

\] where \(\dd s_4^2\) is the metric of four-dimensional space, \(\dd s_6^2\) the metric on the compact dimensions (conformal Calabi-Yau, in case you care) and \(\exp(2A)\) is the warp-factor, a function that depends on the internal coordinates. A generic compactification contains warped throats, regions of space where the function \(\exp(A)\) can become exponentially small. This is often depicted using phallus-like pictures of warped Calabi-Yau spaces, such as the one below (taken from the KPV paper; I will come to KPV in a minute):



Consider some localized object with non-zero energy: in a region of high warping that energy is significantly red-shifted. For anti-branes the tension gets the following redshift factor\[

\exp(4A) T_{\overline{D3}}.

\] This can bring a string-scale energy all the way down to the lowest energy scales in nature. The beauty of this idea is that the redshift occurs dynamically; an anti-brane literally feels a force towards that region, since that is where its energy is minimized. So this redshift effect seems completely natural; one just needs a warped throat.
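
The size of the effect is calculable in the GKP setup: the standard estimate for the warp factor at the bottom of a Klebanov-Strassler throat is set by the flux quanta \(M\) and \(K\) threading the two 3-cycles of the throat,\[

e^{A_{\rm min}} \sim e^{-\frac{2\pi K}{3 g_s M}},

\] so modest integer fluxes already generate an exponentially large hierarchy. This is what makes it possible to dial the red-shifted anti-brane tension down to the scale of \(V_{\rm SUSY}\).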

The KKLT scenario then continues by observing that with a tunable warping, a new critical point in the potential arises that is a meta-stable de Sitter vacuum as shown in the picture below.



This was verified by KKLT explicitly using a Calabi-Yau with a single Kähler modulus.

The reason for the name uplifting then becomes obvious: near the critical point it indeed looks as if the potential is lifted by a constant to a de Sitter value. The lift is not exactly constant, but the dependence of the uplift term on the Kähler modulus is practically flat compared with the sharp SUSY part of the potential.
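
To see how flat the uplift really is, here is a minimal numerical sketch (not the authors' code) of the standard single-modulus KKLT potential with the uplift parametrized as \(D/\sigma^3\); the parameter values are illustrative, of the same order as in the published example:

# Single-modulus KKLT potential, sigma = Re T, with the axion at its minimum.
# All parameter values below are illustrative (roughly the ballpark of the
# original paper's example), not a fit to anything.
A <- 1; a <- 0.1; W0 <- -1e-4; D <- 3e-9
Vsusy <- function(s) a*A*exp(-a*s)/(2*s^2) * (s*a*A*exp(-a*s)/3 + W0 + A*exp(-a*s))
Vup   <- function(s) D/s^3                     # anti-brane uplift term
V     <- function(s) Vsusy(s) + Vup(s)
curve(V(x), from = 90, to = 250, n = 2000,
      xlab = "sigma (volume modulus)", ylab = "V(sigma)")
optimize(V, interval = c(100, 140))            # should locate a shallow dS minimum near sigma ~ 115

Over this range the uplift term varies far more slowly than the steep SUSY part, which is why the lift looks like adding a near-constant.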

I am glossing over many issues, such as the stability of the other directions, but all of this seems under control (the arguments are based on a parametric separation between the complex structure moduli masses and the masses of the Kähler moduli).

The KKLT scrutiny

The issues with the KKLT scenario that have been discussed in the last five years have to do with back-reaction. As mentioned earlier, the no-force condition becomes violated once we insert the anti-D3 branes. Given the physical interpretation of the 3-form fluxes as a cloud of D3 branes, you can guess what the qualitative behavior of the back-reaction is: the fluxes are drawn gravitationally and electromagnetically towards the anti-branes, leading to a local increase of the 3-form flux density near the anti-brane.

Although the above interpretation was not given, this effect was first found in 2009 independently by Bena, Grana and Halmagyi in Saclay (France) and by McGuirk, Shiu and Sumitomo in Madison (Wisconsin, USA). These authors constructed the supergravity solution that describes a back-reacting anti-brane. This would clearly be an impossible job, were it not for three simplifying assumptions:
  • They put the anti-brane inside the non-compact warped Klebanov-Strassler (KS) throat, since that is the canonical example of a throat in which computations are doable. This geometry consists of a radial coordinate measuring the distance from the tip and five angles that span a manifold which is topologically \(S^2\times S^3\). The non-compactness implies that one can circumvent the use of the quantum corrections of KKLT to have a space-time solution in the first place. Non-compact geometries work differently from compact ones: for example, the energy of the space-time (ADM mass) does not need to affect the cosmological constant of the 4D part of the metric. Roughly, this is because there is no volume modulus that needs to be stabilized. In the end one should "glue" the KS throat, at large distance from the tip, onto a compact Calabi-Yau orientifold.

  • The second simplification was to smear the anti-D3 branes over the tip of the throat. This means that the solution describes anti-D3's homogeneously distributed over the tip. In practice this implies that the supergravity equations of motion become a (large) set of coupled ODE's.

  • These two papers solved the ODE's approximately: They treated the anti-brane SUSY breaking as small and expanded the solution in terms of a SUSY-breaking parameter, keeping the first terms in the expansion.
Even with these assumptions it was an impressive task to solve the ODE's. In this task the Saclay paper was the more careful one in connecting the solution at small radius to the solution at large radius. In any case these two papers found the same result, which was unexpected at the time: the 3-form flux density diverges at the tip of the throat. More precisely, the following scalar quantity blows up at the tip:\[

H_3^2 \to \infty.

\] (I am ignoring the string coupling in all equations.) Diverging fluxes near brane sources are rather mundane (a classical electron has a diverging electric field at its position). But the real reason for worry is that this singularity is not in the field sourced directly by the brane: that would be the \(F_5\) field strength, which indeed blows up as well, and harmlessly so. The divergence sits in the 3-form fluxes, which the anti-D3 brane does not source.

In light of the physical picture I outlined above, this divergence is not that strange to understand. The D3 charges in the fluxes are being pulled towards the anti-D3 branes where they pile up. The sign of the divergence in the 3-form fluxes is indeed that of a D3 charge density and not anti-D3 charge density.

Whenever a supergravity solution has a singularity one has to accept that one is outside of the supergravity approximation and full-blown string theory might be necessary to understand it. And I agree with that. But still singularities can — and should — be interpreted and the interpretation might be sufficient to know or expect that stringy corrections will resolve it.

So what was the attitude of the community when these papers came out? As I recall it, the majority of string cosmologists are not easily woken up, and most experts who took the time to form an opinion believed that the three assumptions above (especially the last two) were the reason for the singularity. To cut a long story short (and to painfully pass over my own work on showing this was wrong), it is now proven that the same singularity is still there when the assumptions are undone. The full proof was presented in a paper that gets too little love.

So what was the reaction of the few experts that still cared to follow this? They turned to an earlier suggestion by Dymarsky and Maldacena that the real KKLT solution is not described by anti-D3 branes at the tip of the throat but by spherical 5-branes that carry anti-D3 charges (a.k.a. the Myers effect). This, they argued (hoped?), would resolve the singularity. In fact, a careful physicist could have predicted some singularity based on the analogy with other string theory models of 3-branes and 3-form fluxes. Such solutions often come with singularities that are only resolved when the 3-branes polarise. But such singularities can be of any form. The fact that this one so nicely corresponds to a diverging D3 charge density should not be ignored — and it too often is.

So, again, I agree that the KKLT solution should really contain 5-branes instead of 3-branes, and I will discuss this below. But before I do, let me mention a very solid argument for why this also seems not to help.

If indeed the anti-D3 branes "puff up" into fuzzy spherical 5-branes leading to a smooth supergravity solution, then one should be able to "heat up" the solution. Putting gravity solutions at finite temperature means adding an extra warp-factor in front of the time-component of the metric that creates an event horizon at a finite distance. In a well-known paper by Gubser it was argued that this provides us with a classification of acceptable singularities in supergravity. If a singularity can be cloaked by a horizon by adding sufficient temperature, it has a chance of being resolved by string theory. The logic behind this is simple but really smart: if there is some stringy physics that resolves a sugra singularity, one can still heat up the branes that live at the singularity. One can then add so much temperature that the horizon literally becomes parsecs in length, such that the region at and outside the horizon becomes amenable to classical sugra and should be smooth. Here is the surprise: that doesn't work. In a recent paper, the techniques of arXiv:1301.5647 were extended to include finite temperature, and what happens is that the diverging flux density simply tracks the horizon: it does not want to fall inside. The metric Ansatz that was used to derive this no-go theorem is compatible with spherical 5-branes inside the horizon. So it seems difficult to evade this no-go theorem.

The reaction so far from the community, apart from a confused referee report, has been silence.

But still, let us go back to zero temperature, since there is some beautiful physics taking place. I said earlier that the true KKLT solution should include 5-branes instead of anti-D3 branes. This was described prior to KKLT in a beautiful paper by Kachru, Pearson and Verlinde, called KPV (again the same letter 'K'). The KPV paper is both the seed and the backbone of the KKLT paper and the follow-up papers, like KKLMMT, but for some obscure reason it is less cited. KPV investigated the "open-string" stability of probe anti-D3 branes placed at the tip of the KS throat. They realised that the 3-form fluxes can materialize into actual D3 branes that annihilate the anti-D3 branes, which implies a decay to the SUSY vacuum. But they found that this materialization of the fluxes occurs non-perturbatively if the anti-brane charge \(p\) is small enough\[

\frac{p}{M} \ll 1.

\] In the above equation \(M\) denotes a 3-form flux quantum that sets the size of the tip of the KS throat. The beauty of this paper resides in the fact that they understood how the brane-flux annihilation takes place, but I necessarily have to gloss over the details, so you will not really be able to follow them unless you already know the paper. In any case, here it comes: the anti-D3 brane polarizes into a spherical NS5 brane wrapping a finite contractible 2-sphere inside the 3-sphere at the tip of the KS throat, as in the picture below:



One can show that this NS5 brane carries \(p\) anti-D3 charges at the South pole and \(M-p\) D3 charges at the North pole. So if it is able to move over the equator from the South to the North pole, the SUSY-breaking state decays into the SUSY vacuum: recall that the fluxes have materialized into \(M\) D3 branes that annihilate with the \(p\) anti-D3 branes, leaving \(M-p\) D3 branes behind in the SUSY vacuum. But what pushes the NS5 to the other side? That is exactly the 3-form flux \(H_3\). This part is easy to understand: an NS5 brane is magnetically charged with respect to the \(H_3\) field strength. In the probe limit KPV found that this force is small enough to create a classical barrier if \(p\) is small enough. So we get a meta-stable state, nice and very beautiful. But what would they have thought if they could have looked into the future to see that the same 3-form flux that pushes the NS5 brane diverges in the back-reacted solution? Not sure, but I cannot resist quoting a sentence from their paper:
One foreseeable quantitative difference, for example, is that the inclusion of the back-reaction of the NS5 brane might trigger the classical instabilities for smaller values of \(p/M\) than found above.
It should be clear that this brane-flux mechanism suggests a trivial way to resolve the singularity. The anti-brane is thrown into the throat and starts to attract the flux, which keeps on piling up until it becomes too strong, causing the flux to annihilate with the anti-brane. Then the flux pile-up stops, since there is no anti-brane anymore. At no point does this time-dependent process lead to a singular flux density. The singularity was just an artifact of forcing an intrinsically time-dependent process into a static Ansatz. This idea is explained in two papers: arXiv:1202.1132 and arXiv:1410.8476.
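
To make the charge bookkeeping behind this annihilation concrete (a schematic rewriting of the statement above, with numerical factors suppressed): an NS5 brane wrapping the 2-sphere that sits at polar angle \(\psi\) inside the tip's 3-sphere carries a D3 charge\[

Q_{D3}(\psi) = -\,p + \frac{M}{\pi}\Big(\psi - \sin\psi\cos\psi\Big),

\] which equals \(-p\) at the South pole (\(\psi=0\)) and \(M-p\) at the North pole (\(\psi=\pi\)). Sliding the NS5 across the equator is therefore precisely the flux materialising into \(M\) D3 branes that annihilate the \(p\) anti-branes.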

I am often asked whether a probe computation can ever fail, apart from being slightly corrected. I am not sure, but what I do know is that KPV do not really have a trustworthy probe regime: for reasons explained in the KPV paper, they have to work in the strongly coupled regime, and they furthermore have a spherical NS5 brane wrapping a cycle of stringy length scale, which is also worrisome.

Still, one can argue that the NS5 brane back-reaction will be slightly different from the anti-D3 back-reaction, in exactly such a way as to resolve the divergence. I am sympathetic to this (if one ignores the trouble with the finite temperature, which one cannot ignore). However, once again computations suggest this does not work. Here I will go even faster, since this guest blog is getting lengthy.

This issue has been investigated in papers such as arXiv:1212.4828, where it was shown, under certain assumptions, that the polarisation does not occur in a way that resolves the divergence. Note that, as in the finite-temperature situation, the calculation could have worked in favor of the KKLT model, but it did not! At the moment I am working on brane models which have exactly the same 3-form singularity but are conceptually different, since the 4D space is AdS and SUSY is not broken. In that circumstance the same kind of singularity does get resolved through brane polarisation. My point is that the intuition for how the singularity should get resolved does work in certain cases, but so far it does not work for the models relevant to KKLT.

What is the reaction of the community? Well, they are cornered into saying that it is the simplifications made in the derivation of the 'no polarisation' result that are causing trouble.

But wait a minute... could it perhaps be that at this point in time the burden of proof has shifted? Apparently not, and that, in my opinion, starts becoming very awkward.

It is true that there is still freedom for the singularity to be resolved through brane polarisation. There is just one issue with that: to be able to compute this in a supergravity regime requires tuning parameters away from the small \(p\) limit. Bena et al. have pushed this idea recently in arXiv:1410.7776 and were so kind as to assume the singularity gets resolved, but they found that the vacuum is then necessarily tachyonic. It can be argued that this is obvious, since they necessarily had to move away from the limit KPV want for stability (remember \(p\ll M\)). But then again, the tachyon they find has nothing to do with a perturbative brane-flux annihilation. Once again we have a situation in which an honest-to-God computation could have turned out in favor of KKLT; it did not.

Here comes the bias of this post: were it not for a clear physical picture behind the singularity, I would be less surprised that there is a camp that is not too worried about the consistency of KKLT. But there is a clear picture with trivial intuition, which I already alluded to: the singularity, when left unresolved, indicates that the anti-brane is perturbatively unstable, and once you realise that, the singularity is resolved by allowing the brane to decay. At least I hope the intuition behind this interpretation was clear. It simply uses the fact that a higher charge density in the fluxes (near the anti-D3) increases the probability for the fluxes to materialize into actual D3 branes that eat up the anti-branes. KPV told us exactly how this process occurs: the spherical NS5 brane should not feel too strong a force pulling it towards the other side of the sphere. But that force is proportional to the density of the 3-form fluxes... and it diverges. End of story.

What now?

I guess that at some point these "anti-KKLT" papers will stop being produced, as their producers will run out of ideas for computations that probe the stability of the would-be KKLT vacuum. If evidence in favor of KKLT is found in that endeavor, I can assure you that it will be published as such. It just has not happened thus far.

We are facing the following problem: to fully settle the discussion, computations outside the sugra regime have to be done (although I believe that the finite temperature argument suggests that this will not help). Were fluxes not invented to circumvent this? It seems that the anti-brane back-reaction brings us back to the Dine-Seiberg problem.

So we are left with a bunch of arguments against what is/was a beautiful idea for constructing dS vacua. The arguments against are an order of rigor higher than the original models. I guess we now need an extra level of rigor on top of that from those who want to keep using the original KKLT model.

What about alternative de Sitter embeddings in string theory? Lots of hard work has been done there. Let me do it an injustice by summarizing it as follows: none of these models are convincing, to me at least. They are either borderline within the supergravity regime, or we do not know whether the supergravity description is trustworthy at all (as with non-geometric fluxes). Very popular are F-term quantum corrections to the GKP vacuum, which are used to stabilize the moduli in a dS vacuum. But none of this is done from the full 10D point of view; instead it sits somewhere between 4D effective field theory and 10D. KKLT at least had a full 10-dimensional picture of uplifting, and that is why it can be scrutinized.



It seems as if string theory is allergic to de Sitter vacua. Consider the following: any grad student can find an anti-de Sitter solution in string theory. Why not de Sitter? All claimed de Sitter solutions are rather phenomenological, in the sense that the cosmological constant is small compared with the KK scale. I guess we had better first try to find unphysical dS vacua, say a six-dimensional de Sitter solution with a large cosmological constant. But we cannot, or at least nobody has ever done it. Strange, right? Many say: "you just have to work harder". That 'harder' always implies 'less explicit', and then suddenly a landscape of de Sitter vacua opens up. I seriously doubt that; maybe it just means we are sweeping problems under the carpet of effective field theory?

I hope I have been able to convince you that the search for de Sitter vacua is tough if you want to do it truly top-down. The most popular construction method, KKLT anti-brane uplifting, comes with a surprise: a singularity in the form of a diverging flux density. It has so far persistently survived all attempts to resolve it. This divergence is, however, resolved when you are willing to accept that the de Sitter vacuum is not meta-stable but instead a solution with decaying vacuum energy. Does string theory want to tell us something deep about quantum gravity?

by Luboš Motl (noreply@blogger.com) at November 21, 2014 04:49 PM

CERN Bulletin

CERN Bulletin Issue No. 47-48/2014
Link to e-Bulletin Issue No. 47-48/2014
Link to all articles in this issue

November 21, 2014 11:14 AM

astrobites - astro-ph reader's digest

A New Way with Old Stars: Fluctuation Spectroscopy

Astronomers use models to derive properties of individual stars that we cannot directly observe, such as mass, age, and radius. This is also the case for a group of stars (a galaxy or a star cluster). How do we test how accurate these models are? Well, we compare model predictions against observations. One problem with current stellar population models is that they remain untested for old populations of stars (because they are rare). These old stars are important because they produce most of the light from massive elliptical galaxies. So a wrong answer from the model means a wrong answer on various properties of massive elliptical galaxies, such as their age and metallicity. (Houston, we have a problem.)

Fear not — this paper introduces fluctuation spectroscopy as a new way to test stellar population models for elliptical galaxies. It focuses on a group of stars known as red giants, stars nearing the end of their lives. The spectra of red giants have features (TiO and water molecular bands) that can be used to obtain the chemical abundances, age, and initial mass function (IMF) of a galaxy. Red giants are very luminous. For instance, once our beloved Sun grows into old age as a red giant, it will be thousands of times more luminous than today. As such, red giants dominate the light of an early-type galaxy (another name for an elliptical galaxy). By looking at an image of an early-type galaxy, we can infer that bright pixels contain more red giants than faint pixels. Figure 1 illustrates this effect. Intensity variations from pixel to pixel are due to fluctuations in the number of red giants. By comparing the spectra of pixels with different brightness, one can isolate the spectral features of red giants. Astronomers can then analyse these spectral features to derive galaxy properties to be checked against model predictions.


FIG. 1 – Top left figure shows a model elliptical galaxy based on observation of NGC 4472. The right figure zooms in on a tiny part of the galaxy, and shows the pixel-to-pixel brightness variations within that tiny region. Figures on the bottom panel further zoom in on a bright (white) and a faint (black) pixel. The bright pixel (bottom left) contains many more bright red giant stars, represented as red dots, compared to the faint pixel (bottom right). The inset figures are color versus magnitude diagrams of the stars in these pixels, where there are more luminous giant stars (open circles) in the bright pixel.

The authors applied fluctuation spectroscopy to NGC 4472, the brightest galaxy in the Virgo cluster. They obtained images of the galaxy at six different wavelengths using narrow-band filters (filters that allow only a few wavelengths of light, or emission lines, to pass through; see this or this) in the Advanced Camera for Surveys aboard the Hubble Space Telescope. In addition, they acquired deep broad-band images (images obtained using broad-band filters that let a large portion of the light through) of the galaxy. These broad-band images, because of their high signal-to-noise compared to the narrow-band images (broad-band images receive more light and so have higher signals), are used to measure the flux in each pixel and hence how the brightness changes from pixel to pixel. Next, the authors divided images taken in two adjacent narrow-band filters by one another. Recall that since narrow-band filters allow only certain emission lines to get through, the ratio of fluxes in two narrow-band filters (an “index image”) is a proxy for the distribution of stellar types in each pixel, because different stars produce different emission lines. The money plot of this paper, Figure 2, shows the relation between the averaged indices of the index images and the surface brightness fluctuation; it illuminates the fact that pixels with more red giants (larger SBF) produce a different spectrum (different index values) than pixels with fewer giants (lower SBF).
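
As a toy illustration of why the index should track the brightness fluctuations (this is a sketch with made-up numbers, not the authors' pipeline), one can simulate pixels whose light is a smooth dwarf component plus a Poisson-fluctuating number of giants that emit relatively more in one of two bands:

# Toy model: each pixel = smooth dwarf light + a Poisson number of red giants.
# The giants contribute relatively more flux in band B than in band A, so the
# ratio B/A (the "index") rises in pixels that happen to contain more giants.
# All numbers are invented for illustration.
set.seed(42)
npix     <- 10000
dwarf_A  <- 100; dwarf_B <- 100          # smooth dwarf light per pixel, per band
giant_A  <- 1.0; giant_B <- 2.0          # flux of one giant in each band
n_giants <- rpois(npix, lambda = 20)     # number of giants fluctuates from pixel to pixel
flux_A   <- dwarf_A + n_giants * giant_A
flux_B   <- dwarf_B + n_giants * giant_B
sbf      <- (flux_A + flux_B) / mean(flux_A + flux_B)   # brightness relative to the mean
index    <- flux_B / flux_A                             # toy "index image" value
cor(sbf, index)                                         # pixels with more giants are brighter and redder
plot(sbf, index, pch = ".", xlab = "relative brightness (SBF)", ylab = "B/A index")

Brighter pixels host more giants and therefore a more giant-dominated spectrum, which is the effect the real index images isolate.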

By fitting the observed index variations with models, we can obtain a predicted spectrum. The authors compared the observed index variations of NGC 4472 with modeled index variations derived from the Conroy & van Dokkum (2012) stellar population synthesis models, shown in Figure 3; the models perform well in characterizing the galaxy.

The last thing the authors analysed is the effect of changing model parameters on the indices of the index images, in particular by varying the age, the metallicity, and the IMF. They found that the indices are sensitive to age and metallicity, thereby enabling them to exclude models that produce ages and metallicities incompatible with the observations. One interesting result is that the indices are also sensitive to the presence of late M giant stars, which allows one to constrain their contribution to the total light from a galaxy. This is useful because standard stellar population synthesis models for early-type galaxies do not include these cool giants.

In conclusion, the authors introduced fluctuation spectroscopy as a probe of stellar type distributions in old populations. They applied this method to NGC 4472 and found that results of observation agree very well with model predictions. Various perturbations are introduced into the model with the most important result being that one can quantify the contribution of late M giants to the integrated light of early-type galaxies. Before ending, the authors propose directions for future work, which include obtaining actual spectra rather than narrow-band images and studying larger ranges of surface brightness fluctuations.


FIG. 2 – The vertical axis is the flux ratio between a narrow-band filter and the adjacent band; it is a measure of the mix of different stars present. The horizontal axis is the surface brightness fluctuation, SBF. SBF = 1 is the mean, while SBF < 1 represents little fluctuation and SBF > 1 represents high fluctuation. There is a trend between index and SBF because red giants produce a larger-than-average brightness and a different spectrum, which changes the indices of the index images.


FIG. 3 – The top panel compares observed indices (dots) of NGC 4472 with model indices (lines). The vertical and horizontal axes are the same as Figure 2. The bottom panel shows the differences between observed and predicted indices. These figures suggest that model predictions agree amazingly well with observations.

 

by Suk Sien Tie at November 21, 2014 06:50 AM

November 20, 2014

astrobites - astro-ph reader's digest

Real-Time Stellar Evolution


Images of four planetary nebulae taken by the Hubble Space Telescope using a narrow Hα filter. All of these feature hydrogen-rich central stars.

To get an idea of how stars live and die, we can’t just pick one and watch its life unfold in real time. Most stars live for billions of years! So instead, we do a population census of sorts. Much like you can study how humans age by taking a “snapshot” of individuals ranging from newborn to elderly, so too can we study the lives of stars.

But like all good things in life (and stars), there are exceptions. Sometimes, stellar evolution happens on more human timescales—tens to hundreds of years rather than millions or billions. One such exception is the topic of today’s paper: planetary nebulae, and the rapidly dying stellar corpses responsible for all that glowing gas.

All stars similar to our Sun, or up to about eight times as massive, will end their lives embedded in planetary nebulae like these. The name is a holdover from their discovery and general appearance—we have long known that planetary nebulae have nothing to do with planets. Instead, they are the former outer layers of a star: an envelope of material hastily ejected when gravity can no longer hold a star together. In its final death throes, what’s left of the star rapidly heats up and begins to ionize gas in the nebula surrounding it.

A Deathly Glow

Ionized gas is the telltale sign that the central star in a planetary nebula isn’t quite done yet. When high-energy light from a dying star rams into gas in its planetary nebula, some atoms of gas are so energized that electrons are torn from their nuclei. Hotter central stars emit more light, making the ionized gas glow brighter. This final stage of stellar evolution is what the authors of today’s paper observe in real time for a handful of planetary nebulae.

Most planetary nebulae show increasing oxygen emission with time as the central star heats up and ionizes gas in the nebula. The stars are classified into one of three categories based on their spectra. Points indicate the average change in oxygen emission per year, and dashed lines show simple stellar evolution models for stars with final masses between 0.6 and 0.7 times that of the Sun.

The figure above shows how oxygen emission in many planetary nebulae has changed brightness over time. Each point represents data spanning at least ten years and brings together new observations with previously published values in the literature. Distinct symbols assign each star to one of three categories: stars with lots of hydrogen in their spectra (H rich), Wolf-Rayet ([WR]) stars with many emission lines in their spectra (indicating lots of hot gas very close to the star), and weak emission line stars (wels). The fact that most stars show an increase in planetary nebula emission—the stars are heating up—agrees with our expectations.


Oxygen emission flux as a function of time for three planetary nebulae over 30+ years. The top two systems, M 1-11 and M 1-12, have hydrogen-rich stars that cause increasing emission as expected. The bottom pane, SwSt 1, shows a Wolf-Rayet star with a surprising decreasing trend.

The earliest observation in this study is from 1978. Spectrographs and imaging techniques have improved markedly since then! While some changes in flux are from different observing techniques, the authors conclude that at least part of each flux increase is real. What’s more, hydrogen-rich stars seem to agree with relatively simple evolution models, shown as dashed lines on the figure above. (Stars move toward the right along the lines as they evolve.) More evolved stars cause oxygen in the nebula to glow ever brighter, but the rate of increase in oxygen emission slows as the star ages and loses fuel.

There’s Always an Oddball

However, the authors find that some planetary nebulae don’t behave quite as consistently. None of the more evolved Wolf-Rayet systems show increasing emission with time. In fact, one of them, in the bottom pane of the figure to the right, shows a steady decline in oxygen emission! This suggests the hot gas closest to the star may be weakening even as the star is getting hotter, but it is not fully understood.

This unique glimpse into real-time stellar evolution is possible because so many changes happen to a star as it nears the end of its life. Eventually, these hot stellar remnants will become white dwarfs and slowly cool for eternity. Until then, not-dead-yet stars and their planetary nebulae have lots to teach us.

by Meredith Rawls at November 20, 2014 07:07 PM

Symmetrybreaking - Fermilab/SLAC

CERN frees LHC data

Anyone can access collision data from the Large Hadron Collider through the new CERN Open Data Portal.

Today CERN launched its Open Data Portal, which makes data from real collision events produced by LHC experiments available to the public for the first time.

“Data from the LHC program are among the most precious assets of the LHC experiments, that today we start sharing openly with the world,” says CERN Director General Rolf Heuer. “We hope these open data will support and inspire the global research community, including students and citizen scientists.”

The LHC collaborations will continue to release collision data over the coming years.

The first high-level and analyzable collision data openly released come from the CMS experiment and were originally collected in 2010 during the first LHC run. Open source software to read and analyze the data is also available, together with the corresponding documentation. The CMS collaboration is committed to releasing its data three years after collection, after they have been thoroughly studied by the collaboration.

“This is all new and we are curious to see how the data will be re-used,” says CMS data preservation coordinator Kati Lassila-Perini. “We’ve prepared tools and examples of different levels of complexity from simplified analysis to ready-to-use online applications. We hope these examples will stimulate the creativity of external users.”

In parallel, the CERN Open Data Portal gives access to additional event data sets from the ALICE, ATLAS, CMS and LHCb collaborations that have been prepared for educational purposes. These resources are accompanied by visualization tools.

All data on OpenData.cern.ch are shared under a Creative Commons CC0 public domain dedication. Data and software are assigned unique DOI identifiers to make them citable in scientific articles. And software is released under open source licenses. The CERN Open Data Portal is built on the open-source Invenio Digital Library software, which powers other CERN Open Science tools and initiatives.


CERN published a version of this article as a press release.

 

Like what you see? Sign up for a free subscription to symmetry!

by Kathryn Jepsen at November 20, 2014 04:39 PM


arXiv blog

Twitter "Exhaust" Reveals Patterns of Unemployment

Twitter data mining reveals surprising detail about socioeconomic indicators but at a fraction of the cost of traditional data-gathering methods, say computational sociologists.


Human behaviour is closely linked to social and economic status. For example, the way an individual travels round a city is influenced by their job, their income and their lifestyle.

November 20, 2014 03:51 PM

Jester - Resonaances

Update on the bananas
One of the most interesting physics stories of this year was the discovery of an unidentified 3.5 keV x-ray emission line from galaxy clusters. This so-called bulbulon can be interpreted as a signal of a sterile neutrino dark matter particle decaying into an active neutrino and a photon. Some time ago I wrote about the banana paper that questioned the dark matter origin of the signal. Much has happened since, and I owe you an update. The current experimental situation is summarized in this plot:

To be more specific, here's what's happening.

  • Several groups searching for the 3.5 keV emission have reported negative results. One of those searched for the signal in dwarf galaxies, which offer a much cleaner environment allowing for a more reliable detection. No signal was found, although the limits do not conclusively exclude the original bulbulon claim. Another study looked for the signal in multiple galaxies. Again, no signal was found, but this time the reported limits are in severe tension with the sterile neutrino interpretation of the bulbulon. Yet another study failed to find the 3.5 keV line in the Coma, Virgo and Ophiuchus clusters, although they do detect it in the Perseus cluster. Finally, the banana group analyzed the morphology of the 3.5 keV emission from the Galactic center and Perseus and found it incompatible with dark matter decay.
  • The discussion about the existence of the 3.5 keV emission from the Andromeda galaxy is ongoing. The conclusions seem to depend on the strategy used to determine the continuum x-ray emission. Using data from the XMM satellite, the banana group fits the background in the 3-4 keV range and does not find the line, whereas this paper argues it is more kosher to fit in the 2-8 keV range, in which case the line can be detected in exactly the same dataset. It is not obvious who is right, although the fact that the significance of the signal depends so strongly on the background fitting procedure is not encouraging.
  • The main battle rages on around K-XVIII (X-n stands for the X atom stripped of n-1 electrons; thus, K-XVIII is the potassium ion with 2 electrons). This little bastard has emission lines at 3.47 keV and 3.51 keV which could account for the bulbulon signal. In the original paper, the bulbuline group invokes a model of plasma emission that allows them to constrain the flux due to the K-XVIII emission from the measured ratios of the strong S-XVI/S-XV and Ca-XX/Ca-XIX lines. The banana paper argued that the bulbuline model is unrealistic as it gives inconsistent predictions for some plasma line ratios. The bulbuline group pointed out that the banana group used wrong numbers to estimate the line emission strengths. The banana group maintains that their conclusions still hold when the error is corrected. It all boils down to the question of whether the allowed range for the K-XVIII emission strength assumed by the bulbuline group is conservative enough. Explaining the 3.5 keV feature solely by K-XVIII requires assuming element abundance ratios that are very different from the solar ones, which may or may not be realistic.
  • On the other hand, both groups have converged on the subject of chlorine. In the banana paper it was pointed out that the 3.5 keV line may be due to the Cl-XVII (hydrogen-like chlorine ion) Lyman-β transition, which happens to be at 3.51 keV. However, the bulbuline group subsequently derived limits on the corresponding Lyman-α line at 2.96 keV. From these limits one can deduce, in a fairly model-independent way, that the contribution of the Cl-XVII Lyman-β transition is negligible.

To clarify the situation we need more replies to comments on replies, and maybe also better data from future x-ray satellite missions. The significance of the detection depends, more than we'd wish, on dirty astrophysics involved in modeling the standard x-ray emission from galactic plasma. It seems unlikely that the sterile neutrino model with the originally reported parameters will stand, as it is in tension with several other analyses. The probability of the 3.5 keV signal being of dark matter origin is certainly much lower than a few months ago. But the jury is still out, and it's not impossible to imagine that more data and more analyses will tip the scales the other way.

Further reading: how to protect yourself from someone attacking you with a banana.


by Jester (noreply@blogger.com) at November 20, 2014 01:24 PM

Tommaso Dorigo - Scientificblogging

Extraordinary Claims: Review My Paper For $10
Bringing the concept of peer review to another dimension, I am offering you the chance to read a review article I just wrote. You are invited to contribute to its review by suggesting improvements, corrections, changes or amendments to the text. I sort of need some scrutiny of this paper since it is not a report of CMS results - and thus I have not been forced to submit it for internal review to my collaboration.

read more

by Tommaso Dorigo at November 20, 2014 11:32 AM

November 19, 2014

astrobites - astro-ph reader's digest

Could we detect signs of life on a massive super-Earth?

Super-Earths are the Starbucks of the modern world: you can find them everywhere, and while they're not exactly what you want, they're just good enough to satisfy your desire for something better. Super-Earths are not technically Earth-like since they are up to 10 Earth masses and have thick hydrogen (H2) atmospheres. However, they are rocky like Earth, they have an atmosphere like Earth, and if they are in the habitable zone, there is a good chance they could have liquid water like Earth. Case in point: they are just good enough.

Unfortunately, in the next 15 years, the only way we will be able to characterize a super-Earth is if it's orbiting an M-type star. Since M-type stars are smaller and dimmer than the Sun, the planets orbiting them need to be closer in so that they get enough warmth to sustain liquid water. Therefore, habitable zone planets around M-type stars could be observed in transit once every ~20 days, rather than once every year for an Earth twin. This bodes well for future missions that will try to characterize exoplanets, such as the James Webb Space Telescope (JWST).
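
To see roughly where the ~20 days comes from (illustrative numbers for a mid-M dwarf, say \(L \approx 0.01\,L_\odot\) and \(M \approx 0.3\,M_\odot\); the exact figures depend on the star): the Earth-equivalent insolation distance is \(a \approx \sqrt{L/L_\odot}\ {\rm AU} \approx 0.1\ {\rm AU}\), and Kepler's third law then gives an orbital period \(P \approx \sqrt{(a/{\rm AU})^3/(M/M_\odot)}\ {\rm yr} \approx \sqrt{0.001/0.3}\ {\rm yr} \approx 0.06\ {\rm yr}\), i.e. about three weeks.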

So, if super-Earths orbiting M-type stars are our best bet at characterization, it pays to think about what signs of life, or biosignatures, could hypothetically be detected in one of their atmospheres. Seager et al. investigate several biosignatures and aim to identify which are likely to build up to detectable levels in an H2-dominated super-Earth orbiting an M-type star.

Biosignatures and Photochemistry

To test the “build up” of any molecule, let’s say ABX, in an atmosphere, you need to know what molecular species are creating ABX and what molecular species or processes are destroying ABX. In the world of photochemistry, we refer to these as sources and sinks. The photochemical model that Seager et al. use includes 111 species, involved in 824 chemical reactions and 71 photochemical reactions. Dwell on that parameter space… A photochemical reaction occurs when a molecule absorbs a photon of light and is broken down into smaller components. We call this process photolysis and it can be a major sink for biosignatures, depending on how much UV flux the star is giving off. Let’s take Earth as an example.

Since oxygen, O2, is abundantly produced by life on Earth, it is one of Earth’s dominant biosignature gases. O2 is destroyed by photolysis when it interacts with, you guessed it, UV light. On Earth though, the UV radiation from our Sun isn’t that high, so O2 is free to build up in the atmosphere. If we were to increase the UV radiation Earth received, it is likely that O2 would all be destroyed and would cease being one of Earth’s dominant biosignature gases.

Because M stars might have a much higher UV flux than our Sun, it is uncertain how much UV flux a super-Earth orbiting an M star will receive. Therefore, in order to assess which biosignature gases will build up in an exoplanetary atmosphere orbiting an M star, we need to assess each biosignature gas's removal rate, or the rate at which the molecule is destroyed by photolysis or any other reaction.

The rate at which H, O, and OH destroy CH3Cl as a function of UV flux received from the parent star. The dashed lines represent the case of a 10% N2, 90% H2 atmosphere. The diamond and the circle show cases for an N2-dominated atmosphere and a present-day atmosphere, respectively. Main point: Removal rate increases with UV flux. Image credit: Seager et al. (2013) ApJ

In order to illustrate this effect, Seager et al. took a biosignature gas, CH3Cl, and calculated the removal rate by reactions with H, O and OH as a function of UV flux. As we’d expect, the figure above shows that the removal rate increases with UV flux. This means that if we encounter a super-Earth around an M-type star that has a high UV flux, the rate of removal of a biosignature gas will depend largely on the concentration of the gas and how quickly it is being destroyed by H, O and OH.
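
A minimal way to see how UV flux controls the build-up (a toy one-box model, not the 111-species network used in the paper; every number below is made up): if a gas is produced at a constant rate and destroyed by photolysis plus other reactions, its steady-state abundance is just the production rate divided by the total loss rate, and it falls once the UV-driven loss dominates:

# Toy steady-state box model: dn/dt = P - (k_uv*F_uv + k_other)*n,
# so at steady state n = P / (k_uv*F_uv + k_other). All values are arbitrary.
P       <- 1.0                                  # production rate by the biosphere
k_other <- 0.1                                  # loss to reactions with H, O, OH, etc.
k_uv    <- 1.0                                  # photolysis efficiency per unit UV flux
F_uv    <- 10^seq(-2, 2, length.out = 200)      # UV flux relative to some reference
n_ss    <- P / (k_uv * F_uv + k_other)
plot(F_uv, n_ss, log = "xy",
     xlab = "relative UV flux", ylab = "steady-state abundance (arbitrary units)")
# The abundance is flat while k_other dominates and falls roughly as 1/F_uv once photolysis wins.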

The Most Likely Biosignature Gas

After considering the removal rate of several biosignature gases, Seager et al. find that ammonia (NH3) is likely to build up in the atmosphere of a super-Earth orbiting an M star. NH3 is created when a microbe harvests energy from a chemical energy gradient. On Earth, ammonia is not produced in large quantities so there isn't a lot of it in our atmosphere. However, if an alien world produced as much ammonia as life on Earth produces oxygen, it might actually be detectable in its atmosphere.

In a world where NH3 is a viable biosignature, life would be vastly different from what we see on Earth. It would need to be able to break the H2 and N2 bonds in the reaction: 3H2 + N2 → 2NH3. Since this reaction is exothermic (releases heat), it could be used to harvest energy. Is this possible though? Seager et al. say that although there is no chemical reaction on Earth that can break both the H2 and N2 bonds, there is no physical reason that it can't happen.
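
For scale (using the standard terrestrial heat of formation of ammonia, roughly \(-46\) kJ/mol; this number is mine, not from the paper): the reaction 3H2 + N2 → 2NH3 releases about \(2 \times 46 \approx 92\) kJ of heat per mole of N2 consumed, a modest but non-negligible energy yield for a hypothetical metabolism to tap.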

Thermal emission spectra for a 90% H2, 10% N2 super-Earth (10 Earth masses, 1.75 Earth radii). Each color spectrum represents a different concentration of ammonia. Higher ammonia concentrations create stronger emission features. Main point: If life was producing lots of NH3, we would be able to see it in the spectrum of a super-Earth orbiting an M star. Image credit: Seager et al. ApJ, 2013

The plot above shows what the spectrum of a planet would look like if it were producing lots of ammonia. This spectrum is taken in “thermal emission,” which means that we are looking at the planet when it is just about to disappear behind its parent star. There are strong NH3 emission features (labeled) from 2-100 microns. JWST will be able to make observations in the 1-30 micron range and will likely observe at least a handful of super-Earths orbiting M-type stars. So, should we expect to find one of these NH3-producing life forms? This is where I leave the Seager et al. paper and let your imagination take over.

by Natasha Batalha at November 19, 2014 10:39 PM

Clifford V. Johnson - Asymptotia

Chalkboards Everywhere!
I love chalkboards (or blackboards if you prefer). I love showing up to give a talk somewhere and just picking up the chalk and going for it. No heavily over-packed slides full of too many fast moving things, as happens too much these days. If there is coloured chalk available, that's fantastic - special effects. It is getting harder to find these boards however. Designers of teaching rooms and other spaces seem embarrassed by them, and so they either get smaller or disappear, often in favour of the less than magical whiteboard. So in my continued reinvention of the way I produce slides for projection (I do this every so often), I've gone another step forward in returning to the look (and [...] Click to continue reading this post

by Clifford at November 19, 2014 06:29 PM

Symmetrybreaking - Fermilab/SLAC

LHCb experiment finds new particles

A new LHCb result adds two new composite particles to the quark model.

Today the LHCb experiment at CERN’s Large Hadron Collider announced the discovery of two new particles, each consisting of three quarks.

The particles, known as the Xi_b'- and Xi_b*-, were predicted to exist by the quark model but had never been observed. The LHCb collaboration submitted a paper reporting the finding to the journal Physical Review Letters.

Similar to the protons that the LHC accelerates and collides, these two new particles are baryons and made from three quarks bound together by the strong force.

But unlike protons—which are made of two up quarks and one down quark—the new Xi_b particles both contain one beauty quark, one strange quark and one down quark. Because the b quarks are so heavy, these particles are more than six times as massive as the proton.

“We had good reason to believe that we would be able to see at least one of these two predicted particles,” says Steven Blusk, an LHCb researcher and associate professor of physics at Syracuse University. “We were lucky enough to see both. It’s always very exciting to discover something new.”

Even though these two new particles contain the same combination of quarks, they have a different configuration of spin—which is a quantum mechanical property that describes a particle’s angular momentum. This difference in spin makes Xi_b*- a little heavier than Xi_b'-.

“Nature was kind and gave us two particles for the price of one," says Matthew Charles of the CNRS's LPNHE laboratory at Paris VI University. "The Xi_b'- is very close in mass to the sum of its decay products’ masses. If it had been just a little lighter, we wouldn't have seen it at all.”

In addition to the masses of these particles, the research team studied their relative production rates, their widths—which is a measurement of how unstable they are—and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

“QCD is a powerful framework that describes the interactions of quarks, but it is not that precise,” Blusk says. “If we do see something new, we need to be able to say that is not the result of uncertainties in QCD, but that it is in fact something new and unexpected. That is why we need precision data and precision measurements like these—to refine our models.”

The LHCb detector is one of the four main Large Hadron Collider experiments. It is specially designed to study hadrons and search for new particles.

“As you go up in mass, it becomes harder to discover new particles and requires unique detector capabilities,” Blusk says. “These new measurements really exploit the strengths of the LHCb detector, which has the unique ability to clearly identify hadrons.”

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared—after its first long shutdown—to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

“I’m a firm believer that whenever you look for something, there is always the possibility that you will instead find something completely unexpected,” Blusk says. “Doing these generic searches opens the door for discovering new physics. We are just starting to explore the b-baryon sector, and more data from the next run of the LHC will allow us to discover more particles not seen before.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Sarah Charley at November 19, 2014 04:53 PM

Symmetrybreaking - Fermilab/SLAC

LHCb experiment finds new particles

A new LHCb result adds two new composite particles to the quark model.

Today the LHCb experiment at CERN’s Large Hadron Collider announced the discovery of two new particles, each consisting of three quarks.

The particles, known as the Xi_b'- and Xi_b*-, were predicted to exist by the quark model but had never been observed. The LHCb collaboration submitted a paper reporting the finding to the journal Physical Review Letters.

Similar to the protons that the LHC accelerates and collides, these two new particles are baryons and made from three quarks bound together by the strong force.

But unlike protons—which are made of two up quarks and one down quark—the new Xi_b particles both contain one beauty quark, one strange quark and one down quark. Because the b quarks are so heavy, these particles are more than six times as massive as the proton.
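
For a rough check of that ratio, here is a small R calculation using approximate masses in MeV: the two new baryons come in near 5935 and 5955 MeV, while the proton sits at about 938 MeV. These are round numbers for illustration, not the paper's precise values.

# Approximate masses in MeV (illustrative round numbers, not the official LHCb values)
m_xib_prime <- 5935   # Xi_b'-, roughly
m_xib_star  <- 5955   # Xi_b*-, roughly
m_proton    <- 938.3  # proton

# Ratio of the new baryons' masses to the proton mass
round(c(xib_prime = m_xib_prime, xib_star = m_xib_star) / m_proton, 2)
# both come out a bit above 6, i.e. "more than six times as massive as the proton"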

“We had good reason to believe that we would be able to see at least one of these two predicted particles,” says Steven Blusk, an LHCb researcher and associate professor of physics at Syracuse University. “We were lucky enough to see two. It’s always very exciting to discover something new and unexpected.”

Even though these two new particles contain the same combination of quarks, they have a different configuration of spin—which is a quantum mechanical property that describes a particle’s angular momentum. This difference in spin makes Xi_b*- a little heavier than Xi_b'-.

“Nature was kind and gave us two particles for the price of one," says Matthew Charles of the CNRS's LPNHE laboratory at Paris VI University. "The Xi_b'- is very close in mass to the sum of its decay products’ masses. If it had been just a little lighter, we wouldn't have seen it at all.”

In addition to the masses of these particles, the research team studied their relative production rates, their widths—which is a measurement of how unstable they are—and other details of their decays. The results match up with predictions based on the theory of Quantum Chromodynamics (QCD).

“QCD is a powerful framework that describes the interactions of quarks, but it is difficult to compute properties of particles with high precision,” Blusk says. “If we do see something new, we need to be able to say that is not the result of uncertainties in QCD, but that it is in fact something new and unexpected. That is why we need precision data and precision measurements like these—to refine our models.”

The LHCb detector is one of the four main Large Hadron Collider experiments. It is specially designed to search for new forces of nature by studying the decays of particles containing beauty and charm quarks.

“As you go up in mass, it becomes harder to discover new particles,” Blusk says. “These new measurements really exploit the strengths of the LHCb detector, which has the unique ability to clearly identify hadrons.”

The measurements were made with the data taken at the LHC during 2011-2012. The LHC is currently being prepared—after its first long shutdown—to operate at higher energies and with more intense beams. It is scheduled to restart by spring 2015.

“Whenever you look for something, there is always the possibility that you will instead uncover something completely unexpected,” Blusk says. “Doing these generic searches opens the door for discovering new particles. We are just starting to explore the b-baryon sector, and more data from the next run of the LHC will allow us to discover more particles not seen before.”

 


by Sarah Charley at November 19, 2014 04:53 PM

The n-Category Cafe

Integral Octonions (Part 8)

This time I’d like to summarize some work I did in the comments last time, egged on by a mysterious entity who goes by the name of ‘Metatron’.

As you probably know, there’s an archangel named Metatron who appears in apocryphal Old Testament texts such as the Second Book of Enoch. These texts rank Metatron second only to YHWH himself. I don’t think the Metatron posting comments here is the same guy. However, it’s a good name for someone interested in lattices and geometry, since there’s a variant of the Cabbalistic Tree of Life called Metatron’s Cube, which looks like this:

This design includes within it the $\mathrm{G}_2$ root system, a 2d projection of a stellated octahedron, and a perspective drawing of a hypercube.

Anyway, there are lattices in 26 and 27 dimensions that play rather tantalizing and mysterious roles in bosonic string theory. Metatron challenged me to find octonionic descriptions of them. I did.

Given a lattice $L$ in $n$-dimensional Euclidean space, there’s a way to build a lattice $L^{++}$ in $(n+2)$-dimensional Minkowski spacetime. This is called the ‘over-extended’ version of $L$.

If we start with the lattice $\mathrm{E}_8$ in 8 dimensions, this process gives a lattice called $\mathrm{E}_{10}$, which plays an interesting but mysterious role in superstring theory. This shouldn’t come as a complete shock, since superstring theory lives in 10 dimensions, and it can be nicely formulated using octonions, as can the lattice $\mathrm{E}_8$.

If we start with the lattice called $\mathrm{D}_{24}$, this over-extension process gives a lattice $\mathrm{D}_{24}^{++}$. This describes the ‘cosmological billiards’ for the 3d compactification of the theory of gravity arising from bosonic string theory. Again, this shouldn’t come as a complete shock, since bosonic string theory lives in 26 dimensions.

Last time I gave a nice description of $\mathrm{E}_{10}$: it consists of $2 \times 2$ self-adjoint matrices with integral octonions as entries.

It would be nice to get a similar description of $\mathrm{D}_{24}^{++}$. Indeed, one exists! But to find it, it’s actually easier to go up to 27 dimensions, because the space of $3 \times 3$ self-adjoint matrices with octonion entries is 27-dimensional. And indeed, there’s a 27-dimensional lattice waiting to be described with octonions.

You see, for any lattice $L$ in $n$-dimensional Euclidean space, there’s also a way to build a lattice $L^{+++}$ in $(n+3)$-dimensional Minkowski spacetime, called the ‘very extended’ version of $L$.

If we do this to $L = \mathrm{E}_8$ we get an 11-dimensional lattice called $\mathrm{E}_{11}$, which has mysterious connections to M-theory. But if we do it to $\mathrm{D}_{24}$ we get a 27-dimensional lattice sometimes called $\mathrm{K}_{27}$. You can read about both these lattices here:

I’ll prove that both $\mathrm{E}_{11}$ and $\mathrm{K}_{27}$ have nice descriptions in terms of integral octonions. To do this, I’ll use the explanation of over-extended and very extended lattices given here:

These constructions use a 2-dimensional lattice called $\mathrm{H}$. Let’s get to know this lattice. It’s very simple.

A 2-dimensional Lorentzian lattice

Up to isometry, there’s a unique even unimodular lattice in Minkowski spacetime whenever its dimension is 2 more than a multiple of 8. The simplest of these is $\mathrm{H}$: it’s the unique even unimodular lattice in 2-dimensional Minkowski spacetime.

There are various ways to coordinatize $\mathrm{H}$. The easiest, I think, is to start with $\mathbb{R}^2$ and give it the metric $g$ with

$$ g(x,x) = -2 u v $$

when $x = (u,v)$. Then, sitting in $\mathbb{R}^2$, the lattice $\mathbb{Z}^2$ is even and unimodular. So, it’s a copy of $\mathrm{H}$.

Let’s get to know it a bit. The coordinates $u$ and $v$ are called lightcone coordinates, since the $u$ and $v$ axes form the lightcone in 2d Minkowski spacetime. In other words, the vectors

$$ \ell = (1,0), \quad \ell' = (0,1) $$

are lightlike, meaning

$$ g(\ell,\ell) = 0, \quad g(\ell', \ell') = 0 $$

Their sum is a timelike vector

$$ \tau = \ell + \ell' = (1,1) $$

since the inner product of $\tau$ with itself is negative; in fact

$$ g(\tau,\tau) = -2 $$

Their difference is a spacelike vector

$$ \sigma = \ell - \ell' = (1,-1) $$

since the inner product of $\sigma$ with itself is positive; in fact

$$ g(\sigma,\sigma) = 2 $$

Since the vectors $\tau$ and $\sigma$ are orthogonal and have length $\sqrt{2}$ in the metric $g$, we get a square of area $2$ with corners

$$ 0, \quad \tau, \quad \sigma, \quad \tau + \sigma $$

that is,

$$ (0,0), \; (1,1), \; (1,-1), \; (2,0) $$

If you draw a picture, you can see by dissection that this square has twice the area of the unit cell

$$ (0,0), \; (1,0), \; (0,1), \; (1,1) $$

So, the unit cell has area 1, and the lattice is unimodular as claimed. Furthermore, every vector in the lattice has even inner product with itself, so this lattice is even.
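
Both claims are easy to check by hand, or with a few lines of R: the Gram matrix of $\mathrm{H}$ in the basis $\ell, \ell'$ has determinant $-1$, and $g(x,x) = -2uv$ is an even integer for all integer $u, v$.

# Gram matrix of H in the lightlike basis l = (1,0), l' = (0,1),
# using the bilinear form obtained from g(x,x) = -2*u*v:
# B((u,v),(u',v')) = -(u*v' + u'*v)
G_H <- matrix(c( 0, -1,
                -1,  0), nrow = 2, byrow = TRUE)

det(G_H)   # -1, so |det| = 1: the lattice Z^2 is unimodular

# evenness: g(x,x) = -2*u*v is an even integer for every integer u, v
g <- function(u, v) -2 * u * v
all(sapply(-3:3, function(u) sapply(-3:3, function(v) g(u, v) %% 2 == 0)))
# TRUE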

Over-extended lattices

Given a lattice $L$ in Euclidean $\mathbb{R}^n$,

$$ L^{++} = L \oplus \mathrm{H} $$

is a lattice in $(n+2)$-dimensional Minkowski spacetime, also known as $\mathbb{R}^{n+1,1}$. This lattice $L^{++}$ is called the over-extension of $L$.

A direct sum of even lattices is even. A direct sum of unimodular lattices is unimodular. Thus if $L$ is even and unimodular, so is $L^{++}$.

All this is obvious. But here are some deeper facts about even unimodular lattices. First, they only exist in $\mathbb{R}^n$ when $n$ is a multiple of 8. Second, they only exist in $\mathbb{R}^{n+1,1}$ when $n$ is a multiple of 8.

But here’s the really amazing thing. In the Euclidean case there can be lots of different even unimodular lattices in a given dimension. In 8 dimensions there’s just one, up to isometry, called $\mathrm{E}_8$. In 16 dimensions there are two. In 24 dimensions there are 24. In 32 dimensions there are at least 1,160,000,000, and the number continues to explode after that. On the other hand, in the Lorentzian case there’s just one even unimodular lattice in a given dimension, if there are any at all.

More precisely: given two even unimodular lattices in $\mathbb{R}^{n+1,1}$, they are always isomorphic to each other via an isometry: a linear transformation that preserves the metric. We then call them isometric.

Let’s look at some examples. Up to isometry, $\mathrm{E}_8$ is the only even unimodular lattice in 8-dimensional Euclidean space. We can identify it with the lattice of integral octonions, $\mathbf{O} \subseteq \mathbb{O}$, with the inner product

$$ g(X,X) = 2 X X^* $$

$\mathrm{E}_8^{++}$ is usually called $\mathrm{E}_{10}$. Up to isometry, this is the unique even unimodular lattice in 10 dimensions. There are lots of ways to describe it, but last time we saw that it’s the lattice of $2 \times 2$ self-adjoint matrices with integral octonions as entries:

$$ \mathfrak{h}_2(\mathbf{O}) = \left\{ \left( \begin{array}{cc} a & X \\ X^* & b \end{array} \right) : \; a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\} $$

where the metric comes from $-2$ times the determinant:

$$ x = \left( \begin{array}{cc} a & X \\ X^* & b \end{array} \right) \;\; \implies \;\; g(x,x) = -2 \det(x) = 2 X X^* - 2 a b $$

We’ll see a fancier formula like this later on.

There are 24 even unimodular lattices in 24-dimensional Euclidean space. One of them is

$$ \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 $$

Another is $\mathrm{D}_{24}$. This is the lattice of vectors in $\mathbb{R}^{24}$ where the components are integers and their sum is even. It’s also the root lattice of the Lie group $\mathrm{Spin}(48)$.

If we take the over-extension of any of these lattices, we get an even unimodular lattice in 26-dimensional Minkowski spacetime… and all these are isometric! The over-extension process ‘washes out the difference’ between them. In particular,

$$ \mathrm{D}_{24}^{++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++} $$

This is nice because, up to a scale factor, $\mathrm{E}_8$ is the lattice of integral octonions. So, there’s a description of $\mathrm{D}_{24}^{++}$ using three integral octonions! But the story is prettier if we go up an extra dimension.
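
As a sanity check of the bookkeeping behind these direct sums, here is a small R computation: taking the standard $\mathrm{E}_8$ Cartan matrix as the Gram matrix of $\mathrm{E}_8$ (which the integral octonions with $g(X,X) = 2XX^*$ reproduce in a suitable basis) and forming the block sum with $\mathrm{H}$ gives a 10-dimensional Gram matrix with even diagonal and determinant $-1$, exactly what an even unimodular Lorentzian lattice such as $\mathrm{E}_{10}$ requires.

# Standard E8 Cartan matrix: nodes 1-7 form a chain, node 8 attaches to node 5
cartan_E8 <- diag(2, 8)
edges <- rbind(cbind(1:6, 2:7), c(5, 8))
for (k in seq_len(nrow(edges))) {
  i <- edges[k, 1]; j <- edges[k, 2]
  cartan_E8[i, j] <- -1
  cartan_E8[j, i] <- -1
}
det(cartan_E8)          # 1: E8 is unimodular (and its diagonal is even)

# Gram matrix of E8 (+) H, a model of E10 = E8^{++}
G_H   <- matrix(c(0, -1, -1, 0), 2, 2)
G_E10 <- rbind(cbind(cartan_E8, matrix(0, 8, 2)),
               cbind(matrix(0, 2, 8), G_H))
det(G_E10)              # -1: even and unimodular, with Lorentzian signature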

Very extended lattices

After the over-extended version $L^{++}$ of a lattice $L$ in Euclidean space comes the ‘very extended’ version, called $L^{+++}$. If you ponder the paper by Gaberdiel et al, you can see this is the direct sum of the over-extension $L^{++}$ and a 1-dimensional lattice called $\mathrm{A}_1$. $\mathrm{A}_1$ is just $\mathbb{Z}$ with the metric

$$ g(x,x) = 2 x^2 $$

It’s even but not unimodular.

In short, the very extended version of $L$ is

$$ L^{+++} = L^{++} \oplus \mathrm{A}_1 = L \oplus \mathrm{H} \oplus \mathrm{A}_1 $$

If $L$ is even, so is $L^{+++}$. But if $L$ is unimodular, this will not be true of $L^{+++}$.

The very extended version of $\mathrm{E}_8$ is called $\mathrm{E}_{11}$. This is a fascinating thing, but I want to talk about the very extended version of $\mathrm{D}_{24}$, and how to describe it using octonions.

Let $\mathfrak{h}_3(\mathbb{O})$ be the space of $3 \times 3$ self-adjoint octonionic matrices. It’s 27-dimensional, since a typical element looks like

$$ x = \left( \begin{array}{ccc} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{array} \right) $$

where $a,b,c \in \mathbb{R}$ and $X,Y,Z \in \mathbb{O}$. It’s called the exceptional Jordan algebra. We don’t need to know about Jordan algebras now, but this concept encapsulates the fact that if $x \in \mathfrak{h}_3(\mathbb{O})$, so is $x^2$.

There’s a 2-parameter family of metrics on the exceptional Jordan algebra that are invariant under all Jordan algebra automorphisms. They have

$$ g(x,x) = \alpha \, \mathrm{tr}(x^2) + \beta \, \mathrm{tr}(x)^2 $$

for $\alpha, \beta \in \mathbb{R}$ with $\alpha \ne 0$. Some are Euclidean and some are Lorentzian.

Sitting inside the exceptional Jordan algebra is the lattice of $3 \times 3$ self-adjoint matrices with integral octonions as entries:

$$ \mathfrak{h}_3(\mathbf{O}) = \left\{ \left( \begin{array}{ccc} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{array} \right) : \; a,b,c \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\} $$

And here’s the cool part:

Theorem. There is a Lorentzian inner product $g$ on the exceptional Jordan algebra that is invariant under all automorphisms and makes the lattice $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{K}_{27} \cong \mathrm{D}_{24}^{+++}$.

Proof. We will prove that the metric

$$ g(x,x) = \mathrm{tr}(x^2) - \mathrm{tr}(x)^2 $$

obeys all the conditions of this theorem. From what I’ve already said, it is invariant under all Jordan algebra automorphisms. The challenge is to show that it makes $\mathfrak{h}_3(\mathbf{O})$ isometric to $\mathrm{D}_{24}^{+++}$. But instead of $\mathrm{D}_{24}^{+++}$, we can work with $(\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++}$, since we have seen that

$$ \mathrm{D}_{24}^{+++} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++} $$

Let us examine the metric $g$ in more detail. Take any element $x \in \mathfrak{h}_3(\mathbb{O})$:

$$ x = \left( \begin{array}{ccc} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & c \end{array} \right) $$

where $a,b,c \in \mathbb{R}$ and $X,Y,Z \in \mathbb{O}$. Then

$$ \mathrm{tr}(x^2) = a^2 + b^2 + c^2 + 2(X X^* + Y Y^* + Z Z^*) $$

while

$$ \mathrm{tr}(x)^2 = (a + b + c)^2 $$

Thus

$$ g(x,x) = \mathrm{tr}(x^2) - \mathrm{tr}(x)^2 = 2(X X^* + Y Y^* + Z Z^*) - 2(a b + b c + c a) $$

It follows that with this metric, the diagonal matrices are orthogonal to the off-diagonal matrices. An off-diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(X,Y,Z) \in \mathbf{O}^3$, and has

$$ g(x,x) = 2(X X^* + Y Y^* + Z Z^*) $$

Thanks to the factor of 2, this metric makes the lattice of these off-diagonal matrices isometric to $\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8$. Since

$$ (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{+++} = \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1 $$

it thus suffices to show that the 3-dimensional Lorentzian lattice of diagonal matrices in $\mathfrak{h}_3(\mathbf{O})$ is isometric to

$$ \mathrm{H} \oplus \mathrm{A}_1 $$

A diagonal matrix $x \in \mathfrak{h}_3(\mathbf{O})$ is a triple $(a,b,c) \in \mathbb{Z}^3$, and on these triples the inner product $g$ is given by

$$ g(x,x) = -2(a b + b c + c a) $$

If we restrict attention to triples of the form $x = (a,b,0)$, we get a 2-dimensional Lorentzian lattice: a copy of $\mathbb{Z}^2$ with inner product

$$ g(x,x) = -2 a b $$

This is just $\mathrm{H}$.

We can use this to show that the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$, with the inner product $g$, is isometric to $\mathrm{H} \oplus \mathrm{A}_1$.

Remember, $\mathrm{A}_1$ is a 1-dimensional lattice generated by a spacelike vector whose norm squared is 2. So, it suffices to show that the lattice $\mathbb{Z}^3$ is generated by vectors of the form $(a,b,0)$ together with a spacelike vector of norm squared 2 that is orthogonal to all those of the form $(a,b,0)$.

To do this, we need to describe the inner product $g$ on $\mathbb{Z}^3$ more explicitly. For this, we can use the polarization identity

$$ g(x,x') = \tfrac{1}{2}\bigl( g(x+x',x+x') - g(x,x) - g(x',x') \bigr) $$

Remember, if $x = (a,b,c)$ we have

$$ g(x,x) = -2(a b + b c + c a) $$

So, if we also have $x' = (a',b',c')$, the polarization identity gives

$$ g(x,x') = -(a b' + a' b) - (b c' + b' c) - (c a' + c' a) $$

We are looking for a spacelike vector $x' = (a',b',c')$ that is orthogonal to all those of the form $x = (a,b,0)$. For this, it is necessary and sufficient to have

$$ 0 = g((1,0,0),(a',b',c')) = - b' - c' $$

and

$$ 0 = g((0,1,0),(a',b',c')) = - a' - c' $$

An example is $x' = (1,1,-1)$. This has

$$ g(x',x') = -2(1 - 1 - 1) = 2 $$

so it is spacelike, as desired. Even better, it has norm squared two. And even better, this vector $x'$, along with those of the form $(a,b,0)$, generates the lattice $\mathbb{Z}^3$.

So we have shown what we needed: the lattice of all triples $(a,b,c) \in \mathbb{Z}^3$ is generated by those of the form $(a,b,0)$ together with a spacelike vector with norm squared 2 that is orthogonal to all those of the form $(a,b,0)$. $\blacksquare$
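
This last step can also be checked mechanically. In the basis $(1,0,0)$, $(0,1,0)$, $(1,1,-1)$, the Gram matrix of $\mathbb{Z}^3$ with $g(x,x) = -2(ab+bc+ca)$ becomes that of $\mathrm{H} \oplus \mathrm{A}_1$, and the change of basis has determinant $-1$, so these vectors really do generate $\mathbb{Z}^3$. A few lines of R make the check explicit:

# Bilinear form on Z^3 obtained by polarizing g(x,x) = -2(ab + bc + ca):
# its Gram matrix in the standard basis e1, e2, e3 is
G <- matrix(c( 0, -1, -1,
              -1,  0, -1,
              -1, -1,  0), nrow = 3, byrow = TRUE)

# Proposed new basis: f1 = (1,0,0), f2 = (0,1,0), f3 = (1,1,-1), as rows
P <- rbind(c(1, 0,  0),
           c(0, 1,  0),
           c(1, 1, -1))

det(P)           # -1, so f1, f2, f3 generate all of Z^3
P %*% G %*% t(P)
#      [,1] [,2] [,3]
# [1,]    0   -1    0
# [2,]   -1    0    0
# [3,]    0    0    2    ... i.e. the Gram matrix of H (+) A_1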

This theorem has three nice spinoffs:

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{D}_{24}^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where a fixed diagonal entry is set equal to zero, e.g.:

$$ \left\{ \left( \begin{array}{ccc} a & X & Y \\ X^* & b & Z \\ Y^* & Z^* & 0 \end{array} \right) : \; a,b \in \mathbb{Z}, \; X,Y,Z \in \mathbf{O} \right\} $$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$$ \left\{ \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & 0 \end{array} \right) : \; a,b \in \mathbb{Z} \right\} $$

form a copy of $\mathrm{H}$, so the matrices above form a copy of

$$ \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{H} \cong (\mathrm{E}_8 \oplus \mathrm{E}_8 \oplus \mathrm{E}_8)^{++} \cong \mathrm{D}_{24}^{++} \qquad \blacksquare $$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{11} = \mathrm{E}_8^{+++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries are set equal to zero, e.g.:

$$ \left\{ \left( \begin{array}{ccc} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & c \end{array} \right) : \; a,b,c \in \mathbb{Z}, \; X \in \mathbf{O} \right\} $$

Proof. Use the fact that with the metric $g$, the diagonal matrices

$$ \left\{ \left( \begin{array}{ccc} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{array} \right) : \; a,b,c \in \mathbb{Z} \right\} $$

form a copy of $\mathrm{H} \oplus \mathrm{A}_1$, so the matrices above form a copy of

$$ \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1 \cong \mathrm{E}_8^{+++} \qquad \blacksquare $$

Corollary. With the same Lorentzian inner product $g$ on the exceptional Jordan algebra, the lattice $\mathrm{E}_{10} = \mathrm{E}_8^{++}$ is isometric to the sublattice of $\mathfrak{h}_3(\mathbf{O})$ where two fixed off-diagonal entries and one diagonal entry are set equal to zero, e.g.:

$$ \left\{ \left( \begin{array}{ccc} a & X & 0 \\ X^* & b & 0 \\ 0 & 0 & 0 \end{array} \right) : \; a,b \in \mathbb{Z}, \; X \in \mathbf{O} \right\} $$

Proof. Use the previous corollary; this is the obvious copy of $\mathrm{E}_8^{++} \cong \mathrm{E}_8 \oplus \mathrm{H}$ inside $\mathrm{E}_8^{+++} \cong \mathrm{E}_8 \oplus \mathrm{H} \oplus \mathrm{A}_1$. $\blacksquare$

by john (baez@math.ucr.edu) at November 19, 2014 02:07 AM

November 18, 2014

Clifford V. Johnson - Asymptotia

Three Cellos
[Image: three_cellos_14_11_14] These three fellows, perched on wooden boxes, just cried out for a quick sketch of them during the concert. It was the LA Phil playing Penderecki's Concerto Grosso for Three Cellos, preceded by the wonderful Rapsodie Espagnole by Ravel and followed by that sublime (brought tears to my eyes - I'd not heard it in so long) serving of England, Elgar's Enigma Variations. -cvj

by Clifford at November 18, 2014 07:07 PM

Symmetrybreaking - Fermilab/SLAC

Auger reveals subtlety in cosmic rays

Scientists home in on the make-up of cosmic rays, which are more nuanced than previously thought.

Unlike the twinkling little star of nursery rhyme, the cosmic ray is not the subject of any well-known song about an astronomical wonder. And yet while we know all about the make-up of stars, after decades of study scientists still wonder what cosmic rays are.

Thanks to an abundance of data collected over eight years, researchers in the Pierre Auger collaboration are closer to finding out what cosmic rays—in particular ultrahigh-energy cosmic rays—are made of. Their composition would tell us more about where they come from: perhaps a black hole, a cosmic explosion or colliding galaxies.

Auger’s latest research has knocked out two possibilities put forward by the prevailing wisdom: that UHECRs are dominated by either lightweight protons or heavier nuclei such as iron. According to Auger, one or more middleweight components, such as helium or nitrogen nuclei, must make up a significant part of the cosmic-ray mix.

“Ten years ago, people couldn’t posit that ultrahigh-energy cosmic rays would be made of something in between protons and iron,” says Fermilab scientist and Auger collaborator Eun-Joo Ahn, who led the analysis. “The idea would have garnered sidelong glances.”

Cosmic rays are particles that rip through outer space at incredibly high energies. UHECRs, upwards of 10^18 electronvolts, are rarely observed, and no one knows exactly where they originate.

One way physicists reach back to a cosmic ray’s origins is by looking to the descendants of its collisions. The collision of one of these breakneck particles with the Earth’s upper atmosphere sets off a domino effect, generating more particles that in turn collide with air and produce still more. These ramifying descendants form an air shower, spreading out like the branches of a tree reaching toward the Earth. Twenty-seven telescopes at the Argentina-based Auger Observatory look for ultraviolet light resulting from the cosmic rays, and 1600 detectors, distributed over a swath of land the size of Rhode Island, record the showers’ signals.

Scientists measure how deep into the atmosphere—how close to Earth—the air shower is when it maxes out. The closer to the Earth, the more lightweight the original cosmic ray particle is likely to be. A proton, for example, would penetrate the atmosphere more deeply before setting off an air shower than would an iron nucleus.
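
To make the "deeper means lighter" logic concrete, here is a toy sketch in R based on the common superposition picture, in which a nucleus of mass number A and energy E showers roughly like A protons of energy E/A. The constants are rough illustrative numbers, not Auger's.

# Toy superposition model: a nucleus (E, A) showers like A protons of energy E/A,
# so the depth of shower maximum scales roughly like Xmax_proton(E/A).
# X0 and D (the "elongation rate") are rough illustrative values in g/cm^2.
xmax_proton <- function(E_eV, X0 = 800, D = 60, E_ref = 1e19) {
  X0 + D * log10(E_eV / E_ref)
}
xmax_nucleus <- function(E_eV, A) xmax_proton(E_eV / A)

E <- 1e19  # 10 EeV
round(c(proton   = xmax_nucleus(E, 1),
        helium   = xmax_nucleus(E, 4),
        nitrogen = xmax_nucleus(E, 14),
        iron     = xmax_nucleus(E, 56)))
# heavier primaries reach shower maximum higher in the atmosphere (smaller Xmax)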

Auger scientists compared their data with three different simulation models to narrow the possible compositions of cosmic rays.

Auger’s favoring a compositional middle ground between protons and iron nuclei is based on a granular take on their data, a first for cosmic-ray research. In earlier studies, scientists distilled measurements of shower depths to two values: the average and standard deviation of all shower depths in a given cosmic-ray energy range. Their latest study, however, made no such generalization. Instead, it used the full distribution of data on air shower depth. If researchers measured 1000 different air shower depths for a specific UHECR energy, all 1000 data points—not just the average—went into Auger’s simulation models.
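
Schematically, "using the full distribution" means fitting the observed histogram of shower depths as a mixture of per-species templates, bin by bin, rather than matching only its mean and width. The R sketch below shows the idea with invented templates, bins and data; the real analysis takes its templates from the simulation models.

# Schematic composition fit: maximize a multinomial likelihood for the
# observed Xmax histogram as a mixture of per-species templates.
# All templates and "data" here are made up purely for illustration.
bins <- seq(600, 900, by = 25)                       # Xmax bin edges, g/cm^2
template <- function(mean, sd) {                     # binned template for one species
  p <- diff(pnorm(bins, mean, sd)); p / sum(p)
}
tmpl <- cbind(proton = template(780, 60), helium = template(750, 50),
              nitrogen = template(720, 40), iron = template(690, 30))

true_frac <- c(0.2, 0.4, 0.3, 0.1)                   # pretend composition
set.seed(1)
data_counts <- as.vector(rmultinom(1, 1000, tmpl %*% true_frac))

nll <- function(theta) {                             # softmax keeps fractions positive, summing to 1
  f <- exp(c(theta, 0)); f <- f / sum(f)
  -dmultinom(data_counts, prob = as.vector(tmpl %*% f), log = TRUE)
}
fit <- optim(rep(0, 3), nll)
f_hat <- exp(c(fit$par, 0))
round(f_hat / sum(f_hat), 2)                         # fitted fractions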

The result was a more nuanced picture of cosmic ray composition. The analysis also gave researchers greater insight into their simulations. For one model, the data and predictions could not be matched no matter the composition of the cosmic ray, giving scientists a starting point for constraining the model further.

“Just getting the distribution itself was exciting,” Ahn says.

Auger will continue to study cosmic rays at even higher energies, gathering more statistics to answer the question: What exactly are cosmic rays made of?

 


by Leah Hesla at November 18, 2014 04:43 PM

The n-Category Cafe

The Kan Extension Seminar in the Notices

Emily has a two-page article in the latest issue of the Notices of the American Mathematical Society, describing her experience of setting up and running the Kan extension seminar. In my opinion, the seminar was an exciting innovation for both this blog and education at large. It also resulted in some excellent posts. Go read it!

Daniel Kan

by leinster (Tom.Leinster@ed.ac.uk) at November 18, 2014 01:04 PM

Lubos Motl - string vacua and pheno

CMS sees excess of same-sign dimuons "too"
An Xmas rumor deja vu

There are many LHC-related hep-ex papers on the arXiv today, and especially
Searches for the associated \(t\bar t H\) production at CMS
by Liis Rebane of CMS. The paper notices a broad excess of like-sign dimuon events. See the last 2+1 lines of Table 1 for numbers.




Those readers who remember all 6,000+ blog posts on this blog know very well that back in December 2012, there was a "Christmas rumor" about an excess seen by the other major LHC collaboration, ATLAS.




ATLAS was claimed to have observed 14 events – which would mean a 5-sigma excess – of same-sign dimuon events with the invariant mass

\[ m_{\rm inv}(\mu^\pm \mu^\pm) = 105\,{\rm GeV}. \]

Quite a bizarre Higgs-like particle with \(Q=\pm 2\), if a straightforward explanation exists. Are ATLAS and CMS seeing the same deviation from the Standard Model?
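
The rumor did not come with an expected background for those 14 events, but the arithmetic for turning an observed count and an assumed background into an approximate significance is simple: a one-sided Poisson p-value converted to a Gaussian Z. The background value below is purely illustrative, and systematic uncertainties are ignored.

# One-sided Poisson significance: p = P(N >= n_obs | background b),
# converted to the equivalent one-sided Gaussian Z
pois_significance <- function(n_obs, b) {
  p <- ppois(n_obs - 1, lambda = b, lower.tail = FALSE)
  qnorm(p, lower.tail = FALSE)
}
# Example with an invented background of 3.5 expected events:
round(pois_significance(14, 3.5), 1)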

by Luboš Motl (noreply@blogger.com) at November 18, 2014 07:51 AM

November 17, 2014

Marco Frasca - The Gauge Connection

That’s a Higgs but how many?


The CMS and ATLAS collaborations are still hard at work producing results from the datasets obtained in the first phase of LHC activity. The restart is just around the corner and, perhaps as early as next summer, things could change considerably. In any case, what they extract from the old data can be really promising and rather intriguing. This is the case for a recent paper by CMS (see here). The aim of this work is to see if a heavier Higgs state exists, and the decay they study is \(Zh\rightarrow l^+l^-bb\). That is, one has a signature with two leptons moving in opposite directions, arising from the decay of the Z, and two bottom quarks arising from the decay of the Higgs particle. The analysis of this decay aims at finding hints of a heavier pseudoscalar Higgs state. This would be of great importance for SUSY extensions of the Standard Model that foresee more than one Higgs particle.

CMS often presents its results with some intriguing open questions, and this is again the case here, so it is worth a blog entry. Here is the main result:

[Figure: CMS study of \(Zh\rightarrow llbb\)]

The evidence, as stated in the paper, is a 2.6-2.9 sigma excess at 560 GeV and a smaller one at around 300 GeV. The look-elsewhere effect reduces the former to 1.1 sigma, and the latter is practically negligible. Overall this is not much but, as always, with more data after the restart it could become something real or simply fade away. It should be appreciated that a door is left open anyway and that a possible effect is pointed out.
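
For readers who have not met it, the look-elsewhere correction is essentially a trials factor: a local p-value is inflated by the effective number of independent places where such an excess could have appeared. Here is a minimal R sketch of that logic, with an invented trials factor rather than the one CMS actually computed:

# Local significance -> global significance with a crude trials-factor correction
local_to_global <- function(z_local, n_trials) {
  p_local  <- pnorm(z_local, lower.tail = FALSE)   # one-sided local p-value
  p_global <- 1 - (1 - p_local)^n_trials           # chance of such a fluctuation anywhere
  qnorm(p_global, lower.tail = FALSE)
}
# e.g. a 2.8 sigma local excess with an assumed 40 effective independent mass windows:
round(local_to_global(2.8, 40), 1)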

My personal interpretation is that such higher excitations do exist, but their production rates are heavily suppressed with respect to the observed ground state at 126 GeV and so are negligible with the present datasets. I am also convinced that the current understanding of the breaking of SUSY, as adopted in MSSM-like extensions of the Standard Model, is not the correct one, and that this is provoking the early death of such models. I have explained this in a couple of papers of mine (see here and here). It is my firm conviction that the restart will yield exciting results, and we should be really happy to have such a powerful machine in our hands to grasp them.

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit Eur. Phys. J. C (2014) 74:2929 arXiv: 1306.6530v5

Marco Frasca (2012). Classical solutions of a massless Wess-Zumino model J.Nonlin.Math.Phys. 20:4, 464-468 (2013) arXiv: 1212.1822v2


Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs particle, Standard Model, Supersymmetry

by mfrasca at November 17, 2014 04:54 PM

Matt Strassler - Of Particular Significance

At the Naturalness 2014 Conference

Greetings from the last day of the conference “Naturalness 2014“, where theorists and experimentalists involved with the Large Hadron Collider [LHC] are discussing one of the most widely-discussed questions in high-energy physics: are the laws of nature in our universe “natural” (= “generic”), and if not, why not? It’s so widely discussed that one of my concerns coming in to the conference was whether anyone would have anything new to say that hadn’t already been said many times.

What makes the Standard Model’s equations (which are the equations governing the known particles, including the simplest possible Higgs particle) so “unnatural” (i.e. “non-generic”) is that when one combines the Standard Model with, say, Einstein’s gravity equations, or indeed with any other equations involving additional particles and fields, one finds that the parameters in the equations (such as the strength of the electromagnetic force or the interaction of the electron with the Higgs field) must be chosen so that certain effects almost perfectly cancel, to one part in a gazillion* (something like 10³²). If this cancellation fails, the universe described by these equations looks nothing like the one we know. I’ve discussed this non-genericity in some detail here.
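
To see where a number like 10³² can come from, one can use the standard rough estimate that quantum corrections to the Higgs mass-squared are of order Λ²/(16π²) for a cutoff Λ; with Λ near the Planck mass, the surviving fraction must be about one part in 10³². A back-of-the-envelope version in R (this is only the crude textbook estimate, not a result from the conference):

# Crude estimate of the cancellation needed between the bare Higgs mass-squared
# and its quantum corrections, for a cutoff Lambda at the Planck scale.
m_h    <- 125        # observed Higgs mass, GeV
Lambda <- 1.2e19     # Planck mass, GeV (order of magnitude)

correction <- Lambda^2 / (16 * pi^2)   # typical size of quantum corrections to m_h^2
tuning     <- m_h^2 / correction       # fraction of the correction that must survive
signif(tuning, 2)                      # around 1e-32, i.e. one part in ~10^32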

*A gazillion, as defined on this website, is a number so big that it even makes particle physicists and cosmologists flinch. [From Old English, gajillion.]

Most theorists who have tried to address the naturalness problem have tried adding new principles, and consequently new particles, to the Standard Model’s equations, so that this extreme cancellation is no longer necessary, or so that the cancellation is automatic, or something to this effect. Their suggestions have included supersymmetry, warped extra dimensions, little Higgs, etc…. but importantly, these examples are only natural if the lightest of the new particles that they predict have masses that are around or below 1 TeV/c², and must therefore be directly observable at the LHC (with a few very interesting exceptions, which I’ll talk about some other time). The details are far too complex to go into here, but the constraints from what was not discovered at the LHC in 2011-2012 imply that most of these examples don’t work perfectly. Some partial non-automatic cancellation, not at one part in a gazillion but at one part in 100, seems to be necessary for almost all of the suggestions made up to now.

So what are we to think of this?

  • Maybe one of the few examples that is entirely natural and is still consistent with current data is correct, and will turn up at the LHC in 2015 or 2016 or so, when the LHC begins running at higher energy per collision than was available in 2011-2012.
  • Maybe one of the examples that isn’t entirely natural is correct. After all, one part in 100 isn’t awful to contemplate, unlike one part in a gazillion. We do know of other weird things about the world that are improbable, such as the fact that the Sun and the Moon appear to be almost exactly the same size in the Earth’s sky. So maybe our universe is slightly non-generic, and therefore discoveries of new particles that we might have expected to see in 2011-2012 are going to be delayed until 2015 or beyond.
  • Maybe naturalness is simply not a good guide to guessing our universe’s laws, perhaps because the universe’s history, or its structure, forced it to be extremely non-generic, or perhaps because the universe as a whole is generic but huge and variegated (this is often called a “multiverse”, but be careful, because that word is used in several very different ways — see here for discussion) and we can only live in an extremely non-generic part of it.
  • Maybe naturalness is not a good guide because there’s something wrong with the naturalness argument, perhaps because quantum field theory itself, on which the argument rests, or some other essential assumption, is breaking down.

Some of the most important issues at this conference are: how can we determine experimentally which of these possibilities is correct (or whether another we haven’t thought of is correct)? In this regard, what measurements do we need to make at the LHC in 2015 and beyond? What theoretical directions concerning naturalness have been underexplored, and might any of them suggest new measurements at LHC (or elsewhere) that have not yet been attempted?

I am afraid my time is too limited to report on highlights. Most of the progress reported at this conference has been incremental rather than a matter of major steps; there weren’t any big new solutions to the naturalness problem proposed. But it has been a good opportunity for an exchange of ideas among theorists and experimentalists, with a number of new approaches to LHC measurements being presented and discussed, and with some interesting conversation regarding the theoretical and conceptual issues surrounding naturalness, selection bias (sometimes called “anthropics”), and the behavior of quantum field theory.


Filed under: LHC News, Particle Physics Tagged: atlas, cms, Higgs, LHC, naturalness

by Matt Strassler at November 17, 2014 01:33 PM

November 16, 2014

The n-Category Cafe

Jaynes on Mathematical Courtesy

In the last years of his life, fierce Bayesian Edwin Jaynes was working on a large book published posthumously as Probability Theory: The Logic of Science (2003). Jaynes was a lively writer. In an appendix on “Mathematical formalities and style”, he really let rip, railing against modern mathematical style. Here’s a sample:

Nowadays, if you introduce a variable $x$ without repeating the incantation that it is in some set or ‘space’ $X$, you are accused of dealing with an undefined problem. If you differentiate a function $f(x)$ without first having stated that it is differentiable, you are accused of lack of rigor. If you note that your function $f(x)$ has some special property natural to the application, you are accused of lack of generality. In other words, every statement you make will receive the discourteous interpretation.

Discuss.

This is taken from the final section of this appendix, on “Mathematical courtesy”. Here’s most of the rest of it:


Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process.

The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find any way of doing so. In self-defense, writers are obliged to concentrate their attention on every tiny, irrelevant, nit-picking detail of how things are said rather than on what is said. The length grows; the content shrinks.

Mathematical communication would be much more efficient and pleasant if we adopted a different attitude. For one who makes the courteous interpretation of what others write, the fact that $x$ is introduced as a variable already implies that there is some set $X$ of possible values. Why should it be necessary to repeat that incantation every time a variable is introduced, thus using up two symbols where one would do? (Indeed, the range of values is usually indicated more clearly at the point where it matters, by adding conditions such as ($0 < x < 1$) after an equation.)

For a courteous reader, the fact that a writer differentiates f(x) twice already implies that he considers it twice differentiable; why should he be required to say everything twice? If he proves proposition A in enough generality to cover his application, why should he be obliged to use additional space for irrelevancies about the most general possible conditions under which A would be true?

A scourge as annoying as the fanatic is his cousin, the compulsive mathematical nitpicker. We expect that an author will define his technical terms, and then use them in a way consistent with his definitions. But if any other author has ever used the term with a slightly different shade of meaning, the nitpicker will be right there accusing you of inconsistent terminology. The writer has been subjected to this many times; and colleagues report the same experience.

Nineteenth century mathematicians were not being nonrigorous by their style; they merely, as a matter of course, extended simple civilized courtesy to others, and expected to receive it in return. This will lead one to try to read sense into what others write, if it can possibly be done in view of the whole context; not to pervert our reading of every mathematical work into a witch-hunt for deviations from the Official Style.

Therefore […] we issue the following:

Emancipation Proclamation

Every variable x that we introduce is understood to have some set X of possible values. Every function f(x) that we introduce is understood to be sufficiently well-behaved so that what we do with it makes sense. We undertake to make every proof general enough to cover the application we make of it. It is an assigned homework problem for the reader who is interested in the question to find the most general conditions under which the result would hold.

We could convert many 19th century mathematical works to 20th century standards by making a rubber stamp containing this Proclamation, with perhaps another sentence using the terms ‘sigma-algebra, Borel field, Radon-Nikodym derivative’, and stamping it on the first page.

Modern writers could shorten their works substantially, with improved readability and no decrease in content, by including such a Proclamation in the copyright message, and writing thereafter in the 19th century style. Perhaps some publishers, seeing these words, may demand that they do this for economic reasons; it would be a service to science.

by leinster (Tom.Leinster@ed.ac.uk) at November 16, 2014 11:15 PM

Michael Schmitt - Collider Blog

Quark contact interactions at the LHC

So far, no convincing sign of new physics has been uncovered by the CMS and ATLAS collaborations. Nonetheless, the scientists continue to look using a wide variety of approaches. For example, a monumental work on the coupling of the Higgs boson to vector particles has been posted by the CMS Collaboration (arXiv:1411.3441). The authors conducted a thorough and very sophisticated statistical analysis of the kinematic distributions of all relevant decay modes, with the conclusion that the data for the Higgs boson are fully consistent with the standard model expectation. The analysis and article are too long for a blog post, however, so please see the paper if you want to learn the details.

The ATLAS Collaboration posted a paper on generic searches for new physics signals based on events with three leptons (e, μ and τ). This paper (arXiv:1411.2921) is a longish one describing a broad-based search with several categories of events defined by lepton flavor, charge and other event properties. In all categories the observation confirms the predictions based on standard model processes: the smallest p-value is 0.05.

A completely different search for new physics based on a decades-old concept was posted by CMS (arXiv:1411.2646). We all know that the Fermi theory of weak interactions starts with a so-called contact interaction characterized by an interaction vertex with four legs. The Fermi constant serves to parametrize the interaction, and the participation of a vector boson is immaterial when the energy of the interaction is low compared to the boson mass. This framework is the starting point for other effective theories, and has been employed at hadron colliders when searching for deviations in quark-quark interactions, as might be observable if quarks were composite.

The experimental difficulty in studying high-energy quark-quark scattering is that the energies of the outgoing quarks are not so well measured as one might like. (First, the hadronic jets that materialize in the detector do not precisely reflect the quark energies, and second, jet energies cannot be measured better than a few percent.) It pays, therefore, to avoid using energy as an observable and to get the most out of angular variables, which are well measured. Following analyses done at the Tevatron, the authors use a variable χ = exp(|y1-y2|), which is a simple function of the quark scattering angle in the center-of-mass frame. The distribution of events in χ can be unambiguously predicted in the standard model and in any other hypothetical model, and confronted with the data. So we have a nice case for a goodness-of-fit test and pairwise hypothesis testing.
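To make the angular variable concrete, here is a minimal R sketch (my own toy, not the CMS analysis code): it draws made-up jet rapidities, computes χ = exp(|y1-y2|) event by event, and histograms it. χ equals 1 for jets at equal rapidity (central scattering) and grows large for forward, Rutherford-like scattering, which is why QCD t-channel exchange gives a roughly flat χ distribution while a contact interaction would pile up events at low χ.

# Toy illustration with invented rapidities (not CMS data):
set.seed(1)
y1 <- runif(1000, -2.5, 2.5)   # rapidity of the leading jet (made-up values)
y2 <- runif(1000, -2.5, 2.5)   # rapidity of the subleading jet (made-up values)

chi <- exp(abs(y1 - y2))       # chi >= 1; small chi = central, large chi = forward scattering
summary(chi)

# In the analysis, chi distributions are formed in bins of dijet mass MJJ
# and their shapes are compared to the SM prediction.
hist(chi, breaks = 30, main = "toy chi distribution")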

The traditional parametrization of the interaction Lagrangian is:
[Image: the standard parametrization of the quark contact-interaction Lagrangian, with chirality coefficients η and mass scale Λ]
where the η parameters have values -1, 0, +1 and specify the chirality of the interaction; the key parameter is the mass scale Λ. An important detail is that this interaction Lagrangian can interfere with the standard model piece, and the interference can be either destructive or constructive, depending on the values of the η parameters.

The analysis proceeds exactly as one would expect: events must have at least two jets, and when there are more than two, the two highest-pT jets are used and the others ignored. Distributions of χ are formed for several ranges of di-jet invariant mass, MJJ, which extends as high as 5.2 TeV. The measured χ distributions are unfolded, i.e., the effects of detector resolution are removed from the distribution on a statistical basis. The main sources of systematic uncertainty come from the jet energy scale and resolution and are based on an extensive parametrization of jet uncertainties.

Since one is looking for deviations with respect to the standard model prediction, it is very important to have an accurate prediction. Higher-order terms must be taken into account; these are available at next-to-leading order (NLO). In fact, even electroweak corrections are important and amount to several percent as a strong function of χ – see the plot on the right. The scale uncertainties are a few percent (again showing that a very precise SM prediction is non-trivial even for pp→2J) and fortunately the PDF uncertainties are small, at the percent level. Theoretical uncertainties dominate for MJJ near 2 TeV, while statistical uncertainties dominate for MJJ above 4 TeV.

The money plot is this one:
Measured and predicted distributions in χ
Visually, the plot is not exciting: the χ distributions are basically flat and deviations due to a mass scale Λ = 10 TeV would be mild. Such deviations are not observed. Notice, though, that the electroweak corrections do improve the agreement with the data in the lowest χ bins. Loosely speaking, this improvement corresponds to about one standard deviation and therefore would be significant if CMS actually had evidence for new physics in these distributions. As far as limits are concerned, the electroweak corrections are “worth” 0.5 TeV.

The statistical (in)significance of any deviation is quantified by a ratio of log-likelihoods, q = -2 ln(L_SM+NP / L_SM), where SM stands for standard model and NP for new physics (i.e., one of the distinct possibilities given in the interaction Lagrangian above). Limits are derived on the mass scale Λ depending on assumed values for the η parameters; they are very nicely summarized in this graph:
Summary of limits placed on mass scales.
The limits for contact interactions are roughly at the 10 TeV scale — well beyond the center-of-mass energy of 8 TeV. I like this way of presenting the limits: you see the expected value (black dashed line) and an envelope of expected statistical fluctuations from this expectation, with the observed value clearly marked as a red line. All limits are slightly more stringent than the expected ones (these are not independent of course).
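As a toy illustration of the test statistic (not the CMS statistical machinery, which also handles unfolding and the systematic uncertainties discussed above), here is a binned-Poisson sketch in R of q = -2 ln(L_SM+NP / L_SM); all yields below are invented.

# Toy binned-Poisson version of q = -2 ln(L_SM+NP / L_SM); all numbers invented.
obs  <- c(120, 95, 88, 80, 76, 70)   # hypothetical observed counts per chi bin
sm   <- c(118, 97, 87, 82, 75, 71)   # hypothetical SM expectation
smnp <- c(140, 110, 95, 85, 77, 71)  # hypothetical SM + contact-interaction expectation

loglik <- function(n, mu) sum(dpois(n, lambda = mu, log = TRUE))

q <- -2 * (loglik(obs, smnp) - loglik(obs, sm))
q   # a large positive q disfavours the new-physics hypothesis relative to the SM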

The authors also considered models of extra spatial dimensions and place limits on the scale of the extra dimensions at the 7 TeV level.

So, absolutely no sign of new physics here. The LHC will turn on in 2015 at a significantly higher center-of-mass energy (13 TeV), and given the ability of this analysis to probe mass scales well above the proton-proton collision energy, a study of the χ distribution will be interesting.


by Michael Schmitt at November 16, 2014 04:45 PM

Lubos Motl - string vacua and pheno

CMS: locally 2.6 or 2.9 sigma excess for another \(560\GeV\) Higgs boson \(A\)
And there are theoretical reasons why this could be the right mass

Yesterday, the CMS Collaboration at the LHC published the results of a new search:
Search for a pseudoscalar boson \(A\) decaying into a \(Z\) and an \(h\) boson in the \(\ell^+\ell^- \bar b b\) final state
They look at collisions with the \(\ell\ell bb\) final state and interpret them using two-Higgs-doublet model scenarios.




There are no stunning excesses in the data.




But I think it's always a good idea to point out the most significant excess they see in the data, and the CMS folks do just that in this paper, too.

On page 10, one may see Figure 4 and Figure 5 that show the main results.



According to Figure 4, a new Higgs boson with \(\Gamma=0\) has some cross section (multiplied by the branching ratio) that stays within the 2-sigma band but reveals a deficit "slightly exceeding 2 sigma" for \(m_A=240\GeV\) and slight 2-sigma excesses for \[

m_A = 260\GeV, \quad 315\GeV, \quad 560 \GeV.

\] And let's not forget about a different CMS search that suggested \(m_H=137\GeV\).

The excess for \(m_A=560\GeV\) has the local significance of 2.6 sigma which reduces to just 1.1 sigma "globally", after the look-elsewhere-effect correction.

As Figure 5 (which is similar but fuzzier) shows, this excess for \(m_A=560\GeV\) becomes even larger, 2.9 sigma (or 1.6 sigma globally) if we assume a larger decay width of this \(A\) boson, namely \(\Gamma=30\GeV\). The significance levels are mentioned in the paper, too.

That is somewhat intriguing. If there's another search for such bosons, don't forget to look for similar excesses at this mass. But it's nothing to lose your sleep over, of course.

Recall that the minimal supersymmetric standard model – a special, more motivated subclass of the two-Higgs-doublet model – predicts five Higgs particles because \(8-3=5\) expresses the a priori real scalar degrees of freedom minus those eaten by the 3 broken symmetry generators.

These 5 bosons may be denoted \(h,H,A,H^\pm\). The first three bosons are neutral, the last two are charged. \(A\) is the only CP-odd CP-eigenstate.

If you want to get excited by a paper/talk that "predicted" this \(m_A=560\GeV\) while \(m_h=125\GeV\), open this June 2014 talk
The post-Higgs MSSM scenario
by Abdelhak Djouadi of CNRS Paris. On page 13, he deduces that a "best fit" in MSSM has\[

\tan\beta=1, \quad m_A = 560\GeV,\\
m_h = 125\GeV, \quad m_H = 580\GeV,\\
m_{H^\pm} = 563 \GeV

\] although the sentence right beneath that indicates that the author thinks that many other points are rather good fits, too. Good luck to that prediction, anyway. ;-)

The very same scenario with the same values of the masses is also defended in this May 2014 paper by Jérémie Quevillon who argues that these values of the new Higgses are almost inevitable consequences of supersymmetry given the superpartner masses' being above \(1\TeV\).

It sounds cool despite the fact that the simplest, truly MSSM-based scenarios corresponding to their "best fit" involve superpartners around \(100\TeV\). The discovery of the Higgses near \(560\GeV\) in 2015 would be circumstantial evidence in favor of supersymmetry, nevertheless.

Update: Abdelhak Djouadi told me that their scenario only predicts some 0.5 fb cross section (with the factors added) but one needs about 5 fb to explain the excess above. So it's bad news.

by Luboš Motl (noreply@blogger.com) at November 16, 2014 03:50 PM

ZapperZ - Physics and Physicists

"Should I Go Into Physics Or Engineering?"
I get asked that question a lot, and I also see similar questions on Physics Forums. Kids who are either still in high school or starting their undergraduate years are asking which area of study they should pursue. In fact, I've seen cases where students ask whether they should do "theoretical physics" or "engineering", as if there were nothing in between those two extremes!

My response has always been consistent: I ask them why they can't have their cake and eat it too.

This question often arises out of ignorance of what physics really encompasses. Many people, especially high school students, still think of physics as being this esoteric subject matter, dealing with elementary particles, cosmology, wave-particle duality, etc., etc., things that they don't see involving everyday stuff. On the other hand, engineering involves things that they use and deal with every day, where the products are often found around them. So obviously, with such an impression, those two areas of study are very different and very separate.

I try to tackle such a question by correcting their misleading understanding of what physics is and what a lot of physicists do. I tell them that physics isn't just the LHC or the Big Bang. It is also your iPhone, your medical x-ray, your MRI, your hard drive, your silicon chips, etc. In fact, the largest percentage of practicing physicists are in the field of condensed matter physics/materials science, an area of physics that studies the basic properties of materials, the same ones that are used in modern electronics. I point them to the many Nobel Prizes in Physics that were awarded to condensed matter physicists or for the invention of practical items (graphene, lasers, etc.). So already, doing physics and doing something "practical and useful" may not be mutually exclusive.

Secondly, I point to different areas of physics in which physics and engineering smoothly intermingle. I mentioned earlier the field of accelerator physics, in which you see both physics and engineering come into play. In fact, in this field you have both physicists and electrical engineers, and they often do the same thing. The same can be said about those in instrumentation/device physics. In fact, I have also seen many high-energy physics graduate students working on detectors for particle colliders who looked more like electronics engineers than physicists! So for those working in this field, the line between doing physics and doing engineering is sufficiently blurred. You can do exactly what you want, leaning as heavily towards the physics side or the engineering side as you want, or straddling exactly in the middle. And you can approach these fields from either a physics major or an electrical engineering major. The point here is that there are areas of study in which you can do BOTH physics and engineering!

Finally, the reason why you don't have to choose to major in either physics or engineering is that there are many schools that offer a major in BOTH! My alma mater, the University of Wisconsin-Madison (Go Badgers!), has a major called AMEP - Applied Mathematics, Engineering, and Physics - where, with your advisor, you can tailor a major that straddles two or more of the areas of math, physics, and engineering. There are other schools that offer majors in Engineering Physics or something similar. In other words, you don't have to choose between physics and engineering. You can just do BOTH!

Zz.

by ZapperZ (noreply@blogger.com) at November 16, 2014 01:29 PM

Tommaso Dorigo - Scientificblogging

A New Search For The A Boson With CMS
I am quite happy to report today that the CMS experiment at the CERN Large Hadron Collider has just published a new search which fills a gap in studies of extended Higgs boson sectors. It is a search for the decay of the A boson into Zh pairs, where the Z in turn decays to an electron-positron or a muon-antimuon pair, and the h is assumed to be the 125 GeV Higgs and is sought in its decay to b-quark pairs.

If you are short of time, this is the bottom line: no A boson is found in Run 1 CMS data, and limits are set in the parameter space of the relevant theories. But if you have a bit more time to spend here, let's start at the beginning: what's the A boson, you might wonder for a start.

read more

by Tommaso Dorigo at November 16, 2014 09:54 AM

November 15, 2014

Lubos Motl - string vacua and pheno

Is our galactic black hole a neutrino factory?
When I was giving a black hole talk two days ago, I described Sagittarius A*, the black hole in the center of the Milky Way, our galaxy, as our "most certain" example of an astrophysical black hole that is actually observed in telescopes. Its mass is 4 million solar masses – the object is not a negligible dwarf.

Incidentally, a term paper and presentation I did at Rutgers more than 15 years ago was about Sgr A*. Of course, I had no doubt it was a black hole at that time.



Today, science writers affiliated with all the usual suspects (e.g. RT) ran the story that Sgr A* is a high-energy neutrino factory.

Why now? Well, a relevant paper got published in Physical Review D. Again, it wasn't today, it was almost 2 months ago, but a rational justification for the explosion of hype in the middle of November 2014 simply doesn't exist. Someone at NASA helped the media to explode – by this press release – and they did explode, copying from each other in the usual way.




The actual paper was published as the July 2014 preprint
Neutrino Lighthouse at Sagittarius A*
by Bai, Barger squared, Lu, Peterson, and Salvado. Their main argument in favor of the bizarrely sounding claim that "Sgr A* produces high-energy neutrinos" comes from something that looks like a timing coincidence.




The Chandra X-ray Observatory and its NuSTAR and Swift friends – all in space – detected some outbursts or flares between 2010 and 2013. And the timing and (limited data about) locations seemed remarkably close to some detections of high-energy neutrinos by IceCube at the South Pole.

IceCube saw an exceptional neutrino 2-3 hours before a remarkable X-ray flare seen in the space X-ray telescopes, and so on. The confidence level is just around 99%. Yes, the word "before" sounds like the stories about OPERA that would detect "faster than light" neutrinos.

To my taste, the confidence level supporting the arguments is lousy. But even if I accept the possibility that the neutrinos are coming from the direction of Sgr A*, they're almost certainly not due to the black hole itself. Or at least, I would be stunned if the event horizon – which is what allows us to call the object a black hole – were needed for the emission of these high-energy neutrinos.

In particular, I emphasize that the Hawking radiation for such macroscopic black holes should be completely negligible, emitting virtually no massive particles (and neutrinos are light from some viewpoints but very massive relative to the typical Hawking quanta).

It seems much more likely to me that the X-rays as well as (possibly) the neutrinos are due to some messy astrophysical effects in the vicinity of the black hole. What are these astrophysical effects?

They propose that the neutrinos are created by decays of charged pions – which seems like a very likely birth of neutrinos to me (at least if one assumes that beyond the Standard Model physics is not participating). But these charged pions are there independently of the event horizon, aren't they? If the neutrinos arise from decaying charged pions near the black hole, there should also be neutral pions and their decays should produce gamma rays (near a TeV) which should be visible to the CTA, HAWC, H.E.S.S. and VERITAS experiments, they say.

At this moment, the paper has 3 citations.

The first one, by Brian Vlček et al. (sorry, it is vastly easier to choose the Czech name and write this complicated disclaimer than to remember the non-Czech name), refers to an IceCube statement that the origin of the neutrinos could be LS 5039, a binary object, which is clearly distinct from Sgr A*, but I guess it's close enough. Correct me if I misunderstood something about the apparent identification of these two explanations.

Murase talks about the neutrino flux around the Fermi bubbles in the complicated galactic central environment. These thoughts have the greatest potential to be relevant for fundamental physics, I think. Esmaili et al. count the paper about the "neutrino lighthouse" among 15 or so "speculative" papers ignited by IceCube's surprising observation of high-energy neutrinos.

So I do think that this lighthouse neutrino paper was overhyped, much like most papers that attract the journalists' attention, but sometimes it's good if random papers are reported in the media as long as they are not completely pathetic, and this one arguably isn't "quite" pathetic.

by Luboš Motl (noreply@blogger.com) at November 15, 2014 04:40 PM

John Baez - Azimuth

A Second Law for Open Markov Processes

guest post by Blake Pollard

What comes to mind when you hear the term ‘random process’? Do you think of Brownian motion? Do you think of particles hopping around? Do you think of a drunkard staggering home?

Today I’m going to tell you about a version of the drunkard’s walk with a few modifications. Firstly, we don’t have just one drunkard: we can have any positive real number of drunkards. Secondly, our drunkards have no memory; where they go next doesn’t depend on where they’ve been. Thirdly, there are special places, such as entrances to bars, where drunkards magically appear and disappear.

The second condition says that our drunkards satisfy the Markov property, making their random walk into a Markov process. The third condition is really what I want to tell you about, because it makes our Markov process into a more general ‘open Markov process’.

There are a collection of places the drunkards can be, for example:

V= \{ \text{bar},\text{sidewalk}, \text{street}, \text{taco truck}, \text{home} \}

We call this set V the set of states. There are certain probabilities associated with traveling between these places. We call these transition rates. For example, it is more likely for a drunkard to go from the bar to the taco truck than to go from the bar to home, so the transition rate between the bar and the taco truck should be greater than the transition rate from the bar to home. Sometimes you can’t get from one place to another without passing through intermediate places. In reality the drunkard can’t go directly from the bar to the taco truck: he or she has to go from the bar to the sidewalk to the taco truck.

This information can all be summarized by drawing a directed graph where the positive numbers labelling the edges are the transition rates:

For simplicity we draw only three states: home, bar, taco truck. Drunkards go from home to the bar and back, but they never go straight from home to the taco truck.

We can keep track of where all of our drunkards are using a vector with 3 entries:

\displaystyle{ p(t) = \left( \begin{array}{c} p_h(t) \\ p_b(t) \\ p_{tt}(t) \end{array} \right) \in \mathbb{R}^3 }

We call this our population distribution. The first entry p_h is the number of drunkards that are at home, the second p_b is how many are at the bar, and the third p_{tt} is how many are at the taco truck.

There is a set of coupled, linear, first-order differential equations we can write down using the information in our graph that tells us how the number of drunkards in each place change with time. This is called the master equation:

\displaystyle{ \frac{d p}{d t} = H p }

where H is a 3×3 matrix which we call the Hamiltonian. The off-diagonal entries are nonnegative:

H_{ij} \geq 0, i \neq j

and the columns sum to zero:

\sum_i H_{ij}=0

We call a matrix satisfying these conditions infinitesimal stochastic. Stochastic matrices have columns that sum to one. If we take the exponential of an infinitesimal stochastic matrix we get one whose columns sum to one, hence the label ‘infinitesimal’.

The Hamiltonian for the graph above is

H = \left( \begin{array}{ccc} -2 & 5 & 10 \\ 2 & -12 & 0 \\ 0 & 7 & -10 \end{array} \right)

John has written a lot about Markov processes and infinitesimal stochastic Hamiltonians in previous posts.
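Here is a small R sketch (mine, not from the post) that encodes this Hamiltonian, checks the infinitesimal stochastic conditions, and evolves a made-up initial population with simple Euler steps of the master equation; because the columns of H sum to zero, the total number of drunkards stays constant.

# The Hamiltonian for the (home, bar, taco truck) graph above.
H <- matrix(c(-2,   5,  10,
               2, -12,   0,
               0,   7, -10),
            nrow = 3, byrow = TRUE)

# Infinitesimal stochastic checks: nonnegative off-diagonal entries, columns summing to zero.
stopifnot(all(H[row(H) != col(H)] >= 0), all(abs(colSums(H)) < 1e-12))

# Evolve dp/dt = H p with Euler steps, starting from a made-up population.
p  <- c(10, 5, 1)   # drunkards at home, at the bar, at the taco truck
dt <- 0.001
for (k in 1:5000) p <- as.vector(p + dt * (H %*% p))

print(p)       # the population distribution at t = 5
print(sum(p))  # the total population is conserved (16, up to numerical error)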

Given two vectors p,q \in \mathbb{R}^3 describing the populations of drunkards which obey the same master equation, we can calculate the relative entropy of p relative to q:

\displaystyle{ S(p,q) = \sum_{ i \in V} p_i \ln \left( \frac{p_i}{q_i} \right) }

This is an example of a ‘divergence’. In statistics, a divergence is a way of measuring the distance between probability distributions; it may not be symmetric and may not even obey the triangle inequality.

The relative entropy is important because it decreases monotonically with time, making it a Lyapunov function for Markov processes. Indeed, it is a well known fact that

\displaystyle{ \frac{dS(p(t),q(t) ) } {dt} \leq 0 }

This is true for any two population distributions which evolve according to the same master equation, though you have to allow infinity as a possible value for the relative entropy and negative infinity for its time derivative.

Why is entropy decreasing? Doesn’t the Second Law of Thermodynamics say entropy increases?

Don’t worry: the reason is that I have not put a minus sign in my definition of relative entropy. Put one in if you like, and then it will increase. Sometimes, without the minus sign, it’s called the Kullback–Leibler divergence. It decreases with the passage of time, saying that any two population distributions p(t) and q(t) get ‘closer together’ as they get randomized.
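Here is a quick numerical check of that monotonicity in R (my own sketch, with two made-up initial populations): evolve p and q under the same master equation and confirm that S(p, q) never increases along the way.

# Check numerically that S(p,q) = sum_i p_i log(p_i/q_i) is non-increasing
# when p and q evolve under the same master equation (made-up initial data).
H <- matrix(c(-2,   5,  10,
               2, -12,   0,
               0,   7, -10),
            nrow = 3, byrow = TRUE)

rel_entropy <- function(p, q) sum(p * log(p / q))

p <- c(10, 5, 1)   # two made-up population distributions
q <- c(2, 8, 6)
dt <- 0.001

S <- numeric(5000)
for (k in 1:5000) {
  S[k] <- rel_entropy(p, q)
  p <- as.vector(p + dt * (H %*% p))
  q <- as.vector(q + dt * (H %*% q))
}
all(diff(S) <= 1e-9)   # TRUE: S only goes down (up to floating-point noise)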

That itself is a nice result, but I want to tell you what happens when you allow drunkards to appear and disappear at certain states. Drunkards appear at the bar once they’ve had enough to drink, and once they have been home for long enough they can disappear. The set B of places where drunkards can appear or disappear is called the set of boundary states. So for the above process

B = \{ \text{home},\text{bar} \}

is the set of boundary states. This changes the way in which the population of drunkards changes with time!

The drunkards at the taco truck obey the master equation. For them,

\displaystyle{ \frac{dp_{tt}}{dt} = 7p_b -10 p_{tt} }

still holds. But because the populations can appear or disappear at the boundary states the master equation no longer holds at those states! Instead it is useful to define the flow of drunkards into the i^{th} state by

\displaystyle{ \frac{Dp_i}{Dt} = \frac{dp_i}{dt}-\sum_j H_{ij} p_j}

This quantity describes by how much the rate of change of the populations at the boundary states differs from that given by the master equation.
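To make the definition concrete, here is a short R sketch (my own, with an invented boundary behaviour in which the populations at home and at the bar are simply held fixed): the flow Dp_i/Dt then comes out nonzero exactly at the two boundary states and zero at the taco truck, where the master equation still holds.

# Boundary flow Dp_i/Dt = dp_i/dt - (H p)_i for the drunkards' graph.
# Invented boundary behaviour: home and bar populations are clamped, so dp_i/dt = 0 there,
# while the taco truck obeys the master equation.
H <- matrix(c(-2,   5,  10,
               2, -12,   0,
               0,   7, -10),
            nrow = 3, byrow = TRUE)
boundary <- c(TRUE, TRUE, FALSE)   # (home, bar, taco truck)

p <- c(4, 9, 2)                    # made-up populations
dpdt <- as.vector(H %*% p)
dpdt[boundary] <- 0                # clamped boundary states do not change

Dp_Dt <- dpdt - as.vector(H %*% p)
print(Dp_Dt)   # nonzero only at the boundary states, zero at the taco truck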

We are interested in open Markov processes because you can take two open Markov processes and glue them together along some subset of their boundary states to get a new open Markov process! This allows us to build up or break down complicated Markov processes using open Markov processes as the building blocks.

For example we can draw the graph corresponding to the drunkards’ walk again, only now we will distinguish boundary states from internal states by coloring internal states blue and having boundary states be white:

Consider another open Markov process with states

V=\{ \text{home},\text{work},\text{bar} \}

where

B=\{ \text{home}, \text{bar}\}

are the boundary states, leaving

I=\{\text{work}\}

as an internal state:

Since the boundary states of this process overlap with the boundary states of the first process we can compose the two to form a new Markov process:

Notice the boundary states are now internal states. I hope any Markov process that could approximately model your behavior has more interesting nodes! There is a nice way to figure out the Hamiltonian of the composite from the Hamiltonians of the pieces, but we will leave that for another time.

We can ask ourselves, how does relative entropy change with time in open Markov processes? You can read my paper for the details, but here is the punchline:

\displaystyle{ \frac{dS(p(t),q(t) ) }{dt} \leq \sum_{i \in B} \frac{Dp_i}{Dt}\frac{\partial S}{\partial p_i} + \frac{Dq_i}{Dt}\frac{\partial S}{\partial q_i} }

This is a version of the Second Law of Thermodynamics for open Markov processes.

It is important to notice that the sum is only over the boundary states! This inequality tells us that relative entropy still decreases inside our process, but depending on the flow of populations through the boundary states the relative entropy of the whole process could either increase or decrease! This inequality will be important when we study how the relative entropy changes in different parts of a bigger more complicated process.
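Here is a self-contained numerical check of this inequality in R (my own construction: the clamped boundary populations are an invented choice of boundary behaviour, and the partial derivatives of S(p,q) = Σ_i p_i ln(p_i/q_i) are ∂S/∂p_i = ln(p_i/q_i) + 1 and ∂S/∂q_i = -p_i/q_i).

# Numerical check of dS/dt <= sum over boundary states of
#   Dp_i/Dt * dS/dp_i + Dq_i/Dt * dS/dq_i
# for the drunkards' Hamiltonian with clamped (invented) boundary populations.
H <- matrix(c(-2,   5,  10,
               2, -12,   0,
               0,   7, -10),
            nrow = 3, byrow = TRUE)
boundary <- c(TRUE, TRUE, FALSE)   # (home, bar, taco truck)

rel_entropy <- function(p, q) sum(p * log(p / q))

euler_step <- function(x, dt) {    # one Euler step with the boundary states held fixed
  dxdt <- as.vector(H %*% x)
  dxdt[boundary] <- 0
  x + dt * dxdt
}

p <- c(4, 9, 2);  q <- c(6, 3, 5)  # made-up populations
dt <- 1e-5

# Left-hand side: finite-difference estimate of dS/dt.
lhs <- (rel_entropy(euler_step(p, dt), euler_step(q, dt)) - rel_entropy(p, q)) / dt

# Boundary flows (dp_i/dt = 0 at the clamped states) and the right-hand side of the inequality.
Dp <- -as.vector(H %*% p);  Dp[!boundary] <- 0
Dq <- -as.vector(H %*% q);  Dq[!boundary] <- 0
rhs <- sum(Dp * (log(p / q) + 1) + Dq * (-p / q))

lhs <= rhs   # TRUE: the open-process Second Law holds for this example

With these made-up numbers dS/dt actually comes out positive, which illustrates the point above: with populations flowing through the boundary, the relative entropy of the open process can increase, but it stays below the boundary-flow bound.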

That is all for now, but I leave it as an exercise for you to imagine a Markov process that describes your life. How many states does it have? What are the relative transition rates? Are there states you would like to spend more or less time in? Are there states somewhere you would like to visit?

Here is my paper, which proves the above inequality:

• Blake Pollard, A Second Law for open Markov processes.

If you have comments or corrections, let me know!


by John Baez at November 15, 2014 03:00 AM

November 14, 2014

CERN Bulletin

CHIS - Information concerning the health insurance of frontalier workers who are family members of a CHIS main member

We recently informed you that the Organization was still in discussions with the Host State authorities to clarify the situation regarding the health insurance of frontalier workers who are family members (as defined in the Staff Rules and Regulations) of a CHIS main member, and that we were hoping to arrive at a solution soon.

 

After extensive exchanges, we finally obtained a response a few days ago from the Swiss authorities, with which we are fully satisfied and which we can summarise as follows:

1) Frontalier workers who are currently using the CHIS as their basic health insurance can continue to do so.

2) Family members who become frontalier workers, or those who have not yet exercised their “right to choose” (droit d’option) can opt to use the CHIS as their basic health insurance. To this end, they must complete the form regarding the health insurance of frontaliers, ticking the LAMal box and submitting their certificate of CHIS membership (available from UNIQA). 

3) For family members who joined the LAMal system since June 2014, CERN is in contact with the Swiss authorities and the Geneva Health Insurance Service with a view to securing an exceptional arrangement allowing them to leave the LAMal system and use the CHIS as their basic health insurance.

4) People who exercised their “right to choose” and opted into the French Sécurité sociale or the Swiss LAMal system before June 2014 can no longer change, as the decision is irreversible. As family members, however, they remain beneficiaries of the CHIS, which then serves as their complementary insurance.

5) If a frontalier family member uses the CHIS as his or her basic health insurance and the main member concerned ceases to be a member of the CHIS or the relationship between the two ends (divorce or dissolution of a civil partnership), the frontalier must join LAMal.

We hope that this information satisfies your expectations and concerns. We would like to thank the Host State authorities for their help in clarifying these highly complex issues.

We remind you that staff members, fellows and beneficiaries of the CERN Pension Fund must declare the professional situation and health insurance cover of their spouse or partner, as well as any changes in this regard, pursuant to Article III 6.01 of the CHIS Rules. In addition, in cases where a spouse or partner wishes to use the CHIS as his or her basic insurance and receives income from a professional activity or a retirement pension, the main member must pay a supplementary contribution based on the income of the spouse or partner, in accordance with Article III 5.07 of the CHIS Rules. For more information, see www.cern.ch/chis/DCSF.asp.

The CHIS team is on hand to answer any questions you may have on this subject, which you can submit to Chis.Info@cern.ch. The above information, as well as the Note Verbale from the Permanent Mission of Switzerland, is available in the frontaliers section of the CHIS website: www.cern.ch/chis/frontaliers.asp

November 14, 2014 03:11 PM

CERN Bulletin

Micro club
Opération NEMO   To round off in style the special activities that the CMC has organised during 2014 to commemorate the 60th anniversary of CERN and the 30th of the Micro Club, this year's Opération NEMO will have a very special character. We will feature 6 first-rate manufacturers, each offering two or three products at exceptional prices. The operation starts on Monday, 17 November 2014 and will continue until Saturday, 6 December inclusive. Delivery times will be two to three weeks, depending on the manufacturer, so orders placed in the last week, from 1 to 6 December, may not arrive until the beginning of January 2015. List of manufacturers taking part in this final operation of the year: Apple Computer, Lenovo, Toshiba, Brother, LaCie and Western Digital. For Apple, for example, only the MacBook Pro 15” Retina, in all possible configurations and with all possible keyboards, is included in this operation. For the other manufacturers mentioned, we will have details of the offers from Monday onwards. For any information requests or orders, send an email to: cmc.orders@cern.ch. Best regards, Your CMC Team.

by Micro club at November 14, 2014 02:34 PM

CERN Bulletin

France @ CERN | Come and meet 37 French companies at the 2014 “France @ CERN” Event | 1-3 December
The 13th “France @ CERN” event will take place from 1 December to 3 December 2014. Thanks to Ubifrance, the French agency for international business development, 37 French firms will have the opportunity to showcase their know-how at CERN.   These companies are looking forward to meeting you during the B2B sessions which will be held on Tuesday, 2 December (afternoon) and on Wednesday, 3 December (afternoon) in buildings 500 and 61 or at your convenience in your own office. The fair’s opening ceremony will take place on Tuesday, 2 December (morning) in the Council Chamber in the presence of Rolf Heuer, Director-General of CERN and Nicolas Niemtchinow, Ambassador, Permanent Representative of France to the United Nations in Geneva and to international organisations in Switzerland. For more information about the event and the 37 participating French firms, please visit: http://www.la-france-au-cern.com/

November 14, 2014 02:31 PM

CERN Bulletin

Upcoming renovations in Building 63
La Poste will close its doors in Building 63 on Friday, 28 November. It is moving to Building 510, where it will open on 1 December.   UNIQA will close its HelpDesk in Building 63 on Wednesday, 26 November and will re-open the next day in Building 510. La Poste and UNIQA are expected to return to their renovated office space between April and May 2015.

November 14, 2014 02:25 PM