Particle Physics Planet


September 02, 2015

Lubos Motl - string vacua and pheno

GR, QG: horizons, gravitational waves, Hawking radiation etc. are more than beliefs
We haven't visited a black hole – no one will ever return alive from the black hole interior. We haven't directly and/or clearly observed the Hawking radiation, the Unruh radiation, a singularity in general relativity, gravitational waves let alone gravitons, excited strings, additional compactified spatial dimensions, superpartners, quanta of the inflaton field, regions with a different value of the inflaton field, regions where the extra dimensions are compactified differently from ours, and many other things.

Nevertheless, theoretical physicists generally assume that all those things – or at least most of them – exist. They do so most of the time. They say that they "believe" that those things are needed. Is the word "believe" another proof that their activity has evolved into a religion?

Not at all. The word "believe X" simply means "my opinion is that X is probably right". Both religious and irreligious people have the right to "believe". The churches have no monopoly over the word. And religious and irreligious people may become convinced about something equally staunchly and feel the same psychological certainty. Where the churches differ from the scientists is in the methods used to arrive at a "belief".

The methods matter. And because they're so different, science – including modern physics – is something totally different than religion.




The religious people generally decide that "X is true" if someone, especially some religious authority, says or writes that "X is true" or that "X is positively correlated with God" in any sense. The scientific people don't believe the holy texts or religious authorities. They ultimately decide that "X is true" if X is a simple enough hypothesis that implies something about Nature which agrees with experience even though there exist lots of alternatives to X that would disagree.




The scientific method starts by "guessing" a hypothesis. The hypothesis may be falsified by the observations. Or it may survive. When it survives many tests, it becomes important in the scientific thinking. When such a successful theory is falsified in the future, newer replacements are guessed as "variations" of some kind of the old theory because the scientists know that the replacement theory will have to pass the same tests that the old theory did.

It's important that the hypotheses are not being "directly deduced or read" from Nature. All the work of "guessing" viable hypotheses is a creative work of induction that relies on some special talents of the people. David Hume (1711-1776) was the philosopher who famously understood this point – that the "creation" of viable hypotheses doesn't and can't proceed by strict reasoning but has to rely on induction. Science just can't work without induction. The scientific process always depends on some kind of "reverse engineering" and there may exist no universal recipe for how to successfully "reverse engineer" in every situation. Search for "David Hume" and "induction".

Interestingly enough, Isaac Newton (1642-1726) didn't appreciate that point yet. In the General Scholium appended to the 1713 edition of the Principia, he presented his famous slogan hypotheses non fingo, "I feign no hypotheses". He was clearly convinced that he was deducing the laws of physics directly from the observed phenomena. They were no "hypotheses" for him. They were the "truth" that he directly saw. That's what he believed and it was clearly wrong. After all, Newton's laws were wrong, when presented as the complete truth about Nature, for several independent reasons.

So what Newton actually did was to apply his creative genius and invent a great guess – along with the new discipline of mathematics that was needed to make it work. His framework of physics remained the basis for all of physics in the following 250 years before it was found insufficient. A part of his "hypotheses non fingo" may have been due to his genuine misunderstanding. A part of it may have been (probably pretended) modesty: Everyone else could have noticed Newton's laws written all over Nature, too.

No, they couldn't.

Physics differs from less structured disciplines by building a tall pyramid or skyscraper where the higher floors depend on the lower ones. In the analogy, the older and approximate theories may either be placed at the bottom or at the top – there are different ways to adjust the details of this metaphor. But in all cases, the stability of the skyscraper depends on the robustness of the floors. You can't build a good skyscraper out of sand.

So physicists are really working – are forced to be working – with some concrete blocks. It's actually at least iron-reinforced concrete. What they want to understand simply has many levels or layers or floors and resilient building blocks are needed for any structure that can sustain those floors. Mathematics, often nearly rigorous and boasting very accurate numbers and propositions everywhere, is what makes these blocks so sound.

This robustness guarantees that physicists may confidently predict the results of many experiments they haven't observed. The physical laws and especially the metalaws – the laws about laws – that the physicists first guess and then validate are so powerful that physicists may trust (again, without any link to religion) interpolations and even extrapolations that would be pure speculations in less structured and less rigorous scientific disciplines.

A zoologist may observe a spider that has eight legs. A larger rabbit has four. A human has two. Does it follow that an even larger elephant has one leg? You may see that almost none of these extrapolations would ever work in zoology. But in physics, those extrapolations typically do work. They work because some quantities are described as basically smooth functions of others. The form of the function may be guessed, constrained by observations, linearized in some regions, and manipulated in lots of other ways. With these tools, you may predict infinitely many "similar" situations, situations that wouldn't be similar for a zoologist.

And the extrapolations are not just extrapolations of some values of parameters. Physicists may deduce the "natural" answers to qualitatively different, more complex questions out of their validated answers to different, simpler, or more elementary questions. Physicists are doing such things all the time. They wouldn't have gotten anywhere if they were not doing so. If they were imitating the zoologist, pretty much every situation and every question about it would be completely independent of the previous ones, of all the observations that have already been made. The "structure" of physics would evaporate.

But physics has lots of "structure". Physics is an iron-reinforced concrete that runs through Nature and connects objects and situations that are apparently so different and so distant that no non-physicist would dare to connect them (except for the religious people who connect them through God, of course). In effect, physicists may have and do have lots of (irreligious) faith about numerous questions whose answers simply couldn't have been extracted from direct observations – and most likely, they won't be extracted directly from the experience in the coming years, decades, centuries, and more.

I claim that those who don't understand why physicists feel so confident while answering questions without "directly observing" the answer simply do not understand the very character of physics, what kind of science it is and what it allows us to do and why.



A "linearly polarized" basis of the 2-dimensional space of polarizations of a gravitational wave.

Gravitational waves

The gravitational waves are analogous to the electromagnetic waves. They also have two independent polarizations for each direction+frequency. But the very existence of the gravitational waves as a consequence of Einstein's equations remained controversial for an unreasonably long time. The linearized calculation is straightforward.
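In formulas (the standard transverse-traceless form, written here for a wave propagating in the \(z\) direction), the metric perturbation of such a wave may be written as \[

h_{\mu\nu}\,dx^\mu dx^\nu = h_+\,(dx^2 - dy^2) + 2 h_\times\, dx\, dy,

\] where \(h_+\) and \(h_\times\) are the two independent polarizations – the "plus" and "cross" modes of the linearly polarized basis depicted in the figure above.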

But some people had irrational problems with the linearizations and they preferred to (semi-religiously) believe their prejudices that general relativity should ban the gravitational waves. These prejudices were remnants of Mach's principle which basically wants you to believe that the empty spacetime without "objects" is always the same. But GR says otherwise. The gravitational waves may be present or absent and the two situations demonstrably differ (well, if the spacetime allows the presence of some measurement apparatuses).

We haven't directly observed the gravitational waves. LIGO was running between 2002 and 2010 and detected none. A new gadget, Advanced LIGO, is under construction at the same sites, I think. Italy's VIRGO started in 2007 and has found nothing so far, either. LISA, the antenna that should measure those things in space, should be launched in 2034. Even the planned dates are very far away – and they may still turn out to be overly optimistic.

A person who is viscerally hostile towards physics may call the opinion that gravitational waves exist "religion". A physicist knows that the person from the previous sentence is an ignoramus. After all, a Nobel Prize in physics has already been given primarily for the indirect detection of gravitational waves. In 1993, Hulse and Taylor were rewarded for having seen a binary pulsar, a pair of neutron stars orbiting one another, whose orbital period is getting shorter with every revolution. The rate of this speed-up exactly agrees with the calculation in general relativity which implies that these two objects emit gravitational waves, lose energy, and therefore are falling closer to each other. The time from the zeroth revolution to the \(N\)-th revolution is a nice quadratic function of \(N\) with a negative quadratic coefficient.
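To see where the quadratic function of \(N\) comes from, here is a minimal numerical sketch (the numbers below are round illustrative values, not the measured parameters of the Hulse-Taylor pulsar): if the orbital period shrinks by a tiny fixed amount \(\Delta P\) per revolution, the time of the \(N\)-th revolution is \(T_N \approx N P_0 + \tfrac{1}{2}\Delta P\, N^2\), i.e. quadratic in \(N\) with a negative quadratic coefficient.

```python
import numpy as np

# Toy model: the orbital period shrinks by a fixed tiny amount dP per revolution.
# Illustrative numbers only, NOT the measured Hulse-Taylor parameters.
P0 = 27907.0            # initial orbital period [s], roughly 7.75 hours
dP = -2.0e-6            # period change per revolution [s]; negative = orbital decay

N = np.arange(0, 5000)
periods = P0 + dP * N                                      # period of the n-th revolution
T = np.concatenate(([0.0], np.cumsum(periods)))[:len(N)]   # time of the N-th revolution

# Fit T(N) = a*N^2 + b*N + c: the quadratic coefficient comes out negative,
# equal to dP/2, while the linear one is essentially P0.
a, b, c = np.polyfit(N, T, 2)
print(f"quadratic coefficient a = {a:.3e}   (expected dP/2 = {dP/2:.3e})")
print(f"linear coefficient    b = {b:.3f}   (expected ~P0 = {P0})")
```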

Can it be a coincidence? Could a theory without gravitational waves imply the same "acceleration" of the orbits of this object? Miracles – something that has nothing to do with any explanations we are aware of now – may in principle occur. But scientists generally assume that they are unlikely.

If we admit that we're not 100% certain that gravitational waves exist, what probability should we assign to them? In his discussion with David Simmons-Duffin, Richard Muller suggested 80%. Well, even if science implied a conclusion to be true with just 80% probability, scientists would have a reason to build on this assumption and it would be deeply misleading to call this type of work "religion", David stressed. It's science. 80% certainty is something we must get used to.

On the other hand, I think that the figure of 80% for the existence of the gravitational waves is just insanely low. My figure is about 99.99999%, some six sigma. Einstein's equations have been successfully tested – every piece of them, if you wish, but the pieces co-exist with each other in a pretty much unique way (if you only write down the two-derivative terms). A theory that deviates from GR so substantially that it predicts no gravitational waves would almost certainly have made some other predictions that would have been falsified by now. And the binary pulsar is one of them. A theory without gravitational waves seems 99% certain to have no explanation of the acceleration of the pulsar. And even if it had some different explanation, the value of the quadratic coefficient it predicts would almost certainly – 99% – be different from the observed one. If you combine these lines of evidence and appreciate that they're largely independent of each other, you may get something like 99.99999% certainty that the gravitational waves exist.
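As a toy illustration of how largely independent pieces of evidence multiply the residual doubt down (the individual percentages are just the rough figures quoted above, not rigorously derived numbers):

```python
# Toy combination of (assumed) independent lines of evidence against a
# no-gravitational-wave theory; the inputs are the rough figures from the text.
p_no_explanation    = 0.99   # it fails to explain the pulsar's orbital decay at all
p_wrong_coefficient = 0.99   # even if it did, it gets the quadratic coefficient wrong
p_other_gr_tests    = 0.99   # it also spoils some other successful test of GR

p_survives = (1 - p_no_explanation) * (1 - p_wrong_coefficient) * (1 - p_other_gr_tests)
print(f"chance that a no-GW theory survives all three tests: {p_survives:.0e}")  # 1e-06
print(f"implied confidence in gravitational waves: {1 - p_survives:.6%}")
```

Every additional independent constraint multiplies another small factor in, which is how one quickly lands in the many-nines regime.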

While David has said that 80% would still make it fair to call it science, the science couldn't be extended too far. Once you depended on 3 or 4 independent assumptions whose likelihood is just 80% each, you would already be dealing with an axiomatic system that is less likely to be true than 50% (\(0.8^3\approx 0.51\), \(0.8^4\approx 0.41\)). I am absolutely confident that this is not the case for modern fundamental physics as we know it. The near-certainty extends through many additional floors of claims about Nature that we haven't directly observed – that we have observed even "much less" than the gravitational waves.

OK, I obviously consider those who disagree with the existence of gravitational waves to be close to the "deniers of high school science" who also reject the existence of ice ages or heliocentrism. If you're one of them, please try to adjust your comments to the fact that I basically consider you a wild animal, a skunk of a sort.

Gravitons

Now, another layer. The gravitational waves are analogous to the electromagnetic waves. And electromagnetic waves are now interpreted as streams of many photons. Similarly, gravitational waves have to be composed of many gravitons. All successful theorists obviously agree with that statement; all others are considered cranks even though most well-known physicists are pathologically polite and avoid this accurate description.

Why do we know that gravitational waves are composed of gravitons?

The theoretical reasons work just like in the case of the electromagnetic field. Consider a somewhat messy gravitational wave packet moving in a certain direction that still has a pretty well-defined frequency \(f\). The fields at a point are approximately periodic with the period \(1/f\). But because the wave function of an energy eigenstate depends on time as \[

\exp(Et/i\hbar)

\] and a general superposition of such terms isn't periodic, not even "up to a phase", we have to demand that the allowed values of \(E\) included in the superposition differ by integer multiples of \(E=\hbar \omega = 2\pi\hbar f = hf\), the energy of one graviton. The argument and the energy-frequency relationship are identical for gravitons and photons. They actually hold for all particles in the Universe (although only bosons are able to team up in macroscopically strong "classical" waves).

So the energy carried by electromagnetic or gravitational waves has to be quantized in the units of \(E=hf\).
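A minimal numerical check of this periodicity argument (purely illustrative energies, in units with \(\hbar=1\)): a superposition whose energies differ by integer multiples of \(hf = 2\pi\hbar f\) returns to itself up to an overall phase after the time \(1/f\), while a superposition with an incommensurate energy does not.

```python
import numpy as np

hbar = 1.0
f = 1.0                 # frequency of the classical wave
period = 1.0 / f

def overlap_after_one_period(energies, amplitudes):
    """|<psi(0)|psi(1/f)>| for psi(t) = sum_n c_n exp(E_n t / (i*hbar)) |n>."""
    c = np.asarray(amplitudes, dtype=complex)
    c /= np.linalg.norm(c)
    phases = np.exp(-1j * np.asarray(energies) * period / hbar)
    return abs(np.vdot(c, phases * c))

E0 = 3.0                                               # arbitrary overall offset
quantized = E0 + 2 * np.pi * hbar * f * np.arange(4)   # energies spaced by hf
broken    = E0 + np.array([0.0, 1.0, 2.5, 4.1])        # no common spacing hf

c = [1.0, 0.5, 0.3, 0.2]
print(overlap_after_one_period(quantized, c))   # 1.0  -> periodic up to a phase
print(overlap_after_one_period(broken, c))      # <1   -> not periodic
```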

If you don't like it, there is another argument that already worked in the electromagnetic case: the ultraviolet catastrophe. If you agree that general relativity implies the existence of gravitational waves, they may exist for any value of the vector \(\vec k\). Even in a finite box, there are infinitely many allowed values of \(\vec k\) – belonging to a lattice. Each of them behaves as a mode and in a thermal oven, it will carry the average energy \(kT/2\), like any quadratic degree of freedom in classical statistical physics. (Well, it's two times \(kT/2\) for the kinetic and potential parts of the energy, and times another two for the number of polarizations per \(\vec k\).)

Most of this infinite collection of contributions scaling like \(kT/2\) has to be suppressed if space is supposed to carry a finite energy density at a nonzero temperature. It is suppressed because the modes with too high \(|\vec k|\) contribute much less than \(kT/2\). It's because a single quantum – a single graviton or photon – is too energetic for them and they become exponentially unlikely to harbor even one. So these high-momentum degrees of freedom are basically frozen and only the modes with low \(|\vec k|\) substantially contribute to the energy of the space.

Again, this argument works identically to the case of the electromagnetic field. If there weren't gravitons, the high-frequency modes of the gravitational field wouldn't be suppressed, they would carry the same thermal energy as well, and the heat capacity of a cubic meter of empty space would be infinite.
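A small numerical sketch of that freezing-out (units with \(\hbar = k_B = 1\); a mode-by-mode comparison only, not a full blackbody calculation): the classical equipartition value per mode gets replaced by the Planck/Bose average \(\hbar\omega/(e^{\hbar\omega/kT}-1)\), which dies off exponentially once \(\hbar\omega \gg kT\).

```python
import numpy as np

# Units with hbar = k_B = 1. Compare the classical equipartition energy per
# mode with the quantum (Planck/Bose) average energy per mode.
T = 1.0
omega = np.array([0.1, 1.0, 3.0, 10.0, 30.0])    # mode frequencies ~ c*|k|

classical = np.full_like(omega, T)                # kT per mode, for every mode
quantum = omega / (np.exp(omega / T) - 1.0)       # hbar*omega / (exp(hbar*omega/kT) - 1)

for w, ec, eq in zip(omega, classical, quantum):
    print(f"omega = {w:5.1f}   classical = {ec:.3f}   quantum = {eq:.3e}")
# The high-|k| modes contribute exponentially little, which is why the heat
# capacity of a cubic meter of empty space stays finite.
```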

At any rate, I think it is utterly unreasonable from a scientific viewpoint to attribute the probability lower than 99.99995% to the existence of gravitons.

Hawking radiation

Jacob Bekenstein made some ingenious guesses about the entropy of the black hole – which is proportional to the area of the event horizon. Bekenstein's claims were indeed guesses or speculations, if you wish, which were supported by analogies and visions. Building upon them would resemble religion to some extent – but still an extremely scientific yet beautiful religion.

However, Hawking has changed the status of those things. He derived the Hawking radiation – the thermal radiation whose temperature is basically the gravitational acceleration (the surface gravity) at the event horizon in Planck units, divided by \(2\pi\) – from a framework all of whose assumptions are clearly formulated and have basically been validated.
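To get a feeling for the numbers, here is a quick back-of-envelope evaluation of the resulting formula \(T_H = \hbar c^3/(8\pi G M k_B)\) for a solar-mass black hole (standard constants, rounded):

```python
import math

# Hawking temperature T_H = hbar * c^3 / (8 * pi * G * M * k_B)
hbar  = 1.0546e-34    # J s
c     = 2.998e8       # m / s
G     = 6.674e-11     # m^3 / (kg s^2)
k_B   = 1.381e-23     # J / K
M_sun = 1.989e30      # kg

T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar-mass black hole: {T_H:.1e} K")   # ~6e-8 K
# Far below the 2.7 K cosmic microwave background, which is why the effect has
# no chance of being observed directly for astrophysical black holes.
```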

His framework is the quantum field theory on curved spaces. Or semiclassical gravity. The matter (non-gravitational) fields are treated in the usual quantum field theory way, with creation and annihilation operators etc. The metric tensor field (spacetime geometry) is treated as a classical field which isn't possible in a complete and consistent theory but it's OK in an approximation. It's an analogy of the Born-Oppenheimer approximation in which the positions of the nuclei are first treated classically, and only the electrons are quantized, and only at the end, one may treat the nuclei quantum mechanically as well, with the full eigenvalues of the electrons' problem playing the role of "potential energy" between the nuclei. One may justify the applicability of the semiclassical gravity calculation in a similar way as in the Born-Oppenheimer approximation.

But in the end, Hawking's result is just a technical calculation. The very fact that some particles are created may already be seen in the toy model, the Unruh effect (which was ironically found only after Hawking's more complex calculation), and the Unruh effect just boils down to a Bogoliubov transformation.

If you have a harmonic oscillator and you write Hamiltonians such as\[

H(\alpha,\beta) = \alpha x^2 + \beta p^2,

\] then classically, \(x=0\) and \(p=0\) would be "the ground state" of zero energy, regardless of the values of \(\alpha,\beta\in \RR^+\). However, quantum mechanically, you can't have a well-defined \(x\) and \(p\) at the same moment, thanks to the uncertainty principle. The ground state is a Gaussian and its width depends on \(\alpha/ \beta\). So the ground state for one value of \(\alpha / \beta\) is a linear superposition of the ground state and excited states of a different Hamiltonian with a different \(\alpha/ \beta\).
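Here is a tiny numerical illustration of that last statement (a finite-difference discretization with \(\hbar=1\); the grid and the parameters are arbitrary choices): the ground state of \(H(\alpha_1,\beta_1)\) decomposes into the ground state plus the even excited states of \(H(\alpha_2,\beta_2)\).

```python
import numpy as np

def oscillator_eigenstates(alpha, beta, x):
    """Eigenstates of H = alpha*x^2 + beta*p^2 on a grid (hbar = 1), via finite differences."""
    dx = x[1] - x[0]
    n = len(x)
    p2 = (2.0 * np.eye(n)
          - np.diag(np.ones(n - 1), 1)
          - np.diag(np.ones(n - 1), -1)) / dx**2        # p^2 = -d^2/dx^2
    H = beta * p2 + np.diag(alpha * x**2)
    vals, vecs = np.linalg.eigh(H)
    return vals, vecs                                    # columns are eigenstates

x = np.linspace(-10.0, 10.0, 800)
_, vecs1 = oscillator_eigenstates(alpha=1.0, beta=1.0, x=x)   # one choice of alpha/beta
_, vecs2 = oscillator_eigenstates(alpha=4.0, beta=1.0, x=x)   # a different alpha/beta

ground1 = vecs1[:, 0]
probs = np.abs(vecs2.T @ ground1) ** 2        # |<n_2 | 0_1>|^2
print("occupation of the second oscillator's levels by the first one's ground state:")
for n in range(6):
    print(f"  n = {n}:  {probs[n]:.4f}")      # even n nonzero, odd n ~ 0
```

This is the finite-dimensional core of the Bogoliubov transformation mentioned above: one observer's vacuum is another observer's excited state.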

The Unruh effect, and with some purely technical complications the Hawking effect as well, are just an infinite-dimensional version of the harmonic oscillator argument above. Observers who slice the spacetime differently – like differently accelerating observers – have to use different Hamiltonians and different Hamiltonians correspond to different quantum mechanical ground states. So what seems to be the "empty vacuum" to the inertial observer becomes a "heat bath with lots of particles" from the accelerating observer's viewpoint.

In the case of the Hawking radiation, the radiation becomes "real" even for the ordinary distant observer because thanks to the curvature created by the black hole, this late distant observer's frame is basically connected with a frame that was accelerating when the black hole was young.

Quantum field theory on curved spaces is a highly trustworthy tool for a modern physicist. But it isn't the only piece of evidence that the Hawking radiation has to exist. The radiation may be derived "microscopically" in specific models of string theory, too. I think that it is unreasonable to assign a probability lower than 99.9999% to the existence of the Hawking radiation.

Event horizons

I should have started with that. Why do we believe that black holes actually exist? Well, the black hole at the galactic center is an important example. We observe that it is devouring highly energetic objects that are shining and doing all kinds of things, and once they're devoured, the radiation disappears.

This can't happen without event horizons because if there were any ordinary object over there, the "dinner" would have added energy to the object, the object's temperature would have been increasing, and it would have been radiating more and more. The more it would eat, the more it would radiate. We observe exactly the opposite. So the apparent temperature of the object (the big black hole) that we infer from its radiation must be vastly lower than the local temperature of the matter that falls in, and the basically unlimited redshift is needed for that.

Needless to say, the main reason why I (irreligiously) believe in the existence of black holes and event horizons is that they clearly follow from general relativity – in a regime where the curvature is as mild as in all the situations where GR has succeeded. I don't think it's too plausible for black holes not to exist. At least 99.999999% certainty.

Extra dimensions

The probability will obviously drop here because we're approaching more abstract layers of physics. But the probability that extra dimensions are relevant for some more accurate description of Nature than the Standard Model is still above 99.9%, even though I admit that I am not sure which kind of string/M-theory, if any, is valid.

Even if it turned out to be irrelevant for Nature for some reasons that I can't imagine, string/M-theory has already taught us that some assumptions we may have had are just wrong. The assumption has been that \(d=3+1\) is the only value we need and care about. String theory not only "forced us" to study theories in different dimensions; it also showed that a higher number may be consistent with all the basic observations we can make. In fact, it taught us that the extra dimensions are extremely helpful in explaining the spectrum of the elementary particles (including the three/many generations) and the gauge groups. But string/M-theory has also told us that consistency can actually dictate the dimensionality to be a particular number different from \(d=3+1\).

The critical dimension of strings is \(d=9+1\) and M-theory has \(d=10+1\) and there may exist mutually dual descriptions so it makes no sense to ask which of them is more right. All of them are equally right.

I view the critical dimensions as extremely important but I am only 95% sure that the right form of extra dimensions that will be relevant in the future phenomenology will copy the 10- or 11-dimensional spacetimes of supersymmetric string/M-theory. I can imagine that something like "free fermionic heterotic models" will be superior and won't allow any geometrization of these degrees of freedom. Or, less likely, something like sub- or supercritical string theory may be needed.

But the general lesson that "the dimensions we easily see don't have to be the only ones" is a lesson that physicists are unlikely to unlearn again. The Earth is much more easily visible than other planets, which doesn't mean that there aren't other planets. It would really be contrived, unnatural, if all the planets were equally visible from the vantage point of a hospitable one. The dimensions are analogous. There's no reason why all the dimensions that act as degrees of freedom on the world sheet, or anything that generalizes it, must be the dimensions that we know from long-distance physics.

It's hard to convey all this reasoning to someone who doesn't understand the whole string-theoretical predictive framework. But I view these lessons of string theory to be analogous to the apple that Adam and Eve ate in the paradise. Once they did so, they were able to notice that they had sexual organs that can be played with. It's hard to unlearn that lesson once you actually start to play with those things. ;-) The people who are hoping in a future of physics that will forget about the extra dimensions again are analogous to the people who want to forget that they had any sexuality, people who want to un-eat the apple. I just don't think that anything of the sort is realistic.

Strings

String theory was born as a temporarily failed theory of the strong interaction (the one inside atomic nuclei). A few years later, it was shown that it actually describes the gravitational force consistently. Ten more years later, it was seen that it actually predicts the exact general types of matter particles and forces that are needed to explain everything we know in Nature.

String theory is consistent where all other conceivable competing theories look hopelessly inconsistent. It confirms – from totally different viewpoints – previous insights including the Bekenstein-Hawking entropy of black holes and the Hawking radiation and lots of other things. It allows us to geometrize many things that seemed to have nothing to do with geometry.

But I actually think that we have much more "direct" and speculation-free ways to see that string theory has to be taken seriously when one studies quantum gravity beyond the approximations above.

Take AdS/CFT. Imagine that you have any consistent theory of quantum gravity and that it can be formulated on an anti de Sitter background. That background has an isometry group that is isomorphic to the conformal symmetry of the boundary. Because most of the volume of the AdS space lies "very close to the boundary", there should be a way to describe the quantum gravitational physics on the AdS space that works with the degrees of freedom that are localized on the boundary.
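In formulas (a standard group-theoretic fact, written here for a general dimension): \[

{\rm Isom}(AdS_{d+1}) = SO(d,2) = {\rm Conf}(\RR^{d-1,1}),

\] so for the most famous example, the \(SO(4,2)\) isometry of \(AdS_5\) is exactly the conformal group of the four-dimensional boundary theory.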

Use this argument or some 't Hooft-Susskind-like general arguments about holography, and you will decide that there should exist an equivalent description using a boundary CFT (conformal field theory). So try to construct the theory by considering conformal field theories. The supersymmetric ones are the easiest ones to guarantee to be exactly conformal. You will find a few simple examples, especially the \(\NNN=4\) superconformal Yang-Mills theory in \(d=4\), and you may analyze what the physics in the bulk looks like if the AdS/CFT holds.

You will find out that at low energies, the bulk is the 10D supergravity. But if you study the physics at a higher accuracy, you will see that the exact theory in the bulk isn't just supergravity – which is inconsistent – but it has to be the type IIB string theory. The symmetries are those predicted by string theory, not SUGRA, and you will also find all the excited and interacting strings and branes that all other descriptions of string theory imply.

There are lots of consistency reasons why the "improvements" that string/M-theory actually makes to supergravity are absolutely needed for consistency. You may either see those facts in the examples – there are lots of CFTs, in assorted dimensions, that produce an AdS bulk dual matching a vacuum of 10- or 11-dimensional string/M-theory. Or you may check these arguments by careful reasoning. See e.g. Two roads from \(\NNN=8\) SUGRA to string/M-theory.

I think that the probability one should assign to the statement that "quantum gravity has to be extended to string theory to remain well-defined" is at least 99.5%.

String theory is several floors "above" the first layers of scientific assumptions that some people already question. But I am still confident about those. String theorists are actually confident about roughly 5 additional "detailed floors" built within string theory. String theorists don't just write papers saying "string theory of course". They make detailed advances within string theory – making lots of specific statements about particular situations or more special questions. To a sloppy thinker, those things could look hopelessly speculative. But they are not. Due to the iron-reinforced concrete of the mathematical formalism, machinery, and argumentation that the physical reasoning is based upon, theoretical physicists in general and string theorists in particular can get much, much further from the direct observations.

In the end, I do think that people are probably making some mistake – or missing something simple, something that may change some opinions in the future and make the replacement statements look much more obvious than we can imagine today. But what's important is that the critics of theoretical physics have nothing whatever to do with these future discoverers and geniuses. The contemporary critics actually don't have any valuable idea, any viable alternative, or any valuable anything, for that matter. These individuals are just piles of stinky crap and it is deeply dishonest for them (e.g. Richard Muller) to pretend that they are something different.

The future realizations in physics may bring – and, hopefully, will bring – new Eureka moments and paradigm shifts. But the right recipe to get them isn't to criticize science, compare it with religion, or to focus on ideas that don't look promising. One – and the physics community – simply has to enforce some meritocratic criteria and focus on ideas that do look promising for one reason or another. So if the physicists are missing something important, it's all of them who are missing it.

And even if some of the assumptions we are making today will be found strictly speaking invalid in the future, this invalidity may be (and probably is) totally unhelpful for the next 100 years of progress in science. If the truth is "totally different" from the (irreligious) beliefs of the best contemporary theoretical physicists but you don't know exactly what it is and why it is what it is, you had better keep your mouth shut because vague enough propositions about a "different future" are always "true in some sense" but their value for the scientific progress is usually zero or negative.

Talented theoretical physicists may only divide their time among ideas that are already known or that have been "glimpsed" by themselves or someone else. The totally unknown types of ideas and theories aren't eligible for the competition yet. You may discover them and present the evidence why they're as promising as (or more promising than) the ideas on the market today. But if you have nothing of the sort, you have zero rational or moral justification to pretend that you're a peer of (or even better than) the top physicists today.

by Luboš Motl (noreply@blogger.com) at September 02, 2015 02:35 PM

The n-Category Cafe

How Do You Handle Your Email?

The full-frontal assault of semester begins in Edinburgh in twelve days’ time, and I have been thinking hard about coping strategies. Perhaps the most incessantly attention-grabbing part of that assault is email.

Reader: how do you manage your email? You log on and you have, say, forty new mails. If your mail is like my mail, then that forty is an unholy mixture of teaching-related and admin-related mails, with a relatively small amount of spam and perhaps one or two interesting mails on actual research (at grave peril of being pushed aside by the rest).

So, you’re sitting there with your inbox in front of you. What, precisely, do you do next?

People have all sorts of different styles of approaching their inbox. Some try to deal with each new mail before they’ve read the other new ones. Some read or scan all of them first, before replying to anything. Some do them in batches according to subject. Some use flags. What do you do?

And then, there’s the question of what you do with emails that are read but require action. Do you use your inbox as a reminder of things to do? Do you try to keep your inbox empty? Do you use other folders as “to do” lists?

And for old mails that you’ve dealt with but want to keep, how do you store them? One huge folder? Hundreds, with one for each correspondent? Somewhere in between?

There are all sorts of guides out there telling you how to manage your email effectively, but I’ve never seen one tailored to academics. So I’m curious to know: how do you, dear reader, handle your email?

by leinster (tom.leinster@ed.ac.uk) at September 02, 2015 12:18 PM

Marco Frasca - The Gauge Connection

Higgs even more standard


LHCP 2015 is going on in St. Petersburg and new results were presented by the two main collaborations at CERN. CMS and ATLAS combined the results from run 1 and improved the quality of the measured data of the Higgs particle discovered in 2012. CERN's press release is here. I show you the main picture, widely shared on all the social networks, about the couplings between the Higgs field and the other particles in the Standard Model.

Combined couplings for the Higgs by ATLAS and CMS

What makes this plot so striking is the very precise agreement with the Standard Model. Anyhow, the ellipses are still somewhat large, leaving room for new physics to creep in at run 2. My view is that the couplings, which determine the masses of the particles in the Standard Model, are less sensitive to new physics than the signal strengths of the various decays. This plot is also available (hat tip to Adam Falkowski)

Combined strengths at various decays by ATLAS and CMS

In this plot you can see that the Standard Model, represented by a star, is somewhat at the border of the regions for the ZZ and WW decays, and the WW region is shrinking. This does not imply that deviations from the Standard Model will be seen here in the future, but it leaves the impression that this could happen in run 2 with the increasing precision expected for these measurements.

The signal strengths are so interesting because the Higgs sector of the Standard Model can be solved exactly, with the propagator providing their values (see here). These generally disagree with those obtained by standard perturbation theory, even if only by a small amount. Besides, the Higgs particle should have internal degrees of freedom, living also in higher excited states. All of this is to be seen at run 2, as the production rate of these states appears to be smaller the higher their mass.

Run 2 is currently ongoing even if the expected luminosity will not be reached this year. For sure, next year's summer conferences could provide a wealth of shocking new results. Hints are already seen by both the main collaborations and LHCb. Something new is just around the corner.

Marco Frasca (2015). A theorem on the Higgs sector of the Standard Model. arXiv: 1504.02299v1


Filed under: Mathematical Physics, Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs particle, LHCb

by mfrasca at September 02, 2015 09:03 AM

Lubos Motl - string vacua and pheno

Richard Muller's incredibly dumb comments about quantum gravity
I am often getting e-mail from Quora.com, a server where people ask and answer questions. I am largely avoiding that server because while some texts over there may be insightful or interesting, there are tons of widely spread delusions written by ordinary people for other ordinary people. They often drive me up the wall; I've been exposed to that stuff many times and I just don't want to add any more exposure.

But today, I clicked at the Theoretical Physics category over there and was quickly led to a question about the unification of general relativity and quantum mechanics. It was the answer by Richard Muller of Berkeley that has simply stunned me.




Probably years ago, someone asked the following question:
Why do physicists believe that a mathematically consistent model that unites quantum mechanics and general relativity exists?

If mathematics breaks down when applied to black holes, why do scientists believe that mathematics can describe black holes? Perhaps the search for the mathematics that unites quantum mechanics and general relativity is pointless. Mathematics is a useful tool for modelling the universe in many ways, but maybe black holes are exceptional entities that can't be modeled by mathematics. Is it possible that the interior of black holes are so alien to our known universe and so different that current mathematical models/abstractions simply do not apply.
Well, the person who wrote this question is a layman who obviously doesn't understand what the words "mathematics" and "science" mean. Mathematics can't "fail" as a description of Nature because mathematics is, pretty much by definition, the language of the most accurate and reliable description of anything.

Mathematics may be connected with our observations differently than we thought and may imply different conclusions than what we thought but it can't "cease to apply". Only particular mathematically formulated statements and/or theories may "cease to apply". Even if the Universe were ultimately found to be governed by the mood swings of God, a malicious anthropomorphic dictator, the most accurate description of His mood swings would still be in terms of mathematics.




Most laymen simply do not have a clue but it was much more shocking to see Richard Muller, a well-known professor at Berkeley, who basically agreed with the hardcore layperson's misconceptions above (in his 2-day-old answer) and who actually added even crazier ideas to the mix.

Richard A. Muller has worked on astro- and geo-related problems in physics (the Richard S. Muller in electronics is someone else!), became a very popular instructor at Berkeley, wrote some popular books about "physics for the U.S. presidents" and similar popular if not populist stuff, and was a major part of the team that created the Berkeley Earth Surface Temperature (BEST) dataset.

He must understand tons of ordinary things even though many of his claims may be controversial. But he decided that it was right for him to answer the aforementioned Quora question about the nature of quantum gravity. And his answer is juicy, indeed. It starts with an ambitious statement:
It is almost a physics religion to believe that relativity can ultimately be combined with quantum physics.
Wow. So it's a physics religion to "believe" that relativity can ultimately be combined with quantum physics. Dr Muller clearly stands next to the most insane pro-religion nuts who say that theoretical physics is just another religion – one that only tries to compete with The Church of Jesus Christ of Latter-day Saints.

This is a completely technical issue that can only be studied by the purely scientific method and that defines the "heart" of the domain where the scientific method is completely dominant. Why would a sane person start to talk about religion in this context? There is nothing to "believe". One may only collect the evidence, use the evidence to eliminate most of the candidate theories, interpret the evidence, and derive new conclusions from the theories that survive.

We know that the most accurate laws of Nature ultimately combine relativity with quantum physics for a simple reason – both relativity and quantum physics have been observed to hold in Nature. The theories that agree both with relativity and with quantum mechanics seem to be heavily constrained but that doesn't reduce the probability that both quantum mechanics and relativity are needed in the accurate laws. The probability is 100% because both parts of physics have been seen to control Nature. Relativity has been seen to hold. Quantum mechanics has been seen to hold. Because each of the two – mostly independent – statements holds, it's also true that they are "combined".

What Mr Muller says (sorry, I just can't recognize the science degree of someone who makes moronic assertions of this caliber) is exactly as idiotic as the claim
It is almost a scientific religion to believe that the heliocentric theory of the Solar System can ultimately be combined with Darwin's evolution.
They have to be combined because both of them are true, aren't they? Even if heliocentrism makes Darwin's natural selection harder in any way, we indisputably know that the hurdles can't be insurmountable because both theories work, don't we? Does Mr Muller believe that science ends and religion takes over as soon as we discover at least two (independent) insights about Nature? This seems to be the only possible broader logic that may "justify" his claim about the "physics religion".

I don't want to expand this blog post by kilobytes of examples of experiments that have validated quantum mechanics, special relativity, or general relativity. In the quantum case, the confirmation has been most impressive. Calculate all the (up to) five-loop diagrams to find the prediction of Quantum Electrodynamics for the anomalous magnetic moment of the electron. The prediction agrees with the measurements perfectly – at the experimental error margin that is roughly a part per trillion. The agreement surely can't be a coincidence, can it? Especially because lots of other, sometimes comparably accurate predictions have been verified.
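For orientation, here is a back-of-envelope version using only the first two coefficients of the QED series (all numbers rounded; the actual five-loop calculation and the precise experimental figure are of course far beyond this sketch):

```python
import math

# Anomalous magnetic moment of the electron, a_e = (g-2)/2, from the first two
# orders of the QED expansion in alpha/pi (coefficients and measured value rounded).
alpha = 1 / 137.035999        # fine-structure constant (approximate)
x = alpha / math.pi

a_e_1loop = 0.5 * x                            # Schwinger's 1948 term, alpha/(2*pi)
a_e_2loop = a_e_1loop - 0.328478965 * x**2     # adding the two-loop coefficient

a_e_measured = 1.15965218e-3                   # approximate experimental value
print(f"one loop : {a_e_1loop:.8f}")
print(f"two loops: {a_e_2loop:.8f}")
print(f"measured : {a_e_measured:.8f}")
```

Already the two-loop truncation agrees with the measured value to roughly one part in \(10^5\); the three-, four-, and five-loop terms push the comparison many orders of magnitude further.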

All non-gravitational observations that have ever been made have been nontrivially compatible with special relativity as well and all the gravitational phenomena we know seem to agree with general relativity. So whatever the "best theory of the Universe" that people will learn about in 2100 AD or 2500 AD is going to be, it will have to reproduce the predictions of relativity and quantum mechanics in the domains where those theories have already been confirmed.

In some extreme conditions where we don't have any experiments, the best theory in 2100 AD or 2500 AD may predict things that go beyond QM and GR separately – and maybe even beyond our current understanding of string theory. But the theory will still have to pass the tests that quantum mechanics and relativity have passed. So it will be a theory unifying quantum mechanics and general relativity. All other, "non-unifying" theories have been and will have been eliminated by the scientific method because they will have contradicted the empirical evidence!

Mr Muller elaborates upon the claim about "physics religion":
There is no evidence for this other than the fact that all the other forces of physics have been "unified".
Sorry, as I wrote above, there are billions of pieces of evidence supporting these two pillars of contemporary physics – all observations ever made by humans agree with them. And because they have been made in one Universe, it is a "unification" of these two pillars that these observations make absolutely unavoidable. It's the separation of the knowledge into many theories that is an artificial sociological if not psychological sleight-of-hand. At the end, we have one Nature and we know many things about it – including the insights of relativity and quantum mechanics – that may be logically independent but that still "interfere" with each other because they fight to govern the same Universe. A viable description of the Universe is unavoidably a "unifying" theory. To suggest something else means to deny 100% of the evidence available in all of science. It means to be a complete lunatic.

It also means to deny tons of particular achievements of science such as the electromagnetic theory (the unification of electricity, magnetism, and light), the electroweak theory (the unification of electromagnetism and the beta-decay interaction using the W bosons and Z bosons), and lots of other things that are really not questioned by serious physicists in 2015.
It is possible that relativity is different; that it is geometric and not quantum mechanical.
Mr Muller's sentence "relativity is different" is ill-defined, probably deliberately so. What does it mean for it to be "different"? Different from what? Different in what respect? Relativity is a theory "like" other theories in science in the sense that it is a scientific theory; it is different in the sense that its insights are independent of those of Darwin's theory or quantum mechanics, for example.

The second part of the sentence, "it is geometric and not quantum mechanical", only seems to reflect Mr Muller's ignorance of completely basic things in modern theoretical physics. The relativity is indeed geometric. But "geometric" does not mean "not quantum mechanical". The adjectives "geometric" and "quantum mechanical" self-evidently don't contradict each other. After all, Nature is known to be both.

The non-existence of the contradiction is, once again, totally analogous to the non-contradiction between any other two established facts of science. So Mr Muller's claim is as moronic as the claim:
It is possible that Darwin's theory of evolution is different; that it agrees with the natural selection and not with heliocentrism.
Well, a correct theory about the motion of planets and the life on one of them has to agree both with heliocentrism and with Darwin's theory of evolution. In the very same way, a correct theory of Nature has to agree both with quantum mechanics and relativity. The claim that these two parts of the contemporary physics canon contradict one another is clearly wrong. They can't contradict one another because both of them are known to be true.

One may see that the actual "drift" of Mr Muller is nothing else than the drift of a rank-and-file anti-quantum zealot. He dreams about a future event in which quantum mechanics will be "undone" once again. But that can't happen because quantum mechanics may be pretty much defined as a consistent theory of physics that is not classical, and we need this theory because classical physics has been falsified. It's dead so it can never return again.

Muller is vague about "which relativity" he means. If the discussion is about special relativity, well, special relativity and quantum mechanics have been perfectly unified since 1930 or so – by quantum field theory (and perhaps since Dirac's 1928 equation, we could say). These days, we know two related frameworks that unify them: quantum field theory and string theory. There is nothing "religious" about quantum field theory or Dirac's equation. They're vital parts of modern physics whose validity should be understood by every professional physicist (among theoretical high-energy physicists, the same thing is true for string theory as well) and I really feel highly uncomfortable if electronics allows people to become professors even if they are missing this Modern Physics 101.
But most physicists think that will not be the case, in large part because quantum physics has been so successful in the past. That's why they are looking to unify them. But it is worthwhile to recognize that this is based on hope, not on any firm physics or mathematics reason.
There is nothing based on "hope" here. Everyone is free to propose and develop a theory that ultimately "isn't" quantum mechanical. The only problem is that "hope" isn't enough for these alternatives and the fact is that no such alternative that would make any sense is known. On the contrary, the proofs that the principles of quantum mechanics have to apply universally, including to gravity, seem to be pretty much rigorous and to deny them means to be a scientifically illiterate idiot.

What Mr Muller says not to be the case – that there is a firm physics and mathematics reason – is the actual key to meaningfully analyze all these questions. And Mr Muller simply has nothing whatever to do with science. He acts as a shaman. Sichu Lu responded appropriately to Mr Muller's musings yesterday:
With all due respect. the reason that physicists believe that QG exists and it exists in such a form that would incorporate both GR and QM is because both of these theories work to extremely high precision and are extremely well tested, yet we don't know how they would work in much higher energy scales then we can currently test. So they are effective field theories. When we have a better theory in physics, we don't throw away what we previously know. Indeed the new theory would have to able to reproduce the old theory at some limit. We expect them to unify because we expect to keep being able to understand nature through physics. Sure, there is no "evidence" that this will continue to be the case. But it's like saying to a biologist, tomorrow you might discover an organism on the planet that isn't subject to evolution, and it might have genetic information coded in some way that isn't a double helix. It just wouldn't make any sense in the light what we know about nature already.
Amen to that. What is known, is known. And this knowledge may be classified as a collection of effective theories. Any viable theories in the future will have to agree with the effective theories in the realm where the effective theories have been successfully validated. The more complete theories will continue to be "quantum and relativistic" for a simple reason: these adjectives mean some "upgrade" of the physics toolkit onto a new level, and the un-upgraded theories – classical and non-relativistic theories – have simply been irreversibly ruled out.

We don't know what the future in science is exactly going to look like. But we know that it won't build on theories that have been proven wrong. A miraculous twist could hypothetically make the adjectives "quantum" and "relativistic" doubtful. But to hype this possibility is exactly on par with hyping any other hypothetical "revolution" that will e.g. imply that the DNA doesn't play any role in biology, or something equally radical. It's just totally stupid for someone to turn these insane speculations into pillars of his thinking about a scientific discipline – and to dismiss the actual science as "religion" or "hope".

As Sichu Lu correctly hinted, Muller's comment basically suggests that physics will cease to hold as a science. Muller's comments can't conceivably mean anything less than that. It's remarkable that Mr Muller is wrong about all the assertions he makes, including those that are not needed for his broader claim:
Mathematics doesn't break down for black holes. True, there is a singularity, but this singularity is invisible from the rest of the universe; it is cloaked behind an event horizon.
First, the claim that every singularity is cloaked behind an event horizon is just a hypothesis, the Cosmic Censorship Conjecture. It seems to "morally" hold in the real, 3+1-dimensional world. But almost every rigorously formulated version of this conjecture seems to be provably wrong and in higher-dimensional spacetimes, the conjecture turned out to be wrong even "morally".

So the claim that every singularity is cloaked behind an event horizon is an example of a claim that should be classified as a "hope", unlike the indisputable facts that Mr Muller has called "hopes".

This hope, the Cosmic Censorship Conjecture, doesn't really have any good reason to hold. One "needs" to believe in this conjecture if he wants to believe that the classical general relativity is basically enough for all the macroscopic phenomena. But it doesn't have to be enough! A consistent and complete theory of quantum gravity may maintain its consistency and predictive power even in the presence of objects that look like singularities from the classical GR approximate viewpoint.

So there is no rational reason to believe that all singularities have to be cloaked. A more complete theory has to fix some (at least small) problems of the classical GR, anyway; so there's no reason to expect that it won't have any "larger purpose" at the same moment.

Now, even if they were cloaked, this cloaking doesn't imply what Mr Muller wrote that it implied. The cloaking doesn't imply that "mathematics doesn't break down"; he clearly meant "the mathematical description of general relativity doesn't break down". Pretty much by definition of a singularity in GR, the particular mathematical description does break down in the presence of a singularity. (String theory allows completely consistent physics of strings on geometries that look singular as GR geometries, but I don't want to go into this advanced stuff.) The breakdown may only be experimentally observable by an infalling observer who "remains around" until the very end. But this is just an excuse.

After all, the Hawking radiation is known to subtly depend on the detailed degrees of freedom that may also be described as those "inside" a black hole. So a singular value of any of these degrees of freedom may be a problem even outside, even if the event horizon basically cloaks the singularity.

To summarize, Mr Muller wants to make the classical GR look more healthy than it is. When one is rigorous enough, the appearance of singularities in classical GR actually does imply that a more complete theory – a quantum theory – is needed. This logic is fully analogous to the observation that the "states in the Hilbert space of a compact object" had better be discrete because classically, there are infinitely many points in the phase space, which would formally imply that the entropy is infinite.

You may see that Mr Muller tries to make classical GR look healthier than it is. As a rank-and-file anti-quantum zealot, this is a part of the pattern. The following half-paragraph he wrote makes it totally clear that he wants to sling mud at quantum mechanics:
Mathematics does break down for the standard model of electromagnetism-weak-strong forces, which is based on quantum field theory. This theory has infinities that have to be "renormalized" in a way that isn't based on mathematics.
So the methods that people use to renormalize the Standard Model and QFTs are "not based on mathematics"? Wow. What are they based upon? Theology or psychology? Mr Muller is clearly batšit crazy.

One needs some advanced mathematical knowledge to renormalize quantum field theories; and in all these methods, one can reliably say which algorithms are right and which algorithms are wrong. And whether you aesthetically like the intermediate formulae or not, the calculations assume experimentally measured finite parameters and produce finite and experimentally verifiable (and successfully verified) predictions for further experiments. What can possibly be his basis for the claim that renormalization is "not based on mathematics"? Clearly, any "problem" that one encounters during the renormalization of a renormalizable quantum field theory is just an illusion. There is no genuine problem here.

It is complete nonsense that "mathematics breaks down" for quantum field theories in general.

In particular, QCD is a fully consistent theory making predictions at all length scales – including arbitrarily long and short ones. It needs renormalization but renormalization is a rock-solid, well-defined collection of mathematical rules that lead to unambiguous and arbitrarily accurate results for the observables.

The Standard Model, which also includes the electroweak theory with the Higgs boson, is perturbatively renormalizable. So if we calculate anything as a power series expansion in the small coupling constants, this theory is as well-defined as QCD. Non-perturbatively, when "tinier than any power of the coupling constant" effects are taken into account, the Standard Model is an inconsistent theory due to the Landau poles for the \(U(1)\) gauge coupling and/or the Higgs self-interaction. But one needs to probe processes at energies much higher than the Planck energy to be sensitive to those. We know that at those high energies, the Standard Model is already inadequate because it ignores gravity – which becomes more important than those non-perturbative Standard Model effects at these trans-Planckian energies.
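As a rough illustration of how far away such a Landau pole sits, here is the textbook one-loop estimate for pure QED with a single electron (a crude stand-in, not a precise Standard Model computation):

```python
import math

# One-loop running of the QED coupling with a single electron:
#   1/alpha(mu) = 1/alpha(m_e) - (2/(3*pi)) * ln(mu/m_e)
# The Landau pole is the scale where 1/alpha(mu) hits zero.
alpha_me = 1 / 137.036          # coupling at the electron mass scale (approximate)
m_e = 0.511e-3                  # electron mass in GeV
M_planck = 1.22e19              # Planck mass in GeV

log_ratio = 3 * math.pi / (2 * alpha_me)        # ln(mu_Landau / m_e)
mu_landau = m_e * math.exp(log_ratio)

print(f"ln(mu_Landau / m_e) ~ {log_ratio:.0f}")
print(f"mu_Landau ~ {mu_landau:.1e} GeV, about 1e{int(math.log10(mu_landau / M_planck))} "
      f"times the Planck scale")
```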

That's why the Landau poles (tiny, faraway inconsistencies) of the quantum field theories are usually much less harmful than some general inconsistencies that sick theories could suffer from. But in principle, the correct theory must be free of the small inconsistencies, too.
Much of the enthusiasm for string theory is that it addresses this problem, while introducing many more (extra dimensions, huge numbers of parameters, etc). Personally, given the problems of string theory, I am not optimistic that it will be with us 20 years from now.
Mr Muller counts himself among the anti-string crackpots, too, and adds his own creative idiosyncrasy. As of 2015, string theory obviously doesn't have any known problems that would indicate that it is going to be eliminated. Everything we know makes it rather clear or at least suggestive that it is a framework capable of reproducing all the successes of QFTs and GR; and curing at least a big part of their diseases.

Extra dimensions are an example of a wonderful far-reaching prediction of string theory, not a "problem" as crackpots including Mr Muller sometimes love to suggest. And string theory may be seen to admit no continuously adjustable dimensionless non-dynamical parameters whatsoever. Mr Muller is clearly a complete idiot if he hasn't been capable of noticing this fact even though he has already spent more than 70 years on Earth.

The claim about "optimism" is clearly a piece of fraudulent demagogy. Folks like Mr Muller don't want progress in string theory. They are dreaming about some problem with string theory that someone will find. Such a problem could "undo" Mr Muller's and other men's striking intellectual inferiority in comparison e.g. with the Berkeley string theorists. He wants something to "kill" string theory because he doesn't want to learn it. He wants to remain not only an anti-string crackpot but even an anti-quantum zealot.

But such an insight doesn't seem to be likely in the next 20 or 50 years. Quantum mechanics has irreversibly become a pillar of modern physics and the same thing largely holds for string theory, too. They have been connected to the actual observations that modern physics builds upon so tightly that to dream about their re-segregation means to be utterly unrealistic about the future of physics.

Even if a problem were found with string theory, what would it mean for it "not to be with us in 20 years"? It's obvious that the theory has already led to some striking results of mathematical character that can't go away and won't go away. So at Berkeley in 2035, Mr Muller can teach the future U.S. presidents or hippies (or students who "unify" both groups) that "string theory is no longer with us" – and prevent the students from studying important questions about the Universe – but those people in 2035 who will be more intelligent physicists than Muller's obedient students will surely continue researching it.

The last paragraph about the black hole coordinate systems is stupid, too:
The math of black holes, when treated consistently by a reference frame that is not falling into it, is in good shape. Most of the unknowns are based on attempts to make it a quantum theory. Hawking radiation is a first step, but we don't even know for sure that Hawking radiation exists; it has never been seen experimentally, and maybe never will be seen.
The classical GR that Mr Muller has previously excessively defended works well, on the contrary, in the reference frames connected with observers who are falling into the black hole! The coordinate systems of not-infalling observers include the Schwarzschild coordinates and they have a problem – the coordinate singularity – already at the event horizon. This problem makes these systems unsuitable for the description of observations by all the observers who do fall in.

It may be true that most of the unknowns are based on the quantum aspects of general relativity, if I make his statement a little bit more accurate and sensible. But most of the knowns are about the quantum aspects, too. There's just a more diverse spectrum of physical phenomena to be studied in the quantum theory than in its classical counterpart.

Now, the Hawking radiation is the "first step" and everything that the future U.S. presidents and Mr Muller need to know about it is that "we're not even sure whether it exists". Sorry, you and your future U.S. presidents don't even know whether Hawking radiation exists because you are a bunch of arrogant morons who are as dumb as doorknobs and who would love to define the standard of "what people should know and what they shouldn't know". Other physicists not only know that this effect, discovered in the early 1970s, exists. They have also made hundreds or thousands of additional steps – steps described in lots of papers – that your brains, lacking talent, patience, and curiosity, aren't even dreaming about.

It's nasty for Mr Muller to try to sell his staggering intellectual limitations as a virtue. I find it amazing that Mr Muller found it a good idea to embarrass himself in this way. Isn't there a theoretical physicist at Berkeley who would find some time to tutor Mr Muller?

by Luboš Motl (noreply@blogger.com) at September 02, 2015 06:15 AM

Emily Lakdawalla - The Planetary Society Blog

Ten-day Taxi Trip to International Space Station Underway
A ten-day International Space Station taxi flight is underway following the Wednesday liftoff of a three-person crew from Kazakhstan.

September 02, 2015 05:18 AM

September 01, 2015

Emily Lakdawalla - The Planetary Society Blog

New Horizons extended mission target selected
The New Horizons mission has formally selected its next target after Pluto: a tiny, dim, frozen world currently named 2014 MU69. The spacecraft will perform a series of four rocket firings in October and November to angle its trajectory to pass close by 2014 MU69 in early January 2019. In so doing, New Horizons will become the first spacecraft to fly past a target that was not discovered before it launched.

September 01, 2015 11:09 PM

Christian P. Robert - xi'an's og

an afternoon in emergency

Last Thursday, I drove a visiting friend from Australia to the emergency department of the nearby hospital in Paris as he exhibited symptoms of deep venous thrombosis following his 27 hour trip from down-under. It fortunately proved to be a false alert (which alas was not the case for other colleagues flying long distance in the recent past). And waiting for my friend gave me the opportunity to observe the whole afternoon of an emergency entry room (since my last visit to an emergency room was not that propitious for observation…)

First, the place was surprisingly quiet, both in terms of traffic and in the interactions between people. No one burst in screaming for help or collapsed before reaching the front desk! Maybe because this was an afternoon of a weekday rather than Saturday night, maybe because emergency services like firemen had their separate entry. Since this was the walk-in entry, the dozen or so people who visited the ward that afternoon walked in, waited in line and were fairly quickly seen by a nurse or a physician to decide on a course of action. Most of them did not come back to the entry room, while I saw a few others leave by taxi or with relatives. The most dramatic entry was a man leaning heavily on his wife, who seemed to have had a fall while playing polo (!) and who recovered rather fast (but not fast enough to argue with his wife about giving up polo!). Similarly, the interactions with the administrative desk were devoid of the usual tension when dealing with French bureaucrats, who often seem eager to invent new paperwork to delay action: the staff was invariably helpful, even with patients missing documents, and the only incident was with a taxi driver refusing to take an elderly patient home because of a missing certificate no other taxi seemed to require.

Second, and again this was surprising for me, I did not see many instances of people coming to the emergency department to bypass waiting or paying for a doctor, even though some were asked why they had not seen a doctor before (not much privacy at the entry desk…). One old man with a missing leg spent some time in the room discussing with hospital social workers where to spend the night but, as the homeless shelters around were all full, they ended up advising him to find a protected spot for the night, while agreeing to keep his bags for a day. It was raining rather heavily and the man was just out of cardiology so I found the advice a bit harsh. However, he was apparently a regular and I saw him later sitting in his wheelchair under an awning in a nearby street, begging from passers-by.

The most exciting event of the afternoon (apart from the good news that there was no deep venous thrombosis, of course!) was the expulsion of a young woman who had arrived on a kick-scooter one hour earlier, not gone to the registration desk, and was repeatedly drinking coffees and eating snacks from the vending machine while exiting now and then to smoke a cigarette and while bothering with the phone chargers in the room. A security guard arrived and told her to leave, which she did, somewhat grudgingly. For the whole time, I could not fathom what was the point of her actions, but being the Jon Snow of emergency wards, what do I know?!


Filed under: Travel Tagged: deep vein thrombosis, emergency department, French hospital, Paris

by xi'an at September 01, 2015 10:15 PM

Emily Lakdawalla - The Planetary Society Blog

Populating the OSIRIS-REx Science Deck
The assembly of the OSIRIS-REx spacecraft continues, with many elements integrated onto the spacecraft ahead of schedule. Last month both OTES and OVIRS were delivered to Lockheed Martin and installed on the science deck.

September 01, 2015 07:34 PM

Clifford V. Johnson - Asymptotia

PBS Shoot Fun

More adventures in communicating the awesomeness of physics! Yesterday I spent a gruelling seven hours in the sun talking about the development of various ideas in physics over the centuries for a new show (to air next year) on PBS. Interestingly, we did all of this at a spot that, in less dry times, would have been underwater. It was up at Lake Piru, which, due to the drought, is far below capacity. You can see this by going to Google Maps, looking at the representation of its shape on the map, and then clicking the satellite picture overlay to see the much changed (and reduced) shape in recent times.

There's an impressive dam at one end of the lake/reservoir, and I will admit that I did not resist the temptation to pull over, look at a nice view of it from the road on the way home, and say out loud "daaayuum". An offering to the god Pun, you see.

[photo: Lake Piru]

Turns out that there's a wide range of wildlife, large and small, trudging around on the [...] Click to continue reading this post

The post PBS Shoot Fun appeared first on Asymptotia.

by Clifford at September 01, 2015 05:56 PM

Peter Coles - In the Dark

Adventures with the One-Point Distribution Function

As I promised a few people, here are the slides I used for my talk earlier today at the meeting I am attending. Actually I was given only 30 minutes and used up a lot of that time on two things that haven’t got much to do with the title. One was a quiz to identify the six famous astronomers (or physicists) who had made important contributions to statistics (Slide 2) and the other was on some issues that arose during the discussion session yesterday evening. I didn’t in the end talk much about the topic given in the title, which was about how, despite learning a huge amount about certain aspects of galaxy clustering, we are still far from a good understanding of the one-point distribution of density fluctuations. I guess I’ll get the chance to talk more about that in the near future!

P.S. I think the six famous faces should be easy to identify, so there are no prizes but please feel free to guess through the comments box!


by telescoper at September 01, 2015 05:47 PM

Symmetrybreaking - Fermilab/SLAC

Combined results find Higgs still standard

The CMS and ATLAS experiments combined forces to more precisely measure properties of the Higgs boson.

The ATLAS and CMS experiments on the Large Hadron Collider were designed to be partners in discovery.

In 2012, both experiments reported evidence of a Higgs-like boson, the fundamental particle that gives mass to the other fundamental particles.

ATLAS reported the mass of this new boson to be in the region of 126 billion electronvolts (126 GeV), and CMS found it to be in the region of 125 GeV. In May 2015, the two experiments combined their measurements, refining the Higgs mass to 125.09 GeV.

Sticking with the philosophy that two experiments are better than one, scientists from the ATLAS and CMS collaborations presented combined measurements of other Higgs properties earlier today at the third annual Large Hadron Collider Physics Conference in St. Petersburg, Russia.

This particular analysis focused on the interaction of the Higgs boson with other particles, known as coupling strength. The combined measurements are more precise than each experiment could accomplish alone, and results establish that the Higgs mechanism grants mass to both the matter and force-carrying particles as predicted by the Standard Model of particle physics.

“The analysis, presented for the first time at the LHCP Conference, fully exploits the data collected in Run 1 at the LHC by the two experiments,” says Nick Wardle, a CERN Fellow on CMS. “The uncertainties on the couplings of the Higgs boson are reduced by almost 30 percent, making these measurements of Higgs boson production and decay the most precise obtained to date.”

In the Standard Model, how strongly the Higgs boson couples to another particle determines that particle’s mass and the rate at which a Higgs boson decays into other particles.

For instance, the Higgs boson couples strongly with the bottom quark and very weakly with the electron; therefore, the bottom quark has a much greater mass than the electron and the Higgs will commonly decay into a bottom quark and its antiquark.
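As a rough illustration of that scaling, here is a tree-level Python sketch using the textbook relations y_f = √2 m_f/v and Γ(H→f f̄) = N_c m_H m_f²/(8π v²)·β³; it uses approximate pole masses and ignores QCD corrections, so the numbers only show the m_f² hierarchy, not precise predictions:

import math

v, mH = 246.0, 125.0          # Higgs vacuum expectation value and Higgs mass in GeV

def yukawa(mf):
    # tree-level Yukawa coupling, proportional to the fermion mass
    return math.sqrt(2.0) * mf / v

def width(mf, Nc):
    # tree-level Gamma(H -> f fbar), scaling as mf^2; beta is the phase-space factor
    beta = math.sqrt(1.0 - 4.0 * mf**2 / mH**2)
    return Nc * mH * mf**2 / (8.0 * math.pi * v**2) * beta**3   # in GeV

for name, mf, Nc in [("bottom", 4.18, 3), ("tau", 1.777, 1), ("electron", 0.000511, 1)]:
    print("%-8s y_f = %.1e   Gamma ~ %.1e MeV" % (name, yukawa(mf), width(mf, Nc) * 1e3))

The bottom quark, roughly eight thousand times heavier than the electron, ends up with a decay width about eight orders of magnitude larger, which is why H→bb̄ dominates while H→e⁺e⁻ is hopelessly rare.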

One of the objectives of combining the ATLAS and CMS data is to examine some Higgs decay signals that were picked up by each experiment but did not individually reach the statistical significance needed to establish them.

“For example, the Higgs boson decaying to a pair of tau leptons is established with a greater statistical significance when ATLAS and CMS data are combined,” says Ketevi Assamagan, an ATLAS physicist at Brookhaven National Laboratory.

While the discovery and measurement of the mass of the Higgs itself was perhaps the most notable driver of research during the first run of the LHC, measurements of Higgs couplings and their impact on Higgs boson production and decay will be important to searches for new physics in the current run.

 


 

by Katie Elyce Jones at September 01, 2015 05:47 PM

Georg von Hippel - Life on the lattice

Fundamental Parameters from Lattice QCD, Day Two
This morning, we started with a talk by Taku Izubuchi, who reviewed the lattice efforts relating to the hadronic contributions to the anomalous magnetic moment (g-2) of the muon. While the QED and electroweak contributions to (g-2) are known to great precision, most of the theoretical uncertainty presently comes from the hadronic (i.e. QCD) contributions, of which there are two that are relevant at the present level of precision: the contribution from the hadronic vacuum polarization, which can be inserted into the leading-order QED correction, and the contribution from hadronic light-by-light scattering, which can be inserted between the incoming external photon and the muon line. There are a number of established methods for computing the hadronic vacuum polarization, both phenomenologically using a dispersion relation and the experimental R-ratio, and in lattice field theory by computing the correlator of two vector currents (which can, and needs to, be refined in various ways in order to achieve competitive levels of precision). No such well-established methods exist yet for the light-by-light scattering, which is so far mostly described using models. There are, however, now efforts from a number of different sides to tackle this contribution; Taku mainly presented the approach by the RBC/UKQCD collaboration, which uses stochastic sampling of the internal photon propagators to explicitly compute the diagrams contributing to (g-2). Another approach would be to calculate the four-point amplitude explicitly (which has recently been done for the first time by the Mainz group) and to decompose this into form factors, which can then be integrated to yield the light-by-light scattering contribution to (g-2).

The second talk of the day was given by Petros Dimopoulos, who reviewed lattice determinations of D and B leptonic decays and mixing. For the charm quark, cut-off effects appear to be reasonably well-controlled with present-day lattice spacings and actions, and the most precise lattice results for the D and Ds decay constants claim sub-percent accuracy. For the b quark, effective field theories or extrapolation methods have to be used, which introduces a source of hard-to-assess theoretical uncertainty, but the results obtained from the different approaches generally agree very well amongst themselves. Interestingly, there does not seem to be any noticeable dependence on the number of dynamical flavours in the heavy-quark flavour observables, as Nf=2 and Nf=2+1+1 results agree very well to within the quoted precisions.

In the afternoon, the CKMfitter collaboration split off to hold their own meeting, and the lattice participants met for a few one-on-one or small-group discussions of some topics of interest.

by Georg v. Hippel (noreply@blogger.com) at September 01, 2015 03:29 PM

Tommaso Dorigo - Scientificblogging

Bel's Temple In Palmyra Is No More
Images of the systematic destruction of archaeological sites and art pieces in Syria are no news any more, but I was especially saddened to see before/after aerial pictures of Palmyra's site today, which demonstrate how the beautiful temple of Bel has been completely destroyed by explosives. A picture of the temple is shown below.

read more

by Tommaso Dorigo at September 01, 2015 08:21 AM

Lubos Motl - string vacua and pheno

Both CMS and ATLAS see \(5.2\TeV\) dijet (and trijet)
One month ago, I discussed two intriguing dijet events seen by the CMS collaboration at the LHC. They had a pretty high mass. An event from the \(8\TeV\) run in 2012 had the total mass of \(5.15\TeV\) while a fresh event from the \(13\TeV\) run in 2015 had the total mass of \(5.0\TeV\) or so.

Two collisions isn't a terribly high number, so these events, even though their energy is really higher than that of the "following" high-energy events, may deserve to be ignored even more strictly than the bumps in many other searches.

Perhaps, you were saying, to make you more excited, the competitors at ATLAS would have to see a \(5.2\TeV\) event as well. Moreover, a miracle had better close the \(0.2\TeV\) gap between the two CMS events. What happened a month later?

Ladies and Gentlemen, the competitors at ATLAS have seen the \(5.2\TeV\) event as well. Moreover, CMS has adjusted their energies and the \(5.0\TeV\) event from the 2015 run is quoted as a \(5.2\TeV\) event as well so the gap is gone! And as a bonus, ATLAS has also seen a \(5.2\TeV\) event in a multijet analysis. Isn't it starting to look intriguing?




Last night, ATLAS released three papers from the \(13\TeV\) run while the CMS showed us one. ATLAS gives us an analysis of Z-plus-jets, both teams unmask their new narrow dijet analyses, and ATLAS adds a search for strong gravity with trijets and similar beasts.




At the end of July, you were given this picture from CMS:



The right graph shows the highest energy dijet from the \(8\TeV\) run to be \(5.2\TeV\) but it doesn't look impressive because there are other nearby high-energy dijets, too. However, the left graph from the fresh \(13\TeV\) run shows a rather big gap followed by one \(5.0\TeV\) event. They could be signs of the same particle.

Now, CMS has provided us with the paper which contains a new version of this left graph:



The total luminosity is 42 inverse picobarns instead of 37 – not too much progress in the last month – but you see that the graph is basically the same except that the estimated energy of the dijets has been significantly increased and now the black swan event has \(5.2\TeV\) or \(5.3\TeV\).

What about ATLAS? They have a dijet search out, too. Figure 13 shows a colorful "photograph" of their highest-mass event in the dijet mass search. The scalar sum of the jet \(p_T\) is \(5.2\TeV\). The ATLAS search has found "nothing conclusive" but Figures 1,3,7 are pretty cool for our discussion. Here is Figure 7:



Just look how isolated the \(5.2\TeV\) ATLAS dijet event seems to be. In every \(100\GeV\) wide bin up to \(4.0\TeV\) except for the bin between \(3.8\) and \(3.9\TeV\), there is at least one event. Then there is a big window \(4.0\)-\(5.2\TeV\) with no events at all. And then you have the isolated collision in \(5.2\)-\(5.3\TeV\), the same bin as one in the new CMS analysis.



Note that these \(5.2\TeV\) events were two totally different collisions that took place at different places – inside the ATLAS detector and the CMS detector, respectively. CMS is a more compact but chunky detector in France. ATLAS is a larger but relatively lighter one on the opposite side of the ring. Because it's in Switzerland, as the Swiss anthem at 0:12 of Don Garbutt's video above proves, the ATLAS collisions didn't even take place in the European Union. ;-) But these collisions seem to speak the same language (namely French LOL).

And there's a bonus. ATLAS has also presented its search for strong gravity. No conclusive evidence of anything has been seen but look at this Figure 7:



It shows events with at least three jets – so the dijet event we mentioned previously should be absent. The graph shows lots of events up to \(4\TeV\), then a big desert, and then an event between \(5.2\) and \(5.3\TeV\). Well, the \(x\)-axis isn't really the mass in this case, it's the \(H_T\) variable – the sum of \(p_T\) of three or four leading jets, of the lepton if any, and of the missing transverse energy. But for such extremely high-energy collisions, \(H_T\) and \(m\) could be very close, my intuition says.
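For readers who want to see the two quantities side by side, here is a tiny Python sketch with two made-up, nearly back-to-back jet four-momenta (E, px, py, pz in TeV); these are purely illustrative numbers, not the actual ATLAS or CMS events:

import math

jets = [(2.70, 2.60, 0.20, 0.70), (2.70, -2.55, -0.25, -0.80)]   # two hypothetical leading jets

def invariant_mass(p4s):
    # m^2 = (sum E)^2 - |sum p|^2 for the system of jets
    E  = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def h_t(p4s):
    # scalar sum of the transverse momenta (leptons and missing E_T would be added here, too)
    return sum(math.hypot(p[1], p[2]) for p in p4s)

print("m_jj = %.2f TeV, H_T = %.2f TeV" % (invariant_mass(jets), h_t(jets)))

For such hard, nearly back-to-back jets the two numbers come out close to each other (here within about five percent), which is the intuition expressed above.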

So right now, we seem to have four collisions in four different searches – two from CMS and two from ATLAS – that seem to contain a hint of a particle of mass \(5.2\TeV\) capable of decaying to jets. Well, the number of these \(5.2\TeV\) events should better double again. Wait for another month. ;-)

by Luboš Motl (noreply@blogger.com) at September 01, 2015 07:18 AM

August 31, 2015

Christian P. Robert - xi'an's og

likelihood-free inference in high-dimensional models

“…for a general linear model (GLM), a single linear function is a sufficient statistic for each associated parameter…”

The recently arXived paper “Likelihood-free inference in high-dimensional models“, by Kousathanas et al. (July 2015), proposes an ABC resolution of the dimensionality curse [when the dimension of the parameter and of the corresponding summary statistics grows] by turning Gibbs-like and by using a component-by-component ABC-MCMC update that allows for low dimensional statistics. In the (rare) event there exists a conditional sufficient statistic for each component of the parameter vector, the approach is just as justified as when using a generic ABC-Gibbs method based on the whole data. Otherwise, that is, when using a non-sufficient estimator of the corresponding component (as, e.g., in a generalised [not general!] linear model), the approach is less coherent as there is no joint target associated with the Gibbs moves. One may therefore wonder at the convergence properties of the resulting algorithm. The only safe case [in dimension 2] is when one of the restricted conditionals does not depend on the other parameter. Note also that each Gibbs step a priori requires the simulation of a new pseudo-dataset, which may be a major imposition on computing time. And that setting the tolerance for each parameter is a delicate calibration issue because in principle the tolerance should depend on the other component values. I ran a comparative experiment on a toy normal target, using either the empirical mean and variance (blue) or the empirical median and mad (brown), with little difference in the (above) output, especially when considering that I set the tolerance somewhat arbitrarily. This could be due to the fact that the pairs are quite similar in terms of their estimation properties. However, I then realised that the empirical variance is not sufficient for the variance conditional on the mean parameter. I looked at the limiting case (with zero tolerance), which amounts to simulating σ first and then μ given σ, and ran a (Gibbs and iid) simulation. The difference, as displayed below (red standing for the exact ABC case), is not enormous, even though it produces a fatter tail in μ. Note the interesting feature that I cannot produce the posterior distribution of the parameters given the median and mad statistics. Which is a great introduction to ABC!

R code follows:

N=10
data=rnorm(N,mean=3,sd=.5)

#ABC with insufficient statistics
medata=median(data)
madata=mad(data)

varamad=rep(0,100)
for (i in 1:100)
  varamad[i]=mad(sample(data,N,rep=TRUE))
tol=c(.01*mad(data),.05*mad(varamad))

T=1e6
mu=rep(median(data),T)
sig=rep(mad(data),T)
for (t in 2:T){
  mu[t]=rnorm(1)
  psudo=rnorm(N,mean=mu[t],sd=sig[t-1])
  if (abs(medata-median(psudo))>tol[1])
   mu[t]=mu[t-1]

  sig[t]=1/rexp(1)
  psudo=rnorm(N,mean=mu[t],sd=sig[t])
  if (abs(madata-mad(psudo))>tol[2])
   sig[t]=sig[t-1]
}
#ABC with more sufficient statistics
meaata=mean(data)
sddata=sd(data)

varamad=rep(0,100)
for (i in 1:100)
  varamad[i]=sd(sample(data,N,rep=TRUE))
tol=c(.1*sd(data),.1*sd(varamad))

for (t in 2:T){
  mu[t]=rnorm(1)
  psudo=rnorm(N,mean=mu[t],sd=sig[t-1])
  if (abs(meaata-mean(psudo))>tol[1])
   mu[t]=mu[t-1]

  sig[t]=1/rexp(1)
  psudo=rnorm(N,mean=mu[t],sd=sig[t])
  if (abs(sddata-sd(psudo))>tol[2])
   sig[t]=sig[t-1]
}

#MCMC with false sufficient
sig=1/sqrt(rgamma(T,shape=.5*N,rate=1+.5*var(data)))
mu=rnorm(T,mean(data)/(1+sig^2/N),sd=1/sqrt(1+N/sig^2))

 


Filed under: Books, R, Statistics, University life Tagged: ABC, ABC-Gibbs, compatible conditional distributions, convergence of Gibbs samplers, curse of dimensionality, exact ABC, Gibbs sampling, median, median absolute deviation, R

by xi'an at August 31, 2015 10:15 PM

Peter Coles - In the Dark

Sono arrivato a Castiglioncello

Well, made it to Castiglioncello on schedule but was too early to check into my hotel so I went directly to the first session and thence to the welcoming cocktail party and accompanying sunset.

image

image

Which was nice. When I did get to the hotel however I found the WIFI isn’t working so I had to post this via my mobile at not inconsiderable expense. I was hoping to download a few things for my talk tomorrow too.

That grumble aside it seems a nice place. And it’s sunny!

PS. Apologies for the grammatical error in the title of the original version of this post!


by telescoper at August 31, 2015 08:43 PM

The n-Category Cafe

Wrangling generators for subobjects

Guest post by John Wiltshire-Gordon

My new paper arXiv:1508.04107 contains a definition that may be of interest to category theorists. Emily Riehl has graciously offered me this chance to explain.

In algebra, if we have a firm grip on some object \(X\), we probably have generators for \(X\). Later, if we have some quotient \(X/\sim\), the same set of generators will work. The trouble comes when we have a subobject \(Y \subseteq X\), which (given typical bad luck) probably misses every one of our generators. We need theorems to find generators for subobjects.

Category theory offers a clean definition of generation: if \(C\) is some category of algebraic objects and \(F \dashv U\) is a free-forgetful adjunction with \(U : C \longrightarrow \mathrm{Set}\), then it makes sense to say that a subset \(S \subseteq U X\) generates \(X\) if the adjunct arrow \(F S \rightarrow X\) is epic.

Certainly \(R\)-modules fit into this setup nicely, and groups, commutative rings, etc. What about simplicial sets? It makes sense to say that some simplicial set \(X\) is “generated” by its 1-simplices, for example: this is saying that \(X\) is 1-skeletal. But simplicial sets come with many sorts of generator… Ah, and they also come with many forgetful functors, given by evaluation at the various objects of \(\Delta^{op}\).

Let’s assume we’re in a context where there are many forgetful functors, and many corresponding notions of generation. In fact, for concreteness, let’s think about cosimplicial vector spaces over the rational numbers. A cosimplicial vector space is a functor \(\Delta \longrightarrow \mathrm{Vect}\), and so for each \(d \in \Delta\) we have a functor \(U_d : \mathrm{Vect}^{\Delta} \longrightarrow \mathrm{Set}\) with \(U_d V = V d\) and left adjoint \(F_d\). We will say that a vector \(v \in V d\) sits in degree \(d\), and generally think of \(V\) as a vector space graded by the objects of \(\Delta\).

Definition A cosimplicial vector space \(V\) is generated in degree \(d \in \Delta\) if the component at \(V\) of the counit \(F_d U_d V \longrightarrow V\) is epic. Similarly, \(V\) is generated in degrees \(\{d_i\}\) if \(\oplus_i F_{d_i} U_{d_i} V \longrightarrow V\) is epic.

Example Let \(V = F_d \{\ast\}\) be the free cosimplicial vector space on a single vector in degree \(d\). Certainly \(V\) is generated in degree \(d\). It’s less obvious that \(V\) admits a unique nontrivial subobject \(W \hookrightarrow V\). Let’s try to find generators for \(W\). It turns out that \(W d = 0\), so no generators there. Since \(W \neq 0\), there must be generators somewhere… but where?

Theorem (Wrangling generators for cosimplicial abelian groups): If \(V\) is a cosimplicial abelian group generated in degrees \(\{d_i\}\), then any subobject \(W \hookrightarrow V\) is generated in degrees \(\{d_i + 1\}\).

Ok, so now we know exactly where to look for generators for subobjects: exactly one degree higher than our generators for the ambient object. The generators have been successfully wrangled.

The preorder on degrees of generation \(\leq_d\)

Time to formalize. Let \(U_d, U_x, U_y : C \longrightarrow \mathrm{Set}\) be three forgetful functors, and let \(F_d, F_x, F_y\) be their left adjoints. When the labels \(d, x, y\) appear unattached to \(U\) or \(F\), they represent formal “degrees of generation,” even though \(C\) need not be a functor category. In this broader setting, we say \(V \in C\) is generated in (formal) degree \(\star\) if the component of the counit \(F_{\star} U_{\star} V \longrightarrow V\) is epic. By the unit-counit identities, if \(V\) is generated in degree \(\star\), the whole set \(U_{\star} V\) serves as a generating set.

Definition Say \(x \leq_d y\) if for all \(V \in C\) generated in degree \(d\), every subobject \(W \hookrightarrow V\) generated in degree \(x\) is also generated in degree \(y\).

Practically speaking, if \(x \leq_d y\), then generators in degree \(x\) can always be replaced by generators in degree \(y\) provided that the ambient object is generated in degree \(d\).

Suppose that we have a complete understanding of the preorder \(\leq_d\), and we’re trying to generate subobjects inside some object generated in degree \(d\). Then every time \(x \leq_d y\), we may replace generators in degree \(x\) with their span in degree \(y\). In other words, the generators \(S \subseteq U_x V\) are equivalent to generators \(\mathrm{Im}(U_y F_x S \longrightarrow U_y V) \subseteq U_y V\). Arguing in this fashion, we may wrangle all generators upward in the preorder \(\leq_d\). If \(\leq_d\) has a finite system of elements \(m_1, m_2, \ldots, m_k\) capable of bounding any other element from above, then all generators may be replaced by generators in degrees \(m_1, m_2, \ldots, m_k\). This is the ideal wrangling situation, and lets us restrict our search for generators to this finite set of degrees.

In the case of cosimplicial vector spaces, \(d + 1\) is a maximum for the preorder \(\leq_d\) with \(d \in \Delta\). So any subobject of a cosimplicial vector space generated in degree \(d\) is generated in degree \(d + 1\). (It is also true that, for example, \(d + 2\) is a maximum for the preorder \(\leq_d\). In fact, we have \(d + 1 \leq_d d + 2 \leq_d d + 1\). That’s why it’s important that \(\leq_d\) is a preorder, and not a true partial order.)

Connection to the preprint arXiv:1508.04107

In the generality presented above, where a formal degree of generation is a free-forgetful adjunction to \(\mathrm{Set}\), I do not know much about the preorder \(\leq_d\). The paper linked above is concerned with the case \(C = (\mathrm{Mod}_R)^{\mathcal{D}}\) of functor categories of \(\mathcal{D}\)-shaped diagrams of \(R\)-modules. In this case I can say a lot.

In Definition 1.1, I give a computational description of the preorder \(\leq_d\). This description makes it clear that if \(\mathcal{D}\) has finite hom-sets, then you could program a computer to tell you whenever \(x \leq_d y\).

In Section 2.2, I give many different categories \(\mathcal{D}\) for which explicit upper bounds are known for the preorders \(\leq_d\). (In the paper, an explicit system of upper bounds for every preorder is called a homological modulus.)

Connection to the field of Representation Stability

If you’re interested in more context for this work, I highly recommend two of Emily Riehl’s posts from February of last year on Representation Stability, a subject begun by Tom Church and Benson Farb. With Jordan Ellenberg, they explained how certain stability patterns can be considered consequences of structure theory for the category of \(\mathrm{FI}\)-modules \((\mathrm{Vect}_{\mathbb{Q}})^{\mathrm{FI}}\), where \(\mathrm{FI}\) is the category of finite sets with injections. In the category of \(\mathrm{FI}\)-modules, the preorders \(\leq_n\) have no finite system of upper bounds. In contrast, for \(\mathrm{Fin}\)-modules, every preorder has a maximum! (Here \(\mathrm{Fin}\) is the usual category of finite sets.) So having all finite set maps instead of just the injections gives much better control on generators for subobjects. As an application, Jordan and I use this extra control to obtain new results about configuration spaces of points on a manifold. You can read about it on his blog.

For more on the recent progress of representation stability, you can also check out the bibliography of my paper or take a look at exciting new results by CEF, as well as Rohit Nagpal, Andy Putman, Steven Sam, and Andrew Snowden, and Jenny Wilson.

by riehl (eriehl@math.jhu.edu) at August 31, 2015 04:51 PM

Georg von Hippel - Life on the lattice

Fundamental Parameters from Lattice QCD, Day One
Greetings from Mainz, where I have the pleasure of covering a meeting for you without having to travel from my usual surroundings (I clocked up more miles this year already than can be good for my environmental conscience).

Our Scientific Programme (which is the bigger of the two formats of meetings that the Mainz Institute of Theoretical Physics (MITP) hosts, the smaller being Topical Workshops) started off today with two keynote talks summarizing the status and expectations of the FLAG (Flavour Lattice Averaging Group, presented by Tassos Vladikas) and CKMfitter (presented by Sébastien Descotes-Genon) collaborations. Both groups are in some way in the business of performing weighted averages of flavour physics quantities, but of course their backgrounds, rationale and methods are quite different in many regards. I will not attempt to give a line-by-line summary of the talks or the afternoon discussion session here, but instead just summarize a few
points that caused lively discussions or seemed important in some other way.

By now, computational resources have reached the point where we can achieve such statistics that the total error on many lattice determinations of precision quantities is completely dominated by systematics (and indeed different groups would differ at the several-σ level if one were to consider only their statistical errors). This may sound good in a way (because it is what you'd expect in the limit of infinite statistics), but it is also very problematic, because the estimation of systematic errors is in the end really more of an art than a science, having a crucial subjective component at its heart. This means not only that systematic errors quoted by different groups may not be readily comparable, but also that it becomes important how to treat systematic errors (which may also be correlated, if e.g. two groups use the same one-loop renormalization constants) when averaging different results. How to do this is again subject to subjective choices to some extent. FLAG imposes cuts on quantities relating to the most important sources of systematic error (lattice spacings, pion mass, spatial volume) to select acceptable ensembles, then adds the statistical and systematic errors in quadrature, before performing a weighted average and computing the overall error taking correlations between different results into account using Schmelling's procedure. CKMfitter, on the other hand, adds all systematic errors linearly, and uses the Rfit procedure to perform a maximum likelihood fit. Either choice is equally permissible, but they are not directly compatible (so CKMfitter can't use FLAG averages as such).
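Just to make the contrast concrete, here is a deliberately crude Python toy (made-up numbers, no correlations, and only a caricature of the Rfit treatment) comparing the two philosophies on three fictitious lattice results quoted as (value, statistical error, systematic error):

# three fictitious results: (value, statistical error, systematic error)
results = [(0.225, 0.002, 0.003), (0.221, 0.001, 0.004), (0.228, 0.003, 0.002)]

# FLAG-like treatment: add the two errors in quadrature for each result,
# then take an error-weighted average (correlations are ignored in this toy).
weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in results]
flag_avg = sum(w * v for w, (v, _, _) in zip(weights, results)) / sum(weights)
flag_err = (1.0 / sum(weights)) ** 0.5

# CKMfitter-like (Rfit-style) caricature: systematics enter linearly as flat ranges,
# so each result allows the interval v +/- (stat + syst); here we simply intersect them.
lo = max(v - stat - syst for v, stat, syst in results)
hi = min(v + stat + syst for v, stat, syst in results)

print("quadrature + weighted average: %.4f +/- %.4f" % (flag_avg, flag_err))
print("linear-systematics envelope:   [%.4f, %.4f]" % (lo, hi))

The point is not the numbers but the fact that the two prescriptions answer subtly different questions, which is why the outputs of one cannot simply be fed into the other.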

Another point raised was that it is important for lattice collaborations computing mixing parameters to not just provide products like f_B√B_B, but also f_B and B_B separately (as well as information about the correlation between these quantities), in order to make the global CKM fits easier.

by Georg v. Hippel (noreply@blogger.com) at August 31, 2015 04:34 PM

Symmetrybreaking - Fermilab/SLAC

Construction approved for world's most powerful digital camera

It would take 1500 high-definition television screens to display just one image from the Large Synoptic Survey Telescope's high-resolution camera.

The US Department of Energy has approved the start of construction for a 3.2-gigapixel digital camera—the world’s largest—at the heart of the Large Synoptic Survey Telescope. Assembled at SLAC National Accelerator Laboratory, the camera will be the eye of LSST, revealing unprecedented details of the universe and helping unravel some of its greatest mysteries.

The construction milestone, known as Critical Decision 3, is the last major approval decision before the acceptance of the finished camera, says LSST Director Steven Kahn: “Now we can go ahead and procure components and start building it.”

Starting in 2022, LSST will take digital images of the entire visible southern sky every few nights from atop a mountain called Cerro Pachón in Chile. It will produce a wide, deep and fast survey of the night sky, cataloguing by far the largest number of stars and galaxies ever observed. During a 10-year time frame, LSST will detect tens of billions of objects—the first time a telescope will observe more galaxies than there are people on Earth—and will create movies of the sky with unprecedented detail. Funding for the camera comes from the DOE, while financial support for the telescope and site facilities, the data management system, and the education and public outreach infrastructure of LSST comes primarily from the National Science Foundation.

The telescope’s camera—the size of a small car and weighing more than three tons—will capture full-sky images at such high resolution that it would take 1500 high-definition television screens to display just one of them.
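That comparison is easy to check with a couple of lines of Python (my own arithmetic, taking a high-definition screen to mean 1920×1080 pixels):

camera_pixels = 3.2e9                  # the LSST camera's 3.2 gigapixels
hd_screen_pixels = 1920 * 1080         # about 2.07 megapixels per HD screen
print("%.0f HD screens per image" % (camera_pixels / hd_screen_pixels))   # roughly 1540

which indeed lands close to the 1500-screen figure quoted above.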

This has already been a busy year for the LSST Project. Its dual-surface primary/tertiary mirror—the first of its kind for a major telescope—was completed; a traditional stone-laying ceremony in northern Chile marked the beginning of on-site construction of the facility; and a nearly 2000-square-foot, 2-story-tall clean room was completed at SLAC to accommodate fabrication of the camera.

“We are very gratified to see everyone’s hard work appreciated and acknowledged by this DOE approval,” says SLAC Director Chi-Chang Kao. “SLAC is honored to be partnering with the National Science Foundation and other DOE labs on this groundbreaking endeavor. We’re also excited about the wide range of scientific opportunities offered by LSST, in particular increasing our understanding of dark energy.”

Components of the camera are being built by an international collaboration of universities and labs, including DOE’s Brookhaven National Laboratory, Lawrence Livermore National Laboratory and SLAC. SLAC is responsible for overall project management and systems engineering, camera body design and fabrication, data acquisition and camera control software, cryostat design and fabrication, and integration and testing of the entire camera. Building and testing the camera will take approximately five years.

The LSST’s camera will include a filter-changing mechanism and shutter. This animation shows that mechanism, which allows the camera to view different wavelengths; the camera is capable of viewing light from near-ultraviolet to near-infrared (0.3-1 μm) wavelengths.

Illustration by: SLAC National Accelerator Laboratory

SLAC is also designing and constructing the NSF-funded database for the telescope’s data management system. LSST will generate a vast public archive of data—approximately 6 million gigabytes per year, or the equivalent of shooting roughly 800,000 images with a regular 8-megapixel digital camera every night, albeit of much higher quality and scientific value. This data will help researchers study the formation of galaxies, track potentially hazardous asteroids, observe exploding stars and better understand dark matter and dark energy, which together make up 95 percent of the universe but whose natures remain unknown.

“We have a busy agenda for the rest of 2015 and 2016,” Kahn says. “Construction of the telescope on the mountain is well underway. The contracts for fabrication of the telescope mount and the dome enclosure have been awarded and the vendors are at full steam.”

Nadine Kurita, camera project manager at SLAC, says fabrication of the state-of-the-art sensors for the camera has already begun, and contracts are being awarded for optical elements and other major components. “After several years of focusing on designs and prototypes, we are excited to start construction of key parts of the camera. The coming year will be crucial as we assemble and test the sensors for the focal plane.”

The National Research Council’s Astronomy and Astrophysics decadal survey, Astro2010, ranked the LSST as the top ground-based priority for the field for the current decade.  The recent report of the Particle Physics Project Prioritization Panel of the federal High Energy Physics Advisory Panel, setting forth the strategic plan for US particle physics, also recommended completion of the LSST.

“We’ve been working hard for years to get to this point,” Kurita says. “Everyone is very excited to start building the camera and take a big step toward conducting a deep survey of the Southern night sky.”


This article is based on a SLAC press release.

 


August 31, 2015 03:46 PM

ZapperZ - Physics and Physicists

The History of Antiprotons
Antiprotons, one half of the particle pairs collided at the departed Tevatron at Fermilab, have had a long and distinguished history in the development of elementary particle physics. This CERN Courier article traces their history and all the important milestones in our knowledge that are due to the discovery of this particle.

Over the decades, antiprotons have become a standard tool for studies in particle physics; the word "antimatter" has entered into mainstream language; and antihydrogen is fast becoming a laboratory for investigations in fundamental physics. At CERN, the Antiproton Decelerator (AD) is now an important facility for studies in fundamental physics at low energies, which complement the investigations at the LHC’s high-energy frontier. This article looks back at some of the highlights in the studies of the antiworld at CERN, and takes a glimpse at what lies in store at the AD. 

Zz.

by ZapperZ (noreply@blogger.com) at August 31, 2015 12:31 PM

Jester - Resonaances

Weekend plot: SUSY limits rehashed
Lake Tahoe is famous for preserving dead bodies in good condition over many years, so it is a natural place to organize the SUSY conference. As a tribute to this event, here is a plot from a recent ATLAS meta-analysis:
It shows the constraints on the gluino and the lightest neutralino masses in the pMSSM. Usually, the most transparent way to present experimental limits on supersymmetry is by using simplified models. This consists in picking two or more particles out of the MSSM zoo, and assuming that they are the only ones playing a role in the analyzed process. For example, a popular simplified model has a gluino and a stable neutralino interacting via an effective quark-gluino-antiquark-neutralino coupling. In this model, gluino pairs are produced at the LHC through their couplings to ordinary gluons, and then each promptly decays to 2 quarks and a neutralino via the effective couplings. This shows up in a detector as 4 or more jets and the missing energy carried off by the neutralinos. Within this simplified model, one can thus interpret the LHC multi-jets + missing energy data as constraints on 2 parameters: the gluino mass and the lightest neutralino mass. One result of this analysis is that, for a massless neutralino, the gluino mass is constrained to be bigger than about 1.4 TeV, see the white line in the plot.

A non-trivial question is what happens to these limits if one starts to fiddle with the remaining one hundred parameters of the MSSM. ATLAS tackles this question in the framework of the pMSSM, which is a version of the MSSM where all flavor- and CP-violating parameters are set to zero. In the resulting 19-dimensional parameter space, ATLAS picks a large number of points that reproduce the correct Higgs mass and are consistent with various precision measurements. Then they check what fraction of the points with a given m_gluino and m_neutralino survives the constraints from all ATLAS supersymmetry searches so far. Of course, the results will depend on how the parameter space is sampled, but nevertheless we can get a feeling for how robust the limits obtained in simplified models are. It is interesting that the gluino mass limits turn out to be quite robust. From the plot one can see that, for a light neutralino, it is difficult to live with m_gluino < 1.4 TeV, and that there are no surviving points with m_gluino < 1.1 TeV. Similar conclusions are not true for all simplified models, e.g., the limits on squark masses in simplified models can be very much relaxed by going to the larger parameter space of the pMSSM. Another thing worth noticing is that the blind spot near the m_gluino=m_neutralino diagonal is not really there: it is covered by ATLAS monojet searches.
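Schematically (and only schematically), the survival-fraction logic looks like the little Python sketch below; the sampling, the binning and especially the pass_all_searches function are toy stand-ins of my own, not the actual ATLAS machinery:

import random

random.seed(1)

def pass_all_searches(m_gluino, m_neutralino):
    # placeholder for the full battery of LHC searches; this toy rule simply
    # excludes every point with a gluino lighter than 1.4 TeV (1400 GeV)
    return m_gluino > 1400.0

bins = {}                                  # (gluino bin, neutralino bin) -> (survivors, total)
for _ in range(100000):
    mg = random.uniform(200.0, 2500.0)     # toy "pMSSM point", masses in GeV
    mn = random.uniform(0.0, mg)           # neutralino assumed lighter than the gluino
    key = (int(mg // 250), int(mn // 250))
    survived, total = bins.get(key, (0, 0))
    bins[key] = (survived + pass_all_searches(mg, mn), total + 1)

for key in sorted(bins):
    survived, total = bins[key]
    print("gluino bin %d, neutralino bin %d: %3.0f%% survive" % (key[0], key[1], 100.0 * survived / total))

In the real analysis each point carries its own spectrum and cross sections, and every individual search is evaluated on it, but the summary plot is built in essentially this "fraction of surviving points per bin" fashion.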

The LHC run-2 is going slowly, so we still have some time to play with the run-1 data. See the ATLAS paper for many more plots. New, stronger limits on supersymmetry are not expected before next summer.

by Jester (noreply@blogger.com) at August 31, 2015 11:20 AM

Tommaso Dorigo - Scientificblogging

Highlights From ICNFP 2015
The fourth edition of the International Conference on New Frontiers in Physics ended yesterday evening, and it is time for a summary. However, this year I must say that I am not in a good position to give an overview of the most interesting physics discussions that have taken place here, as I was involved in the organization of events for the conference and I could only attend a relatively small fraction of the presentations.
ICNFP offers a broad view on the forefront topics of many areas of physics, with the main topics being nuclear and particle physics, yet with astrophysics and theoretical developments in quantum mechanics and related subjects also playing a major role. 

read more

by Tommaso Dorigo at August 31, 2015 09:34 AM

August 30, 2015

Christian P. Robert - xi'an's og

ergodicity of approximate MCMC chains with applications to large datasets

Another arXived paper I read on my way to Warwick! And yet another paper written by my friend Natesh Pillai (and his co-author Aaron Smith, from Ottawa). The goal of the paper is to study the ergodicity and the degree of approximation of the true posterior distribution of approximate MCMC algorithms that recently flourished as an answer to “Big Data” issues… [Comments below are about the second version of this paper.] One of the most curious results in the paper is the fact that the approximation may prove better than the original kernel, in terms of computing costs, at least asymptotically in the computing cost! There also are acknowledged connections with the approximative MCMC kernel of Pierre Alquier, Neal Friel, Richard Everitt and A Boland, briefly mentioned in an earlier post.

The paper starts with a fairly theoretical part, followed by an application to austerity sampling [and, in the earlier version of the paper, to the Hoeffding bounds of Bardenet et al., both discussed earlier on the ‘Og, to exponential random graphs (the paper being rather terse on the description of the subsampling mechanism), to stochastic gradient Langevin dynamics (by Max Welling and Yee-Whye Teh), and to ABC-MCMC]. The assumptions are about the transition kernels of a reference Markov kernel and of one associated with the approximation, imposing some bounds on the Wasserstein distance between those kernels, K and K’. The results being generic, there is no constraint as to how K is chosen or on how K’ is derived from K. Except in Lemma 3.6 and in the application section, where the same proposal kernel L is used for both Metropolis-Hastings algorithms K and K’. While I understand this makes for an easier coupling of the kernels, this also sounds like a restriction to me in that modifying the target begs for a similar modification in the proposal, if only because the tails they are a-changin’

In the case of subsampling the likelihood to gain computation time (as discussed by Korattikara et al. and by Bardenet et al.), the austerity algorithm as described in Algorithm 2 is surprising, as the average of the sampled data log-densities and the log-transform of the remainder of the Metropolis-Hastings probability, which seem unrelated, are compared until they are close enough. I also find it hard to derive from the different approximation theorems bounding exceedance probabilities a rule to decide on the subsampling rate as a function of the overall sample size and of the computing cost. (As a side, if general, remark, I remain somewhat reserved about the subsampling idea, given that it requires the entire dataset to be available at every iteration. This makes parallel implementations rather difficult to contemplate.)
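For readers who have not met the subsampling idea, here is a deliberately simplified sketch of a subsampled Metropolis-Hastings step (mine, not the paper's): the log-likelihood ratio is estimated on a random mini-batch and scaled up to the full data size, whereas the actual austerity algorithm of Korattikara et al. keeps enlarging the subsample via a sequential test until the accept/reject decision is statistically safe.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=10_000)  # toy dataset: unknown mean, known unit variance

def loglik(theta, x):
    return -0.5 * (x - theta) ** 2  # Gaussian log-likelihood, up to an additive constant

def subsampled_mh_step(theta, step=0.2, batch=200):
    """One random-walk MH step whose log-likelihood ratio is estimated on a mini-batch (flat prior)."""
    prop = theta + step * rng.normal()
    idx = rng.choice(data.size, size=batch, replace=False)
    # Average log-likelihood ratio on the subsample, scaled up to the full dataset size.
    llr_hat = data.size * np.mean(loglik(prop, data[idx]) - loglik(theta, data[idx]))
    return prop if np.log(rng.random()) < llr_hat else theta

theta, chain = 0.0, []
for _ in range(5_000):
    theta = subsampled_mh_step(theta)
    chain.append(theta)
print(np.mean(chain[1_000:]))  # should hover near the true mean, 1.0 (up to subsampling noise and bias)
```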


Filed under: pictures, Statistics, Travel, University life Tagged: ABC-MCMC, accelerated ABC, Approximate Bayesian computation, austerity sampling, ergodicity, MCMC, Metropolis-Hastings algorithms, Monte Carlo Statistical Methods, Natesh Pillai, subsampling, Wasserstein distance

by xi'an at August 30, 2015 10:15 PM

Peter Coles - In the Dark

Sono arrivato a Pisa

En route to a workshop in the picturesque village of Castiglioncello, which is on the coast of Tuscany on a promontory sticking out into the Ligurian Sea, I decided to travel a day early and stay over in Pisa. I flew direct from London Gatwick to Pisa and it’s not far from the airport by train to my final destination, but despite travelling to Italy many times over the years I’ve never actually visited Pisa so I thought I’d take the opportunity to have a look around before making the short journey to Castiglioncello in the morning. In any case the cost of the flight was much lower to travel on a Sunday and the hotel I’m in is quite cheap so it seemed like a good deal. It’s lovely and warm here – 32 degrees in fact, at 7pm local time, so I had a pleasant stroll among the tourists.

Here are a few pictures to prove I was here! The first is the main road nearest to my hotel, which brought back a lot of memories of my days as a student:

Now a couple of obligatory shots of the Leaning Tower. It was difficult to photograph because of the setting sun, so they’re not perfect, but I was in a bit of a rush to get something to eat and, well, you know, there is little point in having the inclination if you haven’t got the time…

The final one is of the Scuola Normale Superiore in the Piazza dei Cavalieri.

I rather like the shadow of the statue, which seems to be creeping up the stairs!

Anyway, I wish you all a pleasant bank holiday back in Blighty. I hope to blog from the conference, but if I don’t get time or the wifi craps out, I’ll be back online when I return at the end of the week.


by telescoper at August 30, 2015 06:38 PM

Clifford V. Johnson - Asymptotia

Fresh Cycle

I've been a bit quiet here the last week or so, you may have noticed. I wish I could say it was because I've been scribbling some amazing new physics in my notebooks, or drawing several new pages for the book, or producing some other simply measurable output, but I cannot. Instead, I can only report that it was the beginning of a new semester (and entire new academic year!) this week just gone, and this - and all the associated preparations and so forth - coincided with several other things including working on several drafts of a grant renewal proposal.

The best news of all is that my new group of students for my class (graduate electromagnetism, second part) seems like a really good and fun group, and I am looking forward to working with them. We've had two lectures already and they seem engaged, and eager to take part in the way I like my classes to run - interactively and investigatively. I'm looking forward to working with them over the semester.

Other things I've been chipping away at in preparation for the next couple of months include launching the USC science film competition (its fourth year - I skipped last year because of family leave), moving my work office (for the first time in the 12 years I've been here), giving some lectures at an international school, organizing a symposium in celebration of the centenary of Einstein's General Relativity, and a number of shoots for some TV shows that might be of [...] Click to continue reading this post

The post Fresh Cycle appeared first on Asymptotia.

by Clifford at August 30, 2015 06:24 PM

John Baez - Azimuth

The Inverse Cube Force Law

Here you see three planets. The blue planet is orbiting the Sun in a realistic way: it’s going around an ellipse.

The other two are moving in and out just like the blue planet, so they all stay on the same circle. But they’re moving around this circle at different rates! The green planet is moving faster than the blue one: it completes 3 orbits each time the blue planet goes around once. The red planet isn’t going around at all: it only moves in and out.

What’s going on here?

In 1687, Isaac Newton published his Principia Mathematica. This book is famous, but in Propositions 43–45 of Book I he did something that people didn’t talk about much—until recently. He figured out what extra force, besides gravity, would make a planet move like one of these weird other planets. It turns out an extra force obeying an inverse cube law will do the job!

Let me make this more precise. We’re only interested in ‘central forces’ here. A central force is one that only pushes a particle towards or away from some chosen point, and only depends on the particle’s distance from that point. In Newton’s theory, gravity is a central force obeying an inverse square law:

F(r) = - \displaystyle{ \frac{a}{r^2} }

for some constant a. But he considered adding an extra central force obeying an inverse cube law:

F(r) = - \displaystyle{ \frac{a}{r^2} + \frac{b}{r^3} }

He showed that if you do this, for any motion of a particle in the force of gravity you can find a motion of a particle in gravity plus this extra force, where the distance r(t) is the same, but the angle \theta(t) is not.

In fact Newton did more. He showed that if we start with any central force, adding an inverse cube force has this effect.

There’s a very long page about this on Wikipedia:

Newton’s theorem of revolving orbits, Wikipedia.

I haven’t fully understood all of this, but it instantly makes me think of three other things I know about the inverse cube force law, which are probably related. So maybe you can help me figure out the relationship.

The first, and simplest, is this. Suppose we have a particle in a central force. It will move in a plane, so we can use polar coordinates r, \theta to describe its position. We can describe the force away from the origin as a function F(r). Then the radial part of the particle’s motion obeys this equation:

\displaystyle{ m \ddot r = F(r) + \frac{L^2}{mr^3} }

where L is the magnitude of the particle's angular momentum.

So, angular momentum acts to provide a ‘fictitious force’ pushing the particle out, which one might call the centrifugal force. And this force obeys an inverse cube force law!

Furthermore, thanks to the formula above, it’s pretty obvious that if you change L but also add a precisely compensating inverse cube force, the value of \ddot r will be unchanged! So, we can set things up so that the particle’s radial motion will be unchanged. But its angular motion will be different, since it has a different angular momentum. This explains Newton’s observation.

It’s often handy to write a central force in terms of a potential:

F(r) = -V'(r)

Then we can make up an extra potential responsible for the centrifugal force, and combine it with the actual potential V into a so-called effective potential:

\displaystyle{ U(r) = V(r) + \frac{L^2}{2mr^2} }

The particle’s radial motion then obeys a simple equation:

m\ddot{r} = - U'(r)

For a particle in gravity, where the force obeys an inverse square law and V is proportional to -1/r, the effective potential might look like this:

This is the graph of

\displaystyle{ U(r) = -\frac{4}{r} + \frac{1}{r^2} }

If you’re used to particles rolling around in potentials, you can easily see that a particle with not too much energy will move back and forth, never making it to r = 0 or r = \infty. This corresponds to an elliptical orbit. Give it more energy and the particle can escape to infinity, but it will never hit the origin. The repulsive ‘centrifugal force’ always overwhelms the attraction of gravity near the origin, at least if the angular momentum is nonzero.

On the other hand, suppose we have a particle moving in an attractive inverse cube force! Then the potential is proportional to 1/r^2, so the effective potential is

\displaystyle{ U(r) = \frac{c}{r^2} + \frac{L^2}{2mr^2} }

where c is negative for an attractive force. If this attractive force is big enough, namely

\displaystyle{ c < -\frac{L^2}{2m} }

then this force can exceed the centrifugal force, and the particle can fall in to r = 0. If we keep track of the angular coordinate \theta, we can see what’s really going on. The particle is spiraling in to its doom, hitting the origin in a finite amount of time!

This should remind you of a black hole, and indeed something similar happens there, but even more drastic:

Schwarzschild geodesics: effective radial potential energy, Wikipedia.

For a nonrotating uncharged black hole, the effective potential has three terms. Like Newtonian gravity it has an attractive -1/r term and a repulsive 1/r^2 term. But it also has an attractive -1/r^3 term! In other words, it’s as if on top of Newtonian gravity, we had another attractive force obeying an inverse fourth power law! This overwhelms the others at short distances, so if you get too close to a black hole, you spiral in to your doom.

For example, a black hole can have an effective potential like this:

But back to inverse cube force laws! I know two more things about them. A while back I discussed how a particle in an inverse square force can be reinterpreted as a harmonic oscillator:

Planets in the fourth dimension, Azimuth.

There are many ways to think about this, and apparently the idea in some form goes all the way back to Newton! It involves a sneaky way to take a particle in a potential

\displaystyle{ V(r) \propto r^{-1} }

and think of it as moving around in the complex plane. Then if you square its position—thought of as a complex number—and cleverly reparametrize time, you get a particle moving in a potential

\displaystyle{ V(r) \propto r^2 }

This amazing trick can be generalized! A particle in a potential

\displaystyle{ V(r) \propto r^p }

can be transformed to a particle in a potential

\displaystyle{ V(r) \propto r^q }

if

(p+2)(q+2) = 4

A good description is here:

• Rachel W. Hall and Krešimir Josić, Planetary motion and the duality of force laws, SIAM Review 42 (2000), 115–124.

This trick transforms particles in r^p potentials with p ranging between -2 and +\infty to r^q potentials with q ranging between +\infty and -2. It’s like a see-saw: when p is small, q is big, and vice versa.

But you’ll notice this trick doesn’t actually work at p = -2, the case that corresponds to the inverse cube force law. The problem is that p + 2 = 0 in this case, so we can’t find q with (p+2)(q+2) = 4.

So, the inverse cube force is special in three ways: it’s the one that you can add on to any force to get solutions with the same radial motion but different angular motion, it’s the one that naturally describes the ‘centrifugal force’, and it’s the one that doesn’t have a partner! We’ve seen how the first two ways are secretly the same. I don’t know about the third, but I’m hopeful.

Quantum aspects

Finally, here’s a fourth way in which the inverse cube law is special. This shows up most visibly in quantum mechanics… and this is what got me interested in this business in the first place.

You see, I’m writing a paper called ‘Struggles with the continuum’, which discusses problems in analysis that arise when you try to make some of our favorite theories of physics make sense. The inverse square force law poses interesting problems of this sort, which I plan to discuss. But I started wanting to compare the inverse cube force law, just so people can see things that go wrong in this case, and not take our successes with the inverse square law for granted.

Unfortunately a huge digression on the inverse cube force law would be out of place in that paper. So, I’m offloading some of that material to here.

In quantum mechanics, a particle moving in an inverse cube force law has a Hamiltonian like this:

H = -\nabla^2 + c r^{-2}

The first term describes the kinetic energy, while the second describes the potential energy. I’m setting \hbar = 1 and 2m = 1 to remove some clutter that doesn’t really affect the key issues.

To see how strange this Hamiltonian is, let me compare an easier case. If p < 2, the Hamiltonian

H = -\nabla^2 + c r^{-p}

is essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}), which is the space of compactly supported smooth functions on 3d Euclidean space minus the origin. What this means is that first of all, H is defined on this domain: it maps functions in this domain to functions in L^2(\mathbb{R}^3). But more importantly, it means we can uniquely extend H from this domain to a self-adjoint operator on some larger domain. In quantum physics, we want our Hamiltonians to be self-adjoint. So, this fact is good.

Proving this fact is fairly hard! It uses something called the Kato–Lax–Milgram–Nelson theorem together with this beautiful inequality:

\displaystyle{ \int_{\mathbb{R}^3} \frac{1}{4r^2} |\psi(x)|^2 \,d^3 x \le \int_{\mathbb{R}^3} |\nabla \psi(x)|^2 \,d^3 x }

for any \psi\in C_0^\infty(\mathbb{R}^3).

If you think hard, you can see this inequality is actually a fact about the quantum mechanics of the inverse cube law! It says that if c \ge -1/4, the energy of a quantum particle in the potential c r^{-2} is bounded below. And in a sense, this inequality is optimal: if c < -1/4, the energy is not bounded below. This is the quantum version of how a classical particle can spiral in to its doom in an attractive inverse cube law, if it doesn’t have enough angular momentum. But it’s subtly and mysteriously different.

You may wonder how this inequality is used to prove good things about potentials that are ‘less singular’ than the c r^{-2} potential: that is, potentials c r^{-p} with p < 2. For that, you have to use some tricks that I don’t want to explain here. I also don’t want to prove this inequality, or explain why it’s optimal! You can find most of this in some old course notes of mine:

• John Baez, Quantum Theory and Analysis, 1989.

See especially section 15.

But it’s pretty easy to see how this inequality implies things about the expected energy of a quantum particle in the potential c r^{-2}. So let’s do that.

In this potential, the expected energy of a state \psi is:

\displaystyle{  \langle \psi, H \psi \rangle =   \int_{\mathbb{R}^3} \overline\psi(x)\, (-\nabla^2 + c r^{-2})\psi(x) \, d^3 x }

Doing an integration by parts, this gives:

\displaystyle{  \langle \psi, H \psi \rangle = \int_{\mathbb{R}^3} |\nabla \psi(x)|^2 + cr^{-2} |\psi(x)|^2 \,d^3 x }

The inequality I showed you says precisely that when c = -1/4, this is greater than or equal to zero. So, the expected energy is actually nonnegative in this case! And making c greater than -1/4 only makes the expected energy bigger.

Note that in classical mechanics, the energy of a particle in this potential ceases to be bounded below as soon as c < 0. Quantum mechanics is different because of the uncertainty principle! To get a lot of negative potential energy, the particle’s wavefunction must be squished near the origin, but that gives it kinetic energy.
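As a quick sanity check on the inequality itself, one can plug in a concrete spherically symmetric trial state and compare both sides numerically; here is a sketch with the Gaussian \psi(x) = e^{-r^2/2} (not compactly supported, but the inequality extends to it), where both integrals reduce to radial ones:

```python
import numpy as np
from scipy.integrate import quad

# Trial state psi(x) = exp(-r**2 / 2); both sides are spherically symmetric,
# so the 3d integrals reduce to radial ones with measure 4*pi*r**2 dr.
lhs, _ = quad(lambda r: (1.0 / (4.0 * r**2)) * np.exp(-r**2) * 4.0 * np.pi * r**2, 0.0, np.inf)
rhs, _ = quad(lambda r: (r * np.exp(-r**2 / 2.0))**2 * 4.0 * np.pi * r**2, 0.0, np.inf)

print(lhs, rhs, lhs <= rhs)  # about 2.78 versus 8.36: the bound holds with plenty of room for this state
```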

It turns out that the Hamiltonian for a quantum particle in an inverse cube force law has exquisitely subtle and tricky behavior. Many people have written about it, running into ‘paradoxes’ when they weren’t careful enough. Only rather recently have things been straightened out.

For starters, the Hamiltonian for this kind of particle

H = -\nabla^2 + c r^{-2}

has different behaviors depending on c. Obviously the force is attractive when c < 0 and repulsive when c > 0, but that’s not the only thing that matters! Here’s a summary:

c \ge 3/4. In this case H is essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}). So, it admits a unique self-adjoint extension and there’s no ambiguity about this case.

c < 3/4. In this case H is not essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}). In fact, it admits more than one self-adjoint extension! This means that we need extra input from physics to choose the Hamiltonian in this case. It turns out that we need to say what happens when the particle hits the singularity at r = 0. This is a long and fascinating story that I just learned yesterday.

c \ge -1/4. In this case the expected energy \langle \psi, H \psi \rangle is bounded below for \psi \in C_0^\infty(\mathbb{R}^3 - \{0\}). It turns out that whenever we have a Hamiltonian that is bounded below, even if there is not a unique self-adjoint extension, there exists a canonical ‘best choice’ of self-adjoint extension, called the Friedrichs extension. I explain this in my course notes.

c < -1/4. In this case the expected energy is not bounded below, so we don’t have the Friedrichs extension to help us choose which self-adjoint extension is ‘best’.

To go all the way down this rabbit hole, I recommend these two papers:

• Sarang Gopalakrishnan, Self-Adjointness and the Renormalization of Singular Potentials, B.A. Thesis, Amherst College.

• D. M. Gitman, I. V. Tyutin and B. L. Voronov, Self-adjoint extensions and spectral analysis in the Calogero problem, Jour. Phys. A 43 (2010), 145205.

The first is good for a broad overview of problems associated to singular potentials such as the inverse cube force law; while there is attention to mathematical rigor, the focus is on physical insight. The second is good if you want—as I wanted—to really get to the bottom of the inverse cube force law in quantum mechanics. Both have lots of references.

Also, both point out a crucial fact I haven’t mentioned yet: in quantum mechanics the inverse cube force law is special because, naively at least, it has a kind of symmetry under rescaling! You can see this from the formula

H = -\nabla^2 + cr^{-2}

by noting that both the Laplacian and r^{-2} have units of length^{-2}. So, they both transform in the same way under rescaling: if you take any smooth function \psi, apply H and then expand the result by a factor of k, you get k^2 times what you get if you do those operations in the other order.

In particular, this means that if you have a smooth eigenfunction of H with eigenvalue \lambda, you will also have one with eigenvalue k^2 \lambda for any k > 0. And if your original eigenfunction was normalizable, so will be the new one!

With some calculation you can show that when c \le -1/4, the Hamiltonian H has a smooth normalizable eigenfunction with a negative eigenvalue. In fact it’s spherically symmetric, so finding it is not so terribly hard. But this instantly implies that H has smooth normalizable eigenfunctions with any negative eigenvalue.

This implies various things, some terrifying. First of all, it means that H is not bounded below, at least not on the space of smooth normalizable functions. A similar but more delicate scaling argument shows that it’s also not bounded below on C_0^\infty(\mathbb{R}^3 - \{0\}), as I claimed earlier.

This is scary but not terrifying: it simply means that when c \le -1/4, the potential is too strongly negative for the Hamiltonian to be bounded below.

The terrifying part is this: we’re getting uncountably many normalizable eigenfunctions, all with different eigenvalues, one for each choice of k. A self-adjoint operator on a countable-dimensional Hilbert space like L^2(\mathbb{R}^3) can’t have uncountably many normalizable eigenvectors with different eigenvalues, since then they’d all be orthogonal to each other, and that’s too many orthogonal vectors to fit in a Hilbert space of countable dimension!

This sounds like a paradox, but it’s not. These functions are not all orthogonal, and they’re not all eigenfunctions of a self-adjoint operator. You see, the operator H is not self-adjoint on the domain we’ve chosen, the space of all smooth functions in L^2(\mathbb{R}^3). We can carefully choose a domain to get a self-adjoint operator… but it turns out there are many ways to do it.

Intriguingly, in most cases this choice breaks the naive dilation symmetry. So, we’re getting what physicists call an ‘anomaly’: a symmetry of a classical system that fails to give a symmetry of the corresponding quantum system.

Of course, if you’ve made it this far, you probably want to understand what the different choices of Hamiltonian for a particle in an inverse cube force law actually mean, physically. The idea seems to be that they say how the particle changes phase when it hits the singularity at r = 0 and bounces back out.

(Why does it bounce back out? Well, if it didn’t, time evolution would not be unitary, so it would not be described by a self-adjoint Hamiltonian! We could try to describe the physics of a quantum particle that does not come back out when it hits the singularity, and I believe people have tried, but this requires a different set of mathematical tools.)

For a detailed analysis of this, it seems one should take Schrödinger’s equation and do a separation of variables into the angular part and the radial part:

\psi(r,\theta,\phi) = \Psi(r) \Phi(\theta,\phi)

For each choice of \ell = 0,1,2,\dots one gets a space of spherical harmonics that one can use for the angular part \Phi. The interesting part is the radial part, \Psi. Here it is helpful to make a change of variables

u(r) = r\Psi(r)

At least naively, Schrödinger’s equation for the particle in the cr^{-2} potential then becomes

\displaystyle{ \frac{d}{dt} u = -iH u }

where

\displaystyle{ H = -\frac{d^2}{dr^2} + \frac{c + \ell(\ell+1)}{r^2} }

Beware: I keep calling all sorts of different but related Hamiltonians H, and this one is for the radial part of the dynamics of a quantum particle in an inverse cube force. As we’ve seen before in the classical case, the centrifugal force and the inverse cube force join forces in an ‘effective potential’

\displaystyle{ U(r) = kr^{-2} }

where

k = c + \ell(\ell+1)

So, we have reduced the problem to that of a particle on the open half-line (0,\infty) moving in the potential kr^{-2}. The Hamiltonian for this problem:

\displaystyle{ H = -\frac{d^2}{dr^2} + \frac{k}{r^2} }

is called the Calogero Hamiltonian. Needless to say, it has fascinating and somewhat scary properties, since to make it into a bona fide self-adjoint operator, we must make some choice about what happens when the particle hits r = 0. The formula above does not really specify the Hamiltonian.
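To get a rough feel for how the answer depends on k, here is a crude numerical sketch (mine, and only qualitative: putting the operator on a finite-difference grid with a hard cutoff near r = 0 secretly imposes one particular boundary condition there, i.e. one choice of self-adjoint extension) that diagonalizes -d^2/dr^2 + k/r^2 on a truncated half-line:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lowest_eigenvalue(k, cutoff, r_max=10.0, n=20_000):
    """Smallest eigenvalue of -d^2/dr^2 + k/r^2 on [cutoff, r_max], Dirichlet ends, finite differences."""
    r = np.linspace(cutoff, r_max, n)
    h = r[1] - r[0]
    diag = 2.0 / h**2 + k / r**2
    off = -np.ones(n - 1) / h**2
    return eigh_tridiagonal(diag, off, select='i', select_range=(0, 0), eigvals_only=True)[0]

for k in (+1.0, -1.0):
    print(k, [lowest_eigenvalue(k, a) for a in (0.1, 0.01)])
# For k = +1 (repulsive) the lowest level barely cares about the cutoff; for k = -1 < -1/4 it
# plunges roughly like -1/cutoff**2, a discrete shadow of the operator being unbounded below.
```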

This is more or less where Gitman, Tyutin and Voronov begin their analysis, after a long and pleasant review of the problem. They describe all the possible choices of self-adjoint operator that are allowed. The answer depends on the values of k, but very crudely, the choice says something like how the phase of your particle changes when it bounces off the singularity. Most choices break the dilation invariance of the problem. But intriguingly, some choices retain invariance under a discrete subgroup of dilations!

So, the rabbit hole of the inverse cube force law goes quite deep, and I expect I haven’t quite gotten to the bottom yet. The problem may seem pathological, verging on pointless. But the math is fascinating, and it’s a great testing-ground for ideas in quantum mechanics—very manageable compared to deeper subjects like quantum field theory, which are riddled with their own pathologies. Finally, the connection between the inverse cube force law and centrifugal force makes me think it’s not a mere curiosity.

Credits

The animation was made by ‘WillowW’ and placed on Wikicommons. It’s one of a number that appears in this Wikipedia article:

Newton’s theorem of revolving orbits, Wikipedia.

I made the graphs using the free online Desmos graphing calculator.


by John Baez at August 30, 2015 07:26 AM

August 29, 2015

Christian P. Robert - xi'an's og

no country for ‘Og snaps?!

A few days ago, I got an anonymous comment complaining about my tendency to post pictures “no one is interested in” on the ‘Og and suggesting I move them to another electronic medium like Twitter or Instagram, so as to avoid readers having to sort through the blog entries for statistics related ones, to separate the wheat from the chaff… While my first reaction was (unsurprisingly) one of irritation, a more constructive one is to point out to all (un)interested readers that they can always subscribe by RSS to the Statistics category (and skip the chaff), just like R bloggers only post my R related entries. Now, if more of the ‘Og’s readers find the presumably increasing flow of pictures a nuisance, just let me know and I will try to curb this avalanche of pixels… Not certain that I succeed, though!


Filed under: Mountains, pictures, Travel Tagged: blogging, Maple Pass loop, North Cascades National Park, Og, R, Rainy Pass, Washington State

by xi'an at August 29, 2015 10:15 PM

Peter Coles - In the Dark

Statistics in Astronomy

A few people at the STFC Summer School for new PhD students in Cardiff last week asked if I could share the slides. I’ve given the Powerpoint presentation to the organizers so presumably they will make the presentation available, but I thought I’d include it here too. I’ve corrected a couple of glitches I introduced trying to do some last-minute hacking just before my talk!

As you will infer from the slides, I decided not to compress an entire course on statistical methods into a one-hour talk. Instead I tried to focus on basic principles, primarily to get across the importance of Bayesian methods for tackling the usual tasks of hypothesis testing and parameter estimation. The Bayesian framework offers the only mathematically consistent way of tackling such problems and should therefore be the preferred method of using data to test theories. Of course if you have data but no theory or a theory but no data, any method is going to struggle. And if you have neither data nor theory you’d be better off getting one or the other before trying to do anything. Failing that, you could always go down the pub.

Rather than just leave it at that I thought I’d append some stuff  I’ve written about previously on this blog, many years ago, about the interesting historical connections between Astronomy and Statistics.

Once the basics of mathematical probability had been worked out, it became possible to think about applying probabilistic notions to problems in natural philosophy. Not surprisingly, many of these problems were of astronomical origin but, on the way, the astronomers that tackled them also derived some of the basic concepts of statistical theory and practice. Statistics wasn’t just something that astronomers took off the shelf and used; they made fundamental contributions to the development of the subject itself.

The modern subject we now know as physics really began in the 16th and 17th century, although at that time it was usually called Natural Philosophy. The greatest early work in theoretical physics was undoubtedly Newton’s great Principia, published in 1687, which presented his idea of universal gravitation which, together with his famous three laws of motion, enabled him to account for the orbits of the planets around the Sun. But majestic though Newton’s achievements undoubtedly were, I think it is fair to say that the originator of modern physics was Galileo Galilei.

Galileo wasn’t as much of a mathematical genius as Newton, but he was highly imaginative, versatile and (very much unlike Newton) had an outgoing personality. He was also an able musician, fine artist and talented writer: in other words a true Renaissance man.  His fame as a scientist largely depends on discoveries he made with the telescope. In particular, in 1610 he observed the four largest satellites of Jupiter, the phases of Venus and sunspots. He immediately leapt to the conclusion that not everything in the sky could be orbiting the Earth and openly promoted the Copernican view that the Sun was at the centre of the solar system with the planets orbiting around it. The Catholic Church was resistant to these ideas. He was hauled up in front of the Inquisition and placed under house arrest. He died in the year Newton was born (1642).

These aspects of Galileo’s life are probably familiar to most readers, but hidden away among scientific manuscripts and notebooks is an important first step towards a systematic method of statistical data analysis. Galileo performed numerous experiments, though he almost certainly did not carry out the one with which he is most commonly credited. He did establish that the speed at which bodies fall is independent of their weight, not by dropping things off the leaning tower of Pisa but by rolling balls down inclined slopes. In the course of his numerous forays into experimental physics Galileo realised that however careful he was in taking measurements, the simplicity of the equipment available to him left him with quite large uncertainties in some of the results. He was able to estimate the accuracy of his measurements using repeated trials and sometimes ended up with a situation in which some measurements had larger estimated errors than others. This is a common occurrence in many kinds of experiment to this day.

Very often the problem we have in front of us is to measure two variables in an experiment, say X and Y. It doesn’t really matter what these two things are, except that X is assumed to be something one can control or measure easily and Y is whatever it is the experiment is supposed to yield information about. In order to establish whether there is a relationship between X and Y one can imagine a series of experiments where X is systematically varied and the resulting Y measured.  The pairs of (X,Y) values can then be plotted on a graph like the example shown in the Figure.

[Figure: an example plot of measured (X,Y) pairs scattered about a straight line]

In this example it certainly looks like there is a straight line linking Y and X, but with small deviations above and below the line caused by the errors in measurement of Y. You could quite easily take a ruler and draw a line of “best fit” by eye through these measurements. I spent many a tedious afternoon in the physics labs doing this sort of thing when I was at school. Ideally, though, what one wants is some procedure for fitting a mathematical function to a set of data automatically, without requiring any subjective intervention or artistic skill. Galileo found a way to do this. Imagine you have a set of pairs of measurements (xi,yi) to which you would like to fit a straight line of the form y=mx+c. One way to do it is to find the line that minimizes some measure of the spread of the measured values around the theoretical line. The way Galileo did this was to work out the sum of the differences between the measured yi and the predicted values mxi+c at the measured values x=xi. He used the absolute difference |yi-(mxi+c)| so that the resulting optimal line would, roughly speaking, have as many of the measured points above it as below it. This general idea is now part of the standard practice of data analysis, and as far as I am aware, Galileo was the first scientist to grapple with the problem of dealing properly with experimental error.
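Here is a small sketch of that idea in modern dress (my own toy example, not Galileo's actual procedure): fit y = mx + c to synthetic noisy data by minimizing the total absolute deviation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # synthetic data: true slope 2, intercept 1

def total_abs_deviation(params):
    m, c = params
    return np.sum(np.abs(y - (m * x + c)))  # Galileo-style criterion: sum of absolute residuals

fit = minimize(total_abs_deviation, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)  # fitted slope and intercept, close to the true values
```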

The method used by Galileo was not quite the best way to crack the puzzle, but he had it almost right. It was again an astronomer who provided the missing piece and gave us essentially the same method used by statisticians (and astronomers) today.

Karl Friedrich Gauss was undoubtedly one of the greatest mathematicians of all time, so it might be objected that he wasn’t really an astronomer. Nevertheless he was director of the Observatory at Göttingen for most of his working life and was a keen observer and experimentalist. In 1809, he developed Galileo’s ideas into the method of least-squares, which is still used today for curve fitting.

This approach involves basically the same procedure but minimizes the sum of [yi-(mxi+c)]² rather than |yi-(mxi+c)|. This leads to a much more elegant mathematical treatment of the resulting deviations – the “residuals”. Gauss also did fundamental work on the mathematical theory of errors in general. The normal distribution is often called the Gaussian curve in his honour.
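For comparison with the sketch above, the least-squares version has a closed-form answer via the normal equations; again a toy example of mine, not Gauss's own computation:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # same kind of synthetic data as before

# Normal equations for y ~ m*x + c: solve (A^T A) beta = A^T y with design matrix A = [x, 1].
A = np.column_stack([x, np.ones_like(x)])
m, c = np.linalg.solve(A.T @ A, A.T @ y)
residuals = y - (m * x + c)
print(m, c, np.sum(residuals**2))  # slope, intercept and the minimized sum of squared residuals
```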

After Galileo, the development of statistics as a means of data analysis in natural philosophy was dominated by astronomers. I can’t possibly go systematically through all the significant contributors, but I think it is worth devoting a paragraph or two to a few famous names.

I’ve already written on this blog about Jakob Bernoulli, whose famous book on probability was (probably) written during the 1690s. But Jakob was just one member of an extraordinary Swiss family that produced at least 11 important figures in the history of mathematics. Among them was Daniel Bernoulli who was born in 1700. Along with the other members of his famous family, he had interests that ranged from astronomy to zoology. He is perhaps most famous for his work on fluid flows which forms the basis of much of modern hydrodynamics, especially Bernoulli’s principle, which accounts for changes in pressure as a gas or liquid flows along a pipe of varying width.
But the elder Jakob’s work on gambling clearly also had some effect on Daniel, as in 1735 the younger Bernoulli published an exceptionally clever study involving the application of probability theory to astronomy. It had been known for centuries that the orbits of the planets are confined to the same part of the sky as seen from Earth, a narrow band called the Zodiac. This is because the Earth and the planets orbit in approximately the same plane around the Sun. The Sun’s path in the sky as the Earth revolves also follows the Zodiac. We now know that the flattened shape of the Solar System holds clues to the processes by which it formed from a rotating cloud of cosmic debris that formed a disk from which the planets eventually condensed, but this idea was not well established in the time of Daniel Bernoulli. He set himself the challenge of figuring out what the chance was that the planets were orbiting in the same plane simply by chance, rather than because some physical processes confined them to the plane of a protoplanetary disk. His conclusion? The odds against the inclinations of the planetary orbits being aligned by chance were, well, astronomical.
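One can redo a cartoon version of Bernoulli's calculation in a few lines of Python (the tolerance and the number of planets below are my illustrative choices, not his): draw orbital poles uniformly at random on the sphere and ask how often all the other planets end up within a few degrees of Earth's orbital plane.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 1_000_000
max_inclination_deg = 7.0   # illustrative tolerance, not Bernoulli's actual number
n_other_planets = 5         # the other naked-eye planets, measured relative to Earth's orbital plane

# A uniformly random orbital pole has cos(inclination) uniform on [-1, 1].
cos_i = rng.uniform(-1.0, 1.0, n_trials)
p_single = np.mean(np.degrees(np.arccos(cos_i)) < max_inclination_deg)

print(p_single)                     # chance one random orbit lies that close to the plane, same sense
print(p_single ** n_other_planets)  # chance they all do by accident: astronomically small
```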

The next “famous” figure I want to mention is not at all as famous as he should be. John Michell was a Cambridge graduate in divinity who became a village rector near Leeds. His most important idea was the suggestion he made in 1783 that sufficiently massive stars could generate such a strong gravitational pull that light would be unable to escape from them.  These objects are now known as black holes (although the name was coined much later by John Archibald Wheeler). In the context of this story, however, he deserves recognition for his use of a statistical argument that the number of close pairs of stars seen in the sky could not arise by chance. He argued that they had to be physically associated, not fortuitous alignments. Michell is therefore credited with the discovery of double stars (or binaries), although compelling observational confirmation had to wait until William Herschel’s work of 1803.

It is impossible to overestimate the importance of the role played by Pierre Simon, Marquis de Laplace in the development of statistical theory. His book A Philosophical Essay on Probabilities, which began as an introduction to a much longer and more mathematical work, is probably the first time that a complete framework for the calculation and interpretation of probabilities ever appeared in print. First published in 1814, it is astonishingly modern in outlook.

Laplace began his scientific career as an assistant to Antoine Laurent Lavoisier, one of the founding fathers of chemistry. Laplace’s most important work was in astronomy, specifically in celestial mechanics, which involves explaining the motions of the heavenly bodies using the mathematical theory of dynamics. In 1796 he proposed the theory that the planets were formed from a rotating disk of gas and dust, which is in accord with the earlier assertion by Daniel Bernoulli that the planetary orbits could not be randomly oriented. In 1776 Laplace had also figured out a way of determining the average inclination of the planetary orbits.

A clutch of astronomers, including Laplace, also played important roles in the establishment of the Gaussian or normal distribution.  I have also mentioned Gauss’s own part in this story, but other famous astronomers played their part. The importance of the Gaussian distribution owes a great deal to a mathematical property called the Central Limit Theorem: the distribution of the sum of a large number of independent variables tends to have the Gaussian form. Laplace in 1810 proved a special case of this theorem, and Gauss himself also discussed it at length.

A general proof of the Central Limit Theorem was finally furnished in 1838 by another astronomer, Friedrich Wilhelm Bessel – best known to physicists for the functions named after him – who in the same year was also the first man to measure a star’s distance using the method of parallax. Finally, the name “normal” distribution was coined in 1850 by another astronomer, John Herschel, son of William Herschel.
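The theorem is easy to see empirically; as a quick illustration (mine, nothing more than a toy), the standardized sum of a few dozen independent uniform random numbers already has quantiles very close to those of a standard normal:

```python
import numpy as np

rng = np.random.default_rng(6)
# Sum 50 independent uniforms, many times over, and standardize the result.
sums = rng.random((200_000, 50)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()

# Compare a few empirical quantiles with those of a standard normal (about -1.64, 0.00, +1.64).
print(np.quantile(z, [0.05, 0.5, 0.95]))
```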

I hope this gets the message across that the histories of statistics and astronomy are very much linked. Aspiring young astronomers are often dismayed when they enter research by the fact that they need to do a lot of statistical things. I’ve often complained that physics and astronomy education at universities usually includes almost nothing about statistics, even though that is the one thing you can guarantee to use as a researcher in practically any branch of the subject.

Over the years, statistics has become regarded as slightly disreputable by many physicists, perhaps echoing Rutherford’s comment along the lines of “If your experiment needs statistics, you ought to have done a better experiment”. That’s a silly statement anyway because all experiments have some form of error that must be treated statistically, but it is particularly inapplicable to astronomy which is not experimental but observational. Astronomers need to do statistics, and we owe it to the memory of all the great scientists I mentioned above to do our statistics properly.


by telescoper at August 29, 2015 11:56 AM

August 28, 2015

Christian P. Robert - xi'an's og

walking the PCT

The last book I read in the hospital was Wild, by Cheryl Strayed, which was about walking the Pacific Crest Trail (PCT) as a regenerating experience. The book was turned into a movie this year. I did not like the book very much and did not try to watch the film, but when I realised my vacation rental would bring me a dozen miles from the PCT, I planned a day hike along this mythical trail… Especially since my daughter had dreams of hiking the trail one day. (Not realising at the time that Cheryl Strayed had not come that far north, but had stopped at the border between Oregon and Washington.)

The hike was really great, staying on a high ridge for most of the time and offering 360° views of the Eastern North Cascades (as well as forest fire smoke clouds in the distance…) Walking on the trail was very smooth as it was wide enough, with a limited gradient and hardly anyone around. Actually, we felt like intruding tourists on the trail, with our light backpacks, since the few hikers we crossed paths with were long-distance hikers, “doing” the trail sometimes with backpacks that looked as heavy as Strayed’s original “Monster”. And sometimes with incredibly light ones. A great specificity of those people is that they all were more than ready to share their experiences and goals, with no complaint about the hardship of being on the trail for several months! And sounding more sorry than eager to reach the Canadian border and the end of the PCT in a few more dozen miles… For instance, a solitary female hiker told us of her plans to get back to the section near Lake Chelan she had missed the week before due to threatening forest fires. A great entry to the PCT, with the dream of walking a larger portion in an undefined future…


Filed under: Books, Kids, Mountains, pictures, Running, Travel Tagged: backpacking, North Cascades National Park, Oregon, Pacific crest trail, PCT, vacations, Washington State

by xi'an at August 28, 2015 10:15 PM

astrobites - astro-ph reader's digest

A Cepheid Anomaly

Title: The Strange Evolution of Large Magellanic Cloud Cepheid OGLE-LMC-CEP1812

Authors: Hilding R. Neilson, Robert G. Izzard, Norbert Langer, and Richard Ignace

First Author’s Institution: Department of Astronomy & Astrophysics, University of Toronto

Status: Accepted to A&A

Figure 1: RS Puppis, one of the brightest and longest-period (41.4 days) Cepheids in the Milky Way. The striking appearance of this Cepheid is a result of the light echoes around it. Image taken with the Hubble Space Telescope.

It’s often tempting to think of stars as unchanging—especially on human timescales—but the more we study the heavens, the more it becomes clear that that isn’t true. Novae, supernovae, and gamma-ray bursts are all examples of sudden increases in brightness that stars can experience. There are also many kinds of variable stars—stars that regularly or irregularly change in brightness from a variety of mechanisms. Classical Cepheid variables are supergiant stars that periodically increase and decrease in luminosity due to their radial pulsations. They are stars that breathe, expanding and contracting like your lungs do when you inhale and exhale. Their regular periods, which are strongly related to their luminosity by the Leavitt law, make them important for measuring distance. Despite their importance in cosmology (as standard candles) and stellar astrophysics (by giving us insight into stellar evolution), there is still a lot that we don’t understand about classical Cepheid variables. One of the biggest mysteries that remains in characterizing them is the Cepheid mass discrepancy.

The Cepheid mass discrepancy refers to the fact that, at the same luminosity and temperature, stellar pulsation models generally predict that the Cepheids will have lower masses than stellar evolution models suggest they would. Several possible resolutions to the Cepheid mass discrepancy have been proposed, including pulsation-driven stellar mass loss, rotation, changes in radiative opacities, convective core overshooting in the Cepheid’s progenitor, or a combination of all of these. Measuring the Cepheid’s mass independently would help us constrain this problem, but as you might imagine, it’s not easy to weigh a star. Instead of a scale, our gold standard for determining stellar masses is an eclipsing binary.

An eclipsing binary system is just a system in which one of the orbiting stars passes in front of the other in our line of sight, blocking some of the other star’s light. This leads to variations in the amount of light that we see from the system. Because the orbits of the stars must be aligned just edge-on to us for this to happen, eclipsing binaries are quite rare discoveries. However, when we do have such a system, we know exactly the angle of inclination at which we are observing it. This makes it possible for us to accurately apply Kepler’s laws to get a measurement for the mass. Eclipsing binaries are highly prized for this reason (they’ve also gained some attention for being a highly-accurate way to measure extragalactic distances, but that’s another story altogether).
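Concretely, Kepler's third law relates the total mass to the orbital period and the semi-major axis of the relative orbit, M1 + M2 = 4π²a³/(G P²); here is a tiny sketch with made-up numbers (not the parameters of any real system):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass in kg
AU = 1.496e11        # astronomical unit in metres

# Hypothetical eclipsing-binary parameters, for illustration only (not CEP1812's actual values).
period = 800.0 * 86400.0   # orbital period in seconds (800 days)
a = 2.0 * AU               # semi-major axis of the relative orbit

total_mass = 4.0 * np.pi**2 * a**3 / (G * period**2)
print(total_mass / M_sun)  # total system mass in solar masses (about 1.7 for these numbers)
```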

Cepheids in eclipsing binary systems are even rarer—there are currently a total of four that have been discovered in the LMC. One has been discussed on astrobites before (it’s worth looking at the previous bite just to check out the crazy light curve). Since there are so few, and since their masses are so integral to understanding them and determining their basic properties, it’s even more important to study each system as carefully as we can to help us solve these stellar mysteries. The authors of today’s paper take a close look at one of the few eclipsing binary systems we have that contains a Cepheid. 

Figure 2: Figure 1 from the paper, depicting the stellar evolution model for the 3.8 solar mass Cepheid and its 2.7 solar mass red giant companion. The blue and orange shapes show the regions of the Hertzsprung-Russell diagram for each star that is consistent with its measured radius.

Unfortunately, rather than helping us, the subject of today’s astrobite, CEP1812, seems to cause more problems for us. Stellar evolution models indicate that the Cepheid appears to be about 100 Myr younger than its companion star, a red giant (and we expect them to be the same age). Figure 2 shows the evolutionary tracks of the red giant and the Cepheid. Previous papers have suggested that the Cepheid could have captured its companion, but today’s authors believe that this Cepheid could be even stranger—it may have formed from the merger of two smaller main sequence stars. This would mean that the current binary system was once a system of three stars, with the current red giant being the largest of the three. The merger would explain the apparently younger age of the Cepheid, because the resulting star evolves like a star that started its life with a mass equal to the sum of the two merged stars, yet looks younger. The red giant, which would have been the outermost star, could have played a role in inducing oscillations in the orbits of the two smaller stars that caused them to merge.

The authors test this proposal by evolving a stellar model of two smaller stars for about 300 Myr before allowing them to merge. The mass gets accreted onto the larger of the two stars in about 10 Myr, and the resulting star then evolves normally, but appears younger because the merger mixes additional hydrogen (lower mass stars are less efficient at hydrogen burning) into the core of the star, which matches the observations.

The authors argue that if CEP1812 is formed from the merger of two main sequence stars, it would be an anomalous Cepheid. Anomalous Cepheids have short periods (up to ~2.5 days), are formed by the merger of two low-mass stars, usually only about ~1 solar mass, have low metallicity, and are found in older stellar populations. There they stand out, since these Cepheids end up being much larger and younger than the surrounding ones. However, CEP1812 is about 3.8 solar masses and also at a higher metallicity than most anomalous Cepheids, making it a highly unusual candidate. CEP1812’s period-luminosity and light curve shape are also consistent with both classical Cepheids (which are the kind we use in measuring distances) and with anomalous Cepheids.

If CEP1812 is an anomalous Cepheid, it straddles the border between what we think of as “classical” Cepheids and what we think of as “anomalous” Cepheids. This would make it a risky Cepheid to use for cosmology, but interesting for bridging the gap between two different classes of Cepheids. The possibility of it being an anomalous Cepheid is just one resolution to its unique properties. However, if it does turn out that CEP1812 is not just another classical Cepheid, it could be the first of a fascinating subset of rare stars that we haven’t studied yet. Ultimately it’s still too soon to tell, but the authors have opened up an interesting new possibility for exploration.

 

by Caroline Huang at August 28, 2015 06:21 PM

Peter Coles - In the Dark

Talking and Speaking

Just back to Brighton after a pleasant couple of days in Cardiff, mainly dodging the rain but also making a small contribution to the annual STFC Summer School for new PhD students in Astronomy. Incidentally it’s almost exactly 30 years since I attended a similar event, as a new student myself, at the University of Durham.

Anyway, I gave a lecture yesterday morning on Statistics in Astronomy (I’ll post the slides on here in due course). I was back in action later in the day at a social barbecue held at Techniquest in Cardiff Bay.

Here’s the scene just before I started my turn:

It’s definitely an unusual venue to be giving a speech, but it was fun to do. Here’s a picture of me in action, taken by Ed Gomez:


I was asked to give a “motivational speech” to the assembled students but I figured that since they had all already  chosen to do a PhD they probably already had enough motivation. In any case I find it a bit patronising when oldies like me assume that they have to “inspire” the younger generation of scientists. In my experience, any inspiring is at least as likely to happen in the opposite direction! So in the event  I just told a few jokes and gave a bit of general advice, stressing for example the importance of ignoring your supervisor and of locating the departmental stationery cupboard as quickly as possible. 

It was very nice to see some old friends as well as all the new faces at the summer school. I’d like to take this opportunity to wish everyone about to  embark on a PhD, whether in Astronomy or some other subject, all the very best. You’ll find it challenging but immensely rewarding, so enjoy the adventure!

Oh, and thanks to the organisers for inviting me to take part. I was only there for one day, but the whole event seemed to go off very well!


by telescoper at August 28, 2015 04:29 PM

ATLAS Experiment

Lepton Photon 2015 – Into the Dragon’s Lair

This was my first time in Ljubljana, the capital city of Slovenia – a nation rich in forests and lakes, and the only country that connects the Alps, the Mediterranean and the Pannonian Plain. The slight rain was not an ideal welcome, but knowing that such an important conference was to be held there – together with a beautiful evening stroll – relaxed my mind.

The guardian.

At first, I thought I was somewhere in Switzerland. The beauty of the city and kindness of the local people just amazed me. Similar impressions overwhelmed me once the conference started – it was extremely well organized, with top-level speakers and delicious food. And though I met several colleagues there that I already knew, I felt as though I knew all the participants – so the atmosphere at the presentations was nothing short of enthusiastic and delightful.

Before the beginning of the conference, the ATLAS detector had just started getting the first data from proton collisions at 13 TeV center-of-mass energy, with a proton bunch spacing of 25 ns. The conference’s opening ceremony was followed by two excellent talks: Dr. Mike Lamont presented the LHC performance in Run 2 and Prof. Beate Heinemann discussed the ATLAS results from Run 2.

Furthermore, at the start of the Lepton Photon 2015 conference, the ALICE experiment announced results confirming the fundamental symmetry of nature (CPT), agreeing with the recent BASE experiment results from lower energy scale measurement.

The main building of the University of Ljubljana

The public lecture by Prof. Alan Guth on cosmic inflation and the multiverse was just as outstanding as expected. He entered the conference room with a student bag on his shoulder and a big, warm smile on his face – the ultimate invitation to both scientists and Ljubljana’s citizens. His presentation did an excellent job of explaining, to both experienced and young scientists, the hard work of getting to know the unexplored. While listening to Prof. Guth’s presentation, it seemed like the hour passed in only a few minutes – so superb was his talk.

I was also impressed by some of the participants. Many showed great interest in the lectures, and asked tough, interesting questions.

To briefly report on the latest results, as well as the potential of future searches for physics beyond the Standard Model, the following achievements were covered during the conference: the recent discovery by the LHCb experiment of a new class of particles known as pentaquarks; the observed flavor anomalies in semi-leptonic B meson decay rates seen by the BaBar, Belle and LHCb experiments; the muon g-2 anomaly; recent results on charged lepton flavor violation; hints of violation of lepton universality in RK and R(D(*)); and the first observation and evidence of the very rare decays of Bs and B0 mesons, respectively.

The conference center.

The second part of the conference featured poster sessions, where younger scientists were able to present their latest research achievements. Six of them were selected and offered the opportunity to give a plenary presentation, and they gave useful, well-prepared talks.

The final lecture of the conference was given by Prof. Jonathan Ellis, who provided an excellent closing summary and overview of the conference talks and presented results, with an emphasis on potential future discoveries and underlying theories.

To conclude, I have to stress that our very competent and kind colleagues from the Josef Stefan Institute in Ljubljana (as well as other international collaborative institutes) did a great job organizing this tremendous symposium. They’ve set a high standard for the future conferences to come.


pic_tatjana_jovin1 Tatjana Agatonovic Jovin is research assistant at the Institute of Physics in Belgrade, Serbia. She joined ATLAS in 2009, doing her PhD at the University of Belgrade. Her research included searches for new physics that can show up in decays of strange B mesons by measuring CP-violating weak mixing phase and decay rate difference using time-dependent angular analysis. In addition to her fascination with physics she loves hiking, skiing, music and fine arts!

by Tatjana Agatonovic Jovin at August 28, 2015 04:11 PM

Lubos Motl - string vacua and pheno

Decoding the near-Planckian spectrum from CMB patterns
In March 2015, physics stars Juan Maldacena and Nima Arkani-Hamed (IAS Princeton) wrote their first paper together:
Cosmological Collider Physics
At that time, I didn't discuss it because it looked a bit technical for the blogosphere but things look a bit different now, partially because some semi-popular news outlets discussed it.



Cliff Moore's camera, Seiberg, Maldacena, Witten, Arkani-Hamed, sorted by distance

What they propose is to pretty much reverse-engineer very fine irregularities in the Cosmic Microwave Background – the non-Gaussianities – and decode them according to their high-tech method, writing down the spectrum of elementary particles that are very heavy (with masses comparable to the Hubble scale during inflation), which may include Kaluza-Klein modes or excited strings.




Numerous people have said that "the Universe is a wonderful and powerful particle collider" because it allows us to study particle physics phenomena at very high energies – by looking into the telescope (because the expansion of the Universe has stretched these tiny length scales of particle physics to cosmological length scales). But Juan and Nima went further – approximately by 62 pages further. ;-)




What are the non-Gaussianities? If I simplify a little bit, we may measure the temperature of the cosmic microwave background in different directions. The temperature is about\[

T = 2.7255\pm 0.0006 \,{\rm K}

\] and the microwave leftover (remnant: we call it the Relict Radiation in Czech) from the Big Bang looks like thermal, black-body radiation emitted by an object whose temperature is this \(T\). Such an object isn't exactly hot – it's about minus 270 degrees Celsius – but the absolute temperature \(T\) is nonzero, nevertheless, so the object thermally radiates. The typical frequency is in the microwave range – the kind of waves from your microwave oven. (And 1% of the noise on a classical TV comes from the CMB.) The object – the whole Universe – used to be much hotter but it has calmed down as it expanded, along with all the wavelengths.

The Universe was in the state of (near) thermal equilibrium throughout much of its early history. Up to the year 380,000 after the cosmic Christ (the Big Bang: what the cosmic non-Christians did with their Christ at that moment was so stunning that it still leaves cosmologists speechless) or so, the temperature was so high that the atoms were constantly ionized.

Only when the temperature gradually dropped beneath a critical temperature of atomic physics did it become a good idea for electrons to sit down in the orbits and help to create the atoms. Unlike the free particles in the plasma, atoms are electrically neutral, and they therefore interact with the electromagnetic field much more weakly (at least with the modes of the electromagnetic field that have low enough frequencies).

OK, around 380,000 AD, the Universe became almost transparent for electromagnetic waves and light. The light – that was in equilibrium at that time – started to propagate freely. Its spectrum was the black-body curve and the only thing that has changed since that time was the simple cooling i.e. reduction of the frequency (by a factor) and the reduction of the intensity (by a similarly simple factor).



The CMB is the most accurate natural thermal black body radiation (the best Planck curve) we know in Nature. However, when we look at the CMB radiation more carefully, we see that the temperature isn't quite constant. It varies by 0.001% or 0.01% in different directions:\[

T(\theta,\phi) = 2.725\,{\rm K} + \Delta T (\theta,\phi)

\] The function \(\Delta T\) arises from some direction-dependent "delay" of the arrival of business-as-usual after the inflationary era. If the inflaton stabilized a bit later, we get a slightly higher (or lower?) temperature in that direction – which was also associated with a little bit different energy density in that region (region in some direction away from us, at the right distance so that the light it sent at 380,000 AD just hit our telescopes today).

The function \(\Delta T\) depends on the spherical angles and may be expanded in the spherical harmonics. To study the magnitude of the temperature fluctuations, you want to measure things like \[

\sum_{m=-\ell}^\ell\Delta T_{\ell, m} \Delta T_{\ell', m'}

\] The different spherical harmonic coefficients \(\Delta T_{\ell m}\) are basically uncorrelated with one another, so you expect to get close to zero, up to noise, unless \(\ell=\ell'\) and \(m=-m'\). In that case, you get a nonzero result and it's a function of \(\ell\) that you know as the "CMB power spectrum".
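To make that concrete, here is a minimal toy sketch of the estimator just described; it is not code from any of the actual analyses. The \(1/\ell^2\) input spectrum and the real-valued coefficients are simplifying assumptions.

```python
import numpy as np

# A minimal toy sketch: draw Gaussian "spherical harmonic coefficients" for each
# multipole ell with an assumed 1/ell^2 input spectrum, then estimate C_ell by
# averaging the squared coefficients over m. Real a_{ell m} are complex and come
# from a sky map; the real-valued toy coefficients below are a simplification.

rng = np.random.default_rng(42)
ell_max = 32
cl_input = 1.0 / np.arange(1, ell_max + 1) ** 2   # assumed toy spectrum

cl_estimated = []
for ell, cl in zip(range(1, ell_max + 1), cl_input):
    alm = rng.normal(0.0, np.sqrt(cl), size=2 * ell + 1)   # 2*ell + 1 modes per ell
    cl_estimated.append(np.mean(alm ** 2))                 # the power-spectrum estimator

print(np.round(cl_estimated[:5], 4))
print(np.round(cl_input[:5], 4))
```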



I wrote \(\Delta T\) as a function of the angles or the quantum numbers \(\ell,m\) but in the early cosmology, it's more natural to derive this \(\Delta T\) from the inflaton field and appreciate that this field is a function of \(\vec k\), the momentum 3-vector. By looking at the \(\Delta T\), we may only determine the dependence on two variables in a slice, not three.

At any rate, the correlation functions\[

\langle \Delta T (\vec k_1) \Delta T (\vec k_2) \Delta T(\vec k_3) \rangle

\] averaged over all directions etc. seem to be zero according to our best observations so far. No non-Gaussianities have been observed. Again, why non-Gaussianities? Because the probability density for the \(\Delta T\) function to be something is given by the Ansatz\[

\rho[\Delta T(\theta,\phi)] = \exp \left( - \Delta T\cdot M \cdot \Delta T \right)

\] where \(M\) is a "matrix" that depends on two continuous indices – that take values on the two-sphere – and the dot product involves an integral over the two-sphere instead of a discrete summation over the indices. Fine. You see that probability density functional generalizes the function \(\exp(-x^2)\), a favorite function of Carl Friedrich Gauß, which is why this Ansatz is referred to as the Gaussian one.

The probability distribution is mathematically analogous to the wave function or functional of the multi-dimensional or infinite-dimensional harmonic oscillator or the wave functional for the ground state of a free (non-interacting, quadratic) quantum field theory (which is an infinite-dimensional harmonic oscillator, anyway). Or the integrand of the path integral in a free quantum field theory.

And this mathematical analogy may be exploited to calculate lots of things. In fact, it's not just a mathematical analogy. Within the inflationary framework, the \(n\)-point functions calculated from the CMB temperature are \(n\)-point functions of the inflaton field in a quantum field theory.

The \(n\) in the \(n\)-point function counts how many points on the two-sphere, or how many three-vectors \(\vec k\), the correlation function depends on. The correlation functions in QFT may be computed using the Feynman diagrams. In free QFTs, you have no vertices and just connect the \(n\) external points pairwise by propagators. It's clear that in a free QFT, the 3-point functions vanish. All the odd-point functions vanish, in fact. And the 4-point and other even higher-point functions may be computed by Wick's theorem – the summation over the different ways of pairing the points, with one propagator per pair.
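A zero-dimensional check of those last two statements (the width below is arbitrary): for a Gaussian variable the odd moments vanish and the fourth moment equals three times the square of the second, the simplest instance of Wick's pairing.

```python
import numpy as np

# For a Gaussian variable, odd moments vanish and <x^4> = 3 <x^2>^2,
# the simplest instance of Wick's theorem (3 = number of ways to pair four points).

rng = np.random.default_rng(0)
sigma = 1.7                                    # arbitrary width
x = rng.normal(0.0, sigma, size=5_000_000)

print("<x^3>      ", np.mean(x ** 3))          # consistent with zero
print("<x^4>      ", np.mean(x ** 4))          # close to 3 * sigma^4
print("3 <x^2>^2  ", 3 * np.mean(x ** 2) ** 2)
```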

Back to 2015

No non-Gaussianities have been seen so far – all observations are compatible with the assumption that the probability density functional for \(\Delta T\) has the simple Gaussian form, a straightforward infinite-dimensional generalization of the normal distribution \(\exp(-x^2/2\sigma^2)\). However, cosmologists and cosmoparticle physicists have dreamed about the possible discovery of non-Gaussianities and what it could teach us.

It could be a signal of some inflaton (cubic or more complex) self-interactions, new particles, new effects of many kinds. But which of them? Almost all physicists before them merely hoped to see "one new physical effect around the corner" stored in the first non-Gaussianities that someone may discover.

Only Nima and Juan started to think big. Even though no one has seen any non-Gaussianity yet, they are already establishing a new computational industry to get tons of detailed information from lots of numbers describing the non-Gaussianity that will be observed sometime in the future. They don't want to discover just "one" new effect that modestly generalizes inflation by one step, like most other model builders.

They ambitiously intend to extract all the information about the particle spectrum and particle interactions (including all hypothetical new particle species) from the correlation functions of \(\Delta T\) and its detailed non-Gaussianities once they become available. Their theoretical calculations were the hardest step, of course. The other steps are easy. Once Yuri Milner finds the extraterrestrial aliens, he, Nima, and Juan will convince them to fund a project to measure the non-Gaussianities really accurately, assuming that the ETs are even richer than the Chinese.

OK, once it's done, you will have functions like\[

\langle \Delta T(\vec k_1) \,\Delta T (\vec k_2) \,\Delta T(\vec k_3) \rangle

\] By the translational symmetry (or momentum conservation), this three-point function is only nonzero for \[

\vec k_1+\vec k_2+ \vec k_3 = 0

\] which holds if and only if the three vectors can be arranged as the sides of a triangle (oriented, in a closed loop). The three-point functions seem to be zero according to the observations so far. But once they are seen to be nonzero, the value may be theoretically calculated as the effect of extra particle species (or the inflaton itself, if it is self-interacting).

A new field of spin \(s\) and mass \(m\) will contribute a function of \(\vec k_1, \vec k_2,\vec k_3\) to the three-point function – a function of the size and shape of the triangle – whose dependence on the shape stores the information about \(s\) and \(m\). When you focus on triangles that are very "thin", they argue and show, the mass of the particle (naturally expressed in the units of the Hubble radius, if you wish) is stored in the exponent of a power law that says how much the correlation function drops (or increases?) when the triangle becomes even thinner.
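As a purely schematic illustration of "the mass is stored in the exponent of a power law": if the squeezed-limit signal really behaves like a power of the thinness ratio, the exponent can be read off from a log-log fit. The exponent 1.5, the noise level and the functional form below are my own placeholders, not numbers from the Arkani-Hamed–Maldacena paper.

```python
import numpy as np

# Schematic only: suppose the squeezed-limit signal behaves like a power law
# B(r) ~ r**alpha in the thinness ratio r = k_thin / k_other. Then the exponent
# (which, per the paper, carries the mass information) follows from a log-log fit.

rng = np.random.default_rng(1)
alpha_true = 1.5                                     # assumed exponent
r = np.logspace(-3, -1, 40)                          # increasingly "thin" triangles
signal = r ** alpha_true * (1.0 + 0.05 * rng.normal(size=r.size))

alpha_fit, _ = np.polyfit(np.log(r), np.log(signal), 1)
print("recovered exponent:", round(float(alpha_fit), 3))   # ~ 1.5
```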

Some dependence on the spin \(s\) is imprinted to the dependence on some angle defining the triangle.

And all the new particles' contributions add up. In fact, they "interfere" with each other and the relative phase has observable implications, too.

It's a big new calculational framework, basically mimicking the map between "Lagrangians of a QFT" and "its \(n\)-point functions" in a different context. They look at three-point functions as well as four-point functions. The contributions to these correlation functions seem to resemble correlation functions we know from the world sheet of string theory.

And they also show how these expressions have to simplify when the system is conformally (or slightly broken conformally or de Sitter) symmetric. Theirs is a very sophisticated toolkit that may serve as a dictionary between the patterns in the CMB and the particle spectrum and interactions near the inflationary Hubble scale.

Testability

I was encouraged to write this blog post by this text in the Symmetry Magazine,
Looking for strings inside inflation
Troy Rummler wrote about a very interesting topic and he has included some useful and poetic remarks. For example, Edward Witten called Juan's and Nima's work "the most innovative one" he heard about at Strings 2015. Juan's slides are here and the 29-minute YouTube talk is here. And Witten has also said that science doesn't repeat itself but it "rhymes" because Nature's clever tricks are recycled at many levels.

Well, I still feel some dissatisfaction with that article.

First, it doesn't really make it clear that Arkani-Hamed, Maldacena, and Witten are not just three random physicists, or would-be physicists, of the kind who serve as the heroes of most of the hype in the popular science news outlets. All of them are undoubtedly among the top ten physicists who live on Earth right now.

Second, I just hate this usual post-2006 framing of the story in terms of the slogan that "string theory would be nearly untestable which is why all the theoretical physicists have to work hard on doable tests of string theory".

What's wrong with that slogan in the present context?
  1. String theory is testable in principle, it has been known to be testable for decades, and that's what matters for its being a 100% sensible topic of deep scientific research.
  2. String theory seems hard to test by realistic experiments that will be performed within the next several years, and almost all sane people have thought so ever since they started to work on strings.
  3. The work by Arkani-Hamed and Maldacena hasn't changed that: it will surely take a lot of time to observe non-Gaussianities and observe them accurately enough for their dictionary and technology to become truly relevant. So even though theirs is a method to look into esoteric short-distance physics via ordinary telescopes, it's still a very futuristic project.
  4. The Nima-Juan work doesn't depend on detailed features of string theory much.
It is meant to recover the spectrum and interactions of an effective field theory at the Hubble scale, whether this effective field theory is an approximation to string theory or not. In fact, the term "string" appears in one paragraph of their paper only (in the introduction).

The paragraph talks about some characteristic particles predicted by string theory whose discovery (through the CMB) could "almost settle" string theory. For example, they believe that a weakly coupled (but not decoupled) spin-4 particle would make string theory unavoidable because non-string theories are incompatible with weakly coupled particles of spin \(s\gt 2\). This is a part of the lore, somewhat ambitious lore. I think it's morally correct but it only applies to "elementary" particles and the definition of "elementary" is guaranteed to become problematic as we approach the Planck scale. For example, the lightest black hole microstates – the heavier cousins of elementary particles but with Planckian masses – are guaranteed to be "in between" composite and elementary objects. Quantum gravity provides us with "bootstrap" constraints that basically say that the high-mass behavior of the spectrum must be a reshuffling of the low-mass spectrum (UV-IR correspondence, something that is seen both in perturbative string theory as well as the quantum physics of black holes).

The scale of inflation is almost certainly "at least several orders of magnitude" beneath the Planck scale, so this problem may be absent in their picture. But maybe it's not absent. Theorists want to be sure that they have the right wisdom about all these things – but truth be told, we haven't seen a spin-4 particle in the CMB yet. ;-)

It's a very interesting piece of work that is almost guaranteed to remain in the domain of theorists for a very long time. And it's unfortunate that the media – including media published by professional institutions such as Fermilab and SLAC – keep on repeating this ideology that the theorists are "obliged" to work on practical tests of theories and that they are surely doing so. They are not "obliged" and they are mostly not doing these things.

The Planckian physics has always seemed to be far from practically doable experiments. The Juan-Nima paper is an example of the efforts that have the chance to reduce this distance. But I think that they would agree that this distance remains extremely large and the small chance that the distance will shrink down to zero isn't the only and necessary motivation of their research. Theorists just want to know – they are immensely curious about – the relationships between pairs of groups of ideas and data even if both sides of the link remain unobservable in practice!



The most likely shape of a newborn galaxy, extracted from the spectrum of our Calabi-Yau compactification through the Nima-Juan algorithm

I am a bit confused about the actual chances that a sufficient number of non-Gaussianities and their patterns may ever be extracted from the CMB data. The low enough \(\ell\) modes of the CMB simply seem to be Gaussian and we won't get any new numbers or new patterns stored in them, will we? The cosmic variance – the unavoidable noise resulting from the "finiteness of the two-sphere and/or the visible Universe", i.e. from the finite number of the relevant spherical harmonics – seems to constrain the accuracy of the data we may extract "permanently". So maybe such patterns could be encoded in the very high values of \(\ell\), i.e. small angular distances on the sky?
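For orientation, the cosmic-variance floor mentioned above is easy to quantify: with only \(2\ell+1\) coefficients per multipole, the relative uncertainty on the power spectrum at multipole \(\ell\) cannot drop below \(\sqrt{2/(2\ell+1)}\) (a standard result, quoted here only as context).

```python
import numpy as np

# The cosmic-variance floor in numbers: with only (2*ell + 1) independent
# coefficients per multipole, the relative uncertainty on C_ell can never drop
# below sqrt(2 / (2*ell + 1)), however good the instrument is.

for ell in (2, 10, 100, 1000):
    print(f"ell = {ell:4d}   minimal relative error = {np.sqrt(2.0 / (2 * ell + 1)):.3f}")
```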

For example, there could be patterns in the shape of the galaxies (inside the galaxies), and not just the broad intergalactic space. For example, if it turned out that the most likely shape of the newborn galaxy is given by the blue picture above (one gets the spiraling mess resembling the Milky Way once the shape evolves for a long enough time), it could prove that God and His preferred compactification of string/M-theory is neither Argentinian nor Persian. I am not quite sure whether such a discovery would please Witten, however. ;-)

by Luboš Motl (noreply@blogger.com) at August 28, 2015 02:24 PM

Quantum Diaries

Double time

In particle physics, we’re often looking for very rare phenomena, which are highly unlikely to happen in any given particle interaction. Thus, at the LHC, we want to have the greatest possible proton collision rates; the more collisions, the greater the chance that something unusual will actually happen. What are the tools that we have to increase collision rates?

Remember that the proton beams are “bunched” — there isn’t a continuous current of protons in a beam, but a series of smaller bunches of protons, each only a few centimeters long, with gaps of many centimeters between each bunch.  The beams are then timed so that bunches from each beam pass through each other (“cross”) inside one of the big detectors.  A given bunch can have about 10^11 protons in it, and when two bunches cross, perhaps tens of the protons in each bunch — a tiny fraction! — will interact.  This bunching is actually quite important for the operation of the detectors — we can know when bunches are crossing, and thus when collisions happen, and then we know when the detectors should really be “on” to record the data.

For a given total collision rate, you could imagine two ways to run the machine: have N bunches per beam, each with M protons, or 2N bunches per beam, each with M/sqrt(2) protons.  (The number of collisions per bunch crossing scales with the square of the bunch population, so twice as many crossings with 1/sqrt(2) as many protons per bunch give the same total rate.)  The more bunches in the beam, the more closely spaced they would have to be, but that can be done.  From the perspective of the detectors, the second scenario is much preferred.  That’s because you get fewer proton collisions per bunch crossing, and thus fewer particles streaming through the detectors.  The collisions are much easier to interpret if you have fewer collisions per crossing; among other things, you need less computer processing time to reconstruct each event, and you will have fewer mistakes in the event reconstruction because there aren’t so many particles all on top of each other.
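A back-of-the-envelope sketch of that comparison, using the scaling stated above; the bunch count, bunch population and proportionality constant are illustrative guesses, not official LHC parameters.

```python
import math

# The only physics input is the statement that the number of collisions per
# crossing scales with the square of the bunch population; the constant k and
# the Run-1-like numbers are made up for illustration.

def collisions_per_crossing(protons_per_bunch, k=1e-21):
    return k * protons_per_bunch ** 2

N, M = 1380, 1.6e11   # assumed bunch count per beam and protons per bunch
for name, (n_bunches, per_bunch) in {
    "N bunches of M protons  ": (N, M),
    "2N bunches of M/sqrt(2) ": (2 * N, M / math.sqrt(2)),
}.items():
    mu = collisions_per_crossing(per_bunch)
    print(f"{name} {mu:5.1f} collisions/crossing, relative total rate {n_bunches * mu:8.0f}")
```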

In the previous LHC run (2010-12), the accelerator had “50 ns spacing” between proton bunches, i.e. bunch crossings took place every 50 ns.  But over the past few weeks, the LHC has been working on running with “25 ns spacing,” which would allow the beam to be segmented into twice as many bunches, with fewer protons per bunch.  It’s a new operational mode for the machine, and thus some amount of commissioning and tuning and so forth are required.  A particular concern is “electron cloud” effects due to stray particles in the beampipe striking the walls and ejecting more particles, which is a larger effect with smaller bunch spacing.  But from where I sit as one of the experimenters, it looks like good progress has been made so far, and as we go through the rest of this year and into next year, 25 ns spacing should be the default mode of operation.  Stay tuned for what physics we’re going to be learning from all of this!

by Ken Bloom at August 28, 2015 03:32 AM

August 27, 2015

Emily Lakdawalla - The Planetary Society Blog

Dropping Orion in the Desert: NASA Completes Key Parachute Test
NASA’s Orion spacecraft completed a key parachute test Aug. 26 at the U.S. Army Yuma Proving Ground in Yuma, Arizona.

August 27, 2015 06:05 PM

Symmetrybreaking - Fermilab/SLAC

Looking for strings inside inflation

Theorists from the Institute for Advanced Study have proposed a way forward in the quest to test string theory.

Two theorists recently proposed a way to find evidence for an idea famous for being untestable: string theory. It involves looking for particles that were around 14 billion years ago, when a very tiny universe hit a growth spurt that used 15 billion times more energy than a collision in the Large Hadron Collider.

Scientists can’t crank the LHC up that high, not even close. But they could possibly observe evidence of these particles through cosmological studies, with the right technological advances.

Unknown particles

During inflation—the flash of hyperexpansion that happened 10^-33 seconds after the big bang—particles were colliding with astronomical power. We see remnants of that time in tiny fluctuations in the haze of leftover energy called the cosmic microwave background.

Scientists might be able to find remnants of any prehistoric particles that were around during that time as well.

“If new particles existed during inflation, they can imprint a signature on the primordial fluctuations, which can be seen through specific patterns,” says theorist Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey.

Maldacena and his IAS collaborator, theorist Nima Arkani-Hamed, have used quantum field theory calculations to figure out what these patterns might look like. The pair presented their findings at an annual string theory conference held this year in Bengaluru, India, in June.

The probable, impossible string

String theory is frequently summed up by its basic tenet: that the fundamental units of matter are not particles. They are one-dimensional, vibrating strings of energy.

The theory’s purpose is to bridge a mathematical conflict between quantum mechanics and Einstein’s theory of general relativity. Inside a black hole, for example, the two theories give irreconcilable answers. Any attempt to adjust one theory to fit the other causes the whole delicate system to collapse. Instead of trying to do this, string theory creates a new mathematical framework in which both theories are natural results. Out of this framework emerges an astonishingly elegant way to unify the forces of nature, along with a correct qualitative description of all known elementary particles.

As a system of mathematics, string theory makes a tremendous number of predictions. Testable predictions? None so far.

Strings are thought to be the smallest objects in the universe, and computing their effects on the relatively enormous scales of particle physics experiments is no easy task. String theorists predict that new particles exist, but they cannot compute their masses.

To exacerbate the problem, string theory can describe a variety of universes that differ in their numbers of forces, particles or dimensions. Predictions at accessible energies depend on these unknown or very difficult details. No experiment can definitively prove a theory that offers so many alternative versions of reality.

Putting string theory to the test

But scientists are working out ways that experiments could at least begin to test parts of string theory. One prediction that string theory makes is the existence of particles with a unique property: a spin of greater than two.

Spin is a property of fundamental particles. Particles that don’t spin decay in symmetric patterns. Particles that do spin decay in asymmetric patterns, and the greater the spin, the more complex those patterns get. Highly complex decay patterns from collisions between these particles would have left signature impressions on the universe as it expanded and cooled.

Scientists could find the patterns left by particles with spin greater than 2 in subtle variations in the distribution of galaxies or in the cosmic microwave background, according to Maldacena and Arkani-Hamed. Observational cosmologists would have to measure the primordial fluctuations over a wide range of length scales to be able to see these small deviations.

The IAS theorists calculated what those measurements would theoretically be if these massive, high-spin particles existed. Such a particle would be much more massive than anything scientists could find at the LHC.

A challenging proposition

Cosmologists are already studying patterns in the cosmic microwave background. Experiments such as Planck, BICEP and POLARBEAR are searching for polarization, which would be evidence that a nonrandom force acted on it. If they rewind the effects of time and mathematically undo all other forces that have interacted with this energy, they hope that the pattern that remains will match the predicted twists imprinted by inflation.

The patterns proposed by Maldacena and Arkani-Hamed are much subtler and much more susceptible to interference. So any expectation of experimentally finding such signals is still a long way off.

But this research could point us toward someday finding such signatures and illuminating our understanding of particles that have perhaps left their mark on the entire universe.

The value of strings

Whether or not anyone can prove that the world is made of strings, people have proven that the mathematics of string theory can be applied to other fields.

In 2009, researchers discovered that string theory math could be applied to conventional problems in condensed matter physics. Since then researchers have been applying string theory to study superconductors.

Fellow IAS theorist Edward Witten, who received the Fields Medal in 1990 for his mathematical contributions to quantum field theory and supersymmetry, says Maldacena and Arkani-Hamed’s presentation was among the most innovative work he saw at the Strings ‘15 conference.

Witten and others believe that such successes in other fields indicate that string theory actually underlies all other theories at some deeper level.

"Physics—like history—does not precisely repeat itself,” Witten says. However, with similar structures appearing at different scales of lengths and energies, “it does rhyme.”

 


 

by Troy Rummler at August 27, 2015 05:52 PM

Lubos Motl - string vacua and pheno

LHCb: 2-sigma violation of lepton universality
Since the end of June, I have mentioned the ("smaller") LHCb collaboration at the LHC twice. They organize their own Kaggle contest and they claim to have discovered a pentaquark.

In its new article, Evidence suggests subatomic particles could defy the standard model, Phys.ORG just made it clear that I largely missed a hep-ex paper from the end of June,
Measurement of the ratio of branching fractions \(\mathcal{B}(\overline{B}^0 \to D^{*+}\tau^{-}\overline{\nu}_\tau)/\mathcal{B}(\overline{B}^0 \to D^{*+}\mu^{-}\overline{\nu}_\mu)\)
by Brian Hamilton and about 700 co-authors. The paper will appear in Physical Review Letters in a week – which is why it made it to Phys.ORG now. An early June TRF blog post could have been about the same thing but the details weren't available.




What is going on? They measured the number of decays of the \(\bar B^0\) mesons produced within their detector in 2011 and 2012 that have another meson, \(D^{*+}\), in the final state, along with a negatively charged lepton and the corresponding antineutrino.




Well, the decay obviously needs the cubic vertex with the \(W^\pm\)-boson – i.e. the charged current – and this current should contain the term "creating the muon and its antineutrino" and the term "creating the tau and its antineutrino" with equal coefficients. There are no Higgs couplings involved in the process, so the different generations of the leptons behave "the same": they transform as the same kind of doublets whose components the \(W^\pm\)-bosons mix with each other.

The decay rates with all the \(\mu\) replaced by \(\tau\) should be "basically" the same. Well, because the masses and therefore the kinematics are different, the Standard Model predicts the ratio of the two decay rates to be\[

{\mathcal R}(D^*) = \frac{\mathcal{B}(\overline{B}^0 \to D^{*+}\tau^{-}\overline \nu_\tau)}{
\mathcal{B}(\overline{B}^0 \to D^{*+}\mu^{-}\overline \nu_\mu)} = 0.252\pm 0.003

\] The error of the theoretical prediction is just 1 percent or so. This is the usual accuracy that the Standard Model allows us, at least when the process doesn't depend on the messy features of the strong force too much. This decay ultimately depends on the weak interactions – those with the \(W^\pm\)-bosons as the intermediate particle – which is why the accuracy is so good.

Well, the LHCb folks measured that quantity and got the following value of the ratio\[

{\mathcal R}(D^*) = 0.336 \pm 0.027 \text{ (stat) } \pm 0.030 \text{ (syst) }

\] which is 33% higher and, using the "Pythagorean" total error \(0.040\) combining the statistical and systematic one, it is about 2.1 standard deviations higher than the (accurately) predicted value.
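For the record, the quick arithmetic behind the "2.1 standard deviations" can be reproduced from the numbers quoted above:

```python
import math

# Combine the statistical and systematic errors in quadrature, fold in the small
# theory error, and express the excess over the Standard Model prediction in sigmas.

r_sm, err_sm = 0.252, 0.003          # Standard Model prediction (quoted above)
r_obs = 0.336
err_stat, err_syst = 0.027, 0.030    # LHCb errors (quoted above)

err_exp = math.hypot(err_stat, err_syst)        # the "Pythagorean" total ~ 0.040
err_total = math.hypot(err_exp, err_sm)
print("combined experimental error:", round(err_exp, 3))
print("significance:", round((r_obs - r_sm) / err_total, 2), "sigma")   # ~ 2.1
```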

As always, 2.1 sigma is no discovery to be carved in stone (even though it's formally or naively some "96% certainty of a new effect") but it is an interesting deviation, especially because there are other reasons to think that the "lepton universality" could fail. What am I talking about?

In the "lepton universality", the coupling of the \(W^\pm\)-boson to the charged_lepton-plus-neutrino pair is proportional to a \(3\times 3\) unit matrix in the space of the three generations. The unit matrix and its multiples are nice and simple.

Well, there are two related ways in which a generic matrix may differ from a multiple of the unit matrix:
  1. its diagonal elements are not equal to each other
  2. the off-diagonal elements are nonzero
In a particular basis, these are two different "failures" of a matrix. We describe them (or the physical effects that they cause if the matrix is used for the charged currents) as "violations of lepton universality" and "flavor violations", respectively. But it's obvious that in a general basis, you can't distinguish them. A diagonal matrix with different diagonal entries looks like a non-diagonal matrix in other bases.
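A three-line numerical illustration of that last point (the coupling values and the rotation angle are arbitrary): a diagonal matrix with unequal entries picks up off-diagonal, i.e. flavor-violating, entries after a change of basis.

```python
import numpy as np

# A diagonal coupling matrix with unequal entries (lepton non-universality)
# develops off-diagonal entries (flavor violation) as soon as the basis is rotated.

g = np.diag([1.0, 1.0, 1.2])       # tau coupling 20% larger, purely diagonal

theta = 0.3                         # an arbitrary mu-tau rotation angle
c, s = np.cos(theta), np.sin(theta)
R = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,  -s],
              [0.0,   s,   c]])

print(np.round(R @ g @ R.T, 3))     # nonzero mu-tau entries appear
```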

So the violation of the "lepton universality" discussed in this LHCb paper and this blog post (different diagonal entries for the muon and tau) is "fundamentally" a symptom of the same effect as the "flavor violation" (non-zero off-diagonal entries). And the number of these flavor-violating anomalies has grown pretty large! Most interestingly, CMS saw a 2.4 excess in decays of the Higgs to \(\mu\) and \(\tau\) which seem to represent about 1% (plus minus 0.4% if you wish) of the decays even though such flavor-violating decays are prohibited.

LHCb has announced several other minor flavor-violating results but because they depend on some mesons, they are less catchy for an elementary particle physicist.

The signs of the flavor violation may be strengthening. If a huge, flavor-violating deviation from the Standard Model is seen and some discoveries are made, we will be able to say that "we saw that paradigm shift coming". Right now, though, we must admit that this would-be discovery may also go away if Nature is less generous. ;-)

by Luboš Motl (noreply@blogger.com) at August 27, 2015 04:51 PM

Quantum Diaries

The Tesla experiment
CMS scientist Bo Jayatilaka assumes the driver seat in a Tesla Model S P85D as part of a two-day road trip experiment. Photo: Sam Paakkonen

On May 31, about 50 miles from the Canadian border, an electric car struggled up steep hills, driving along at 40 miles per hour. The sun was coming up and rain was coming down. Things were looking bleak. The car, which usually plotted the route to the nearest charging station, refused to give directions.

“It didn’t even say turn around and go back,” said Bo Jayatilaka, who was driving the car. “It gave up and said, ‘You’re not going to make it.’ The plot disappeared.”

Rewind to a few weeks earlier: Tom Rammer, a Chicago attorney, had just won two days with a Tesla at a silent cell phone auction for the American Cancer Society. He recruited Mike Kirby, a Fermilab physicist, to figure out how to get the most out of those 48 hours.

Rammer and Kirby agreed that the answer was a road trip. Their initial plan was a one-way trip to New Orleans. Another involved driving to Phoenix and crossing the border to Mexico for a concert. Tesla politely vetoed these options. Ultimately, Rammer and Kirby decided on an 867-mile drive from Chicago to Boston. Their goal was to pick up Jayatilaka, a physicist working on the CMS experiment, and bring him back to Fermilab. To document their antics, the group hired a film crew of six to follow them on their wild voyage from the Windy City to Beantown.

Jayatilaka joked that he didn’t trust Rammer and Kirby to arrange the trip on their own, so they also drafted Jen Raaf, a Fermilab physicist on the MicroBooNE experiment, whose organizational skills would balance their otherwise chaotic approach.

“There was no preparing. Every time I brought it up Tom said, ‘Eh, it’ll get done,’” Raaf laughed. Jayatilaka added that shortly after Raaf came on board they started seeing spreadsheets sent around and itineraries being put together.

“I had also made contingency plans in case we couldn’t make it to Boston,” Raaf said, with a hint of foreshadowing.

The Tesla plots the return trip to Chicago, locating the nearest charging station. Photo: Sam Paakkonen

On May 29, Rammer, Kirby and Raaf picked up the Tesla and embarked on their journey. The car’s name was Barbara. She was a black Model S P85D, top of the line, and she could go from zero to 60 in 3.2 seconds.

“I think the physics of it is really interesting,” Jayatilaka said. “The reason it’s so fast is that the motor is directly attached to wheels. With cars we normally drive there is a very complicated mechanical apparatus that converts small explosions into something that turns far away from where the explosions are. And this thing just goes. You press the button and it goes.”

The trip started out on flat terrain, making for smooth, easy driving. But eventually the group hit mountains, which ate up Barbara’s battery capacity. In the spirit of science, these physicists pushed the boundaries of what they knew, testing Barbara’s limits as they braved undulating roads, encounters with speed-hungry Porsches and Canadian border patrol.

“If you have something and it’s automated, you need to know the limitations of that algorithm. The computer does a great job of calculating the range for a given charge, but we do much better knowing the terrain and what’s going to happen. We need to figure out what we are better at and what the algorithm is better at,” Kirby said. “The trip was about learning the car. The algorithm is going to get better because of all of the experiences of all of the drivers.”

The result of the experiment was that Barbara didn’t make it all the way to Boston. As they approached the east coast, it became clear to Kirby and Raaf that they wouldn’t have made it back in time to drop off the car. Although Rammer was determined to see the trip through to the end, he eventually gave in somewhere in New Jersey, and they decided to cut the trip short. Jayatilaka met the group in a parking lot in Springfield, Massachusetts, and they plotted the quickest route back to Chicago.

Flash forward to that bleak moment on May 31. After crossing the border, just as things were looking hopeless, Barbara’s systems suddenly came back to life. She directed the group to a charging station in chilly Kingston, Ontario. Around 6:30 in the morning, they rolled into the station. The battery level: zero percent. After a long charge and another full day of driving, they pulled into the Tesla dealership in Chicago around 8:55 p.m., minutes before their time with Barbara was up.

“The car was just alien technology to us when we started,” Jayatilaka said. “It was completely unfamiliar. We all came away from it thinking that we could have done this road trip so much better with those two days of experience. We felt like we actually understood.”

Ali Sundermier

by Fermilab at August 27, 2015 01:40 PM

Tommaso Dorigo - Scientificblogging

Thou Shalt Have One Higgs - $100 Bet Won !
One of the important things in life is to have a job you enjoy and which is a motivation for waking up in the morning. I can say I am lucky enough to be in that situation. Besides providing me with endless entertainment through the large dataset I enjoy analyzing, and the constant challenge to find new ways and ideas to extract more information from data, my job also gives me the opportunity to gamble - and win money, occasionally.

read more

by Tommaso Dorigo at August 27, 2015 09:41 AM

arXiv blog

The 20 Most Infamous Cyberattacks of the 21st Century (Part II)

The number of cyberattacks is on the increase. Here is the second part of a list of the most egregious attacks of this millennium.

August 27, 2015 04:21 AM

astrobites - astro-ph reader's digest

From Large to Small: Astrophysical Signs of Dark Matter Particle Interactions

Title: Dark Matter Halos as Particle Colliders: A Unified Solution to Small-Scale Structure Puzzles from Dwarfs to Clusters
Authors: M. Kaplinghat, S. Tulin, H.-B. Yu
First Author’s Institution: Department of Physics and Astronomy, University of California, Irvine, CA

 

 

The very large helps us to learn about the very small, as anyone who’s stubbed a toe—rudely brought face to face with the everyday quantum reality of Pauli’s exclusion principle and the electrostatic repulsion of electrons—knows.  Astrophysics, the study of the largest things in existence, is no less immune to this marvelous fact. One particularly striking example is dark matter. It’s been a few decades since we realized that it exists, but we remain woefully unenlightened as to what this mysterious substance might be made of. Theories on its nature are legion—it’s hot! it’s cold! it’s warm! it’s sticky! it’s fuzzy! it’s charged! it’s atomic! it’s MACHO! it’s WIMP-y! it’s a combo of the above!

How are we to navigate and whittle down this veritable circus of dark matter particle theories? It turns out that an assumption about the nature of the subatomic dark matter particle can lead to observable effects on astrophysical scales.  The game of tracing from microphysics to astrophysics has identified a clear set of dark matter properties: it’s cold (thus its common appellation, “cold dark matter,” or CDM, for short), collisionless, stable (i.e. it doesn’t spontaneously decay), and neutrally charged (unlike protons and electrons). CDM’s been wildly successful at explaining many astrophysical observations, except for one—it fails to reproduce the small scale structure of the universe (that is, at galaxy cluster scales and smaller). Dark matter halos at such scales, for instance, are observed to have constant-density cores, while CDM predicts peaky centers.

What aspect of the dark matter particle might we have overlooked that can explain away the small scale problems of CDM?  One possibility is that dark matter is “sticky.” Sticky dark matter particles can collide with other dark matter particles, or are “self-interacting” (thus the model’s formal name, self-interacting dark matter, or SIDM for short). Collisions between dark matter particles can redistribute angular momentum in the centers of dense dark matter halos, pushing particles with little angular momentum in the centers of peaky dark matter halos outwards—producing cores.  If you know how sticky the dark matter is (quantitatively described by the dark matter particle’s self-interaction cross section, which gives the probability that two dark matter particles will collide) you can predict the sizes of these cores.

The authors of today’s paper derived the core sizes of observed dark matter halos ranging in mass from 10^9 to 10^15 solar masses—which translates to dwarf galaxies up through clusters of galaxies—and then inferred the self-interaction cross sections that the size of each halo’s core implied. This isn’t particularly new work, but it’s the first time that this has been done for an ensemble of dark matter halos.  Since halos with different masses have different characteristic velocities (i.e. velocity dispersions), this lets us measure whether dark matter is more or less sticky at different velocities.  Their range of halo masses allowed them to probe a velocity range from 20 km/s (in dwarf galaxies) to 2000 km/s (in galaxy clusters).

And what did they find? The cross section appears to have a velocity dependence, but a weak one. For the halos of dwarf galaxies, a cross section of about 1.9 cm^2/g is preferred, whereas for the largest halos, those of galaxy clusters, they find that a cross section that’s an order of magnitude smaller—about 0.1 cm^2/g—is preferred. There’s some scatter in the results, but the scatter can be accounted for by differences in how concentrated the dark matter in each halo is (which depends on how it formed).
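A rough way to quantify "a weak velocity dependence" is to fit a single power law through the two numbers just quoted; this is my own back-of-the-envelope interpolation, not the authors' dark-photon fit.

```python
import numpy as np

# A single power law through the two cross sections quoted above:
# sigma/m ~ 1.9 cm^2/g at v ~ 20 km/s and ~ 0.1 cm^2/g at v ~ 2000 km/s.

v = np.array([20.0, 2000.0])        # km/s
sigma_m = np.array([1.9, 0.1])      # cm^2/g

slope = np.log(sigma_m[1] / sigma_m[0]) / np.log(v[1] / v[0])
print("effective power-law index:", round(float(slope), 2))   # ~ -0.6, a weak dependence
```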

But that’s just the tip of the iceberg.  The velocity dependence can be used to back out even more details about the dark matter particle itself. To demonstrate this, the authors assume a simple dark matter model, the “dark photon” model, in which dark matter-dark matter interactions occur with the help of a second, comparatively light particle (the “mediator”). Under these assumptions, they predict that the dark matter particle has a mass of about 15 GeV and the mediator a mass of about 17 MeV.

These are exciting and illuminating results, but we are still a long way from our goal of identifying the dark matter particle.  The authors’ analysis did not include baryonic effects such as supernova feedback, which can also help produce cores (but may not be able to fully account for them), and better constraints on the self-interaction cross section are needed (based on merging galaxy clusters, for instance).  The astrophysical search for more details on the elusive dark matter particle continues!

 

 

 

Cover image:  The Bullet Cluster.  Overlaid on an HST image is a weak lensing mass map in blue and a map of the gas (as measured by x-ray emission by Chandra) in pink.  The clear separation between the mass and the gas was a smoking gun for the existence of dark matter. It’s also been intensely studied for signs of dark matter self-interactions.

Disclaimer:  I’ve collaborated with the first author of this paper, but chose to write on this paper because I thought it was cool, not as an advertisement!

by Stacy Kim at August 27, 2015 02:52 AM

August 26, 2015

Emily Lakdawalla - The Planetary Society Blog

Webcomic: Poetry in space
Take a delightful, pixelated journey with French artist Boulet as he explains his love for the "infinite void" of the "mathematical skies."

August 26, 2015 03:11 PM

Symmetrybreaking - Fermilab/SLAC

Scientists accelerate antimatter

Accelerating positrons with plasma is a step toward smaller, cheaper particle colliders.

A study led by researchers from SLAC National Accelerator Laboratory and the University of California, Los Angeles, has demonstrated a new, efficient way to accelerate positrons, the antimatter opposites of electrons. The method may help boost the energy and shrink the size of future linear particle colliders—powerful accelerators that could be used to unravel the properties of nature’s fundamental building blocks.

The scientists had previously shown that boosting the energy of charged particles by having them “surf” a wave of ionized gas, or plasma, works well for electrons. While this method by itself could lead to smaller accelerators, electrons are only half the equation for future colliders. Now the researchers have hit another milestone by applying the technique to positrons at SLAC’s Facility for Advanced Accelerator Experimental Tests, a US Department of Energy Office of Science user facility.

“Together with our previous achievement, the new study is a very important step toward making smaller, less expensive next-generation electron-positron colliders,” says SLAC’s Mark Hogan, co-author of the study published today in Nature. “FACET is the only place in the world where we can accelerate positrons and electrons with this method.”

SLAC Director Chi-Chang Kao says, “Our researchers have played an instrumental role in advancing the field of plasma-based accelerators since the 1990s. The recent results are a major accomplishment for the lab, which continues to take accelerator science and technology to the next level.”

Shrinking particle colliders

Researchers study matter’s fundamental components and the forces between them by smashing highly energetic particle beams into one another. Collisions between electrons and positrons are especially appealing, because unlike the protons being collided at CERN’s Large Hadron Collider – where the Higgs boson was discovered in 2012—these particles aren’t made of smaller constituent parts.

“These collisions are simpler and easier to study,” says SLAC’s Michael Peskin, a theoretical physicist not involved in the study. “Also, new, exotic particles would be produced at roughly the same rate as known particles; at the LHC they are a billion times more rare.”

However, current technology to build electron-positron colliders for next-generation experiments would require accelerators that are tens of kilometers long. Plasma wakefield acceleration is one way researchers hope to build shorter, more economical accelerators.

Previous work showed that the method works efficiently for electrons: When one of FACET’s tightly focused bundles of electrons enters an ionized gas, it creates a plasma “wake” that researchers use to accelerate a trailing second electron bunch.

Computer simulations of the interaction of electrons (left) and positrons (right) with a plasma.

Artwork by: W. An, UCLA

Creating a plasma wake for antimatter

For positrons—the other required particle ingredient for electron-positron colliders—plasma wakefield acceleration is much more challenging. In fact, many scientists believed that no matter where a trailing positron bunch was placed in a wake, it would lose its compact, focused shape or even slow down.

“Our key breakthrough was to find a new regime that lets us accelerate positrons in plasmas efficiently,” says study co-author Chandrashekhar Joshi from UCLA.

Instead of using two separate particle bunches—one to create a wake and the other to surf it—the team discovered that a single positron bunch can interact with the plasma in such a way that the front of it generates a wake that both accelerates and focuses its trailing end. This occurs after the positrons have traveled about four inches through the plasma.  

“In this stable state, about 1 billion positrons gained 5 billion electronvolts of energy over a short distance of only 1.3 meters,” says former SLAC researcher Sebastien Corde, the study’s first author, who is now at the Ecole Polytechnique in France. “They also did so very efficiently and uniformly, resulting in an accelerated bunch with a well-defined energy.”
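The quoted numbers translate into an accelerating gradient of a few GeV per meter; for comparison, conventional radio-frequency accelerators typically reach of order tens of MeV per meter (a rough, commonly quoted figure, not taken from the article).

```python
# The accelerating gradient implied by the numbers quoted above.

energy_gain_ev = 5e9      # 5 billion electronvolts (from the text)
distance_m = 1.3          # meters (from the text)

gradient_gev_per_m = energy_gain_ev / distance_m / 1e9
print(round(gradient_gev_per_m, 1), "GeV per meter")   # ~ 3.8 GeV/m
```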

Looking into the future

All of these properties are important qualities for particle beams in accelerators. In the next step, the team will look to further improve their experiment.

“We performed simulations to understand how the stable state was created,” says co-author Warren Mori of UCLA. “Based on this understanding, we can now use simulations to look for ways of exciting suitable wakes in an improved, more controlled way. This will lead to ideas for future experiments.”

This study underscores the critical importance of test facilities such as FACET, says Lia Merminga, associate laboratory director for accelerators at TRIUMF in Canada.

“Plasma wakefield acceleration of positrons has been a longstanding problem in this field,” she says. “Today's announcement is a breakthrough that offers a possible solution.”

Although plasma-based particle colliders will not be built in the near future, the method could be used to upgrade existing accelerators much sooner.

“It’s conceivable to boost the performance of linear accelerators by adding a very short plasma accelerator at the end,” Corde says. “This would multiply the accelerator’s energy without making the entire structure significantly longer.”

Additional contributors included researchers from the University of Oslo in Norway and Tsinghua University in China. The research was supported by the US Department of Energy, the National Science Foundation, the Research Council of Norway and the Thousand Young Talents Program of China.




This article is based on a SLAC press release.

 


August 26, 2015 01:00 PM

ZapperZ - Physics and Physicists

She's Still Radioactive!
She, as in Marie Curie.

This article examines what has happened to the personal effects of Marie Curie, the "Mother of Modern Physics".

Still, after more than 100 years, much of Curie's personal effects including her clothes, furniture, cookbooks, and laboratory notes remain contaminated by radiation, the Christian Science Monitor reports.

Regarded as national and scientific treasures, Curie's laboratory notebooks are stored in lead-lined boxes at France's national library in Paris.

While the library allows visitors to view Curie's manuscripts, all guests are expected to sign a liability waiver and wear protective gear as the items are contaminated with radium 226, which has a half-life of about 1,600 years, according to Christian Science Monitor.

What they didn't report – and this is where the devil-is-in-the-details part is missing – is what level of radioactivity is given off by these objects. You just don't want to sign something without knowing the level you will be exposed to. (By the way, if you work in the US or at a US National Lab, an RWP (radiation work permit) must be posted at the door, detailing the type of radiation and the level of radiation at a certain distance.)

I suspect that this level is just slightly above background, and that's why they are isolated, but not large enough for concern. Still, the nit-picker in me would like to know such details!
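The "still radioactive" part is no surprise given the numbers in the article: with a half-life of about 1,600 years, only a few percent of the radium-226 has decayed in the century or so since Curie handled these objects. A quick decay-law check (nothing specific to the actual notebooks):

```python
# Simple exponential-decay arithmetic.

half_life_years = 1600.0
elapsed_years = 100.0

remaining_fraction = 0.5 ** (elapsed_years / half_life_years)
print(f"{remaining_fraction:.3f} of the original radium-226 remains")   # ~ 0.96
```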

Zz.

by ZapperZ (noreply@blogger.com) at August 26, 2015 12:55 PM

astrobites - astro-ph reader's digest

SETI Near and Far – Searching for Alien Technology

PAPER 1

PAPER 2

 

Think like an Alien

Without a doubt, one of the most profound questions ever asked is whether there are other sentient, intelligent lifeforms in the Universe. Countless books, movies, TV shows, and radio broadcasts have fueled our imagination as to what intelligent alien life might look like, what technology they would harness, and what they would do when confronted by humanity. Pioneers such as Frank Drake and Carl Sagan transformed this quandary from the realm of the imagination to the realm of science with the foundation of the Search for Extraterrestrial Intelligence (SETI) Institute. The search for extraterrestrial intelligence goes far beyond listening for radio transmissions and sending out probes carrying golden disks inscribed with humanity’s autobiography. Some of the other ways astronomers have been attempting to quantify the amount of intelligent life in the Universe can be found in all these astrobites, and today’s post will be summarizing two recent additions to astro-ph that study how we might look for alien technology using the tools in our astrophysical arsenal. The aim of both these studies is to search for extraterrestrial civilizations that may have developed technologies and structures that are still out of our reach, and these technologies may have observable effects that we can see from Earth. This post provides a very brief overview of these studies, so check out the actual articles for a more in-depth and interesting read!

Sailing through the Solar System

Figure 1. Artist rendition of a light sail.

The future of space exploration via rocket propulsion faces a dilemma. To travel interplanetary distances in a reasonable amount of time we need to travel really fast, and to go really fast rockets need lots of fuel. However, lots of fuel means lots of weight, and lots of weight means it takes more fuel to accelerate. One popular idea for the future of space travel is the use of light sails (see figure 1), which would use radiation pressure to accelerate a spacecraft without the burden of exorbitant amounts of chemical fuel. Though the sail could reflect sunlight as a means of propulsion, beaming intense radiation from a planet to the light sail could provide more propulsion, especially at greater distances from the star (if the sail had perfect reflectivity and was located 1 AU away from a sun-like star, the solar radiation would only provide a force of about 10 micronewtons per square meter of the sail, which is about the force required to hold up a strand of hair against the acceleration of Earth’s gravity). Hopefully in the not-so-distant future, we will be able to use this technology for quick and efficient interplanetary travel. Intelligent life in our galaxy, however, may already be utilizing this means of transportation. But how would we be able to tell if someone out there is using something like this?
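The force quoted in the parentheses above is easy to check with textbook numbers (the solar constant at 1 AU and a perfectly reflecting sail are the assumptions here, not values from the paper):

```python
# Radiation-pressure force on a perfectly reflecting sail at 1 AU from a Sun-like star.

SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
C = 2.998e8               # speed of light, m/s

force_per_area = 2.0 * SOLAR_CONSTANT / C    # factor 2 for perfect reflection
print(round(force_per_area * 1e6, 1), "micronewtons per square meter")   # ~ 9 uN/m^2
```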

Figure 2. Diagram showing the likely leakage for a light sail system developed for Earth-Mars transit. The dashed cyan arrow shows the path of the light sail, and the beam profile is shaded in green. The inset shows the log of the intensity within the beam profile in the Fraunhofer regime. Figure 1 in paper 1.

The authors of paper 1 analyze this means of transportation and the accompanying electromagnetic signature we may be able to observe by studying a mock launch of a spacecraft from Earth to Mars. Without delving into too much detail about the optics of the beam, during part of the acceleration period of the spacecraft some of the beamed radiation will be subject to “leakage,” missing the sail and propagating out into space (figure 2). Since the two planets the ship is travelling between would lie on nearly the same orbital plane (like Earth and Mars), the radiation beam and subsequent leakage would be directed along the orbital plane as well. However, like a laser beam, the leakage would be concentrated in a very small angular area, and to have any chance of detecting the leakage from this mode of transportation we would need to be looking at exoplanetary systems that are edge-on as viewed from the Earth…exactly the kind of systems that are uncovered by transit exoplanet surveys like Kepler! Also, assuming an alien civilization is as concerned as we are about minimizing cost and maximizing efficiency, the beaming arrays would likely utilize microwave radiation. This would make the beam more easily distinguishable from the light of the host star and allow it to be detectable from distances on the order of 100 parsecs by SETI radio searches using telescopes such as the Parkes and Green Bank telescopes.

Nature’s Nuclear Reactors

Figure 3. Artist rendition of a Dyson sphere.

Though most SETI efforts are confined to our own galaxy, there are potential methods by which we can uncover an alien supercivilization in a galaxy far, far away. As an intelligent civilization grows in population and technological capabilities, it is assumed that its energy needs will increase exponentially. A popular concept in science fiction to satisfy this demand for energy is a Dyson sphere (see figure 3). These megastructures essentially act as giant solar panels that completely encapsulate a star, capturing most or all of the star’s energy and using it for the energy needs of an advanced civilization. To get an idea of how much energy this could provide, if we were able to capture all of the energy leaving the Sun with a 100% efficient Dyson sphere, it would give us enough energy to power 2 trillion Earths given our current energy consumption. If this much energy isn’t enough, alien super-civilizations could theoretically repeat this process for other stars in their galaxy. Paper 2 considers this type of super-civilization (known as a Kardashev Type III Civilization) and how we may be able to detect their presence in distant galaxies.

The key to detecting this incredibly advanced type of astroengineering is to use the Tully-Fisher relationship – an empirical relationship relating the luminosity of a spiral galaxy to the width of its emission lines (a gauge of how fast the galaxy is rotating). If an alien super-civilization were to harness the power of a substantial fraction of the stars in its galaxy, the galaxy would appear dimmer to a distant observer since a large portion of its starlight is being absorbed by the Dyson spheres. These galaxies would then appear to be distinct outliers in the Tully-Fisher relationship, since the construction of Dyson spheres would have little effect on the galaxy’s gravitational potential and rotational velocity, but would decrease its observable luminosity. The authors of this study looked at a large sample of spiral galaxies, and picked out the handful that were underluminous by 1.5 magnitudes (75% less luminous) compared to the Tully-Fisher relationship for further analysis (figure 4).
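For reference, the conversion behind "1.5 magnitudes (75% less luminous)" is just the standard magnitude definition:

```python
# A magnitude deficit dm corresponds to a luminosity ratio of 10**(-0.4 * dm).

delta_mag = 1.5
luminosity_ratio = 10 ** (-0.4 * delta_mag)
print(round(luminosity_ratio, 2))   # ~ 0.25, i.e. about 75% less luminous
```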

Figure 4. A Tully-Fisher diagram containing the sample of objects chosen in the study. The solid line indicates the Tully-Fisher relationship, with the y-axis as the I-band magnitude and the x-axis as the log line width. Numbered dots mark the 11 outliers more than 1.5 mag less luminous from the Tully-Fisher relationship, with blue, green, and red indicating classes of differing observational certainty (see paper 2 for more details). Figure 1 in paper 2.

To further gauge whether these candidates have sufficient evidence supporting large-scale astroengineering, the authors looked at their infrared emission. Dyson spheres would likely be efficient at absorbing optical and ultraviolet radiation, but would still need to radiate away excess heat in the infrared. In theory, if one of the candidate galaxies had low optical/ultraviolet luminosity but an excess in the infrared, it could lend more credence to the galaxy-wide Dyson sphere hypothesis. However, in reality, this becomes a highly non-trivial problem that depends on the types of stars associated with Dyson spheres, the temperature at which the spheres operate, and the dust content of the galaxy (see paper 2 for more details). Needless to say, better evidence of large-scale astroengineering in a distant galaxy would require a spiral galaxy with very well-measured parameters to be a strong outlier in the Tully-Fisher relationship. Though none of the candidates in this study showed clear signs of alien engineering, the authors were able to set a tentative upper limit of ~0.3% of disk galaxies harboring Kardashev Type III Civilizations. Though an extraterrestrial species this advanced is difficult to fathom, the Universe would be a very lonely place if humans were the only form of intelligent life, and this kind of imaginative exploration may one day tell us that we have company in the cosmos.

by Michael Zevin at August 26, 2015 02:01 AM

August 25, 2015

ATLAS Experiment

Getting ready for the next discovery

I’m just on my way back home after a great week spent in Ljubljana where I joined (and enjoyed!) the XXVII edition of the Lepton-Photon conference.


Ljubljana city center (courtesy of Revital Kopeliansky).

During the Lepton-Photon conference many topics were discussed, including particle physics at colliders, neutrino physics, astroparticle physics as well as cosmology.

In spite of the wide spectrum of scientific activities shown in Lepton-Photon, the latest measurements by the experiments at the Large Hadron Collider (LHC) based on 13 TeV proton-proton collision data were notable highlights of the conference and stimulated lively discussions.

The investigation of the proton-proton interactions in this new, yet unexplored, energy regime is underway using new data samples provided by the LHC. One of the first analyses performed by ATLAS is the measurement of the proton-proton inelastic cross section; this analysis is of remarkable relevance for the understanding of cosmic-ray interactions in the terrestrial atmosphere, thus offering a natural bridge between experiments at high-energy colliders and astroparticle physics.

Dragon sculpture on the Dragon Bridge in Ljubljana.

While we are already greatly excited about the new results based on the 13 TeV collisions provided by LHC, it is also clear that the best is yet to come! As discussed during the conference, the Higgs boson discovered in 2012 by the ATLAS and CMS collaborations still has many unknown properties; its couplings with quarks and leptons need to be directly measured. Remarkably, by the end of next year, the data provided by LHC will have enough Higgs boson events to perform the measurements of many Higgs-boson couplings with good experimental accuracy.

Precision measurements of the Higgs boson properties offer a way to look for new physics at LHC, complementary to direct searches for new particles in the data. Direct searches for new particles, or new physics, at LHC will play a major role in the coming months and years.

A few “hints” of possible new-physics signals were already observed in the data collected by ATLAS at lower energy in 2011 and 2012. Unfortunately such hints are still far from any confirmation and the analysis of the 13 TeV proton-proton collision data will clarify the current intriguing scenarios.

Although LHC is in its main running phase, with many years of foreseen operation ahead of us, the future of particle physics is already being actively discussed, starting from the future world-wide accelerator facilities.

During Lepton-Photon, many projects were presented including proposals for new infrastructure at CERN, in Japan and in China. All these proposals show a strong potential for major scientific discoveries and will be further investigated, posing the basis for particle physics for the next fifty years to come.

Social dinner during the Lepton-Photon conference.

Without a doubt one of the most inspiring moments of this conference was the public lecture about cosmological inflation given by Alan Guth. It attracted more than one thousand people from Ljubljana and stimulated an interesting debate. In his lecture, Alan Guth stressed the relevant steps forward taken by the scientific community in the understanding of the formation and the evolution of the Universe.

At the same time, Alan Guth remarked on our lack of knowledge of many basic aspects of our Universe, including the dark matter and dark energy puzzles. Dark energy is typically associated with very high energy scales, about one quadrillion times higher than the energy of protons accelerated by the LHC; therefore, it is expected that dark energy can’t be studied with accelerated particle beams. On the other hand, dark matter particles are associated with much lower energy scales, and thus they are within the reach of many experiments, including ATLAS and CMS!


Nicola joined the ATLAS experiment in 2009 as a Master’s student at INFN Lecce and Università del Salento in Italy, where he also contributed to the ATLAS physics program as a PhD student. He is currently a postdoctoral researcher at Aristotle University of Thessaloniki. His main research activity concerns ATLAS Standard Model physics, including hard strong-interaction and electroweak measurements. Beyond particle physics, he loves traveling, hiking, kayaking, martial arts, contemporary art, and rock-music festivals.

by orlando at August 25, 2015 08:02 PM

Symmetrybreaking - Fermilab/SLAC

All about supernovae

Exploding stars have an immense capacity to destroy—and create.

Somewhere in the cosmos, a star is reaching the end of its life.

Maybe it’s a massive star, collapsing under its own gravity. Or maybe it’s a dense cinder of a star, greedily stealing matter from a companion star until it can’t handle its own mass.

Whatever the reason, this star doesn’t fade quietly into the dark fabric of space and time. It goes kicking and screaming, exploding its stellar guts across the universe, leaving us with unparalleled brightness and a tsunami of particles and elements. It becomes a supernova. Here are ten facts about supernovae that will blow your mind.

1. The oldest recorded supernova dates back almost 2000 years

In 185 AD, Chinese astronomers noticed a bright light in the sky. Documenting their observations in the Book of Later Han, these ancient astronomers noted that it sparkled like a star, appeared to be half the size of a bamboo mat and did not travel through the sky like a comet. Over the next eight months this celestial visitor slowly faded from sight. They called it a “guest star.”

Two millennia later, in the 1960s, scientists found hints of this mysterious visitor in the remnants of a supernova approximately 8000 light-years away. The supernova, SN 185, is the oldest known supernova recorded by humankind.

2. Many of the elements we’re made of come from supernovae

Everything from the oxygen you’re breathing to the calcium in your bones, the iron in your blood and the silicon in your computer was brewed up in the heart of a star.

As a supernova explodes, it unleashes a hurricane of nuclear reactions. These nuclear reactions produce many of the building blocks of the world around us. The lion’s share of elements between oxygen and iron comes from core-collapse supernovae, those massive stars that collapse under their own gravity. They share the responsibility of producing the universe’s iron with thermonuclear supernovae, white dwarves that steal mass from their binary companions. Scientists also believe supernovae are a key site for the production of most of the elements heavier than iron.

3. Supernovae are neutrino factories

In a 10-second period, a core-collapse supernova will release a burst of more than 10^58 neutrinos, ghostly particles that can travel undisturbed through almost everything in the universe.

Outside of the core of a supernova, it would take a light-year of lead to stop a neutrino. But when a star explodes, the center can become so dense that even neutrinos take a little while to escape. When they do escape, neutrinos carry away 99 percent of the energy of the supernova.

Scientists watch for that burst of neutrinos using an early warning system called SNEWS. SNEWS is a network of neutrino detectors across the world. Each detector is programmed to send a datagram to a central computer whenever it sees a burst of neutrinos. If more than two experiments observe a burst within 10 seconds, the computer issues an automatic alert to the astronomical community to look out for an exploding star.
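The coincidence logic itself is simple enough to sketch in a few lines. This is only an illustration of the idea described above, not the actual SNEWS software; the detector names, timestamps and multiplicity threshold are made up:

from datetime import datetime, timedelta

# Hypothetical burst reports: (experiment name, time the burst candidate was seen)
reports = [
    ("Detector-A", datetime(2015, 8, 25, 12, 0, 0)),
    ("Detector-B", datetime(2015, 8, 25, 12, 0, 4)),
    ("Detector-C", datetime(2015, 8, 25, 12, 0, 9)),
]

def should_alert(reports, window=timedelta(seconds=10), min_experiments=2):
    """True if enough distinct experiments report a burst within one time window."""
    reports = sorted(reports, key=lambda r: r[1])
    for i, (_, t0) in enumerate(reports):
        experiments = {name for name, t in reports[i:] if t - t0 <= window}
        if len(experiments) >= min_experiments:
            return True
    return False

if should_alert(reports):
    print("Coincident neutrino bursts seen -- tell astronomers to watch for a supernova!")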

But you don’t have to be an expert astronomer to receive an alert. Anyone can sign up to be among the first to know that a star's core has collapsed.

4. Supernovae are powerful particle accelerators

Supernovae are natural space laboratories; they can accelerate particles to at least 1000 times the energy of particles in the Large Hadron Collider, the most powerful collider on Earth.

The interaction between the blast of a supernova and the surrounding interstellar gas creates a magnetized region, called a shock. As particles move into the shock, they bounce around the magnetic field and get accelerated, much like a basketball being dribbled closer and closer to the ground. When they are released into space, some of these high-energy particles, called cosmic rays, eventually slam into our atmosphere, colliding with atoms and creating showers of secondary particles that rain down on our heads.

5. Supernovae produce radioactivity

In addition to forging elements and neutrinos, the nuclear reactions inside of supernovae also cook up radioactive isotopes. Some of this radioactivity emits light signals, such as gamma rays, that we can see in space.

This radioactivity is part of what makes supernovae so bright. It also provides us with a way to determine if any supernovae have blown up near Earth. If a supernova occurred close enough to our planet, we’d be sprayed with some of these unstable nuclei. So when scientists come across layers of sediment with spikes of radioactive isotopes, they know to investigate whether what they’ve found was spit out by an exploding star.

In 1998, physicists analyzed crusts from the bottom of the ocean and found layers with a surge of ⁶⁰Fe, a rare radioactive isotope of iron that can be created in copious amounts inside supernovae. Using the rate at which ⁶⁰Fe decays over time, they were able to calculate how long ago it landed on Earth. They determined that it was most likely dumped on our planet by a nearby supernova about 2.8 million years ago.
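The dating step is ordinary radioactive-decay arithmetic. A minimal sketch, where the measured fraction is invented for illustration and the half-life is the approximate modern value for iron-60 (which differs from the value available to the 1998 team):

import math

HALF_LIFE_FE60 = 2.6e6   # years; approximate modern half-life of iron-60

def years_since_deposition(remaining_fraction, half_life=HALF_LIFE_FE60):
    """Elapsed time from the fraction of the isotope that has not yet decayed."""
    return half_life * math.log2(1.0 / remaining_fraction)

# Hypothetical measurement: if roughly half of the deposited iron-60 were still present,
# the sediment layer would be about one half-life old.
print(f"{years_since_deposition(0.5):.2g} years")   # ~2.6 million years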

6. A nearby supernova could cause a mass extinction

If a supernova occurred close enough, it could be pretty bad news for our planet. Although we’re still not sure about all the ways being in the midst of an exploding star would affect us, we do know that supernovae emit truckloads of high-energy photons such as X-rays and gamma rays. The incoming radiation would strip our atmosphere of its ozone. All of the critters in our food chain from the bottom up would fry in the sun’s ultraviolet rays until there was nothing left on our planet but dirt and bones.

Statistically speaking, a supernova in our own galaxy has been a long time coming.

Supernovae occur in our galaxy at a rate of about one or two per century. Yet we haven’t seen a supernova in the Milky Way in around 400 years. The most recent nearby supernova was observed in 1987, and it wasn’t even in our galaxy. It was in a nearby satellite galaxy called the Large Magellanic Cloud.

But death by supernova probably isn’t something you have to worry about in your lifetime, or your children’s or grandchildren’s or great-great-great-grandchildren’s lifetime. IK Pegasi, the closest candidate we have for a supernova, is 150 light-years away—too far to do any real damage to Earth.

Even that 2.8-million-year-old supernova that ejected its radioactive insides into our oceans was at least 100 light-years from Earth, which was not close enough to cause a mass extinction. The physicists deemed it a “near miss.”

7. Supernovae light can echo through time

Just as your voice echoes when its sound waves bounce off a surface and come back again, a supernova echoes in space when its light waves bounce off cosmic dust clouds and redirect themselves toward Earth.

Because the echoed light takes a scenic route to our planet, this phenomenon opens a portal to the past, allowing scientists to look at and decode supernovae that occurred hundreds of years ago. A recent example of this is SN1572, or Tycho’s supernova, a supernova that occurred in 1572. This supernova shined brighter than Venus, was visible in daylight and took two years to dim from the sky.

In 2008, astronomers found light waves originating from the cosmic demolition site of the original star. They determined that they were seeing light echoes from Tycho’s supernova. Although the light was 20 billion times fainter than what astronomer Tycho Brahe observed in 1572, scientists were able to analyze its spectrum and classify the supernova as a thermonuclear supernova.

More than four centuries after its explosion, light from this historical supernova is still arriving at Earth.

8. Supernovae were used to discover dark energy

Because thermonuclear supernovae are so bright, and because their light brightens and dims in a predictable way, they can be used as lighthouses for cosmology.

In 1998, scientists thought that cosmic expansion, initiated by the big bang, was likely slowing down over time. But supernova studies suggested that the expansion of the universe was actually speeding up.

Scientists can measure the true brightness of supernovae by looking at the timescale over which they brighten and fade. By comparing how bright these supernovae appear with how bright they actually are, scientists are able to determine how far away they are.
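In practice this is the classic distance-modulus relation. A minimal sketch with an invented apparent magnitude; the -19.3 is the commonly quoted peak absolute magnitude of a Type Ia supernova:

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M_TYPE_IA = -19.3    # typical peak absolute magnitude of a Type Ia supernova
m_observed = 22.5    # hypothetical apparent peak magnitude of a distant supernova

d = distance_parsecs(m_observed, M_TYPE_IA)
# ~2.3e9 parsecs, roughly 7.5 billion light-years (ignoring the cosmological
# corrections that matter at such distances)
print(f"{d:.2g} parsecs, or about {d * 3.26 / 1e9:.1f} billion light-years")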

Scientists can also measure the increase in the wavelength of a supernova’s light as it moves farther and farther away from us. This is called the redshift.

Comparing the redshift with the distances of supernovae allowed scientists to infer how the rate of expansion has changed over the history of the universe. Scientists believe that the culprit for this cosmic acceleration is something called dark energy.

9. Supernovae occur at a rate of approximately 10 per second

By the time you reach the end of this sentence, it is likely a star will have exploded somewhere in the universe.

As scientists evolve better techniques to explore space, the number of supernovae they discover increases. Currently they find over a thousand supernovae per year.

But when you look deep into the night sky at bright lights shining from billions of light-years away, you’re actually looking into the past. The supernovae that scientists are detecting stretch back to the very beginning of the universe. By adding up all of the supernovae they’ve observed, scientists can figure out the rate at which supernovae occur across the entire universe.

Scientists estimate about 10 supernovae occur per second, exploding in space like popcorn in the microwave.
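You can reproduce the order of magnitude from the per-galaxy rate quoted earlier plus an assumed galaxy count; the count below is my assumption, and the answer shifts linearly with it:

SECONDS_PER_YEAR = 3.156e7
rate_per_galaxy = 1.5 / 100   # supernovae per galaxy per year ("one or two per century")
n_galaxies = 2e10             # assumed number of galaxies in the observable universe

rate = rate_per_galaxy * n_galaxies / SECONDS_PER_YEAR
print(f"~{rate:.0f} supernovae per second")   # ~10 with these assumptions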

10. We’re about to get much better at detecting far-away supernovae

Even though we’ve been aware of these exploding stars for millennia, there’s still so much we don’t know about them. There are two known types of supernovae, but there are many different varieties that scientists are still learning about.

Supernovae could result from the merger of two white dwarfs. Alternatively, the rotation of a star could create a black hole that accretes material and launches a jet through the star. Or the density of a star’s core could be so high that it starts creating electron-positron pairs, causing a chain reaction in the star.

Right now, scientists are mapping the night sky with the Dark Energy Survey, or DES. Scientists can discover new supernova explosions by looking for changes in the images they take over time.

Another survey currently going on is the All-Sky Automated Survey for Supernovae, or the ASAS-SN, which recently observed the most luminous supernova ever discovered.

In 2019, the Large Synoptic Survey Telescope, or LSST, will revolutionize our understanding of supernovae. LSST is designed to collect more light and peer deeper into space than ever before. It will move rapidly across the sky and take more images in larger chunks than previous surveys. This will increase the number of supernovae we see by hundreds of thousands per year.

Studying these astral bombs will expand our knowledge of space and bring us even closer to understanding not just our origin, but the cosmic reach of the universe.

 


 

by Ali Sundermier at August 25, 2015 01:00 PM

arXiv blog

The 20 Most Infamous Cyberattacks of the 21st Century (Part I)

Cyberattacks are on the increase, and one cybersecurity researcher is on a mission to document them all.

August 25, 2015 04:15 AM

August 24, 2015

Tommaso Dorigo - Scientificblogging

New Frontiers In Physics: The 2015 Conference In Kolimbari
Nowadays Physics is a very big chunk of science, and although in our University courses we try to give our students a basic knowledge of all of it, it has become increasingly clear that it is very hard to keep up to date with the developments in such diverse sub-fields as quantum optics, materials science, particle physics, astrophysics, quantum field theory, statistical physics, thermodynamics, etcetera.

Simply put, there is not enough time within the average lifetime of a human being to read and learn about everything that is being studied in dozens of different disciplines that form what one may generically call "Physics".


read more

by Tommaso Dorigo at August 24, 2015 01:21 PM

Clifford V. Johnson - Asymptotia

Beetlemania…

Fig beetles.


(Slightly blurred due to it being windy and a telephoto shot with a light handheld point-and-shoot...)

-cvj Click to continue reading this post

The post Beetlemania… appeared first on Asymptotia.

by Clifford at August 24, 2015 10:20 AM

astrobites - astro-ph reader's digest

An Explosive Signature of Galaxy Collisions

Gamma ray bursts (GRBs) are among the most dramatically explosive events in the universe. They’re often dubbed the largest explosions since the Big Bang (it’s pretty hard to quantify how big the Big Bang was, but suffice it to say it was quite large). There are two classes of GRBs: long-duration and short-duration. Long-duration GRBs (which interest us today) are caused when extremely massive stars go bust.


Fig 1. – Long-duration GRBs are thought to form during the deaths of the most massive stars. As the stars run out of fuel (left to right) they start fusing heavier elements together until reaching iron (Fe). Iron doesn’t fuse, and the star can collapse into a black hole. As the material is sucked into the black hole, a powerful jet can burst out into the universe (bottom left), which we would observe as a GRB.

The most massive stars burn through their fuel much faster, and die out much more quickly than smaller stars. Therefore, long-duration GRBs should only be seen in galaxies with a lot of recent star formation. All the massive stars will have already died in a galaxy which isn’t forming new stars. Lots of detailed observations have been required to confirm this connection between GRBs and their host galaxies. It’s, in fact, one of the main pieces of evidence for the massive-star explanation.

The authors of today’s paper studied the host galaxy of a long-duration GRB with an additional goal in mind. Rather than just show that this galaxy is forming lots of stars, they wanted to look at its gas to explain why it’s forming so many stars. So, they went looking for neutral hydrogen gas in the galaxy. Neutral gas is a galaxy’s fuel for forming new stars. Understanding the properties of the gas should tell us about the way in which the galaxy is forming stars.

Hot, ionized hydrogen is easy to observe, because it emits a lot of light in the UV and optical ranges. This ionized hydrogen is found right around young, star-forming regions, and so has been seen in GRB hosts before. But the cold, neutral hydrogen – which makes up most of a galaxy’s gas – is much harder to observe directly. It doesn’t emit much light on its own, but one of the main places it does emit is in the radio band: the 21-cm line. For more information on the physics involved, see this astrobite page, but suffice it to say that pretty much all neutral hydrogen emits weakly at 21 cm.
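For reference, the line's name is simply its wavelength, which follows from the frequency of the hyperfine transition of neutral hydrogen:

\lambda = \frac{c}{\nu} \approx \frac{3\times10^{8}\ {\rm m/s}}{1.42\times10^{9}\ {\rm Hz}} \approx 21\ {\rm cm}.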

This signal is weak enough that it hasn’t been detected in the more distant GRB hosts. Today’s authors observed the host galaxy of the closest-yet-observed GRB (980425), which is only 100 million light-years away: about 50 times farther away than the Andromeda galaxy. This is practically just next-door, compared to most GRBs. This close proximity allowed them to make the first ever detection of 21-cm hydrogen emission from a GRB host galaxy.


Fig. 2 – The radio map (contours) of the neutral hydrogen gas from 21-cm radio observations. The densest portions of the disk align with the location of the GRB explosion (red arrow) and a currently-ongoing burst of star formation (blue arrow). Fig 2 from Arabsalmani et al. 2015

Using powerful radio observations – primarily from the Giant Metrewave Radio Telescope – the authors made maps of hydrogen 21-cm emission across the galaxy. They found a large disk of neutral gas, which was thickest in the region around where the GRB went off. Denser gas leads to more ongoing star formation, which as we know can mean that very massive stars may still be around to become GRBs.

The most important finding, however, was that the gas disk had been disturbed: more than 21% of the gas wasn’t aligned with the disk. This disturbance most likely came from a merger with a smaller galaxy that stirred up the disk as it passed by. The authors argue that this merger could have helped get the star formation going. By shock-compressing the gas, the disturbance would have kick-started the galaxy into forming stars and, eventually, resulted in the GRB.

This paper is quite impressive, as it shows that astronomers are probing farther into the link between GRBs and their host galaxies. Astronomers have known for a while that GRBs are sign-posts to galaxies which are forming lots of stars. But today’s paper used radio observations of the gas to connect that star formation to a recent merger. Most GRB hosts are much farther away, and similar observations will be difficult. But with more sensitive observatories – like ALMA or the VLA – it may be possible to see whether the gas of more GRB hosts show evidence of mergers. Perhaps GRBs are telling us even more about their galaxies than we had thought before!

by Ben Cook at August 24, 2015 04:44 AM

August 23, 2015

Clifford V. Johnson - Asymptotia

Red and Round…

red_tomatoes_aug_2015

Some more good results from the garden, after I thought that the whole crop was again going to be prematurely doomed, like last year. I tried to photograph the other thing about this year's gardening narrative that I intend to tell you about, but with poor results, but I'll say more shortly. In the meantime, for the record here are some Carmello tomatoes and some of a type of Russian Black [...] Click to continue reading this post

The post Red and Round… appeared first on Asymptotia.

by Clifford at August 23, 2015 12:10 AM

August 21, 2015

ZapperZ - Physics and Physicists

Quantum Teleportation Versus Star Trek's "Transporter".
Chad Orzel has an article on Forbes explaining a bit more about what quantum teleportation is, and how it differs from the transporters in Star Trek. You might think that this is rather well-known since it has been covered many times, even on this blog. But ignorance of what quantum teleportation is still pops up frequently, and I see people on public forums who still think that we can transport objects from one location to another because "quantum teleportation" has been verified.

So, if you are still cloudy on this topic, you might want to read that article.

Zz.

by ZapperZ (noreply@blogger.com) at August 21, 2015 12:48 PM

arXiv blog

How Astronomers Could Observe Light Sails Around Other Stars

Light sails are a promising way of exploring star systems. If other civilizations use them, these sails should be visible from Earth, say astrophysicists.

August 21, 2015 04:38 AM

August 20, 2015

astrobites - astro-ph reader's digest

Magnetars: The Perpetrators of (Nearly) Everything

Fig 1: Artist’s conception of a magnetar with strong magnetic field lines. [From Wikipedia Commons]

Astronomers who study cosmic explosions have a running joke: anything too wild to explain with standard models is probably a magnetar. These scapegoats are neutron stars with extremely powerful magnetic fields, like the one shown to the right.

Super-luminous supernovae? Probably a magnetar collapsing. Short, weak gamma ray bursts? Why not magnetar flares. Ultra-long gamma ray bursts? Gotta be magnetars. Magnetars are a popular model due to their natural versatility. In today’s paper, the authors tie together several magnetar theories into a cohesive theoretical explanation of two types of transients, or short cosmic events: super-luminous supernovae (SLSNe) and ultra-long gamma ray bursts (ULGRBs).

The Super-Ultra Transients

Super-luminous supernovae, as their name suggests, are extreme stellar deaths which are about 100 times brighter than normal core-collapse supernovae. The brightest SLSN, ASASSN-15lh (pronounced “Assassin 15lh”), is especially troubling for scientists because it lies well above the previously predicted energy limits of a magnetar model*. The other new-kids-on-the-transient-block are ultra-long gamma ray bursts, which are bursts of gamma-ray energy lasting a few thousand seconds. The other popular variety of gamma ray bursts associated with SNe are plain-old “long gamma ray bursts”, which last less than 100 seconds and are often accompanied by a core-collapse supernova. Both long gamma ray bursts and ULGRBs are currently predicted in magnetar models. The question is: can we tie these two extreme events, SLSNe and ULGRBs, together in a cohesive theoretical framework?

The authors say yes! The basic theoretical idea proposed is that a very massive star will begin to collapse like a standard core-collapse supernova. The implosion briefly leaves behind a twirling pulsar whose angular momentum is saving it from collapsing into a black hole. Material is flung from its spinning surface, especially along its magnetic poles. From these poles, collimated material is seen as high-energy jets, like you can see in this video of Vela. Eventually, the magnetar slows down and finally collapses into a black hole.

Fig 2: The connection between ULGRBs and SLSNe. Along the x-axis is the initial spin period of the magnetar. On the y-axis is the magnetic field of the magnetar. The red-shaded region shows where SLSNe are possible, and the blue-shaded region shows where GRBs are possible. The red and green points are observed SLSNe.

Connecting the Dots

We can explain the consequences of the model using the image shown above. In the upper left quadrant, the magnetars spin very quickly (i.e. short spin periods) and have large magnetic fields. In this scenario, the escaping magnetar jets are extremely powerful and collimated, and the magnetar will spin down and collapse into a black hole after a few minutes. This scenario describes the typical long gamma ray bursts that we often see.

Now if we move down and right on the figure, our initial magnetic field weakens and the period of the magnetar grows. In this case, the expected jet from the magnetar will weaken, but it will last longer as the magnetar takes a longer time to slow down its life-preserving spin. If the jet is able to blast its way out of the magnetar and is directed towards us, we will see it as an ULGRB, with a lifetime of about a half hour!
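These trends follow from the standard vacuum magnetic-dipole estimates (written here schematically, not as the specific formulas used in the paper): a magnetar with magnetic field B, radius R, moment of inertia I and spin frequency Ω = 2π/P loses rotational energy at a rate

L_{\rm sd} \sim \frac{B^2 R^6 \Omega^4}{6 c^3}, \qquad
\tau_{\rm sd} \sim \frac{E_{\rm rot}}{L_{\rm sd}} = \frac{3 I c^3}{B^2 R^6 \Omega^2} \propto \frac{P^2}{B^2},

so a stronger field and faster spin give a more luminous but shorter-lived engine (ordinary long GRBs), while a weaker field and slower spin give a fainter engine that keeps going long enough to power an ultra-long burst.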

One of the most exciting features of the plot is the set of solid black lines that show where the supernova luminosity is maximized. At these points, the luminosity of the supernova is enhanced by the magnetar, leading to super-luminous supernovae. These lines are in great agreement with three notable, luminous SNe. It’s especially exciting that the black contours overlap with the region where ultra-long GRBs are produced. In other words, the authors predict that it is possible for a super-luminous supernova to be associated with an ultra-long gamma ray burst, tying together these extreme phenomena.

What’s Next?

One of the best tests of this theory will come from observations of ASASSN-15lh over the next several months. Ionizing photons from a magnetar model are predicted to escape from the supernova’s ejected material in the form of X-rays. Observations of these X-ray photons could be a smoking gun for a magnetar model of super-luminous supernovae, so stay tuned!

*Note: At the time of this bite, ASASSN-15lh is not a confirmed supernova. It may be another exciting type of transient known as a tidal disruption event, which you can read all about here.

by Ashley Villar at August 20, 2015 02:11 PM

Symmetrybreaking - Fermilab/SLAC

Q&A: Marcelle Soares-Santos

Scientist Marcelle Soares-Santos talks about Brazil, neutron stars and a love of discovery.

Marcelle Soares-Santos has been exploring the cosmos since she was an undergraduate at the Federal University of Espirito Santo in southeast Brazil. She received her PhD from the University of São Paulo and is currently an astrophysicist on the Dark Energy Survey based at Fermi National Accelerator Laboratory outside Chicago.

Soares-Santos has worked at Fermilab for only five years, but she has already made a significant impact: In 2014, she received the Alvin Tollestrup Award for postdoctoral research. Now she is embarking on a new study to measure gravitational waves from neutron star collisions.

 

S: You recently attended the LISHEP conference, a high-energy physics conference held annually in Brazil. This year it was held in the region of Manaus, near your childhood home. What was it like to grow up there?

MS: Manaus is very different from the region that I think most foreigners know, Rio or São Paulo, but it’s very beautiful, very interesting. When I was four, my father worked for a mining company, and they found a huge reserve of iron ore in the middle of the Amazon forest. All over Brazil, people got offers from that company to get some extra benefits, which was very good for us because one of the benefits was a chance to go to a good school there.

 

S: When did you get interested in physics?

MS: That was very early on, when I was a little kid. I didn’t know that it was physics I wanted to do, but I knew I wanted to do science. I tend to say that I lacked any other talents. I could not play any sport, I wasn’t good in the arts. But math and science, that was something I was good at.

These days I look back and feel that, had I known what I know today, I might not have had this confidence, because I understand now how lots of people are not encouraged to view physics as a topic they can handle. But back then I had a little bit of blind faith in the school system.

 

S: You work on the Dark Energy Survey. When did the interest in astrophysics kick in?

MS: I did an undergraduate research project. In Brazil, there is a program of research initiation where undergraduates can work for an entire year on a particular topic. My supervisor’s research was related to dark energy and gravitational waves. It’s interesting, because today I work on those two topics from a completely different perspective.

 

S: You’re also starting on a new project to study gravitational waves. What’s that about?

MS: For the first time we are building detectors that will be able to detect gravitational waves, not from cosmological sources, but from colliding neutron stars. These events are very rare, but we know they occur, and we can calculate how much gravitational wave emission there will be. The detectors are now reaching the sensitivity needed to see that. There’s LIGO in the United States and VIRGO in Europe.

Relying solely on gravitational waves, it’s possible only to roughly localize in the sky where the star collision happens. But we also have the Dark Energy Camera, so we can use it to find the optical counterpart—lots and lots of photons—and pinpoint the event picked up by the gravitational wave detector.

If we see the collision, we will be the first ones to see it based on a gravitational wave signal. That will be really cool.

 

S: How did the project get started? What is it called?

MS: I saw an announcement that LIGO was going to start operating this year, and I thought, “DECam would be great for this.” I talked to Jim Annis [at Fermilab] and said, “Look, look at this. It would be cool.” And he said, “Yeah, it would.”

It’s called the DES-GW project. It will start up in September. Groups from Fermilab, the University of Chicago, University of Pennsylvania and Harvard are participating.

 

S: What’s your favorite thing about what you do?

MS: Building these crazy ideas to become a reality. That’s the fun part of it. Of course, it’s not always possible, and we have more ideas than we can actually realize, but if you get to do one, it’s really cool. Part of the reason I moved from theory [as a graduate student] to experiment is that I wanted to do something where you actually get to close the loop of answering a question.

 

S: Has anything about being a scientist surprised you?

MS: In the beginning I thought I’d never be the person doing hands-on work on a detector. I thought of myself more as someone who would be sitting in front of a computer. And it’s true that I spend most of my time sitting in front of the computer, but I also get a chance to go to Chile [where the Dark Energy Camera is located] and take data, be at the lab and get my hands dirty. Back then I thought that was more the role of an engineer than a scientist. I learned the label doesn’t matter. It is a part of the job, and it’s a fun part.

 

S: In June 2014 Fermilab posted on Facebook about you winning the Alvin Tollestrup Award. It received by far the most likes of any Fermilab post up to that point, and most were pouring in from Brazil. What was behind its popularity?

MS: That was surprising for me. Typically whenever there is something on Facebook related to what I do, my parents will be excited about it and repost, so I get a few likes and reposts from relatives and friends. This one, I don’t know what happened. I think in part there was a little bit of pride, people seeing a Brazilian being successful abroad.

I got lots of friend requests from people I’ve never met before. I got questions from high schoolers about physics and how to pursue a physics education. It’s a big responsibility to say something. What do you say to people? I tried to answer reasonably and tell them my experience. It was my 15 minutes of fame in social media.

 


 

by Leah Hesla at August 20, 2015 01:00 PM

Axel Maas - Looking Inside the Standard Model

Looking for one thing in another
One of the most powerful methods to learn about particle physics is to smash two particles against each other at high speed. This is what is currently done at the LHC, where two protons are used. Protons have the advantage that they are simple to handle on an engineering level, but since they are made up of quarks and gluons, these collisions are rather messy.

An alternative is colliders using electrons and positrons, and many of these have been used successfully in the past. The reason is that electrons and positrons appear at first sight to be elementary, which makes the collisions much cleaner, although such machines are technically harder to build and use. Nonetheless, there are right now two large projects planned to use them, one in Japan and one in China. The decision is still out on whether either, or both, will be built, but they would both open up a new view on particle physics. These would start, hopefully, in the late 2020s or early 2030s.

However, they may be a little messier than currently expected. I have previously written several times about our research on the structure of the Higgs particle, especially that the Higgs may be much less simple than just a single particle. We are currently working on possible consequences of this insight.

What has this to do with the collisions? Well, already 35 years ago people figured out that if the statements about the Higgs are true, then this has further implications. In particular, the electrons as we know them cannot really be elementary particles. Rather, they have to be bound states, a combination of what we usually call an electron and a Higgs.

This is confusing at first sight: an electron should consist of an electron and something else? The confusion arises because a clear distinction is often not made, so one should be more precise. One should first think of an elementary 'proper' electron. Together with the Higgs, this proper electron creates a bound state. This bound state is what we perceive and usually call an electron, but it is different from the proper electron. We should therefore rather call it an 'effective' electron. This muddle is not so different from what you would get if you also called a hydrogen atom a proton, since the electron is such a small addition to the proton in the atom. That we do not do so has a historical origin, as it does, in the reverse way, for the (effective) electron. Yes, it is confusing.

Even after mastering this confusion, it still seems a rather outrageous statement. The Higgs is so much heavier than the electron; how can that be? Well, field theory does allow a bound state of two particles to be much, much lighter than the two individual particles. This effect is called a mass defect. It has been measured for atoms and nuclei, but there it is a very small effect. So it is in principle possible. Still, the enormous size of the effect makes this a very surprising statement.
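Written schematically, the mass of a two-particle bound state is

M_{\rm bound} = m_1 + m_2 - \frac{E_B}{c^2},

where E_B is the binding energy. For the familiar benchmark of the deuteron, E_B ≈ 2.2 MeV out of roughly 1877 MeV of constituent mass, a mass defect of about 0.1%. The scenario described here would need the binding to cancel almost the entire Higgs contribution, which is why the size of the effect is so striking.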

What we want to do now is to find some way to confirm this picture using experiments. And confirm this extreme mass defect.

Unfortunately, we cannot disassemble the bound state. But the presence of the Higgs can be seen in a different way. When we collide such bound states hard enough, we usually end up colliding just one of the constituents from each bound state, rather than the whole thing. In a simple picture, in most cases one of the parts will be at the front of the collision and will take the larger part of the hit.

This means that sometimes we will collide the conventional part, the proper electron. Then everything looks as usually expected. But sometimes we will do something involving the Higgs from either or both bound states. We can estimate already that anything involving the Higgs will be very rare. In the simple picture above, the Higgs, being much heavier than the proper electron, mostly drags behind the proper electron. But 'rare' is not quantitative enough in physics. Therefore we now have to do calculations. This is a project I intend to start now with a new master's student.

We want to do this since we want to predict whether the aforementioned future experiments will be sensitive enough to see that sometimes we actually collide the Higgses. That would confirm the idea of bound states. Then we would indeed find something other than what people were originally looking for. Or expecting.

by Axel Maas (noreply@blogger.com) at August 20, 2015 07:42 AM

August 19, 2015

ZapperZ - Physics and Physicists

The Apparent Pentaquark Discovery - More Explanation
Recall the report on the apparent observation of a pentaquark made by LHCb a few weeks back. Fermilab's Don Lincoln had a video that explains a bit of what a quark is, what a pentaquark is, and how physics will proceed in verifying this.



Zz.

by ZapperZ (noreply@blogger.com) at August 19, 2015 03:29 PM

ZapperZ - Physics and Physicists

The Physics Of Air Conditioners
Ah, the convenience of having air conditioning. How many of us have thanked the technology that gives so much comfort during a hot, muggy day.

This CNET article covers the basic physics of air conditioners. Any undergraduate student who has taken an intro physics course should know the basic physics of this device from studying thermodynamics and the Carnot cycle. It is essentially a heat pump, where heat is transferred from a cooler reservoir to a warmer reservoir.
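The textbook figure of merit here is the coefficient of performance of an ideal (Carnot) refrigerator, which depends only on the two reservoir temperatures:

{\rm COP}_{\rm Carnot} = \frac{T_{\rm cold}}{T_{\rm hot} - T_{\rm cold}}.

For a 295 K room and a 308 K summer afternoon this ideal limit is about 23, while real air conditioners achieve a COP of a few; the point is simply that the smaller the temperature difference, the cheaper it is to pump the heat.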

But, if you have forgotten about this, or if you are not aware of the physics behind that thing that gives you such comfort, then you might want to read it.

Zz.

by ZapperZ (noreply@blogger.com) at August 19, 2015 03:02 PM

August 18, 2015

Clifford V. Johnson - Asymptotia

Mid-Conversation…

three_beginnings_panels

Been a while since I shared a snippet from the graphic book in progress. And this time the dialogue is not redacted! A few remarks: [...] Click to continue reading this post

The post Mid-Conversation… appeared first on Asymptotia.

by Clifford at August 18, 2015 08:44 PM

arXiv blog

Physicists Unveil First Quantum Interconnect

An international team of physicists has found a way to connect quantum devices in a way that transports entanglement between them.

August 18, 2015 06:07 PM

Symmetrybreaking - Fermilab/SLAC

The age of the universe

How can we figure out when the universe began?

Looking out from our planet at the vast array of stars, humans have always asked questions central to our origin: How did all of this come to be? Has it always existed? If not, how and when did it begin?

How can we determine the history of something so complex when we were not around to witness its birth?

Scientists have used several methods: checking the age of the oldest objects in the universe, determining the expansion rate of the universe to trace backward in time, and using measurements of the cosmic microwave background to figure out the initial conditions of the universe and its evolution.

Hubble and an expanding universe

In the early 1900s, there was no such concept of the age of the universe, says Stanford University associate professor Chao-Lin Kuo of SLAC National Accelerator Laboratory. “Philosophers and physicists thought the universe had no beginning and no end.”

Then in the 1920s, mathematician Alexander Friedmann predicted an expanding universe. Edwin Hubble confirmed this when he discovered that many galaxies were moving away from our own at high speeds. Hubble measured several of these galaxies and in 1929 published a paper stating the universe is getting bigger.

Scientists then realized that they could wind this expansion back in time to a point when it all began. “So it was not until Friedmann and Hubble that the concept of a birth of the universe started,” Kuo says.

Tracing the expansion of the universe back in time is called finding its “dynamical age,” says Nobel Laureate Adam Riess, professor of astronomy and physics at Johns Hopkins University.

“We know the universe is expanding, and we think we understand the expansion history,” he says. “So like a movie, you can run it backwards until everything is on top of everything in the big bang.”

The expansion rate of the universe is known as the Hubble constant.
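The crudest version of "running the movie backwards" is simply the inverse of the Hubble constant, which already lands in the right ballpark. A sketch, assuming a constant expansion rate and a round value of H0:

KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0                               # km/s/Mpc, a representative round value
H0_in_inverse_seconds = H0 / KM_PER_MPC
hubble_time_gyr = 1.0 / H0_in_inverse_seconds / SECONDS_PER_YEAR / 1e9

print(f"1/H0 ~ {hubble_time_gyr:.1f} billion years")   # ~14, close to the measured age

The real calculation folds in how the expansion rate has changed over cosmic history, which is why the quoted ages differ from this naive estimate by a few percent.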

The Hubble puzzle

The Hubble constant has not been easy to measure, and the number has changed several times since the 1930s, Kuo says.

One way to check the Hubble constant is to compare its prediction for the age of the universe with the age of the oldest objects we can see. At the very least, the universe should be older than the objects it contains.

Scientists can estimate the age of very old stars that have burned out—called white dwarfs—by determining how long they have been cooling. Scientists can also estimate the age of globular clusters, large clusters of old stars that formed at roughly the same time.

They have estimated the oldest objects to be between 12 billion and 13 billion years old.

In the 1990s, scientists were puzzled when they found that their estimate of the age of the universe—based on their measurement of the Hubble constant—was several billion years younger than the age of these oldest stars.

However, in 1998, Riess and colleagues Saul Perlmutter of Lawrence Berkeley National Laboratory and Brian Schmidt of the Australian National University found the root of the problem: The universe wasn’t expanding at a steady rate. It was accelerating.

They figured this out by observing a type of supernova, the explosion of a star at the end of its life. Type Ia supernovae explode with uniform brightness, and light travels at a constant speed. By observing several different Type Ia supernovae, the scientists were able to calculate their distance from the Earth and how long the light took to get here.

“Supernovae are used to determine how fast the universe is expanding around us,” Riess says. “And by looking at very distant supernovae that exploded in the past and whose light has taken a long time to reach us, we can also see how the expansion rate has recently been changing.”

Using this method, scientists have estimated the age of the universe to be around 13.3 billion years.

Recipe for the universe

Another way to estimate the age of the universe is by using the cosmic microwave background, radiation left over from just after the big bang that extends in every direction.

“The CMB tells you the initial conditions and the recipe of the early universe—what kinds of stuff it had in it,” Riess says. “And if we understand that well enough, in principle, we can predict how fast the universe made that stuff with those initial conditions and how the universe would expand at different points in the future.”

Using NASA’s Wilkinson Microwave Anisotropy Probe, scientists created a detailed map of the minute temperature fluctuations in the CMB. They then compared the fluctuation pattern with different theoretical models of the universe that predict patterns of CMB. In 2003 they found a match.

“Using these comparisons, we have been able to figure out the shape of the universe, the density of the universe and its components,” Kuo says. WMAP found that ordinary matter makes up about 4 percent of the universe; dark matter is about 23 percent; and the remaining 73 percent is dark energy. Using the WMAP data, scientists estimated the age of the universe to be 13.772 billion years, plus or minus 59 million years.

In 2013, the European Space Agency’s Planck space telescope created an even more detailed map of the CMB temperature fluctuations and estimated the universe to be 13.82 billion years old, plus or minus 50 million years—slightly older than WMAP’s estimate. Planck also made more detailed measurements of the components of the universe and found slightly less dark energy (around 68 percent) and slightly more dark matter (around 27 percent).

New puzzles

Even with these extremely precise measurements, scientists still have puzzles to solve. The measured current expansion rate of the universe tends to be about 5 percent higher than what is predicted from the CMB, and scientists are not sure why, Riess says.

“It could be a sign that we do not totally understand the physics of the universe, or it could be an error in either of the two measurements,” Riess says.

“It is a sign of tremendous progress in cosmology that we get upset and worried about a 5 percent difference, whereas 15 or 20 years ago, measurements of the expansion rate could differ by a factor of two.”

There is also much left to understand about dark matter and dark energy, which appear to make up about 95 percent of the universe. “Our best chance to understand the nature of these unknown dark components is by making these kinds of precise measurements and looking for small disagreements or a loose thread that we can pull on to see if the sweater unravels.”

 


by Amelia Williamson Smith at August 18, 2015 02:39 PM

Quantum Diaries

Making nuclear physics accessible to everyone

Grégoire Besse, a CNRS doctoral student in theoretical nuclear physics, shares his interest in science outreach.

It was when starting my thesis that I realized the inescapable link between research and outreach. I therefore gradually chose to take up this effort in order to explain my research and make it more "transparent" for ordinary mortals. My thesis is in theoretical nuclear physics and is entitled "Theoretical description of nuclear dynamics in heavy-ion collisions and its astrophysical implications". It takes place at the Subatech laboratory in Nantes. I work on the dynamical description of a nuclear system, that is, nuclei in collision or in a lattice. For this, the research group I belong to has developed a simulation code named Dynamical Wavelets in Nuclei (DYWAN). This code is already operational but is still being optimized.

Example of a low-energy collision between two nuclei. One can see the nuclei deform under the effect of the nuclear force and stick together, until they fuse.

Nuclear physics is concerned with nuclei and the behavior of the nuclear force. The nuclear force, or residual strong interaction, is the effect of the strong interaction (quarks and gluons) at the nuclear scale: it is the nucleon-nucleon interaction. Although it gets far less media attention than high-energy physics (that of the LHC and the Higgs boson), nuclear physics remains an essential link in understanding matter. Moreover, its applications are immediate, for example radioactivity, fission, fusion and the production of radioisotopes.

My passion in the service of my work

Overview of the 3D environment in OpenGL. It can be explored like a video game with keyboard and mouse. The nuclei (blue-red and cyan-pink), already intermingled, are represented by mathematical objects: coherent states (the balls with clouds of points).

The goal of my thesis is to provide a powerful simulation code capable of reproducing experimentally observed data and behaviors and then of predicting reactions. We focus on heavy-ion collisions, which make it possible to produce very exotic nuclear systems such as very neutron-rich matter. Other research groups in the laboratory focus instead on studies of radioactivity, lifetimes and the behavior of isolated nuclei. This reminds me of Albert Einstein's metaphor: to understand how a watch works without opening it, you have two options: observation (listening, looking, taking data and formulating hypotheses) or experimentation (you throw the watch against a wall, look at the parts that come out and try to put everything back together). We use mostly the second method.

Alongside my thesis, I am trying to develop a piece of software combining research and new technologies (I have gotten as far as a 3D environment that can be explored with keyboard and mouse). I am very interested in virtual and augmented reality: I think these tools will enable new approaches in research, a new point of view for a new theory. And it has already proved its worth: we have tracked down errors in DYWAN thanks to my software!

The blue bird, a friend of research

I joined Twitter fairly recently, but I quickly understood that this social network is a tremendous tool for research. Research is an active, constantly evolving world, so it seems only natural to keep up with its advances, since that is normally part of our job. Moreover, Twitter gives a quick overview (under 140 characters) of the important news.

I discovered the @EnDirectDuLabo account by chance: each week, a scientist takes the reins to share their daily life with the followers. With a potential audience of more than 2,000 people, the experience can be intimidating. But in the end, when my turn came, everything went well and I had exchanges with a varied audience: researchers, doctoral students, journalists, community managers, amateurs and other curious people.

In the end, this experience helped me get a better grasp of my thesis topic. Moreover, these "relationships" are very enriching on a daily basis: a photo, a sentence, an article, a blog; long live curiosity and sharing 2.0!

by CNRS-IN2P3 at August 18, 2015 02:39 PM

CERN Bulletin

CERN Bulletin Issue No. 34-36/2015
Link to e-Bulletin Issue No. 34-36/2015. Link to all articles in this issue.

August 18, 2015 01:09 PM

Jester - Resonaances

Weekend Plot: Inflation'15
The Planck collaboration is releasing new publications based on their full dataset, including CMB temperature and large-scale polarization data. The updated values of the crucial cosmological parameters were already made public in December last year; however, one important new element is the combination of these results with the joint Planck/Bicep constraints on the CMB B-mode polarization. The consequences for models of inflation are summarized in this plot:

It shows the constraints on the spectral index ns and the tensor-to-scalar ratio r of the CMB fluctuations, compared to predictions of various single-field models of inflation. The limits on ns changed slightly compared to the previous release, but the more important progress is along the y-axis. After including the joint Planck/Bicep analysis (in the plot referred to as BKP), the combined limit on the tensor-to-scalar ratio becomes r < 0.08. Equally important, the new limit is much more robust; for example, allowing for a scale dependence of the spectral index relaxes the bound only slightly, to r < 0.10.

The new results have a large impact on certain classes of models. The model with the quadratic inflaton potential, arguably the simplest model of inflation, is now strongly disfavored. Natural inflation, where the inflaton is a pseudo-Goldstone boson with a cosine potential, is in trouble. More generally, the data now favor a concave shape of the inflaton potential during the observable period of inflation; that is to say, it looks more like a hilltop than a half-pipe. A strong player emerging from this competition is R^2 inflation which, ironically, is the first model of inflation ever written. That model is equivalent to an exponential shape of the inflaton potential, V=c[1-exp(-a φ/MPl)]^2, with a=sqrt(2/3) in the exponent. A wider range of the exponent a can also fit the data, as long as a is not too small. If your favorite theory predicts an exponential potential of this form, it may be a good time to work on it. However, one should not forget that other shapes of the potential are still allowed, for example a similar exponential potential without the square, V ~ 1-exp(-a φ/MPl), a linear potential V~φ, or more generally any power-law potential V~φ^n with the power n≲1. At this point, the data do not significantly favor one or the other. The next wave of CMB polarization experiments should clarify the picture. In particular, R^2 inflation predicts 0.003 < r < 0.005, which should be testable in the not-so-distant future.
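For reference, in the slow-roll approximation the R^2 (Starobinsky) model predicts, for N e-folds of observable inflation (standard textbook expressions, quoted here for convenience),

n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2},

so for N = 50-60 one gets n_s ≈ 0.96-0.97 and r ≈ 0.003-0.005, which is the range quoted above and sits comfortably inside the new Planck/BKP contour.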

Planck's inflation paper is here.

by Jester (noreply@blogger.com) at August 18, 2015 12:20 PM

Jester - Resonaances

How long until it's interesting?
Last night, for the first time, the LHC collided particles at the center-of-mass energy of 13 TeV. Routine collisions should follow early in June. The plan is to collect 5-10 inverse femtobarn (fb-1) of data before winter comes, adding to the 25 fb-1 from Run-1. It's high time to dust off your Madgraph and tool up for what may be the most exciting time in particle physics in this century. But when exactly should we start getting excited? When should we start friending LHC experimentalists on facebook? When is the time to look over their shoulders for a glimpse of gluinos popping out of the detectors? One simple way to estimate the answer is to calculate the luminosity at which the number of particles produced at 13 TeV exceeds the number produced during the whole of Run-1. This depends on the ratio of the production cross sections at 13 and 8 TeV, which is of course strongly dependent on the particle's mass and production mechanism. Moreover, the LHC discovery potential will also depend on how the background processes change, and on a host of other experimental issues. Nevertheless, let us forget for a moment about the fine print, and calculate the ratio of 13 and 8 TeV cross sections for a few particles popular among the general public. This will give us a rough estimate of the threshold luminosity when things should get interesting.
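The thresholds quoted below follow from nothing more than requiring the 13 TeV sample to contain as many events of a given type as the 8 TeV one:

\sigma_{13}\,L_{13} \gtrsim \sigma_{8}\,L_{8}
\;\;\Rightarrow\;\;
L_{13}^{\rm thresh} \approx \frac{25\ {\rm fb}^{-1}}{\sigma_{13}/\sigma_{8}},

so, for example, a cross-section ratio of 2.3 for the Higgs boson gives 25/2.3 ≈ 11 fb-1, in line with the rounded numbers in the list.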

  • Higgs boson: Ratio≈2.3; Luminosity≈10 fb-1.
    Higgs physics will not be terribly exciting this year, with only a modest improvement of the coupling measurements expected. 
  • tth: Ratio≈4; Luminosity≈6 fb-1.
    Nevertheless, for certain processes involving the Higgs boson the improvement may be a bit faster. In particular, the theoretically very important process of Higgs production in association with top quarks (tth) was on the verge of being detected in Run-1. If we're lucky, this year's data may tip the scale and provide evidence for a non-zero top Yukawa coupling. 
  • 300 GeV Higgs partner: Ratio≈2.7; Luminosity≈9 fb-1.
    Not much hope for new scalars in the Higgs family this year.  
  • 800 GeV stops: Ratio≈10; Luminosity≈2 fb-1.
    800 GeV is close to the current lower limit on the mass of a scalar top partner decaying to a top quark and a massless neutralino. In this case, one should remember that backgrounds also increase at 13 TeV, so the progress will be a bit slower than what the above number suggests. Nevertheless,  this year we will certainly explore new parameter space and make the naturalness problem even more severe. Similar conclusions hold for a fermionic top partner. 
  • 3 TeV Z' boson: Ratio≈18; Luminosity≈1.2 fb-1.
    Getting interesting! Limits on Z' bosons decaying to leptons will be improved very soon; moreover, in this case background is not an issue.  
  • 1.4 TeV gluino: Ratio≈30; Luminosity≈0.7 fb-1.
    If all goes well, better limits on gluinos can be delivered by the end of the summer! 
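The logic behind these thresholds is just a ratio: the 13 TeV sample starts to outproduce Run-1 once the collected luminosity exceeds roughly 25 fb-1 divided by the cross-section ratio. Here is a minimal sketch of that back-of-the-envelope estimate, reusing the ratios quoted in the list above (the numbers are copied from the list, not recomputed with any cross-section tool):

```python
# Back-of-the-envelope threshold: the 13 TeV dataset outproduces Run-1 once
#   L13 * sigma13 > 25 fb^-1 * sigma8   =>   L13 > 25 fb^-1 / (sigma13 / sigma8)
RUN1_LUMI = 25.0  # fb^-1 collected at 8 TeV

ratios = {                      # sigma(13 TeV) / sigma(8 TeV), from the list above
    "Higgs boson": 2.3,
    "ttH": 4,
    "300 GeV Higgs partner": 2.7,
    "800 GeV stops": 10,
    "3 TeV Z'": 18,
    "1.4 TeV gluino": 30,
}

for particle, ratio in ratios.items():
    print(f"{particle:25s} threshold ~ {RUN1_LUMI / ratio:.1f} fb^-1")
```

The outputs land close to the threshold luminosities quoted above; the small differences presumably come from rounding of the quoted ratios and thresholds.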

In summary, the progress will be very fast for new heavy particles. In particular, for gluon-initiated production of TeV-scale particles, the first inverse femtobarn may already bring us into new territory. For lighter particles the progress will be slower, especially when backgrounds are difficult. On the other hand, precision physics, such as the Higgs coupling measurements, is unlikely to be in the spotlight this year.

by Jester (noreply@blogger.com) at August 18, 2015 12:20 PM

Jester - Resonaances

Weekend Plot: Higgs mass and SUSY
This weekend's plot shows the region in the stop mass and mixing space of the MSSM that reproduces the measured Higgs boson mass of 125 GeV:



Unlike in the Standard Model, in the minimal supersymmetric extension of the Standard Model (MSSM) the Higgs boson mass is not a free parameter; it can be calculated given all the masses and couplings of the supersymmetric particles. At the lowest order, it is equal to the Z boson mass of 91 GeV (for large enough tanβ). To reconcile the predicted and the observed Higgs mass, one needs to invoke large loop corrections due to supersymmetry breaking. These are dominated by the contribution of the top quark and its two scalar partners (stops), which couple to the Higgs more strongly than any other particles. As can be seen in the plot above, the stop mass preferred by the Higgs mass measurement is around 10 TeV. With a little bit of conspiracy, if the mixing between the two stops is just right, this can be lowered to about 2 TeV. In any case, this means that, as long as the MSSM is the correct theory, there is little chance to discover the stops at the LHC.
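A rough feel for these numbers comes from the well-known leading one-loop approximation to the MSSM Higgs mass in the large-tanβ limit, m_h^2 ≈ m_Z^2 + (3 m_t^4)/(4π^2 v^2)[ln(M_S^2/m_t^2) + X_t^2/M_S^2 (1 − X_t^2/(12 M_S^2))], where M_S is the average stop mass and X_t the stop mixing parameter. The Python sketch below evaluates only this crude leading-log formula (with the running top mass plugged in), not the SUSYHD or FeynHiggs calculations discussed here, and the inputs are illustrative assumptions.

```python
import math

# Leading one-loop, large-tan(beta) approximation to the MSSM Higgs mass.
# A crude leading-log estimate only -- not a precision calculation.
MZ = 91.2      # GeV, Z boson mass
MT = 163.0     # GeV, running top mass m_t(m_t), used here for illustration
V  = 246.0     # GeV, Higgs vacuum expectation value

def higgs_mass(MS, Xt):
    """m_h in GeV for an average stop mass MS and stop mixing Xt (both in GeV)."""
    loop = (3 * MT**4) / (4 * math.pi**2 * V**2) * (
        math.log(MS**2 / MT**2) + (Xt**2 / MS**2) * (1 - Xt**2 / (12 * MS**2)))
    return math.sqrt(MZ**2 + loop)

print(higgs_mass(MS=10_000, Xt=0))                    # no mixing: ~125 GeV
print(higgs_mass(MS=2_000, Xt=math.sqrt(6) * 2_000))  # maximal mixing: ~124 GeV
```

Even this simple estimate reproduces the qualitative message of the plot: without stop mixing one needs stops around 10 TeV, while near maximal mixing (X_t ≈ √6 M_S) roughly 2 TeV suffices; the careful effective-theory calculation in SUSYHD shifts the details but not the conclusion.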

This conclusion may be surprising because previous calculations were painting a more optimistic picture. The results above are derived with the new SUSYHD code, which utilizes effective field theory techniques to compute the Higgs mass in the presence of  heavy supersymmetric particles. Other frequently used codes, such as FeynHiggs or Suspect, obtain a significantly larger Higgs mass for the same supersymmetric spectrum, especially near the maximal mixing point. The difference can be clearly seen in the plot to the right (called the boobs plot by some experts). Although there is a  debate about the size of the error as estimated by SUSYHD, other effective theory calculations report the same central values.

by Jester (noreply@blogger.com) at August 18, 2015 12:18 PM

Jester - Resonaances

Weekend plot: dark photon update
Here is a late weekend plot with new limits on the dark photon parameter space:

The dark photon is a hypothetical massive spin-1 boson mixing with the ordinary photon. The minimal model is fully characterized by just two parameters: the mass mA' and the mixing angle ε. This scenario is probed by several experiments using completely different techniques. It is interesting to observe how quickly the experimental constraints have been improving in recent years. The latest update appeared a month ago thanks to the NA48 collaboration. NA48/2 was an experiment at CERN a decade ago devoted to studying CP violation in kaons. Kaons can decay to neutral pions, and the latter can be recycled into a nice probe of dark photons. Most often, a π0 decays to two photons. If the dark photon is lighter than 135 MeV, one of the photons can mix into an on-shell dark photon, which in turn can decay into an electron and a positron. Therefore, NA48 analyzed the π0 → γ e+ e- decays in their dataset. Such pion decays also occur in the Standard Model, with an off-shell photon instead of a dark photon in the intermediate state. However, the presence of the dark photon would produce a peak in the invariant mass spectrum of the e+ e- pair on top of the smooth Standard Model background. Failure to see a significant peak allows one to set limits on the dark photon parameter space; see the dripping-blood region in the plot.
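The size of the effect NA48 is looking for can be sketched with the leading-order estimate commonly used in dark photon phenomenology, BR(π0 → γ A') ≈ 2 ε^2 (1 − mA'^2/mπ0^2)^3 × BR(π0 → γγ), with the A' then decaying to e+ e-. The snippet below simply evaluates that formula; the formula and the sample numbers are quoted for illustration and are not taken from the NA48 analysis itself.

```python
# Leading-order estimate of the exotic pion branching ratio into a dark photon:
#   BR(pi0 -> gamma A') ~ 2 eps^2 (1 - mA'^2/mpi0^2)^3 * BR(pi0 -> gamma gamma)
M_PI0 = 134.98       # MeV, neutral pion mass
BR_PI0_GG = 0.988    # BR(pi0 -> gamma gamma)

def br_pi0_gamma_darkphoton(eps, mA):
    """Branching ratio of pi0 -> gamma A' for a dark photon mass mA (MeV) below mpi0."""
    if mA >= M_PI0:
        return 0.0
    return 2 * eps**2 * (1 - mA**2 / M_PI0**2)**3 * BR_PI0_GG

# Example: a 50 MeV dark photon with eps = 1e-3, roughly the post-NA48 limit
print(br_pi0_gamma_darkphoton(1e-3, 50.0))   # ~ 1e-6
```

Branching ratios this tiny are accessible only because NA48/2 collected an enormous sample of tagged π0 decays, which is part of what makes a kaon experiment a competitive dark photon probe.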

So, another cute experiment bites into the dark photon parameter space. After this update, one can robustly conclude that the mixing angle in the minimal model has to be less than 0.001 as long as the dark photon is lighter than 10 GeV. This is by itself not very revealing, because there is no theoretically preferred value of ε or mA'. However, one interesting consequence of the NA48 result is that it closes the window where the minimal model could explain the 3σ excess in the muon anomalous magnetic moment.

by Jester (noreply@blogger.com) at August 18, 2015 12:18 PM

Tommaso Dorigo - Scientificblogging

First CMS Physics Result At 13 TeV: Top Quarks
Twenty years have passed since the first observation of the top quark, the last of the collection of six that constitutes the matter of which atomic nuclei are made. And in these twenty years particle physics has made some quite serious leaps forward; the discovery that neutrinos oscillate and have mass (albeit a tiny one), and the discovery of the Higgs boson are the two most important ones to cite. Yet the top quark remains a very interesting object to study at particle colliders.


by Tommaso Dorigo at August 18, 2015 08:24 AM

The n-Category Cafe

A Wrinkle in the Mathematical Universe

Of all the permutation groups, only $S_6$ has an outer automorphism. This puts a kind of ‘wrinkle’ in the fabric of mathematics, which would be nice to explore using category theory.

For starters, let $Bij_n$ be the groupoid of $n$-element sets and bijections between these. Only for $n = 6$ is there an equivalence from this groupoid to itself that isn’t naturally isomorphic to the identity!

This is just another way to say that only $S_6$ has an outer automorphism.

And here’s another way to play with this idea:

Given any category $X$, let $Aut(X)$ be the category where objects are equivalences $f : X \to X$ and morphisms are natural isomorphisms between these. This is like a group, since composition gives a functor

$$\circ : Aut(X) \times Aut(X) \to Aut(X)$$

which acts like the multiplication in a group. It’s like the symmetry group of $X$. But it’s not a group: it’s a ‘2-group’, or categorical group. It’s called the automorphism 2-group of $X$.

By calling it a 2-group, I mean that $Aut(X)$ is a monoidal category where all objects have weak inverses with respect to the tensor product, and all morphisms are invertible. Any pointed space has a fundamental 2-group, and this sets up a correspondence between 2-groups and connected pointed homotopy 2-types. So, topologists can have some fun with 2-groups!

Now consider $Bij_n$, the groupoid of $n$-element sets and bijections between them. Up to equivalence, we can describe $Aut(Bij_n)$ as follows. The objects are just automorphisms of $S_n$, while a morphism from an automorphism $f : S_n \to S_n$ to an automorphism $f' : S_n \to S_n$ is an element $g \in S_n$ that conjugates one automorphism to give the other:

$$f'(h) = g f(h) g^{-1} \qquad \forall h \in S_n$$

So, if all automorphisms of $S_n$ are inner, all objects of $Aut(Bij_n)$ are isomorphic to the unit object, and thus to each other.

Puzzle 1. For $n \ne 6$, all automorphisms of $S_n$ are inner. What are the connected pointed homotopy 2-types corresponding to $Aut(Bij_n)$ in these cases?

Puzzle 2. The permutation group $S_6$ has an outer automorphism of order 2, and indeed $Out(S_6) = \mathbb{Z}_2$. What is the connected pointed homotopy 2-type corresponding to $Aut(Bij_6)$?

Puzzle 3. Let $Bij$ be the groupoid where objects are finite sets and morphisms are bijections. $Bij$ is the coproduct of all the groupoids $Bij_n$ where $n \ge 0$:

$$Bij = \sum_{n = 0}^\infty Bij_n$$

Give a concrete description of the 2-group $Aut(Bij)$, up to equivalence. What is the corresponding pointed connected homotopy 2-type?

You can get a bit of intuition for the outer automorphism of $S_6$ using something called the Tutte–Coxeter graph.

Let $S = \{1,2,3,4,5,6\}$. Of course the symmetric group $S_6$ acts on $S$, but James Sylvester found a different action of $S_6$ on a 6-element set, which in turn gives an outer automorphism of $S_6$.

To do this, he made the following definitions:

• A duad is a 2-element subset of $S$. Note that there are ${6 \choose 2} = 15$ duads.

• A syntheme is a set of 3 duads forming a partition of $S$. There are also 15 synthemes.

• A synthematic total is a set of 5 synthemes partitioning the set of 15 duads. There are 6 synthematic totals.

Any permutation of $S$ gives a permutation of the set $T$ of synthematic totals, so we obtain an action of $S_6$ on $T$. Choosing any bijection between $S$ and $T$, this in turn gives an action of $S_6$ on $S$, and thus a homomorphism from $S_6$ to itself. Sylvester showed that this is an outer automorphism!
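If you like checking such counts by brute force, the duads, synthemes and synthematic totals are small enough to enumerate directly. Here is a short Python sketch (the helper names are my own); it also prints how the transposition (1 2) permutes the 6 totals, which is the seed of Sylvester's construction.

```python
from itertools import combinations

S = range(1, 7)

# Duads: 2-element subsets of S.  There are C(6,2) = 15.
duads = [frozenset(d) for d in combinations(S, 2)]

# Synthemes: 3 disjoint duads partitioning S.  There are 15.
synthemes = [frozenset(t) for t in combinations(duads, 3)
             if len(frozenset().union(*t)) == 6]

# Synthematic totals: 5 synthemes using each of the 15 duads exactly once.  There are 6.
totals = [frozenset(c) for c in combinations(synthemes, 5)
          if len(frozenset().union(*c)) == 15]

print(len(duads), len(synthemes), len(totals))   # 15 15 6

def act(perm, total):
    """Apply a permutation of S (given as a dict) to every number in a total."""
    return frozenset(frozenset(frozenset(perm[x] for x in duad) for duad in syn)
                     for syn in total)

swap12 = {1: 2, 2: 1, 3: 3, 4: 4, 5: 5, 6: 6}
print([totals.index(act(swap12, t)) for t in totals])
# The transposition (1 2) acts on the 6 totals as a product of three disjoint
# transpositions; since inner automorphisms preserve cycle type, the resulting
# automorphism of S_6 cannot be inner.
```

Running it confirms the counts 15, 15 and 6 quoted above.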

There’s a way to draw this situation. It’s a bit tricky, but Greg Egan has kindly done it:

Here we see 15 small red blobs: these are the duads. We also see 15 larger blue blobs: these are the synthemes. We draw an edge from a duad to a syntheme whenever that duad lies in that syntheme. The result is a graph called the Tutte–Coxeter graph, with 30 vertices and 45 edges.

The 6 concentric rings around the picture are the 6 synthematic totals. A band of color appears in one of these rings near some syntheme if that syntheme is part of that synthematic total.

If we draw the Tutte–Coxeter graph without all the decorations, it looks like this:

The red vertices come from duads, the blue ones from synthemes. The outer automorphism of $S_6$ gives a symmetry of the Tutte–Coxeter graph that switches the red and blue vertices!

The inner automorphisms, which correspond to elements of $S_6$, also give symmetries: for each element of $S_6$, the Tutte–Coxeter graph has a symmetry that permutes the numbers in the picture. These symmetries map red vertices to red ones and blue vertices to blue ones.

The group $\mathrm{Aut}(S_6)$ has

$$2 \times 6! = 1440$$

elements, coming from the $6!$ inner automorphisms of $S_6$ and the outer automorphism of order 2. In fact, $\mathrm{Aut}(S_6)$ is the whole symmetry group of the Tutte–Coxeter graph.

For more on the Tutte–Coxeter graph, see my post on the AMS-hosted blog Visual Insight:

by john (baez@math.ucr.edu) at August 18, 2015 08:24 AM

August 17, 2015

Symmetrybreaking - Fermilab/SLAC

Dark Energy Survey finds more celestial neighbors

The observation of new dwarf galaxy candidates could mean our sky is more crowded than we thought.

Scientists on the Dark Energy Survey, using one of the world’s most powerful digital cameras, have discovered eight more faint celestial objects hovering near our Milky Way galaxy. Signs indicate that they—like the objects found by the same team earlier this year—are likely dwarf satellite galaxies, the smallest and closest known form of galaxies.

Satellite galaxies are small celestial objects that orbit larger galaxies, such as our own Milky Way. Dwarf galaxies can contain fewer than 1000 stars, in contrast to the Milky Way, an average-sized galaxy containing billions of stars. Scientists have predicted that larger galaxies are built from smaller galaxies, which are thought to be especially rich in dark matter, the substance that makes up about 25 percent of the total matter and energy in the universe. Dwarf satellite galaxies, therefore, are considered key to understanding dark matter and the process by which larger galaxies form.

The main goal of the Dark Energy Survey, as its name suggests, is to better understand the nature of dark energy, the mysterious stuff that makes up about 70 percent of the matter and energy in the universe. Scientists believe that dark energy is the key to understanding why the expansion of the universe is speeding up. To carry out its dark energy mission, DES is taking snapshots of hundreds of millions of distant galaxies. However, some of the DES images also contain stars in dwarf galaxies much closer to the Milky Way. The same data can therefore be used to probe both dark energy, which scientists think is driving galaxies apart, and dark matter, which is thought to hold galaxies together.

Scientists can only see the nearest dwarf galaxies, since they are so faint, and had previously found only a handful. If these new discoveries are representative of the entire sky, there could be many more galaxies hiding in our cosmic neighborhood.

“Just this year, more than 20 of these dwarf satellite galaxy candidates have been spotted, with 17 of those found in Dark Energy Survey data,” says Alex Drlica-Wagner of Fermi National Accelerator Laboratory, one of the leaders of the DES analysis. “We’ve nearly doubled the number of these objects we know about in just one year, which is remarkable.”

In March, researchers with the Dark Energy Survey and an independent team from the University of Cambridge jointly announced the discovery of nine of these objects in snapshots taken by the Dark Energy Camera, the extraordinary instrument at the heart of the DES, an experiment funded by the DOE, the National Science Foundation and other funding agencies. Two of those have been confirmed as dwarf satellite galaxies so far.

Prior to 2015, scientists had located only about two dozen such galaxies around the Milky Way.

“DES is finding galaxies so faint that they would have been very difficult to recognize in previous surveys,” says Keith Bechtol of the University of Wisconsin-Madison. “The discovery of so many new galaxy candidates in one-eighth of the sky could mean there are more to find around the Milky Way.”

The closest of these newly discovered objects is about 80,000 light years away, and the furthest roughly 700,000 light years away. These objects are, on average, around a billion times dimmer than the Milky Way and a million times less massive. The faintest of the new dwarf galaxy candidates has about 500 stars.

Most of the newly discovered objects are in the southern half of the DES survey area, in close proximity to the Large Magellanic Cloud and the Small Magellanic Cloud. These are the two largest satellite galaxies associated with the Milky Way, about 158,000 light years and 208,000 light years away, respectively. It is possible that many of these new objects could be satellite galaxies of these larger satellite galaxies, which would be a discovery by itself.

“That result would be fascinating,” says Risa Wechsler of SLAC National Accelerator Laboratory. “Satellites of satellites are predicted by our models of dark matter. Either we are seeing these types of systems for the first time, or there is something we don’t understand about how these satellite galaxies are distributed in the sky.”

Since dwarf galaxies are thought to be made mostly of dark matter, with very few stars, they are excellent targets to explore the properties of dark matter. Further analysis will confirm whether these new objects are indeed dwarf satellite galaxies, and whether signs of dark matter can be detected from them.

The 17 dwarf satellite galaxy candidates were discovered in the first two years of data collected by the Dark Energy Survey, a five-year effort to photograph a portion of the southern sky in unprecedented detail. Scientists have now had a first look at most of the survey area, but data from the next three years of the survey will likely allow them to find objects that are even fainter, more diffuse or farther away. The third survey season has just begun.

“This exciting discovery is the product of a strong collaborative effort from the entire DES team,” says Basilio Santiago, a DES Milky Way Science Working Group coordinator and a member of the DES-Brazil Consortium. “We’ve only just begun our probe of the cosmos, and we’re looking forward to more exciting discoveries in the coming years.” 


This article is based on a Fermilab press release.

 


August 17, 2015 02:29 PM

August 15, 2015

ATLAS Experiment

BOOST Outreach and Particle Fever

Conferences like BOOST, which my colleagues Cristoph and Tatjana have written about already, are designed to bring physicists together to think about the latest results in the field. When you put 100 experts from around the world into a room for a week, you get a fantastic picture of the state of the art in searches for new physics and measurements of the Standard Model. But it turns out there’s another great use for conferences: they’re an opportunity to talk to the general public about the work we do. The BOOST committee organized a discussion and screening of the movie “Particle Fever” on Monday, and I think it was an enormously successful event!

particle_fever_06

For those who haven’t seen it, Particle Fever is excellent. It is the story of the discovery of the Higgs Boson, and its consequences on the myriad of theories that we are searching for at the LHC. It presents the whole experience of construction, commissioning, turning on, and operating the experiments, from the perspective of experimentalists and theorists, young and old. People love it – it has a 95% rating on Rotten Tomatoes – and nearly all my colleagues loved it as well. It’s rare to find a documentary that both experts and the public enjoy, so this is indeed a real achievement!

Getting back to BOOST, not only did we have a screening of the movie, but also a panel discussion where people could ask questions about the movie and about physics in general. One question that an audience member asked was really quite excellent. He asked why physicists think that movies like Particle Fever, and events like this public showing, were important. Why did we go to the trouble of booking a room, organizing people, and spending hours of our day on a movie we’ve already seen many times before? And let’s not forget that David Kaplan, a physicist and a producer of the movie, spent several years of his life on this project full time. He essentially gave up research for a few years in order to make a movie about doing research – not an easy task for a professor!

So why do we do it? Why is Particle Fever important?

The answer, to me, is that we have a responsibility to share what we know about the Universe. We study the fundamental nature of the Universe not so that we as individuals can understand more, but so that humanity as a whole understands more. On an experiment as big as ATLAS, you quickly become extremely aware of how much you depend on the work and experience of others, and on the decades and centuries of scientists who came before us. Doing science means contributing to this shared knowledge. And while some details may only be important to a few individuals (not many people are going to care about the derivation of the jet energy scale via numerical inversion), the big picture is something that everyone can appreciate.

And Particle Fever helps with that. The movie is funny, smart, and understandable – all things that we strive to be as science communicators, but which we sometimes fail at. Every particle physicist owes David Kaplan and the director Mark Levinson a tremendous debt, because they have done such a tremendous job of communicating the excitement of fundamental knowledge and discovery. CERN has always sought to unite Europe, and the world, through a quest for understanding, and Particle Fever helps the rest of the world join us on that quest.

Conferences like BOOST are a great time to focus on the details of our work, but they’re also an opportunity to consider how our physics relates to the rest of the world, and how best we can communicate our understanding. Particle Fever has made me realize just how much work it takes to do a really wonderful job, and I’m extremely happy that such a great guide is available to the public. With any luck, we’ll have more movies coming out soon about the discovery of supersymmetry and extra dimensions!


Max is a postdoctoral fellow at the Enrico Fermi Institute at the University of Chicago, and has worked on ATLAS since 2009. His work focuses on searches for supersymmetry, an exciting potential extension of the Standard Model which predicts many new particles the Large Hadron Collider might be able to produce. When not colliding particles, he can be found cycling, hiking, going to concerts, or playing board games.

by swiatlow at August 15, 2015 10:11 PM