Particle Physics Planet


October 15, 2018

Emily Lakdawalla - The Planetary Society Blog

A Joyless 'First Man'
As a space fan, I enjoyed the movie for its depictions of the experience of spaceflight itself. But as a person, this is not a film I will often return to. There is a richness and complexity of human experience beyond what is depicted here, and ignoring the awe and joy of exploration in a misguided effort to create emotional drama undermines the film's ability to capture a person rather than just an emotional landscape as uninviting as the Moon.

October 15, 2018 07:38 PM

Clifford V. Johnson - Asymptotia

Mindscape Interview!

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

I hope you enjoy it!!

-cvj Click to continue reading this post

The post Mindscape Interview! appeared first on Asymptotia.

by Clifford at October 15, 2018 06:47 PM

The n-Category Cafe

Topoi of G-sets

I’m thinking about finite groups these days, from a Klein geometry perspective where we think of a group \(G\) as a source of \(G\)-sets. Since the category of \(G\)-sets is a topos, this lets us translate concepts, facts and questions about groups into concepts, facts and questions about topoi. I’m not at all good at this, so here are a bunch of basic questions.

For any group \(G\) the category of \(G\)-sets is a Boolean topos, which means basically that its internal logic obeys the principle of excluded middle.

  • Which Boolean topoi are equivalent to the category of \(G\)-sets for some group \(G\)?

  • Which are equivalent to the category of \(G\)-sets for a finite group \(G\)?

It might be easiest to start by characterizing the categories of \(G\)-sets where \(G\) is a groupoid, and then add an extra condition to force \(G\) to be a group.

The category \(G\mathrm{Set}\) comes with a forgetful functor \(U \colon G\mathrm{Set} \to \mathrm{Set}\).

  • Is the group of natural automorphisms of \(U\) just \(G\)?

This should be easy to check; I’m just feeling lazy. If some result like this is true, how come people talk so much about the Tannaka–Krein reconstruction theorem and not so much about this simpler thing? (Maybe it’s just too obvious.)
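(For what it’s worth, here is a quick sketch of the standard argument, added for convenience and not part of the original post; the usual caveat about getting \(G\) versus \(G^{op}\) depending on conventions applies. The forgetful functor is representable by the regular \(G\)-set:

\[ U \;\cong\; \mathrm{hom}_{G\mathrm{Set}}(G, -), \]

since an equivariant map out of \(G\) is determined by where it sends the identity. The Yoneda lemma then gives \(\mathrm{Nat}(U,U) \cong \mathrm{hom}_{G\mathrm{Set}}(G,G)\), whose elements are the right translations by elements of \(G\); all of them are invertible, so the natural automorphisms of \(U\) do form a copy of \(G\).)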

Whenever we have a homomorphism \(f \colon H \to G\) we get an obvious functor

\[ f^\ast \colon G\mathrm{Set} \to H\mathrm{Set} \]

This is part of an essential geometric morphism, which means that it has both a right and a left adjoint. By this means we can actually get a 2-functor from the 2-category of groups (yeah, it’s a 2-category since groups can be seen as one-object categories) to the 2-category \(\mathrm{Topos}_{ess}\) consisting of topoi, essential geometric morphisms and natural transformations. If I’m reading the \(n\)Lab correctly, this exhibits the topoi of the form \(G\mathrm{Set}\) as a full sub-2-category of \(\mathrm{Topos}_{ess}\). This makes it all the more interesting to know which topoi are equivalent to categories of \(G\)-sets.
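For readers who want the two adjoints spelled out, here is the standard Kan-extension description (my addition, not something stated in the post): for \(f \colon H \to G\), the restriction functor \(f^\ast\) has a left adjoint \(f_!\) (induction) and a right adjoint \(f_\ast\) (coinduction),

\[ f_!(X) \;=\; G \times_H X, \qquad f_\ast(X) \;=\; \mathrm{hom}_H(G, X), \]

where \(G \times_H X\) is \(G \times X\) modulo \((g f(h), x) \sim (g, h x)\), and \(\mathrm{hom}_H(G,X)\) is the set of \(H\)-equivariant maps from \(G\) (with \(H\) acting through \(f\)) to \(X\), with \(G\) acting by translation in both cases. The adjoint triple \(f_! \dashv f^\ast \dashv f_\ast\) is exactly what makes the geometric morphism essential.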

  • What properties characterize essential geometric morphisms of the form \(i^\ast \colon G\mathrm{Set} \to H\mathrm{Set}\) when \(i \colon H \to G\) is the inclusion of a subgroup?

Whenever we have this, we get a transitive \(G\)-set \(G/H\), which is thus a special object in \(G\mathrm{Set}\). These objects are just the atoms in \(G\mathrm{Set}\): that is, the objects whose only subobjects are themselves and the initial object. Indeed \(G\mathrm{Set}\) is an atomic topos, meaning that every object is a coproduct of atoms. That’s just a fancy way of saying that every \(G\)-set can be broken into orbits, which are transitive \(G\)-sets.
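As a toy illustration of that last point (my own sketch, not from the post), here is the orbit decomposition computed for a small made-up example, with the group given concretely as the full list of its permutations:

    # Decompose a finite G-set into orbits, i.e. into a coproduct of "atoms".
    # `group` is assumed to be the complete list of elements of G, each element
    # given as a tuple g with g[i] = g(i); the example group and action are
    # invented purely for illustration.

    def orbits(group, points):
        """Return the orbit decomposition of `points` under `group`."""
        remaining = set(points)
        decomposition = []
        while remaining:
            seed = remaining.pop()
            orbit = {g[seed] for g in group}  # one transitive G-set: an atom
            decomposition.append(sorted(orbit))
            remaining -= orbit
        return decomposition

    # Example: the cyclic group C_3 acting on {0,...,5}, cycling 0 -> 1 -> 2
    # and fixing 3, 4 and 5.
    C3 = [(0, 1, 2, 3, 4, 5), (1, 2, 0, 3, 4, 5), (2, 0, 1, 3, 4, 5)]
    print(orbits(C3, range(6)))  # four atoms: [0, 1, 2], [3], [4], [5] (in some order)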

Next:

  • What properties characterize essential geometric morphisms of the form \(i^\ast \colon G\mathrm{Set} \to H\mathrm{Set}\) when \(i \colon H \to G\) is the inclusion of a normal subgroup?

In this case \(G/H\) is a group with a surjection \(p \colon G \to G/H\), so we get another topos \((G/H)\mathrm{Set}\) and essential geometric morphisms

\[ \mathrm{Set} \longrightarrow (G/H)\mathrm{Set} \stackrel{p^\ast}{\longrightarrow} G\mathrm{Set} \stackrel{i^\ast}{\longrightarrow} H\mathrm{Set} \longrightarrow \mathrm{Set} \]

  • What properties characterize essential geometric morphisms of the form \(p^\ast\) for \(p\) a surjective homomorphism of groups?

  • Is there a concept of ‘short exact sequence’ of essential geometric morphisms such that the above sequence is an example?

Well, my questions could go on all day, but this is enough for now!

by john (baez@math.ucr.edu) at October 15, 2018 05:31 PM

Christian P. Robert - xi'an's og

ABC intro for Astrophysics

Today I received in the mail a copy of the short book published by edp sciences after the courses we gave last year at the astrophysics summer school, in Autrans. Which contains a quick introduction to ABC extracted from my notes (which I still hope to turn into a book!). As well as a longer coverage of Bayesian foundations and computations by David Stenning and David van Dyk.

by xi'an at October 15, 2018 05:18 PM

Peter Coles - In the Dark

The Big Bang Exploded?

I suspect that I’m not the only physicist who receives unsolicited correspondence from people with wacky views on Life, the Universe and Everything. Being a cosmologist, I probably get more of this stuff than those working in less speculative branches of physics. Because I’ve written a few things that appeared in the public domain, I probably even get more than most cosmologists (except the really famous ones of course).

Many “alternative” cosmologists have now discovered email, and indeed the comments box on this blog, but there are still a lot who send their ideas through regular post. Whenever I get an envelope with an address on it that has been typed by an old-fashioned typewriter it’s a dead giveaway that it’s going to be one of those. Sometimes they are just letters (typed or handwritten), but sometimes they are complete manuscripts often with wonderfully batty illustrations. I remember one called Dark Matter, The Great Pyramid and the Theory of Crystal Healing. I used to have an entire filing cabinet filled with things like this, but I took the opportunity of moving from Cardiff some time ago to throw most of them out.

One particular correspondent started writing to me after the publication of my little book, Cosmology: A Very Short Introduction. This chap sent a terse letter to me pointing out that the Big Bang theory was obviously completely wrong. The reason was obvious to anyone who understood thermodynamics. He had spent a lifetime designing high-quality refrigeration equipment and therefore knew what he was talking about (or so he said). He even sent me a booklet about his ideas, which for some reason I have neglected to send for recycling.

His point was that, according to the Big Bang theory, the Universe cools as it expands. Its current temperature is about 3 Kelvin (−270 Celsius or thereabouts), and it is still expanding and cooling. Turning the clock back gives a Universe that was hotter when it was younger. He thought this was all wrong.
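For context, here is the standard textbook statement of the claim he was objecting to (my addition, not something in his letter): the temperature of the cosmic radiation scales inversely with the cosmological scale factor \(a(t)\),

\[ T(t) \;\propto\; \frac{1}{a(t)}, \]

so a plasma at roughly 3000 K at recombination, when the Universe was about 1100 times smaller in linear scale than it is today, has by now cooled to the roughly 2.7 K background we observe.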

The argument is false, my correspondent asserted, because the Universe – by definition – hasn’t got any surroundings and therefore isn’t expanding into anything. Since it isn’t pushing against anything it can’t do any work. The internal energy of the gas must therefore remain constant and since the internal energy of an ideal gas is only a function of its temperature, the expansion of the Universe must therefore be at a constant temperature (i.e. isothermal, rather than adiabatic). He backed up his argument with bona fide experimental results on the free expansion of gases.

I didn’t reply and filed the letter away. Another came, and I did likewise. Increasingly overcome by some form of apoplexy, he wrote letters that got ruder and ruder, eventually blaming me for the decline of the British education system and demanding that I be fired from my job. Finally, he wrote to the President of the Royal Society demanding that I be “struck off” and forbidden (on grounds of incompetence) ever to teach thermodynamics in a University. The copies of the letters he sent me are still with the pamphlet.

I don’t agree with him that the Big Bang is wrong, but I’ve never had the energy to reply to his rather belligerent letters. However, I think it might be fun to turn this into a little competition, so here’s a challenge for you: provide the clearest and most succinct explanation of why the temperature of the expanding Universe does fall with time, despite what my correspondent thought.

Answers via the comment box please!

by telescoper at October 15, 2018 02:35 PM

Peter Coles - In the Dark

Especially when the October Wind

Especially when the October wind
With frosty fingers punishes my hair,
Caught by the crabbing sun I walk on fire
And cast a shadow crab upon the land,
By the sea’s side, hearing the noise of birds,
Hearing the raven cough in winter sticks,
My busy heart who shudders as she talks
Sheds the syllabic blood and drains her words.

Shut, too, in a tower of words, I mark
On the horizon walking like the trees
The wordy shapes of women, and the rows
Of the star-gestured children in the park.
Some let me make you of the vowelled beeches,
Some of the oaken voices, from the roots
Of many a thorny shire tell you notes,
Some let me make you of the water’s speeches.

Behind a pot of ferns the wagging clock
Tells me the hour’s word, the neural meaning
Flies on the shafted disk, declaims the morning
And tells the windy weather in the cock.
Some let me make you of the meadow’s signs;
The signal grass that tells me all I know
Breaks with the wormy winter through the eye.
Some let me tell you of the raven’s sins.

Especially when the October wind
(Some let me make you of autumnal spells,
The spider-tongued, and the loud hill of Wales)
With fists of turnips punishes the land,
Some let me make you of the heartless words.
The heart is drained that, spelling in the scurry
Of chemic blood, warned of the coming fury.
By the sea’s side hear the dark-vowelled birds.

by Dylan Thomas (1914-1953)

 

by telescoper at October 15, 2018 11:31 AM

Emily Lakdawalla - The Planetary Society Blog

Imaging the Earth from Lunar orbit
Radio amateurs around the world worked together to take an image of the Earth and the far side of the Moon.

October 15, 2018 11:00 AM

October 14, 2018

Christian P. Robert - xi'an's og

the invasion of the American cheeses

Part of the new Nafta agreement between the USA and its neighbours, Canada and Mexico, is lifting restrictions on the export of American cheeses to these countries. Having tasted high quality cheeses from Québec on my last visit to Montréal, and having yet to find similar performances in a US cheese, I looked at the list of cheeses involved in the agreement, only to discover a collection of European cheeses that should be protected by AOC rules under EU regulations (and only attributed to cheeses produced in the original regions):

Brie [de Meaux or de Melun?]
Burrata [di Andria?]
Camembert [missing the de Normandie to be AOC]
Coulommiers [actually not AOC!]
Emmenthal [which should be AOC Emmentaler Switzerland!]
Pecorino [all five Italian varieties being PDO]
Provolone [both Italian versions being PDO]

Plus another imposition that British Columbia wines be no longer segregated from US wines in British Columbia! Which sounds somewhat absurd if wines like those from (BC) Okanagan Valley or (Washington) Walla Walla are to be enjoyed with some more subtlety than diet coke. Owning a winery apparently does not necessarily require such subtlety!

by xi'an at October 14, 2018 10:18 PM

Clifford V. Johnson - Asymptotia

Futuristic Podcast Interview

For your listening pleasure: I've been asked to do a number of longer interviews recently. One of these was for the "Futuristic Podcast of Mark Gerlach", who interviews all sorts of people from the arts (normally) over to the sciences (well, he hopes to do more of that starting with me). Go and check out his show on iTunes. The particular episode with me can be found as episode 31. We talk about a lot of things, from how people get into science (including my take on the nature vs nurture discussion), through the changes in how people get information about science to the development of string theory, to black holes and quantum entanglement - and a host of things in between. We even talked about The Dialogues, you'll be happy to hear. I hope you enjoy listening!

(The picture? Not immediately relevant, except for the fact that I did cycle to the place the recording took place. I mostly put it there because I was fixing my bike not long ago and it is good to have a photo in a post. That is all.)

-cvj Click to continue reading this post

The post Futuristic Podcast Interview appeared first on Asymptotia.

by Clifford at October 14, 2018 07:22 PM

Peter Coles - In the Dark

All The Girls Go Crazy

I came across this on Youtube a while ago and I quite often play it when I’m at work if I’m in need of an aural pick-me-up when I’m flagging a bit. The tune All The Girls Go Crazy is one of many manifestations of a 16-bar blues theme that was fairly ubiquitous in New Orleans Jazz. The recording is by a band led by Ken Colyer, who I think is on cornet rather than trumpet on this track, but the Youtube poster gives no other information about the personnel or the date. I’m going to stick my neck out and say that the clarinettist sounds to me like Ian Wheeler and the drummer is without doubt Colin Bowden, one of the very best drummers in this style that the UK has ever produced. If I’m right then I think the date is somewhere around the mid-1950s, at the peak of the New Orleans revival in the UK. No doubt some other jazz fan out there will correct me if I’m wrong!

Ken Colyer (`the guvnor’) had very firm ideas about how New Orleans music should be performed, and you’ll notice that there’s much more ensemble work here than you find in the typical string-of-solos approach adopted by many `Trad’ bands of the period.

I’m going to look very silly if it’s not Colin Bowden on drums here, but for me he (or whoever else is the drummer) is the star of this performance, as it is he who is responsible for the steadily increasing sense of momentum, achieved without speeding up (which is the worst thing a rhythm section can do). Notice how he signals the end of each set of 8 bars with a little figure on the tom-toms and/or a cymbal crash, and it is by increasing the strength of these that he raises the excitement level. Notice also that he has the last word with his cymbal, something jazz drummers are wont to do.

P.S. If you look here, you’ll see a certain Peter Coles playing alongside Ken Colyer in the 1970s. It’s not me, though. It’s my uncle Peter…

by telescoper at October 14, 2018 05:06 PM

Peter Coles - In the Dark

Union Matters

The above collection of goodies arrived last week in a Welcome Pack from the Irish Federation of University Teachers (IFUT), my new trade union. I sent in an application to join some time ago, and was getting a bit worried that it might have been lost, but then confirmation arrived in the form of my membership card along with a pen, a badge, a lanyard, an application form for a Credit Union and various other bits and bobs. It’s only by standing together that academics in Irish universities have any hope of exerting enough pressure on the Government to get it to reverse the persistent underfunding of Higher Education in this country. Even then it won’t be easy – last week’s budget had nothing whatsoever in it for universities or students.

Incidentally, according to the online budget calculator, I’ll be a princely €28 per month better off next year as a result of small changes in taxation, but it seems to me that the priority should have been to help the less well off and it failed to do that. No doubt, however, the cautious approach to public finances shown by the Government is largely down to the uncertain effects of Brexit.

While I am on about unions, some of the readers of this blog will recall that I was participating in industrial action by UCU (the Universities and Colleges Union) in the UK earlier this year in relation to proposed cuts to pensions in the Universities Superannuation Scheme (USS). I have since left that scheme, deferring my benefits from it until I retire, but I couldn’t resist passing on a link to an article I read yesterday, which argues that USS’s valuation (which resulted in a deficit) rests on a large and demonstrable mistake, and that when this is corrected there is no deficit as at 31 March 2018 and no need for detrimental changes to benefits or contributions.

Could it be that all that pain was caused by an accounting error? If so, then heads should roll!

by telescoper at October 14, 2018 02:46 PM

October 13, 2018

John Baez - Azimuth

Category Theory Course

I’m teaching a course on category theory at U.C. Riverside, and since my website is still suffering from reduced functionality I’ll put the course notes here for now. I taught an introductory course on category theory in 2016, but this one is a bit more advanced.

The hand-written notes here are by Christian Williams. They are probably best seen as a reminder to myself as to what I’d like to include in a short book someday.

Lecture 1: What is pure mathematics all about? The importance of free structures.

Lecture 2: The natural numbers as a free structure. Adjoint functors.

Lecture 3: Adjoint functors in terms of unit and counit.

Lecture 4: 2-Categories. Adjunctions.

Lecture 5: 2-Categories and string diagrams. Composing adjunctions.

Lecture 6: The ‘main spine’ of mathematics. Getting a monad from an adjunction.

Lecture 7: Definition of a monad. Getting a monad from an adjunction. The augmented simplex category.

by John Baez at October 13, 2018 11:35 PM

Peter Coles - In the Dark

Bax, Vaughan Williams & Potter at the NCH.

Last night I was once again at the National Concert Hall in Dublin for a concert by the RTÉ National Symphony Orchestra, this time conducted by Kenneth Montgomery. I took the above picture about five minutes before the start of the concert and, although a few more people arrived before the music began, it was a very low attendance. I don’t think the hall was more than 20% full. I’m not sure why. Perhaps Storm Callum made it difficult for some to make the journey to Dublin? I was delayed a bit on the way there from Maynooth, but I’m glad I made it because it was a fine concert.

I always appreciate it when unfamiliar works are programmed alongside more standard repertoire, and last night provided a good example of that. One piece was an established favourite among concert-goers, another I have on CD but have never heard live, and one I had never heard before at all.

The opening piece was In Memoriam by Arnold Bax. Although considered by many to be an archetypal English composer, Bax had a strong affinity for Ireland and indeed lived here from 1911 until the outbreak of war in 1914. I’ve always felt Bax’s music was greatly influenced by Sibelius, but he was very interested in Celtic culture and that comes across in his In Memoriam, which is built around a very folk-like melody. The work was composed to honour Pádraig Pearse, one of the leaders of the 1916 uprising, who was subsequently executed by the British authorities, and was written in the immediate aftermath of the rebellion. It is a very fine piece, in my opinion, starting in a rather elegiac mood, but with passages that celebrate Pearse’s life rather than mourn his death, and the ending is very moving, like a beautiful sunset.

There was then a short delay while various rearrangements were made on stage. Off went the wind instruments and percussion, and into the space vacated by their departure moved a subset of the string instruments, creating a second (smaller) string orchestra separated from the remaining musicians. In addition, the principals of the relevant sections arranged themselves to form a string quartet around the conductor’s podium. If you didn’t know before reading this what was about to be played, then that description will no doubt have led you to conclude that it must be the Fantasia on a Theme by Thomas Tallis by Ralph Vaughan Williams. This is an evergreen concert piece, for good reason, and the string players of the RTÉ National Symphony Orchestra delivered a very fine account of it. I remarked on the fine playing of the string section after the last concert I attended at the NCH, and they did it again.

After the interval was a piece I had never heard before, the Sinfonia “De Profundis” by Belfast-born A.J. `Archie’ Potter, composed fifty years ago in 1968, and first performed in 1969 by the RTÉ National Symphony Orchestra in Dublin. The title is a reference to Psalm 130, and some of the thematic material comes from liturgical music. In the composer’s own words:

As the title suggests, it is a musical account of one man’s own progress from despair over a particular circumstance in his life to spiritual recovery and (for the time being of course) triumph over the powers of darkness.

Although `a journey from darkness into light’ is a description that could apply to many symphonies (especially those of Beethoven), this work in five movements does not have a typically `symphonic’ structure, in that it is based on variations on a theme drawn from a 16th century carol spread throughout the whole work rather than confined to one movement, alongside another element comprising a `tone row’. The juxtaposition of `traditional’ diatonic and `modernist’ serialist explorations generates tension which is only released at the very end, by the arrival of a new theme borrowed from the `Old 124th’.

That brief description of what is going on in this work doesn’t do justice at all to the impression it creates on the listener, which is of a richly varied set of textures sometimes mournful but sometimes boisterous, with dashes of robust humour thrown in for good measure. I’m not at all familiar with A.J. Potter, but I must hear more of his music. Based on this piece, he was both clever and expressive.

As a bonus we had an orchestral encore in the form of another piece by Archie Potter, much shorter and much lighter. Orchestral encores are rare in the UK, but seem to be less so here in Ireland.

After that I left in order to return to Maynooth. Appropriately enough, in the light of the piece by Bax, I took a train from Pearse station…

by telescoper at October 13, 2018 10:25 AM

Emily Lakdawalla - The Planetary Society Blog

Book Announcement and Excerpt: Astronomy for Kids
For Astronomy Day, Bruce announces his new book Astronomy for Kids, provides excerpts, and gives some bonus planet observing info.

October 13, 2018 07:00 AM

October 12, 2018

Christian P. Robert - xi'an's og

Argentan half-marathon [1:26:03, 22nd/329, 4th V2/63, 15⁰]

Despite failing to reach the podium this year in my V2 category, a predictable first as I get closer to moving to the next category, V3, or “senior grandmaster”, and missing my 1:25 target by a few seconds, I was rather happy with this half-marathon, having run at my pace the whole race, with none of the low passages I had in previous races, instead feeling quite well in the last kilometers, where I left the first female runner behind. The weather was perfect, with no sun, no rain, and no wind in the second half. The three V2 runners in front of me were much faster (1:22, 1:24, and 1:25:21) than those of the previous years, an illustration of the law of small numbers found in these races…

by xi'an at October 12, 2018 10:18 PM

ZapperZ - Physics and Physicists

Time Crystals
Ignoring the theatrics, Don Lincoln's video is the simplest level of explanation you can ask for of what a "time crystal" is, after you strip away the hyperbole.



Zz.

by ZapperZ (noreply@blogger.com) at October 12, 2018 09:22 PM

Emily Lakdawalla - The Planetary Society Blog

How to follow BepiColombo's launch
I’m thrilled to be anticipating the beginning of a new mission to Mercury. Here's a timeline for BepiColombo's planned launch on 20 October (19 October in the U.S.).

October 12, 2018 02:56 PM

Christian P. Robert - xi'an's og

Juan Antonio Cano Sanchez (1956-2018)

I have just learned the very sad news that Juan Antonio Cano, from Universidad de Murcia, with whom Diego Salmerón and I wrote two papers on integral priors, has passed away, after a long fight against a kidney disease. Having communicated with him recently, I am quite shocked by him passing away as I was not aware of his poor health. The last time we met was at the O’Bayes 2015 meeting in Valencià, with a long chat in the botanical gardens of the Universitat de Valencià. Juan Antonio was a very kind and unassuming person, open and friendly, with a continued flow of research in Objective Bayes methodology and in particular on integral priors. Hasta luego, Juan Antonio!

by xi'an at October 12, 2018 09:24 AM

October 11, 2018

Emily Lakdawalla - The Planetary Society Blog

Space station crew safe after failed launch
About two minutes after liftoff, the Soyuz vehicle carrying NASA astronaut Nick Hague and Russian cosmonaut Alexey Ovchinin to orbit failed.

October 11, 2018 04:26 PM

Jon Butterworth - Life and Physics

Playful Explorations
Originally posted on NearcticTraveller:
Atom Land: A Guided Tour Through the Strange (And Impossibly Small) World of Particle Physics by Jon Butterworth. The Experiment. New York. 2018. A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy…

by Jon Butterworth at October 11, 2018 07:28 AM

October 09, 2018

Jon Butterworth - Life and Physics

Boosting boost
Regular readers (hello!) will know that the topics of jet substructure, boosted objects and the annual Boost meeting often feature here, because I work on them and they are interesting and important for physics at the Large Hadron Collider (and … Continue reading

by Jon Butterworth at October 09, 2018 02:40 PM

October 07, 2018

John Baez - Azimuth

Lebesgue Universal Covering Problem (Part 3)

Back in 2015, I reported some progress on this difficult problem in plane geometry. I’m happy to report some more.

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…
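As a quick sanity check of that number (my arithmetic, not part of the post): a regular hexagon circumscribing a circle of diameter 1 has inradius \(r = 1/2\), so its area is

\[ A \;=\; 2\sqrt{3}\, r^2 \;=\; 2\sqrt{3} \cdot \tfrac{1}{4} \;=\; \frac{\sqrt{3}}{2} \;\approx\; 0.86602540. \]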

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3.

The resulting universal covering has area

0.84529946…

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708435…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 · 10⁻¹¹ and 6 · 10⁻¹⁸. This turned out to be wrong. The actual areas are 3.7507 · 10⁻¹¹ and 8.4460 · 10⁻²¹. The resulting universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, in 2015 Philip Gibbs found a way to remove about a million times more area than Hansen’s larger region: a whopping 2.233 · 10⁻⁵. This gave a universal covering with area

0.844115376859…

Karine Bagdasaryan and I helped Gibbs write up a rigorous proof of this result, and we published it here:

• John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Journal of Computational Geometry 6 (2015), 288–299.

Greg Egan played an instrumental role as well, catching various computational errors.

At the time Philip was sure he could remove even more area, at the expense of a more complicated proof. Since the proof was already quite complicated, we decided to stick with what we had.

But this week I met Philip at The philosophy and physics of Noether’s theorems, a wonderful workshop in London which deserves a full blog article of its own. It turns out that he has gone further: he claims to have found a vastly better universal covering, with area

0.8440935944…

This is an improvement of 2.178245 × 10⁻⁵ over our earlier work—roughly equal to our improvement over Hansen.
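Here is a throwaway script (mine, not part of the paper) that checks the two quoted improvements against the area values listed above:

    # Successive upper bounds on the area of a universal covering, as quoted above.
    hansen     = 0.844137708416  # Hansen (1992), with the corrected sliver areas
    gibbs_2015 = 0.844115376859  # Baez-Bagdasaryan-Gibbs (2015)
    gibbs_2018 = 0.8440935944    # Gibbs (2018), the claimed new bound

    print(f"2015 bound vs Hansen: {hansen - gibbs_2015:.6e}")     # ~2.233156e-05
    print(f"2018 bound vs 2015:   {gibbs_2015 - gibbs_2018:.6e}") # ~2.178246e-05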

You can read his argument here:

• Philip Gibbs, An upper bound for Lebesgue’s universal covering problem, 22 January 2018.

I say ‘claims’ not because I doubt his result—he’s clearly a master at this kind of mathematics!—but because I haven’t checked it and it’s easy to make mistakes, for example mistakes in computing the areas of the shapes removed.

It seems we are closing in on the final result; however, Philip Gibbs believes there is still room for improvement, so I expect it will take at least a decade or two to solve this problem… unless, of course, some mathematicians start working on it full-time, which could speed things up considerably.

by John Baez at October 07, 2018 02:08 PM

October 06, 2018

Jon Butterworth - Life and Physics

Book Review — Atom Land: A Guided Tour Through the Strange (and Impossibly Small) World of Particle Physics
Originally posted on Evilcyclist's Blog:
Atom Land: A Guided Tour Through the Strange (and Impossibly Small) World of Particle Physics by Jon Butterworth. Butterworth is a lecture in particle physics at a layman’s level. Butterworth is a physics professor…

by Jon Butterworth at October 06, 2018 06:42 PM

John Baez - Azimuth

Riverside Math Workshop

We’re having a workshop with a bunch of cool math talks at U. C. Riverside, and you can register for it here:

Riverside Mathematics Workshop for Excellence and Diversity, Friday 19 October – Saturday 20 October, 2018. Organized by John Baez, Carl Mautner, José González and Chen Weitao.

This is the first of an annual series of workshops to showcase and celebrate excellence in research by women and other under-represented groups for the purpose of fostering and encouraging growth in the U.C. Riverside mathematical community.

After tea at 3:30 p.m. on Friday there will be two plenary talks, lasting until 5:00. Catherine Searle will talk on “Symmetries of spaces with lower curvature bounds”, and Edray Goins will give a talk called “Clocks, parking garages, and the solvability of the quintic: a friendly introduction to monodromy”. There will then be a banquet in the Alumni Center 6:30 – 8:30 p.m.

On Saturday there will be coffee and a poster session at 8:30 a.m., and then two parallel sessions on pure and applied mathematics, with talks at 9:30, 10:30, 11:30, 1:00 and 2:00. Check out the abstracts here!

(I’m especially interested in Christina Vasilakopoulou’s talk on Frobenius and Hopf monoids in enriched categories, but she’s my postdoc so I’m biased.)

by John Baez at October 06, 2018 07:22 AM

October 05, 2018

ZapperZ - Physics and Physicists

RIP Leon Lederman
One of the most charismatic physicists that I've ever met, former Fermilab Director and Nobel Laureate Leon Lederman, has passed away at the age of 96. Most of the general public will probably not know his name, but will have heard the name "God Particle", which he coined in his book, and which he originally intended to call the "God-Damn Particle".

He had been in failing health, and suffered from dementia. This forced his family to auction off his Nobel Prize medal to help with his medical costs. But his lasting legacy will be in his effort to put "Physics First" in elementary and high schools. And of course, there's Fermilab.

He truly was, and still is, a giant in this field.

Zz.

by ZapperZ (noreply@blogger.com) at October 05, 2018 01:16 PM

October 04, 2018

Lubos Motl - string vacua and pheno

Leon Lederman: 1922-2018
Leon Lederman was a giant of 20th-century experimental particle physics. Sadly, he died on Wednesday in a care center in Idaho, due to complications from dementia (not so shocking at the age of 96).

He was born to a Russian Jewish family in 1922. He was the key man in teams that discovered the neutral \(K\)-mesons (do you remember Feynman's discussion about the two-state Hilbert space of \(K^0\) and \(\bar K^0\) that may be mixed as the superpositions of long-lived and short-lived kaons?), the bottom quark \(b\), and the muon (i.e. second) neutrino.
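For readers who don't recall that Feynman discussion, the standard statement (added here for convenience, and glossing over the small CP violation) is that the states of definite lifetime are the even and odd superpositions of the strangeness eigenstates,

\[ |K_{S,L}\rangle \;\approx\; \frac{|K^0\rangle \pm |\bar K^0\rangle}{\sqrt{2}}, \]

with the sign assignment depending on the phase convention; it is the short-lived \(K_S\) and the long-lived \(K_L\), rather than \(K^0\) and \(\bar K^0\) themselves, that one actually sees decaying.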

For the muon neutrino discovery, he was given the 1988 Nobel prize in physics, along with two other men.



Lederman was a charming guy who was always a never-ending fountain of jokes. As a professor, he supervised some 50 graduate students over the years – none of them went to jail, he bragged.

Also, Lederman was a crucial cheerleader for particle physics. He did the key promotional work that allowed Ronald Reagan to plan and build the Tevatron (the room for superconducting magnets in an existing tunnel was reserved in 1981) – which discovered the top quark \(t\) in 1995. We might say that among the 6+6 lepton and quark flavors (the elementary fermions), he was rather fundamental in the discovery of 4 of them (one-third), namely \(s, b, \nu_\mu, t\).



Leon Lederman has been a huge proponent of physics education – and also the main guy behind the Physics First movement demanding that teenagers be exposed first to physics and only then to e.g. biology.

He was also a great popularizer. His book "The God Particle" described experimental particle physics and coined the laymen's most popular term for the Higgs boson. We often explain the name as saying that "it was the work by an editor" because Lederman originally wanted the title "The Goddamn Particle".

But it would be fake news – and some people promote the fake news – to say that Lederman found the references to religion unacceptable, like some others do. Instead, in one defense of his "The God Particle", he quoted quite a piece from Genesis, like here. These days, I find it obvious (but I already found it likely a decade ago) that the criticisms against the God Particle were driven by left-wing activists' efforts to make any references to religion etc. politically unacceptable within the Academia.

Some of his methods to promote physics were truly creative. A decade ago, he built a booth on the street and was answering physics questions posed by the pedestrians (literally) in Manhattan.

In 2015, Lederman became the second person to sell his Nobel Prize medal; it fetched $765,002. He may have needed some money for the treatment of his dementia, which had just been diagnosed.

However, even from a financial perspective, I think it was a good idea to sell the medal because its value is likely to drop in coming years. It seems that two days ago, the physics Nobel prize was finally hijacked by the identity politics activists and meritocratically oriented people will simply stop watching that award – I have stopped.

You know, it was announced by tons of media in advance that the newest winners "must" include a woman, and it seems that "they" found a laser team where it was possible. She was a 24-year-old accidental member of a team that did the Nobel work. On top of that, out of her less than 9,000 citations, 2/3 are from the papers co-written with Mr Mourou (who has had 30,000+ extra citations at other places) which signals that he, and not she, was the engine in their team. Needless to say, she's presented as a full-blown, if not the superior, winner by the media – but that's not what the hard data say.

Once you suspect that there may be political reasons behind some winners, the problem isn't even limited to the privileged groups such as women. Mr Mourou could have gotten his prize because of his proximity to a woman researcher, too. And there may be other reasons. Fortunately, one would need a lot of concentrated energy to roll in his grave – that's the only reason why Alfred Nobel isn't doing it right now. The last disciplines in which his prize had some meaning are being ruined, too.

But back to Lederman. He has lived in a different epoch when brilliant people living in the West could have been driven by genuine love for science and, without becoming slaves of any political movement, they could have done great things.

RIP Leon Lederman.

by Luboš Motl (noreply@blogger.com) at October 04, 2018 12:06 PM

Jon Butterworth - Life and Physics

Why use a map to tell the story?
The paperback edition of A Map of the Invisible is out now, and to help promote it we made a few videos on some of the themes in the book. Here’s the second one:

by Jon Butterworth at October 04, 2018 10:11 AM

October 03, 2018

The n-Category Cafe

Category Theory 2019

The major annual category theory conference will be held in Edinburgh next year:

Category Theory 2019

University of Edinburgh

7-13 July 2019

Organizing committee: Steve Awodey, Richard Garner, Chris Heunen, Tom Leinster, Christina Vasilakopoulou.

As John has just pointed out, this is followed two days later by the Applied Category Theory conference and school in Oxford, very conveniently for anyone wishing to go to both.

by leinster (Tom.Leinster@gmx.com) at October 03, 2018 10:33 PM

October 02, 2018

The n-Category Cafe

Applied Category Theory 2019

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019. Here’s a ‘pre-announcement’.

More details will come later, but here’s some good news: it’s right after the big annual worldwide category theory conference, which is in Edinburgh in 2019. So, conference-hopping category theorists can attend both!

Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference. This means that we will accept both new papers which then will be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance is something that means something, just like in CS conferences. This is particularly important for young people as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year. As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,

John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

by john (baez@math.ucr.edu) at October 02, 2018 05:57 PM

John Baez - Azimuth

Applied Category Theory 2019

 

animation by Marius Buliga

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019.

More details will come later, but here’s the basic idea. If you’re a grad student interested in this subject, you should apply for the ‘school’. Not yet—we’ll let you know when.

Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference. This means that we will accept both new papers which then will be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance is something that means something, just like in CS conferences. This is particularly important for young people as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year. As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,

John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

by John Baez at October 02, 2018 04:11 PM

CERN Bulletin

Public meetings

Do you have questions about the 2018 MERIT exercise or career evolution? Are you keen to hear about the Staff Association's upcoming projects?

Come inform yourself and ask your questions during our public meetings.

These public meetings are also an opportunity to get the latest information on the current issues.

Don't miss this opportunity to get the latest news and to discuss with the representatives of the statutory body that is the Staff Association!

October 02, 2018 02:10 PM

CERN Bulletin

Interview with Ghislain Roy, President of the Staff Association

For the 300th issue of Echo, Ghislain Roy, the current President of the Staff Association, answers our questions…

'Ghislain, a physicist by training, joined CERN in 1992 as a fellow. He was hired in 1993 as a staff member in the accelerator operations group, SL/OP, where he had the pleasure of working as engineer in charge and then as coordinator of LEP operation. With the shutdown of LEP, Ghislain moved on to become Radiation Safety Officer (RSO) and then Departmental Safety Officer (DSO) in the accelerator department, AB and later BE. Among other things, he took part in setting up the LHC safety and access systems. Just before the LHC's "Long Shutdown 1", Ghislain returned briefly to the accelerator operations group and then joined the beam physics group (BE/ABP), where he contributed to various projects, such as the project to convert the heavy-ion accelerator LEIR into an injector for a biomedical station to measure the effectiveness of different types of light ions in cancer treatment; this study, known as BioLEIR, was ultimately not pursued. Already a delegate and member of the Executive Committee of the Staff Association from 2001 to 2004, Ghislain decided in 2015, in view of the direction taken by the five-yearly review then under way, to return to the Association as a delegate, with the declared motivation of getting involved in order to serve and to carry weight. First a member of the Executive Committee and secretary of the Bureau of the Staff Association, he ran for the presidency of the Association in 2016 and was elected, with Catherine Laverrière as Vice-President.'

ECHO: Could you remind us what the atmosphere was like at the Staff Association when you became President?

GR: The decisions taken during the last five-yearly review, with an overhaul of the career structure and a marked slowdown in advancement, had split the Association into two camps and left fairly deep disagreements within it that were harming its proper functioning. At the time, the governance of the Staff Association was rather presidential. As soon as I arrived, I wanted to put an end to that idea. In my view, the Staff Association must be democratic, and its decision-making power lies with the Staff Council, the President being elected by the Staff Council and bound by its decisions.

ECHO: How does the Staff Association work today?

GR: Finding people willing to stand for election as staff delegates is difficult. The situation is still delicate today, but the last election of staff delegates was nevertheless a success. The Staff Council was largely renewed in 2017. The people who joined the Association have varied profiles, which today gives us a Staff Council that is more representative of all of CERN's professions, with a balance between engineers and technicians and also the arrival of four fellows. The Staff Council is solid, mature and very diligent. There was a surge of interest at these last elections. The Staff Council should normally represent the whole of CERN, all categories and all nationalities combined. In the future, it would also be good to be able to attract users and associated members of the personnel, both as members of the Association and as staff delegates.

ECHO: Issue number 1 of ECHO, published in 2006, ran under the headline 'The rupture'. At the time, it referred to a breakdown in the concertation between the Staff Association and the CERN management of the day, led by Robert Aymar. Could you give us your view of the concertation mechanism?

GR: The concertation mechanism is very rarely used in the world of employer-employee relations in general, but at CERN it is at the heart of the relationship between the CERN management and the personnel represented by the Staff Association. To work properly, concertation requires good faith and trust on both sides. It must allow an exchange without taboos, with the sole aim of striving to find a solution that is satisfactory to both parties and in the best interest of the Organization. Concertation is not negotiation, concertation is not co-management, and concertation is not consultation! The Staff Association acts as a source of proposals in the discussion, in order to develop the best possible proposal for CERN and its personnel. In general, in the concertation process an alignment of interests emerges, insofar as the Management and the Staff Association share the common goal of ensuring CERN's overall success. The final decision is always taken by the Director-General, the Finance Committee or the CERN Council.

ECHO: What is your opinion of the status of CERN employees?

GR: From a very personal point of view, I am very attached to the status of the international civil servant, in which the overriding interests are those of CERN's mission, whether scientific, technological or educational. This goes beyond the interests of any individual State and must be seen in the long term. Employment conditions at CERN are good. In science, this is a model that should be copied, not weakened. We must, of course, adapt to a changing world, but while remaining creative and keeping this idea of a higher interest of our Organization. The temptation to reduce and cut, which may seem attractive in the short term, is often more destructive than constructive in the long run.

One change that is close to my heart, because I experienced it for many years in the accelerator operations teams, would be to put more emphasis on the collective interest, on the interest and performance of the team, which should take precedence over individual interests and individual performance, which lead people to focus too much on the progression of their own careers. I regret that this is not more present in the way performance is assessed through MERIT today.

ECHO: Thinking about the future, what is your vision of the challenges CERN will have to face?

GR: For several decades, CERN has been THE world research centre for high-energy physics. This has had a major impact on the evolution of CERN's overall population, with the number of users exploding. Today, CERN works well overall: the personnel are motivated, the performance of our facilities is excellent and the scientific results are plentiful. All of this is achieved with a number of staff members that has remained almost constant for 20 years. On the other hand, the number of students, fellows and project associates has risen sharply, in proportion to the number of projects we are working on. But this situation is now becoming difficult to sustain.

When we think about the future, with large-scale projects that would go beyond the physical boundaries of today's CERN, new challenges will have to be faced. But it will not be the first time CERN has had to rise to such challenges. The construction of the Prévessin site in the 1970s, and later the expansion of CERN to the various LEP and LHC points, raised similar questions at the time. I am confident in CERN's ability to find solutions that will preserve the unity of the Organization, of its site and of its personnel, while guaranteeing the commitment and motivation of the personnel that are the source of its strength and success. The Staff Association will of course be there to propose solutions along these lines.

ECHO: In 2006, the first issue of ECHO also marked a break in the Staff Association's communication, which until then had gone through the CERN Bulletin. What is your opinion on the subject?

GR: The split that took place in 2006 between the Bulletin published by the Management and the Staff Association's part, which became "Echo", is in my opinion not a good thing. The Staff Association is not a trade union, and its mode of interaction with the management is not opposition but concertation. We are all pulling the same wheel and in the same direction, even if our opinions may occasionally diverge. I would prefer to return to a configuration in which the Staff Association's communication is given a place in the CERN Bulletin again, rather than being completely separate from it. That would also be a good sign that the current Management has indeed abandoned the views held in 2006 by Robert Aymar's management on the question of concertation.

ECHO: Finally, what message would you like to send to the CERN personnel?

GR: I would like to make an appeal, an appeal to spark the interest of everyone present at CERN in getting involved in the social life of the Organization, and more generally in its political life, in the Greek sense of the term, that is, the life of the City in general. Whether through the clubs, in activities of general interest, by being a guide, by serving on one of the many joint committees (Reclassification, Discipline, etc.) or within the Staff Association. For employed members of the personnel, this Organization is not just an employer: it is your State, the one that also provides your social security and, in due course, your pension. For associated members of the personnel, it is not just a host laboratory but a community of interest in which you can carry weight through your opinions and your vision.

Get involved in the life of CERN, and join us!

 

The English version of this article will be published in the next Echo.

October 02, 2018 02:10 PM

CERN Bulletin

Offer for our members

Our partner FNAC is offering all our members a 15% discount on all headphones and speakers (except Devialet and Apple).

This offer is valid from October 1st to October 31st, 2018, upon presentation of your Staff Association membership card.

 

October 02, 2018 01:10 PM

CERN Bulletin

GAC-EPA

The GAC organizes sessions with individual interviews, which take place on the last Tuesday of each month, except in July and December.

The next session will be held on:

Tuesday 30 October, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following session will take place on Tuesday 27 November 2018.

The sessions of the Groupement des Anciens (GAC) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

October 02, 2018 01:10 PM

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

October 02, 2018 01:10 PM

ZapperZ - Physics and Physicists

2018 Nobel Prize in Physics ... FINALLY, after 55 years!
I seriously thought that I'd never see this in my lifetime, and I'm terribly happy that I was wrong!

The 2018 Nobel Prize in Physics has just been announced, and for the first time in more than 50 years, one of the winners is a woman!

The Nobel Prize in Physics 2018 was awarded “for groundbreaking inventions in the field of laser physics” with one half to Arthur Ashkin “for the optical tweezers and their application to biological systems”, the other half jointly to Gérard Mourou and Donna Strickland “for their method of generating high-intensity, ultra-short optical pulses”.

Congratulations to all, and especially to Donna Strickland.

I will admit that this wasn't something I expected. I didn't realize that the area of ultra-short laser pulses was on the Nobel Committee's and the nominators' radar. But it is still very nice that this area of laser pulse-shaping techniques is being recognized.

Zz.

by ZapperZ (noreply@blogger.com) at October 02, 2018 12:56 PM

October 01, 2018

Jon Butterworth - Life and Physics

The Strumion. And on.
As many of you will know (pay attention at the back) some theory guy said some exciting stuff at CERN and they have, as usual, suppressed his amazing discovery just like they did with those faster-than-light neutrinos and the fact … Continue reading

by Jon Butterworth at October 01, 2018 08:21 PM

Lubos Motl - string vacua and pheno

Physics was invented and built by men
Activists at CERN turned an excerpt from Sexmission into reality

CERN has updated the statement to say that all Strumia's CERN ties were suspended, at least during the ongoing Inquisition trial ("investigation of the conference"). I was hoping it wouldn't happen but I was prepared to see that it would happen. What do you want to investigate, idiots? Strumia has made some elementary and some elaborate comments about women in physics and a bunch of brain-dead wannabe fascists and mental cripples found the truth inconvenient. That's it.

However, things are much better in Italy where Alessandro is primarily employed. The rector of the University of Pisa Paolo Mancarella (IT), after he got some complaints from the totalitarian cultural Marxists and after he looked at the 26 slides, refused to start ethical proceedings against Strumia. That looks better although something will be "investigated" over there by an ethical committee, too. But maybe the page says something else and Mr Mancarella doesn't really speak English.

Poles are our Western Slavic cousins. They generally love us, Czechs, more than we love them. (We're their #1 favorite foreigners, but the reverse isn't true.) They're great but I surely don't think that they're good e.g. in the sense of humor. (See my answer to What Poles do better than Czechs and vice versa.)



You need to click at a link and play the video outside TRF.

However, I became a great fan of a 1983 or 1984 Polish cult film, the sci-fi comedy named Sexmission. Max and Albert, two men from the 1980s, volunteer to (earn some bucks and) undergo a hibernation experiment (designed by Prof Wiktor Kuppelweiser). There's a war (whose special weapon selectively attacks men) and they are only woken up in a relatively distant future (well, 2042) in which no men are alive anymore. The rest of mankind – purely women – live in a totalitarian society underground while their propaganda says that radioactivity makes it impossible to live on the surface.



The ideology of the totalitarian society is feminism: "man is your enemy", all of the obedient girls and women shout all the time. In 1983, feminism was not a sensitive political topic at all (I think that the number of feminists in Poland is close to zero even today) so people watched it as pure fiction.

If you want to make the filmmakers look courageous, the totalitarian feminism may be considered a hidden satire against the totalitarian communism in Poland – which would end 5 years later. However, I think that these "metaphors" are not so clear. The filmmakers could have also claimed that it was a satire directed against some trends in the Western society of the 1980s.

Well, it wouldn't really be too accurate because the West was still alright in the 1980s. But it would be extremely apt to consider Sexmission a satire mocking the Western Europe (and North America) of 2018. In fact, the writers of the film almost accurately predicted what the West would look like just 35 years later.



I embedded a 5-minute excerpt with English subtitles at the top. Max, the direct and more ordinary man, and Albert, the thinner, shy, and more intellectual guy, are asked to sign a document that they were born men against their will, they want to declare all their previous male lives as non-existent, and they would undergo naturalization (castration).

They laugh and refuse the offer.

A big courtroom opens above their heads with the tribunal of ladies. The female "researchers" are split into two camps. One of them wants to castrate the guys, the other one wants to kill them – after some experiments are made on them.

Just to be sure, Her Excellency, the leader of the female civilization, turns out to be a man at the end, an impotent one, who could have pretended he was female (from his childhood) and he became the alpha female. It's like in the joke "What is the smartest cell in a woman's body?" – "The sperm."

Max and Albert see that they're in trouble – just like Alessandro Strumia does right now. It must be some organization. Nevertheless, they start to explain to them that the whole history is the history of men. There would have been no progress without men. The women want examples. Albert mentions Copernicus and Einstein. They respond: Copernicus was a woman, Einstein was a woman, and so on. Max gets upset and screams: "And so was Marie Curie, wasn't he?"

Well, that wasn't the best example, Albert tells Max.

But you can see that the feminist organization that was deliberately exaggerated back in 1983 – to make the movie more comical – actually reacted more calmly to Albert's comment that the whole history and science is the history of men. The real-world feminists of 2018 – even those that have something to do with CERN, a global center of the hardest scientific discipline (a 13-TeV-hard one) that is expected to be very rational – react more insanely than the exaggerated fictitious feminists from a 1983 movie!

Three decades ago, every kid and every adult knew that history, science, and technology, among other things, were overwhelmingly created by men. Everyone agreed that only really stupid and uneducated people – uneducated at the level of retarded kids from the kindergarten – could disagree with this innocent proposition.

Now, in 2018, the idiocy of not knowing this basic kindergarten fact has not only become tolerable in some environments. These environments actually love to harass everybody who dares to know what has been a matter of common sense for centuries and millenniums. Strumia has said many things but the left-wing activist journalists love to pick the statement that "physics was invented and built by men".

It's insane. Even if you consider yourself moderate and even if you think that your humble correspondent is more involved in this business, you simply have to help to stop these radical loons that began to conquer every influential industry and structure within the Western society. If you fail to help all sensible people to stop this mad cow disease of cultural Marxism, you will pay dearly for your laziness, too. All of us will. The whole mankind will.

In particular, I urge the Italian government to threaten the Italian exit from CERN (Italy pays some CHF 117 million a year) unless CERN officially apologizes to Prof Alessandro Strumia and restores all his access to projects at CERN. It's a moral duty of the Italian government to defend the basic civic rights of its citizens against foreign and international organizations that don't respect the basic rules of the Western civilization.

P.S.: The Sexmission continues with a nice defense of the old world order by Albert – women were standing on the pedestal, poets were writing poems for them etc. One woman is particularly on their side, the blonde and wise Lamia (she stopped using the pills against sexual desire so she's been horny, fell in love with Max, and had also understood, from her grandma, that the old world had been a better place, too).

She agrees to try to leave the underground dystopia along with the men – who, like typical heroic men, prefer freedom, even if it meant just 2 weeks of freedom (given the radioactivity and oxygen reserves).

On the surface of Earth, they of course find out that the scaremongering about the radioactivity is exactly as untrue as the global warming hysteria today. The catastrophic-looking world on the surface was just a painting, too. There's no dangerous radioactivity there – they realize this once they see the first stork. They find a 20th century house there ("it looks like a blockhouse," Lamia cutely said!) – it's the house where the leader (who is actually male) spends a part of his life. (He also played a key role in maintaining the lies about the radioactivity that is incompatible with life – it's easier to control the ladies if they stay underground. The similarity of the "purpose of this fearmongering" to that of the global warming hysteria in the real world is self-evident.)

The movie ends with a happy ending. They have fun in the bedroom with Lamia Reno and a former apparatchik (Emma Dax) who came to catch them and who is disgusted by the propaganda tactics of the feminist regime after both women have already been declared dead on TV. The leader, after they unmask him, agrees to share the house. They won't report him to the women. Max and Albert think about how to save the whole civilization. They ultimately place their sperm into lots of the test tubes in the factory producing girls. The final screenshot of the movie shows the first newborn boy's penis – after it scares a worker in the factory. ;-)

You know, that movie could have been interpreted as a satire in various ways. The Polish communist authorities could have found reasons to ban it. However, it seems to me that the movie would be much more likely to get banned in the contemporary Western Europe and North America – there is probably less freedom in these ex-beacons of the free civilization than in the communist Poland of 1983.

If you Google search for "sexmission watch full movie" without the quotes, you will find 3 parts of the movie with English subtitles at Daily Motion.

by Luboš Motl (noreply@blogger.com) at October 01, 2018 08:05 PM

Clifford V. Johnson - Asymptotia

Diverse Futures

I was asked by editors of the magazine Physics World's 30th anniversary edition to do a drawing that somehow captures changes in physics over the last 30 years, and looks forward to 30 years from now. This was an interesting challenge. There was not anything like the freedom to use space that I had in other works I've done, like my graphic book about science "The Dialogues", or my glimpse of the near future in my SF story "Resolution" in the Twelve Tomorrows anthology. I had over 230 pages for the former, and 20 pages for the latter. Here, I had one page. Well, actually a little over 2/3 of a page (once you take into account the introductory text, etc).

So I thought about it a lot. The editors wanted to show an active working environment, and so I thought about the interiors of labs for some time, looked up lots of physics breakthroughs over the years, and reflected on what might come. I eventually realized that the most important single change in the science that can be visually depicted (and arguably the single most important change of any kind) is the change that's happened to the scientists. Most importantly, we've become more diverse in various ways (not uniformly across all fields though), much more collaborative, and the means by which we communicate in order to do science have expanded greatly. All of this has benefited the science greatly, and I think that if you were to get a time machine and visit a lab 30 years ago, or 30 years from now, it will be the changes in the people that will most strike you, if you're paying attention. So I decided to focus on the break/discussion area of the lab, and imagined that someone stood in the same spot each year and took a snapshot. What we're seeing is those photos tacked to a noticeboard somewhere, and that's our time machine. Have a look, and keep an eye out for various details I put in to reflect the different periods. Enjoy! (Direct link here, and below I've embedded the image itself that's from the magazine. I recommend reading the whole issue, as it is a great survey of the last 30 years.)

Physics World Illustration showing snapshots in time by Clifford V. Johnson

-cvj Click to continue reading this post

The post Diverse Futures appeared first on Asymptotia.

by Clifford at October 01, 2018 06:00 PM

September 30, 2018

Lubos Motl - string vacua and pheno

Nasty SJWs persuade spineless CERN officials to start an inquisition trial against an Italian scientist
The victim "dared" to say that women aren't isomorphic to men when he was asked

Galileo Galilei, the Italian founder of the scientific method as we know it, has been a target of the Roman Catholic Inquisition trials between 1610 and 1633 – mostly because of his heliocentric "heresies".



Those Inquisition folks should have gone extinct, shouldn't they? Sadly, four centuries later, the contamination of the intellectual institutions by this garbage that is violently opposed to the Academic freedoms and any kind of honest research that is inconvenient for the powerful has exceeded anything that could have been seen in the 17th century.

On Friday, the 1st Workshop on High Energy Theory and Gender took place at CERN, the Center of Europe for the Research of Nuclei [sic]. Thankfully, an Italian scientist who has actually thought about the problem – as well as the phenomenological particle physics where he has accumulated 30,939 citations according to INSPIRE so far (41,772 at Google Scholar), a real star (that you may sometimes meet in the blogosphere, anyway) – was invited to give a talk, too:
Experimental test of a new global discrete symmetry

Scheduled title: Bibliometrics data about gender issues in fundamental theory
The aforementioned "symmetry" is the non-existent symmetry (or spontaneously broken symmetry, an alternative explanation the speaker considers) between men and women. The talk is full of graphs and evidence that the scientific institutions are heavily biased against men and have lost much of meritocracy. I won't mention the name of the Italian professor. Why? Because I want to make it harder for additional members of that toxic movement to go after his or her neck and about 70% of feminists and similar unfriendly mammals don't have a powerful enough brain to find the name of the speaker.



I recommend that you go through the 26 slides because they're wonderfully on-topic, although they are often elementary and sometimes plagued by minor errors. They elaborate on lots of points and there are some calculations. At the same time, there are certain prerequisites and methods in the "review" part of the talk that every scientist should be obliged to understand.

It was a conference promoting "equal opportunities" of genders but for some reason (that all of us understand very well, of course), all 11 physics talks at the conference were delivered by women.



In the insane contemporary social atmosphere, the reaction could have been predicted. You should figure out the name of the speaker and search for that surname on Twitter (try influential tweets as well as the recent ones). An amazingly hostile, brain-dead fascist mob of parasites within high-energy physics – who haven't contributed as much as the Italian scientist even if you combine their contributions – has gathered out of a stinky dumping ground and begun to plan methods to harm the speaker personally.

The irony that the victim of the new Inquisition trial is an accomplished Italian scientist hasn't discouraged them, not even by epsilon. And you know, Galileo did his gravitational experiments in Pisa. Our victim of the postmodern Inquisition is also affiliated with the University of Pisa. I could probably go on...

Today, CERN has issued a totally shocking
Statement: CERN stands for diversity
where some officials who didn't have the courage to sign their names seem to speak on behalf of almost all the European countries. Cheap filth, I assure you that over 95% of the citizens of the Czech Republic squarely stand on the side of the Italian scientist and against you. It's absolutely outrageous that you abused the name of my country to "sign" this disgusting piece.

I know lots of names of the fascist bullies who plan to start terror against the Italian scientist. To preserve the very existence of science, you need to be totally eliminated from the institutional science, scumbags.



Women in science can do much more than Galileo did in science – unless there's a catch, of course.

Incidentally, you shouldn't be surprised that all traces of the Italian scientist have been erased from the conference website. The Italian scientist has been retroactively disinvited from the conference after he has delivered the talk, the only talk that has made any sense over there! It's the classic The Commissar Vanishes all over again.

The slides have been removed by the Stalinists to make sure that no one can find a hyperlink pointing to the content of the talk again and no one can download it or read it.

It is not clear who wrote the despicable Fatwa – that cripples the good name of CERN and makes it look like the twin sister of Daesh – but the Director General Ms Fabiola Gianotti simply has to be held responsible for outrageous abuses of the CERN website against an individual respected member of the community that take place under her watch.

If she personally allowed the CERN press releases to be abused in this way, I have a message for her: If you believe that you can't follow in the footsteps of those who were tried in Nuremberg, you are recklessly optimistic about your fate. CERN has to be cleaned from this culturally Marxist junk and if it turns out that it's impossible to do so, CERN has to be euthanized. I am ready to ask our PM – whom I don't like too much – to save some money by exiting CERN. Not that Czechs matter over there. But we could save some money and he will agree.

by Luboš Motl (noreply@blogger.com) at September 30, 2018 08:07 PM

September 29, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

History of Physics at the IoP

This week saw a most enjoyable conference on the history of physics at the Institute of Physics in London. The IoP has had an active subgroup in the history of physics for many years, complete with its own newsletter, but this was the group's first official workshop for a long while. It proved to be a most enjoyable and informative occasion; I hope it is the first of many to come.


The Institute of Physics at Portland Place in London (made famous by writer Ian McEwan in the novel ‘Solar’, as the scene of a dramatic clash between a brilliant physicist of questionable integrity and a Professor of Science Studies)

There were plenty of talks on what might be called ‘classical history’, such as Maxwell, Kelvin and the Inverse Square law of Electrostatics (by Isobel Falconer of the University of St. Andrews) and Newton’s First Law – a History (by Paul Ranford of University College London), while the more socially-minded historian might have enjoyed talks such as Psychical and Optical Research; Between Lord Rayleigh’s Naturalism and Dualism (by Gregory Bridgman of the University of Cambridge) and The Paradigm Shift of the Physics-Religion-Unbelief Relationship from the Renaissance to the 21st Century (by Elisabetta Canetta of St Mary’s University). Of particular interest to me were a number of excellent talks drawn from the history of 20th century physics, such as A Partial History of Cosmic Ray Research in the UK (by the leading cosmic ray physicist Alan Watson), The Origins and Development of Free-Electron Lasers in the UK (by Elaine Seddon of Daresbury Laboratory), When Condensed Matter became King (by Joseph Martin of the University of Cambridge), and Symmetries: On Physical and Aesthetic Argument in the Development of Relativity (by Richard Staley of the University of Cambridge). The official conference programme can be viewed here.

My own talk, Interrogating the Legend of Einstein’s “Biggest Blunder”, was a brief synopsis of our recent paper on this topic, soon to appear in the journal Physics in Perspective. Essentially our finding is that, despite recent doubts about the story, the evidence suggests that Einstein certainly did come to view his introduction of the cosmological constant term to the field equations as a serious blunder and almost certainly did declare the term his “biggest blunder” on at least one occasion. Given his awareness of contemporaneous problems such as the age of the universe predicted by cosmologies without the term, this finding has some relevance to those of today’s cosmologists who seek to describe the recently-discovered acceleration in cosmic expansion without a cosmological constant. The slides for the talk can be found here.

I must admit I missed a trick at question time. Asked about other  examples of ‘fudge factors’ that were introduced and later regretted, I forgot the obvious one. In 1900, Max Planck suggested that energy transfer between oscillators somehow occurs in small packets or ‘quanta’ of energy in order to successfully predict the spectrum of radiation from a hot body. However, he saw this as a mathematical device and was not at all supportive of the more general postulate of the ‘light quantum’ when it was proposed by a young Einstein in 1905.  Indeed, Planck rejected the light quantum for many years.

All in all, a superb conference. It was also a pleasure to visit London once again. As always, I booked a cheap ‘n’ cheerful hotel in the city centre, within walking distance of the conference. On my way to the meeting, I walked past Madame Tussauds and the Royal Academy of Music, and had breakfast at the tennis courts in Regent’s Park. What a city!


Walking past the Royal Academy on my way to the conference


Views of London over a quick dinner after the conference

by cormac at September 29, 2018 09:07 PM

ZapperZ - Physics and Physicists

Record 1200 Tesla, and then, BANG!
Hey, would you sacrifice your equipment just so you can break the record on the strongest magnetic field created in a lab? These people would.

Speaking with IEEE Spectrum, lead researcher Shojiro Takeyama explained that his team was hoping to achieve a magnetic field that reached 700 Tesla (the unit of measurement for gauging the strength of a magnetic field). At that level, the generator would likely self destruct, but when pushed to its limits the machine actually achieved a strength of 1,200 Tesla.

To put that in perspective, an MRI machine — which is the most intense indoor magnetic field most people would ever encounter — comes in at just three Tesla. Needless to say, the researchers’ machine didn’t survive the test, but it did land them in the record books.
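A rough aside (my own back-of-envelope estimate, not from the article or the paper): the magnetic pressure \(B^2/(2\mu_0)\) makes it clear why the generator could not survive its own record field.

# Magnetic pressure at MRI-class vs. record field strengths (order-of-magnitude aside)
import math

mu_0 = 4e-7 * math.pi              # vacuum permeability, T*m/A

for B in (3.0, 1200.0):            # an MRI-class field vs. the record field, in tesla
    pressure = B**2 / (2 * mu_0)   # magnetic pressure in pascals
    print(f"B = {B:6.0f} T  ->  magnetic pressure ~ {pressure:.1e} Pa")

# ~3.6e6 Pa at 3 T, but ~5.7e11 Pa (hundreds of GPa) at 1200 T, comparable to the
# pressure at the centre of the Earth -- so the coil's self-destruction is no surprise.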



Honestly, I don't think I can get away with doing that!

Zz.

by ZapperZ (noreply@blogger.com) at September 29, 2018 06:26 PM

September 28, 2018

Lubos Motl - string vacua and pheno

Sleptons in Antarctica: 5-sigma evidence for stau-like high energy terrestrial rays
As Jitter pointed out, an extremely interesting astro-ph paper appeared yesterday:
The ANITA Anomalous Events as Signatures of a Beyond Standard Model Particle, and Supporting Observations from IceCube
The paper was promoted at Live Science and the Science Magazine:
Bizarre Particles Keep Flying Out of Antarctica's Ice, and They Might Shatter Modern Physics

Oddball particles tunneling through Earth could point to new physics
What's going on?



The LHC collider hasn't found, before the deadlines, the evidence for supersymmetry that had looked rather likely to the optimists – and not only to the optimists. Your humble correspondent has sent $100 to Adam Falkowski, with some logistic help from Tobias Sander. If SUSY had been found, the outcome of our bet would have been more exciting – $10,000 into my pocket.



But the superpartners exist at some scale – everyone who is convinced that this statement is incorrect is a moron. Maybe an easier way to find evidence for SUSY is to ignore the $10 billion collider and buy an air ticket to the chilliest continent. That's how it looks according to the paper.

Derek Fox is the lead author. Steinn Sigurðsson is an important second author (in total, there are 7 authors). Do you understand why The Reference Frame is the only website that pays tribute to the beautiful (as in "Dirac") Icelandic character ð\(=\partial_\mu \gamma^\mu\) in his name? ;-)

I've known Steinn over the Internet for many years. But according to this blog, his most famous achievement so far was that he proved that his Motl number was at most six. It means that there exists a chain of collaborators Motl-Dine-Farrar-Hogg-Blandford-Hernquist-Sigurdsson.

Now, he's been very important in an actual scientific development that is said to provide us with some evidence for supersymmetry.



ANITA, some detector in Antarctica, has recorded something like two cosmic rays with EeV energies. Just to be sure, "eV" is the electronvolt and "E" stands for "exa" which is one million times "tera" (the thing in between is "peta"). So "exa" is \(10^{18}\).

There have been other "exa-electronvolt" particles in the cosmic rays but dear Houston, we have a problem here. Cosmic rays should arrive from the Cosmos and, like Heaven and the sky, the Cosmos is above us. Instead, two events arrived from below, from hell – they were going up.

Can cosmic rays penetrate through the Earth and land in the detector while going in the unusual direction "up"? Low-energy cosmic rays surely can – low-energy neutrinos are almost invisible, like ghosts. But what about high energy neutrinos, like EeV?



Mr Tau (right) and his silent, small but heavy superpartner.

Well, EeV is way above the electroweak scale, 240 GeV or so, and at these high energies, the electroweak symmetry is restored. One of the implications is that neutrinos start to resemble their siblings, the charged leptons – they interact equally strongly. That really means "very strongly". High energy neutrinos have virtually no chance to penetrate through thousands of kilometers of rock.
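A rough order-of-magnitude estimate (my own, with an assumed cross-section, not a number taken from the paper) of how far an EeV neutrino gets in rock before interacting:

# How far does an EeV neutrino travel in rock? (order-of-magnitude sketch)
N_A   = 6.022e23      # Avogadro's number: roughly nucleons per gram (1 nucleon ~ 1 amu)
rho   = 2.65          # g/cm^3, typical crustal rock
sigma = 1e-31         # cm^2, assumed nu-nucleon cross-section near 1 EeV (order of magnitude)

n = rho * N_A                              # nucleons per cm^3
interaction_length_km = 1.0 / (n * sigma) / 1e5

print(f"interaction length ~ {interaction_length_km:.0f} km")
# ~60 km, while an upgoing chord through the Earth is thousands of km long:
# the Earth is effectively opaque to EeV neutrinos, which is why these events are so odd.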

Fox et al. say that the probability of a Standard Model-based explanation for these two ANITA events – and a few seemingly analogous IceCube results – is below one millionth, i.e. the evidence for the Beyond the Standard Model physics is formally above 5 sigma.

And they identify a nice supersymmetric scenario that may explain the events smoothly. Instead of the conversion of "tau neutrino to tau" (which still may explain cosmic rays going from the empty space), they suggest that the particle flying through the Earth was a stau, the superpartner of the charged tau lepton.

Their stau \(\tilde \tau_R\) – like in some regular GMSB (gauge-mediated supersymmetry breaking) models – is the NLSP (next-to-lightest superpartner), it is rather long-lived, and (when it hits a nucleon, it) decays to the tau \(\tau\) lepton (the same end product as if you have the tau neutrino from the heaven) plus the LSP, the truly invisible (lightest supersymmetric) particle that is a dark matter candidate – probably denoted not as \(\tilde \chi\) but \(\tilde G\) because it should be a gravitino in GMSB.

If you can offer an immediate explanation of why the stau in these models interacts weakly enough to get through the Earth, I will appreciate your crash course.



Right now, Fox is hungry for more data. I mean Derek Fox or the mammal. In the entertainment industry, Fox isn't hungry at all and it – acting on behalf of Disney – sold its stake in Sky to a hungry Comcast.

by Luboš Motl (noreply@blogger.com) at September 28, 2018 06:56 AM

The n-Category Cafe

Exceptional Quantum Geometry and Particle Physics

It would be great if we could make sense of the Standard Model: the 3 generations of quarks and leptons, the 3 colors of quarks vs. colorless leptons, the way only the weak force notices the difference between left and right, the curious gauge group \(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)\), the role of the Higgs boson, and so on. I can’t help but hope that all these facts are clues that we have not yet managed to interpret.

These papers may not be on the right track, but I feel a duty to explain them:

After all, the math is probably right. And they use the exceptional Jordan algebra, which I’ve already wasted a lot of time thinking about — so I’m in a better position than most to summarize what they’ve done.

Don’t get me wrong: I’m not claiming this paper is important for physics! I really have no idea. But it’s making progress on a quirky, quixotic line of thought that has fascinated me for years.

Here’s the main result. The exceptional Jordan algebra contains a lot of copies of 4-dimensional Minkowski spacetime. The symmetries of the exceptional Jordan algebra that preserve any one of these copies form a group…. which happens to be exactly the gauge group of the Standard Model!

Formally real Jordan algebras were invented by Jordan to serve as algebras of observables in quantum theory, but they also turn out to describe spacetimes equipped with a highly symmetrical causal structure. For example, \(\mathfrak{h}_2(\mathbb{C})\), the Jordan algebra of \(2 \times 2\) self-adjoint complex matrices, is the algebra of observables for a spin-\(1/2\) particle – but it can also be identified with 4-dimensional Minkowski spacetime! This dual role of formally real Jordan algebras remains somewhat mysterious, though the connection is understood in this case.

When Jordan, Wigner and von Neumann classified formally real Jordan algebras, they found 4 infinite families and one exception: the exceptional Jordan algebra \(\mathfrak{h}_3(\mathbb{O})\), consisting of \(3 \times 3\) self-adjoint octonion matrices. Ever since then, physicists have wondered what this thing is good for.

Now Todorov and Dubois–Violette claim they’re getting the gauge group of the Standard Model from the symmetry group of the exceptional Jordan algebra by taking the subgroup that

  1. preserves a copy of 10d Minkowski spacetime inside this Jordan algebra, and

  2. also preserves a copy of the complex numbers inside the octonions — which is just what we need to pick out a copy of 4d Minkowski spacetime inside 10d Minkowski spacetime!

But let me explain this in more detail. First, some old stuff:

If you pick a unit imaginary octonion and call it \(i\), you get a copy of the complex numbers inside the octonions \(\mathbb{O}\). This lets us split \(\mathbb{O}\) into \(\mathbb{C} \oplus V\), where \(V\) is a 3-dimensional complex Hilbert space. The subgroup of the automorphism group of the octonions that fixes \(i\) is \(\mathrm{SU}(3)\). This is the gauge group of the strong force. It acts on \(\mathbb{C} \oplus V\) in exactly the way you’d need for a lepton and a quark.

The exceptional Jordan algebra \(\mathfrak{h}_3(\mathbb{O})\) contains the Jordan algebra \(\mathfrak{h}_2(\mathbb{O})\) of \(2 \times 2\) self-adjoint octonion matrices in various ways. \(\mathfrak{h}_2(\mathbb{O})\) can be identified with 10-dimensional Minkowski spacetime, with the determinant serving as the Minkowski metric. Picking a unit imaginary octonion \(i\) then chooses a copy of \(\mathfrak{h}_2(\mathbb{C})\) inside \(\mathfrak{h}_2(\mathbb{O})\), and \(\mathfrak{h}_2(\mathbb{C})\) can be identified with 4-dimensional Minkowski spacetime.
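For concreteness, here is the standard way to see the identification of \(\mathfrak{h}_2(\mathbb{C})\) with 4d Minkowski spacetime (spelled out here for readers new to this game, not taken from the papers): write a self-adjoint \(2 \times 2\) matrix in terms of four real coordinates and take its determinant, which is exactly the Minkowski norm:

\[
X = \begin{pmatrix} t+z & x - i y \\ x + i y & t - z \end{pmatrix},
\qquad
\det X = t^2 - x^2 - y^2 - z^2 .
\]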

All this is well-known to people who play these games. Now for the new part.

1) First, suppose we take the automorphism group of the exceptional Jordan algebra and look at the subgroup that preserves the splitting of \(\mathbb{O}\) into \(\mathbb{C} \oplus V\) for each entry of these octonion matrices. This subgroup is

\[ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(3) }{\mathbb{Z}/3} \]

It’s not terribly hard to see why this might be true. We can take any element of \(\mathfrak{h}_3(\mathbb{O})\) and split it into two parts using \(\mathbb{O} = \mathbb{C} \oplus V\), getting a decomposition one can write as \(\mathfrak{h}_3(\mathbb{O}) = \mathfrak{h}_3(\mathbb{C}) \oplus \mathfrak{h}_3(V)\). One copy of \(\mathrm{SU}(3)\) acts by conjugation on \(\mathfrak{h}_3(\mathbb{C})\) while another acts by conjugation on \(\mathfrak{h}_3(V)\). These two actions commute. The center of \(\mathrm{SU}(3)\) is \(\mathbb{Z}/3\), consisting of diagonal matrices that are cube roots of the identity matrix. So, we get an inclusion of \(\mathbb{Z}/3\) in the diagonal of \(\mathrm{SU}(3) \times \mathrm{SU}(3)\), and this subgroup acts trivially on \(\mathfrak{h}_3(\mathbb{O})\).

2) Next, take the subgroup of \((\mathrm{SU}(3) \times \mathrm{SU}(3))/(\mathbb{Z}/3)\) that also preserves a copy of \(\mathfrak{h}_2(\mathbb{O})\) inside \(\mathfrak{h}_3(\mathbb{O})\). This subgroup, Dubois-Violette and Todorov claim, is

\[ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1) }{\mathbb{Z}/6} \]

And this is the true gauge group of the Standard Model!

People often say the Standard Model has gauge group \(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)\), which is okay, but this group has a \(\mathbb{Z}/6\) subgroup that acts trivially on all particles – a fact that arises only because quarks have the exact charges they do! So, the ‘true’ gauge group of the Standard Model is the quotient \((\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1))/(\mathbb{Z}/6)\). And this is fundamental to the \(\mathrm{SU}(5)\) grand unified theory – a well-known fact that John Huerta and I explained a while ago here. The point is that while \(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)\) is not a subgroup of \(\mathrm{SU}(5)\), its quotient by \(\mathbb{Z}/6\) is.
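To make "acts trivially on all particles" concrete, here is a small check I wrote (my own illustration, not notation from the papers), using the hypercharge convention \(Q = T_3 + Y\) and letting a \(\mathrm{U}(1)\) element \(\alpha\) act on a field of hypercharge \(Y\) as \(\alpha^{6Y}\) (an integer power, since \(6Y \in \mathbb{Z}\) for all Standard Model fields): the element \((e^{2\pi i/3}\,1_3,\ -1_2,\ e^{i\pi/3})\) of \(\mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1)\) multiplies every Standard Model fermion representation by 1.

# Sanity check (mine, not from the papers): the Z/6 generated by
# (e^{2 pi i/3} * 1_3, -1_2, e^{i pi/3}) acts trivially on every SM fermion rep.
import cmath

# (label, colour triality: 0 = singlet, 1 = triplet, SU(2)-doublet flag, hypercharge Y)
reps = [
    ("Q_L  (3, 2, +1/6)", 1, True,   1/6),
    ("u_R  (3, 1, +2/3)", 1, False,  2/3),
    ("d_R  (3, 1, -1/3)", 1, False, -1/3),
    ("L_L  (1, 2, -1/2)", 0, True,  -1/2),
    ("e_R  (1, 1, -1)",   0, False, -1.0),
]

alpha = cmath.exp(1j * cmath.pi / 3)     # a primitive 6th root of unity in U(1)
omega = cmath.exp(2j * cmath.pi / 3)     # generator of the centre Z/3 of SU(3)

for label, triality, doublet, Y in reps:
    phase = (omega ** triality) * (-1 if doublet else 1) * (alpha ** round(6 * Y))
    print(f"{label:20s} phase = {phase.real:+.3f}{phase.imag:+.3f}i")
    # each phase comes out as +1.000+0.000i (up to rounding)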

I’ll admit, I don’t fully get how

\[ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(2) \times \mathrm{U}(1) }{\mathbb{Z}/6} \]

shows up inside

\[ \frac{ \mathrm{SU}(3) \times \mathrm{SU}(3) }{\mathbb{Z}/3} \]

as the subgroup that preserves an \(\mathfrak{h}_2(\mathbb{O})\) inside \(\mathfrak{h}_3(\mathbb{O})\).

I think it works like this. I described \(\mathrm{SU}(3) \times \mathrm{SU}(3)\) one way, but there should be another essentially equivalent way to get two copies of \(\mathrm{SU}(3)\) acting on \(\mathfrak{h}_3(\mathbb{O})\). Namely, let the first copy act componentwise on each entry of your \(3 \times 3\) octonionic matrix, and let the second act by conjugation on the whole matrix. In this alternative picture the \(\mathbb{Z}/3\) subgroup lies wholly in the second copy of \(\mathrm{SU}(3)\). Then, figure out those elements of \(\mathrm{SU}(3) \times \mathrm{SU}(3)\) that preserve a copy of \(\mathfrak{h}_2(\mathbb{O})\) inside \(\mathfrak{h}_3(\mathbb{O})\): say, the matrices where the last row and last column vanish. All the elements of the first copy of \(\mathrm{SU}(3)\) preserve this \(\mathfrak{h}_2(\mathbb{O})\), because they act componentwise. But not all elements of the second copy do: only the block diagonal ones with a \(2 \times 2\) block and a \(1 \times 1\) block. The matrices in \(\mathrm{SU}(3)\) with this block diagonal form look like

\[ \left( \begin{array}{cc} \alpha g & 0 \\ 0 & \alpha^{-2} \end{array} \right) \]

where \(g \in \mathrm{SU}(2)\) and \(\alpha \in \mathrm{U}(1)\). These form a group isomorphic to

\[ \frac{ \mathrm{SU}(2) \times \mathrm{U}(1)}{\mathbb{Z}/2} \]

If all this works out, it’s very pretty: the 2 and the 1 in \(\mathrm{SU}(2) \times \mathrm{U}(1)\) arise from the choice of a \(2 \times 2\) block and a \(1 \times 1\) block in \(\mathfrak{h}_3(\mathbb{O})\)… which is also the choice that lets us find Minkowski spacetime inside \(\mathfrak{h}_3(\mathbb{O})\).
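As a quick numerical sanity check of that last step (my own sketch, not from the papers): matrices of the block form \(\mathrm{diag}(\alpha g, \alpha^{-2})\) with \(g \in \mathrm{SU}(2)\) and \(\alpha \in \mathrm{U}(1)\) really are unitary with determinant 1, so they do land inside \(\mathrm{SU}(3)\).

# Check that diag(alpha*g, alpha**-2) with g in SU(2), |alpha| = 1 lies in SU(3)
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    # a random unit quaternion (a, b, c, d) gives the SU(2) matrix below
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j*d,  c + 1j*b],
                     [-c + 1j*b, a - 1j*d]])

for _ in range(3):
    g = random_su2()
    alpha = np.exp(1j * rng.uniform(0, 2*np.pi))
    M = np.zeros((3, 3), dtype=complex)
    M[:2, :2] = alpha * g
    M[2, 2]   = alpha**-2
    print(np.allclose(M.conj().T @ M, np.eye(3)),   # unitary?
          np.round(np.linalg.det(M), 10))           # determinant = 1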

But I need to check some things, like how we get the \(\mathbb{Z}/6\).

by john (baez@math.ucr.edu) at September 28, 2018 12:18 AM

September 27, 2018

Axel Maas - Looking Inside the Standard Model

Unexpected connections
The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often they also failed their original purpose miserably, but are paramount for the new one. Newer examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we just start to discover a second use for our investigations of grand-unified theories. There our research used a toy model. We did this, because we wanted to understand a mechanism. And because doing the full story would have been much too complicated before we did not know, whether the mechanism works. But it turns out this toy theory may be an interesting theory on its own.

And it may be interesting for a very different topic: Dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified of what it is (and whether it is matter at all). Of course, such mysteries draw our interests like a flame the moth. Hence, our group in Graz starts to push also in this direction, being curious on what is going on. For now, we follow the most probable explanation that there are additional particles making up dark matter. Then there are two questions: What are they? And do they, and if yes how, interact with the rest of the world? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noted that there is this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time an equally plausible one as many others.

And here is how it works. Theories of the type of grand-unified theories were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite some of them, like the photon and the gluons. However, our results showed that with an improved treatment and shift in paradigm that this is not always true. At least some of them do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles. Because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. Thus the reason why such theories had been excluded. But with our new results, they become feasible. Even more so, we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could be indeed many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists out of many particles, so why should not dark matter do so as well. And this is also realized in our model.

And this is how it works. The scenario I will describe (you can download my talk already now, if you want to look for yourself - though it is somewhat technical) finds two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this, to make sure that everything works with what astrophysics tells us. Moreover, this setup gives us two more additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows to test this model not only by astronomical observations, but at CERN. This gives the basic idea. Now, we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned, whether it actually makes sense. Or whether the model will have to wait for another opportunity.
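For readers who haven't met the term, a "portal" coupling of a new scalar \(\phi\) to the Standard Model usually means a Lagrangian term of the generic form below (this is the textbook Higgs-portal operator, written out here for illustration; it need not be the exact term of his model):

\[
\mathcal{L} \;\supset\; -\,\lambda_{p}\, (\phi^\dagger \phi)\,(H^\dagger H),
\]

where \(H\) is the Higgs doublet and \(\lambda_p\) is the portal coupling; after electroweak symmetry breaking this gives interactions between \(\phi\) and the physical Higgs boson, which is what allows such models to be tested at CERN as well as by astronomical observations.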

by Axel Maas (noreply@blogger.com) at September 27, 2018 11:53 AM

September 26, 2018

The n-Category Cafe

A Communal Proof of an Initiality Theorem

One of the main reasons I’m interested in type theory in general, and homotopy type theory (HoTT) in particular, is that it has categorical semantics. More precisely, there is a correspondence between (1) type theories and (2) classes of structured categories, such that any proof in a particular type theory can be interpreted into any category with the corresponding structure. I wrote a lot about type theory from this perspective in The Logic of Space. The basic idea is that we construct a particular structured category \(Syn\) out of the syntax of the type theory, and prove that it is the initial such category. Then we can interpret any syntactic object \(A\) in a structured category \(C\) by regarding \(A\) as living in \(Syn\) and applying the unique structured functor \(Syn \to C\).
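As a loose programming analogy (mine, not part of the project, and vastly simpler than the dependent case): the syntax is an inductively defined datatype, and interpreting it in any suitably structured "model" is the map out of it that is fixed, clause by clause, once we say how the model interprets each constructor. A toy sketch for a fragment with only a unit type and product types:

# Toy analogy only: syntax as an inductive datatype, interpretation by induction on syntax
from dataclasses import dataclass

class Ty: pass

@dataclass
class Unit(Ty): pass        # the unit type

@dataclass
class Prod(Ty):             # product types  A x B
    left: Ty
    right: Ty

def interpret(ty, model):
    """Structure-preserving interpretation, defined by induction on the syntax."""
    if isinstance(ty, Unit):
        return model["unit"]
    if isinstance(ty, Prod):
        return model["prod"](interpret(ty.left, model), interpret(ty.right, model))
    raise TypeError(ty)

# interpret into strings, just to see that the structure is preserved
string_model = {"unit": "1", "prod": lambda a, b: f"({a} x {b})"}
print(interpret(Prod(Unit(), Prod(Unit(), Unit())), string_model))   # (1 x (1 x 1))

The real theorem is of course about dependent type theory, where contexts, substitution and definitional equality make the induction far more delicate – which is exactly the tedium such a proof has to spell out in full.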

Unfortunately, we don’t currently have any very general definitions of what “a type theory” is, what the “corresponding class of structured categories” is, or a very general proof of this “initiality theorem”. The idea of such proofs is easy — just induct over the construction of syntax — but its realization in practice can be long and tedious. Thus, people are understandably reluctant to take the time and space to write out such a proof explicitly, when “everyone knows” how the proof should go and probably hardly anyone would really read such a proof in detail anyway. This is especially true for dependent type theory, which is qualitatively more complicated in various ways than non-dependent type theories; to my knowledge only one person (Thomas Streicher) has ever written out anything approaching a complete proof of initiality for a dependent type theory.

There is currently some disagreement in the HoTT community over how much of a problem this is. On one side, the late Vladimir Voevodsky argued that it is completely unacceptable, delayed the publication of his seminal model of type theory in simplicial sets because of his dissatisfaction with the situation, and spent the last years of his life working on the problem. (Others, less dogmatic in philosophy, are nevertheless also working on the problem — specifically, attempting to give a general definition of “type theory” and prove a general initiality theorem for all such “type theories”.) On the other side, plenty of people point out reasonably that functorial semantics has been well-understood for decades, and why should we worry so much about a particular instance of it all of a sudden now? Unfortunately, the existence of this disagreement is not good for the perception of our discipline among other mathematicians.

In my experience, arguments about the importance of initiality often tend to devolve into disagreements about questions like: Is Streicher’s proof hard to understand? Does it generalize “easily” to other type constructors? How “easy” is “easily”? What kinds of type constructors? Is it hard to deal with variable binding? Is the categorical structure really “exactly” the same as the type-theoretic structure? Where is the “hard part” of an initiality proof? Is there even a “hard part” at all? Plenty of people have opinions about these questions, but for most of us these opinions are not based on actual experience of trying to prove such a theorem.

Last month at the nForum, Richard Williamson suggested the (in hindsight obvious) solution: let’s get a bunch of people together and work together to write out a complete proof of an initiality theorem, in modern language, for a basic uncomplicated dependent type theory. If we have enough contributors to divide up the work, the verbosity and tedium shouldn’t be overwhelming. We can use the nLab wikilink features to organize the proof in a “drill-down” manner so that a reader can get a high level idea and then delve into as many or as few details as desired. Hopefully, this will increase overall public awareness of how such proofs work, so that they seem less “magic”. Moreover, all the contributors will get some actual experience “in the trenches” with an initiality proof, thereby hopefully leading us to more informed opinions.

I don’t view such a project as a replacement for proving one general theorem, but as a complementary effort, whose goals are primarily understanding and exposition. However, if it’s successful, the result will be a complete initiality theorem for at least one dependent type theory; and we can add as many bells and whistles to this theory as we have time and energy for, hopefully in a relatively modular way.

We had some preliminary discussion about this project at the nForum here, at which enough people expressed interest in participating that I think the project can get off the ground. But the more the merrier! If you’d like to help out, even just a little bit, just add your name to the list of participants on this nLab page and join the conversation when it begins. (Some other people have informally told me they’re interested, but I didn’t keep a record of their names, so I didn’t add them to the list; if you fall in that category, please add yourself!)

I’m not sure yet how we will do most of our communication and coordination. We’ll probably have one or more nForum threads for discussion. I think it might be nice to have some scheduled videoconference meetings for those who can make it, especially during the early stages when we’ll have to make various decisions that will affect the overall course of the project; but I’m not wedded to that if others aren’t interested. Most of the work will probably be individual people writing out proofs of inductive cases on nLab pages.

Some of the decisions we’ll have to make at the beginning include:

  • What type theory should we aim for as a first target? We can always add more to it later, so it should be something fairly uncomplicated, but nontrivial enough to exhibit the interesting features. For instance, I think it should certainly have \(\Pi\)-types. What about universes?

  • Exactly how should we present the syntax? In particular, should we represent variable binding with named variables, de Bruijn indices, or some other method? Should all terms be fully annotated?

  • What categorical structure should we use as the substrate? Options include contextual categories (a.k.a. “C-systems”), categories with families, split comprehension categories, etc.

  • How should we structure the proof? The questions here are hard to describe concisely, but for instance one of them was mentioned by Peter Lumsdaine at the nForum thread: Streicher phrases the induction using an “existential quantifier” for interpretation of contexts, but it is arguably easier to use a “universal quantifier” in the same place.

Feel free to use the comments of this post to express opinions about any of these, or about the project overall. My current plan is to wait a couple weeks to give folks a chance to sign up, then “officially” start making plans as a group.

by shulman (viritrilbia@gmail.com) at September 26, 2018 07:08 PM

ZapperZ - Physics and Physicists

How Fast Is The Photoelectric Effect?
Every student who studied modern physics in an undergraduate General Physics course would have encountered the photoelectric effect. It is a phenomenon that has a special place in the history of physics, and the theoretical description of this phenomenon gave Einstein his Nobel Prize.

So one would think that this is a done deal already, and that we should know all there is to know about it. In some sense, we do. We know enough about it that it has been subsumed into a more general phenomenon called photoemission. We use this phenomenon to study many things, including the band structure of materials. So it is very well known.

Yet, as with so many things in physics, the more we study it, the more we want to know its minute details. In this case, the current study is on how fast an electron is emitted from a material once light impinges upon it. In other words, from the moment a photon is absorbed, how quickly is the electron liberated from the material?

This is not that easy to answer because, well, one can already guess at the difficulty of determining (i) the exact time when a photon is absorbed by the material, and (ii) the exact time when an electron is liberated due to that absorbed photon. On top of that, this is a very fast process, so how does one measure a time scale that is almost instantaneous?

The authors of this latest paper[1] came up with a very ingenious method to determine this, and in the process they have further elucidated the various stages involved in the photoelectric effect. But before we continue, let's get one thing very clear here.

The "photoelectric effect" that we know and love, and the one that Millikan studied, is the phenomenon whereby UV light is shown onto a metallic surface (cathode). We know now that this is an emission process of electrons coming from the metal's conduction band. This is important because, as this new study shows, this process is different than the emission from core levels (i.e. not from the continuous conduction band). Those of us who have done photoemission work using both UV and x-rays can attest to such differences.

The experiment in this report was done on a tungsten surface, or more specifically, W(110) surface. The hard UV light that was used allowed them to get photoemission from the conduction band and a core-level state.

What they found was that, from the time a photon is absorbed to the moment an electron is emitted, the process takes ~45 as for a conduction-band electron and ~100 as for a core-level electron.

{as = attosecond = 1 x 10^(-18) second}

So the emission from the core level takes more than twice as long to occur. In their analysis, the authors stressed this conclusion:

These findings highlight that proper accounting for the initial creation, origin, transport and scattering of electrons is imperative for the proper description of the photoelectric effect.

Bill Spicer's 3-step model of the photoemission process certainly highlighted the fact that it isn't a simple process. This paper not only reinforces that, but also shows how surface states influence the emission time and thus, possibly, other properties of the emitted photoelectron.

There are many things in physics that we know a lot about. But these are also areas in which we continue to dig deeper to find out even more. There will never be a point where we know everything there is to know, even about established ideas and phenomena.

Zz.

[1] M. Ossiander et al., Nature 561, 374 (2018). https://www.nature.com/articles/s41586-018-0503-6
Summary of this work can be found here.

by ZapperZ (noreply@blogger.com) at September 26, 2018 06:42 PM

September 25, 2018

Sean Carroll - Preposterous Universe

Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He’s also proposed a proof of the Riemann hypothesis; I have zero insight to give there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger and atoms would be smaller; if it were smaller, the reverse. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these.
As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.
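
To make the "α is a function" point concrete, here is a small numerical sketch of my own (not from Atiyah's paper or from anything above): the standard one-loop QED running of α, keeping only the electron in the vacuum-polarization loop. With all the other charged particles included, the measured value near the Z mass comes out around 1/128 rather than 1/137.

    import math

    # One-loop QED running with only the electron loop (illustrative only):
    #   alpha(Q) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(Q/m_e))
    alpha0 = 1 / 137.036          # low-energy (Thomson-limit) value
    m_e = 0.000511                # electron mass in GeV

    def alpha(Q_GeV):
        return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q_GeV / m_e))

    for Q in (0.01, 1.0, 91.19):  # 10 MeV, 1 GeV, and the Z mass
        print(f"alpha({Q} GeV) = 1/{1 / alpha(Q):.1f}")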

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should give us skepticism that he’s on the right track, but it’s conceivable.

More importantly, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but for any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most importantly, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θ_W relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sin θ_W. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
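
As a quick numerical sketch of these relations (my own illustration, using rough experimental inputs and glossing over the fact that all the couplings should really be evaluated at a common energy scale, which is part of the point above):

    import math

    # alpha = e^2/(4*pi) in natural units, and e = g*sin(theta_W) = g'*cos(theta_W).
    # The numbers below are rough experimental values, just to see the sizes involved.
    alpha = 1 / 137.036
    e = math.sqrt(4 * math.pi * alpha)     # elementary charge, ~0.30

    sin2 = 0.231                           # sin^2(theta_W), roughly its value near the Z mass
    sin_tw, cos_tw = math.sqrt(sin2), math.sqrt(1 - sin2)

    g = e / sin_tw                         # weak-isospin coupling, ~0.63
    g_prime = e / cos_tw                   # hypercharge coupling, ~0.35

    print(f"e = {e:.4f}, g = {g:.3f}, g' = {g_prime:.3f}")
    print(f"alpha reconstructed from e: 1/{4 * math.pi / e**2:.1f}")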

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

by Sean Carroll at September 25, 2018 08:03 AM


September 22, 2018

Clifford V. Johnson - Asymptotia

Jumpers, Sweaters, and So Forth…

If you've been following on instagram you'll know that I spent some time over the last weeks working on an illustration that was commissioned by a physics magazine. (Feels odd saying that, commissioned, but that's exactly what happened. Apparently I'm able to add professional illustrator to my CV now. Huh.) Anyway, the illustration will show the interior of a lab. I'll let you know more about it closer to publication. Much of the focus was on the people, and for reasons that will become clear, I did a bit of a throwback to the 80s, and so tried to reflect that period somewhat, old computers and ghastly sweaters and all. Here's a sequence of stages of a corner of the work (click on it for a larger view):

-cvj
Click to continue reading this post

The post Jumpers, Sweaters, and So Forth… appeared first on Asymptotia.

by Clifford at September 22, 2018 09:00 PM

September 20, 2018

John Baez - Azimuth

Patterns That Eventually Fail

Sometimes patterns can lead you astray. For example, it’s known that

\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }

is a good approximation to \pi(x), the number of primes less than or equal to x. Numerical evidence suggests that \mathrm{li}(x) is always greater than \pi(x). For example,

\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263

and

\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278

But in 1914, Littlewood heroically showed that in fact, \mathrm{li}(x) - \pi(x) changes sign infinitely many times!
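
For the modest values of x we can actually reach, the inequality is easy to check directly; here is a quick sketch of my own using sympy's prime-counting function and mpmath's logarithmic integral:

    from mpmath import li
    from sympy import primepi

    # li(x) - pi(x) stays positive for every x we can reach by direct computation,
    # even though Littlewood proved the difference changes sign infinitely often.
    for k in range(4, 9):                  # x = 10^4, ..., 10^8
        x = 10**k
        diff = li(x) - int(primepi(x))
        print(f"li(10^{k}) - pi(10^{k}) = {float(diff):.1f}")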

This raised the question: when does \pi(x) first exceed \mathrm{li}(x)? In 1933, Littlewood’s student Skewes showed, assuming the Riemann hypothesis, that it must do so for some x less than or equal to

\displaystyle{ 10^{10^{10^{34}}} }

Later, in 1955, Skewes showed without the Riemann hypothesis that \pi(x) must exceed \mathrm{li}(x) for some x smaller than

\displaystyle{ 10^{10^{10^{964}}} }

By now this bound has been improved enormously. We now know the two functions cross somewhere near 1.397 \times 10^{316}, but we don’t know if this is the first crossing!

All this math is quite deep. Here is something less deep, but still fun.

You can show that

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }

and so on.

It’s a nice pattern. But this pattern doesn’t go on forever! It lasts a very, very long time… but not forever.

More precisely, the identity

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }

holds when

n < 9.8 \cdot 10^{42}

but not for all n. At some point it stops working and never works again. In fact, it definitely fails for all

n > 7.4 \cdot 10^{43}

The explanation

The integrals here are a variant of the Borwein integrals:

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }

where the pattern continues until

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }

but then fails:

\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }

I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms:

Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1.

Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on.

There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point.

The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched.

In the animation below, the plateau where F(x)=1/2 is marked in red.

The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value.

In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged.

The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2.

In the animation, the yellow arrows mark the extent of the symmetrical slopes.
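
The two thresholds quoted above are easy to check with a few lines of Python (my own check, not part of Egan's explanation): the plateau giving F(0) = 1/2 survives until the sum of the window half-widths 1/3 + 1/5 + 1/7 + … exceeds 1, and the symmetry giving F(–1) = F(1) = 1/4 survives until that sum exceeds 2.

    from fractions import Fraction

    # Find the first odd denominator at which 1/3 + 1/5 + ... exceeds 1, then 2.
    total, k = Fraction(0), 3
    first_over = {}
    while len(first_over) < 2:
        total += Fraction(1, k)
        for bound in (1, 2):
            if bound not in first_over and total > bound:
                first_over[bound] = k
        k += 2

    print(first_over)    # expect {1: 15, 2: 113}: the patterns break at 1/15 and 1/113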

OK, none of this is difficult to understand, but why should we care?

Because this is how Hanspeter Schmid explained the infamous Borwein integrals:

∫sin(t)/t dt = π/2
∫sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But then the pattern is broken:

∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

Here these integrals are from t=0 to t=∞. And Schmid came up with an even more persistent pattern of his own:

∫2 cos(t) sin(t)/t dt = π/2
∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But:

∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

The first set of integrals, due to Borwein, correspond to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function:

sinc(w t) = sin(w t)/(w t)

is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average.

Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2.

• Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17.

I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers

1, a_1, a_2, a_3 \dots

The integral

\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(a_1 x)}{a_1 x} \, \frac{\sin(a_2 x)}{a_2 x} \cdots \frac{\sin(a_n x)}{a_n x} \, dx }

will then equal \pi/2 as long as a_1 + \cdots + a_n \le 1, but not when it exceeds 1. You can see a full explanation on Wikipedia:

• Wikipedia, Borwein integral: general formula.

As an example, I chose the integral

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt  }

which equals \pi/2 if and only if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1  }

Thus, since 1/(100k+1) < 1/(100k), the identity certainly holds if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1  }

However,

\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }

so the identity holds if

\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }

or

\ln n \le 99

or

n \le e^{99} \approx 9.8 \cdot 10^{42}

On the other hand, the identity fails if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1  }

so, since 1/(100k+1) is at least 1/(101k) for every k ≥ 1, it certainly fails if

\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1  }

However,

\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }

so the identity fails if

\displaystyle{ \frac{1}{101} \ln n > 1 }

or

\displaystyle{ \ln n > 101}

or

\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }

With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of n at which

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }

first fails.
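
For what it's worth, here is a rough back-of-the-envelope sketch of my own (not part of the argument above) for locating that first failure without adding up ~10^43 terms: write the partial sum with the digamma function, \sum_{k=1}^n \frac{1}{k+a} = \psi(n+1+a) - \psi(1+a) with a = 1/100, and use \psi(x) \approx \ln x for huge x. This puts the crossing near 1.5 \cdot 10^{43}, comfortably between the two bounds derived above.

    from mpmath import mp, digamma, exp

    mp.dps = 30
    a = mp.mpf(1) / 100

    # The sum first exceeds 1 roughly where psi(n) = 100 + psi(1 + a), i.e. n ~ exp(100 + psi(1 + a)).
    n_star = exp(100 + digamma(1 + a))
    print(n_star)                                         # about 1.5e43

    # Sanity check: plugging this n back into the digamma formula gives a sum extremely close to 1.
    print((digamma(n_star + 1 + a) - digamma(1 + a)) / 100)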

by John Baez at September 20, 2018 08:32 PM

September 10, 2018

Lubos Motl - string vacua and pheno

Why string theory is quantum mechanics on steroids
In many previous texts, most recently in the essay posted two blog posts ago, I expressed the idea that string theory may be interpreted as the wisdom of quantum mechanics that is taken really seriously – and that is applied to everything, including the most basic aspects of the spacetime, matter, and information.

People like me are impressed by the power of string theory because it really builds on quantum mechanics in a critical way to deduce things that would have been impossible before. On the contrary, morons typically dislike string theory because their mezzoscopic peabrains are already stretched to the limit when they think about quantum mechanics – while string theory requires the stretching to go beyond these limits. Peabrains unavoidably crack and morons, writing things that are not even wrong about their trouble with physics, end up lost in math.

Other physicists have also made the statement – usually in less colorful ways – that string theory is quantum mechanics on steroids. It may be a good idea to explain what all of us mean – why string theory depends on quantum mechanics so much and why the power of quantum mechanics is given the opportunity to achieve some new amazing things within string theory.



At the beginning, I must say that the non-experts (including many pompous fools who call themselves "experts") usually overlook the whole "beef" of string theory just like they overlook the "beef" of quantum mechanics.

They imagine that quantum mechanics "is" a new equation, Schrödinger's equation, that plays the same role as Newton's, Maxwell's, Einstein's, and other equations. But quantum mechanics is much more – and much more universal and revolutionary – than another addition to classical physics. The actual heart of quantum mechanics is that the objects in its equations are connected to the observations very differently than the classical counterparts have been.

In the same way, they imagine that string theory is a theory of a new random dynamical object, a rubber band, and they imagine either downright classical vibrating strings or quantum mechanical strings that just don't differ from other quantum mechanical objects. But this understanding doesn't go beyond the (unavoidably oversimplified) name of string theory. If you analyze the composition of the term "string theory" as a linguist, you may think it's just a "theory of some strings". But that's not really the lesson one should draw. The real lesson is that if certain operations are done well with particular things, one ends with some amazing set of equations that may explain lots of things about the Universe.

Strings are exceptionally powerful – and only exceptionally powerful – at the quantum level. And the point of string theory isn't that it's a theory of another object. The point is that string theory is special among theories that would initially look "analogous".

Why is it special? And why is the magic of string theory so intertwined with quantum mechanics?

Discrete types of Nature's building blocks

For centuries, people knew something about chemistry. Matter around us is made of compounds which are mixtures of elements – such as hydrogen, helium, lithium, and I am sure you have memorized the rest. The number of types of atoms around us is finite. If arbitrarily large nuclei were allowed or stable, it would be countably infinite. But the number would still be discrete – not continuous.



For about a century, people have realized that the elements are probably made out of identical atoms. Each element has its own kind of atom. The concept of atoms was first promoted by Democritus in ancient Greece. But in chemistry, atoms became more specific.

Sometime in the late 19th and early 20th century, people began to understand that the atom isn't as indivisible as its Greek name suggested. It is composed of a dense nucleus and electrons that live somewhere around the nucleus. The nucleus was later found to be composed of protons and neutrons. The quantum mechanics of 1925 allowed physicists to study the quantized motion of electrons around the nuclei – and the motion of the electrons is the crucial thing that decides the energy levels of all atoms and, consequently, their chemical properties.

In the 1960s, protons and neutrons were found to be composite as well. First, matter was composed of atoms – different kinds of building blocks for every element. Later, matter was reduced to bound states of electrons, protons, and neutrons. Later still, protons and neutrons were replaced with quarks while electrons remained and became an important example of leptons, a group of fermions that is considered "on par" with quarks. The Standard Model deals with fermions, namely quarks and leptons, and bosons, namely the gauge bosons and the Higgs boson. The bosons are particularly capable of mediating forces between all the fermions (and bosons).

But even in this "nearly final" picture, there are still finitely many, but relatively many, species of elementary particles. Their number is slightly lower than the number of atoms that were considered indivisible a century earlier. But the difference isn't too big – neither qualitatively nor quantitatively. We have dozens of types of basic "atoms" or "elementary particles" and each of them must be equipped with some properties (yes, the properties of elementary particles in the Standard Model look more precise and fundamental than the properties of the atoms of the elements did). The different particle species amount to many independent assumptions about Nature that have to be added to the mix to build a viable theory.

Can we do better? Can we derive the species from a smaller number of assumptions – and from one kind of matter?

String theory – let's assume that Nature is described by a weakly-coupled heterotic string theory (closed strings only), to make it simpler – describes all elementary particles, bosons and fermions, as discrete energy eigenstates of a vibrating closed string. All interactions boil down to splitting and merging of these oscillating strings. Quantum mechanics is needed for the energy levels to be discrete – just like in the case of the energy levels of atoms. But for the first time, there is only one underlying building block in Nature, a vibrating closed string.

Like in atomic and molecular physics, quantum mechanics is needed for the discrete – finite or countable – number of species of small bound objects that exist.

Also, the number of spacetime dimensions was always arbitrary in classical physics. When constructing a theory, you had to assume a particular number – in other words, you had to add the coordinates \(t,x,y,z\) to your theory manually, one by one – and because the choice of the spacetime dimension was one of the first steps in the construction of any theory, there was no way to treat theories in different spacetime dimensions simultaneously, and there was consequently no conceptual way to derive the right spacetime dimension.

In string theory, it's different because even the spacetime dimensions – scalar fields on the world sheet – are "things" that contribute to various quantities (such as the conformal anomaly) and string theory is therefore capable of picking the preferred (critical) dimension of the spacetime. Even the individual spacetime dimensions are sort of made of the "same convertible stuff" within string theory. This would be unthinkable in classical physics.

Prediction of gravity and other special forces: state-operator correspondence

String theory is not only the world's only known theory that allows Einsteinian gravity in \(D\geq 4\) to co-exist with quantum mechanics; it also makes Einsteinian gravity unavoidable. It predicts gravitons, spin-two particles that interact in agreement with the equivalence principle (all objects fall with the same acceleration in a gravitational field).

Why is it so? I gave an explanation e.g. in 2007. It is because a particular energy level of the vibrating closed string looks like a spin-two massless particle and it may be shown that the addition of a coherent state of such "graviton strings" into a spacetime is equivalent to the change of the classical geometry on which all other objects – all other vibrating strings – propagate. In this way, the dynamical curved geometry (or at least any finite change of it) may be literally built out of these gravitons.

(Similarly, the addition of strings in another mode, the photon mode, may have the effect that is indistinguishable from the modification of the background electromagnetic field and it is true for all other low-energy fields, too.)

Why is it so? What is the most important "miracle" or a property of string theory that allows this to work? I have picked the state-operator correspondence. And the state-operator correspondence is an entirely quantum mechanical relationship – something that wouldn't be possible in a classical world.

What is the state-operator correspondence? Consider a closed string. It has some Hilbert space. In terms of energy eigenstates, the Hilbert space has a zero mode described by the usual \(x_0,p_0\) degrees of freedom that make the string behave as a quantum mechanical particle. And then the strings may be stretched and the amount of vibrations may be increased by adding oscillators – excitations by creation operators of many quantum harmonic oscillators. So a basis vector in this energy basis of the closed string's Hilbert space is e.g.\[

\alpha^\kappa_{-2}\alpha^\lambda_{-3} \tilde \alpha^\mu_{-4} \tilde\alpha_{-1}^\nu \ket{0; p^\rho}.

\] What is this state? It looks like a momentum eigenstate of a particle whose spacetime momentum is \(p^\rho\). However, for a string, the "lightest" state with this momentum is just a ground state of an infinite-dimensional harmonic oscillator. We may excite that ground state with the oscillators \(\alpha\). These excitations are vaguely analogous to the kicking of the electrons in the atoms from the ground state to higher states, e.g. from \(1s\) to \(2p\). Those oscillators without a tilde are left-moving, those with a tilde are right-moving waves on the string. The (negative) subscript labels the number of periods along the closed string (which Fourier mode we pick). The superscript \(\kappa\) etc. labels in which transverse spacetime direction the string's oscillation is increased.

The total squared mass is given by \(2+3=4+1\) in some string units. The sum of the tilded and untilded subscripts must be equal (five, in this case) for the "beginning" of the closed string to be immaterial, technically because \(L_0-\tilde L_0 = 0\). Great. This was a basis of the closed string's Hilbert space.

But we may also discuss the linear operators on that Hilbert space. They're constructed as functionals of \(X^\kappa(\sigma)\) and \(P^\kappa(\sigma)\) – I am omitting some extra fields (ghosts) that are needed in some descriptions, plus I am omitting a discussion about the difference between transverse and longitudinal directions of the excitations etc. – there are numerous technicalities you have to master when you study string theory at the expert level but they don't really affect the main message I want to convey.

OK, the Hilbert space is infinite-dimensional but its dimension \(d\) must be squared, to get \(d^2\), if you want to quantify the dimension of the space of matrices on that space, OK? A matrix is "larger" than a column vector. The number \(d^2\) looks much higher than \(d\) but nevertheless, for \(d=\infty\), as long as it is the right "stringy infinity", there exists a very natural one-to-one map between the states and the local operators. Let me immediately tell you what is the operator corresponding to the state above:\[

(\partial_z)^2 X^\kappa
(\partial_z)^3 X^\lambda
(\partial_{\bar z})^4 X^\mu
(\partial_{\bar z})^1 X^\nu
\exp(ip\cdot X(\sigma))

\] There should be some normal ordering here. All the four operators \(X^{\kappa,\lambda,\mu,\nu}\) are evaluated at the point of the string \(\sigma\), too. You see that the superscripts \(\kappa,\lambda,\mu,\nu\) were copied to natural places, the subscripts \(2,3,4,1\) were translated to powers of the world sheet derivative with respect to \(z\) or \(\bar z\), the holomorphic or antiholomorphic complex coordinates on the Euclideanized worldsheet. Tilded and untilded oscillators were translated to the holomorphic and antiholomorphic derivatives. An exponential of \(X^\rho\) operator was inserted to encode the ordinary "zero mode", particle-like total momentum of the string. And the total operator looks like some very general product of a function of \(X^\rho\) – the imaginary exponentials are a good basis, ask Mr Fourier why it is so – and its derivatives (of arbitrarily high orders). By the combination of the "Fourier basis wisdom" and a simple decomposition to monomials, every function of \(X^\rho\) and its worldsheet derivatives may be expanded to a sum of such terms.

The map between operators and states isn't quite one-to-one. We only considered "local operators at point \(\sigma\) of the string" where the value of \(\sigma\) remains unspecified. But the "number of possible values of \(\sigma\)" looks like a smaller factor than the factor \(d\) that distinguishes \(d,d^2\), the dimension of the Hilbert space and the space of operators, so the state-operator correspondence is "almost" a one-to-one map.

Such a map would be unthinkable in classical physics. In classical physics, a pure state would be a point in the phase space. On the other hand, the observable of classical physics is any coordinate on the phase space – such as \(x\) or \(p\) or \(ax^2+bp^2\). Is there a canonical way to assign a coordinate on the phase space – a scalar function on the phase space – to a particular point \((x,p)\) on that space? There's clearly none. These mathematical objects carry completely different information – and the choice of the coordinate depends on much more information. You would have a chance to map a probability distribution (another scalar function) on the phase space to a general coordinate on the phase space – except that the former is non-negative. But that map wouldn't be shocking in quantum mechanics, either, because the probability distribution is upgraded to a density matrix which is a similar matrix as the observables. The magic of string theory is that there is a dictionary between pure states and operators.

This state-operator correspondence is important – it is a part of the most conceptual proof of the string theory's prediction of the Einsteinian gravity. Why does the state-operator correspondence exist? What is the recipe underlying this magic?

Well, you can prove the state-operator correspondence by considering a path integral on an infinite cylinder. By conformal transformations – symmetries of the world sheet theory – the infinite cylinder may be mapped to the plane with the origin removed. The boundary conditions on the tiny removed circle at the origin (boundary conditions rephrased as a linear insertion in the path integral) correspond to a pure state; but the specification of these boundary conditions must also be equivalent to a linear action at the origin, i.e. a local operator.

Another "magic player" that appeared in the previous paragraph – a chain of my explanations – is the conformal symmetry. A solution to the world sheet theory works even if you conformally transform it (a conformal transformation is a diffeomorphism that doesn't change the angles even if you keep the old metric tensor field). Conformal symmetries exist even in purely classical field theories. Lots of the self-similar or scale-invariant "critical" behavior exhibits the conformal symmetry in one way or another. But what's cool about the combination of conformal symmetry and quantum mechanics is that a particular, fully specified pure state (and the ground state of a string or another object, e.g. the spacetime vacuum) may be equivalent to a particular state of the self-similar fog.

The combination of quantum mechanics and conformal symmetry is therefore responsible for many nontrivial abilities of string theory such as the state-operator correspondence (see above) or holography in the AdS/CFT correspondence. At the classical level, the conformal symmetry of the boundary theory is already isomorphic to the isometry of the AdS bulk. But that wouldn't be enough for the equivalence between "field theory" in spacetimes of different dimensions. Holography i.e. the ability to remove the holographic dimension in quantum gravity may only exist when the conformal symmetry exists within a quantum mechanical framework.

Dualities, unexpected enhanced symmetries, unexpected numerous descriptions

The first quantum mechanical X-factor of string theory is the state-operator correspondence and its consequences – either on the world sheet (including the prediction of forces mediated by string modes) or in the boundary CFT of the holographic AdS/CFT correspondence.

To make the basic skeleton of this blog post simple, I will only discuss the second class of stringy quantum muscles as one package – the unexpected symmetries, enhanced symmetries, and numerous descriptions. For some discussion of the enhanced symmetries, try e.g. this 2012 blog post.

In theoretical physicists' jargon, dualities are relationships between seemingly different descriptions that shouldn't represent the same physics but for some deep, nontrivial, and surprising reasons, the physical behavior is completely equivalent, including the quantitative properties such as the mass spectrum of some bound states etc.

The enhanced symmetries such as the \(SU(2)\) gauge group of the compactification on a self-dual circle (under T-duality) are a special example of dualities, too. The action of this \(SU(2)\), except for the simple \(U(1)\) subgroup, looks like some weird mixing of states with different winding numbers etc. Nothing like that could be a symmetry in classical physics. In particular, we need quantum mechanics to make the momenta quantized – just like the winding numbers (the integer saying how many times a string is wound around a non-contractible circle in the spacetime) are quantized – if we want to exchange momenta and windings as in T-duality. But within string theory, those symmetries become possible.
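
To see what such a symmetry enhancement looks like in numbers, here is a minimal sketch of my own (not from this post), using the standard textbook mass formula for a closed bosonic string on a circle of radius R, in units with \(\alpha'=1\): \(M^2 = (n/R)^2 + (wR)^2 + 2(N+\tilde N-2)\), with the level-matching condition \(N-\tilde N = nw\). Swapping the momentum and winding numbers \(n\leftrightarrow w\) together with \(R\to 1/R\) leaves the spectrum untouched, and extra massless states – the charged gauge bosons and scalars of the enhanced symmetry – appear only at the self-dual radius \(R=1\).

    from itertools import product

    def states(R, max_q=2, max_level=3):
        """All (n, w, N, Nt, M^2) with momentum/winding |n|, |w| <= max_q and levels <= max_level."""
        qs, levels = range(-max_q, max_q + 1), range(max_level + 1)
        out = []
        for n, w, N, Nt in product(qs, qs, levels, levels):
            if N - Nt != n * w:                        # level-matching constraint
                continue
            m2 = (n / R) ** 2 + (w * R) ** 2 + 2 * (N + Nt - 2)
            out.append((n, w, N, Nt, m2))
        return out

    def mass_spectrum(R):
        return sorted({round(m2, 10) for *_, m2 in states(R)})

    def massless(R):
        return [s for s in states(R) if abs(s[-1]) < 1e-9]

    R = 1.7
    assert mass_spectrum(R) == mass_spectrum(1 / R)    # T-duality: identical spectra at R and 1/R
    print(len(massless(1.0)), len(massless(R)))        # extra massless states only at the self-dual radius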

Many stringy vacua have larger symmetry groups than expected classically. You may identify 16+16 fermions on the heterotic string's world sheet and figure out that the theory will have an \(SO(16)\times SO(16)\) symmetry. But if you look carefully, the group is actually enhanced to an \(E_8\times E_8\). Similarly, a string theory on the Leech lattice could be expected to have a Conway group of symmetries – the isometry of such a lattice – but instead, you get a much cooler, larger, and sexier monster group of symmetries, the largest sporadic finite group.

Two fermions on the world sheet may be bosonized – they are equivalent to one boson. This is also a simple example of a "stringy duality" between two seemingly very different theories. The conformal symmetry and/or the relative scarcity of the number of possible conformal field theories may be used in a proof of this equivalence. Wess-Zumino-Witten models involving strings propagating on group manifolds are equivalent to other "simple" theories, too.

I don't want to elaborate on all the examples – their number is really huge and I have discussed many of them in the past. They may often be found in different chapters of string theory textbooks. Here, I want to emphasize their general spirit and where this spirit comes from. Quantum mechanics is absolutely essential for this phenomenon.

Why is it so? Why don't we see almost any of these enhanced symmetries, dualities, and equivalences between descriptions in classical physics? An easy answer is unlikely to be a rigorous proof but it may be rather apt, anyway. My simplest explanation would be: You don't see dualities and other things in classical physics because classical physics allows you the "infinite sharpness and resolution" which means that if two things look different, they almost certainly are different.

(Well, some symmetries do exist classically. For example, Maxwell's equations – with added magnetic monopoles or subtracted electric charges – have the symmetry of exchanging the electric fields with the magnetic fields, \(\vec E\to \vec B\), \(\vec B\to -\vec E\). This is a classical seed of the stringy S-dualities – and of stringy T-dualities if the electromagnetic duality is performed on a world sheet. But quantum mechanics is needed for the electromagnetic duality to work in the presence of particles with well-defined non-zero charges in the S-duality case; and in the presence of quantized stringy winding charges in the T-duality example because the T-dual momenta have to be quantized as well.)

On the other hand, quantum mechanics brings you the uncertainty principle which introduces some fog and fuzziness. The objects don't have sharp boundaries and shapes given by ordinary classical functions. Instead, the boundaries are fuzzy and may be interpreted in various ways. It doesn't mean that the whole theory is ill-defined. Quantum mechanics is completely quantitative and allows an arbitrarily high precision.

Instead, the quantum mechanical description often leads to a discrete spectrum and allows you to describe all the "invariant" properties of an energy-like operator by its discrete spectrum – by several or countably many eigenvalues. And there are many classical models whose quantization may yield the same spectrum. The spectrum – perhaps with an extra information package that is still relatively small – may capture all the physically measurable, invariant properties of the physical theory.

We may see the seed of this multiplicity of descriptions in basic quantum mechanics. The multiplicity exists because there are many – and many clever – unitary transformations on the Hilbert space and many bases and clever bases we may pick. The Fourier-like transformation from one basis to another makes the theory look very different than before. Such integral transformations would be very unnatural in classical physics because they would map a local theory to a non-local one. But in quantum mechanics, both descriptions may often be equally local.

OK, so string theory, due to its being a special theory that maximizes the number of clever ways in which the novel features of quantum mechanics are exploited, is the world champion in predicting things that were believed to be "irreducible assumptions whose 'why' questions could never be answered by science" and in allowing new perspectives on the same physical phenomena. String theory allows one to derive the spacetime dimension, the spectrum of elementary particles (given some discrete information about the choice of the compactification, a vacuum solution of the stringy equations), and it allows you to describe the same physics by bosonized or fermionized descriptions, descriptions related by S-dualities, T-dualities (including mirror symmetries), U-dualities, string-string-dualities which exhibit enhanced gauge symmetries, holography as in the AdS/CFT correspondence, the matrix model description representing any system as a state of bound D-branes with off-diagonal matrix entries for each coordinate, the ER-EPR correspondence for black holes, and many other things.

If you feel why quantum mechanics smells like progress relative to classical physics, string theory should smell like progress relative to the previous quantum mechanical theories because the "quantum mechanical thinking" is applied even to things that were envisioned as independent classical assumptions. That's why string theory is quantum mechanics squared, quantum mechanics with an X-factor, or quantum mechanics on steroids. Deep thinkers who have loved the quantum revolution and who have looked into string theory carefully are likely to end up loving string theory, and those who have had psychological problems with quantum mechanics must have even worse problems with string theory.

Throughout the text above, I have repeatedly said that "quantum mechanics is applied to new properties and objects" within string theory. When I was proofreading my remarks, I felt uneasy about these formulations because the comment about the "application" indicates that we just wanted to use quantum mechanics more universally and seriously, and it was guaranteed that we could have done so. But this isn't the case. The existence of string theory (where the deeper derivations of seemingly irreducible classical assumptions about the world may arise) is a sort of a miracle, much like the existence of quantum mechanics itself. (Well, a miracle squared.) Before 1925, people didn't know quantum mechanics. They didn't know it was possible. But it was possible. Quantum mechanics was discovered as a highly constrained, qualitatively different replacement for classical physics that nevertheless agrees with the empirical data – and allows us to derive many more things correctly. In the same way, string theory is a replacement for local quantum field theories that works in almost the same way but not quite. Just like quantum mechanics allows us to derive the spectrum and states of atoms from a deeper starting point, string theory allows us to derive the properties of elementary particles and even the spacetime dimension and other things from an even deeper starting point. Like quantum mechanics itself, string theory feels like something important that wasn't invented or constructed by humans. It pre-existed and it was discovered.

by Luboš Motl (noreply@blogger.com) at September 10, 2018 03:33 PM

September 04, 2018

Clifford V. Johnson - Asymptotia

Beach Scene…


The working title for this was “when you forget to bring your camera on holiday...” but I know you won’t believe that's why I drew it! (This was actually a quick sketch done at the beach on Sunday, with a few tweaks added over dinner and some shadows added using iPad.)

I'm working toward doing finish work on a commissioned illustration for a magazine (I'll tell you about it more when I can - check instagram, etc., for updates/peeks), and am finding my drawing skills very rusty --so opportunities to do sketches, whenever I can find them, are very welcome.

-cvj Click to continue reading this post

The post Beach Scene… appeared first on Asymptotia.

by Clifford at September 04, 2018 09:08 PM

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

by Andrew at August 13, 2018 10:07 PM

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to take care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always the outcome when we set up something initially and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. Added to this is the modern idea in physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist because they are not bad, but rather explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis, and sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration: such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory), which I have not yet specified. Just guessing would indeed lead to a lot of frustration.

The thing which helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us something about how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this should require us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced out quite well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as was possible with the amount of computing power we had. This we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

by Axel Maas (noreply@blogger.com) at August 13, 2018 02:46 PM

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests, so do check those episodes out.

And there are more exciting episodes on the way. Enjoy, and spread the word!

by Sean Carroll at July 26, 2018 04:15 PM

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process; it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose).  So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resisters of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

by cormac at July 20, 2018 03:32 PM

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck give the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again. (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
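
To make the "approximation to the proper Bayesian formula" a little more concrete, here is a deliberately oversimplified sketch: a toy Gaussian band-power likelihood in Python, with made-up numbers of my own choosing. The real Planck likelihood is far more involved, handling correlated noise, foregrounds and the non-Gaussian statistics of the low-𝓁 data, so treat this as an illustration of the idea only.

import numpy as np

def toy_loglike(cl_model, cl_hat, sigma):
    # Gaussian log-likelihood of a model given "measured" band powers and their errors
    return -0.5 * np.sum(((cl_hat - cl_model) / sigma) ** 2)

cl_hat = np.array([5750.0, 5600.0, 2450.0])    # hypothetical measured band powers (muK^2)
sigma = np.array([60.0, 55.0, 30.0])           # hypothetical uncertainties on those band powers
cl_model = np.array([5700.0, 5650.0, 2500.0])  # band powers predicted by some trial parameter set
print(toy_loglike(cl_model, cl_hat, sigma))    # compare across parameter sets to find the best fit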

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us in km/s as it gets further away, measured in megaparsecs (Mpc)); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
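
To see why this counts as a significant discrepancy, it is enough to divide the difference by the combined uncertainty. The sketch below is a naive quadrature combination of the two quoted errors, not the careful statistical treatment in the actual papers:

import math

h0_local, err_local = 73.52, 1.62      # km/s/Mpc, the local distance-ladder value quoted above
h0_planck, err_planck = 67.27, 0.60    # km/s/Mpc, the Planck (model-dependent) value
tension = (h0_local - h0_planck) / math.sqrt(err_local**2 + err_planck**2)
print(round(tension, 1))               # ~3.6 sigma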

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

by Andrew at July 19, 2018 06:51 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at July 19, 2018 12:02 PM

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ? 
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. 

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC). 

read more

by Tommaso Dorigo at July 16, 2018 09:13 AM

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
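
To get a rough feel for how unlikely that excess is, one can treat the expected count as a Poisson mean and ask for the chance of seeing 20 or more events. This is a crude counting sketch using only the numbers quoted above; the collaboration's actual analysis uses a more sophisticated time-dependent likelihood.

from scipy import stats

expected = 7         # events expected from background in that window (the "6 or 7" above)
observed = 20        # events actually seen from the direction of the blazar
p_value = stats.poisson.sf(observed - 1, expected)   # P(N >= 20) for a Poisson mean of 7
print(p_value)       # ~4e-5, i.e. a bit under 4 sigma for this naive counting exercise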

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

 

by Matt Strassler at July 12, 2018 04:59 PM

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words on it. The big collaborations at CERN presented their latest results. I think the most relevant of these is the evidence (3\sigma) that the Standard Model is at odds with the measurement of spin correlations in top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are pursuing further improvements in precision. For the signal strength they give the following results. For ATLAS (see here)

\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})

and CMS (see here)

\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).

The news is that the errors have shrunk and the two results agree. They show a small excess, 13% and 17% respectively, but the overall result is consistent with the Standard Model.
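
As a rough way to see why this counts as only a small tension, one can symmetrize the quoted uncertainty components, add them in quadrature, and compare each value of \mu with the Standard Model expectation of 1. This is a naive sketch of my own, not the experiments' likelihood combination:

import math

def quad(*errs):
    # naive quadrature sum of (symmetrized) uncertainty components
    return math.sqrt(sum(e**2 for e in errs))

atlas = (1.13, quad(0.05, 0.05, 0.045, 0.03))   # stat, exp, signal theory (symmetrized), background theory
cms = (1.17, quad(0.06, 0.055, 0.06))           # stat, signal theory (symmetrized), other systematics

for name, (mu, err) in [("ATLAS", atlas), ("CMS", cms)]:
    print(f"{name}: mu = {mu:.2f} +/- {err:.2f}, about {(mu - 1) / err:.1f} sigma above mu = 1")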

When the result is unpacked into the respective contributions from different processes, CMS claims some tensions in the WW decay that should be kept under scrutiny in the future (see here). They presented results from 35.9{\rm fb}^{-1} of data, so there is no significant improvement, for the moment, with respect to the Moriond conference earlier this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS: in this case they observe some tensions, but these are all below 2\sigma (see here). For the WW decay, ATLAS does not see anything above 1\sigma (see here).

So, although there is something to keep an eye on as the data increase, reaching 100 {\rm fb}^{-1} this year, the Standard Model is in good health with respect to the Higgs sector, even if there is a lot still to be answered and precision measurements are the main tool. The correlation in the tt pair is absolutely promising, and we should hope it will be confirmed as a discovery.

 

by mfrasca at July 08, 2018 10:58 AM

July 04, 2018

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world,  particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin. 

read more

by Tommaso Dorigo at July 04, 2018 12:57 PM

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

 

 

by Sean Carroll at June 25, 2018 06:00 PM

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

Images from the garden party in the grounds of Lismore Castle

by cormac at June 24, 2018 08:19 PM

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant, Ry = α^2 m_e c/(2h).
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).
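
A quick sanity check on that quoted precision (a trivial sketch; the factor of two relative to the 0.4 ppb on the cesium mass comes from α entering the Rydberg formula squared):

inv_alpha, err = 137.035999046, 27e-9    # the value and uncertainty quoted above
print(err / inv_alpha * 1e9, "ppb")      # ~0.2 parts per billion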

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes  α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have g_e = 2(1 + a_e), where the anomaly is given in QED by a_e = α/2π + ...
Experimentally, g_e is one of the most precisely determined quantities in physics,  with the most recent measurement quoting a_e = 0.00115965218073(28), that is 0.0001 ppb accuracy on g_e, or 0.2 ppb accuracy on a_e. In the Standard Model, g_e is calculable as a function of α and other parameters. In the classical approximation g_e=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film  For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of g_e is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on a_e down to 0.2 ppb:  a_e = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.
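
As a quick illustration of how α feeds into this prediction, here is just the leading Schwinger term; this one-liner of mine obviously leaves out the five loops' worth of QED plus the hadronic and electroweak pieces that go into the number quoted above:

import math

alpha = 1 / 137.035999046              # the Berkeley-based value quoted above
a_e_one_loop = alpha / (2 * math.pi)   # Schwinger's one-loop QED term
print(a_e_one_loop)                    # ~0.0011614, already within about 0.2% of the measured 0.001159652...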

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable  theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which g_e is calculated, and could shift the observed value of a_e away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of a_e beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time.  Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit well the a_e measurement, which favors a new negative contribution. In fact, the a_e measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...

by Mad Hatter (noreply@blogger.com) at June 22, 2018 11:04 PM

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is a quite common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed jovian moon should still be able to reflect back some light picked up from the still lit other satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect ? Of course - and it's not that difficult.

read more

by Tommaso Dorigo at June 16, 2018 04:47 PM

June 12, 2018

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments are actually more complicated than what we usually assume. That they are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle which we observe and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one, plus a very tiny correction.

So far, this does not seem to be something where it is necessary to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model: that there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, unfortunately we did not yet discover anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far-fetched. Even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of a 'little bit'? So far, experiments have not shown any deviations from the usual picture. So the 'little bit' indeed needs to be rather small. But we have a calculation prescription for this 'little bit' for the standard model. So, at the very least what we can do is to make a calculation for this 'little bit' in the standard model. We should then see if the value of the 'little bit' may already be so large that the basic idea is ruled out, because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but well, experiment rules. And thus, we would need to go back to the drawing board, and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment, because it is smaller than the current experimental precision. But then we can make a statement about how much better experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it is out of reach within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. It will also not yield perfect results, but hopefully good enough ones. Also, how simple the calculations are depends strongly on the type of experiment. We have taken a first few steps, though for a type of experiment not (yet) available, but hopefully coming in about twenty years. There we saw that not only the type of experiment, but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. There, we still need a much better understanding of the underlying mathematics. That we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at experiments to please check these predictions. So, stay tuned.

By the way: This is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, there may be very, very many of them. If your idea passes this test: Great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have an idea which works with everything we know, use it to make a prediction where it differs from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: Great! You have just rewritten our understanding of nature. If not: Well, go back to fix it or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But at this stage we are a little short of those. That may change again. If your theory has no predictions which can be tested experimentally in any foreseeable future, well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

by Axel Maas (noreply@blogger.com) at June 12, 2018 10:49 AM

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations. 

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations. 

read more

by Tommaso Dorigo at June 10, 2018 11:18 AM

June 09, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
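
The kinematic point can be made with one line of arithmetic: in elastic scattering the maximum recoil energy is E_max = 2μ²v²/m_N, with μ the dark-matter-nucleus reduced mass. The numbers below are illustrative choices of mine, not any experiment's threshold model:

m_dm = 1.0           # GeV: a dark matter particle at the light end of the classic WIMP window
m_xe = 122.0         # GeV: a xenon nucleus
v = 1e-3             # typical galactic halo velocity, in units of c
mu = m_dm * m_xe / (m_dm + m_xe)           # reduced mass
e_max_kev = 2 * mu**2 * v**2 / m_xe * 1e6  # GeV -> keV
print(e_max_kev)     # ~0.02 keV, far below the keV-scale nuclear-recoil thresholds of xenon detectors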

Sometimes progress consists in realizing that you know nothing Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problem of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPS and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles. 
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space).  Nevertheless, dedicated experiments will soon  be taking over. Recently, two collaborations published first results from their prototype detectors:  one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor.  Both are sensitive to eV energy depositions, thanks to which they can extend the search region to lower dark matter mass regions, and set novel limits in the virgin territory between 0.5 and 5 MeV.  A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

by Mad Hatter (noreply@blogger.com) at June 09, 2018 05:39 PM

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation -  the general relativity -  has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).   

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about the De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with the similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...           

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV.  But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, the cutoff is of order 𝞚max ~ (m^2 MPl)^1/3, which for the allowed graviton mass works out to roughly 10^-12 eV.
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments,  it is relevant for the  movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
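
A quick numerical cross-check of the ~300 km statement, taking the cutoff to be the (m^2 MPl)^1/3 combination above; the inputs are round illustrative values of mine, not precise constants:

hbar_c = 1.97e-7                 # eV * m
m_graviton = 1e-32               # eV: roughly the experimental upper limit quoted earlier
M_Pl = 1.2e28                    # eV: the Planck mass
cutoff = (m_graviton**2 * M_Pl) ** (1.0 / 3.0)
print(cutoff, hbar_c / cutoff)   # ~1e-12 eV, i.e. a length of about 2*10^5 m, a few hundred km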

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed  in effective theories.  Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale,  parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area  excluded theoretically  and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 order of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.   

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

by Mad Hatter (noreply@blogger.com) at June 08, 2018 08:35 AM

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.
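
A one-line check of the statement about the absolute mass scale, assuming (as is conventional in this kind of counting) that the lightest state is much lighter than the splittings:

import math

dm21_sq, dm31_sq = 7.5e-5, 2.5e-3              # eV^2, the splittings quoted above
print(math.sqrt(dm21_sq), math.sqrt(dm31_sq))  # ~0.009 eV and ~0.05 eV: all three masses below ~0.1 eV,
                                               # with their sum comfortably under the cosmological 0.2 eV bound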


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ→νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ→νe oscillations should be unobservable in short-baseline (L ≼ 1 km) experiments. The MiniBooNE experiment at Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector L~500 meters away. In general, neutrinos can change their flavor with a probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector, given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is, peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies, with a shape similar to that of the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
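To get a feel for the numbers, here is a minimal two-flavor sketch in Python. It is only illustrative: the appearance_prob helper is mine, the Δm^2 ~ 0.5 eV^2 and sin^2(2θ) ~ 0.01 values are the appearance-fit region quoted later in the post, and the LSND baseline and energy (roughly 30 m and 40 MeV) are assumptions not taken from this text:

import math

def appearance_prob(sin2_2theta, dm2_eV2, L_km, E_GeV):
    # Two-flavor short-baseline appearance probability:
    # P = sin^2(2θ) * sin^2(1.27 * Δm^2[eV^2] * L[km] / E[GeV])
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

sin2_2theta, dm2 = 0.01, 0.5   # illustrative sterile-neutrino fit values (eV^2)

# Rough baselines and energies; LSND numbers are assumptions for illustration.
for name, L_km, E_GeV in [("LSND", 0.030, 0.040), ("MiniBooNE", 0.50, 0.8)]:
    P = appearance_prob(sin2_2theta, dm2, L_km, E_GeV)
    print(f"{name}: L/E = {L_km/E_GeV:.2f} km/GeV, P ≈ {P:.1e}")

The point of the sketch is simply that the two experiments sit at comparable L/E, so an oscillation explaining one should show up in the other.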

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with a mass in the eV ballpark, in which case MiniBooNE would be observing the νμ→νs→νe oscillation chain. With the recent MiniBooNE update the evidence for electron neutrino appearance has increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in a sterile-neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA), which did not see any excess of νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually puts more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles... Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2 ~ 0.5 eV^2 and a mixing angle sin^2(2θ) of order 0.01.

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus, whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is

P(νμ→νe) ≈ 4 |Uμ4|^2 |Ue4|^2 sin^2(Δm41^2 L/4E),

where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability,

P(νμ→νμ) ≈ 1 - 4 |Uμ4|^2 (1 - |Uμ4|^2) sin^2(Δm41^2 L/4E).

The νμ disappearance data from MINOS and IceCube imply |Uμ4| ≼ 0.1, while solar neutrino observations give |Ue4| ≼ 0.25. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) = 4|Uμ4|^2|Ue4|^2 controlling the νμ→νs→νe oscillation must be much smaller than the ~0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data already existed before, and was actually made worse by the MiniBooNE update.
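A small numerical check of this tension, simply taking the quoted bounds at face value (the variable names are mine; the actual global fits are considerably more constraining than this most optimistic corner):

# Appearance-vs-disappearance tension in the 3+1 scheme:
# sin^2(2θ_μe) = 4 |Uμ4|^2 |Ue4|^2
U_mu4_max = 0.1     # from MINOS/IceCube νμ disappearance (quoted above)
U_e4_max = 0.25     # from solar neutrino observations (quoted above)

sin2_2theta_max = 4 * U_mu4_max**2 * U_e4_max**2
sin2_2theta_needed = 0.01   # rough value needed to fit the MiniBooNE excess

print(f"Maximum allowed sin^2(2θ_μe) ≈ {sin2_2theta_max:.4f}")
print(f"Needed for the MiniBooNE fit ≈ {sin2_2theta_needed}")
print(f"Shortfall: factor ~{sin2_2theta_needed / sin2_2theta_max:.0f}")

Even with both matrix elements pushed to the edge of their limits, the allowed effective angle falls short of what the appearance fit needs.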
So the hypothesis of a 4th, sterile neutrino does not stand up to scrutiny as an explanation of the MiniBooNE anomaly. That does not mean there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be settled by the SBN program at Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

by Mad Hatter (noreply@blogger.com) at June 07, 2018 01:20 PM

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the WIMP dark matter direct detection front is as follows.

WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping that one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and PandaX experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected, due to a small excess of events observed by XENON1T which is probably just a 1 sigma upward fluctuation of the background.
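A trivial unit-bookkeeping sketch for the quoted numbers; note that the "implied 1980s sensitivity" below is just the stated factor of 10^6 applied to the current limit, not an independent figure:

# Unit conversion for the quoted XENON1T limit: 40 yoctobarn at 40 GeV.
# 1 barn = 1e-24 cm^2, so 1 yb = 1e-24 barn = 1e-48 cm^2 = 1e-12 pb.
yb_to_cm2 = 1e-48
yb_to_pb = 1e-12

limit_yb = 40.0
print(f"XENON1T limit: {limit_yb*yb_to_cm2:.1e} cm^2 = {limit_yb*yb_to_pb:.1e} pb")

# Applying the quoted factor-of-1e6 improvement over the 1980s Homestake search:
print(f"Implied 1980s sensitivity: ~{limit_yb*yb_to_cm2*1e6:.0e} cm^2")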

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found already in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could participate in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space, where WIMPs couple with order-one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next-generation xenon detectors XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that there is no prefix smaller than yocto; it will reach the irreducible background from atmospheric neutrinos, after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick above the neutrino sea?

by Mad Hatter (noreply@blogger.com) at June 01, 2018 05:30 PM

Tommaso Dorigo - Scientificblogging

MiniBoone Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. Twenty years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning into one another as they propagate in vacuum and in matter, the jury is still out on what really is the matter with them. And a new result by the MiniBooNE collaboration is stirring the waters once more.


by Tommaso Dorigo at June 01, 2018 12:49 PM