Particle Physics Planet


March 23, 2017

Christian P. Robert - xi'an's og

parameter space for mixture models

“The paper defines a new solution to the problem of defining a suitable parameter space for mixture models.”

When I received the table of contents of the incoming Statistics & Computing and saw a paper by V. Maroufy and P. Marriott about the above, I was quite excited about a new approach to mixture parameterisation. Especially after our recent reposting of the weakly informative reparameterisation paper. Alas, after reading the paper, I fail to see the (statistical) point of the whole exercise.

Starting from the basic fact that mixtures face many identifiability issues, not only invariance under component permutation but also the possibility of adding spurious components, the authors move to an entirely different galaxy by defining mixtures of so-called local mixtures, a notion developed by one of the authors. The notion is just incomprehensible to me: the object is a weighted sum of the basic component of the original mixture, e.g., a Normal density, and of k of its derivatives wrt its mean, a sort of parameterised Taylor expansion. Which incidentally implies the parameter is unidimensional. The weights of this strange mixture are furthermore constrained by the positivity of the resulting mixture, a constraint that seems impossible to satisfy in the Normal case when the number of derivatives is odd. And hard to analyse in any case since possibly negative components do not enjoy an interpretation as a probability density. In exponential families, the local mixture is the original exponential family density multiplied by a polynomial. The current paper moves one step further [from the reasonable] by considering mixtures [in the standard sense] of such objects, whose components are parameterised by their mean parameter and a collection of weights. The authors then restrict the mean parameters to belong to a finite and fixed set, whose elements are constrained by a maximum error rate on any compound distribution derived from this exponential family structure. The remainder of the paper discusses the choice of the mean parameters and an EM algorithm to estimate the parameters, with a confusing lower bound on the mixture weights that impacts their estimation. And no mention is made of the positivity constraint. I remain completely bemused by the paper and its purpose: I do not even fathom how this qualifies as a mixture.
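To make the object concrete, here is a minimal numerical sketch of such a "local mixture" in the Normal case (my illustration, not the authors' construction or code): a N(0,1) component plus its first two derivatives with respect to the mean, with arbitrary illustrative weights (0.8, 0.1) chosen so that the positivity constraint discussed above actually fails:

    import numpy as np

    mu, lam = 0.0, (0.8, 0.1)                  # mean, plus illustrative derivative weights
    x = np.linspace(-6.0, 6.0, 2001)

    phi = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)  # N(mu, 1) density
    d1 = (x - mu) * phi                        # d/dmu of the density
    d2 = ((x - mu) ** 2 - 1.0) * phi           # d^2/dmu^2 of the density

    local_mix = phi + lam[0] * d1 + lam[1] * d2      # the "parameterised Taylor expansion"
    print("minimum on the grid:", local_mix.min())   # negative here: positivity fails
    print("integral (Riemann sum):", local_mix.sum() * (x[1] - x[0]))  # still about 1

The object integrates to one (the derivative terms integrate to zero) yet dips below zero for these weights, which is exactly the interpretability issue raised above.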


Filed under: Statistics, University life Tagged: mixtures of distributions, reparameterisation, Statistics and Computing, Taylor expansion

by xi'an at March 23, 2017 11:17 PM

Peter Coles - In the Dark

London looking back

I thought I’d do a quick post as a reaction to yesterday’s terrible events in London, in which four people lost their lives and several are still critically injured. We now know that the attacker was British and that he was known to the intelligence services. He appears to have acted alone: armed with knives, he drove an ordinary car onto the pavement, hitting a number of people, before crashing the car and stabbing a police officer to death; he was then himself shot and killed. Whatever his motivations were, it looks more likely on the basis of information currently available that these were the actions of a crazed individual than part of an international terrorist conspiracy. We should, however, avoid jumping to conclusions and wait for the investigation to be completed.

The first thing I want to do is to express my condolences to the families and friends of those who lost their lives. My thoughts are also with those who were critically injured and I hope with all my heart that they will all recover speedily and completely. Physical healing will take time, but they will need help, support and time  to come to terms with the mental trauma too. The same is true for those who were caught up in this attack and received minor injuries or even just witnessed what happened, because they must have been shocked by the experience. I hope they receive all the help they need at what must be a very difficult time.

The second point is that it’s clear that the police and other emergency services acted with great courage and professionalism yesterday. One policeman sadly died, but the swift actions of his colleagues prevented further loss of life. Ambulances, paramedics and members of the public all responded magnificently to care for those injured, and we shall probably find that their response saved many lives too. They deserve all our thanks.

Finally, I noticed a number of ill-informed comments on Twitter from the usual gang of Far-Right hate-mongers, especially professional troll Katie Hopkins, claiming that London was “cowed” and “afraid” by this attack. I don’t believe that for one minute, and I want to explain why.

I lived in London for about eight years (between 1990 and 1998). During that time I found myself in relatively close proximity to three major bomb explosions, though fortunately I wasn’t close enough to be actually harmed. I also concluded that my proximity to these events was purely coincidental…

The first, in 1993, was the Bishopsgate Bombing. I happened to be looking out of the kitchen window of my flat in Bethnal Green when that bomb went off. I had a clear view across Weavers Fields towards the City of London and saw the explosion happen. I heard it too, several seconds later, loud enough to set off the car alarms in the car park beneath my window.

This picture, from the relevant Wikipedia page, shows the devastation of the area affected by the blast.

The other two came in quick succession. First, a large bomb exploded in London Docklands on Friday 9th February 1996, at around 5pm, when our regular weekly Astronomy seminar was just about to finish at Queen Mary College on the Mile End Road. We were only a couple of miles from the blast, but I don’t remember hearing anything and it was only later that I found out what had happened.

Then, on the evening of Sunday 18th February 1996, I was in a fairly long queue trying to get into a night club in Covent Garden when there was a loud bang followed by a tinkling sound caused by pieces of glass falling to the ground. It sounded very close but I was in a narrow street surrounded by tall buildings and it was hard to figure out which direction the sound had come from. It turned out that someone had accidentally detonated a bomb on a bus in Aldwych, apparently en route to plant it somewhere else (probably King’s Cross). What I remember most about that evening was that it took me a very long time to get home. Several blocks around the site of the explosion were cordoned off. I lived in the East End, on the wrong side of the sealed-off area, so I had to find a way around it before heading home. No buses or taxis were to be found so I had to walk all the way. I arrived home in the early hours of the morning.

Anyway, my point is that amid these awful terrorist atrocities of the 1990s, people were not “cowed” or “afraid”. Londoners are made of sterner stuff than that. It is true that one’s immediate response when confronted with, e.g., a bomb explosion is to be a bit rattled. I’m sure that was true for many Londoners yesterday. That soon gives way to a determination to get on with your life and not let the bastards win. The events of the 1990s gave us a London of road blocks, security barriers and many other irritating inconveniences, but they did not bring the city to a standstill, as some have suggested happened yesterday. For the most part it was “business as usual”.

I don’t live in London anymore, but I think Londoners are as unlikely to be frightened today as they were back then. And it will take much more than one man to “shut down the city”. As a matter of fact, I think only a coward would suggest otherwise.


by telescoper at March 23, 2017 04:12 PM

Emily Lakdawalla - The Planetary Society Blog

A repeat of the space shuttle's bold test flight? NASA considers crew aboard first SLS mission
NASA has only flown astronauts aboard a rocket's first flight once, when John Young and Bob Crippen took space shuttle Columbia on the boldest test flight in history. What are the risks of repeating the feat for SLS?

March 23, 2017 02:37 PM

Peter Coles - In the Dark

Composed upon Westminster Bridge, September 3 1802, by William Wordsworth

Earth has not anything to show more fair:
Dull would he be of soul who could pass by
A sight so touching in its majesty:
This City now doth, like a garment, wear
The beauty of the morning; silent, bare,
Ships, towers, domes, theatres, and temples lie
Open unto the fields, and to the sky;
All bright and glittering in the smokeless air.
Never did sun more beautifully steep
In his first splendour, valley, rock, or hill;
Ne’er saw I, never felt, a calm so deep!
The river glideth at his own sweet will:
Dear God! the very houses seem asleep;
And all that mighty heart is lying still!

by William Wordsworth (1770-1850)

 


by telescoper at March 23, 2017 01:47 PM

Christian P. Robert - xi'an's og

and it only gets worse…

“Trump wants us to associate immigrants with criminality. That is the reason behind a weekly published list of immigrant crimes – the first of which was made public on Monday. Singling out the crimes of undocumented immigrants has one objective: to make people view them as deviant, dangerous and fundamentally undesirable. ” The Guardian, March 22, 2017

“`I didn’t want this job. I didn’t seek this job,’ Tillerson told the Independent Journal Review (IJR), in an interview (…) `My wife told me I’m supposed to do this.'” The Guardian, March 22, 2017

“…under the GOP plan, it estimated that 24 million people of all ages would lose coverage over 10 years (…) Trump’s plan, for instance, would cut $5.8 billion from the National Institutes of Health, an 18 percent drop for the $32 billion agency that funds much of the nation’s research into what causes different diseases and what it will take to treat them.” The New York Times, March 5, 2017


Filed under: Kids, pictures, Travel, University life Tagged: Donald Trump, GOP, ice, immigration, NIH, The Guardian, The New York Times, trumpism, US politics

by xi'an at March 23, 2017 01:18 PM

March 22, 2017

Christian P. Robert - xi'an's og

X-Outline of a Theory of Statistical Estimation

While visiting Warwick last week, Jean-Michel Marin pointed out and forwarded me this remarkable paper of Jerzy Neyman, published in 1937, and presented to the Royal Society by Harold Jeffreys.

“Leaving apart on one side the practical difficulty of achieving randomness and the meaning of this word when applied to actual experiments…”

“It may be useful to point out that although we are frequently witnessing controversies in which authors try to defend one or another system of the theory of probability as the only legitimate, I am of the opinion that several such theories may be and actually are legitimate, in spite of their occasionally contradicting one another. Each of these theories is based on some system of postulates, and so long as the postulates forming one particular system do not contradict each other and are sufficient to construct a theory, this is as legitimate as any other. “

This paper is fairly long in part because Neyman starts by setting out Kolmogorov’s axioms of probability. This is of historical interest but also needed for Neyman to oppose his notion of probability to Jeffreys’ (which is the same from a formal perspective, I believe!). He actually spends a fair chunk on explaining why constants cannot have anything but trivial probability measures. Getting ready to state that an a priori distribution has no meaning (p.343) and that in the rare cases it does, it is mostly unknown. While reading the paper, I thought that the distinction was more in terms of frequentist or conditional properties of the estimators, Neyman’s arguments paving the way to his definition of a confidence interval. Assuming repeatability of the experiment under the same conditions and therefore the same parameter value (p.344).

“The advantage of the unbiassed [sic] estimates and the justification of their use lies in the fact that in cases frequently met the probability of their differing very much from the estimated parameters is small.”

“…the maximum likelihood estimates appear to be what could be called the best “almost unbiassed [sic]” estimates.”

It is also quite interesting to read that the principle for insisting on unbiasedness is one of producing small errors, because this is not that often the case, as shown by the complete class theorems of Wald (ten years later). And that maximum likelihood is somewhat relegated to a secondary rank, almost unbiased being understood as consistent. A most amusing part of the paper is when Neyman inverts the credible set into a confidence set, that is, turning what is random into a constant and vice-versa. With a justification that the credible interval has zero or one coverage, while the confidence interval has a long-run validity of returning the correct rate of success. What is equally amusing is that the boundaries of a credible interval turn into functions of the sample, hence could be evaluated on a frequentist basis, as done later by Dennis Lindley and others like Welch and Peers, but that Neyman fails to see this and turns the bounds into hard values. For a given sample.

“This, however, is not always the case, and in general there are two or more systems of confidence intervals possible corresponding to the same confidence coefficient α, such that for certain sample points, E’, the intervals in one system are shorter than those in the other, while for some other sample points, E”, the reverse is true.”

The resulting construction of a confidence interval is then awfully convoluted when compared with the derivation of an HPD region, going through regions of acceptance that are the dual of a confidence interval (in the sampling space), while apparently [from my hasty read] missing a rule to order them. And rejecting the notion of a confidence interval being possibly empty, which, while being of practical interest, clashes with its frequentist backup.
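As an aside to the inversion discussed above, the long-run reading of a confidence interval is easy to illustrate numerically; the following is a minimal simulation sketch (mine, not Neyman’s), assuming a N(θ,1) sample of size 20 and the usual 95% interval:

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 1.3, 20, 100_000
    x = rng.normal(theta, 1.0, size=(reps, n))
    xbar = x.mean(axis=1)
    half = 1.96 / np.sqrt(n)                      # half-width of the 95% interval
    covered = (xbar - half <= theta) & (theta <= xbar + half)
    print("empirical coverage:", covered.mean())  # close to 0.95

For any single realised interval, θ is either covered or not; only the procedure carries the 95% guarantee, which is precisely the point of contention between the credible and confidence readings above.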


Filed under: Books, Statistics, University life Tagged: Bayesian Analysis, confidence intervals, credible intervals, Dennis Lindley, Harold Jeffreys, inference, Jerzy Neyman, maximum likelihood estimation, unbiasedness, University of Warwick, X-Outline

by xi'an at March 22, 2017 11:17 PM

Peter Coles - In the Dark

Keep Calm and Carry On

I had just finished my biggest task of the day and stopped to make a cup of tea, when I caught the news of a serious incident on Westminster Bridge in London, at which it seems several lives have been lost.

My thoughts are with my friends and colleagues in London at this very scary time and, above all,  with those who have been affected directly by this terrible event.

I hope everyone will keep as calm as possible and avoid jumping to conclusions about who is responsible, and let the police and security services get on with doing their job.


by telescoper at March 22, 2017 04:20 PM

The n-Category Cafe

Functional Equations VII: The p-Norms

The $p$-norms have a nice multiplicativity property:

$$\|(A x, A y, A z, B x, B y, B z)\|_p = \|(A, B)\|_p \, \|(x, y, z)\|_p$$

for all $A, B, x, y, z \in \mathbb{R}$ — and similarly, of course, for any numbers of arguments.

Guillaume Aubrun and Ion Nechita showed that this condition completely characterizes the $p$-norms. In other words, any system of norms that’s multiplicative in this sense must be equal to $\|\cdot\|_p$ for some $p \in [1, \infty]$. And the amazing thing is, to prove this, they used some nontrivial probability theory.
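As a quick sanity check of the identity (an illustration of mine, not part of the notes): with $u = (A, B)$ and $v = (x, y, z)$, the vector on the left is just the flattened outer product of $u$ and $v$.

    import numpy as np

    rng = np.random.default_rng(1)
    u, v = rng.normal(size=2), rng.normal(size=3)        # (A, B) and (x, y, z)
    for p in (1.0, 1.5, 2.0, 3.0, np.inf):
        lhs = np.linalg.norm(np.outer(u, v).ravel(), ord=p)   # ||(Ax, Ay, Az, Bx, By, Bz)||_p
        rhs = np.linalg.norm(u, ord=p) * np.linalg.norm(v, ord=p)
        print(p, np.isclose(lhs, rhs))                   # True for every p in [1, inf]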

All this is explained in this week’s functional equations notes, which start on page 26 here.

by leinster (Tom.Leinster@ed.ac.uk) at March 22, 2017 01:21 AM

March 21, 2017

Christian P. Robert - xi'an's og

Russian roulette still rolling

Colin Wei and Iain Murray arXived a new version of their paper on doubly-intractable distributions, which is to be presented at AISTATS. It builds upon the Russian roulette estimator of Lyne et al. (2015), which itself exploits the debiasing technique of McLeish et al. (2011) [found earlier in the physics literature, as in Carter and Cashwell, 1975, according to the current paper]. Such an unbiased estimator of the inverse of the normalising constant can be used for pseudo-marginal MCMC, except that the estimator is sometimes negative and has to be so, as proved by Pierre Jacob and co-authors. As I discussed in my post on the Russian roulette estimator, replacing the negative estimate with its absolute value does not seem right because a negative value indicates that the quantity is close to zero, hence replacing it with zero would sound more appropriate. Wei and Murray start from the property that, while the expectation of the importance weight is equal to the normalising constant, the expectation of the inverse of the importance weight along an MCMC chain targeting the posterior converges to the inverse of the normalising constant. This however sounds like a harmonic mean estimate, because the property would also stand for any substitute to the importance density, as it only requires the density to integrate to one… As noted in the paper, the variance of the resulting roulette estimator “will be high” or even infinite. Following Glynn et al. (2014), the authors build a coupled version of that solution, whose key feature is to cut the higher order terms in the debiasing estimator. This does not guarantee finite variance or positivity of the estimate, though. In order to decrease the variance (assuming it is finite), backward coupling is introduced, with a Rao-Blackwellisation step using our 1996 Biometrika derivation. Which happens to be of lower cost than the standard Rao-Blackwellisation in that special case, O(N) versus O(N²), N being the stopping rule used in the debiasing estimator. Under the assumption that the inverse importance weight has finite expectation [wrt the importance density], the resulting backward-coupling Russian roulette estimator can be proven to be unbiased, as it enjoys a finite expectation. (As in the generalised harmonic mean case, the constraint imposes thinner tails on the importance function, which then hampers the convergence of the MCMC chain.) No mention is made of achieving finite variance for those estimators, which again is a serious concern due to the similarity with harmonic means…
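For readers unfamiliar with the debiasing device itself, here is a toy Python sketch (mine, not the Wei & Murray algorithm, and on an arbitrary toy target): Y(n) is a biased but consistent estimator of 1/μ, and the randomly truncated, reweighted telescoping sum is unbiased for its limit. This is the uncoupled version, whose potentially large variance is exactly what the coupling constructions discussed above aim to tame.

    import numpy as np

    rng = np.random.default_rng(2)
    mu = 2.0                                     # toy target: estimate 1/mu = 0.5

    def Y(n):
        """Biased, consistent estimator of 1/mu: inverse sample mean of Uniform(1,3) draws."""
        m = min(2 ** n, 1_000_000)               # doubling sample sizes, capped for memory
        return 1.0 / rng.uniform(1.0, 3.0, size=m).mean()

    def roulette(q=0.6):
        """One debiased draw: geometric truncation N, increments reweighted by P(N >= n)."""
        N = rng.geometric(1 - q)                 # P(N >= n) = q**(n - 1) for n = 1, 2, ...
        est = Y(0)
        for n in range(1, N + 1):
            est += (Y(n) - Y(n - 1)) / q ** (n - 1)
        return est

    draws = np.array([roulette() for _ in range(5_000)])
    print("debiased estimate:", draws.mean(), "(target 0.5)")    # roughly 0.5

Each increment is reweighted by the survival probability of the geometric truncation time, so the expectation telescopes to the limit of E[Y(n)] (the cap on the sample size only introduces a negligible truncation bias in this toy).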


Filed under: Statistics Tagged: AISTATS 2017, Biometrika, coupling, debiasing, doubly intractable problems, harmonic mean estimator, MCMC, MCMC algorithm, normalising constant, Peter Glynn, pseudo-marginal MCMC, Rao-Blackwellisation, Russian roulette

by xi'an at March 21, 2017 11:17 PM

Peter Coles - In the Dark

R.I.P. Colin Dexter (1930-2017)

I was saddened this afternoon to hear of the death, at the age of 86, of Colin Dexter, the novelist who created the character of  Inspector Morse, memorably played on the long-running TV series of the same name by John Thaw.

The television series of Inspector Morse came to an end in 2000, with a poignant episode called The Remorseful Day, but has led to two successful spin-offs, Lewis and Endeavour, both of which are still running.  Colin Dexter regularly appeared in both Inspector Morse and Lewis, mainly in non-speaking roles, and part of the fun of these programmes was trying to spot him in the background.

As a crime writer, Colin Dexter was definitely in the `English’ tradition of Agatha Christie, in that his detective stories relied more on cleverly convoluted plots than depth of characterization, but the central character of Morse was a brilliant creation in itself and is rightly celebrated. Crime fiction is too often undervalued in literary circles, but I find it a fascinating genre and Colin Dexter was a fine exponent.

Colin Dexter was also an avid solver of crossword puzzles, a characteristic shared by his Detective Inspector Morse. In fact I met Colin Dexter once, back in 2010, at a lunch to celebrate the 2000th Azed puzzle in the Observer, which I blogged about here.  Colin Dexter used to be a regular entrant – and often a winner – in Azed’s monthly clue-setting competition, but I haven’t seen his name among the winners for a while. You can see his outstanding record on the “&lit” archive here. I guess he retired from crosswords just as he had done from writing crime novels. To be honest, he seemed quite frail back in 2010 so I’m not surprised he decided to take it easy in his later years.

Incidentally, Colin Dexter took the name `Morse’ from his friend Jeremy Morse, another keen cruciverbalist. Sadly he passed away last year, at the age of 87. Jeremy Morse was another frequent winner of the Azed competition and he produced some really cracking clues – you can find them all on the “&lit” archive too.

Here’s a little cryptic tribute:

Morse inventor developed Nordic Telex (5,6)

Now I think I’ll head home to cook my traditional mid-week vegetable curry, have a glass of wine, and see if I can watch a DVD of the last episode of Inspector Morse without crying.

R.I.P. Norman Colin Dexter (1930-2017)


by telescoper at March 21, 2017 05:16 PM

Emily Lakdawalla - The Planetary Society Blog

Unraveling a Martian enigma: The hidden rivers of Arabia Terra
Arabia Terra has always been a bit of a martian enigma. Planetary scientist Joel Davis takes us on a tour of its valley networks and their significance in telling the story of water on Mars.

March 21, 2017 04:01 PM

Symmetrybreaking - Fermilab/SLAC

High-energy visionary

Meet Hernán Quintana Godoy, the scientist who made Chile central to international astronomy.

Professor Hernán Quintana Godoy has a way of taking the long view, peering back into the past through distant stars while looking ahead to the future of astronomy in his home, Chile. 

For three decades, Quintana has helped shape the landscape of astronomy in Chile, host to some of the largest ground-based observatories in the world.

In January he became the first recipient of the Education Prize of the American Astronomical Society from a country other than the United States or Canada.     

“Training the next generation of astronomers should not be limited to just a few countries,” says Keely Finkelstein, former chair of the AAS Education Prize Committee. “[Quintana] has been a tireless advocate for establishing excellent education and research programs in Chile.” 

Quintana earned his doctorate from the University of Cambridge in the United Kingdom in 1973. The same year, a military junta headed by General Augusto Pinochet took power in a coup d’état. 

Quintana came home and secured a teaching position at the University of Chile. At the time, Chilean researchers mainly focused on the fundamentals of astronomy—measuring the radiation from stars and calculating the coordinates of celestial objects. By contrast, Quintana’s dissertation on high-energy phenomena seemed downright radical. 

A year and a half after taking his new job, Quintana was granted a leave of absence to complete a post-doc abroad. Writing from the United States, Quintana published an article encouraging Chile to take better advantage of its existing international observatories. He urged the government to provide more funding and to create an environment that would encourage foreign-educated astronomers to return home to Chile after their postgraduate studies. The article did not go over well with the administration at his university.

“I wrote it for a magazine that was clearly against Pinochet,” Quintana says. “The magazine cover was a black page with a big ‘NO’ in red” related to an upcoming referendum.

The University of Chile dissolved Quintana’s teaching position.

Quintana became a wandering postdoc and research associate in Europe, the US and Canada. It wasn’t until 1981 that Quintana returned to teach at the Physics Institute at Pontifical Catholic University of Chile. 

He continued to push the envelope at PUC. He created elective courses on general astronomy, extragalactic astrophysics and cluster dynamics. He revived and directed a small astronomy group. He encouraged students to expand their horizons by hiring both Chilean and foreign teachers and sending students to study abroad.

“Because of him I took advantage of most of the big observatories in Chile and had an international perspective of research from the very beginning of my career,” says Amelia Ramirez, who studied with Quintana in 1983. A specialist in interacting elliptical galaxies, she is now head of Research and Development at the University of La Serena.

In the mid-1980s Quintana became the scriptwriter for a set of distance learning astronomy classes produced by the educational division of his university’s public TV channel, TELEDUC. He challenged his viewers to take on advanced topics—and they responded.

 

[Illustration by Corinne Mucha]

“I even introduced two episodes on relativity theory,” Quintana says. “This shocked them. The reception was so good that I wrote a whole book on the subject.” 

The station partnered with universities and institutions across Chile to provide viewers the opportunity to earn a diploma by taking a written test based on the televised material. More than 5000 people enrolled during the four-year broadcasting period. 

“What stands out [about Quintana] is his strategic vision and his creativity to materialize projects,” says Alejandro Clocchiatti, a professor at PUC who worked with Quintana for 20 years. “All he does is with dedication and enthusiasm, even if things don’t go according to plan. He’s got an unbeatable optimism.” 

Over the years, Quintana has had a hand in planning the locations of multiple new telescopes in Chile. In 1994 he guided an expedition to identify the location of the Atacama Large Millimeter Array, a collection of 66 high-precision antennae.

In 1998, PUC finally responded to decades of advocating by Quintana and his colleagues and opened a new major in astronomy. Gradually more universities followed suit. 

Quintana retired three years ago. He is optimistic about the future of Chilean astronomy. It has grown from a collection of 25 professors and their students in the late ’90s to a community of more than 800 students, teachers and researchers.

He says he is looking forward to the discoveries that forthcoming instruments will bring. The European Extremely Large Telescope, under construction on Cerro Armazones in the Atacama Desert of northern Chile, is expected to produce images 16 times sharper than Hubble’s. The southern facilities of the Cherenkov Telescope Array, a planned collection of 99 telescopes in Chile, will complement a northern array to complete the world’s most sensitive high-energy gamma-ray observatory. Both arrays will peer into super-massive black holes, the atmospheres of extra-solar planets, and the origin of relativistic cosmic particles.

“Everything in our universe is constantly changing,” Quintana says. “We are all heirs of that structural evolution.”

by Oscar Miyamoto at March 21, 2017 02:17 PM

The n-Category Cafe

On the Operads of J. P. May

Guest post by Simon Cho

We continue the Kan Extension Seminar II with Max Kelly’s On the operads of J. P. May. As we will see, the main message of the paper is that (symmetric) operads enriched in a suitably nice category $\mathcal{V}$ arise naturally as monoids for a “substitution product” in the monoidal category $[\mathbf{P}, \mathcal{V}]$ (where $\mathbf{P}$ is a category that keeps track of the symmetry). Before we begin, I want to thank the organizers and participants of the Kan Extension Seminar (II) for the opportunity to read and discuss these nice papers with them.

Some time ago, in her excellent post about Hyland and Power’s paper, Evangelia described what Lawvere theories are about. We might think of Lawvere theories as a way to frame algebraic structure by stratifying the different components of an algebraic structure into roughly three ascending levels of specificity: the product structure, the specific algebraic operations (meaning, other than projections, etc.), and the models of that algebraic structure. These structures are manifested categorically through (respectively) the category $\aleph_0^{\text{op}}$ of finite sets and (the duals of) maps between them, a category $\mathcal{L}$ with finite products that has the same objects as $\aleph_0$, and some other category $\mathcal{C}$ with finite products. Then a Lawvere theory is just a strict product preserving functor $I: \aleph_0^{\text{op}} \rightarrow \mathcal{L}$, and a model or interpretation of a Lawvere theory is a (non-strict) product preserving functor $M: \mathcal{L} \rightarrow \mathcal{C}$.

Thus $\aleph_0^{\text{op}}$ specifies the bare product structure (with the attendant projections, etc.) which gives us a notion of what it means to be “$n$-ary” for some given $n$; $I$ then transfers this notion of arity to the category $\mathcal{L}$, whose shape describes the specific algebraic structure in question (think of the diagrams one uses to categorically define the group axioms, for example); $M$ then gives a particular manifestation of the algebraic structure $\mathcal{L}$ on an object $M \circ I (1) \in \mathcal{C}$.

The reason I bring this up is that I like to think of operads as what results when we make the following change of perspective on Lawvere theories: whereas models of Lawvere theories are essentially given by specifying a “ground set of elements” $A \in \mathcal{C}$ and taking as the $n$-ary operations morphisms $A^n \rightarrow A$, we now consider a hypothetical category whose ($n$-indexed) objects themselves are the homsets $\mathcal{C}(A^n, A)$, along with some machinery that keeps track of what happens when we permute the argument slots.

Cosmos structure on $[\mathbf{P}, \mathcal{V}]$

More precisely, consider the category $\mathbf{P}$ with objects the natural numbers, and morphisms $\mathbf{P}(m,n)$ given by $\mathbf{P}(n,n) = \Sigma_n$ (the symmetric group on $n$ letters) and $\mathbf{P}(m,n) = \emptyset$ for $m \neq n$.

Let $\mathcal{V}$ be a cosmos, that is, a complete and cocomplete symmetric monoidal closed category with identity $I$ and internal hom $[-,-]$.

Fix $A \in \mathcal{V}$. The assignment $n \mapsto [A^{\otimes n}, A]$ defines a functor $\mathbf{P} \rightarrow \mathcal{V}$ (where functoriality in $\mathbf{P}$ comes from the symmetry of the tensor product in $\mathcal{V}$). This turns out to be a typical example of a $\mathcal{V}$-operad, which we call the “endomorphism operad” on $A$. In order to actually define what an operad is, we need to lay some groundwork.

(A point of notation: we will henceforth denote $A^{\otimes n}$ by $A^n$.)

We’ll need the fact that the functor $\mathcal{V}(I, -): \mathcal{V} \rightarrow \textbf{Sets}$ has a left adjoint $F$ given by $FX = \coprod_X I$. $F$ takes the product to the tensor product (since it’s a left adjoint and the tensor product in $\mathcal{V}$ distributes over coproducts), and in fact we can assume that it does so strictly. Henceforth for $X \in \textbf{Sets}$ and $A \in \mathcal{V}$ we write $X \otimes A$ to actually mean $FX \otimes A$.

We then get a cosmos structure on the functor category $\mathcal{F} = [\mathbf{P}, \mathcal{V}]$, given by Day convolution: for $T, S \in \mathcal{F}$ we have $$T \otimes S = \int^{m,n} \mathbf{P}(m+n, - ) \otimes Tm \otimes Sn$$ Since we are thinking of a given $T \in \mathcal{F}$ as a collection of operations (indexed by arity) on which we can act by permuting the argument slots, we can think of $(T \otimes S) k$ as a collection of the $k$-ary operations that we obtain by freely permuting $m$ argument slots of type $T$ and $n$ argument slots of type $S$ (where $m,n$ range over all pairs such that $m+n = k$), modulo respecting the previously given actions of $\Sigma_m$ (resp. $\Sigma_n$) on $Tm$ (resp. $Sn$).
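For instance, at arity $k = 2$ the coend only sees the pairs $(m,n)$ with $m + n = 2$, and (this unpacking is mine, but it follows directly from the formula above) it reduces to $$(T \otimes S)2 \;\simeq\; \big(T0 \otimes S2\big) \;\sqcup\; \big(\Sigma_2 \otimes T1 \otimes S1\big) \;\sqcup\; \big(T2 \otimes S0\big),$$ where $\sqcup$ is the coproduct in $\mathcal{V}$: the middle summand records the two ways of assigning the two argument slots to the single $T$-slot and the single $S$-slot, while in the outer summands the free $\Sigma_2$ factor is absorbed by the given $\Sigma_2$-actions on $S2$ and $T2$.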

The identity is then given by $\mathbf{P}(0,-) \otimes I$.

Associativity and symmetry of the cosmos structure. Now let $T, S, R \in \mathcal{F}$. If we unpack the definition, draw out some diagrams, and apply some abstract nonsense, we find that $$T \otimes (S \otimes R) \simeq (T \otimes S) \otimes R \simeq \int^{m,n,k} \mathbf{P}(m+n+k, - ) \otimes Tm \otimes Sn \otimes Rk$$ which we can again assume are actually equalities.

Before we address the symmetry of this monoidal structure, we make a technical point. $\mathbf{P}$ itself has a symmetric monoidal structure, given by addition. Thus for $n_1, \dots, n_m \in \mathbf{P}$ we have $n_1 + \cdots + n_m \in \mathbf{P}$. There is evidently an action of $\Sigma_m$ on this term, which we require to be in the “wrong” direction, so that $\xi \in \Sigma_m$ induces $\langle \xi \rangle: n_{\xi 1} + \cdots + n_{\xi m} \rightarrow n_1 + \cdots + n_m$ rather than the other way around.

(However, for the symmetry of the monoidal structure on $\mathcal{V}$, given a product $A_1 \otimes \cdots \otimes A_m$ we require that the action of $\Sigma_m$ on this term is in the “correct” direction, i.e. $\xi \in \Sigma_m$ induces $\langle \xi \rangle: A_1 \otimes \cdots \otimes A_m \rightarrow A_{\xi 1} \otimes \cdots \otimes A_{\xi m}$.)

We thus have:

$$\begin{matrix} T_1 \otimes \cdots \otimes T_m &=& \int^{n_1, \dots, n_m} \mathbf{P}(n_1 + \cdots + n_m, - ) \otimes T_1 n_1 \otimes \cdots \otimes T_m n_m\\ &&\\ {\langle \xi \rangle} \Big\downarrow && \Big\downarrow {\mathbf{P}(\langle \xi \rangle, -) \otimes \langle \xi \rangle}\\ &&\\ T_{\xi 1} \otimes \cdots \otimes T_{\xi m} &=& \int^{n_1, \dots, n_m} \mathbf{P}(n_{\xi 1} + \cdots + n_{\xi m}, - ) \otimes T_{\xi 1} n_{\xi 1} \otimes \cdots \otimes T_{\xi m} n_{\xi m} \end{matrix}$$

Now $\langle \xi \rangle: n_{\xi 1} + \cdots + n_{\xi m} \rightarrow n_1 + \cdots + n_m$ extends to an action $\langle \xi \rangle: T_1 \otimes \cdots \otimes T_m \rightarrow T_{\xi 1} \otimes \cdots \otimes T_{\xi m}$ as we saw previously. Therefore we now have a functor $\mathbf{P}^{\text{op}} \times \mathcal{F} \rightarrow \mathcal{F}$ given by $(m, T) \mapsto T^m$, a fact which we will later use.

$\mathcal{F}$ as a $\mathcal{V}$-category. There is a way in which we can regard $\mathcal{V}$ as a full coreflective subcategory of $\mathcal{F}$: consider the functor $\phi: \mathcal{F} \rightarrow \mathcal{V}$ given by $\phi T = T0$. This has a right adjoint $\psi: \mathcal{V} \rightarrow \mathcal{F}$ given by $\psi A = \mathbf{P}(0, -) \otimes A$.

The inclusion $\psi$ preserves all of the relevant monoidal structure, so we are justified in considering $A \in \mathcal{V}$ as either an object of $\mathcal{V}$ or of $\mathcal{F}$ (via the inclusion $\psi$). With this notation we can write, for $A \in \mathcal{V}$ and $T, S \in \mathcal{F}$: $$\mathcal{F}(A \otimes T, S) \simeq \mathcal{V}(A, [T,S])$$ If $T, S \in \mathcal{F}$ then their $\mathcal{F}$-valued hom is given by $[[T,S]]$, where for $k \in \mathbf{P}$ we have $$[[T,S]]k = \int_n [Tn, S(n+k)]$$ and their $\mathcal{V}$-valued hom, which makes $\mathcal{F}$ into a $\mathcal{V}$-category, is given by $$[T,S] = \phi [[T,S]] = \int_n [Tn, Sn]$$

The substitution product

Let us return to our motivating example of the endomorphism operad (which we denote by $\{A,A\}$) on $A$, for a fixed $A \in \mathcal{V}$. For now it’s just an object $\{A, A\} \in \mathcal{F}$; but it contains more structure than we’re currently using. Namely, for each $m, n_1, \dots, n_m \in \mathbf{P}$ we can give a morphism $$[A^m, A] \otimes \left( [A^{n_1}, A] \otimes \cdots \otimes [A^{n_m}, A] \right) \rightarrow [A^{n_1 + \cdots + n_m}, A]$$ coming from evaluation (see the section below about the little $n$-disks operad for details). We would like a general framework for expressing such a notion of composing operations.

Definition of an operad. Recall from the previous section that, for given $T \in \mathcal{F}$, we can consider $n \mapsto T^n$ as a functor $\mathbf{P}^{\text{op}} \rightarrow \mathcal{F}$. We can thus define a (non-symmetric!) product $$T \circ S = \int^n Tn \otimes S^n$$ It is easy to check that if $S \in \mathcal{V}$ then in fact $T \circ S \in \mathcal{V}$, so that $\circ$ can be considered as a functor either of type $\mathcal{F} \times \mathcal{F} \rightarrow \mathcal{F}$ or of type $\mathcal{F} \times \mathcal{V} \rightarrow \mathcal{V}$.
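Unpacking this definition at a fixed arity $k$, using the formula for $T^n$ coming from the Day convolution above (again my unpacking, though it follows directly from the two displayed formulas), gives the familiar shape of operadic substitution: $$(T \circ S)k \;=\; \int^{n} Tn \otimes (S^n)k \;\simeq\; \int^{n,\, m_1, \dots, m_n} Tn \otimes \mathbf{P}(m_1 + \cdots + m_n, k) \otimes S m_1 \otimes \cdots \otimes S m_n,$$ i.e. a $k$-ary operation of $T \circ S$ is an $n$-ary operation of $T$ whose $n$ argument slots are filled with operations of $S$ of arities summing, up to the $\mathbf{P}$-action, to $k$.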

The clarity with which Kelly’s paper demonstrates the various important properties of this substitution product would be difficult for me to improve upon, so I simply list here the punchlines, and refer the reader to the original paper for their proofs:

  • For $T,S \in \mathcal{F}$ and $n \in \mathbf{P}$, we have $(T \circ S)^n \simeq T^n \circ S$, which is natural in $T, S, n$. Using this and a Fubini style argument we get associativity of $\circ$.

  • $J = \mathbf{P}(1, - ) \otimes I$ is the identity for $\circ$.

  • For $S \in \mathcal{F}$, $- \circ S: \mathcal{F} \rightarrow \mathcal{F}$ has the right adjoint $\{S, -\}$ given by $\{S, R\}m = [S^m, R]$. Moreover if $A \in \mathcal{V}$ then we in fact have $\mathcal{V}(T \circ A, B) \simeq \mathcal{F}(T, \{A, B\})$.

We can now define an operad as a monoid for $\circ$, i.e. some $T \in \mathcal{F}$ equipped with $\mu: T \circ T \rightarrow T$ and $\eta: J \rightarrow T$ satisfying the monoid axioms. Operad morphisms are morphisms $T \rightarrow T^\prime$ that respect $\mu$ and $\eta$.
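Spelled out (a standard unpacking, not something extra from the paper), the monoid axioms ask that the following composites agree, where juxtaposition denotes composition in $\mathcal{F}$ and $\circ$ is also applied to morphisms via functoriality: $$\mu(\mu \circ 1_T) = \mu(1_T \circ \mu): T \circ T \circ T \rightarrow T, \qquad \mu(\eta \circ 1_T) = 1_T = \mu(1_T \circ \eta): T \rightarrow T,$$ using the identifications $J \circ T \simeq T \simeq T \circ J$ coming from $J$ being the identity for $\circ$.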

$\{A, A\}$ as an operad. Once again we turn back to the example of $\{A, A\} \in \mathcal{F}$. Note that our choice to denote the endomorphism operad $(n \mapsto [A^n, A])$ by $\{A, A\}$ agrees with the construction of $\{A, -\}$ as the right adjoint to $- \circ A$.

There is an evident evaluation map $\{A, A\} \circ A \xrightarrow{e} A$, so that we have the composition $$\{A, A\} \circ \{A, A\} \circ A \xrightarrow{1 \circ e} \{A,A\} \circ A \xrightarrow{e} A$$ which by adjunction gives us $\mu: \{A,A\} \circ \{A,A\} \rightarrow \{A,A\}$, which we take as our monoid multiplication. Similarly $J \circ A \simeq A$ corresponds by adjunction to $\eta: J \rightarrow \{A, A\}$. We thus have that $\{A,A\}$ is an operad. In fact it is the “universal” operad, in the following sense:

Every operad $T \in \mathcal{F}$ gives a monad $T \circ -$ on $\mathcal{F}$, or on $\mathcal{V}$ via restriction. Given $A \in \mathcal{F}$, algebra structures $h^{\prime}: T \circ A \rightarrow A$ for the monad $T \circ -$ on $A$ correspond precisely to operad morphisms $h: T \rightarrow \{A,A\}$. In this case we say that $h$ gives an algebra structure on $A$ for the operad $T$.

The little $n$-disks operad

There are some other aspects of operads that the paper looks at, but for this post I will abuse artistic license to talk about something else that isn’t exactly in the paper (although it is indirectly referenced): May’s little $n$-disks operad. For a great introduction to the following material I recommend Emily Riehl’s notes on Kathryn Hess’s two-part (I, II) talk on operads in algebraic topology.

Let $\mathcal{V} = (\mathbf{Top}_{\text{nice}}, \times, \{\ast\})$ where $\mathbf{Top}_{\text{nice}}$ is one’s favorite cartesian closed category of topological spaces, with $\times$ the appropriate product in this category.

Fix some $n \in \mathbb{N}$. For $k \in \mathbf{P}$, we let $d_n(k) = \text{sEmb}(\coprod_{k} D^n, D^n)$, the space of standard embeddings of $k$ copies of the closed unit $n$-disk in $\mathbb{R}^n$ into the closed unit $n$-disk in $\mathbb{R}^n$. By the space of standard embeddings we mean the subspace of the mapping space consisting of the maps which restrict on each summand to affine maps $x \mapsto \lambda x + c$ with $0 \leq \lambda \leq 1$.

Given $\xi \in \mathbf{P}(k, k)$ we have the evident action $\langle \xi \rangle: \text{sEmb}(\coprod_{k} D^n, D^n) \rightarrow \text{sEmb}(\coprod_{\xi k} D^n, D^n)$, which gives us a functor $d_n: \mathbf{P} \rightarrow \mathbf{Top}_{\text{nice}}$, so $d_n \in \mathcal{F}$.

Fix some $k, l \in \mathbf{P}$; then $$d_n^k(l) = \int^{m_1, \dots, m_k} \mathbf{P}(m_1 + \cdots + m_k, l) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k),$$ which we can roughly think of as all the different ways we can partition a total of $l$ disks into $k$ blocks, with the $i^{\text{th}}$ block having $m_i$ disks, and then map each block of $m_i$ disks into a single disk, all the while being able to permute the $l$ disks amongst themselves (without necessarily having to respect the partitions).

We then get \(\mu: d_n \circ d_n \rightarrow d_n\) by composing the disk embeddings. More precisely, for each \(l\) we get a morphism \(\mu_l: (d_n(k) \otimes d_n^k)(l) \simeq d_n(k) \otimes (d_n^k(l)) \rightarrow d_n(l)\) from the following considerations:

First we note that
\[
\begin{aligned}
d_n(k) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k) &= \text{sEmb}(\coprod_k D^n, D^n) \times \Bigl(\prod_{1 \leq i \leq k} \text{sEmb}(\coprod_{m_i} D^n, D^n)\Bigr)\\
&\simeq \text{sEmb}(D^n, D^n)^k \times \Bigl(\prod_{1 \leq i \leq k} \text{sEmb}(\coprod_{m_i} D^n, D^n)\Bigr)\\
&\simeq \prod_{1 \leq i \leq k} \bigl(\text{sEmb}(\coprod_{m_i} D^n, D^n) \times \text{sEmb}(D^n, D^n)\bigr).
\end{aligned}
\]
Now for each \(i\) there is a map \(\text{sEmb}(\coprod_{m_i} D^n, D^n) \times \text{sEmb}(D^n, D^n) \rightarrow \text{sEmb}(\coprod_{m_i}D^n, D^n)\) induced from iterated evaluation by adjunction. Then by the above, this gives a morphism
\[
\begin{aligned}
d_n(k) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k) &\rightarrow \prod_{1 \leq i \leq k} \text{sEmb}(\coprod_{m_i} D^n, D^n)\\
&\simeq \text{sEmb}(\coprod_{m_1 + \cdots + m_k} D^n, D^n)\\
&= d_n(m_1 + \cdots + m_k).
\end{aligned}
\]
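
Since a standard embedding is just a list of affine maps \(x \mapsto \lambda x + c\), the composition \(\mu\) can be spelled out very concretely. The following Python sketch is mine, not from the post (and disjointness of the images is assumed rather than checked): it composes an element of \(d_n(k)\) with elements of \(d_n(m_1), \dots, d_n(m_k)\) to produce an element of \(d_n(m_1 + \cdots + m_k)\).

```python
import numpy as np

# A little disk is modelled as an affine map x -> lam * x + c with scalar lam in (0, 1]
# and centre c in R^n; an element of d_n(k) is a list of k such pairs (lam, c).

def operad_compose(outer, inners):
    """mu: an element of d_n(k) plus one element of d_n(m_i) for each i
    gives an element of d_n(m_1 + ... + m_k) by composing affine maps."""
    result = []
    for (lam, c), inner in zip(outer, inners):
        for (mu_, d) in inner:
            # (x -> lam*x + c) after (x -> mu_*x + d) is x -> lam*mu_*x + (lam*d + c)
            result.append((lam * mu_, lam * np.asarray(d) + np.asarray(c)))
    return result

# example with n = 2: two little disks in the unit disk, each carrying its own little disks
outer  = [(0.4, np.array([-0.5, 0.0])), (0.3, np.array([0.5, 0.2]))]
inner1 = [(0.5, np.array([0.4, 0.0])), (0.3, np.array([-0.5, 0.1]))]
inner2 = [(0.9, np.array([0.0, 0.0]))]
print(operad_compose(outer, [inner1, inner2]))   # an element of d_2(3)
```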

A big reason that the little \(n\)-disks operad is relevant to algebraic topology is the theorem stating that a space is weakly equivalent to an \(n\)-fold loop space if and only if it’s an algebra for \(d_n\).

One direction is straightforward: consider a space \(A\) and its \(n\)-fold loop space \(\Omega^n A\). Given an element of \(d_n(k)\) and \(k\) choices of “little maps” \((D^n, \partial D^n) \rightarrow (A, \ast)\), we can stitch together these little maps into one large map \((D^n, \partial D^n) \rightarrow (A,\ast)\) according to the instructions specified by the chosen element of \(d_n(k)\) (where we map everything in the complement of the \(k\) little disks to the basepoint in \(A\)). Doing this for each \(k\), we get an operad morphism \(d_n \rightarrow \{\Omega^n A, \Omega^n A\}\).
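
For \(n = 1\) the stitching is easy to write down explicitly: each little interval acts by rescaling, and the complement goes to the basepoint. A minimal Python sketch of this action (my own toy model, with based loops valued in the complex plane and the basepoint at \(0\)) might look as follows.

```python
import numpy as np

def act(disks, loops, basepoint=0.0):
    """Act by an element of the little 1-disks operad on k based loops.

    disks: list of (lam, c) with each image [c - lam, c + lam] inside [-1, 1]
           (assumed pairwise disjoint);
    loops: list of k functions gamma_i : [-1, 1] -> C with gamma_i(+-1) = basepoint.
    Returns the stitched loop Gamma : [-1, 1] -> C.
    """
    def Gamma(x):
        for (lam, c), gamma in zip(disks, loops):
            if abs(x - c) <= lam:
                return gamma((x - c) / lam)   # rescale the little interval back to [-1, 1]
        return basepoint                       # complement of the little disks -> basepoint
    return Gamma

loop1 = lambda s: (1 - s**2) * np.exp(1j * np.pi * s)   # a based loop in the plane
loop2 = lambda s: (1 - s**2) * 2.0                      # another one
big = act([(0.4, -0.5), (0.3, 0.6)], [loop1, loop2])
print(big(-0.5), big(0.0), big(0.75))                   # sample the composite loop
```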

The other direction is much harder, and Maru gave an absolutely fantastic sketch of the basic story in our group discussions, which I hope she will post in the comments; I refrain from including it in the body of this post, partially for reasons of length and partially because I would just end up repeating verbatim what she said in the discussion.

by riehl (eriehl@math.jhu.edu) at March 21, 2017 10:42 AM

Clifford V. Johnson - Asymptotia

News from the Front, XIII: Holographic Heat Engines for Fun and Profit

I put a set of new results out on to the arxiv recently. They were fun to work out. They represent some of my continued fascination with holographic heat engines, those things I came up with back in 2014 that I think I've written about here before (here and here). For various reasons (that I've explained in various papers) I like to think of them as an answer waiting for the right question, and I've been refining my understanding of them in various projects, trying to get clues to what the question or questions might be.

As I've said elsewhere, I seem to have got into the habit of using 21st Century techniques to tackle problems of a 19th Century flavour! The title of the paper is "Approaching the Carnot limit at finite power: An exact solution". As you may know, the Carnot engine, whose efficiency is the best a heat engine can do (for specified temperatures of exchange with the hot and cold reservoirs), is itself not a useful practical engine. It is a perfectly reversible engine and as such takes infinite time to run a cycle. A zero power engine is not much practical use. So you might wonder how close a real engine can come to the Carnot efficiency... the answer should be that it can come arbitrarily close, but most engines don't, and so people who care about this sort of thing spend a lot of time thinking about how to design special engines that can come close. And there are various arguments you can make for how to do it in various special systems and so forth. It's all very interesting and there's been some important work done.

What I realized recently is that my old friends the holographic heat engines are a very good tool for tackling this problem. Part of the reason is that the underlying working substance that I've been using is a black hole (or, if you prefer, is defined by a black hole), and such things are often captured as exact [...] Click to continue reading this post

The post News from the Front, XIII: Holographic Heat Engines for Fun and Profit appeared first on Asymptotia.

by Clifford at March 21, 2017 05:19 AM

March 20, 2017

Peter Coles - In the Dark

My Last Will – by Sir Walter Raleigh (no, not that one…)

The vernal equinox in the Northern hemisphere passed this morning at 10.29 GMT, heralding the start of spring – a time when naturally our thoughts turn to death and decay. Which is no doubt why I remembered this poem I came across some time ago but for some reason haven’t posted yet. It’s quite astonishing how many websites attribute this verse to the Elizabethan courtier and explorer Sir Walter Raleigh, who was indeed an accomplished poet, but the use of language is very clearly not of that period. In fact this was written by Professor Sir Walter Alexander Raleigh (1861-1922). What he says in this poem about his own untidiness is, I’m afraid, very true of me too, but the semi-joking tone with which he opens gives way to something far more profound, and I think the last two lines are particularly powerful.

When I am safely laid away,
Out of work and out of play,
Sheltered by the kindly ground
From the world of sight and sound,
One or two of those I leave
Will remember me and grieve,
Thinking how I made them gay
By the things I used to say;
— But the crown of their distress
Will be my untidiness.

What a nuisance then will be
All that shall remain of me!
Shelves of books I never read,
Piles of bills, undocketed,
Shaving-brushes, razors, strops,
Bottles that have lost their tops,
Boxes full of odds and ends,
Letters from departed friends,
Faded ties and broken braces
Tucked away in secret places,
Baggy trousers, ragged coats,
Stacks of ancient lecture-notes,
And that ghostliest of shows,
Boots and shoes in horrid rows.
Though they are of cheerful mind,
My lovers, whom I leave behind,
When they find these in my stead,
Will be sorry I am dead.

They will grieve; but you, my dear,
Who have never tasted fear,
Brave companion of my youth,
Free as air and true as truth,
Do not let these weary things
Rob you of your junketings.

Burn the papers; sell the books;
Clear out all the pestered nooks;
Make a mighty funeral pyre
For the corpse of old desire,
Till there shall remain of it
Naught but ashes in a pit:
And when you have done away
All that is of yesterday,
If you feel a thrill of pain,
Master it, and start again.

This, at least, you have never done
Since you first beheld the sun:
If you came upon your own
Blind to light and deaf to tone,
Basking in the great release
Of unconsciousness and peace,
You would never, while you live,
Shatter what you cannot give;
— Faithful to the watch you keep,
You would never break their sleep.

Clouds will sail and winds will blow
As they did an age ago
O’er us who lived in little towns
Underneath the Berkshire downs.
When at heart you shall be sad,
Pondering the joys we had,
Listen and keep very still.
If the lowing from the hill
Or the tolling of a bell
Do not serve to break the spell,
Listen; you may be allowed
To hear my laughter from a cloud.

Take the good that life can give
For the time you have to live.
Friends of yours and friends of mine
Surely will not let you pine.
Sons and daughters will not spare
More than friendly love and care.
If the Fates are kind to you,
Some will stay to see you through;
And the time will not be long
Till the silence ends the song.

Sleep is God’s own gift; and man,
Snatching all the joys he can,
Would not dare to give his voice
To reverse his Maker’s choice.
Brief delight, eternal quiet,
How change these for endless riot
Broken by a single rest?
Well you know that sleep is best.

We that have been heart to heart
Fall asleep, and drift apart.
Will that overwhelming tide
Reunite us, or divide?
Whence we come and whither go
None can tell us, but I know
Passion’s self is often marred
By a kind of self-regard,
And the torture of the cry
“You are you, and I am I.”
While we live, the waking sense
Feeds upon our difference,
In our passion and our pride
Not united, but allied.

We are severed by the sun,
And by darkness are made one.

 


by telescoper at March 20, 2017 05:37 PM

Emily Lakdawalla - The Planetary Society Blog

Signed, sealed but not delivered: LightSail 2 awaits ship date
Following a pre-ship review at Planetary Society headquarters, LightSail 2 is ready to be integrated with its Prox-1 partner spacecraft. The final shipping schedule, however, has yet to be determined.

March 20, 2017 11:00 AM

March 19, 2017

Jacques Distler - Musings

Responsibility

Many years ago, when I was an assistant professor at Princeton, there was a cocktail party at Curt Callan’s house to mark the beginning of the semester. There, I found myself in the kitchen, chatting with Sacha Polyakov. I asked him what he was going to be teaching that semester, and he replied that he was very nervous because — for the first time in his life — he would be teaching an undergraduate course. After my initial surprise that he had gotten this far in life without ever having taught an undergraduate course, I asked which course it was. He said it was the advanced undergraduate Mechanics course (chaos, etc.) and we agreed that would be a fun subject to teach. We chatted some more, and then he said that, on reflection, he probably shouldn’t be quite so worried. After all, it wasn’t as if he was going to teach Quantum Field Theory, “That’s a subject I’d feel responsible for.”

This remark stuck with me, but it never seemed quite so poignant until this semester, when I find myself teaching the undergraduate particle physics course.

The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) makes no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air).

This drives me up the #@%^ing wall. It is precisely wrong.

There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions. In Special Relativity, there is no interaction-at-a-distance; all forces are necessarily mediated by fields. Those fields fluctuate and, when you want to study the quantum theory, you end up having to quantize them.

But the free particle is just fine. Of course it has to be: free field theory is just the theory of an (indefinite number of) free particles. So it better be true that the quantum theory of a single relativistic free particle makes sense.

So what is that theory?

  1. It has a Hilbert space, \(\mathcal{H}\), of states. To make the action of Lorentz transformations as simple as possible, it behoves us to use a Lorentz-invariant inner product on that Hilbert space. This is most easily done in the momentum representation \[ \langle\chi|\phi\rangle = \int \frac{d^3\vec{k}}{{(2\pi)}^3\, 2\sqrt{\vec{k}^2+m^2}}\, \chi(\vec{k})^* \phi(\vec{k}) \]
  2. As usual, the time-evolution is given by a Schrödinger equation
\[ i\partial_t |\psi\rangle = H_0 |\psi\rangle \qquad (1) \]

where \(H_0 = \sqrt{\vec{p}^2+m^2}\). Now, you might object that it is hard to make sense of a pseudo-differential operator like \(H_0\). Perhaps. But it’s not any harder than making sense of \(U(t)= e^{-i \vec{p}^2 t/2m}\), which we routinely pretend to do in elementary quantum mechanics. In both cases, we use the fact that, in the momentum representation, the operator \(\vec{p}\) is represented as multiplication by \(\vec{k}\).
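
That momentum-representation statement is easy to turn into a few lines of code. The following is a one-dimensional numerical sketch of my own (not from the post, which works in three dimensions): states are functions \(\phi(k)\), the inner product carries the Lorentz-invariant measure, and \(H_0\) acts by multiplication, so the norm is manifestly preserved under the evolution.

```python
import numpy as np

# 1-d toy version of the momentum-space picture, with hbar = c = 1 and a toy mass m.
m = 1.0
k = np.linspace(-40.0, 40.0, 4096)
dk = k[1] - k[0]
E = np.sqrt(k**2 + m**2)

def inner(chi, phi):
    # Lorentz-invariant inner product: int dk / (2*pi * 2E) chi*(k) phi(k)
    return np.sum(np.conj(chi) * phi / (2.0 * E)) * dk / (2.0 * np.pi)

def evolve(phi, t):
    # H0 = sqrt(k^2 + m^2) is diagonal in the momentum representation
    return np.exp(-1j * E * t) * phi

phi0 = np.exp(-k**2)                                   # a smooth packet in momentum space
print(inner(phi0, phi0))                               # the norm ...
print(inner(evolve(phi0, 5.0), evolve(phi0, 5.0)))     # ... is unchanged by the evolution
```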

I could go on, but let me leave the rest of the development of the theory as a series of questions.

  1. The self-adjoint operator, \(\vec{x}\), satisfies \([x^i,p_j] = i \delta^{i}_j\). Thus it can be written in the form \(x^i = i\left(\frac{\partial}{\partial k_i} + f_i(\vec{k})\right)\) for some real function \(f_i\). What is \(f_i(\vec{k})\)?
  2. Define \(J^0(\vec{r})\) to be the probability density. That is, when the particle is in state \(|\phi\rangle\), the probability for finding it in some Borel subset \(S\subset\mathbb{R}^3\) is given by \[ \text{Prob}(S) = \int_S d^3\vec{r}\, J^0(\vec{r}) \] Obviously, \(J^0(\vec{r})\) must take the form \[ J^0(\vec{r}) = \int\frac{d^3\vec{k}\, d^3\vec{k}'}{{(2\pi)}^6\, 4\sqrt{\vec{k}^2+m^2}\sqrt{{\vec{k}'}^2+m^2}}\, g(\vec{k},\vec{k}')\, e^{i(\vec{k}-\vec{k}')\cdot\vec{r}}\,\phi(\vec{k})\,\phi(\vec{k}')^* \] Find \(g(\vec{k},\vec{k}')\). (Hint: you need to diagonalize the operator \(\vec{x}\) that you found in problem 1.)
  3. The conservation of probability says \(0=\partial_t J^0 + \partial_i J^i\). Use the Schrödinger equation (1) to find \(J^i(\vec{r})\).
  4. Under Lorentz transformations, \(H_0\) and \(\vec{p}\) transform as the components of a 4-vector. For a boost in the \(z\)-direction, of rapidity \(\lambda\), we should have \[ \begin{split} U_\lambda \sqrt{\vec{p}^2+m^2}\, U_\lambda^{-1} &= \cosh(\lambda) \sqrt{\vec{p}^2+m^2} + \sinh(\lambda)\, p_3\\ U_\lambda p_1 U_\lambda^{-1} &= p_1\\ U_\lambda p_2 U_\lambda^{-1} &= p_2\\ U_\lambda p_3 U_\lambda^{-1} &= \sinh(\lambda) \sqrt{\vec{p}^2+m^2} + \cosh(\lambda)\, p_3 \end{split} \] and we should be able to write \(U_\lambda = e^{i\lambda B}\) for some self-adjoint operator, \(B\). What is \(B\)? (N.B.: by contrast the \(x^i\), introduced above, do not transform in a simple way under Lorentz transformations.)

The Hilbert space of a free scalar field is now \(\bigoplus_{n=0}^\infty \text{Sym}^n\mathcal{H}\). That’s perhaps not the easiest way to get there. But it is a way …

Update:

Yike! Well, that went south pretty fast. For the first time (ever, I think) I’m closing comments on this one, and calling it a day. To summarize, for those who still care,

  1. There is a decomposition of the Hilbert space of a Free Scalar field as \[ \mathcal{H}_\phi = \bigoplus_{n=0}^\infty \mathcal{H}_n \] where \(\mathcal{H}_n = \text{Sym}^n \mathcal{H}\) and \(\mathcal{H}\) is the 1-particle Hilbert space described above (also known as the spin-\(0\), mass-\(m\), irreducible unitary representation of Poincaré).
  2. The Hamiltonian of the Free Scalar field is the direct sum of the Hamiltonians induced on \(\mathcal{H}_n\) by the Hamiltonian, \(H=\sqrt{\vec{p}^2+m^2}\), on \(\mathcal{H}\). In particular, it (along with the other Poincaré generators) is block-diagonal with respect to this decomposition.
  3. There are other interesting observables which are also block-diagonal with respect to this decomposition (i.e., don’t change the particle number) and hence we can discuss their restriction to \(\mathcal{H}_n\).

Gotta keep reminding myself why I decided to foreswear blogging…

by distler (distler@golem.ph.utexas.edu) at March 19, 2017 07:48 AM

March 18, 2017

Clifford V. Johnson - Asymptotia

BBC CrowdScience SXSW Panel!

They recorded one of the panels I was on at SXSW as a 30 minute episode of the BBC World Service programme CrowdScience! The subject was science and the movies, and it was a lot of fun, with some illuminating exchanges. I had some fantastic co-panellists: Dr. Mae Jemison (the astronaut, doctor, and chemical engineer), Professor Polina Anikeeva (she researches in materials science and engineering at MIT), and Rick Loverd (director of the Science and Entertainment Exchange), and we had an excellent host, Marnie Chesterton. It has aired now, but in case you missed it, here is a link to the site where you can listen to our discussion.

-cvj Click to continue reading this post

The post BBC CrowdScience SXSW Panel! appeared first on Asymptotia.

by Clifford at March 18, 2017 08:27 PM

ZapperZ - Physics and Physicists

Minutephysics's "How To Teleport Schrodinger's Cat"
It used to be that Minute Physics videos were roughly.... a minute long. But that is no longer true. Here, he tackles quantum entanglement by trying to illustrate the teleportation of the infamous Schrodinger's Cat.



I'm sorry, but how many of you managed to follow this?

I think I'll stick to my "Quantum Entanglement for Dummies". :)

Zz.

by ZapperZ (noreply@blogger.com) at March 18, 2017 02:25 PM

Lubos Motl - string vacua and pheno

Particles' wave functions always spread superluminally
It's been almost a week since we discussed Jacques Distler's confusion about some basics of quantum field theory. He posts several blog posts a year, a quantum field theory course is probably the only one he teaches, and he was "driven up the wall" by a point that almost every good introductory textbook makes at the very beginning. I expected that within a day or two, he would post a detailed text with the derivations saying "Oops, I've been silly [for 50 years]".

It just didn't happen. He still insists that the one-particle truncation of a quantum field theory is perfectly consistent and causal. In particular, he repeated many times in his blog post (search for the word "superluminal") that the relativistically modified Schrödinger's equation for one particle (with a square root) guarantees that the wave packets never spread faster than the speed of light. Oops, it's just too bad.




By these comments, Jacques says that he is ignorant about many things that I (and my instructors) considered basics of quantum field theory since I was an undergraduate, such as:
  1. The special theory of relativity and quantum mechanics are consistent but their combination is constraining and has some unavoidable consequences – some basic general properties of quantum field theories.
  2. Consistent relativistic quantum mechanical theories guarantee that objects capable of emitting a particle are necessarily able to absorb them as well, and vice versa.
  3. For particles that are charged in any way, the existence of antiparticles becomes an unavoidable consequence of relativity and quantum mechanics.
  4. Probabilities of processes (e.g. cross sections) that involve these antiparticles are guaranteed to be linked to probabilities involving the original particles via crossing symmetry or its generalizations.
  5. The pair production of particles and antiparticles becomes certain when energy \(E\gg m\) is available or when fields are squeezed at distances \(\ell \ll 1/m\) (much) shorter than the Compton wavelength.
  6. Only observables constructed from quantum fields may be attributed to regions of the Minkowski spacetime so that they're independent from each other at spacelike separations (because they commute or anticommute).
  7. Wave functions that are functions of "positions of particles" unavoidably allow propagation that exceeds the speed of light and there can't be any equation that bans it. The causal propagation only applies to quantum fields (the observables), not to wave functions of particles' positions.
  8. Equivalently, almost all trajectories of particles that contribute to the Feynman path integral are superluminal and non-differentiable almost everywhere and this fact can't be avoided by any relativistic version of the mathematical expressions. Causality is only obtained by a combination of emission and absorption, contributions from particles and antiparticles, and at the level of quantum fields (observables).
It's a lot of basic stuff that Jacques should know but instead, he doesn't know it and these insights drive him up the wall. Let's look at those things.




The most well-defined disagreement is about the "relativistically corrected" Schrödinger equation\[

i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta} \psi + V(x) \psi

\] You see that it's like the usual one-particle equation except that the non-relativistic formula for the kinetic energy, \(E=|\vec p|^2/2m\), is replaced by the relativistic one, \(E=\sqrt{|\vec p|^2+m^2}\), with the same Laplacian (times \(-\hbar^2\)) substituted for \(|\vec p|^2\).

Jacques believes that when you substitute a localized wave packet for \(\psi(x,y,z)\) at \(t=0\) and you wait for time \(t'\), it will only spread to the ball of radius \(t'\) away from the original region: it will never propagate superluminally. Search for "superluminally" in his blog post and comments. Oops, it's wrong and embarrassingly wrong.

I think that the simplest way to see why he's wrong is to realize that the equation above still has the usual non-relativistic limit. As long as you guarantee that \(|\vec p| \ll m\) in the \(c=\hbar=1\) units, the evolution of the wave packets must be well approximated by non-relativistic physics and the non-relativistic Schrödinger equation.

Consider an actual electron moving around a nucleus. In the hydrogen atom, the motion is basically non-relativistic. Consider an initial localized wave packet for the electron that has a uniform phase, is much larger than the Compton wavelength \(\hbar/mc\approx 2.4\times 10^{-12}\,{\rm m}\) (it's simply \(1/m\) in the \(c=\hbar=1\) units) but still smaller than the radius of the atom. For example, the radius of the packet is \(10^{-11}\) meters. Outside a sphere of this radius, the wave function is zero.

Will this wave packet spread superluminally? You bet. By construction, the average speed is about an order of magnitude lower than the speed of light, which is reasonably non-relativistic. So with a 1% accuracy (squared speed), and aside from the irrelevant phase linked to the additional additive shift \(E_0=mc^2\) to the energy, the wave packet will spread as if it followed the non-relativistic Schrödinger equation\[

i\hbar\frac{\partial}{\partial t} \psi = -\hbar^2\frac{\Delta}{2m} \psi + V(x) \psi

\] Let's set \(V(x)=0\). OK, how do the wave packets spread according to the ordinary Schrödinger equation? Let's ask Ron Maimon – every good autodidact is able to answer such questions. Well, it's simple: the Schrödinger equation is just a diffusion (or heat) equation where the main parameter is imaginary. If \(m\) above were imaginary, \(m=i\mu\), then the solution to the diffusion equation would be\[

\rho(x,t)\equiv \psi(x,t) = \frac{\sqrt{\mu}}{\sqrt{2\pi t}} \exp(-\mu x^2/t)

\] The width of the Gaussian packet goes like \(\Delta x\sim \sqrt{t/\mu}\). It's very simple.



If you know the graph of the square root, you must know that the speed is initially very high. The speed \(dx/dt\) scales like the derivative of the square root of time, i.e. as \(1/\sqrt{t\mu}\). For times shorter than \(1/\mu\), the speed with which the wave packet spreads unavoidably exceeds the speed of light. It's kosher that we're looking at timescales shorter than the "Compton time scale" of the electron. We only assumed that the spatial size of the wave packet is longer than the Compton wavelength. Whether an analogous scaling is obeyed by the dependence on time depends on the equation itself and the answer is clearly No. The asymmetric treatment of space and time in the equation (the square root is only used for the spatial derivatives) may be partly blamed for that asymmetry.

Just to be sure, all the scalings are the same for the value of \(\mu=-im\) that is imaginary.
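
Here is a quick numerical sketch of my own (not from the post) that checks the claim directly: evolve a packet that is exactly zero outside a small interval with \(H=\sqrt{p^2+m^2}\) (acting by multiplication in momentum space, via an FFT) and look at the amplitude well outside the light cone of the initial support.

```python
import numpy as np

# 1-d toy check, units hbar = c = 1.  The initial packet vanishes identically for |x| > 1.
m, L, N = 1.0, 80.0, 8192
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi0 = np.where(np.abs(x) < 1, (1 - x**2) ** 2, 0.0)

def evolve(psi, t):
    # H = sqrt(p^2 + m^2) acts by multiplication in momentum space
    return np.fft.ifft(np.exp(-1j * np.sqrt(k**2 + m**2) * t) * np.fft.fft(psi))

t = 2.0
psi = evolve(psi0, t)
outside = np.abs(x) > 1 + t + 2            # points well outside the light cone |x| < 1 + t
print(np.max(np.abs(psi[outside])) / np.max(np.abs(psi)))
# small, falling off roughly exponentially with the spacelike distance,
# but clearly nonzero and many orders of magnitude above rounding error
```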

If you don't feel sure that our non-relativistic approximation was adequate for the question, I can give you a stronger weapon: the exact solution of the equation (Schrödinger's equation with the square root). What is it? Well, it's nothing else than the retarded Green's function – as taught in the context of the quantum Klein-Gordon field. Look e.g. at Page 7 of these lectures by Gonsalves in Buffalo.

The retarded function is the matrix element of the evolution operator for the one-particle Hilbert space\[

G_{\rm ret}(x-x') = \bra{x,y,z} \exp(H(t-t')/i) \ket{x',y',z'}.

When the particle is initially (a delta function) at the position \((x',y',z')\) at time \(t'\) and you wait for time \(t-t'\), i.e. you evolve it by the square-root-based Hamiltonian up to the moment \(t\), and you ask what will be the amplitude at the position \((x,y,z)\), the answer is nothing else than the retarded Green's function of the difference between the two four-vectors.

Can the retarded Green's functions be analytically calculated? As long as you include Bessel functions among your "analytically allowed tools", the answer is Yes. If we set the four-vector \(x'=0\) to zero, the retarded Green's function is simply\[

G_{\rm ret}(x) = \theta(t) \zzav{ \frac{ \delta( x^\mu x_\mu ) }{2\pi} - \frac{m}{4\pi}J_1 (mx^\mu x_\mu ) }

\] For small and large timelike or spacelike separation, the Bessel function of the first kind used in the expression asymptotically is an odd function of the argument and behaves as (the sign is OK for positive arguments)\[

J_n(z) \sim \left\{ \begin{array}{cc} \frac{1}{n!} \zav{ \frac{z}{2} }^n&{\rm for}\,\, |z|\ll 1 \\
\sqrt{\frac{2}{\pi z}} \cos\zav{ z- \frac{(2n+1)\pi}{4} }
& {\rm for}\,\,|z|\gg 1 \end{array} \right.

\] But another lesson of the calculation is that the Green's function is nonzero even for \(x^\mu x_\mu\) negative, i.e. spacelike separation – although it decreases roughly as \(\exp(-m|x|)\) over there if you redefine the normalization by the factor of \(2E\) in the momentum space (which is a non-local transformation in the position space). See the last displayed equation on page 2 of Gonsalves:
Relativistic Causality:

Quantum mechanics of a single relativistic free point particle is inconsistent with the principle of relativity that signals cannot travel faster than the speed of light. The probability amplitude for a particle of mass \(m\) to travel from position \({\bf r}_0\) to \({\bf r}\) in a time interval \(t\) is\[

U(t) = \bra{{\bf r}} e^{-iHt} \ket{{\bf r}_0} =
\bra{{\bf r}} e^{-i\sqrt{{\bf p}^2+m^2}t} \ket{{\bf r}_0}\sim\\
\sim \exp(-m\sqrt{{\rm r}^2-t^2}),\quad {\rm for}\,\,{\rm spacelike}\,\, {\rm r}^2\gt t^2

\]
Gonsalves also quotes "particle creation and annihilation" and "spin-statistics connection" as the other two unavoidable consequences of a consistent union of quantum mechanics and special relativity. He refers you to Chapter 2 of Peskin-Schroeder to learn these things from a well-known source.

OK, you might ask, what's the right modification of the wave equation for one particle that guarantees that the wave packet never spreads superluminally?

There is none. The condition that the packet never spreads superluminally would violate the uncertainty principle, a fundamental postulate of quantum mechanics.

Why is it so? I can give you a simple idea. If you compress the particle to a small region, \(\Delta x \ll 1/m\), much smaller than the Compton wavelength, the uncertainty principle unavoidably says \(\Delta p \gg m\), so the motion is ultrarelativistic. You could think that \(\Delta p\gg m\) or \(p\gg m\) is still consistent with \(v\leq 1\) but the evolved wave packets are unavoidably far from those that minimize the product of uncertainties and as the Bessel mathematics above shows, the piece in the spacelike region just can't exactly vanish, basically due to the non-local character of the operators.

Similar derivations could be made with the help of the Feynman path integral. The typical trajectories contributing to the Feynman propagator are superluminal and non-differentiable almost everywhere and this fact does hold even in the calculation of the propagators in quantum field theory, a relativistic theory. As I discussed in a blog post in 2012, the superluminal or non-differentiable nature of generic paths in the path integral is needed for Feynman's formalism to be compatible with the uncertainty principle. Recall that we have solved a paradox: the calculation of \(xp-px\) in the path integral should amount to the insertion of the classical integrand \(xp-px\) to the path integral but this classical insertion is zero. The paradox was resolved thanks to the generic paths' being non-differentiable: the time ordering of \(x(t)\) and \(p(t\pm \epsilon)\) mattered.

So does quantum field theory prevent you from sending signals to spacelike-separated regions? And how is it achieved?

Yes, quantum field theory perfectly prohibits any propagation of signals superluminally or over spacelike separations. It does so by using the quantum fields. Quantum fields such as \(\Phi(x,y,z,t)\) and functions of them and their derivatives are associated with spacetime points and they commute or anticommute with each other when the separation is spacelike.

The zero commutator means that you may measure them simultaneously – that the decision to measure one doesn't influence the other or that the order of the two measurements is inconsequential. Just to be sure, the previous sentence doesn't say that these spacelike-separated measurements are never correlated. They may be correlated but correlation doesn't mean causation. They're only correlated if the correlation (mathematically described as entanglement within quantum mechanics) follows from the previous contact of the two subsystems that have evolved or moved to the spacelike-separated points.

The point is that the outcomes themselves may be correlated but the human decisions – e.g. which polarization is measured on one photon – do not influence the statistics for the other photon itself at all. The existence of the "collapse" associated with the first measurement doesn't change the odds for the second measurement – although if you know the result into which the first measurement "collapsed", you must refine your predictions for the outcome of the second measurements because a correlation/entanglement could have been present. OK, how does this vanishing of the spacelike-separated commutators agree with the fact that the packets spread superluminally? On page 27 of Peskin-Schroeder, you may see that the "commutator Green's function" is a difference between two ordinary Green's functions and because those two are equal in the spacelike region, the value just cancels in the spacelike region.

But again, the Fourier transform of the ordinary propagator such as \(1/(p^2-m^2+i\epsilon)\) does not vanish in the spacelike regions of the 4-vector \(x^\mu\). It cannot vanish because this position space propagator knows about the correlation of fields at two points of space. And the fields in nearby, spacelike-separated points are correlated, of course (very likely to be almost equal), especially if they are closer than the Compton wavelength. You may view this correlation as a result of the escaping of high-momentum or high-energy quanta to infinity. Only low-momentum or low-energy quanta are left in the vacuum and its low-energy excitations – and because of the Fourier relationship of \(x\) and \(p\), this absence of high-energy quanta means that the quantum fields can't depend on the spatial coordinates too much.

You know, the message is that the ban on superluminal signals is compatible with quantum mechanics but the creation and annihilation of particles must be unavoidably allowed when you reconcile these two principles, special relativity and quantum mechanics. Jacques Distler believes that relativistic causality works even in "QFT truncated to the one-particle Hilbert space" which simply isn't right. He's really misunderstanding the key reason why quantum field theory was needed at all.

Try to calculate the expectation value of the commutator of two fields \(F(x)\) and \(G(y)\) at two spacelike-separated points \(x,y\). The fields \(F,G\) may be the Klein-Gordon \(\Phi\) itself or some bilinear constructed out of it, e.g. the component of a current \(J^0\) that Distler talks about at some point. Imagine that you're calculating this commutator. You first expand \(F,G\) in terms of \(\Phi\) and its derivatives. Then you insert the expansions of \(\Phi\) in terms of the creation and annihilation operators. And you know the expectation values of the type \(\bra 0 \Phi(x)\Phi(y) \ket 0\). When you time-order \(x,y\), it's just the usual propagator in the position space.

The precise calculation will depend on the operators you choose but a general point is true: There will be lots of individual terms that are nonzero for spacelike \(x-y\). Only if you sum all these terms – which will pick creation operators from \(F\) and annihilation operators from \(G\) and vice versa – can you achieve the cancellation.

In particular, if you consider the operators \(F,G \sim J^0\), those will contain terms of the type \(a^\dagger a\) as well as \(b^\dagger b\) for a field whose particles and antiparticles differ. Only if you include the correlators from both particles and antiparticles matching between the points \(x,y\) may you get a cancellation of the commutator (its expectation value).

In other words, the fact that a quantum field is capable of both creating a particle and annihilating an antiparticle (which is the same for "real" fields) is absolutely vital for its ability to commute with spacelike-separated colleagues!

This insight may be formulated in yet another equivalent way. You just can't construct a localized – relativistically causally well-behaved – field operator at a given point that would only contain terms of a given creation-annihilation schematic type, e.g. only \(a^\dagger a\) but no \(b^\dagger b\), only \(a^\dagger\) but no \(b\), and so on. Any operator that has a well-defined "number of particles of each type that it creates or annihilates" is unavoidably "non-local" and can't exactly commute with its spacelike-separated counterparts!

If you wanted to study the truncation of the quantum field theory to a one-particle Hilbert space where the number of particles is \(N=1\), and the number of antiparticles (and all other particle species) is zero, then all "first-quantized" operators on your Hilbert space correspond to some combination of operators of the \(a_k^\dagger a_m\) form. You annihilate one particle and create one particle. But no such combination of operators may be strictly confined to a region so that it would commute with itself at spacelike-separation.

Students who have carefully done some basic calculations in quantum field theory know this fact from many "happy cancellations" that weren't obvious for some time. For example, consider the quantized electromagnetic field. Write the total energy as\[

H = \int d^3 x\,\frac{1}{2}\zav{B^2+ E^2},

\] i.e. the integral of the electric and magnetic energy density. Substitute \(\vec A\) and its derivatives for \(\vec B,\vec E\), and write \(A\) and its derivatives in terms of creation and annihilation operators for photons. So you will get terms of the form \(a^\dagger a\), \(aa\), and \(a^\dagger a^\dagger\). At the end, the total Hamiltonian only contains the terms of the \(a^\dagger a\) "mixed" type but this simplified form is only obtained once you integrate over \(\int d^3 x\) which makes the terms \(a a\) and \(a^\dagger a^\dagger\) vanish because of their oscillating dependence on \(x\). If you only write the energy density itself, it will unavoidably contain the operators of the type \(aa\) and \(a^\dagger a^\dagger\) – annihilating or creating two photons – too. And the terms of all these forms are equally important for the quantum field to be well-behaved, especially for the vanishing of its commutators at spacelike separations.
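
The \(\int d^3 x\) mechanism mentioned here is just Fourier orthogonality: a plane wave whose total wavenumber doesn't vanish integrates to zero over the box. A tiny check of my own (not from the post; and in the full Hamiltonian the remaining \(aa\) pieces with \(\vec k' = -\vec k\) cancel for a different reason, namely the frequency relation \(\omega^2=\vec k^2\)):

```python
import numpy as np

L, N = 2 * np.pi, 20000
x = np.linspace(0.0, L, N, endpoint=False)       # uniform grid on the periodic box
k = lambda n: 2 * np.pi * n / L                  # wavenumbers allowed on the box

def box_integral(k_total):
    # approximate int_0^L exp(i * k_total * x) dx on the grid
    return np.sum(np.exp(1j * k_total * x)) * (L / N)

print(abs(box_integral(k(3) - k(3))))   # a^dagger a term, same mode: total wavenumber 0 -> ~ L
print(abs(box_integral(k(3) - k(5))))   # a^dagger a term, different modes -> ~ 0
print(abs(box_integral(k(3) + k(5))))   # a a / a^dagger a^dagger term -> ~ 0
```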

The broader lesson is that important principles of physics are ultimately reconcilable but the reconciliation is often non-trivial and implies insights, principles, and processes that didn't seem to unavoidably follow from the principles separately. So the combination of relativity and quantum mechanics implies the basic phenomena of quantum field theory – antiparticles, pair production, the inseparability of creation and annihilation, spin-statistics relations, and a few other things.

In the same way, perhaps a more extreme one, the unification of quantum mechanics and general relativity is possible but any consistent theory obeying both principles has to respect some qualitative features we know from quantum gravity – as exemplified by string theory, probably the only possible precise definition of a consistent theory of quantum gravity. In particular, black holes must carry a finite entropy, be practically indistinguishable from heavy particle species, and such heavy particle species must exist. The processes around black holes and those involving elementary particles are unavoidably linked by some UV-IR relationships and string theory's modular invariance is the most explicit known example (or toy model?) of such relationships.

In combination, the known important principles of physics are far more constraining than the principles are separately and they imply that the "kind of a theory we need" or even "the precise theory" is basically unique. This strictness is ultimately good news. If it didn't exist, we would be drowning in the infinite field of possibilities. Because of the "bonus" strictness resulting from the combination of important principles of physics, we know that a theory combining quantum mechanics and special relativity must work like quantum field theory and a theory that also respects gravity as in general relativity has to be string/M-theory.

by Luboš Motl (noreply@blogger.com) at March 18, 2017 11:34 AM

March 17, 2017

Tommaso Dorigo - Scientificblogging

Five New Charmed Baryons Discovered By LHCb!
While I was busy reporting the talks at the "Neutrino Telescope" conference in Venice, LHCb released a startling new result, which I do not have much time to describe in detail this evening (it's Friday evening here in Italy and I'm going to call the week off), and yet wish to share with you as soon as possible.
The spectroscopy of low- and intermediate-mass hadrons (whatever this means) is a complex topic which either enthuses particle physicists or bores them to death. There are two reasons for this dichotomic behaviour.

read more

by Tommaso Dorigo at March 17, 2017 06:22 PM

Symmetrybreaking - Fermilab/SLAC

Q&A: Dark matter next door?

Astrophysicists Eric Charles and Mattia Di Mauro discuss the surprising glow of our neighbor galaxy. 

Image of the gamma-ray glow in Andromeda captured by the Fermi satellite

Astronomers recently discovered a stronger-than-expected glow of gamma rays at the center of the Andromeda galaxy, the nearest major galaxy to the Milky Way. The signal has fueled hopes that scientists are zeroing in on a sign of dark matter, which is five times more prevalent than normal matter but has never been detected directly. 

Researchers believe that gamma rays—a very energetic form of light—could be produced when hypothetical dark matter particles decay or collide and destroy each other. However, dark matter isn’t the only possible source of the gamma rays. A number of other cosmic processes are known to produce them. 

So what do Andromeda’s gamma rays really tell us about dark matter? To find out, Symmetry’s Manuel Gnida talked with Eric Charles and Mattia Di Mauro, two members of the Fermi-LAT collaboration—an international team of researchers that found the Andromeda gamma-ray signal using the Large Area Telescope, a sensitive “eye” for gamma rays on NASA’s Fermi Gamma-ray Space Telescope. 

Both researchers are based at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. The LAT was conceived of and assembled at SLAC, which also hosts its operations center.

KIPAC researchers Eric Charles and Mattia Di Mauro (Photo: Dawn Harmer, SLAC National Accelerator Laboratory)

Have you discovered dark matter?

MD:

No, we haven’t. In the study, the LAT team looked at the gamma-ray emissions of the Andromeda galaxy and found something unexpected, something we don’t fully understand yet. But there are other potential astrophysical explanations than dark matter.

It’s also not the first time that the LAT collaboration has studied Andromeda with Fermi, but in the old data the galaxy only looked like a big blob. With more data and improved data processing, we have now obtained a much clearer picture of the galaxy’s gamma-ray glow and how it’s distributed.

What’s so unusual about the results?

EC:

As a spiral galaxy, Andromeda is similar to the Milky Way. Therefore, we expected the emissions of both galaxies to look similar. What we discovered is that they are, in fact, quite different. 

In our galaxy, gamma rays come from all kinds of locations—from the center and the spiral arms in the outer regions. For Andromeda, on the other hand, the signal is concentrated at the center.

Why do galaxies glow in gamma rays?

EC:

The answer depends on the type of galaxy. There are active galaxies called blazars. They emit gamma rays when matter in close orbit around supermassive black holes generates jets of plasma. And then there are “normal” galaxies like Andromeda and the Milky Way that produce gamma rays in other ways.

When we look at the emissions of the Milky Way, the galaxy appears like a bright disk, with the somewhat brighter galactic center at the center of the disk. Most of this glow is diffuse and comes from the gas between the stars that lights up when it’s hit by cosmic rays—energetic particles spit out by star explosions or supernovae. 

Other gamma-ray sources are the remnants of such supernovae and pulsars—extremely dense, magnetized, rapidly rotating neutron stars. These sources show up as bright dots in the gamma-ray map of the Milky Way, except at the center where the density of gamma-ray sources is high and the diffuse glow of the Milky Way is brightest, which prevents the LAT from detecting individual sources.

Andromeda is too far away to see individual gamma-ray sources, so it only has a diffuse glow in our images. But we expected most of the emissions to come from the disk as well. The absence of emission from the disk suggests that there is less interaction between gas and cosmic rays in our neighbor galaxy. Since this interaction is tied to the formation of stars, this also suggests that Andromeda had a different history of star formation than the Milky Way.

The sky in gamma rays with energies greater than 1 gigaelectronvolt, based on eight years of data from the LAT on NASA’s Fermi Gamma-ray Space Telescope. (NASA/DOE/Fermi LAT Collaboration)

What does all this have to do with dark matter?

MD:

When we carefully analyze the gamma-ray emissions of the Milky Way and model all the gas and point-like sources to the best of our knowledge, then we’re left with an excess of gamma rays at the galactic center. Some people have argued this excess could be a telltale sign of dark matter particles. 

We know that the concentration of dark matter is largest at the galactic center, so if there were a dark matter signal, we would expect it to come from there. The localization of gamma-ray emissions at Andromeda’s center seems to have renewed the interest in the dark matter interpretation in the media.

Is dark matter the most likely interpretation?

EC:

No, there are other explanations. There are so many gamma-ray sources at the galactic center that we can’t really see them individually. This means that their light merges into an extended, diffuse glow.

In fact, two recent studies from the US and the Netherlands have suggested that this glow in the Milky Way could be due to unresolved point sources such as pulsars. The same interpretation could also be true for Andromeda’s signal.

What would it take to know for certain?

MD:

To identify a dark matter signal, we would need to exclude all other possibilities. This is very difficult for a complex region like the galactic center, for which we don’t even know all the astrophysical processes. Of course, this also means that, for the same reason, we can’t completely rule out the dark matter interpretation.

But what’s really important is that we would want to see the same signal in a few different places. However, we haven’t detected any gamma-ray excesses in other galaxies that are consistent with the ones in the Milky Way and Andromeda. 

This is particularly striking for dwarf galaxies, small companion galaxies of the Milky Way that only have few stars. These objects are only held together because they are dominated by dark matter. If the gamma-ray excess at the galactic center were due to dark matter, then we should have already seen similar signatures in the dwarf galaxies. But we don’t.

by Manuel Gnida at March 17, 2017 04:59 PM

ZapperZ - Physics and Physicists

DOE's Office Of Science Faces Disastrous Cuts
The first Trump budget proposal presents a major disaster for scientific funding, and especially for the DOE Office of Science budget.

President Donald Trump's first budget request to Congress, to be released at 7 a.m. Thursday, will call for cutting the 2018 budget of the National Institutes of Health (NIH) by $6 billion, or nearly 20%, according to sources familiar with the proposal. The Department of Energy's (DOE's) Office of Science would lose $900 million, or nearly 20% of its $5 billion budget. The proposal also calls for deep cuts to the research programs at the Environmental Protection Agency (EPA) and the National Oceanic and Atmospheric Administration (NOAA), and a 5% cut to NASA's earth science budget. And it would eliminate DOE's roughly $300 million Advanced Research Projects Agency-Energy.

I don't know in what sense this will make America "great again". It is certainly not in science, that's for sure.

Zz.

by ZapperZ (noreply@blogger.com) at March 17, 2017 01:04 AM

March 16, 2017

ZapperZ - Physics and Physicists

Born Rule Confirmed To An Even Tighter Bound
I must say that I might have missed this paper if Chad Orzel hadn't mentioned it in his article. Here, he highlighted a paper by Kauten et al. from New Journal of Physics (open access) that performed a 5-slit interference test with the purpose of detecting any higher-order interference beyond that predicted by the Born rule. They found none, and imposed a tighter bound on any higher-order effects.

As Orzel reported:

That's what the NJP paper linked above is about. One of the ways you might get the Born rule from some deeper principle would be to have it be merely an approximation to some more fundamental structure. That, in turn, might very well involve a procedure other than "squaring" the wavefunction to get the probability of various measurement outcomes. In which case, you would expect to see some higher-order contributions to the probability-- the wavefunction cubed, say, or to the fourth power.
.
.
.
Sadly, for fans of variant models of quantum probability, what they actually do is the latter. They don't see any deviation from the ordinary Born rule, and can say with confidence that all the higher-order contributions are zero, to something like a hundredth of a percent.
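
What "the higher-order contributions are zero" means can be made concrete with a toy calculation (my own sketch, not from the paper, which uses five slits): under the Born rule, the genuinely three-slit (and higher) interference terms cancel identically, so any nonzero measurement of them would signal a departure from \(P=|\psi|^2\). The three-slit version of the check:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)   # amplitudes through three slits

def P(amplitude_sum, power=2):
    # power = 2 is the Born rule; other powers stand in for hypothetical alternatives
    return np.abs(amplitude_sum) ** power

def third_order_term(power):
    # lowest "higher-order" interference term: it vanishes when probabilities
    # contain only pairwise (two-slit) interference, as |psi|^2 guarantees
    return (P(a + b + c, power)
            - P(a + b, power) - P(a + c, power) - P(b + c, power)
            + P(a, power) + P(b, power) + P(c, power))

print(third_order_term(2))   # ~ 0 (up to rounding): the Born rule has no three-slit term
print(third_order_term(3))   # generically nonzero for a hypothetical |psi|^3 rule
```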

Of course, this won't stop the continuation of the search, because that is what we do. But it is amazing that QM has withstood numerous challenges throughout its history.

Zz.

by ZapperZ (noreply@blogger.com) at March 16, 2017 03:32 PM

Emily Lakdawalla - The Planetary Society Blog

Trump's first budget proposal is out. Here's how NASA fared
NASA escaped a large-scale budget slash, and planetary science fared well. ARM is canceled, the Moon-versus-Mars debate is not mentioned, and Earth science stands to lose some missions.

March 16, 2017 12:08 PM

Lubos Motl - string vacua and pheno

LHCb discovers five \(css\) bound states at once
The LHCb detector is way smaller and cheaper than its fat ATLAS and CMS siblings. But that doesn't mean that it can't discover cool things – and many things. The letter \(b\) refers to the bottom quark. It's often said that the bottom quark is the best path towards the research of CP-violation and similar things.

But for some reasons, the LHCb managed to discover five new particles without any bottom quark – at once:



The collaboration proudly tweeted about the new discovery and linked to their new paper,
Observation of five new narrow \(\Omega^0_c\) states decaying to \(\Xi^+_c K^-\)
You may count the new peaks on the graph above. If you haven't forgotten some rather rudimentary number theory, you know that the counting goes as follows: One, two, three, four, five. TRF contains new stuff to learn for everybody, including those who would consider any mathematics exam unconstitutional and inhuman. ;-)




They identify the bound states of the Omega baryon according to the decay products that they can label reliably enough. These new charmed neutral Omega baryons (the quark content is \(css\), like the cascading style sheets) decay to a positive charmed Xi baryon (whose quark content is \(usc\) or \(ucs\), if you agree that the acronym shouldn't be reserved by a corrupt Union of Concerned Scientists and Anthony Watts' dog) and the negative kaon \(K^-\) (quark content: \(\bar u s\), thanks, Bill).

Well, the positive charmed Xi baryon decays to \(p K^- \pi^+\) and those are really well-known everyday animals for the LHCb scientists.




The new \(css\) bound states are narrow resonances – which means that the decay rate is slow (the width is small) enough. You may consider them excited states of the same particle or different particles. Which of those is better is a somewhat subjective issue. The excited states of a hydrogen atom are clearly "the same particle" because the transitions between them are the most common processes and involve a truly neutral, peaceful photon (which is "almost nothing", especially when it comes to charges).

But these excited states of the \(css\) quarks are strongly interacting and it's rather easy for these beasts to create quark-antiquark pairs, in this case an up-antiup pair, and divide all the quarks differently. These processes are actually more frequent than a simple emission of a photon. So the excited states don't change to each other so automatically and they may be considered distinct entities although they're really built from the same ingredients, just like different excited states of a hydrogen atom.



You can imagine how people had to be thrilled in the 1960s when such new particles were discovered frequently and the innocent physicists actually believed that those were elementary particles. However, in the late 1960s, quarks were proposed, and in the early 1970s, QCD was written down. Before QCD, physicists were willing to believe that they live in a paradise with hundreds of exotic elementary particle species or that these numerous particles were proving that Nature was lifting Herself by Her own bootstraps.

At some moment, physicists devoured the QCD apple and their feeling of mystery and submission faded away. Those are just some additional boring bound states of six quark flavors and their antiquarks, aren't they? Why so much ado? And that's where we are. Lots of the childish excitement is gone, our previous emotions look a bit silly and scientifically naive, and when we want to look for the truly deep signs of Nature's mysteries, we know that we must dig deeper than to discover five new baryons (at once).



Off-topic. Dr Sheldon Cooper, the boy (Iain Armitage, a theater critic), interviewed another Sheldon a year ago. The spin-off of TBBT could be fun.

And if you asked me, I find this whole elaborate scheme with Greek letters labeling the QCD ground states to be an anachronism. I would replace symbols like \(\Omega_c^0\) by \(css\) – note that both require three characters – and perhaps add some extra labels when needed. For example, these five excitations may be labeled \(3000,3050,3066,3090,3119\), which are their masses in units of \({\rm MeV}\). With this modernized notation, we could reserve the precious Greek letters for something more mysterious, for something that still sounds Greek to us. And I am not talking about Greek economic and immigration policies, which should be represented by characters such as f%&*^*g s#&*t.
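Since the proposal above is really just a naming convention, here is a tiny toy sketch in Python (purely illustrative – the label format is only this post's suggestion, and the masses are the rounded values quoted above):

masses_mev = [3000, 3050, 3066, 3090, 3119]   # approximate masses quoted above, in MeV

def modern_label(quark_content, mass_mev):
    # Build a label like "css(3050)" from the quark content and the mass in MeV.
    return "{}({})".format(quark_content, mass_mev)

print([modern_label("css", m) for m in masses_mev])
# ['css(3000)', 'css(3050)', 'css(3066)', 'css(3090)', 'css(3119)']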

But I may be wrong and those baryons may be fundamentally important. And even if they're not, it's important that physicists don't forget the craft that their predecessors were so good at half a century ago. It's like not forgetting how to compose and listen to classical music – or anything else of the sort that has suddenly faced lots of competition attempting to steal a big part of people's attention.

by Luboš Motl (noreply@blogger.com) at March 16, 2017 11:31 AM

March 15, 2017

The n-Category Cafe

Functional Equations VI: Using Probability Theory to Solve Functional Equations

A functional equation is an entirely deterministic thing, such as \(f(x + y) = f(x) + f(y)\) or \(f(f(f(x))) = x\) or \(f\Bigl(\cos\bigl(e^{f(x)}\bigr)\Bigr) + 2x = \sin\bigl(f(x+1)\bigr).\) So it’s a genuine revelation that one can solve some functional equations using probability theory — more specifically, the theory of large deviations.

This week and next week, I’m explaining how. Today (pages 22-25 of these notes) was mainly background:

  • an introduction to the theory of large deviations;

  • an introduction to convex duality, which Simon has written about here before;

  • how the two can be combined to get a nontrivial formula for sums of powers of real numbers.
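Since the notes themselves are only linked above, here is a minimal reminder of the two ingredients (my paraphrase of the standard statements, not a quotation from the notes). For i.i.d. real random variables \(X_1, X_2, \dots\) with cumulant generating function \(\Lambda(\lambda) = \log \mathbb{E}\, e^{\lambda X_1}\), the convex (Legendre–Fenchel) dual of \(\Lambda\) is\[ \Lambda^*(x) = \sup_{\lambda\in\mathbb{R}} \bigl(\lambda x - \Lambda(\lambda)\bigr), \]and Cramér's theorem says that, under suitable finiteness conditions and for \(x\) above the mean,\[ \lim_{n\to\infty} \frac{1}{n}\,\log\, \mathbb{P}\!\left(\frac{X_1+\cdots+X_n}{n} \ge x\right) = -\Lambda^*(x). \]This is the sense in which convex duality and large deviations combine to control sums of i.i.d. quantities.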

Next time, I’ll explain how this technique produces a startlingly simple characterization of the \(p\)-norms.

by leinster (Tom.Leinster@ed.ac.uk) at March 15, 2017 01:23 AM

March 14, 2017

Symmetrybreaking - Fermilab/SLAC

The life of an accelerator

As it evolves, the SLAC linear accelerator illustrates some important technologies from the history of accelerator science.


Tens of thousands of accelerators exist around the world, producing powerful particle beams for the benefit of medical diagnostics, cancer therapy, industrial manufacturing, material analysis, national security, and nuclear as well as fundamental particle physics. Particle beams can also be used to produce powerful beams of X-rays. 

Many of these particle accelerators rely on artfully crafted components called cavities. 

The world’s longest linear accelerator (also known as a linac) sits at the Department of Energy’s SLAC National Accelerator Laboratory. It stretches two miles and accelerates bunches of electrons to very high energies. 

The SLAC linac has undergone changes in its 50 years of operation that illustrate the evolution of the science of accelerator cavities. That evolution continues and will determine what the linac does next.

Illustration by Corinne Mucha

Robust copper

An accelerator cavity is a mostly closed, hollow chamber with an opening on each side for particles to pass through. As a particle moves through the cavity, it picks up energy from an electromagnetic field stored inside. Many cavities can be lined up like beads on a string to generate higher and higher particle energies. 

When SLAC’s linac first started operations, each of its cavities was made exclusively from copper. Each tube-like cavity consisted of a 1-inch-long, 4-inch-wide cylinder with disks on either side. Technicians brazed together more than 80,000 cavities to form a straight particle racetrack.  

Scientists generate radiofrequency waves in an apparatus called a klystron, which distributes them to the cavities. Each SLAC klystron serves a 10-foot section of the beam line. The arrival of the electron bunch inside the cavity is timed to match the peak in the accelerating electric field; when a bunch arrives at the same time as the peak, it is optimally accelerated.

“Particles only gain energy if the variable electric field precisely matches the particle motion along the length of the accelerator,” says Sami Tantawi, an accelerator physicist at Stanford University and SLAC. “The copper must be very clean and the shape and size of each cavity must be machined very carefully for this to happen.”
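As a rough illustration of that timing requirement – a toy sketch only, with made-up numbers rather than SLAC specifications – the energy a bunch picks up in one cavity scales with the accelerating gradient, the cavity length, and the cosine of the phase offset between the bunch's arrival and the crest of the RF wave:

import math

def energy_gain_mev(gradient_mev_per_m, cavity_length_m, phase_offset_deg):
    # Toy model: the bunch gets the full gradient only when it rides exactly on the crest.
    return gradient_mev_per_m * cavity_length_m * math.cos(math.radians(phase_offset_deg))

# Hypothetical 20 MeV/m gradient over a 1-inch (0.0254 m) cavity:
for phase in (0, 10, 30, 90):
    print(phase, "deg:", round(energy_gain_mev(20.0, 0.0254, phase), 3), "MeV")
# 0 deg gives the maximum gain; 90 deg gives none.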

In its original form, SLAC’s linac boosted electrons and their antimatter siblings, positrons, to an energy of 50 billion electronvolts. Researchers used these beams of accelerated particles to study the inner structure of the proton, which led to the discovery of fundamental particles known as quarks.

Today almost all accelerators in the world—including smaller systems for medical and industrial applications—are made of copper. Copper is a good electric conductor, which is important because the radiofrequency waves build up an accelerating field by creating electric currents in the cavity walls. Copper can be machined very smoothly and is cheaper than other options, such as silver.  

“Copper accelerators are very robust systems that produce high acceleration gradients of tens of millions of electronvolts per meter, which makes them very attractive for many applications,” says SLAC accelerator scientist Chris Adolphsen. 

Today, one-third of SLAC’s original copper linac is used to accelerate electrons for the Linac Coherent Light Source, a facility that turns energy from the electron beam into what is currently the world’s brightest X-ray laser light.

Researchers continue to push the technology to higher and higher gradients—that is, larger and larger amounts of acceleration over a given distance. 

“Using sophisticated computer programs on powerful supercomputers, we were able to develop new cavity geometries that support almost 10 times larger gradients,” Tantawi says. “Mixing small amounts of silver into the copper further pushes the technology toward its natural limits.” Cooling the copper to very low temperatures helps as well. Tests at 45 Kelvin—about minus 379 degrees Fahrenheit—have shown a 20-fold increase in acceleration gradients compared to SLAC’s old linac.
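To put the quoted gradient figures side by side, here is a small back-of-the-envelope script (a sketch that simply reuses the round numbers mentioned in this article):

linac_length_m = 3200.0        # "two miles" is roughly 3.2 km
original_energy_gev = 50.0     # energy of the original SLAC linac beam

original_gradient = original_energy_gev * 1000.0 / linac_length_m   # MeV per meter
print("original copper linac: about %.0f MeV/m" % original_gradient)                    # ~16 MeV/m
print("new cavity geometries (about 10x): about %.0f MeV/m" % (10 * original_gradient))
print("cryo-cooled copper tests (about 20x): about %.0f MeV/m" % (20 * original_gradient))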

Copper accelerators have their limitations, though. SLAC’s historic linac produces 120 bunches of particles per second, and recent developments have led to copper structures capable of firing 80 times faster. But for applications that need much higher rates, Adolphsen says, “copper cavities don’t work because they would melt.”

Illustration by Corinne Mucha

Chill niobium

For this reason, crews at SLAC are in the process of replacing one-third of the original copper linac with cavities made of niobium. 

Niobium can support very large bunch rates, as long as it is cooled. At very low temperatures, it is what’s known as a superconductor.

“Below the critical temperature of 9.2 Kelvin, the cavity walls conduct electricity without losses, and electromagnetic waves can travel up and down the cavity many, many times, like a pendulum that goes on swinging for a very long time,” says Anna Grassellino, an accelerator scientist at Fermi National Accelerator Laboratory. “That’s why niobium cavities can store electromagnetic energy very efficiently and can operate continuously.” 

You can find superconducting niobium cavities in modern particle accelerators such as the Large Hadron Collider at CERN and the CEBAF accelerator at Thomas Jefferson National Accelerator Facility. The European X-ray Free-Electron Laser in Germany, the European Spallation Source in Lund, Sweden, and the Facility for Rare Isotope Beams at Michigan State University are all being built using niobium technology. Niobium cavities also appear in designs for the next-generation International Linear Collider.

At SLAC, the niobium cavities will support LCLS-II, an X-ray laser that will produce up to a million ultrabright light flashes per second. The accelerator will have 280 cavities, each about three feet long with a 3-inch opening for the electron beam to fly through. Sets of eight cavities will be strung together into cryomodules that keep the cavities at a chilly 2 Kelvin, which is colder than interstellar space.

Each niobium cavity is made by fusing together two halves stamped from a sheet of pure metal. The cavities are then cleaned very thoroughly because even the tiniest impurities would degrade their performance.

The shape of the cavities is reminiscent of a stack of shiny donuts. This is to maximize the cavity volume for energy storage and to minimize its surface area to cut down on energy dissipation. The exact size and shape also depend on the type of accelerated particle.

“We’ve come a long way since the first development of superconducting cavities decades ago,” Grassellino says. “Today’s niobium cavities produce acceleration gradients of up to about 50 million electronvolts per meter, and R&D work at Fermilab and elsewhere is further pushing the limits.”

Illustration by Corinne Mucha

Hot plasma

Over the past few years, SLAC accelerator scientists have been working on a way to push the limits of particle acceleration even further: accelerating particles using bubbles of ionized gas called plasma. 

Plasma wakefield acceleration is capable of creating acceleration gradients that are up to 1000 times larger than those of copper and niobium cavities, promising to drastically shrink the size of particle accelerators and make them much more powerful.

“These plasma bubbles have certain properties that are very similar to conventional metal cavities,” says SLAC accelerator physicist Mark Hogan. “But because they don’t have a solid surface, they can support extremely high acceleration gradients without breaking down.”

Hogan’s team at SLAC and collaborators from the University of California, Los Angeles, have been developing their plasma acceleration method at the Facility for Advanced Accelerator Experimental Tests, using an oven of hot lithium gas for the plasma and an electron beam from SLAC’s copper linac.

Researchers create bubbles by sending either intense laser light or a high-energy beam of charged particles through plasma. They then send beams of particles through the bubbles to be accelerated.

When, for example, an electron bunch enters a plasma, its negative charge expels plasma electrons from its flight path, creating a football-shaped cavity filled with positively charged lithium ions. The expelled electrons form a negatively charged sheath around the cavity.

This plasma bubble, which is only a few hundred microns in size, travels at nearly the speed of light and is very short-lived. On the inside, it has an extremely strong electric field. A second electron bunch enters that field and experiences a tremendous energy gain. Recent data show possible energy boosts of billions of electronvolts in a plasma column of just a little over a meter.
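A crude way to see what such gradients mean for accelerator size: compare how much length each technology would need to add, say, 10 GeV to a beam, using the ballpark gradients quoted in this article (rounded numbers for illustration, not a design calculation):

target_gev = 10.0

# Ballpark gradients from the article, in MeV per meter.
gradients = {
    "copper (tens of MeV/m)": 20.0,
    "superconducting niobium (up to ~50 MeV/m)": 50.0,
    "plasma wakefield (several GeV per meter)": 5000.0,
}

for technology, gradient_mev_per_m in gradients.items():
    length_m = target_gev * 1000.0 / gradient_mev_per_m
    print("%s: about %.0f m" % (technology, length_m))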

“In addition to much higher acceleration gradients, the plasma technique has another advantage,” says UCLA researcher Chris Clayton. “Copper and niobium cavities don’t keep particle beams tightly bundled and require the use of focusing magnets along the accelerator. Plasma cavities, on the other hand, also focus the beam.”

Much more R&D work is needed before plasma wakefield accelerator technology can be turned into real applications. But it could represent the future of particle acceleration at SLAC and of accelerator science as a whole.

by Manuel Gnida at March 14, 2017 02:34 PM

March 13, 2017

Tommaso Dorigo - Scientificblogging

Posts On Neutrino Experiments, Day 1
The first day of the Neutrino Telescopes XVII conference in Venice is over, and I would like to point you to some short summaries that I published for the conference blog, at http://neutel11.wordpress.com. 
Specifically:


- a summary of the talk on Super-Kamiokande
- a summary of the talk on SNO
- a summary of the talk on KamLAND
- a summary of the talk on K2K and T2K
- a summary of the talk on Daya Bay

You might have noticed that the above experiments were recipients of the 2016 Breakthrough prize in physics. In fact, the session was specifically focusing on these experiments for that reason.

read more

by Tommaso Dorigo at March 13, 2017 05:29 PM

Tommaso Dorigo - Scientificblogging

The Formidable Neutrino
Elementary particles are mysterious and unfathomable, and it takes giant accelerators and incredibly complex devices to study them. In the last 100 years we have made great strides in the investigation of the properties of quarks, leptons, and vector bosons, but I would be lying if I said we know half of what we would like to. In science, the opening of a door reveals others, closed by more complicated locks - and there is no clearer example of this than the investigation of subatomic matter. 

read more

by Tommaso Dorigo at March 13, 2017 11:36 AM

CERN Bulletin

GAC-EPA

The GAC organizes drop-in sessions with individual interviews, which take place on the last Tuesday of each month, except in June, July and December.

The next session will take place on:
Tuesday 28 March, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesdays 25 April, 30 May, 29 August, 26 September, 31 October and 28 November 2017.

The sessions of the Groupement des Anciens are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/.
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

March 13, 2017 11:03 AM

CERN Bulletin

CERN Bulletin

Cine club

Thursday 16 March 2017 at 20:00
CERN Council Chamber

Fire


Directed by Deepa Mehta
Canada / India, 1996, 104 minutes

Sita and Radha are two Indian women stuck in loveless marriages. While Sita is trapped in an arranged relationship with her cruel and unfaithful husband, Jatin, Radha is married to his brother, Ashok, a religious zealot who believes in suppressing desire. As the two women recognize their similar situations, they grow closer, and their relationship becomes far more involved than either of them could have anticipated.

Original version English / Hindi; English subtitles

La souriante Madame Beudet


Directed by Germaine Dulac
France, 1923, 26 minutes

One of the first feminist movies, The Smiling Madame Beudet is the story of an intelligent woman trapped in a loveless marriage. Her husband is used to playing a stupid practical joke in which he puts an empty revolver to his head and threatens to shoot himself. One day, while the husband is away, she puts bullets in the revolver. However, she is stricken with remorse and tries to retrieve the bullets the next morning. Her husband gets to the revolver first – only this time, he points it at her.

Silent

In collaboration with the Women In Technology Community


Wednesday 22 March 2017 at 20:00
CERN Council Chamber

Sans toit ni loi / Vagabond

Directed by Agnès Varda
France, 1985, 105 minutes

A young woman's body is found frozen in a ditch. Through flashbacks and interviews, we see the events that led to her inevitable death.

Original version French ; English subtitles

In collaboration with the Women In Technology Community


March 13, 2017 11:03 AM

CERN Bulletin

Open Day at EVE and School of CERN Staff Association: an opportunity for many parents to discover the facility.

On Saturday, 4 March 2017, the Children’s Day-Care Centre EVE and School of CERN Staff Association opened its doors to allow interested parents to visit the facility.

Staff Association - Carole Dargagnon presents the EVE and school during the open day.

This event was a great success and brought together many families.

The Open Day was held in two sessions (the first session at 10 am and the second at 11 am), each consisting of two parts:

  • a general presentation of the structure by the Headmistress Carole Dargagnon,
  • a tour of the installations with Marie-Luz Cavagna and Stéphanie Palluel, the administrative assistants.

The management team was delighted to offer parents the opportunity to participate in this pleasant event, where everyone could express themselves, ask questions and find answers in a friendly atmosphere.

March 13, 2017 10:03 AM

CERN Bulletin

“VICO”, Visiting Colleagues

“Hello, I am your delegate” – have you heard this line? Maybe you have already had the pleasure of receiving a visit from a Staff Association delegate – then you know what this is all about. As for those of you, who have not yet heard these words, it’s time to get curious.

The Staff Association has decided to embark upon an adventure called “VICO”, Visiting Colleagues. From past experience, we have understood the value of personal, direct contact with the people we represent. We believe that the best way to achieve this is to knock on your office door and pay you a short visit.  We do not want to make you fill in yet another online questionnaire and would much rather collect your feedback in a short conversation face to face.

Of course, we have prepared ourselves thoroughly for these visit rounds, because we do not want to waste your time. We welcome criticism because it can make us aware of our shortcomings, tell us how you perceive our work, and help us improve when needed. So, after a friendly introduction at your office door and taking a brief moment of your time, we will merely ask you a few questions. We are always eager to hear your opinions on different topics.

You will hear the “Hello, I am your delegate” from mid-March to mid-June, whenever your delegate has the chance to pass by your office. We know that realistically we will not succeed in visiting all of you. This should be even more of a reason for you to accept these short polite intrusions as a privilege to talk to your delegate in person: they are all yours for a few minutes, so please speak your mind.

We are looking forward to some enlightening field trips, as we come see you in your offices. We hope that these visits will enhance mutual understanding and help build stronger ties with you through sharing ideas and opinions.

But rest assured, these visits are not your only chance to meet us: if you happen to run into a delegate in the corridor, an understanding smile and even a coffee between colleagues would surely be welcome! We are all working together, so let us learn from each other.

March 13, 2017 10:03 AM

Lubos Motl - string vacua and pheno

Jacques Distler vs some QFT lore
Young physicists in Austin, be careful about some toxic junk in your city

Three weeks ago, in the article titled
Responsibility
physicist Jacques Distler of UT Austin mentioned a statement by Sasha Polyakov that he was "responsible" for quantum field theory. That comment was particularly relevant when Distler taught an undergraduate particle physics course and was frustrated by the following:
The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) make no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air).

This drives me up the fúçkïñg wall. It is precisely wrong.

There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions.
Did the following text defend the legitimacy of Distler's frustration? Well, partly... but I would pick the answer No if I had to.




What's going on? Indeed, textbooks and instructors often – and, according to some measures, always – say that quantum mechanics of one particle ceases to behave well once you switch to relativity – to theories covariant under the Lorentz transformations.

Are these statements right? Are they wrong? And are the correct statements one can make important? It depends what exact statements you have in mind.




What Distler discusses is the existence of the Hilbert space – and Hamiltonian – for one particle, e.g. the Klein-Gordon particle. Does it exist? You bet. If you believe that a Hilbert space of particles exists in free quantum field theory, do the following: Write the basis of that Hilbert space as the basis of a Fock space, i.e. in terms of the basis vectors\[

a^\dagger_{\vec k_1} \cdots a^\dagger_{\vec k_n} \ket 0

\] And simply pick those basis vectors that contain exactly one creation operator. This one-particle subspace of the Hilbert space will evolve to itself under the empty-spacetime evolution operators. In fact, if you use the momentum basis as I did, the Hamiltonian for one real quantum of the real Klein-Gordon equation will be simply\[

H = \sqrt{|\vec k|^2 + m^2}.

\] This is something you may derive from quantum field theory. The operator above is perfectly well-defined in the momentum space. The energy is non-negative, the norms of states are positive, everything works fine.

So has Distler shown that all the statements of the type "one particle isn't consistent in relativistic quantum mechanics" are wrong?

Nope, he hasn't. In particular, he was talking about the statement
...replacing the [non-relativistic, e.g. one-particle] Schrödinger equation with Klein-Gordon make[s] no sense...
But this statement is right at the level of one-particle quantum mechanics because his equation for the evolution of the wave function is not the Klein-Gordon equation. You know, the Klein-Gordon equation is\[

\left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2} + m^2 \right) \Phi = 0.

\] That's a nice, local – perfectly differential equation. On the other hand, the replacement for the non-relativistic Schrödinger equation\[

i\hbar\frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2m} \Delta \psi + V(x) \psi

\] that he derived and that describes the evolution of one-particle states was\[

i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta} \psi + V(x) \psi

Because the square root has a never-ending Taylor expansion, the function of the Laplace operator is a terribly non-local "integral operator" acting on the wave function \(\psi(x,y,z,t)\) in the position representation. So this equation for one particle, even though it follows from the Klein-Gordon quantum field theory, doesn't have the nice and local Klein-Gordon form. It isn't pretty and it isn't fundamental. If you wrote this equation in isolation, you should be worried that the resulting theory isn't relativistic because relativity implies locality and this equation allows a localized wave packet to spread superluminally!
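To make the non-locality explicit, one can expand the square root (a standard Taylor expansion, spelled out here only for illustration):\[ c\sqrt{m^2c^2 - \hbar^2\Delta} = mc^2\sqrt{1 - \frac{\hbar^2\Delta}{m^2c^2}} = mc^2 - \frac{\hbar^2}{2m}\,\Delta - \frac{\hbar^4}{8m^3c^2}\,\Delta^2 - \dots \] The first two terms are just the rest energy plus the usual non-relativistic Schrödinger kinetic term, but the series never terminates: arbitrarily high powers of the Laplacian appear, which is exactly why the operator acts non-locally on \(\psi(x,y,z,t)\).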

What the statements mean is that if you want to use some nice and local equation for a wave function for one particle – i.e. if you literally want to replace Schrödinger's equation by the similar Klein-Gordon equation – you won't find a way to construct (in terms of local functions of derivatives etc.) the probability current and density etc. that would have the desired positivity properties etc. And this statement is just true and important!

If you want to return to simple, fundamental, justifiable, beautiful equations, you can indeed use the Klein-Gordon, Dirac, Maxwell, and other equations. But you must appreciate that they're equations for (field) operators, not for wave functions.

This statement is important because it's not just a mathematical one. It's highly physical, too. In particular, if you consider any relativistic quantum mechanical theory of particles – quantum field theory or something grander, like string theory – it's unavoidable that when you confine particles to distances shorter than the Compton wavelength \(\hbar / mc\) of that particle, you will unavoidably have enough energy so that particle-antiparticle pairs will start to be produced at nonzero probabilities. And in relativity, it's normal for a particle to move at a speed comparable to the speed of light, and then its wavelength is comparable to the Compton wavelength. You can't really trust the one-particle theory at distances comparable to its normal de Broglie wavelength! So the theory is wrong in some very strong sense.

The antiparticles (which are the same as the original particle in the real Klein-Gordon case, just to be sure) inevitably follow from relativity combined with quantum mechanics, and so does the pair production of particles and antiparticles. This physical statement has lots of nearly equivalent mathematical manifestations. For example, local observables in a relativistic quantum theory have to be constructed out of quantum fields. So the 1-particle Hilbert space doesn't have any truly local observables: You can't construct the Klein-Gordon field \(\Phi(x,y,z,t)\) out of operators acting on the 1-particle Hilbert space because the latter operators never change the number of particles while \(\Phi(x,y,z,t)\) does (by one or minus one – it's a combination of creation and annihilation operators). In fact, you can't construct the bilinears in \(\Phi\) and/or its derivatives, either, because while those operators in QFT contain some terms that preserve the number of particles, they also contain equally important terms that change the number of particles by two (particle-antiparticle pair production or pair annihilation) and those are equally important for obtaining the right commutators and other things. The mixing of creation operators for particles and the annihilation operators for antiparticles is absolutely unavoidable if you want to define observables at points (or regions smaller than the Compton wavelength).

There's one more statement that Distler made and that is really wrong. Distler wrote that the problems only begin when you start to consider interactions – and from the context, it's clear that he meant interactions involving several quanta of quantum fields, several particles in the quantum field theory sense. But that's not true.

Problems of "one-particle relativistic quantum mechanics" already appear if you consider the behavior of the single particle in external classical fields. Just squeeze a Klein-Gordon particle – e.g. a Higgs boson – in between two metallic plates whose distance is sent to zero. Will it make sense? No, as I mentioned, the walls start to produce particle-antiparticle quanta in general. Time-dependent Hamiltonians lead to particle production, if you wish. Similarly, if you place these particles in any external classical field, the actual Klein-Gordon field may react in a way to create particle pairs.

So the truncation of the Hilbert space of a quantum field theory to the one-particle subspace is inconsistent not only if you consider interactions of particles in the usual Feynman diagrammatic sense – but even if you consider the behavior of the particle in external classical fields. Whatever you try to do with the particle that goes beyond the stupid simple single free-particle Hamiltonian will force you to acknowledge that the truncated one-particle theory is no good.

We want to do something more with the theory than just write an unmotivated non-local Hamiltonian of the kind \(H\sim \sqrt{m^2+p^2}\) if I use \(\hbar=c=1\) units here. And as soon as we do anything else – justify this ugly and seemingly non-local (and therefore seemingly relativity-violating) Hamiltonian by an elegant theory, study particle interactions, study the behavior of one particle in external classical fields – we just need to switch to the full-blown quantum field theory, otherwise our musings will be inconsistent.

One extra comment. I mentioned that the non-local differential operator allows the wave packet to spread superluminally. How is it possible that such a thing results from a relativistic theory? Well, quantum field theory has no problem with that because when you do any doable measurement, the processes in which a particle spreads in the middle get combined with processes involving antiparticles. When you calculate the "strength of influences spreading superluminally", some Green's functions – which are nonzero for spacelike separations – will combine to the "commutator correlation function" which vanishes at spacelike separation. So the inseparable presence of antiparticles will save the locality for you. The truncation to particles-only (without antiparticles) would indeed violate the locality required by relativity as long as you could experimentally verify it (you need at least some interactions of that particle with something else for that).
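For the free real scalar field, the standard formula behind this cancellation (it's the computation on the Peskin–Schroeder pages mentioned later in this post) is that the commutator is a c-number difference of two propagation amplitudes,\[ [\Phi(x),\Phi(y)] = \int \frac{d^3 k}{(2\pi)^3\, 2\omega_{\vec k}} \left( e^{-ik\cdot(x-y)} - e^{+ik\cdot(x-y)} \right), \qquad \omega_{\vec k} = \sqrt{|\vec k|^2 + m^2}, \] and for spacelike \(x-y\) the two terms are mapped to each other by a continuous Lorentz transformation (which can flip the sign of a spacelike vector), so they cancel exactly – even though each term separately is nonzero.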

While Jacques is right about the possibility to truncate the Hilbert space of quantum field theories to the one-particle subspaces, he's morally wrong about all these big statements – and some of his statements are literally wrong, too. At least morally, the lore that drives him up the wall is right and there are ways to formulate this lore so that it is both literally true and important, too.

So students in Austin are encouraged to actively ignore their grumpy instructor's tirades against the quantum field theory lore and even more encouraged to understand in what sense the lore is true.



As I explain in the comments, many quantum field theory textbooks have wonderful explanations – usually at the very beginning – of the wisdom that Jacques Distler seems to misunderstand, namely why quantum fields and the mixing of sectors with different numbers of particles is unavoidable for consistency of quantum mechanics with special relativity.

The 2008 textbook by my adviser Tom Banks starts the explanation on Page 3, in section "Why quantum field theory?" It says that the probability amplitude for a particle emission at spacetime point \(x\) and its absorption at point \(y\) is unavoidably nonzero for spacelike separations and because it would only be nonzero for one of the two time orderings of \(x,y\), and the ordering of spacelike-separated events isn't Lorentz-invariant, the Lorentz invariance would be broken and one must actually demand that only amplitudes where both orders are summed over are allowed. In other words, as argued on page 5, the only known consistent way to solve this clash with the Lorentz invariance is to postulate that every emission source must also be able to act as an absorption sink and vice versa. When both terms are combined, the sum is still nonzero in the spacelike region but has no brutal discontinuities when the ordering gets reversed.

Also, when the particle carries charges, the emission and absorption in the two related processes must involve particles of opposite charges and one predicts (and Dirac predicted) the existence of antiparticles that are needed for things to work.

Weinberg QFT Volume 1 explains the negative probabilities and energies of the relativistic equations naively used instead of the non-relativistic Schrödinger equation on pages 7, 12, 15... Read it for a while. It's OK but, in my opinion, much less deep than Tom's presentation.

Peskin's and Schroeder's textbook on quantum field theory discusses the non-vanishing of the amplitudes in the spacelike region on page 14 and pages 27-28 discuss that the actual influence of one measurement on another is measured by the commutator of two field operators. And that vanishes for spacelike separations – again, because two processes that are opposite to each other are subtracted.

Without the mixing of creation operators (for particles) and annihilation operators (for antiparticles), you just can't define any observables that would belong to a point or a region and that would behave relativistically (respected the independence of observables that are spacelike separated). Quantum fields are the only known way to avoid this conflict between quantum mechanics and relativity. They are unavoidably superpositions of positive- and negative-energy solutions, and therefore are expanded in sums of creation and annihilation operators. That's why all local discussions make it necessary to allow emission and absorption at the same time – and, consequently, the combination of quantum mechanics and relativity makes it necessary to consider the whole Fock space with a variable number of particles. The one-particle truncation is inconsistent with relativistic dynamics such as time-dependent interactions, emission, or absorption.

In the mathematical language, fields and their functions are necessary for any local observables in relativistic quantum mechanical theories. They always contain terms that change the number of particles – except for the trivial constant operator \(1\). In the physical language, relativity and quantum mechanics simultaneously imply that emission and absorption are linked, antiparticle exists, and scattering amplitudes for particles and antiparticles have to obey identities such as the crossing symmetry.

The teaching of a quantum field theory course could be a good opportunity for Jacques to learn this basic stuff that is often presented on pages such as 3,5,7,12,14... of introductory textbooks.

by Luboš Motl (noreply@blogger.com) at March 13, 2017 07:14 AM

John Baez - Azimuth

Restoring the North Cascades Ecosystem

In 49 hours, the National Park Service will stop taking comments on an important issue: whether to reintroduce grizzly bears into the North Cascades near Seattle. If you leave a comment on their website before then, you can help make this happen! Follow the easy directions here:

http://theoatmeal.com/blog/grizzlies_north_cascades

Please go ahead! Then tell your friends to join in, and give them this link. This can be your good deed for the day.

But if you want more details:

Grizzly bears are traditionally the apex predator in the North Cascades. Without the apex predator, the whole ecosystem is thrown out of balance. I know this from my childhood in northern Virginia, where deer are stripping the forest of all low-hanging greenery with no wolves to control them. With the top predator, the whole ecosystem springs to life and starts humming like a well-tuned engine! For example, when wolves were reintroduced in Yellowstone National Park, it seems that even riverbeds were affected:

There are several plans to restore grizzlies to the North Cascades. On the link I recommended, Matthew Inman supports Alternative C — Incremental Restoration. I’m not an expert on this issue, so I went ahead and supported that. There are actually 4 alternatives on the table:

Alternative A — No Action. They’ll keep doing what they’re already doing. The few grizzlies already there would be protected from poaching, the local population would be advised on how to deal with grizzlies, and the bears would be monitored. All other alternatives will do these things and more.

Alternative B — Ecosystem Evaluation Restoration. Up to 10 grizzly bears will be captured from source populations in northwestern Montana and/or south-central British Columbia and released at a single remote site on Forest Service lands in the North Cascades. This will take 2 years, and then they’ll be monitored for 2 years before deciding what to do next.

Alternative C — Incremental Restoration. 5 to 7 grizzly bears will be captured and released into the North Cascades each year over roughly 5 to 10 years, with a goal of establishing an initial population of 25 grizzly bears. Bears would be released at multiple remote sites. They can be relocated or removed if they cause trouble. Alternative C is expected to reach the restoration goal of approximately 200 grizzly bears within 60 to 100 years.

Alternative D — Expedited Restoration. 5 to 7 grizzly bears will be captured and released into the North Cascades each year until the population reaches about 200, which is what the area can easily support.

So, pick your own alternative if you like!

By the way, the remaining grizzly bears in the western United States live within six recovery zones:

• the Greater Yellowstone Ecosystem (GYE) in Wyoming and southwest Montana,

• the Northern Continental Divide Ecosystem (NCDE) in northwest Montana,

• the Cabinet-Yaak Ecosystem (CYE) in extreme northwestern Montana and the northern Idaho panhandle,

• the Selkirk Ecosystem (SE) in northern Idaho and northeastern Washington,

• the Bitterroot Ecosystem (BE) in central Idaho and western Montana,

• and the North Cascades Ecosystem (NCE) in northwestern and north-central Washington.

The North Cascades Ecosystem consists of 24,800 square kilometers in Washington, with an additional 10,350 square kilometers in British Columbia. In the US, 90% of this ecosystem is managed by the US Forest Service, the US National Park Service, and the State of Washington, and approximately 41% falls within Forest Service wilderness or the North Cascades National Park Service Complex.

For more, read this:

• National Park Service, Draft Grizzly Bear Restoration Plan / Environmental Impact Statement: North Cascades Ecosystem.

The picture of grizzlies is from this article:

• Ron Judd, Why returning grizzlies to the North Cascades is the right thing to do, Pacific NW Magazine, 23 November 2015.

If you’re worried about reintroducing grizzly bears, read it!

The map is from here:

• Krista Langlois, Grizzlies gain ground, High Country News, 27 August 2014.

Here you’ll see the huge obstacles this project has overcome so far.


by John Baez at March 13, 2017 05:01 AM

March 12, 2017

Clifford V. Johnson - Asymptotia

Some Panellists…

My quick trip to South by Southwest was fruitful, and fun. I was in three events. This* was the group for the panel that was hosted by Rick Loverd, who directs the Science and Entertainment Exchange. We had lots of great discussion about Science in Film, TV, and other entertainment media: why it is important to make films more engaging with richer storytelling, to help build broader familiarity with science and scientists, and so on. There were insights from both sides of the "aisle": I spoke about the kind of work I do in this area, coming from the science side of things, and Samantha Corbin-Miller and Stephany Folsom discussed things from their points of view as writers of TV and film. (I was pleasantly surprised to learn that I'd recently (last Summer) looked at Stephany's work in detail: she wrote the upcoming movie Thor: Ragnarok, and I had studied and written notes on the screenplay and met with the production team and director to give them some help [...] Click to continue reading this post

The post Some Panellists… appeared first on Asymptotia.

by Clifford at March 12, 2017 08:10 PM

ZapperZ - Physics and Physicists

The Weak Nuclear Force
I'm going to highlight this latest video by Fermilab's Don Lincoln for a number of reasons. First, the video:



Second, this video is packed with a number of very important and illuminating points. First, he explains the concept of "spin" in both the classical and quantum pictures. This is important because, to many people who do not study physics, the word "spin" conjures up a certain idea that is not correct when applied to quantum mechanics. So this video will hopefully clarify the idea a bit.

But what is more fascinating here is his brief historical overview of the first proposal of the connection between the weak interaction and spin, and how Chien-Shiung Wu should have received the Nobel Prize for this with Yang and Lee. This might be another case of gender bias that prevented a brilliant Chinese female physicist from receiving a deserved prize. Considering the time she lived in and the societal and cultural obstacles she had to overcome, she simply had to be extraordinarily outstanding to get to where she was.

So this is one terrific video all around, and you get to learn a bit about the weak interaction to boot!

Zz.

by ZapperZ (noreply@blogger.com) at March 12, 2017 03:45 PM

Lubos Motl - string vacua and pheno

A stringy interview with Petr Hořava
Giotis has pointed out that the Czech Public Radio recorded a 15-minute English-language interview with Czech string theorist Petr Hořava while he was visiting his old homeland.



I hope that this cutely simple HTML5 audio tag with the MP3 file works for everybody.

For years, Petr has been working at Berkeley. He's well-known as a co-author of the Hořava-Witten picture of "M-theory on spaces with boundaries", whose boundaries carry the \(E_8\) gauge supermultiplet, as they demonstrated.




He was also one of the several forefathers of D-branes in the late 1980s. More recently, he inspired the Hořava-Lifshitz theories of gravity that try to start with a theory invariant under the non-relativistic – and Galilean – symmetries.




He was also given the Neuron Prize, a Czech science award.

In the interview, he talks about string theory, that and why it's the only game in town, what it may explain, what it modifies, what it doesn't, to what extent string theory has been established etc. I think that I would agree with everything he said. Maybe I would prefer a more optimistic tone but that's a different question. ;-)

by Luboš Motl (noreply@blogger.com) at March 12, 2017 09:41 AM

March 10, 2017

Symmetrybreaking - Fermilab/SLAC

A strength test for the strong force

New research could tell us about particle interactions in the early universe and even hint at new physics.

Illustration of a carnival strength test

Much of the matter in the universe is made up of tiny particles called quarks. Normally it’s impossible to see a quark on its own because they are always bound tightly together in groups. Quarks only separate in extreme conditions, such as immediately after the Big Bang, in the centers of stars, or during the high-energy particle collisions generated in particle colliders.

Scientists at Louisiana Tech University are working on a study of quarks and the force that binds them by analyzing data from the ATLAS experiment at the LHC. Their measurements could tell us more about the conditions of the early universe and could even hint at new, undiscovered principles of physics.

The particles that stick quarks together are aptly named “gluons.” Gluons carry the strong force, one of four fundamental forces in the universe that govern how particles interact and behave. The strong force binds quarks into particles such as protons, neutrons and atomic nuclei.

As its name suggests, the strong force is the strongest—it’s 100 times stronger than the electromagnetic force (which binds electrons into atoms), 10,000 times stronger than the weak force (which governs radioactive decay), and a hundred million million million million million million (10³⁹) times stronger than gravity (which attracts you to the Earth and the Earth to the sun).

But this ratio shifts when the particles are pumped full of energy. Just as real glue loses its stickiness when overheated, the strong force carried by gluons becomes weaker at higher energies.

“Particles play by an evolving set of rules,” says Markus Wobisch from Louisiana Tech University. “The strength of the forces and their influence within the subatomic world changes as the particles’ energies increase. This is a fundamental parameter in our understanding of matter, yet has not been fully investigated by scientists at high energies.”

Characterizing the cohesiveness of the strong force is one of the key ingredients to understanding the formation of particles after the Big Bang and could even provide hints of new physics, such as hidden extra dimensions.

“Extra dimensions could help explain why the fundamental forces vary dramatically in strength,” says Lee Sawyer, a professor at Louisiana Tech University. “For instance, some of the fundamental forces could only appear weak because they live in hidden extra dimensions and we can’t measure their full strength. If the strong force is weaker or stronger than expected at high energies, this tells us that there’s something missing from our basic model of the universe.”

By studying the high-energy collisions produced by the LHC, the research team at Louisiana Tech University is characterizing how the strong force pulls energetic quarks into encumbered particles. The challenge they face is that quarks are rambunctious and caper around inside the particle detectors. This subatomic soirée involves hundreds of particles, often arising from about 20 proton-proton collisions happening simultaneously. It leaves a messy signal, which scientists must then reconstruct and categorize.

Wobisch and his colleagues developed a new method to study these rowdy groups of quarks, called jets. By measuring the angles and orientations of the jets, he and his colleagues are learning important new information about what transpired during the collisions—more than what they can deduce by simply counting the jets.

The average number of jets produced by proton-proton collisions directly corresponds to the strength of the strong force in the LHC’s energetic environment.

“If the strong force is stronger than predicted, then we should see an increase in the number of proton-proton collisions that generate three jets. But if the strong force is actually weaker than predicted, then we’d expect to see relatively more collisions that produce only two jets. The ratio between these two possible outcomes is the key to understanding the strong force.”
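Schematically – this is only a toy sketch in Python of the counting logic described in the quote, not the actual ATLAS analysis – the observable boils down to a ratio of event counts, with a larger ratio corresponding to a stronger strong coupling at the probed energy:

def three_to_two_jet_ratio(n_three_jet_events, n_two_jet_events):
    # Toy observable: emitting an extra jet costs an extra factor of the strong
    # coupling, so a larger 3-jet/2-jet ratio points to a stronger strong force.
    # (Illustrative only; the real measurement involves many corrections.)
    return n_three_jet_events / float(n_two_jet_events)

# Made-up example counts, purely for illustration:
print(three_to_two_jet_ratio(12000, 150000))   # 0.08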

After turning on the LHC, scientists doubled their energy reach and have now determined the strength of the strong force up to 1.5 trillion electronvolts, which is roughly the average energy of every particle in the universe just after the Big Bang. Wobisch and his team are hoping to double this number again with more data.

“So far, all our measurements confirm our predictions,” Wobisch says. “More data will help us look at the strong force at even higher energies, giving us a glimpse as to how the first particles formed and the microscopic structure of space-time.”

by Sarah Charley at March 10, 2017 11:11 PM

The n-Category Cafe

The Logic of Space

Mathieu Anel and Gabriel Catren are editing a book called New Spaces for Mathematics and Physics, about all different kinds of notions of “space” and their applications. Among other things, there are chapters about smooth spaces, \(\infty\)-groupoids, topos theory, stacks, and various other things of interest to \(n\)-Café patrons, all of which I am looking forward to reading. There are chapters by our own John Baez about the continuum and Urs Schreiber about higher prequantum geometry. Here is my own contribution:

I intend this to be my last effort at popularization of HoTT for some time, and accordingly it ended up being rather comprehensive. It begins with a 20-page introduction to type theory, from the perspective of a mathematician wanting to use it as an internal language for categories. There are many introductions to type theory, but probably not enough from this point of view, and moreover most popularizations of type theory are rather vague about its categorical semantics; thus I chose (with some additional prompting from the editors) to spend quite some time on this, and be fairly (though not completely) precise about exactly how the categorical semantics of type theory works.

I also decided to emphasize the point of view that type theory (and “syntax” more generally) is a presentation of the initial object in some category of structured categories. Some category theorists respond to this by saying essentially “what good is it to describe that initial object in some complicated way, rather than just studying it categorically?” It’s taken me a while to be able to express the answer in a really satisfying way (at least, one that satisfies me), and I tried to do so here. The short version is that by explicitly constructing an object that has some universal property, we may learn more about it than we can conclude from the mere statement of its universal property. This is one of the reasons that topologists study classifying spaces, category theorists study classifying toposes, and algebraists study free groups. For a longer answer, read the chapter.

After this introduction to ordinary type theory, but before moving on to homotopy type theory, I spent a while on synthetic topology: type theory treated as an internal language for a category of spaces (actual space-spaces, not \(\infty\)-groupoids). This seemed appropriate since the book is about all different kinds of space. It also provides a good justification of type theory’s constructive logic for a classical mathematician, since classical principles like the law of excluded middle and the axiom of choice are simply false in categories of spaces (e.g. a continuous surjection generally fails to have a continuous section).

I also introduced some specific toposes of spaces, such as Johnstone’s topological topos and the toposes of continuous sets and smooth sets. I also mentioned their “local” or “cohesive” nature, and how it can be regarded as explaining why so many objects in mathematics come “naturally” with topological structure. Namely, because mathematics can be done in type theory, and thereby interpreted in any topos, any mathematical construction can be interpreted in a topos of spaces; and since the forgetful functor from a local/cohesive topos preserves most categorical operations, in most cases the “underlying set” of such an interpretation is what we would get by performing the same construction directly with sets. This also tells us in what circumstances we should expect a construction that takes account of topology to disagree with its analogue for discrete sets, and in what circumstances we should expect a set-theoretic construction to inherit a nontrivial topology even when there is no topological input; read the chapter for details.

The subsequent introduction to homotopy type theory and synthetic homotopy theory has nothing particularly special about it, although I decided to downplay the role of “fibration categories” in favor of \((\infty,1)\)-categories when talking about higher-categorical semantics. Current technology for constructing higher-categorical interpretations of type theory uses fibration categories, but I don’t regard that as necessarily essential, and future technology may move away from it. In particular, in addition to the intuition of identity types as path objects in a model category, I think it’s valuable to have a similar intuition for identity types as diagonal morphisms in an \((\infty,1)\)-category.

The last section brings everything together by discussing cohesive homotopy type theory, which is of course one of my current personal interests, modeling the local/cohesive structure of an \(\infty\)-topos with modalities inside homotopy type theory. As I’ve said before, I feel that this perspective greatly clarifies the distinction and relationship between space-spaces and \(\infty\)-groupoid “spaces”, with the connecting “fundamental \(\infty\)-groupoid” functor characterized by a simple universal property.

Finally, in the conclusion I at last allowed myself some philosophical rein to speculate about synthetic theories as foundations for mathematics, as opposed to simply internal languages for categories constructed in an ambient classical mathematics. Once we see that mathematics can be formulated in type theory to apply equally well in a category of spaces as in the category of sets, there is no particular reason to regard the category of sets as the “true” foundation and the category of spaces as “less foundational”. Just as we can construct a category of spaces from a category of sets by equipping sets with topological structure, we can construct a “category of sets” (i.e. a Boolean topos) from a “category of spaces” by restricting to the subcategory of objects with uninteresting topology (the discrete or codiscrete ones). Either category, therefore, can serve as an equally valid “reference frame” from which to describe mathematics.

by shulman (viritrilbia@gmail.com) at March 10, 2017 04:34 PM

ZapperZ - Physics and Physicists

APS Endorses March for Science
The American Physical Society has unanimously endorsed the upcoming March for Science.

I'll be flying out of town on that exact day of the March, so I had decided a while back to simply contribute to it. I get the sentiment and the mission. However, I'm skeptical about the degree of impact that it will make. It will get publicity, and maybe bring some of the issues, especially funding in the physical sciences, to the public's attention.

But for it to take hold, it can't simply be a one-day event, and as much as I've involved myself in many outreach programs, I still see a lot of misinformation and ignorance among the public about science, and physics in particular.

Here's something I've always wanted to do, but never followed through on and lack the resources to do: something similar to a family-tree genealogy, but instead of tracing human ancestors, we focus on a technology "family tree". I've always wanted to start with the iPhone's capacitive touch screen: trace back the technological and scientific roots of this component. I bet you there was a lot of materials science, engineering, and physics in the various patents, published papers, etc. that eventually gave birth to this touch screen.

What it will do is show the public that what they have gotten so used to came out of very basic research in physics and engineering. We can even list all the funding agencies that were part of the direct line of descent of the device and show them how money spent on basic science actually became a major component of our economy.

By doing this, you don't beat around the bush. You TELL the public what they can actually get out of an investment in science, with a concrete example. And it may come out of areas where they never made the connection before.

Zz.

by ZapperZ (noreply@blogger.com) at March 10, 2017 02:25 PM

Clifford V. Johnson - Asymptotia

Upcoming Panels at SXSW

(Image credit: I borrowed this image from the SXSW website.)

It seems that even after finishing the manuscript of the graphic book and turning it in to the publisher*, I can't get away from panels. It's a poor pun, to help make an opening line - I actually mean a different sort of panel. I'll be participating in two (maybe three) of them this Saturday at the South By SouthWest event in Austin, Texas. I'll give you details below, and if you happen to be around, come and see us! This means that I'll not get to see any of the actual conference itself since two (maybe three) events is enough to wipe out most of the day, and then I jump on a plane back to LA.

They're about Science and the media. I'll be talking about the things I've [...] Click to continue reading this post

The post Upcoming Panels at SXSW appeared first on Asymptotia.

by Clifford at March 10, 2017 06:48 AM

The n-Category Cafe

Postdocs in Sydney

Richard Garner writes:

The category theory group at Macquarie is currently advertising a two-year Postdoctoral Research Fellowship to work on a project entitled “Enriched categories: new applications in geometry and logic”.

Applications close 31st March. The position is expected to start in the second half of this year.

More information can be found at the following link:

http://jobs.mq.edu.au/cw/en/job/500525/postdoctoral-research-fellow

Feel free to contact me with further queries.

Richard Garner

by leinster (Tom.Leinster@ed.ac.uk) at March 10, 2017 12:23 AM

March 09, 2017

Marco Frasca - The Gauge Connection

Quote of the day

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”

John Stuart Mill


Filed under: Quote

by mfrasca at March 09, 2017 08:29 PM

Lubos Motl - string vacua and pheno

No, energy non-conservation is a lousy approach to the cosmological constant problem
In mid January, Chad Orzel didn't like some hype about a "proposed solution to the cosmological constant problem":


An article in Physics World promoted an April 2016 paper by Josset, Perez, and Sudarsky recently published in PRL
Dark energy as the weight of violating energy conservation
that claims that the apparently observed cosmological constant is just the accumulated amount of energy created when Nature violated the energy conservation law – and that's supposed to make things more natural.

The 97% crackpot Lee Smolin praised the idea as a speculative approach in the best possible sense that is revolutionary if true. The 60% crackpot George Ellis said that the proposal was viable and no more fanciful than what's being explored by contemporary theoretical physicists – his English isn't as good as mine so I had to improve this man's prose.

Orzel found these comments too diplomatic and, as a "progressive" (a far left whacko), he decided to look for the best possible debunker with the only politically correct number of penises (zero) who should debunk this stuff: Sabine Hossenfelder.




Unfortunately, the politically correct number of penises often has additional consequences that Orzel must have overlooked. So instead of debunking the stuff, Hossenfelder wrote an essay saying
Yes, a violation of energy conservation can explain the cosmological constant
Yes, geniuses in the NASA basements could have constructed a self-propelling spaceship, too. Neither of these two closely related claims sounds convincing, however. It's not surprising that Hossenfelder's attitude isn't too far from Smolin's – after all, the two happily collaborated for quite some time.




I think that these cheap ideas show the deterioration of the kind of "theoretical physics surrounding quantum gravity" that is manifested whenever the researchers are allowed not to be experts in string theory. Without string theory, thinking about the physical phenomena going beyond the effective field theory picture almost unavoidably reduces to pure speculation and random sacrifices of cherished principles. These people may be good at throwing important things into the trash can – string theory, the energy conservation law, etc. – but they never have anything good with which to replace what they have thrown away.

What are Josset et al. doing? First, Einstein's equations of general relativity say\[

R_{\mu \nu} - \tfrac{1}{2}R \, g_{\mu \nu} + \Lambda g_{\mu \nu} = \frac{8 \pi G }{c^4} T_{\mu \nu}.

\] It's an equation for a symmetric tensor. Can these components be split into several pieces? Yes, there is a natural enough way to split the equations into two pieces: the trace and the rest, where the rest is the traceless part.

If you take the trace of Einstein's equations above, i.e. its product with \(g^{\mu\nu}\) summed over the two indices, you will get\[

R(1-D/2) + D\Lambda = \frac{8\pi G}{c^4} T

\] where the spacetime around us has \(D=4\). Here, \(R\) and \(T\) are the traces of \(R_{\mu\nu}\) and \(T_{\mu\nu}\), respectively. You may calculate the traceless part by subtracting \(g_{\mu\nu}/D\) times the trace from the original Einstein's equations. For \(D=4\), we get:\[

R_{\mu \nu} - \tfrac{1}{4}R \, g_{\mu \nu} = \frac{8 \pi G }{c^4} (T_{\mu \nu} - \tfrac{1}{4} Tg_{\mu\nu})

\] You may derive the trace and traceless part of Einstein's equations from the principle of least action. How? You simply consider separate variations of \(g_{\mu\nu}(x^\alpha)\) that preserve the determinant of the metric at each point (you get the traceless equations in this way); and the variation of the scalar factor (that's how you get the equation for the trace).
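As a quick sanity check of this split (my own numerical sketch, not anything from the paper under discussion), you can build a random metric and a random "Ricci tensor", define the stress-energy tensor so that the full Einstein's equations hold in units with \(8\pi G/c^4=1\), and verify that the trace relation above holds while the cosmological constant drops out of the traceless part entirely:

# Numerical sanity check of the trace/traceless split of Einstein's equations
# (units with 8*pi*G/c^4 = 1, D = 4); a sketch, not taken from Josset et al.
import numpy as np

rng = np.random.default_rng(0)
D, Lam = 4, 0.7

A = rng.normal(size=(D, D))
g = np.eye(D) + 0.1 * (A + A.T) / 2        # random symmetric metric, close to flat
B = rng.normal(size=(D, D))
Ric = (B + B.T) / 2                        # random symmetric "Ricci tensor"
g_inv = np.linalg.inv(g)

R = np.einsum('ab,ab->', g_inv, Ric)       # Ricci scalar R = g^{ab} R_{ab}
T = Ric - 0.5 * R * g + Lam * g            # define T_{ab} so the full equations hold
T_tr = np.einsum('ab,ab->', g_inv, T)      # trace T = g^{ab} T_{ab}

# Trace part: R(1 - D/2) + D*Lambda = T
print(np.isclose(R * (1 - D / 2) + D * Lam, T_tr))

# Traceless part: Lambda has dropped out completely
print(np.allclose(Ric - (R / D) * g, T - (T_tr / D) * g))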

Fine, the "reduced" Einstein's equations where you only discuss the traceless tensor – from varying the metric while preserving its determinant – is known as unimodular gravity.

Great. So the cosmological constant doesn't appear in the traceless part at all. It only affects the trace part. The covariant derivative (or divergence) \(\nabla^\mu\) of the original Einstein's equations is identically zero. Note that the covariant derivative of the metric (and therefore the cosmological constant term) is identically zero, much like the covariant derivative of the Einstein tensor. The covariant derivative of the stress-energy tensor vanishes if the equations of motion for the matter field are obeyed.

But when you treat the trace and traceless equations separately, this automatic vanishing of the covariant derivative disappears. So you may decide not to impose the trace part of Einstein's equations (which contains \(\Lambda\)) at all. Instead, you may try to derive this condition from \(\nabla^\mu T_{\mu\nu}=0\). But once you have this continuity equation, you may change it to\[

\nabla^\mu T_{\mu\nu} = J_\nu = \nabla_\nu Q

\] There's a current \(J_\nu\) which measures the violation of the energy conservation law and you may also decide that it should be equal to some gradient of some scalar \(Q\). Great. There is of course no justification for any of these things. You are just randomly abolishing some equations of physics. There is no good conceivable source for any nonzero \(Q\) – Josset et al. therefore enumerate some of the crackpots' most popular ways to bastardize physics, namely modifications of quantum mechanics and spacetime discreteness at the Planck scale.

None of these things is really consistent, let alone well-defined according to some known rules, so these excuses are equivalent to enumerating some tooth fairies and Harry Potters, but they don't care.

You can see that the effect is nothing else than some hypothesized contribution to \(T_{\mu\nu}\) that, after some time, becomes equivalent to some component of some \(Q\).

There is a way to improve this whole theory and eliminate all the nonsensical "violations of the physical laws" while keeping the predictions exactly the same. You just postulate that \(\Lambda\), the cosmological constant, is no longer constant. Instead, it is some variable field – quintessence, if you wish – whose value may evolve due to the very same effects that were driving \(Q\) above. Great, so some mysterious effects – tooth fairy, violations of postulates of quantum mechanics, global warming, discreteness of the spacetime, or any pseudoscience that your New Age religious cult considers fashionable right now – just make the current value of the vacuum energy density \(\Lambda\) equal to \(10^{-123}\) in the Planck units.

Have you solved anything? I don't think so. You have just parameterized the problem in some way and added some implausible supernatural phenomena as "possible" explanations of the problem. But you haven't actually made any steps towards proving that the problems have been solved in any way. And you haven't provided the readers with any evidence that your additional hypotheses are right. So you have just confined yourself into a less likely axiomatic system than you started with. The problem is worse than it was before you tried to do something.

If I quote Feynman's famous cargo cult commencement speech:
There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
That's exactly what Josset et al. – and many others – don't achieve. They determine that if a tooth fairy is behind the cosmological constant, she must have \(10^{123}\) times longer teeth than her legs. But it's just a parameterization of the pre-existing problems within a particular framework that makes extra assumptions: nothing new comes right from the "theory". The theory is scientifically worthless – it's just a way to decorate the numbers and problems with some arbitrary new words and visions. But science must also produce results, i.e. reduce the number of unexplained independent mysteries or parameters or observations.

At Hossenfelder's blog, Haelfix wrote:
It's not entirely settled whether Unimodular gravity differs from GR's prediction at the quantum level. This goes back and forth endlessly in the literature.

At the very least, it's not clear what you gain when trying to solve the cosmological constant problem. There is still a fine-tuning problem; the difference is – they say – that there is only one number to explain, and not an entire renormalization tower of unknown physics which tends to drag you (order by order) towards a Planckian value.
Amen to that.

It's questionable whether you may separately modify the trace and non-trace at all, especially in quantum gravity. Needless to say, you need a really consistent theory of quantum gravity to approach any such question really meaningfully. String theory – the only known and probably the only mathematically possible framework that achieves that – doesn't allow you to treat the trace separately from the rest – there is no "unimodular gravity" according to string theory. Outside string theory, there are no well-defined rules, which is one way to understand why the people answering these questions without string theory can't agree about the answer.

(All the "critics" love to say all the nonsense about string theory's making no predictions. Unimodular gravity is just one example among many: Without string theory, you can't say anything about its validity. With string theory, you can make things sharp. If string theory is right, unimodular gravity is not. String theory answers all such qualitative "are you allowed to modify this or that" questions.)

But Haelfix pragmatically measures the "progress" in the second paragraph. The fine-tuning problem for the cosmological constant isn't really solved because you haven't explained why the apparent cosmological constant has converged to the observed tiny value after all these uncontrollable violations of the conservation law. And when the cosmological constant problem is approached consistently, it differs from problems with non-renormalizability of theories because the vacuum energy term is maximally "relevant" (dimension zero operator) so it doesn't produce a tower of high-dimension operators.

In effective field theory, you just need to adjust one term, the bare cosmological constant, and everything is fine. It's still true that you need such an adjustment in the fairy-tale by Josset et al. They haven't improved anything that is linked to the predictions. The ultimate question is why a theory that goes beyond the effective field theory – e.g. string theory – where the cosmological constant isn't adjustable is predicting the value we are observing. The story by Josset et al. isn't helping to solve this actual problem at all. It just parameterizes the problem in terms of some specific hypothesized tooth fairies that don't seem to be helpful in making things better.

By the way, Hossenfelder gave a "cute" response to Haelfix's comment:
And, yes, what Haelfix says above is correct, there is a long back and forth in the literature about whether or not quantizing unimodular gravity helps with the cosmological constant problem by taming vacuum fluctuations, but the calculations in the paper above doesn't depend on the quantization.
Calculations in a paper may be independent of effects in quantum gravity, but if quantum gravity prohibits assumptions or results of the calculation (such as unimodular gravity as an allowed inequivalent theory), then the calculations in the paper are clearly irrelevant for, or inapplicable to, our Universe, which follows quantum, not classical, laws in the end. In other words, the paper is strictly worthless because the approximation it uses breaks down for the purpose where it's used.

I feel some déjà vu. When the cosmological constant problem was considered the hottest problem in physics around 2000, many solutions were proposed, and the "violation of the energy conservation" was probably one of them. I can't remember who proposed it at that time, and I don't think it's important or that he or she deserves any credit. But this is just another example showing that if someone is trying to do research in physics without looking for any actual "laws", and without taking some "laws" seriously enough, he won't see any progress. He will just randomly oscillate back and forth – without any way to determine the positive and negative directions – in the landscape of speculations. This is not what science should do, which is why all the people should be expected to know the state-of-the-art framework to address all such questions, namely string theory. It could still be possible that someone finds "something else" or a "problem with string theory". But no one should be supported for some Brownian motion in the landscape of speculations.

by Luboš Motl (noreply@blogger.com) at March 09, 2017 06:48 AM

March 07, 2017

Symmetrybreaking - Fermilab/SLAC

Researchers face engineering puzzle

How do you transport 70,000 tons of liquid argon nearly a mile underground?


Nearly a mile below the surface of Lead, South Dakota, scientists are preparing for a physics experiment that will probe one of the deepest questions of the universe: Why is there more matter than antimatter?

To search for that answer, the Deep Underground Neutrino Experiment, or DUNE, will look at minuscule particles called neutrinos. A beam of neutrinos will travel 800 miles through the Earth from Fermi National Accelerator Laboratory to the Sanford Underground Research Facility, headed for massive underground detectors that can record traces of the elusive particles.

Because neutrinos interact with matter so rarely and so weakly, DUNE scientists need a lot of material to create a big enough target for the particles to run into. The most widely available (and cost effective) inert substance that can do the job is argon, a colorless, odorless element that makes up about 1 percent of the atmosphere.

The researchers also need to place the detector full of argon far below Earth’s surface, where it will be protected from cosmic rays and other interference.

“We have to transfer almost 70,000 tons of liquid argon underground,” says David Montanari, a Fermilab engineer in charge of the experiment’s cryogenics. “And at this point we have two options: We can either transfer it as a liquid or we can transfer it as a gas.”

Either way, this move will be easier said than done.

Liquid or gas?

The argon will arrive at the lab in liquid form, carried inside of 20-ton tanker trucks. Montanari says the collaboration initially assumed that it would be easier to transport the argon down in its liquid form—until they ran into several speed bumps. 

Transporting liquid vertically is very different from transporting it horizontally for one important reason: pressure. The bottom of a mile-tall pipe full of liquid argon would have a pressure of about 3000 pounds per square inch—equivalent to 200 times the pressure at sea level. According to Montanari, to keep these dangerous pressures from occurring, multiple de-pressurizing stations would have to be installed throughout the pipe. 

Even with these depressurizing stations, safety would still be a concern. While argon is non-toxic, if released into the air it can reduce access to oxygen, much like carbon monoxide does in a fire. In the event of a leak, pressurized liquid argon would spill out and could potentially break its vacuum-sealed pipe, expanding rapidly to fill the mine as a gas. One liter of liquid argon would become about 800 liters of argon gas, or four bathtubs’ worth. 
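Both of the figures quoted above follow from back-of-the-envelope arithmetic. The sketch below uses approximate handbook values for argon rather than numbers supplied by the collaboration:

# Rough check of the quoted figures, using approximate handbook values for argon.
rho_liq = 1395.0      # kg/m^3, liquid argon near its boiling point (approximate)
rho_gas = 1.78        # kg/m^3, argon gas at roughly 0 degrees C (approximate)
depth = 1609.0        # m, about one mile
g = 9.81              # m/s^2
PSI = 6894.76         # pascals per pound per square inch
P_SEA = 101325.0      # pascals, standard sea-level pressure

p_bottom = rho_liq * g * depth   # hydrostatic pressure at the bottom of the pipe
print(f"pressure at the bottom: {p_bottom / PSI:.0f} psi "
      f"(about {p_bottom / P_SEA:.0f} times sea-level pressure)")

print(f"liquid-to-gas expansion factor: about {rho_liq / rho_gas:.0f}")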

Even without a leak, perhaps the most important challenge in transporting liquid argon is preventing it from evaporating into a gas along the way, according to Montanari. 

To remain a liquid, argon is kept below a brisk temperature of minus 180 degrees Celsius (minus 300 degrees Fahrenheit).

“You need a vacuum-insulated pipe that is a mile long inside a mine shaft,” Montanari says. “Not exactly the most comfortable place to install a vacuum-insulated pipe.”

To avoid these problems, the cryogenics team made the decision to send the argon down as gas instead. 

Routing the pipes containing liquid argon through a large bath of water will warm it up enough to turn it into gas, which will be able to travel down through a standard pipe. Re-condensers located underground, acting as massive air conditioners, will then cool the gas until it becomes a liquid again.

“The big advantage is we no longer have vacuum insulated pipe,” Montanari says. “It is just a straight piece of pipe.”

Argon gas poses much less of a safety hazard because it is about 1000 times less dense than liquid argon. High pressures would be unlikely to build up and necessitate depressurizing stations, and if a leak occurred, it would not expand as much and cause the same kind of oxygen deficiency. 

The process of filling the detectors with argon will take place in four stages that will take almost two years, Montanari says. This is due to the amount of available cooling power for re-condensing the argon underground. There is also a limit to the amount of argon produced in the US every year, of which only so much can be acquired by the collaboration and transported to the site at a time.

 


Illustration by Ana Kova

Argon for answers

Once filled, the liquid argon detectors will pick up light and electrons produced by neutrino interactions.

Part of what makes neutrinos so fascinating to physicists is their habit of oscillating from one flavor—electron, muon or tau—to another. The parameters that govern this “flavor change” are tied directly to some of the most fundamental questions in physics, including why there is more matter than antimatter. With careful observation of neutrino oscillations, scientists in the DUNE collaboration hope to unravel these mysteries in the coming years.  

“At the time of the Big Bang, in theory, there should have been equal amounts of matter and antimatter in the universe,” says Eric James, DUNE’s technical coordinator. That matter and antimatter should have annihilated, leaving behind an empty universe. “But we became a matter-dominated universe.” 

James and other DUNE scientists will be looking to neutrinos for the mechanism behind this matter favoritism. Although the fruits of this labor won’t appear for several years, scientists are looking forward to being able to make use of the massive detectors, which are hundreds of times larger than current detectors that hold only a few hundred tons of liquid argon. 

Currently, DUNE scientists and engineers are working at CERN to construct Proto-DUNE, a miniature replica of the DUNE detector filled with only 300 tons of liquid argon that can be used to test the design and components. 

“Size is really important here,” James says. “A lot of what we’re doing now is figuring out how to take those original technologies which have already been developed... and taking them to this next level with bigger and bigger detectors.”

by Daniel Garisto at March 07, 2017 05:27 PM

John Baez - Azimuth

Pi and the Golden Ratio

Two of my favorite numbers are pi:

\pi = 3.14159...

and the golden ratio:

\displaystyle{ \Phi = \frac{\sqrt{5} + 1}{2} } = 1.6180339...

They’re related:

\pi = \frac{5}{\Phi} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}  \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}} \cdots

Greg Egan and I came up with this formula last weekend. It’s probably not new, and it certainly wouldn’t surprise experts, but it’s still fun coming up with a formula like this. Let me explain how we did it.

History has a fractal texture. It’s not exactly self-similar, but the closer you look at any incident, the more fine-grained detail you see. The simplified stories we learn about the history of math and physics in school are like blurry pictures of the Mandelbrot set. You can see the overall shape, but the really exciting stuff is hidden.

François Viète was a French mathematician who doesn't show up in those simplified stories. He studied law at Poitiers, graduating in 1559. He began his career as an attorney at a quite high level, with cases involving the widow of King Francis I of France and also Mary, Queen of Scots. But his true interest was always mathematics. A friend said he could think about a single question for up to three days, his elbow on the desk, feeding himself without changing position.

Nonetheless, he was highly successful in law. By 1590 he was working for King Henry IV. The king admired his mathematical talents, and Viète soon confirmed his worth by cracking a Spanish cipher, thus allowing the French to read all the Spanish communications they were able to obtain.

In 1591, François Viète came out with an important book, introducing what is called the new algebra: a symbolic method for dealing with polynomial equations. This deserves to be much better known; it was very familiar to Descartes and others, and it was an important precursor to our modern notation and methods. For example, he emphasized care with the use of variables, and advocated denoting known quantities by consonants and unknown quantities by vowels. (Later people switched to using letters near the beginning of the alphabet for known quantities and letters near the end like x,y,z for unknowns.)

In 1593 he came out with another book, Variorum De Rebus Mathematicis Responsorum, Liber VIII. Among other things, it includes a formula for pi. In modernized notation, it looks like this:

\displaystyle{ \frac2\pi = \frac{\sqrt 2}2 \cdot \frac{\sqrt{2+\sqrt 2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt 2}}}{2} \cdots}

This is remarkable! First of all, it looks cool. Second, it’s the earliest known example of an infinite product in mathematics. Third, it’s the earliest known formula for the exact value of pi. In fact, it seems to be the earliest formula representing a number as the result of an infinite process rather than of a finite calculation! So, Viète’s formula has been called the beginning of analysis. In his article “The life of pi”, Jonathan Borwein went even further and called Viète’s formula “the dawn of modern mathematics”.

How did Viète come up with his formula? I haven’t read his book, but the idea seems fairly clear. The area of the unit circle is pi. So, you can approximate pi better and better by computing the area of a square inscribed in this circle, and then an octagon, and then a 16-gon, and so on:

If you compute these areas in a clever way, you get this series of numbers:

\begin{array}{ccl} A_4 &=& 2 \\  \\ A_8 &=& 2 \cdot \frac{2}{\sqrt{2}} \\  \\ A_{16} &=& 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}}  \\  \\ A_{32} &=& 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}}  \end{array}

and so on, where A_n is the area of a regular n-gon inscribed in the unit circle. So, it was only a small step for Viète (though an infinite leap for mankind) to conclude that

\displaystyle{ \pi = 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}} \cdots }

or, if square roots in a denominator make you uncomfortable:

\displaystyle{ \frac2\pi = \frac{\sqrt 2}2 \cdot \frac{\sqrt{2+\sqrt 2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt 2}}}{2} \cdots}

The basic idea here would not have surprised Archimedes, who rigorously proved that

223/71 < \pi < 22/7

by approximating the circumference of a circle using a regular 96-gon. Since 96 = 2^5 \times 3, you can draw a regular 96-gon with ruler and compass by taking an equilateral triangle and bisecting its edges to get a hexagon, bisecting the edges of that to get a 12-gon, and so on up to 96. In a more modern way of thinking, you can figure out everything you need to know by starting with the angle \pi/3 and using half-angle formulas 5 times to work out the sine or cosine of \pi/96. And indeed, before Viète came along, Ludolph van Ceulen had computed pi to 35 digits using a regular polygon with 2^{62} sides! So Viète’s daring new idea was to give an exact formula for pi that involved an infinite process.
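Here's a quick numerical illustration of those 96-gon bounds (it leans on a library's sine and tangent rather than on Archimedes' half-angle work, so it's a check, not a derivation):

# Numerical illustration of Archimedes' 96-gon bounds.
import math

n = 96
inscribed = n * math.sin(math.pi / n)        # half-perimeter of the inscribed 96-gon
circumscribed = n * math.tan(math.pi / n)    # half-perimeter of the circumscribed 96-gon

print(223 / 71 < inscribed < math.pi < circumscribed < 22 / 7)   # True
print(223 / 71, inscribed, math.pi, circumscribed, 22 / 7)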

Now let’s see in detail how Viète’s formula works. Since there’s no need to start with a square, we might as well start with a regular n-gon inscribed in the circle and repeatedly bisect its sides, getting better and better approximations to pi. If we start with a pentagon, we’ll get a formula for pi that involves the golden ratio!

We have

\displaystyle{ \pi = \lim_{k \to \infty} A_k }

so we can also compute pi by starting with a regular n-gon and repeatedly doubling the number of vertices:

\displaystyle{ \pi = \lim_{k \to \infty} A_{2^k n} }

The key trick is to write A_{2^k n} as a ‘telescoping product’:

A_{2^k n} = A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots \frac{A_{2^k n}}{A_{2^{k-1} n}}

Thus, taking the limit as k \to \infty we get

\displaystyle{ \pi = A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots }

where we start with the area of the n-gon and keep ‘correcting’ it to get the area of the 2n-gon, the 4n-gon, the 8n-gon and so on.

There’s a simple formula for the area of a regular n-gon inscribed in a circle. You can chop it into 2 n right triangles, each of which has base \sin(\pi/n) and height \cos(\pi/n), and thus area n \sin(\pi/n) \cos(\pi/n):

Thus,

A_n = n \sin(\pi/n) \cos(\pi/n) = \displaystyle{\frac{n}{2} \sin(2 \pi / n)}

This lets us understand how the area changes when we double the number of vertices:

\displaystyle{ \frac{A_{n}}{A_{2n}} = \frac{\frac{n}{2} \sin(2 \pi / n)}{n \sin(\pi / n)} = \frac{n \sin( \pi / n) \cos(\pi/n)}{n \sin(\pi / n)} = \cos(\pi/n) }

This is nice and simple, but we really need a recursive formula for this quantity. Let’s define

\displaystyle{ R_n = 2\frac{A_{n}}{A_{2n}} = 2 \cos(\pi/n) }

Why the factor of 2? It simplifies our calculations slightly. We can express R_{2n} in terms of R_n using the half-angle formula for the cosine:

\displaystyle{ R_{2n} = 2 \cos(\pi/2n) = 2\sqrt{\frac{1 + \cos(\pi/n)}{2}} = \sqrt{2 + R_n} }

Now we’re ready for some fun! We have

\begin{array}{ccl} \pi &=& \displaystyle{ A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots }  \\ \\ & = &\displaystyle{ A_n \cdot \frac{2}{R_n} \cdot \frac{2}{R_{2n}} \cdot \frac{2}{R_{4n}} \cdots } \end{array}

so using our recursive formula R_{2n} = \sqrt{2 + R_n}, which holds for any n, we get

\pi =  \displaystyle{ A_n \cdot \frac{2}{R_n} \cdot \frac{2}{\sqrt{2 + R_n}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + R_n}}} \cdots }

I think this deserves to be called the generalized Viète formula. And indeed, if we start with a square, we get

A_4 = \displaystyle{\frac{4}{2} \sin(2 \pi / 4)} = 2

and

R_4 = 2 \cos(\pi/4) = \sqrt{2}

giving Viète’s formula:

\pi = \displaystyle{ 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}} \cdots }

as desired!

But what if we start with a pentagon? For this it helps to remember a beautiful but slightly obscure trig fact:

\cos(\pi / 5) = \Phi/2

and a slightly less beautiful one:

\displaystyle{ \sin(2\pi / 5) = \frac{1}{2} \sqrt{2 + \Phi} }

It’s easy to prove these, and I’ll show you how later. For now, note that they imply

A_5 = \displaystyle{\frac{5}{2} \sin(2 \pi / 5)} = \frac{5}{4} \sqrt{2 + \Phi}

and

R_5 = 2 \cos(\pi/5) = \Phi

Thus, the formula

\pi =  \displaystyle{ A_5 \cdot \frac{2}{R_5} \cdot \frac{2}{\sqrt{2 + R_5}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + R_5}}} \cdots }

gives us

\pi =  \displaystyle{ \frac{5}{4} \sqrt{2 + \Phi} \cdot \frac{2}{\Phi} \cdot \frac{2}{\sqrt{2 + \Phi}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdots }

or, cleaning it up a bit, the formula we want:

\pi = \frac{5}{\Phi} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}  \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}} \cdots

Voilà!
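If you want to watch the digits appear, here's a short numerical check of the generalized Viète formula, run once with the square's starting values and once with the pentagon's:

# Numerical check of the generalized Viete formula
#   pi = A_n * (2/R_n) * (2/R_{2n}) * (2/R_{4n}) * ...,  with R_{2n} = sqrt(2 + R_n)
import math

def viete_pi(area, ratio, terms=40):
    product = area
    for _ in range(terms):
        product *= 2.0 / ratio
        ratio = math.sqrt(2.0 + ratio)
    return product

Phi = (math.sqrt(5.0) + 1.0) / 2.0

print(viete_pi(2.0, math.sqrt(2.0)))               # square:   A_4 = 2, R_4 = sqrt(2)
print(viete_pi(1.25 * math.sqrt(2.0 + Phi), Phi))  # pentagon: A_5 = (5/4)sqrt(2+Phi), R_5 = Phi
print(math.pi)                                     # both products agree with pi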

There’s a lot more to say, but let me just explain the slightly obscure trigonometry facts we needed. To derive these, I find it nice to remember that a regular pentagon, and the pentagram inside it, contain lots of similar triangles:

Using the fact that all these triangles are similar, it’s easy to show that for any one, the ratio of the long side to the short side is \Phi to 1, since

\displaystyle{\Phi = 1 + \frac{1}{\Phi} }

Another important fact is that the pentagram trisects the interior angle of the regular pentagon, breaking the interior angle of 108^\circ = 3\pi/5 into 3 angles of 36^\circ = \pi/5:

Again this is easy and fun to show.

Combining these facts, we can prove that

\displaystyle{ \cos(2\pi/5) = \frac{1}{2\Phi}  }

and

\displaystyle{ \cos(\pi/5) = \frac{\Phi}{2} }

To prove the first equation, chop one of those golden triangles into two right triangles and do things you learned in high school. To prove the second, do the same things to one of the short squat isosceles triangles:

Starting from these equations and using \cos^2 \theta + \sin^2 \theta = 1, we can show

\displaystyle{ \sin(2\pi/5) = \frac{1}{2}\sqrt{2 + \Phi}}

and, just for completeness (we don’t need it here):

\displaystyle{ \sin(\pi/5) = \frac{1}{2}\sqrt{3 - \Phi}}

These require some mildly annoying calculations, where it helps to use the identity

\displaystyle{\frac{1}{\Phi^2} = 2 - \Phi }

Okay, that’s all for now! But if you want more fun, try a couple of puzzles:

Puzzle 1. We’ve gotten formulas for pi starting from a square or a regular pentagon. What formula do you get starting from an equilateral triangle?

Puzzle 2. Using the generalized Viète formula, prove Euler’s formula

\displaystyle{  \frac{\sin x}{x} = \cos\frac{x}{2} \cdot \cos\frac{x}{4} \cdot \cos\frac{x}{8} \cdots }

Conversely, use Euler’s formula to prove the generalized Viète formula.

So, one might say that the real point of Viète’s formula, and its generalized version, is not any special property of pi, but Euler’s formula.


by John Baez at March 07, 2017 04:47 PM

March 03, 2017

Tommaso Dorigo - Scientificblogging

Decision Trees, Explained To Kids
Decision trees are one of the many players in the booming field of supervised machine learning. They can be used to classify elements into two or more classes, depending on their characteristics. They are of great interest for particle physics applications, as we always need to decide, on a statistical basis, what kind of physics process originated the particle collision we see in our detector.
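As a taste of what this looks like in practice, here is a minimal sketch on made-up data (not anything from the full post) that uses scikit-learn to separate toy "signal" collisions from "background" ones using two invented features:

# Minimal decision-tree sketch on a made-up dataset (illustration only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Hypothetical features: say, an invariant mass and a transverse momentum.
background = rng.normal(loc=[90.0, 20.0], scale=[15.0, 8.0], size=(500, 2))
signal = rng.normal(loc=[125.0, 40.0], scale=[5.0, 10.0], size=(500, 2))

X = np.vstack([background, signal])
y = np.array([0] * 500 + [1] * 500)   # 0 = background, 1 = signal

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[124.0, 38.0], [70.0, 15.0]]))   # -> likely [1, 0]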

read more

by Tommaso Dorigo at March 03, 2017 09:30 AM

March 02, 2017

Symmetrybreaking - Fermilab/SLAC

Hey Fermilab, it’s a Monkee

Micky Dolenz, best known as a vocalist and drummer in 1960s pop band The Monkees, turns out to be one of Fermi National Accelerator Laboratory’s original fans.

Fermilab Director Nigel Lockyer gives a smiling Micky Dolenz a Fermilab pin

“Dear Ms. Higgins,” began the email to an employee of Fermi National Accelerator Laboratory. “My name is Micky Dolenz. I am in the entertainment business and probably best known for starring in a ’60s TV show called The Monkees. I have also been a big fan of particle physics for many decades.”

The message, which laboratory archivist Valerie Higgins received in November 2016, was legit. And it turns out Dolenz wasn’t kidding about his love of physics. Dolenz visited Fermilab on February 10 and impressed and amazed the scientists he met with his knowledge of (and genuine affection for) the science of quarks, leptons and bosons. Dolenz was, by all accounts, just as excited to meet with Fermilab scientists as they were to meet with him.

“He was so enthusiastic about the lab,” Higgins says. “It was such a treat to see someone of his stature and popularity be so interested and knowledgeable about our kind of physics.”

Previously unbeknownst to most of the lab’s employees, Dolenz’s association with Fermilab actually stretches back more than 40 years. The last time Dolenz visited Fermilab, the year was 1970. The Monkees TV show had wound down, and Dolenz, then 25, was starring in a play called Remains to Be Seen at the Pheasant Run Playhouse in nearby St. Charles, Illinois. Fermilab wasn’t even called Fermilab yet—it still went by the name National Accelerator Laboratory.

Dolenz says he remembers his first visit well. At the time, the lab consisted of a few trailers and bungalows—Fermilab's now-iconic high-rise building, Wilson Hall, would not be completed until 1973. Dolenz had lunch with several of the scientists, then toured the construction site for the Main Ring, the future home of Fermilab's first superconducting accelerator, the Tevatron.

Dolenz captured some of his visit on 16mm film, footage he says he still has in storage. Dolenz called his previous tour of Fermilab “wonderful” and “a dream come true.”

Dolenz credits a junior high science teacher with sparking his interest in physics. He spent much of his childhood in Los Angeles building oscilloscopes and transceivers for ham radios and other gadgets. “I was always curious, always building stuff,” he says. “While the other kids were reading Superman comics, I was reading Science News. I loved it all, particularly particle physics and quantum physics.” 

Dolenz was in training to be an architect, but at age 20, the Monkees audition offered him the opportunity to catapult to worldwide fame as a TV star and musician instead. (“I'm not an idiot,” he says of accepting the role.) Still, he maintained his interest in science—his first email address, created in the 1990s, was “Higgs137,” referencing both the then-undiscovered Higgs boson and the approximate inverse of the fine structure constant.


Fermilab Director Nigel Lockyer, left, and Deputy Director Joe Lykken, right, talk with Monkee Micky Dolenz during his tour.

Photo by Reidar Hahn, Fermilab

That interest in science has remained strong, Fermilab physicists noted during the February tour. Dolenz toured the underground cavern that houses detectors for the MINOS, NOvA and MINERvA neutrino experiments, the Muon g-2 experiment hall (where scientists played the theme from The Monkees when he walked in), and the DZero detector in the long-since completed Main Ring. He also spent time in three control rooms.

In every location, he impressed the scientists he met with his understanding of physics and his full-on joy at seeing science in action.

“Who knew he is a life-long physics aficionado?” says scientist Adam Lyon, who gave Dolenz his Tevatron tour. “I had a great time talking with him.”

Dolenz says he sees plenty of connection between his twin interests of physics and music, noting that Einstein played the violin; Richard Feynman played bongos; and Queen guitarist Brian May is an astrophysicist on several experimental collaborations.

“According to theory the universe is constantly vibrating, down to even the smallest particles,” Dolenz says. “We talked a lot about vibrations in the ’60s, and Eastern philosophy has been talking about the vibration of the universe for thousands of years. Music is vibration and meter and frequency. There’s a lot of overlap.”

Dolenz enjoyed his time at Fermilab so much that he hung out at the lab’s on-site pub until late in the evening, chatting with scientists. And according to Higgins, who spent the most time with him, he’s hoping to return very soon.

“He’s still looking for the footage he shot in 1970, and plans to donate that to the archive,” she says. “But I told him he’s welcome here anytime.”


Monkee Micky Dolenz stands by a model particle accelerator with Fermilab physicist Herman White and Fermilab Director of Communication Katie Yurkewicz.

Photo by Reidar Hahn, Fermilab

by Andre Salles at March 02, 2017 02:00 PM

March 01, 2017

Jon Butterworth - Life and Physics

February 28, 2017

Symmetrybreaking - Fermilab/SLAC

How to build a universe

Our universe should be a formless fog of energy. Why isn’t it?


According to the known laws of physics, the universe we see today should be dark, empty and quiet. There should be no stars, no planets, no galaxies and no life—just energy and simple particles diffusing further and further into an expanding universe.

And yet, here we are.

Cosmologists calculate that roughly 13.8 billion years ago, our universe was a hunk of thick, hot energy with no boundaries and its own rules. But then, in less than a microsecond, it matured, and the fundamental laws and properties of matter arose from the pandemonium. How did our elegant and intricate universe emerge? 

Illustration by Corinne Mucha

The three conditions

The question “How is it here?” alludes to a conundrum that arose during the development of quantum mechanics. 

In 1928 Paul Dirac combined quantum theory and special relativity to predict the energy of an electron moving near the speed of light. But his equations produced two equally favorable answers: one positive and one negative. Because energy itself cannot be negative, Dirac mused that perhaps the two answers represented the particle’s two possible electric charges. The idea of oppositely charged matter-antimatter pairs was born.

Meanwhile, about six minutes away from Dirac’s office in Cambridge, physicist Patrick Blackett was studying the patterns etched in cloud chambers by cosmic rays. In 1933 he detected 14 tracks that showed a single particle of light colliding with an air molecule and bursting into two new particles. The spiral tracks of these new particles were mirror images of each other, indicating that they were oppositely charged. This was one of the first observations of what Dirac had predicted five years earlier—the birth of an electron-positron pair.

Today it’s well known that matter and antimatter are the ultimate wonder twins. They’re spontaneously born from raw energy as a team of two and vanish in a silent poof of energy when they merge and annihilate. This appearing-disappearing act spawned one of the most fundamental mysteries in the universe: What is engraved in the laws of nature that saved us from the broth of appearing and annihilating particles of matter and antimatter?

“We know this cosmic asymmetry must exist because here we are,” says Jessie Shelton, a theorist at the University of Illinois. “It’s a puzzling imbalance because theory requires three conditions—which all have to be true at once—to create this cosmic preference for matter.”

In the 1960s physicist Andrei Sakharov proposed this set of three conditions that could explain the appearance of our matter-dominated universe. Scientists continue to look for evidence of these conditions today.

Illustration by Corinne Mucha

1. Breaking the tether

The first problem is that matter and antimatter always seem to be born together. Just as Blackett observed in the cloud chambers, uncharged energy transforms into evenly balanced matter-antimatter pairs. Charge is always conserved through any transition. For there to be an imbalance in the amounts of matter and antimatter, there needs to be a process that creates more of one than the other.

“Sakharov’s first criterion essentially says that there must be some new process that converts antimatter into matter, or vice versa,” says Andrew Long, a postdoctoral researcher in cosmology at the University of Chicago. “This is one of the things experimentalists are looking for in the lab.”

In the 1980s, scientists searched for evidence of Sakharov’s first condition by looking for signs of a proton decaying into a positron and two photons. They have yet to find evidence of this modern alchemy, but they continue to search. 

“We think that the early universe could have contained a heavy neutral particle that sometimes decayed into matter and sometimes decayed into antimatter, but not necessarily into both at the same time,” Long says.

Illustration by Corinne Mucha

2. Picking a favorite

Matter and antimatter cannot coexist; they always annihilate when they come into contact. But the creation of just a little more matter than antimatter after the Big Bang—about one part in 10 billion—would leave behind the ingredients needed to build the entire visible universe.
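A toy bit of arithmetic, with made-up round numbers, illustrates why such a tiny excess is enough: nearly everything annihilates into photons, and only the one-in-ten-billion surplus survives.

# Toy illustration of a one-part-in-ten-billion matter excess (round, made-up numbers).
n_anti = 10_000_000_000        # antibaryons in some small patch of the early universe
n_matter = n_anti + 1          # one extra baryon per ten billion

survivors = n_matter - n_anti  # baryons left once every pair has annihilated
photons = 2 * n_anti           # roughly two photons per annihilated pair

print(f"surviving baryons: {survivors}")
print(f"photons per surviving baryon: about {photons / survivors:.1e}")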

How could this come about? Sakharov’s second criterion dictates that the matter-only process outlined in his first criterion must be more efficient than the opposing antimatter process. And specifically, “we need to see a favoritism for the right kinds of matter to agree with astronomical observations,” Shelton says.

Observations of light left over from the early universe and measurements of the first lightweight elements produced after the Big Bang show that the discrepancy must exist in a class of particles called baryons: protons, antiprotons and other particles constructed from quarks.

“These are snapshots of the early universe,” Shelton says. “From these snapshots, we can derive the density and temperature of the early universe and calculate the slight difference between the number of baryons and antibaryons.”

But this slight difference presents a problem. While there are some tiny discrepancies between the behavior of particles and their antiparticle counterparts, these idiosyncrasies are still consistent with the Standard Model and are not enough to explain the origin of the cosmic imbalance nor the universe’s tenderness towards matter.

Illustration by Corinne Mucha

3. Taking a one-way street

In particle physics, any process that runs forward can just as easily run in reverse. A pair of photons can merge and morph into a particle and antiparticle pair. And just as easily, the particle and antiparticle pair can recombine into a pair of photons. This process happens all around us, continually. But because it is cyclical, there is no net gain or loss for a type of matter.

If this were always true, our young universe could have been locked in an infinite loop of creation and destruction. Without something slamming the brakes on these cycles at least for a moment, matter could not have evolved into the complex structures we see today.

“For every stitch that’s knit, there is a simultaneous tug on the thread,” Long says. “We need a way to force the reaction to move forward and not simultaneously run in reverse at the same rate.”

Many cosmologists suspect that the gradual expansion and cooling of the universe was enough to lock matter into being, like a supersaturated sweet tea whose sugar crystals drop to the bottom of the glass as it cools (or in the “freezing” interpretation, like a sweet tea that instantly freezes into ice, locking sugar crystals in place without giving them a chance to dissolve).

Other cosmologists think that the plasma of the early universe may have contained bubbles that helped separate matter and antimatter (and then served as incubators for particles to acquire mass).

Several experiments at CERN are looking for evidence that the universe meets Sakharov’s three conditions. For instance, several precision experiments at CERN’s Antimatter Factory are looking for minuscule differences between the intrinsic characteristics of protons and antiprotons. The LHCb experiment at the Large Hadron Collider is examining the decay patterns of unstable matter and antimatter particles.

Shelton and Long both hope that more research from experiments at the LHC will be the key to building a more complete picture of our early universe.

LHC experiments could discover that the Higgs field served as the lock that halted the early universe’s perpetually evolving and devolving particle soup—especially if the field contained bubbles that froze faster than others, providing cosmic petri dishes in which matter and antimatter could evolve differently, Long says. “More measurements of the Higgs boson and the fundamental properties of matter and antimatter will help us develop better theories and a better understanding of what and where we come from.”

What exactly transpired during the birth of our universe may always remain a bit of an enigma, but we continue to seek new pieces of this formidable puzzle.

by Sarah Charley at February 28, 2017 04:07 PM

February 26, 2017

Clifford V. Johnson - Asymptotia

Sandwich Bag Graffiti

A little while back, toward the end of December last year, I did a long stretch of days where I needed to change my routine a bit to take advantage of a window of time that came up that I could use for pushing forward on the book. I was falling behind and desperately needed to improve my daily production rate of finished art in order to catch up. So, I ended up ditching making a sandwich in the morning, instead leaving very soon after getting up to head to my office. I then stopped taking my sandwich altogether when I ran out of bread and did not make the time in the evening to bake a fresh batch, as I do once a week or so, because I was just coming back home and falling into bed.

The USC catering outlets were all closed that week. This meant that I ended up seeking out a place to buy a sandwich near my office. I found a place [...] Click to continue reading this post

The post Sandwich Bag Graffiti appeared first on Asymptotia.

by Clifford at February 26, 2017 09:23 PM

February 23, 2017

Symmetrybreaking - Fermilab/SLAC

Instrument finds new earthly purpose

Detectors long used to look at the cosmos are now part of X-ray experiments here on Earth.

Sangjun Lee, Jamie Titus and Dennis Nordlund at the Stanford Synchrotron Radiation Lightsource

Modern cosmology experiments—such as the BICEP instruments and the Keck Array in Antarctica—rely on superconducting photon detectors to capture signals from the early universe.

These detectors, called transition edge sensors, are kept at temperatures near absolute zero, only tenths of a kelvin. At this temperature, on the “transition” between the superconducting and normal states, the sensors function like an extremely sensitive thermometer. They are able to detect heat from the cosmic microwave background radiation, the glow emitted after the Big Bang, which is only slightly warmer, at around 3 kelvin.

Scientists also have been experimenting with these same detectors to catch a different form of light, says Dan Swetz, a scientist at the National Institute of Standards and Technology. These sensors also happen to work quite well as extremely sensitive X-ray detectors.

NIST scientists, including Swetz, design and build the thin, superconducting sensors and turn them into pixelated arrays smaller than a penny. They construct an entire X-ray spectrometer system around those arrays, including a cryocooler, a refrigerator that can keep the detectors near absolute zero temperatures.


TES array and cover shown with penny coin for scale.

Dan Schmidt, NIST

Over the past several years, these X-ray spectrometers built at the NIST Boulder MicroFabrication Facility have been installed at three synchrotrons at US Department of Energy national laboratories: the National Synchrotron Light Source at Brookhaven National Laboratory, the Advanced Photon Source at Argonne National Laboratory and most recently at the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory.

Organizing the transition edge sensors into arrays made a more powerful detector. The prototype sensor—built in 1995—consisted of only one pixel.

These early detectors had poor resolution, says physicist Kent Irwin of Stanford University and SLAC. He built the original single-pixel transition edge sensor as a postdoc. Like a camera, the detector can capture greater detail the more pixels it has.

“It’s only now that we’re hitting hundreds of pixels that it’s really getting useful,” Irwin says. “As you keep increasing the pixel count, the science you can do just keeps multiplying. And you start to do things you didn’t even conceive of being possible before.”

Each of the 240 pixels is designed to catch a single photon at a time. These detectors are efficient, says Irwin, collecting photons that may be missed with more conventional detectors.

Spectroscopy experiments at synchrotrons examine subtle features of matter using X-rays. In these types of experiments, an X-ray beam is directed at a sample. Energy from the X-rays temporarily excites the electrons in the sample, and when the electrons return to their lower energy state, they release photons. The photons’ energy is distinctive for a given chemical element and contains detailed information about the electronic structure.

As the transition edge sensor captures these photons, every individual pixel on the detector functions as a high-energy-resolution spectrometer, able to determine the energy of each photon collected.

The researchers combine data from all the pixels and make note of the pattern of detected photons across the entire array and each of their energies. This energy spectrum reveals information about the molecule of interest.
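Schematically, that pooling step can be as simple as concatenating every pixel's recorded photon energies and histogramming the result. The sketch below uses entirely invented numbers and is not the experiments' actual analysis pipeline:

# Schematic sketch: pool per-pixel photon energies from a TES array into one spectrum.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 240

# Pretend each pixel recorded a few photons: mostly a 400 eV fluorescence line
# plus a flat background.
per_pixel_events = [
    np.concatenate([rng.normal(400.0, 1.5, size=rng.integers(3, 10)),
                    rng.uniform(200.0, 800.0, size=rng.integers(0, 3))])
    for _ in range(n_pixels)
]

all_energies = np.concatenate(per_pixel_events)          # combine all pixels
counts, edges = np.histogram(all_energies, bins=300, range=(200.0, 800.0))
print(f"total photons: {all_energies.size}, peak bin near "
      f"{edges[np.argmax(counts)]:.0f} eV")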

These spectrometers are 100 times more sensitive than standard spectrometers, says Dennis Nordlund, SLAC scientist and leader of the transition edge sensor project at SSRL. This allows a look at biological and chemical details at extremely low concentrations using soft (low-energy) X-rays.

“These technology advances mean there are many things we can do now with spectroscopy that were previously out of reach,” Nordlund says. “With this type of sensitivity, this is when it gets really interesting for chemistry.”

Nordlund and his colleagues—Sangjun Lee, a SLAC postdoctoral research fellow, and Jamie Titus, a Stanford University doctoral student (pictured above at SSRL, from left: Lee, Titus and Nordlund)—have already used the transition-edge-sensor spectrometer at SSRL to probe for nitrogen impurities in nanodiamonds and graphene, as well as closely examine the metal centers of proteins and bioenzymes, such as hemoglobin and photosystem II. The project at SLAC was developed with support from the Department of Energy's Laboratory Directed Research and Development program.

The early experiments at Brookhaven looked at bonding and the chemical structure of nitrogen-bearing explosives. With the spectrometer at Argonne, a research team recently took scattering measurements on high-temperature superconducting materials.

“The instruments are very similar from a technical standpoint—same number of sensors, similar resolution and performance,” Swetz says. “But it’s interesting, the labs are all doing different science with the same basic equipment.”

At NIST, Swetz says they’re working to pair these detectors with less intense light sources, which could enable researchers to do X-ray experiments in their personal labs.

There are plans to build transition-edge-sensor spectrometers that will work in the higher energy hard X-ray region, which scientists at Argonne are working on for the next upgrade of Advanced Photon Source.

To complement this, the SLAC and NIST collaboration is engineering spectrometers that will handle the high repetition rate of X-ray laser pulses such as LCLS-II, the next generation of the free-electron X-ray laser at SLAC. This will require faster readout systems. The goal is to create a transition-edge-sensor array with as many as 10,000 pixels that can capture more than 10,000 pulses per second.

Irwin points out that the technology developed for synchrotrons, LCLS-II and future cosmic-microwave-background experiments provides shared benefit.

“The information really keeps bouncing back and forth between X-ray science and cosmology,” Irwin says.

by Amanda Solliday at February 23, 2017 06:00 PM

John Baez - Azimuth

Saving Climate Data (Part 6)

Scott Pruitt, who filed legal challenges against Environmental Protection Agency rules fourteen times, working hand in hand with oil and gas companies, is now head of that agency. What does that mean about the safety of climate data on the EPA’s websites? Here is an inside report:

• Dawn Reeves, EPA preserves Obama-Era website but climate change data doubts remain, InsideEPA.com, 21 February 2017.

For those of us who are backing up climate data, the really important stuff is in red near the bottom.

The EPA has posted a link to an archived version of its website from Jan. 19, the day before President Donald Trump was inaugurated and the agency began removing climate change-related information from its official site, saying the move comes in response to concerns that it would permanently scrub such data.

However, the archived version notes that links to climate and other environmental databases will go to current versions of them—continuing the fears that the Trump EPA will remove or destroy crucial greenhouse gas and other data.

The archived version was put in place and linked to the main page in response to “numerous [Freedom of Information Act (FOIA)] requests regarding historic versions of the EPA website,” says an email to agency staff shared by the press office. “The Agency is making its best reasonable effort to 1) preserve agency records that are the subject of a request; 2) produce requested agency records in the format requested; and 3) post frequently requested agency records in electronic format for public inspection. To meet these goals, EPA has re-posted a snapshot of the EPA website as it existed on January 19, 2017.”

The email adds that the action is similar to the snapshot taken of the Obama White House website.

The archived version of EPA’s website includes a “more information” link that offers more explanation.

For example, it says the page is “not the current EPA website” and that the archive includes “static content, such as webpages and reports in Portable Document Format (PDF), as that content appeared on EPA’s website as of January 19, 2017.”

It cites technical limits for the database exclusions. “For example, many of the links contained on EPA’s website are to databases that are updated with the new information on a regular basis. These databases are not part of the static content that comprises the Web Snapshot.” Searches of the databases from the archive “will take you to the current version of the database,” the agency says.

“In addition, links may have been broken in the website as it appeared” on Jan. 19 and those will remain broken on the snapshot. Links that are no longer active will also appear as broken in the snapshot.

“Finally, certain extremely large collections of content… were not included in the Snapshot due to their size” such as AirNow images, radiation network graphs, historic air technology transfer network information, and EPA’s searchable news releases.

‘Smart’ Move

One source urging the preservation of the data says the snapshot appears to be a “smart” move on EPA’s behalf, given the FOIA requests it has received, and notes that even though other groups like NextGen Climate and scientists have been working to capture EPA’s online information, having it on EPA’s site makes it official.

But it could also be a signal that big changes are coming to the official Trump EPA site, and it is unclear how long the agency will maintain the archived version.

The source says while it is disappointing that the archive may signal the imminent removal of EPA’s climate site, “at least they are trying to accommodate public concerns” to preserve the information.

A second source adds that while it is good that EPA is seeking “to address the widespread concern” that the information will be removed by an administration that does not believe in human-caused climate change, “on the other hand, it doesn’t address the primary concern of the data. It is snapshots of the web text.” Also, information “not included,” such as climate databases, is what is difficult to capture by outside groups and is what really must be preserved.

“If they take [information] down” that groups have been trying to preserve, then the underlying concern about access to data remains. “Web crawlers and programs can do things that are easy,” such as taking snapshots of text, “but getting the data inside the database is much more challenging,” the source says.
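
As a rough illustration of that distinction, here is a minimal sketch (hypothetical; not EPA’s, the Sierra Club’s, or any archiving group’s actual tooling). The URL and endpoint below are placeholders:

    # Static pages are easy to preserve: one request captures the text.
    import requests

    PAGE_URL = "https://example.com/"  # placeholder for an agency web page
    html = requests.get(PAGE_URL, timeout=30).text
    with open("snapshot.html", "w", encoding="utf-8") as f:
        f.write(html)

    # Query-driven databases are not: their records only appear in response
    # to specific queries, so a page snapshot never contains them. Preserving
    # them means bulk exports or paging through an API record by record,
    # e.g. (endpoint is hypothetical):
    #   for page in range(1, n_pages + 1):
    #       requests.get("https://example.com/api/records", params={"page": page})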

The first source notes that EPA’s searchable databases, such as those maintained by its Clean Air Markets Division, are used by the public “all the time.”

The agency’s Office of General Counsel (OGC) Jan. 25 began a review of the implications of taking down the climate page—a planned wholesale removal that was temporarily suspended to allow for the OGC review.

But EPA did remove some specific climate information, including links to the Clean Power Plan and references to President Barack Obama’s Climate Action Plan. Inside EPA captured this screenshot of the “What EPA Is Doing” page regarding climate change. Those links are missing on the Trump EPA site. The archive includes the same version of the page as captured by our screenshot.

Inside EPA first reported the plans to take down the climate information on Jan. 17.

After the OGC investigation began, a source close to the Trump administration said Jan. 31 that climate “propaganda” would be taken down from the EPA site, but that the agency is not expected to remove databases on GHG emissions or climate science. “Eventually… the propaganda will get removed…. Most of what is there is not data. Most of what is there is interpretation.”

The Sierra Club and Environmental Defense Fund both filed FOIA requests asking the agency to preserve its climate data, while attorneys representing youth plaintiffs in a federal climate change lawsuit against the government have also asked the Department of Justice to ensure the data related to its claims is preserved.

The Azimuth Climate Data Backup Project and other groups are making copies of actual databases, not just the visible portions of websites.


by John Baez at February 23, 2017 05:22 PM

February 21, 2017

Symmetrybreaking - Fermilab/SLAC

Mobile Neutrino Lab makes its debut

The Mystery Machine for particles hits the road.


It’s not as flashy as Scooby Doo’s Mystery Machine, but scientists at Virginia Tech hope that their new vehicle will help solve mysteries about a ghost-like phenomenon: neutrinos.

The Mobile Neutrino Lab is a trailer built to contain and transport a 176-pound neutrino detector named MiniCHANDLER (Carbon Hydrogen AntiNeutrino Detector with a Lithium Enhanced Raghavan-optical-lattice). When it begins operations in mid-April, MiniCHANDLER will make history as the first mobile neutrino detector in the US.

“Our main purpose is just to see neutrinos and measure the signal to noise ratio,” says Jon Link, a member of the experiment and a professor of physics at Virginia Tech’s Center for Neutrino Physics. “We just want to prove the detector works.”

Neutrinos are fundamental particles with no electric charge, a property that makes them difficult to detect. These elusive particles have confounded scientists on several fronts for more than 60 years. MiniCHANDLER is specifically designed to detect neutrinos' antimatter counterparts, antineutrinos, produced in nuclear reactors, which are prolific sources of the tiny particles.

Fission in the core of a nuclear reactor splits uranium atoms, and the fission products themselves undergo beta decay, a process that emits an electron and an electron antineutrino. Other, larger detectors such as Daya Bay have capitalized on this abundance to measure neutrino properties.

MiniCHANDLER will serve as a prototype for future mobile neutrino experiments up to 1 ton in size.

Link and his colleagues hope MiniCHANDLER and its future counterparts will find answers to questions about sterile neutrinos, an undiscovered, theoretical kind of neutrino and a candidate for dark matter. The detector could also have applications for national security by serving as a way to keep tabs on material inside of nuclear reactors.

MiniCHANDLER echoes a similar mobile detector concept from a few years ago. In 2014, a Japanese team published results from another mobile neutrino detector, but their data did not meet the threshold for statistical significance. Detector operations were halted after all reactors in Japan were shut down for safety inspections.

“We can monitor the status from outside of the reactor buildings thanks to [a] neutrino’s strong penetration power,” Shugo Oguri, a scientist who worked on the Japanese team, wrote in an email.

Link and his colleagues believe their design is an improvement, and the hope is that MiniCHANDLER will be able to better reject background events and successfully detect neutrinos.

Neutrinos, where are you?

To detect neutrinos, which are abundant but interact very rarely with matter, physicists typically use huge structures such as Super-Kamiokande, a neutrino detector in Japan that contains 50,000 tons of ultra-pure water. Experiments are also often placed far underground to block out signals from other particles that are prevalent on Earth’s surface.

With its small size and aboveground location, MiniCHANDLER subverts both of these norms.

The detector uses solid scintillator technology, which will allow it to record about 100 antineutrino interactions per day. This interaction rate is less than the rate at large detectors, but MiniCHANDLER makes up for this with its precise tracking of antineutrinos.

Small plastic cubes pinpoint where in MiniCHANDLER an antineutrino interacts by detecting light from the interaction. However, the same kind of light signal can also come from other passing particles like cosmic rays. To distinguish between the antineutrino and the riffraff, Link and his colleagues look for multiple signals to confirm the presence of an antineutrino.

Those signs come from a process called inverse beta decay, which occurs when an antineutrino collides with a proton, producing a positron and a neutron. The positron generates the first, prompt flash of light; the neutron is slower and is picked up a moment later as a secondary signal that confirms the antineutrino interaction.
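
Written out (a standard textbook expression, not quoted from the article), the reaction is

    \bar{\nu}_e + p \rightarrow e^{+} + n .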

“[MiniCHANDLER] is going to sit on the surface; it's not shielded well at all. So it's going to have a lot of background,” Link says. “Inverse beta decay gives you a way of rejecting the background by identifying the two-part event.”

Monitoring the reactors

Scientists could find use for a mobile neutrino detector beyond studying reactor neutrinos. They could also use the detector to measure properties of the nuclear reactor itself.

A mobile neutrino detector could be used to determine whether a reactor is in use, Oguri says. “Detection unambiguously means the reactors are in operation—nobody can cheat the status.”

The detector could also be used to determine whether material from a reactor has been repurposed to produce nuclear weapons. Plutonium, an element used in the process of making weapons-grade nuclear material, produces 60 percent fewer detectable neutrinos than uranium, the primary component in a reactor core.

“We could potentially tell whether or not the reactor core has the right amount of plutonium in it,” Link says.
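
To make that monitoring logic concrete, here is a minimal sketch (illustrative only; the 0.4 relative yield is a loose stand-in for the “60 percent fewer” figure above, and the function name is made up):

    # Detected antineutrino rate, relative to a pure-uranium core, as the
    # plutonium fraction of the fissioning fuel grows.
    def relative_rate(pu_fraction, pu_yield=0.4, u_yield=1.0):
        return pu_fraction * pu_yield + (1 - pu_fraction) * u_yield

    for f in (0.0, 0.2, 0.4):
        print(f"Pu fraction {f:.0%}: detected rate = {relative_rate(f):.2f} of pure-U rate")

A detection rate lower than expected for fresh uranium fuel points to plutonium building up in the core, which is what makes the measurement useful as a non-invasive check.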

Using a neutrino detector would be a non-invasive way to track the material; other methods of testing nuclear reactors can be time-consuming and disruptive to the reactor’s processes.

But for now, Link just wants MiniCHANDLER to achieve a simple—yet groundbreaking—goal: Get the mobile neutrino lab running.

by Daniel Garisto at February 21, 2017 02:00 PM

February 18, 2017

John Baez - Azimuth

Azimuth Backup Project (Part 4)

The Azimuth Climate Data Backup Project is going well! Our Kickstarter campaign ended on January 31st and the money has recently reached us. Our original goal was $5000. We got $20,427 of donations, and after Kickstarter took its cut we received $18,590.96.

Next time I’ll tell you what our project has actually been doing. This time I just want to give a huge “thank you!” to all 627 people who contributed money on Kickstarter!

I sent out thank you notes to everyone, updating them on our progress and asking if they wanted their names listed. The blanks in the following list represent people who either didn’t reply, didn’t want their names listed, or backed out and decided not to give money. I’ll list people in chronological order: first contributors first.

Only 12 people backed out; the vast majority of blanks on this list are people who haven’t replied to my email. I noticed some interesting but obvious patterns. For example, people who contributed later are less likely to have answered my email yet—I’ll update this list later. People who contributed more money were more likely to answer my email.

The magnitude of contributions ranged from $2000 to $1. A few people offered to help in other ways. The response was international—this was really heartwarming! People from the US were more likely than others to ask not to be listed.

But instead of continuing to list statistical patterns, let me just thank everyone who contributed.


Daniel Estrada
Ahmed Amer
Saeed Masroor
Jodi Kaplan
John Wehrle
Bob Calder
Andrea Borgia
L Gardner

Uche Eke
Keith Warner
Dean Kalahan
James Benson
Dianne Hackborn

Walter Hahn
Thomas Savarino
Noah Friedman
Eric Willisson
Jeffrey Gilmore
John Bennett
Glenn McDavid

Brian Turner

Peter Bagaric

Martin Dahl Nielsen
Broc Stenman

Gabriel Scherer
Roice Nelson
Felipe Pait
Kenneth Hertz

Luis Bruno


Andrew Lottmann
Alex Morse

Mads Bach Villadsen
Noam Zeilberger

Buffy Lyon

Josh Wilcox

Danny Borg

Krishna Bhogaonker
Harald Tveit Alvestrand


Tarek A. Hijaz, MD
Jouni Pohjola
Chavdar Petkov
Markus Jöbstl
Bjørn Borud


Sarah G

William Straub

Frank Harper
Carsten Führmann
Rick Angel
Drew Armstrong

Jesimpson

Valeria de Paiva
Ron Prater
David Tanzer

Rafael Laguna
Miguel Esteves dos Santos 
Sophie Dennison-Gibby




Randy Drexler
Peter Haggstrom


Jerzy Michał Pawlak
Santini Basra
Jenny Meyer


John Iskra

Bruce Jones
Māris Ozols
Everett Rubel



Mike D
Manik Uppal
Todd Trimble

Federer Fanatic

Forrest Samuel, Harmos Consulting








Annie Wynn
Norman and Marcia Dresner



Daniel Mattingly
James W. Crosby








Jennifer Booth
Greg Randolph





Dave and Karen Deeter

Sarah Truebe









Tieg Zaharia
Jeffrey Salfen
Birian Abelson

Logan McDonald

Brian Truebe
Jon Leland


Nicole



Sarah Lim







James Turnbull




John Huerta
Katie Mandel Bruce
Bethany Summer




Heather Tilert

Anna C. Gladstone



Naom Hart
Aaron Riley

Giampiero Campa

Julie A. Sylvia


Pace Willisson









Bangskij










Peter Herschberg

Alaistair Farrugia


Conor Hennessy




Stephanie Mohr




Torinthiel


Lincoln Muri 
Anet Ferwerda 


Hanna





Michelle Lee Guiney

Ben Doherty
Trace Hagemann







Ryan Mannion


Penni and Terry O'Hearn



Brian Bassham
Caitlin Murphy
John Verran






Susan


Alexander Hawson
Fabrizio Mafessoni
Anita Phagan
Nicolas Acuña
Niklas Brunberg

Adam Luptak
V. Lazaro Zamora






Branford Werner
Niklas Starck Westerberg
Luca Zenti and Marta Veneziano 


Ilja Preuß
Christopher Flint

George Read 
Courtney Leigh

Katharina Spoerri


Daniel Risse



Hanna
Charles-Etienne Jamme
rhackman41



Jeff Leggett

RKBookman


Aaron Paul
Mike Metzler


Patrick Leiser

Melinda

Ryan Vaughn
Kent Crispin

Michael Teague

Ben



Fabian Bach
Steven Canning


Betsy McCall

John Rees

Mary Peters

Shane Claridge
Thomas Negovan
Tom Grace
Justin Jones


Jason Mitchell




Josh Weber
Rebecca Lynne Hanginger
Kirby


Dawn Conniff


Michael T. Astolfi



Kristeva

Erik
Keith Uber

Elaine Mazerolle
Matthieu Walraet

Linda Penfold




Lujia Liu



Keith



Samar Tareem


Henrik Almén
Michael Deakin 
Rutger Ockhorst

Erin Bassett
James Crook



Junior Eluhu
Dan Laufer
Carl
Robert Solovay






Silica Magazine







Leonard Saers
Alfredo Arroyo García



Larry Yu













John Behemonth


Eric Humphrey


Svein Halvor Halvorsen



Karim Issa

Øystein Risan Borgersen
David Anderson Bell III











Ole-Morten Duesend







Adam North and Gabrielle Falquero

Robert Biegler 


Qu Wenhao






Steffen Dittmar




Shanna Germain






Adam Blinkinsop







John WS Marvin (Dread Unicorn Games)


Bill Carter
Darth Chronis 



Lawrence Stewart

Gareth Hodges

Colin Backhurst
Christopher Metzger

Rachel Gumper


Mariah Thompson

Falk Alexander Glade
Johnathan Salter




Maggie Unkefer
Shawna Maryanovich






Wilhelm Fitzpatrick
Dylan “ExoByte” Mayo
Lynda Lee




Scott Carpenter



Charles D, Payet
Vince Rostkowski


Tim Brown
Raven Daegmorgan
Zak Brueckner


Christian Page

Adi Shavit


Steven Greenberg
Chuck Lunney



Adriel Bustamente

Natasha Anicich



Bram De Bie
Edward L






Gray Detrick
Robert


Sarah Russell

Sam Leavin

Abilash Pulicken

Isabel Olondriz
James Pierce
James Morrison


April Daniels



José Tremblay Champagne


Chris Edmonds

Hans & Maria Cummings
Bart Gasiewiski


Andy Chamard



Andrew Jackson

Christopher Wright

Crystal Collins

ichimonji10


Alan Stern
Alison W


Dag Henrik Bråtane





Martin Nilsson


William Schrade


by John Baez at February 18, 2017 07:27 PM

February 17, 2017

Symmetrybreaking - Fermilab/SLAC

#AskSymmetry Twitter chat with Anne Schukraft

See Fermilab physicist Anne Schukraft's answers to readers’ questions about neutrinos.

[Image: Scientist Anne Schukraft surrounded by Harry Potter-inspired imagery.]
[View the story “#AskSymmetry Twitter Chat with Anne Schukraft 2/17/17” on Storify: http://storify.com/Symmetry/asksymmetry-twitter-chat-with-anne-schukraft]

February 17, 2017 06:29 PM

February 16, 2017

Symmetrybreaking - Fermilab/SLAC

Wizardly neutrinos

Why can a neutrino pass through solid objects?


Physicist Anne Schukraft of Fermi National Accelerator Laboratory explains.

[Embedded video (YouTube ID: 5SniR5U6YTU)]

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Anne Schukraft's answers to readers' questions about neutrinos on Twitter here.​

by Lauren Biron at February 16, 2017 10:42 PM

February 14, 2017

Symmetrybreaking - Fermilab/SLAC

LHCb observes rare decay

Standard Model predictions align with the LHCb experiment’s observation of an uncommon decay.

The Standard Model is holding strong after a new precision measurement of a rare subatomic process.

For the first time, the LHCb experiment at CERN has independently observed the decay of the Bs0 particle—a heavy composite particle consisting of a bottom antiquark and a strange quark—into two muons. The LHCb experiment co-discovered this rare process in 2015 after combining results with the CMS experiment.

Theorists predicted that this particular decay would occur only a few times out of a billion.

“Our measurement is slightly lower than predictions, but well within the range of experimental uncertainty and fully compatible with our models,” says Flavio Archilli, one of the co-leaders of this analysis and a postdoc at Nikhef National Institute for Subatomic Physics. “The theoretical predictions are very accurate, so now we want to improve our precision to see if our measurement is sitting right on top of the expected value or slightly outside, which could be an indication of new physics.”

The LHCb experiment examines the properties and decay patterns of particles to search for cracks in the Standard Model, our best description of the fundamental particles and forces. Any deviations from the Standard Model’s predictions could be evidence of new physics at play.

Supersymmetry, for example, is a popular theory that adds a host of new particles to the Standard Model and ameliorates many of its shortcomings—such as mathematical imbalances between how the different types of particles contribute to subatomic interactions.

“We love this decay because it is one of the most promising places to search for any new effects of supersymmetry,” Archilli says. “Scientists searched for this decay for more than 30 years and now we finally have the first single-experiment observation.”

This new measurement by the LHCb experiment combines data taken from Run 1 and Run 2 of the Large Hadron Collider and employs more refined analysis techniques, making it the most precise measurement of this process to date. In addition to measuring the rate of this rare decay, LHCb researchers also measured how long the Bs0 particle lives before it transforms into the two muons—another measurement that agrees with the Standard Model’s predictions.

“It's gratifying to have achieved these results,” says Universita di Pisa scientist Matteo Rama, one of the co-leaders of this analysis. "They reward the efforts made to improve the analysis techniques, to exploit our data even further. We look forward to updating the measurement with more data with the hope to observe, one day, significant deviations from the Standard Model predictions."


Event display of a typical Bs0 decay into two muons. The two muon tracks from the Bs0 decay are seen as a pair of green tracks traversing the whole detector.

LHCb collaboration

by Sarah Charley at February 14, 2017 07:39 PM

Symmetrybreaking - Fermilab/SLAC

Physics love poems

Advance your romance with science.


This Valentine’s Day, we challenged our readers to send us physics-inspired love poems. You answered the call: We received dozens of submissions—in four different languages! You can find some of our favorite entries below. 

But first, as a warm-up, enjoy a video of real scientists at Fermi National Accelerator Laboratory reciting physics-related Valentine’s Day haiku:

[Embedded video (YouTube ID: lqoFbSyNDF8)]

Or read the haiku for yourself:

Reader poems

Thanks to all of our readers who submitted poems! In no particular order, here are some of our favorites:


For now, I’m seeing other quarks, some charming and some strange
But when we meet, I know we will all physics rearrange
For you, stop squark, will soon reveal the standard model as deficient
To me, you are my superpartner; the only one sufficient.
Without you, I just spin one-half of what our world could be
But you and I will couple soon in perfect symmetry.
All fundamental forces, we are meant to unify
In brilliant theory only love itself could clarify
Now though I may seem hypercharged and strongly interactive,
I must show my true colors if I hope to be attractive.
Without you, I just don’t feel really quite just like a top
But I’m confident I will yet find love in the name of stop.

- Jared Sagoff


The gravity that
Pulls my soul to you dilates:
Your beauty slows time.

- Philip Michaels


A Valentine for Two Quarks

Some people wish for one true love,
like dear old Ma and Pa.
That lifestyle’s not for us; we like
our quark ménage à trois.

You see, some like a threesome,
and I love both of you.
No green quark would be seen without
a red quark and a blue.

The sea is full of other quarks,
but darlings, I don’t heed ‘em.
You must believe I don’t exploit
my asymptotic freedom.

And when you pull away from me,
I just can’t take the stress.
My attraction just grows stronger
(coefficient alpha-s).

With you, my life is colourless;
you bring stability.
Without you, I’m unstable,
so I need you, Q.C.D.

I love our quirky, quarky love.
My Valentines, let’s carry on
exchanging gluons wantonly,
and make a little baryon.

- Cheryl Patrick


Will it work this time?
The wavefunction collapses.
Single once again.

- Anonymous


Our hearts were once close; two nucleons held tight
By a force that was strong, and a love that burned bright.
But, that force became weaker as the days faded ‘way,
And with it, our bond began to decay.

I’ve realized that opposites don’t always attract
(Otherwise, the atom would be more compact),
And opposites we were, our differences great,
Continuing this way, we’d annihilate.

In truth, I’ve quite had it with your duality,
Your warm disposition; cold mentality.
We must be entangled - what else can explain
How, though we are distant, you still cause me pain?

We’ve exchanged mediators, but our half-lives were short,
All data suggests we should promptly abort.
Our collision is over, and signatures thereof
Have vanished, leaving us not a quantum of love.

- Peter Voznyuk


Love ignited light,
Eternal and everywhere:
A Cosmic Background

- Akshay Jogoo


Like energy dear
our love will last forever,
theoretically

- Lauren Brennan



by Kathryn Jepsen at February 14, 2017 04:38 PM

February 13, 2017

Symmetrybreaking - Fermilab/SLAC

LZ dark matter detector on fast track

Construction has officially launched for the LZ next-generation dark matter experiment.

Scientists in a cleanroom assemble the prototype for the LZ detector’s core.

The race is on to build the most sensitive US-based experiment designed to directly detect dark matter particles. Department of Energy officials have formally approved a key construction milestone that will propel the project toward its April 2020 goal for completion.

The LUX-ZEPLIN experiment, which will be built nearly a mile underground at the Sanford Underground Research Facility in Lead, South Dakota, is considered one of the best bets yet to determine whether theorized dark matter particles known as WIMPs (weakly interacting massive particles) actually exist. 

The fast-moving schedule for LZ will help the US stay competitive with similar next-gen dark matter direct-detection experiments planned in Italy and China.

On February 9, the project passed a DOE review and approval stage known as Critical Decision 3, which accepts the final design and formally launches construction.

“We will try to go as fast as we can to have everything completed by April 2020,” says Murdock “Gil” Gilchriese, LZ project director and a physicist at Lawrence Berkeley National Laboratory, the lead lab for the project. “We got a very strong endorsement to go fast and to be first.” The LZ collaboration now has about 220 participating scientists and engineers who represent 38 institutions around the globe.

The nature of dark matter—which physicists describe as the invisible component or so-called “missing mass” in the universe—has eluded scientists since its existence was deduced through calculations by Swiss astronomer Fritz Zwicky in 1933.

The quest to find out what dark matter is made of, or to learn whether it can be explained by tweaking the known laws of physics in new ways, is considered one of the most pressing questions in particle physics.

Successive generations of experiments have evolved to provide extreme sensitivity in the search that will at least rule out some of the likely candidates and hiding spots for dark matter, or may lead to a discovery.

LZ will be at least 50 times more sensitive to finding signals from dark matter particles than its predecessor, the Large Underground Xenon experiment, which was removed from Sanford Lab last year to make way for LZ. The new experiment will use 10 metric tons of ultra-purified liquid xenon to tease out possible dark matter signals. 

“The science is highly compelling, so it’s being pursued by physicists all over the world,” says Carter Hall, the spokesperson for the LZ collaboration and an associate professor of physics at the University of Maryland. “It's a friendly and healthy competition, with a major discovery possibly at stake.”

A planned upgrade to the current XENON1T experiment at National Institute for Nuclear Physics’ Gran Sasso Laboratory in Italy, and China's plans to advance the work on PandaX-II, are also slated to be leading-edge underground experiments that will use liquid xenon as the medium to seek out a dark matter signal. Both of these projects are expected to have a similar schedule and scale to LZ, though LZ participants are aiming to achieve a higher sensitivity to dark matter than these other contenders.

Hall notes that while WIMPs are a primary target for LZ and its competitors, LZ’s explorations into uncharted territory could lead to a variety of surprising discoveries. “People are developing all sorts of models to explain dark matter,” he says. “LZ is optimized to observe a heavy WIMP, but it’s sensitive to some less-conventional scenarios as well. It can also search for other exotic particles and rare processes.”

LZ is designed so that if a dark matter particle collides with a xenon atom, it will produce a prompt flash of light followed by a second flash of light when the electrons produced in the liquid xenon chamber drift to its top. The light pulses, picked up by a series of about 500 light-amplifying tubes lining the massive tank—over four times more than were installed in LUX—will carry the telltale fingerprint of the particles that created them.
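
One consequence of this two-flash design is worth spelling out (a standard feature of such two-phase xenon detectors, not stated in the article): the freed electrons drift upward at a roughly constant speed, so the delay between the two flashes measures the depth of the interaction,

    z \approx v_{\mathrm{drift}} \, \Delta t ,

and combining that depth with the pattern of light at the top of the tank gives a three-dimensional position for each event, which helps reject background events near the detector walls.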


When a theorized dark matter particle known as a WIMP collides with a xenon atom, the xenon atom emits a flash of light (gold) and electrons. The flash of light is detected at the top and bottom of the liquid xenon chamber. An electric field pushes the electrons to the top of the chamber, where they generate a second flash of light (red).

SLAC National Accelerator Laboratory

Daniel Akerib, Thomas Shutt and Maria Elena Monzani are leading the LZ team at SLAC National Accelerator Laboratory. The SLAC effort includes a program to purify xenon for LZ by removing krypton, an element that is typically found in trace amounts with xenon after standard refinement processes. “We have already demonstrated the purification required for LZ and are now working on ways to further purify the xenon to extend the science reach of LZ,” Akerib says.

SLAC and Berkeley Lab collaborators are also developing and testing hand-woven wire grids that draw out electrical signals produced by particle interactions in the liquid xenon tank. Full-size prototypes will be operated later this year at a SLAC test platform. “These tests are important to ensure that the grids don't produce low-level electrical discharge when operated at high voltage, since the discharge could swamp a faint signal from dark matter,” Shutt says. 

Hugh Lippincott, a Wilson Fellow at Fermi National Accelerator Laboratory and the physics coordinator for the LZ collaboration, says, “Alongside the effort to get the detector built and taking data as fast as we can, we’re also building up our simulation and data analysis tools so that we can understand what we’ll see when the detector turns on. We want to be ready for physics as soon as the first flash of light appears in the xenon.” Fermilab is responsible for implementing key parts of the critical system that handles, purifies, and cools the xenon.

All of the components for LZ are painstakingly measured for naturally occurring radiation levels to account for possible false signals coming from the components themselves. A dust-filtering cleanroom is being prepared for LZ's assembly and a radon-reduction building is under construction at the South Dakota site—radon is a naturally occurring radioactive gas that could interfere with dark matter detection. These steps are necessary to remove background signals as much as possible.

The vessels that will surround the liquid xenon, which are the responsibility of the UK participants of the collaboration, are now being assembled in Italy. They will be built with the world's most ultra-pure titanium to further reduce background noise.

To ensure unwanted particles are not misread as dark matter signals, LZ's liquid xenon chamber will be surrounded by another liquid-filled tank and a separate array of photomultiplier tubes that can measure other particles and largely veto false signals. Brookhaven National Laboratory is handling the production of another very pure liquid, known as a scintillator fluid, that will go into this tank.

The cleanrooms will be in place by June, Gilchriese says, and preparation of the cavern where LZ will be housed is underway at Sanford Lab. Onsite assembly and installation will begin in 2018, he adds, and all of the xenon needed for the project has either already been delivered or is under contract. Xenon gas, which is costly to produce, is used in lighting, medical imaging and anesthesia, space-vehicle propulsion systems, and the electronics industry.

“South Dakota is proud to host the LZ experiment at SURF and to contribute 80 percent of the xenon for LZ,” says Mike Headley, executive director of the South Dakota Science and Technology Authority (SDSTA) that oversees the facility. “Our facility work is underway and we’re on track to support LZ’s timeline.”

UK scientists, who make up about one-quarter of the LZ collaboration, are contributing hardware for most subsystems. Henrique Araújo, from Imperial College London, says, “We are looking forward to seeing everything come together after a long period of design and planning.”

Kelly Hanzel, LZ project manager and a Berkeley Lab mechanical engineer, adds, “We have an excellent collaboration and team of engineers who are dedicated to the science and success of the project.” The latest approval milestone, she says, “is probably the most significant step so far,” as it provides for the purchase of most of the major components in LZ’s supporting systems.

Major support for LZ comes from the DOE Office of Science’s Office of High Energy Physics, the South Dakota Science and Technology Authority, the UK’s Science & Technology Facilities Council, and collaboration members in South Korea and Portugal.

Editor's note: This article is based on a press release published by Berkeley Lab.

by Glenn Roberts Jr., Berkeley Lab at February 13, 2017 04:59 PM

February 10, 2017

Symmetrybreaking - Fermilab/SLAC

Physics love poem challenge

Think you can do better than the Symmetry staff? Send us your poems!

Illustration of two particles wearing space helmets meeting in a cloud of dark matter

Has the love of your life fallen for particle physics? Let the Symmetry team help you reach their heart—with haiku.

On Valentine’s Day, we will publish a collection of physics-related love poems written by Symmetry staff and—if you are so inclined—by readers like you!

Send your poems (haiku format optional) to letters@symmetrymagazine.org by Monday, February 13, at 10 a.m. Central. If we really like yours, we may send you a prize.

For inspiration, consider the following:

Poem: A strong force binds us: / electromagnetic love. / You're fundamental.
Artwork by Sandbox Studio, Chicago
Poem: Like regular love, / But more massive -- Our love is / Supersymmetric
Artwork by Sandbox Studio, Chicago
Poem: A quantum of love / Or more? The principle here / Is uncertainty.
Artwork by Sandbox Studio, Chicago

by Kathryn Jepsen at February 10, 2017 07:39 PM

February 07, 2017

Symmetrybreaking - Fermilab/SLAC

What ended the dark ages of the universe?

New experiments will help astronomers uncover the sources that helped make the universe transparent.


When we peer through our telescopes into the cosmos, we can see stars and galaxies reaching back billions of years. This is possible only because the intergalactic medium we’re looking through is transparent. This was not always the case. 

Around 380,000 years after the Big Bang came recombination, when the hot mass of particles that made up the universe cooled enough for electrons to pair with protons, forming neutral hydrogen. This brought on the dark ages, during which the neutral gas in the intergalactic medium absorbed most of the high-energy photons around it, making the universe opaque to these wavelengths of light. 

Then, a few hundred million years later, new sources of energetic photons appeared, stripping hydrogen atoms of their electrons and returning them to their ionized state, ultimately allowing light to easily travel through the intergalactic medium. After this era of reionization was complete, the universe was fully transparent once again. 

Physicists are using a variety of methods to search for the sources of reionization, and finding them will provide insight into the first galaxies, the structure of the early universe and possibly even the properties of dark matter. 

Energetic sources

Current research suggests that most—if not all—of the ionizing photons came from the formation of the first stars and galaxies. “The reionization process is basically a competition between the rate at which stars produce ionizing radiation and the recombination rate in the intergalactic medium,” says Brant Robertson, a theoretical astrophysicist at the University of California, Santa Cruz. 

However, astronomers have yet to find these early galaxies, leaving room for other potential sources. The first stars alone may not have been enough. “There are undoubtedly other contributions, but we argue about how important those contributions are,” Robertson says. 

Active galactic nuclei, or AGN, could have been a source of reionization. AGN are luminous bodies, such as quasars, that are powered by black holes and release ultraviolet radiation and X-rays. However, scientists don’t yet know how abundant these objects were in the early universe. 

Another, more exotic possibility, is dark matter annihilation. In some models of dark matter, particles collide with each other, annihilating and producing matter and radiation. “If through this channel or something else we could find evidence for dark matter annihilation, that would be fantastically interesting, because it would immediately give you an estimate of the mass of the dark matter and how strongly it interacts with Standard Model particles,” says Tracy Slatyer, a particle physicist at MIT. 

Dark matter annihilation and AGN may have also indirectly aided reionization by providing extra heat to the universe. 

Probing the cosmic dawn

To test their theories of the course of cosmic reionization, astronomers are probing this epoch in the history of the universe using various methods including telescope observations, something called “21-centimeter cosmology” and probing the cosmic microwave background. 

Astronomers have yet to find evidence of the most likely source of reionization—the earliest stars—but they’re looking. 

By assessing the luminosity of the first galaxies, physicists could estimate how many ionizing photons they could have released. “[To date] there haven't been observations of the actual galaxies that are reionizing the universe—even Hubble can't deliver any of those—but the hope is that the James Webb Space Telescope can,” says John Wise, an astrophysicist at Georgia Tech. 

Some of the most telling information will come from 21-centimeter cosmology, so called because it studies radio waves with a wavelength of 21 centimeters. Neutral hydrogen emits at this wavelength; ionized hydrogen does not. Experiments such as the forthcoming Hydrogen Epoch of Reionization Array will detect neutral hydrogen using radio telescopes tuned to the corresponding frequency. This could provide clinching evidence about the sources of reionization.
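
For reference (a standard relation, not taken from the article), the rest frequency of this hydrogen line follows from

    \nu = c / \lambda \approx (3.00 \times 10^{8}\ \mathrm{m/s}) / (0.211\ \mathrm{m}) \approx 1420\ \mathrm{MHz},

and a signal emitted at redshift z arrives stretched to roughly 1420 MHz / (1 + z), which is how radio telescopes can map neutral hydrogen at different cosmic epochs.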

“The basic idea with 21-centimeter cosmology is to not look at the galaxies themselves, but to try to make direct measurements of the intergalactic medium—the hydrogen between the galaxies,” says Adrian Liu, a Hubble fellow at UC Berkeley. “This actually lets you, in principle, directly see reionization, [by seeing how] it affects the intergalactic medium.”

By locating where the universe is ionized and where it is not, astronomers can create a map of how neutral hydrogen is distributed in the early universe. “If galaxies are doing it, then you would have ionized bubbles [around them]. If it is dark matter—dark matter is everywhere—so you're ionizing everywhere, rather than having bubbles of ionizing gas,” says Steven Furlanetto, a theoretical astrophysicist at the University of California, Los Angeles. 

Physicists can also learn about sources of reionization by studying the cosmic microwave background, or CMB. 

When an atom is ionized, the electron that is released scatters and disrupts the CMB. Physicists can use this information to determine when reionization happened and put constraints on how many photons were needed to complete the process. 

For example, physicists reported last year that data released by the Planck satellite lowered the estimate of how much ionization was caused by sources other than galaxies. “Just because you could potentially explain it with star-forming galaxies, it doesn't mean that something else isn't lurking in the data,” Slatyer says. “We are hopefully going to get much better measurements of the reionization epoch using experiments like the 21-centimeter observations.”

It is still too early to rule out alternative explanations for the sources of reionization, since astronomers are still at the beginning of uncovering this era in the history of our universe, Liu says. “I would say that one of the most fun things about working in this field is that we don't know exactly what happened.”

by Diana Kwon at February 07, 2017 06:00 PM

February 06, 2017

John Baez - Azimuth

Saving Climate Data (Part 5)


There’s a lot going on! Here’s a news roundup. I will separately talk about what the Azimuth Climate Data Backup Project is doing.

I’ll start with the bad news, and then go on to some good news.

Tweaking the EPA website

Scientists are keeping track of how the Trump administration is changing the Environmental Protection Agency website, with before-and-after photos, and analysis:

• Brian Kahn, Behold the “tweaks” Trump has made to the EPA website (so far), Natural Resources Defense Council blog, 3 February 2017.

There’s more about “adaptation” to climate change, and less about how it’s caused by carbon emissions.

All of this would be nothing compared to the new bill to eliminate the EPA, or Myron Ebell’s plan to fire most of the people working there:

• Joe Davidson, Trump transition leader’s goal is two-thirds cut in EPA employees, Washington Post, 30 January 2017.

If you want to keep track of this battle, I recommend getting a 30-day free subscription to this online magazine:

InsideEPA.com.

Taking animal welfare data offline

The Trump team is taking animal-welfare data offline. The US Department of Agriculture will no longer make lab inspection results and violations publicly available, citing privacy concerns:

• Sara Reardon, US government takes animal-welfare data offline, Nature Breaking News, 3 February 2017.

Restricting access to geospatial data

A new bill would prevent the US government from providing access to geospatial data if it helps people understand housing discrimination. It goes like this:

Notwithstanding any other provision of law, no Federal funds may be used to design, build, maintain, utilize, or provide access to a Federal database of geospatial information on community racial disparities or disparities in access to affordable housing.

For more on this bill, and the important ways in which such data has been used, see:

• Abraham Gutman, Scott Burris, and the Temple University Center for Public Health Law Research, Where will data take the Trump administration on housing?, Philly.com, 1 February 2017.

The EDGI fights back

The Environmental Data and Governance Initiative or EDGI is working to archive public environmental data. They’re helping coordinate data rescue events. You can attend one and have fun eating pizza with cool people while saving data:

• 3 February 2017, Portland
• 4 February 2017, New York City
• 10-11 February 2017, Austin Texas
• 11 February 2017, U. C. Berkeley, California
• 18 February 2017, MIT, Cambridge Massachusetts
• 18 February 2017, Haverford Connecticut
• 18-19 February 2017, Washington DC
• 26 February 2017, Twin Cities, Minnesota

Or, work with EDGI to organize your own data rescue event! They provide some online tools to help download data.

I know there will also be another event at UCLA, so the above list is not complete, and it will probably change and grow over time. Keep up-to-date at their site:

Environmental Data and Governance Initiative.

Scientists fight back

The pushback is so big it’s hard to list it all! For now I’ll just quote some of this article:

• Tabitha Powledge, The gag reflex: Trump info shutdowns at US science agencies, especially EPA, 27 January 2017.

THE PUSHBACK FROM SCIENCE HAS BEGUN

Predictably, counter-tweets claiming to come from rebellious employees at the EPA, the Forest Service, the USDA, and NASA sprang up immediately. At The Verge, Rich McCormick says there’s reason to believe these claims may be genuine, although none has yet been verified. A lovely head on this post: “On the internet, nobody knows if you’re a National Park.”

At Hit&Run, Ronald Bailey provides handles for several of these alt tweet streams, which he calls “the revolt of the permanent government.” (That’s a compliment.)

Bailey argues, “with exception perhaps of some minor amount of national security intelligence, there is no good reason that any information, data, studies, and reports that federal agencies produce should be kept from the public and press. In any case, I will be following the Alt_Bureaucracy feeds for a while.”

At NeuroDojo, Zen Faulkes posted on how to demand that scientific societies show some backbone. “Ask yourself: ‘Have my professional societies done anything more political than say, please don’t cut funding?’ Will they fight?” he asked.

Scientists associated with the group 500 Women Scientists donned lab coats and marched in DC as part of the Women’s March on Washington the day after Trump’s Inauguration, Robinson Meyer reported at the Atlantic. A wildlife ecologist from North Carolina told Meyer, “I just can’t believe we’re having to yell, ‘Science is real.’”

Taking a cue from how the Women’s March did its social media organizing, other scientists who want to set up a Washington march of their own have put together a closed Facebook group that claims more than 600,000 members, Kate Sheridan writes at STAT.

The #ScienceMarch Twitter feed says a date for the march will be posted in a few days. [The march will be on 22 April 2017.] The group also plans to release tools to help people interested in local marches coordinate their efforts and avoid duplication.

At The Atlantic, Ed Yong describes the political action committee 314Action. (314=the first three digits of pi.)

Among other political activities, it is holding a webinar on Pi Day—March 14—to explain to scientists how to run for office. Yong calls 314Action the science version of Emily’s List, which helps pro-choice candidates run for office. 314Action says it is ready to connect potential candidate scientists with mentors—and donors.

Other groups may be willing to step in when government agencies wimp out. A few days before the Inauguration, the Centers for Disease Control and Prevention abruptly and with no explanation cancelled a 3-day meeting on the health effects of climate change scheduled for February. Scientists told Ars Technica’s Beth Mole that CDC has a history of running away from politicized issues.

One of the conference organizers from the American Public Health Association was quoted as saying nobody told the organizers to cancel.

I believe it. Just one more example of the chilling effect on global warming. In politics, once the Dear Leader’s wishes are known, some hirelings will rush to gratify them without being asked.

The APHA guy said they simply wanted to head off a potential last-minute cancellation. Yeah, I guess an anticipatory pre-cancellation would do that.

But then—Al Gore to the rescue! He is joining with a number of health groups—including the American Public Health Association—to hold a one-day meeting on the topic Feb 16 at the Carter Center in Atlanta, CDC’s home base. Vox’s Julia Belluz reports that it is not clear whether CDC officials will be part of the Gore rescue event.

The Sierra Club fights back

The Sierra Club, of which I’m a proud member, is using the Freedom of Information Act or FOIA to battle or at least slow the deletion of government databases. They wisely started even before Trump took power:

• Jennifer A Dlouhy, Fearing Trump data purge, environmentalists push to get records, BloombergMarkets, 13 January 2017.

Here’s how the strategy works:

U.S. government scientists frantically copying climate data they fear will disappear under the Trump administration may get extra time to safeguard the information, courtesy of a novel legal bid by the Sierra Club.

The environmental group is turning to open records requests to protect the resources and keep them from being deleted or made inaccessible, beginning with information housed at the Environmental Protection Agency and the Department of Energy. On Thursday [January 9th], the organization filed Freedom of Information Act requests asking those agencies to turn over a slew of records, including data on greenhouse gas emissions, traditional air pollution and power plants.

The rationale is simple: Federal laws and regulations generally block government agencies from destroying files that are being considered for release. Even if the Sierra Club’s FOIA requests are later rejected, the record-seeking alone could prevent files from being zapped quickly. And if the records are released, they could be stored independently on non-government computer servers, accessible even if other versions go offline.


by John Baez at February 06, 2017 02:15 AM

February 02, 2017

Symmetrybreaking - Fermilab/SLAC

Road trip science

The Escaramujo Project delivered detector technology by van to eight universities in Latin America.

Group photo of students who participated in the Escaramujo Project

Professors and students of physics in Latin America have much to offer the world of physics. But for those interested in designing and building the complex experiments needed to gather physics data, hands-on experimentation in much of Central and South America has been lacking. It was that gap that something called the Escaramujo Project aimed to fill by bringing basic components to students who could then assemble them into fully functional detectors.

“It was something completely new,” says Luis Rodolfo Pérez Sánchez, a student at the Universidad Autónoma de Chiapas, Mexico, who is writing his thesis based on measurements taken with the detector. “Until now, there was no device at the university where one could work directly with their hands.”

Each group of students built a detector, which they used to measure cosmic-ray muons (particles coming from space). But they did more than that. They used a Linux open-source computer operating system for the first time, calibrated the equipment, plotted data using the software ROOT and became part of an international community. The students used their detectors to participate in International Cosmic Day, an annual event where scientists around the world measure cosmic rays and share their data.

The Escaramujo Project is led by Federico Izraelevitch, who worked at Fermi National Accelerator Laboratory near Chicago during its planning stages and is now a professor at Instituto Dan Beninson in Argentina. During the project, Izraelevitch and his wife, Eleonora, traveled with three canine companions on a road trip from Chicago to Buenos Aires, stopping to teach workshops in Mexico, Guatemala, Costa Rica, Colombia, Ecuador, Peru and Bolivia. Many nights found them in spots with no tourist lodging or even places to camp with their van.

“People received us with a smile and gave us a cup of coffee, or food, or whatever we needed at the time,” Izraelevitch says. “People are amazing.” 


Federico and Eleonora Izraelevitch traveled by van from Chicago to Buenos Aires.

Escaramujo Project

In many locations, students took their detector on a field trip shortly after assembling it. The group in Pasto, Colombia, turned theirs into a muon telescope and carted it to the nearby Galeras volcano, where a kind local lent them a power supply to get things running. They studied an effect of the volcano: muon attenuation, or weakening of the muon signal. Students in La Paz, Bolivia, placed the detector in the back of a van and drove it to a lofty observatory, measuring how the muon flux changed with altitude. 

The Escaramujo Project forged direct connections between students at eight universities, who can now use their detectors to collect and share data with other Escaramujo participants.

“This state is one of the poorest states in Mexico,” says Karen Caballero, a professor at UNACH who brought the Escaramujo Project to the university. “The students in Chiapas don’t have the opportunity to participate in international initiatives, so this has been very, very important for them.”

Caballero says there are plans for the full Escaramujo cohort to use their detectors to calibrate expansions of the Latin American Giant Observatory, used for an experiment that began in 2005. LAGO uses multiple sites throughout Central and South America to study gamma-ray bursts, some of the most powerful explosions in the universe, as well as space weather.

While the workshops for the program wrapped up in early 2016, Izraelevitch says he hopes to visit more universities and lead more workshops in the future.

“Hopefully all these sites can continue growing and working as a collaboration in the future,” he says. “These people are capable and have all the knowledge and enthusiasm for being part of a major, first-class experiment.”


Students at the Universidad Autónoma de Chiapas in Mexico built a detector with the Escaramujo Project.

Federico Izraelevitch

by Lauren Biron at February 02, 2017 04:39 PM

January 30, 2017

Symmetrybreaking - Fermilab/SLAC

Sign of a long-sought asymmetry

A result from the LHCb experiment shows what could be the first evidence of matter and antimatter baryons behaving differently.

Artistic representation of an asymmetric particle decay

A new result from the LHCb experiment at CERN could help explain why our universe is made of matter and not antimatter.

Matter particles, such as protons and electrons, all have an antimatter twin. These antimatter twins appear identical in nearly every respect except that their electric and magnetic properties are opposite.

Cosmologists predict that the Big Bang produced an equal amount of matter and antimatter, which is a conundrum because matter and antimatter annihilate into pure energy when they come into contact. Particle physicists are looking for any minuscule differences between matter and antimatter, which might explain why our universe contains planets and stars and not a sizzling broth of light and energy instead.

The Large Hadron Collider doesn’t just generate Higgs bosons during its high-energy proton collisions—it also produces antimatter. By comparing the decay patterns of matter particles with their antimatter twins, the LHCb experiment is looking for minuscule differences in how these rival particles behave.

“Many antimatter experiments study particles in a very confined and controlled environment,” says Nicola Neri, a researcher at Italian research institute INFN and one of the leaders of the study. “In our experiment, the antiparticles flow and decay, so we can examine other properties, such as the momenta and trajectories of their decay products.”

The result, published today in Nature Physics, examined the decay products of matter and antimatter baryons (particles containing three quarks) and looked at the spatial distribution of the resulting daughter particles within the detector. Specifically, Neri and his colleagues looked for a very rare decay of the lambda-b particle (which contains an up quark, a down quark and a bottom quark) into a proton and three pions (light particles each made of a quark and an antiquark).

Based on data from 6000 decays, Neri and his team found a difference in the spatial orientation of the daughter particles of the matter and antimatter lambda-bs.

“This is the first time we’ve seen evidence of matter and antimatter baryons behaving differently,” Neri says. “But we need more data before we can make a definitive claim.”

Statistically, the result has a significance of 3.3 sigma, which means the chance that it is just a statistical fluctuation (and not a new property of nature) is roughly one in a thousand. The traditional threshold for discovery is 5 sigma, which equates to odds of less than one in a million.
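
For readers who want to reproduce the conversion, here is a minimal sketch (not from the article) that turns a significance quoted in sigma into a tail probability under a normal distribution; whether the quoted odds are one-sided or two-sided is a convention, so both are shown:

    # Convert a significance in "sigma" to the probability of a fluctuation
    # at least that large under a normal (Gaussian) distribution.
    from scipy.stats import norm

    for sigma in (3.3, 5.0):
        one_sided = norm.sf(sigma)      # P(Z > sigma)
        two_sided = 2 * norm.sf(sigma)  # P(|Z| > sigma)
        print(f"{sigma} sigma: one-sided ~ 1 in {1/one_sided:,.0f}, "
              f"two-sided ~ 1 in {1/two_sided:,.0f}")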

For Neri, this result is more than early evidence of a never before seen process—it is a key that opens new research opportunities for LHCb physicists.

“We proved that we are there,” Neri says, “Our experiment is so sensitive that we can start systematically looking for this matter-antimatter asymmetry in heavy baryons at LHCb. We have this capability, and we will be able to do even more after the detector is upgraded next year.”

by Sarah Charley at January 30, 2017 06:14 PM

Matt Strassler - Of Particular Significance

Penny Wise, Pound Foolish

The cost to American science and healthcare of the administration’s attack on legal immigration is hard to quantify.  Maybe it will prevent a terrorist attack, though that’s hard to say.  What is certain is that American faculty are suddenly no longer able to hire the best researchers from the seven countries currently affected by the ban.  Numerous top scientists suddenly cannot travel here to share their work with American colleagues; or if already working here, cannot now travel abroad to learn from experts elsewhere… not to mention visiting their families.  Those caught outside the country cannot return, hurting the American laboratories where they are employed.

You might ask what the big deal is; it’s only seven countries, and the ban is temporary. Well (even ignoring the outsized role of Iran, whose many immigrant engineers and scientists are here because they dislike the ayatollahs and their alternative facts), the impact extends far beyond these seven.

The administration’s tactics are chilling.  Scientists from certain countries now fear that one morning they will discover their country has joined the seven, so that they too cannot hope to enter or exit the United States.  They will decide now to turn down invitations to work in or collaborate with American laboratories; it’s too risky.  At the University of Pennsylvania, I had a Pakistani postdoc, who made important contributions to our research effort. At the University of Washington we hired a terrific Pakistani mathematical physicist. Today, how could I advise someone like that to accept a US position?

Even those not worried about being targeted may decide the US is not the open and welcoming country it used to be.  Many US institutions are currently hiring people for the fall semester.  A lot of bright young scientists — not just Muslims from Muslim-majority nations — will choose to go instead to Canada, to the UK, and elsewhere, leaving our scientific enterprise understaffed.

Well, but this is just about science, yes?  Mostly elite academics presumably — it won’t affect the average person.  Right?

Wrong.  It will affect many of us, because it affects healthcare, and in particular, hospitals around the country.  I draw your attention to an article written by an expert in that subject:

http://www.cnn.com/2017/01/29/opinions/trump-ban-impact-on-health-care-vox/index.html

and I’d like to quote from the article (highlights mine):

“Our training hospitals posted job listings for 27,860 new medical graduates last year alone, but American medical schools only put out 18,668 graduates. International physicians percolate throughout the entire medical system. To highlight just one particularly intense specialty, fully 30% of American transplant surgeons started their careers in foreign medical schools. Even with our current influx of international physicians as well as steadily growing domestic medical school spots, the Association of American Medical Colleges estimates that we’ll be short by up to 94,700 doctors by 2025.

The President’s decision is as ill-timed as it was sudden. The initial 90-day order encompasses Match Day, the already anxiety-inducing third Friday in March when medical school graduates officially commit to their clinical training programs. Unless the administration or the courts quickly fix the mess President Trump just created, many American hospitals could face staffing crises come July when new residents are slated to start working.”

If you or a family member has to go into the hospital this summer and gets substandard care due to a lack of trained residents and doctors, you know who to blame.  Terrorism is no laughing matter, but you and your loved ones are vastly more likely to die from a medical error than from a terrorist attack.  It's hard to quantify exactly, but it is clear that the number of Americans who have died from medical errors since 2000 runs into the millions, while the number who died from terrorism over the same period is just over three thousand, almost all of them on 9/11 in 2001.  So addressing the terrorism problem by worsening a hospital problem probably endangers Americans more than it protects them.
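To put rough numbers behind that comparison (a back-of-the-envelope estimate of my own, not figures from the article): commonly cited studies of deaths from medical error in US hospitals range from roughly 44,000–98,000 per year (the 1999 Institute of Medicine report) to about 250,000 per year (a 2016 BMJ estimate).  Over the 17 years from 2000 through 2016 that works out to

  44,000/year × 17 years ≈ 0.7 million   up to   250,000/year × 17 years ≈ 4 million

deaths from medical error, compared with roughly 3,000 deaths from terrorism on US soil in the same period.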

Such is the problem of relying on alternative facts in place of solid scientific reasoning.


Filed under: Science and Modern Society Tagged: immigration

by Matt Strassler at January 30, 2017 01:27 PM

January 26, 2017

Symmetrybreaking - Fermilab/SLAC

The robots of CERN

TIM and other mechanical friends tackle jobs humans shouldn’t.

Robot with wheels, an arm and a camera in the tunnel of the Large Hadron Collider

The Large Hadron Collider is the world’s most powerful particle accelerator. Buried in the bedrock beneath the Franco-Swiss border, it whips protons through its nearly 2000 magnets 11,000 times every second.
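That rate is easy to check with a one-line estimate (my arithmetic, not the article's): the protons travel at essentially the speed of light around the ring's roughly 27-kilometre circumference, so the revolution frequency is

  f ≈ c / L ≈ (3.0 × 10^8 m/s) / (2.7 × 10^4 m) ≈ 1.1 × 10^4 revolutions per second,

consistent with the quoted 11,000 laps every second.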

As you might expect, the subterranean tunnel which houses the LHC is not always the friendliest place for human visitors.

“The LHC contains 120 tons of liquid helium kept at 1.9 kelvin,” says Ron Suykerbuyk, an LHC operator. “This cooling system keeps the electromagnets in a superconducting state, capable of carrying up to 13,000 amps of current through their wires. Even with all the safety systems we have in place, we prefer to limit our underground access when the cryogenic systems are on.”

But as with any machine, sometimes the LHC needs attention: inspections, repairs, tuning. Access to the tunnel is so tightly controlled that, even under ideal conditions, it takes 30 minutes after the beam is shut off before the first humans can even reach its entrance.

But the robotics team at CERN asks: Why do we need humans for this job anyway?

Enter TIM—the Train Inspection Monorail. TIM is a chain of wagons, sensors and cameras that snakes along a track bolted to the LHC tunnel’s ceiling. In the 1990s, the track held a cable car that transported machinery and people around the Large Electron-Positron Collider, the first inhabitant of the tunnel. With the installation of the LHC, there was no longer room for both the accelerator and the cable car, so the monorail was reconfigured for the sleeker TIM robots.

There are currently two TIM robots and plans to install two more in the next couple of years. These four TIM robots will patrol the different quadrants of the LHC, enabling operators to reach any part of the 17-mile tunnel within 20 minutes. As TIM slithers along the ceiling, an automated eye keeps watch for any changes in the tunnel and a robotic arm drops down to measure radiation. Other sensors measure the temperature, oxygen level and cell phone reception.
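As a rough consistency check (my estimate, assuming each robot is stationed somewhere within its own quadrant): one quadrant of the roughly 27-kilometre ring is about 6.7 km long, so the farthest point a robot might need to reach is somewhere between about 3.4 km (if it parks mid-quadrant) and 6.7 km (if it parks at one end). Covering that distance in 20 minutes implies an average speed of roughly 10–20 km/h along the monorail.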

“In addition to performing environmental measurements, TIM is a safety system which can be the eyes and ears for members of the CERN Fire Brigade and operations team,” says Mario Di Castro, the leader of CERN’s robotics team. “Eventually we’d like to equip TIM with a fire extinguisher and other physical operations so that it can be the first responder in case of a crisis.”

TIM isn’t alone in its mission to provide a safer environment for its human coworkers. CERN also has three teleoperated robots that can inspect troublesome areas, assess hazards and carry tools.

The main role of these three robots is to access radioactive areas.

Radiation is a type of energy carried by free-moving subatomic particles. As protons race around CERN’s accelerator complex, special devices called collimators constrict their passage and absorb particles that have wandered away from the center of the beam pipe. This trimming process ensures that the proton stream is compact and tidy.

After a couple of weeks of operation, the collimators have absorbed so many particles that they remain radioactive, continuing to emit energy even after the beam is shut off. There is no radiation hazard to humans unless they are within a few meters of the collimators, and because the machine is fully automated, humans rarely need to perform check-ups. But occasionally, material in these restricted areas requires attention.

By replacing humans with robots, engineers can quickly fix small problems without needing to wait long periods of time for the radiation to dissipate or sending personnel into potentially unsafe environments.

“CERN robots help perform repetitive and dangerous tasks that humans either prefer to avoid or are unable to do because of hazards, size constraints or the extreme environments in which they take place, such as CERN experimental areas,” Di Castro says.

About half the time, these tasks are very simple, such as performing a visual assessment of the area or taking measurements. “Robots can replace humans for these simple tasks and improve the quality and timeliness of work,” he says.

Last year the SPS accelerator (the final link in the accelerator chain that feeds particles into the LHC) needed an oil refill to keep its parts running smoothly. But the accelerator itself was too radioactive for humans to visit, so one of the CERN robotics team’s robots rolled in gripping an oil can in its flexible arm.

In June 2016, scientists needed to dispose of radioactive cobalt, cesium and americium they had used to calibrate radiation sensors. Two CERN robots cycled in with several tools, extracted the radioactive sources and packed them in thick protective containers for removal.

Over the last two years, these two robots have performed more than 30 interventions, saving humans both time and radiation doses.

As the LHC increases its power and collision rate over the next decade, Di Castro and his team are preparing these robot companions to take on greater capabilities. “We are putting a strong commitment to adapt and develop existing robotic solutions to fit CERN’s evolving needs,” Di Castro says.

Video: https://www.youtube.com/watch?v=wxKRW1Z2lWo

by Sarah Charley at January 26, 2017 02:00 PM

January 25, 2017

Sean Carroll - Preposterous Universe

What Happened at the Big Bang?

I had the pleasure earlier this month of giving a plenary lecture at a meeting of the American Astronomical Society. Unfortunately, as far as I know they don’t record the lectures on video. So here, at least, are the slides I showed during my talk. I’ve been a little hesitant to put them up, since some subtleties are lost if you only have the slides and not the words that went with them, but perhaps it’s better than nothing.

My assigned topic was “What We Don’t Know About the Beginning of the Universe,” and I focused on the question of whether there could have been space and time even before the Big Bang. Short answer: sure there could have been, but we don’t actually know.

So what I did to fill my time was two things. First, I talked about different ways the universe could have existed before the Big Bang, classifying models into four possibilities (see Slide 7):

  1. Bouncing (the universe collapses to a Big Crunch, then re-expands with a Big Bang)
  2. Cyclic (a series of bounces and crunches, extending forever)
  3. Hibernating (a universe that sits quiescently for a long time, before the Bang begins)
  4. Reproducing (a background empty universe that spits off babies, each of which begins with a Bang)

I don’t claim this is a logically exhaustive set of possibilities, but most semi-popular models I know fit into one of the above categories. Given my own way of thinking about the problem, I emphasized that any decent cosmological model should try to explain why the early universe had a low entropy, and suggested that the Reproducing models did the best job.

My other goal was to talk about how thinking quantum-mechanically affects the problem. There are two questions to ask: is time emergent or fundamental, and is Hilbert space finite- or infinite-dimensional? If time is fundamental, the universe lasts forever; it doesn't have a beginning. But if time is emergent, there may very well be a first moment. If Hilbert space is finite-dimensional, a first moment is necessary (only a finite number of moments of time can possibly emerge), while if it's infinite-dimensional the question remains open.
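One way to unpack that counting argument (my gloss, not a claim from the slides): if each emergent moment of time corresponds to a distinct, perfectly distinguishable quantum state, those moments are represented by mutually orthogonal vectors, and a Hilbert space of finite dimension N contains at most N mutually orthogonal vectors. So at most N moments can emerge, and a finite ordered sequence of moments necessarily has a first one.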

Despite all that we don’t know, I remain optimistic that we are actually making progress here. I’m pretty hopeful that within my lifetime we’ll have settled on a leading theory for what happened at the very beginning of the universe.

by Sean Carroll at January 25, 2017 11:30 PM



Last updated:
March 24, 2017 11:51 AM
All times are UTC.

Suggest a blog:
planet@teilchen.at