Particle Physics Planet

August 21, 2017

Peter Coles - In the Dark

Misty, by Ruth Padel

How I love

The darkwave music
Of a sun’s eclipse
You can’t see for cloud

The saxophonist playing ‘Misty’
In the High Street outside Barclays

Accompanied by mating-calls
Sparked off
In a Jaguar alarm

The way you’re always there
Where I’m thinking

Or several beats ahead.

by Ruth Padel

by telescoper at August 21, 2017 06:57 PM

Clifford V. Johnson - Asymptotia

Viewing the Eclipse

It's an exciting day today! Please don't lock your kids away, which seems to be an alarmingly common option (from looking at the news, many schools seem to be opting to do that; I wish they'd use some of those locked classrooms as camera obscuras). Instead, use this as an opportunity to learn and teach about the wonderful solar system we live in.

Actually, to enjoy the experience, you never even have to look in the direction of the sun if you don't want to (or if you don't have the appropriate eclipse glasses)... you can see crescents everywhere during the partial eclipse if you look out for them. You can make a safe viewing device in a minute or two if you take the time.
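As a rough guide (my own numbers, not from the post): a pinhole projects an image of the Sun whose diameter is roughly the hole-to-screen distance times the Sun's angular size, about 0.53 degrees, so roughly 1/107 of the distance. A quick sketch:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # mean apparent size of the Sun in the sky

def pinhole_image_mm(screen_distance_mm):
    """Approximate diameter of the Sun's image projected by a small pinhole."""
    return screen_distance_mm * math.radians(SUN_ANGULAR_DIAMETER_DEG)

for d in (300, 1000, 2000):  # screen at 30 cm, 1 m, 2 m
    print(f"screen at {d} mm -> image ~{pinhole_image_mm(d):.1f} mm")
```

So a box a metre deep gives an image just under a centimetre across: small, but plenty to watch the crescent shrink and grow.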

Here's an NPR video that summarises the various viewing options: [...] Click to continue reading this post

The post Viewing the Eclipse appeared first on Asymptotia.

by Clifford at August 21, 2017 02:50 PM

Peter Coles - In the Dark

The Story of the 1919 Eclipse Expeditions

Unless you have been living on another planet, you will know that today there will be an eclipse of the Sun, although from the UK it will be rather underwhelming, as only about 4% of the Sun’s disk will be covered by the Moon; for totality you have to be in the United States. For the record, however, the eclipse will begin at 15:46 GMT on August 21 out over the Pacific. It will reach the coast of Oregon at Lincoln City, just west of Salem, at 16:04 GMT (09:04 local time), and there it will reach its maximum at 17:17 GMT (10:17 local time). The path of totality will then track right across the United States to South Carolina. For more details see here. Best wishes to all who are hoping to see this cosmic spectacle! I saw the total eclipse of August 11, 1999 from Alderney in the Channel Islands, and it was a very special experience.

Before starting I can’t resist adding this excerpt from the Times warning about the consequences of a mass influx of people to Cornwall for the 1999 eclipse. No doubt there are similar things going around about today’s eclipse:

I did write a letter to the Times complaining that, as a cosmologist, I felt this was very insulting to druids. They didn’t publish it.

This provides me with a good excuse to repost an old item about the famous expedition during which, on 29th May 1919, measurements were made that have gone down in history as vindicating Einstein’s (then) new general theory of relativity. I’ve written quite a lot about this in past years, including a little book and a slightly more technical paper. I decided, though, to post this little piece which is based on an article I wrote some years ago for Firstscience.




The Eclipse that Changed the Universe

A total eclipse of the Sun is a moment of magic: a scant few minutes when our perceptions of the whole Universe are turned on their heads. The Sun’s blinding disc is replaced by ghostly pale tentacles surrounding a black heart – an eerie experience witnessed by hundreds of millions of people throughout Europe and the Near East last August.

But one particular eclipse of the Sun, eighty years ago, challenged not only people’s emotional world. It was set to turn the science of the Universe on its head. For over two centuries, scientists had believed Sir Isaac Newton’s view of the Universe. Now his ideas had been challenged by a young German-Swiss scientist, called Albert Einstein. The showdown – Newton vs Einstein – would be the total eclipse of 29 May 1919.

Newton’s position was set out in his monumental Philosophiae Naturalis Principia Mathematica, published in 1687. The Principia – as it’s familiarly known – laid down a set of mathematical laws that described all forms of motion in the Universe. These rules applied as much to the motion of planets around the Sun as to more mundane objects like apples falling from trees.

At the heart of Newton’s concept of the Universe were his ideas about space and time. Space was inflexible, laid out in a way that had been described by the ancient Greek mathematician Euclid in his laws of geometry. To Newton, space was the immovable and unyielding stage on which bodies acted out their motions. Time was also absolute, ticking away inexorably at the same rate for everyone in the Universe.

Sir Isaac Newton, painted by Sir Godfrey Kneller. Picture Credit: National Portrait Gallery,

For over 200 years, scientists saw the Cosmos through Newton’s eyes. It was a vast clockwork machine, evolving by predetermined rules through regular space, against the beat of an absolute clock. This edifice totally dominated scientific thought, until it was challenged by Albert Einstein.

In 1905, Einstein dispensed with Newton’s absolute nature of space and time. Although born in Germany, during this period of his life he was working as a patent clerk in Berne, Switzerland. He encapsulated his new ideas on motion, space and time in his special theory of relativity. But it took another ten years for Einstein to work out the full consequences of his ideas, including gravity. The general theory of relativity, first aired in 1915, was as complete a description of motion as Newton had prescribed in his Principia. But Einstein’s description of gravity required space to be curved. Whereas for Newton space was an inflexible backdrop, for Einstein it had to bend and flex near massive bodies. This warping of space, in turn, would be responsible for guiding objects such as planets along their orbits.

Albert Einstein (left), pictured with Arthur Stanley Eddington (right). Picture Credit: Royal Greenwich Observatory.

By the time he developed his general theory, Einstein was back in Germany, working in Berlin. But a copy of his general theory of relativity was soon smuggled through war-torn Europe to Cambridge. There it was read by Arthur Stanley Eddington, Britain’s leading astrophysicist. Eddington realised that Einstein’s theory could be tested. If space really was distorted by gravity, then light passing through it would not travel in a straight line, but would follow a curved path. The stronger the force of gravity, the more the light would be bent. The bending would be largest for light passing very close to a very massive body, such as the Sun.

Unfortunately, the most massive objects known to astronomers at the time were also very bright. This was before black holes were seriously considered, and stars provided the strongest gravitational fields known. The Sun was particularly useful, being a star right on our doorstep. But normally it is impossible to see the light from faint background stars being bent by the Sun’s gravity, because the Sun’s light is so bright it completely swamps the light from objects beyond it.


A scientific sketch of the path of totality for the 1919 eclipse. Picture Credit: Royal Greenwich Observatory.

Eddington realised the solution. Observe during a total eclipse, when the Sun’s light is blotted out for a few minutes, and you can see distant stars that appear close to the Sun in the sky. If Einstein was right, the Sun’s gravity would shift these stars to slightly different positions, compared to where they are seen in the night sky at other times of the year, when the Sun is far away from them. The closer the star appears to the Sun during totality, the bigger the shift would be.
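For scale (my own back-of-envelope sketch, not part of the original article): Einstein's prediction for a light ray grazing the solar limb is a deflection of 4GM/(c²R), about 1.75 seconds of arc, twice the value obtained from a naive Newtonian corpuscular calculation. With standard constants:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# General-relativistic deflection for a ray grazing the solar limb:
# delta = 4GM / (c^2 R). The Newtonian prediction is exactly half of this,
# which is what made the 1919 measurement a clean test between the theories.
delta_rad = 4 * G * M_sun / (c**2 * R_sun)
delta_arcsec = math.degrees(delta_rad) * 3600

print(f"Einstein deflection:  {delta_arcsec:.2f} arcsec")    # ~1.75
print(f"Newtonian deflection: {delta_arcsec / 2:.2f} arcsec")  # ~0.87
```

Under two seconds of arc, measured on photographic plates taken through cloud and corona: it is no wonder the analysis was delicate.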

Eddington began to put pressure on the British scientific establishment to organise an experiment. The Astronomer Royal of the time, Sir Frank Watson Dyson, realised that the 1919 eclipse was ideal. Not only was totality unusually long (around six minutes, compared with the two minutes we experienced in 1999) but during totality the Sun would be right in front of the Hyades, a cluster of bright stars.

But at this point the story took a twist. Eddington was a Quaker and, as such, a pacifist. In 1917, after disastrous losses during the Somme offensive, the British government introduced conscription to the armed forces. Eddington refused the draft and was threatened with imprisonment. In the end, Dyson’s intervention was crucial in persuading the government to spare Eddington. His conscription was postponed under the condition that, if the war had finished by 1919, Eddington himself would lead an expedition to measure the bending of light by the Sun. The rest, as they say, is history.

The path of totality of the 1919 eclipse passed from northern Brazil, across the Atlantic Ocean to West Africa. In case of bad weather (amongst other reasons) two expeditions were organised: one to Sobral, in Brazil, and the other to the island of Principe, in the Gulf of Guinea close to the West African coast. Eddington himself went to Principe; the expedition to Sobral was led by Andrew Crommelin from the Royal Observatory at Greenwich.

British scientists in the field at their observing site in Sobral in 1919. Picture Credit: Royal Greenwich Observatory

The expeditions did not go entirely according to plan. When the day of the eclipse (29 May) dawned on Principe, Eddington was greeted with a thunderstorm and torrential rain. By mid-afternoon the skies had partly cleared and he took some pictures through cloud.

Meanwhile, at Sobral, Crommelin had much better weather – but he had made serious errors in setting up his equipment. He focused his main telescope the night before the eclipse, but did not allow for the distortions that would take place as the temperature climbed during the day. Luckily, he had taken a backup telescope along, and this in the end provided the best results of all.

After the eclipse, Eddington himself carefully measured the positions of the stars that appeared near the Sun’s eclipsed image, on the photographic plates exposed at both Sobral and Principe. He then compared them with reference positions taken previously when the Hyades were visible in the night sky. The measurements had to be incredibly accurate, not only because the expected deflections were small. The images of the stars were also quite blurred, because of problems with the telescopes and because they were seen through the light of the Sun’s glowing atmosphere, the solar corona.

Before long the results were ready. Britain’s premier scientific body, the Royal Society, called a special meeting in London on 6 November. Dyson, as Astronomer Royal, took the floor and announced that the measurements did not support Newton’s long-accepted theory of gravity. Instead, they agreed with the predictions of Einstein’s new theory.

The final proof: the small red line shows how far the position of the star has been shifted by the Sun’s gravity. Each star experiences a tiny deflection, but averaged over many exposures the results definitely support Einstein’s theory. Picture Credit: Royal Greenwich Observatory.

The press reaction was extraordinary. Einstein was immediately propelled onto the front pages of the world’s media and, almost overnight, became a household name. There was more to this than purely the scientific content of his theory. After years of war, the public embraced a moment that moved mankind from the horrors of destruction to the sublimity of the human mind laying bare the secrets of the Cosmos. The two pacifists in the limelight – the British Eddington and the German-born Einstein – were particularly pleased at the reconciliation between their nations brought about by the results.

But the popular perception of the eclipse results differed quite significantly from the way they were viewed in the scientific establishment. Physicists of the day were justifiably cautious. Eddington had needed to make significant corrections to some of the measurements, for various technical reasons, and in the end decided to leave some of the Sobral data out of the calculation entirely. Many scientists were suspicious that he had cooked the books. Although the suspicion lingered for years in some quarters, in the end the results were confirmed at eclipse after eclipse with higher and higher precision.

In this cosmic ‘gravitational lens,’ a huge cluster of galaxies distorts the light from more distant galaxies into a pattern of giant arcs.  Picture Credit: NASA

Nowadays astronomers are so confident of Einstein’s theory that they rely on the bending of light by gravity to make telescopes almost as big as the Universe. When the conditions are right, gravity can shift an object’s position by far more than a microscopic amount. The ideal situation is when we look far out into space, and centre our view not on an individual star like the Sun, but on a cluster of hundreds of galaxies – with a total mass of perhaps 100 million million suns. The space-curvature of this immense ‘gravitational lens’ can gather the light from more remote objects, and focus them into brilliant curved arcs in the sky. From the size of the arcs, astronomers can ‘weigh’ the cluster of galaxies.

Einstein didn’t live long enough to see through a gravitational lens, but if he had he would definitely have approved….

by telescoper at August 21, 2017 11:36 AM

August 20, 2017

Christian P. Robert - xi'an's og

model misspecification in ABC

With David Frazier and Judith Rousseau, we just arXived a paper studying the impact of a misspecified model on the outcome of an ABC run. This is a question that naturally arises when using ABC, but one that has not been directly covered in the literature, apart from a recently arXived paper by James Ridgway [commented on the ‘Og earlier this month]. On the one hand, ABC can be seen as a robust method in that it focuses on the aspects of the assumed model that are translated by the [insufficient] summary statistics and their expectation. And nothing else. It is thus tolerant of departures from the hypothetical model that [almost] preserve those moments. On the other hand, ABC involves a degree of non-parametric estimation of the intractable likelihood, which may sound even more robust, except that the likelihood is estimated from pseudo-data simulated from the “wrong” model in case of misspecification.

In the paper, we examine how the pseudo-true value of the parameter [that is, the value of the parameter of the misspecified model that comes closest to the generating model in terms of Kullback-Leibler divergence] is asymptotically reached by some ABC algorithms, like the ABC accept/reject approach, but not by others, like the popular linear regression [post-simulation] adjustment, which surprisingly concentrates posterior mass on a completely different pseudo-true value. Exploiting our recent assessment of ABC convergence for well-specified models, we show the above convergence result for a tolerance sequence that decreases to the minimum possible distance [between the true expectation and the misspecified expectation] at a slow enough rate, or for a sequence of acceptance probabilities that goes to zero at the proper speed. In the case of the regression correction, the pseudo-true value is shifted by a quantity that does not converge to zero, because of the misspecification in the expectation of the summary statistics. This is not immensely surprising, but we hence get a very different picture when compared with the well-specified case, where regression corrections improve the asymptotic behaviour of the ABC estimators. This discrepancy between the two versions of ABC can be exploited to seek misspecification diagnoses, e.g. through the acceptance rate versus the tolerance level, or via a comparison of the ABC approximations to the posterior expectations of quantities of interest, which should diverge at rate √n. In both cases, ABC reference tables/learning bases can be exploited to draw and calibrate a comparison with the well-specified case.
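To make the accept/reject side of this concrete, here is a toy sketch (entirely my own construction, not the paper's setup): the assumed model is N(θ, 1), the data actually come from an exponential distribution with mean 2, and the summary statistic is the sample mean. Plain accept/reject ABC then concentrates near the pseudo-true value θ = 2, the generating mean:

```python
import random
import statistics

random.seed(0)

# Misspecification toy: the assumed model is N(theta, 1), but the observed
# data actually come from an exponential with mean 2. With the sample mean
# as the summary statistic, the pseudo-true value is the generating mean.
n = 200
data = [random.expovariate(1 / 2.0) for _ in range(n)]
s_obs = statistics.fmean(data)

def abc_reject(s_obs, n, n_sims=20000, tol=0.05):
    """Plain accept/reject ABC with a flat prior on theta."""
    accepted = []
    for _ in range(n_sims):
        theta = random.uniform(-5, 10)
        # Simulate pseudo-data from the (wrong) assumed model and summarise.
        s_sim = statistics.fmean(random.gauss(theta, 1.0) for _ in range(n))
        if abs(s_sim - s_obs) < tol:
            accepted.append(theta)
    return accepted

accepted = abc_reject(s_obs, n)
print(f"observed summary:   {s_obs:.2f}")
print(f"ABC posterior mean: {statistics.fmean(accepted):.2f}")
```

The accepted draws cluster around the observed summary, i.e. near the pseudo-true value, exactly the accept/reject behaviour described above; reproducing the regression-adjustment shift would require the adjustment step on top of this.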

Filed under: Statistics Tagged: ABC, all models are wrong, Australia, likelihood-free methods, Melbourne, Mission Beach, model mispecification, Monash University, statistical modelling

by xi'an at August 20, 2017 10:17 PM


ZapperZ - Physics and Physicists

RIP Vern Ehlers
The first physicist ever elected to the US Congress has passed away. Vern Ehlers, a moderate Republican from Michigan, passed away at the age of 83.

Vern Ehlers, 83, a research physicist and moderate Republican who represented a western Michigan congressional district for 17 years, died late Tuesday at a Grand Rapids nursing facility, Melissa Morrison, funeral director at Zaagman Memorial Chapel, said Wednesday.

I reported on here when he decided to retire back in 2010. And of course, when he was serving in Congress along with two other elected officials who were physicists, I cited a NY Times article that clearly demonstrated how desperate we are to have people with a science background serving as politicians.

Unfortunately, right now, the US Congress has only ONE representative who is a trained physicist (Bill Foster). It somehow reflects on the lack of rationality that is going on in Washington DC right now.


by ZapperZ at August 20, 2017 03:53 PM

ZapperZ - Physics and Physicists

Solar Eclipse, Anyone?
It's a day before we here in Chicago will get to see a partial solar eclipse. I know of people who are already in downstate Illinois at Carbondale to view the total eclipse (they will get another total eclipse in 2024, I think).

So, will any of you be looking up, hopefully with proper eyewear, to view the eclipse tomorrow? I actually will be teaching a class during the main part of the eclipse, but I may just let the students out for a few minutes to join the crowd on campus who will be doing stuff for the eclipse. Too bad I won't be teaching optics, or this would be an excellent tie-in with the subject matter.


by ZapperZ at August 20, 2017 02:43 PM

The n-Category Cafe

Simplicial Sets vs. Simplicial Complexes

I’m looking for a reference. Homotopy theorists love simplicial sets; certain other topologists love simplicial complexes; they are related in various ways, and I’m interested in one such relation.

Let me explain…

There’s a category SSet of simplicial sets and a category SCpx of simplicial complexes. There is, I believe, a functor

F : SCpx → SSet

that takes the simplices in a simplicial complex, which have unordered vertices, and creates a simplicial set in which the vertices of each simplex are given all possible orderings.
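Concretely, one can tabulate the nondegenerate simplices of F(X) in a few lines of code (a sketch of mine; the function name is made up): each unordered k-simplex of the complex contributes all (k+1)! orderings of its vertices.

```python
from itertools import permutations

def complex_to_ordered_simplices(simplices):
    """Given an abstract simplicial complex as a collection of frozensets,
    return, for each dimension k, the nondegenerate k-simplices of F(X):
    every ordering of the vertices of every k-simplex."""
    out = {}
    for s in simplices:
        k = len(s) - 1
        out.setdefault(k, []).extend(permutations(sorted(s)))
    return out

# The boundary of a triangle: three vertices and three edges (downward closed).
X = [frozenset(s) for s in [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 2}]]
ordered = complex_to_ordered_simplices(X)
print(len(ordered[0]))  # 3 vertices
print(len(ordered[1]))  # 6 ordered edges: each unordered edge in both orders
```

The doubling of the edges is exactly why |F(X)| is bigger than the usual realization [X]: each unordered simplex gets realized once per ordering, and those copies need to be squashed together.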

There is a geometric realization functor

|·| : SSet → Top

There’s also a geometric realization functor for simplicial complexes. I’d better give it another name… let’s say

[·] : SCpx → Top

Here’s what I want a reference for, assuming it’s true: there’s a natural transformation α from the composite

SCpx —F→ SSet —|·|→ Top

to the functor

SCpx —[·]→ Top

such that for any simplicial complex X the component

α_X : |F(X)| → [X]

is a homotopy equivalence. (I don’t think it’s a homeomorphism; a bunch of simplices need to be squashed down.)

Since this field is fraught with confusion, I’d better say exactly what I mean by some things. By a simplicial set I mean a functor X : Δ^op → Set, and by a morphism between simplicial sets a natural transformation between them. That seems pretty uncontroversial. By a simplicial complex I mean what Wikipedia calls an abstract simplicial complex, to distinguish them from simplicial complexes that are made of concrete and used as lawn ornaments… or something like that. Namely, I mean a set X equipped with a family U_X of non-empty finite subsets that is downward closed and contains all singletons. By a map of simplicial complexes from (X, U_X) to (Y, U_Y) I mean a map f : X → Y such that the image of any set in U_X is a set in U_Y.

On MathOverflow the topologist Allen Hatcher wrote:

In some areas simplicial sets are far more natural and useful than simplicial complexes, in others the reverse is true. If one drew a Venn diagram of the people using one or the other structure, the intersection might be very small.

Perhaps this cultural divide is reflected in the fact that the usual definition of an abstract simplicial complex involves a set that contains the set of vertices, whose extra members are entirely irrelevant when it comes to defining morphisms! It’s like defining a vector space to consist of two sets, one of which is a vector space in the usual sense and the other of which doesn’t show up in the definition of linear map. Someone categorically inclined would instead take the approach described on the nLab.

Ultimately you get equivalent categories either way. And as Alex Hoffnung and I explain, the resulting category of simplicial complexes is a quasitopos of concrete sheaves:

A quasitopos has many, but not quite all, of the good properties of a topos. Simplicial sets are a topos of presheaves, so from a category-theoretic viewpoint they’re more tractable than simplicial complexes.

by john at August 20, 2017 05:24 AM

August 19, 2017

Christian P. Robert - xi'an's og

Il cemeterio de alpinismo

In the cemetery around Chiesa Vecchia in Macugnaga, at the bottom of Monte Rosa (and on the other side from Zermatt), so many alpinists and guides are buried that it is called the cemetery of the alpinists. A memorial recalls the deaths of local guides and climbers on the different routes of the Monte Rosa group [there is no Monte Rosa peak per se, but a collection of 15 tops above 4000m]. Plus there are crosses and plaques on the church wall for those whose bodies were not recovered, according to a local guide. (Which sounds strange given that these are not the Himalayas! Unless these are glacier-related deaths…)

Filed under: Mountains, pictures, Travel Tagged: cemetary, Chiesa Vecchia, Dufourspitze, Italia, Italian Alps, Macugnaga, memorial, Monte Rosa, mountains

by xi'an at August 19, 2017 10:17 PM

Peter Coles - In the Dark

The Vale of Clwyd

Why did nobody tell me that Beethoven wrote a collection of 26 Welsh Folk Songs? I had to rely on BBC Radio 3 to educate me about them!

Here’s one example, Number 19 in the published collection, arranged for soprano voice with piano, violin and cello accompaniment and  called The Vale of Clwyd .

Here is a picture taken across the Vale of Clwyd, taken by Jeff Buck.


Photo © Jeff Buck (cc-by-sa/2.0)


by telescoper at August 19, 2017 10:58 AM

August 18, 2017

Christian P. Robert - xi'an's og

the DeepMind debacle

“I hope for a world where data is at the heart of understanding and decision making. To achieve this we need better public dialogue.” Hetan Shah

As I was reading one of the Nature issues I brought on vacation, while the rain was falling on an aborted hiking day on the fringes of Monte Rosa, I came across a 20 July tribune by Hetan Shah, executive director of the RSS. A rare occurrence of a statistician’s perspective in Nature. The event prompting this column is the ruling against the Royal Free London hospital group providing patient data to DeepMind for predicting kidney injury. Without the patients’ agreement. And with enough information to identify the patients. The issues raised by Hetan Shah are that data transfers should become open, and that they should be commensurate in volume and detail to the intended goals. And that public approval should be sought. While I know nothing about this specific case, I find the article overly critical of DeepMind, whose interest in health-related problems is certainly not pure and disinterested but can nonetheless contribute advances in (personalised) care and prevention through its expertise in machine learning. (Disclaimer: I have neither connection nor conflict with the company!) And I do not see exactly how public approval or dialogue can help in making progress in handling data, unless I am mistaken in my understanding of “the public”. The article mentions the launch of a UK project on data ethics, involving several [public] institutions like the RSS: this is certainly commendable and may improve how personal data is handled by companies, but I would not call this conglomerate representative of the public, which most likely does not really trust these institutions either…

Filed under: Books, Statistics, Travel Tagged: data privacy, DeepMind, Monte Rosa, Nature, personalised medicine, Royal Statistical Society, RSS, topology, vacations

by xi'an at August 18, 2017 10:17 PM

Peter Coles - In the Dark

Natwest T20 Blast: Glamorgan v Middlesex

This evening sees the last set of group matches in this summer’s Natwest T20 Blast. Weather permitting, I’ll be at the SSE Swalec Stadium at 7pm to see Glamorgan play Middlesex. Glamorgan are currently top of the South Group, with only two teams (Hampshire and Surrey) able to catch them:

This means that Glamorgan have already qualified for the Quarter Finals to take place next week. If they finish in one of the top two places they will have a home tie against the third or fourth club from the North (or, more properly, Midlands) group. If they finish third they will play away against whichever Midlands team finishes second in that group.

Hampshire are also guaranteed a Quarter Final place but there are many possibilities for the other two slots: only Gloucestershire, who played their final game last night, are definitely eliminated.

Normally, a home Quarter Final tie would be regarded as a `reward’ for doing well in the group, but this season Glamorgan haven’t won any of their home games (either losing them or having them rained off). They might do better to lose tonight and play their next match somewhere else! However, if they beat Middlesex (or if tonight’s game is rained off) I’ll have another match in this competition to watch at Sophia Gardens. After that, proper cricket resumes in the form of championship matches against Sussex (at Colwyn Bay) and in Cardiff against Northamptonshire and Gloucestershire.

I have to say that I find the format of the Natwest T20 Blast group matches a bit strange. It would make sense for each of the 9 teams in each division to play each of the others home and away. That would mean 16 matches per side altogether. In fact each team plays only 14 matches: each plays six teams home and away and two teams only once. Presumably that is to avoid fixture congestion, but the group games are spread over a six week period, so I would have thought it wouldn’t be too difficult to fit another couple of games in.

This morning the Cardiff weather pulled out all the stops. I woke up to bright sunshine, then a few minutes later the rain was lashing down. Then we had thunder and lightning, with rain and hail, followed by more sunshine. It’s also been rather windy. It’s anyone’s guess what will happen this evening, but I’ve paid for my season ticket so I’ll try to make the best of it!

I’ll update this post with pictures of the action. If there is any!

UPDATE. Play was scheduled to start at 7pm. This was the scene at 7.02. 

Still raining. Toss delayed until further notice.

UPDATE to the UPDATE: After a pitch inspection at 8pm we finally got going at 8.20, with 14 overs a side. There were a couple of short interruptions when the rain started again, but the game was completed.

Glamorgan won the toss and decided to field. Middlesex got off to a terrible start and were at one point 7 for 3, and then 24 for 5. They recovered somewhat but could only reach 99 for 8 off their 14 overs. 

Despite a wobble in the middle when they lost 3 quick wickets, including the talismanic Ingram, Glamorgan reached the required round hundred comfortably to win by 7 wickets. 

Their reward is a home tie against Leicestershire next Wednesday evening. I hope the weather is a bit better then!

by telescoper at August 18, 2017 01:33 PM

Matt Strassler - Of Particular Significance

An Experience of a Lifetime: My 1999 Eclipse Adventure

Back in 1999 I saw a total solar eclipse in Europe, and it was a life-altering experience.  I wrote about it back then, but was never entirely happy with the article.  This week I’ve revised it.  It could still benefit from some editing and revision (comments welcome), but I think it’s now a good read.  It’s full of intellectual observations, but there are powerful emotions too.

If you’re interested, you can read it as a pdf, or just scroll down.



A Luminescent Darkness: My 1999 Eclipse Adventure

© Matt Strassler 1999

After two years of dreaming, two months of planning, and two hours of packing, I drove to John F. Kennedy airport, took the shuttle to the Air France terminal, and checked in.  I was brimming with excitement. In three days time, with a bit of luck, I would witness one of the great spectacles that a human being can experience: a complete, utter and total eclipse of the Sun.

I had missed one eight years earlier. In July 1991, a total solar eclipse crossed over Baja California. I had thought seriously about driving the fourteen hundred miles from the San Francisco area, where I was a graduate student studying theoretical physics, to the very southern tip of the peninsula. But worried about my car’s ill health and scared by rumors of gasoline shortages in Baja, I chickened out. Four of my older colleagues, more worldly and more experienced, and supplied with a more reliable vehicle, drove down together. When they returned, exhilarated, they regaled us with stories of their magical adventure. Hearing their tales, I kicked myself for not going, and had been kicking myself ever since. Life is not so long that such opportunities can be rationalized or procrastinated away.

A total eclipse of the Sun is an event of mythic significance, so rare and extraordinary and unbelievable that it really ought to exist only in ancient legends, in epic poems, and in science fiction stories. There are other types of eclipses — partial and total eclipses of the Moon, in which the Earth blocks sunlight that normally illuminates the Moon, and various eclipses of the Sun in which the Moon blocks sunlight that normally illuminates the Earth. But total solar eclipses are in a class all their own. Only during the brief moments of totality does the Sun vanish altogether, leaving the shocked spectator in a suddenly darkened world, gazing uncomprehendingly at a black disk of nothingness.

Our species relies on daylight. Day is warm; day grows our food; day permits travel with a clear sense of what lies ahead. We are so fearful of the night — of what lurks there unseen, of the sounds that we cannot interpret. Horror films rely on this fear; demons and axe murderers are rarely found walking about in bright sunshine. Dark places are dangerous places; sudden unexpected darkness is worst of all. These are the conventions of cinema, born of our inmost psychology. But the Sun and the Moon are not actors projected on a screen. The terror is real.

It has been said that if the Earth were a member of a federation of a million planets, it would be a famous tourist attraction, because this home of ours would be the only one in the federation with such beautiful eclipses. For our skies are witness to a coincidence truly of cosmic proportions. It is a stunning accident that although the Sun is so immense that it could hold a million Earths, and the Moon so small that dozens could fit inside our planet, these two spheres, the brightest bodies in Earth’s skies, appear the same size. A faraway giant may seem no larger than a nearby child. And this perfect match of their sizes and distances makes our planet’s eclipses truly spectacular, visually and scientifically. They are described by witnesses as a sight of weird and unique beauty, a visual treasure completely unlike anything else a person will ever see, or even imagine.
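The coincidence is easy to check with round numbers: the Sun is roughly four hundred times wider than the Moon, but also roughly four hundred times farther away, so the two disks subtend nearly the same angle (about half a degree). Here is a small sketch of that calculation; the diameters and distances below are standard approximate mean values, not figures from the essay.

```python
import math

# Approximate mean values (assumptions for illustration):
SUN_DIAMETER_KM = 1_392_000
SUN_DISTANCE_KM = 149_600_000
MOON_DIAMETER_KM = 3_474
MOON_DISTANCE_KM = 384_400

def angular_size_deg(diameter_km, distance_km):
    """Angular diameter, in degrees, of a sphere seen from a distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

sun_deg = angular_size_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)
moon_deg = angular_size_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)

print(f"Sun:  {sun_deg:.3f} degrees")   # about half a degree
print(f"Moon: {moon_deg:.3f} degrees")  # very nearly the same
```

Because the Moon's distance varies along its orbit, its apparent size wobbles around the Sun's: slightly larger gives a total eclipse, slightly smaller an annular one.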

But total solar eclipses are uncommon, occurring only once every year or two. Even worse, totality only occurs in a narrow band that sweeps across the Earth — often just across its oceans. Only a small fraction of the Earth sees a total eclipse in any century. And so these eclipses are precious; only the lucky, or the devoted, will experience one before they die.

In my own life, I’d certainly been more devoted than lucky. I knew it wasn’t wise to wait for the Moon’s shadow to find me by chance. Instead I was going on a journey to place myself in its path.

The biggest challenge in eclipse-chasing is the logistics. The area in which totality is visible is very long but very narrow. For my trip, in 1999, it was a long strip running west to east all across Europe, but only a hundred miles wide from north to south. A narrow zone crossing heavily populated areas is sure to attract a massive crowd, so finding hotels and transport can be difficult. Furthermore, although eclipses are precisely predictable, governed by the laws of gravity worked out by Isaac Newton himself, weather and human beings are far less dependable.

But I had a well-considered plan. I would travel by train to a small city east of Paris, where I had reserved a rental car. Keeping a close watch on the weather forecast, I would drive on back roads, avoiding clogged highways. I had no hotel reservations. It would have been pointless to make them for the night before the event, since it was well known that everything within two hours' drive of the totality zone was booked solid. Moreover, I wanted the flexibility to adjust to the weather and couldn’t know in advance where I’d want to stay. So my idea was that on the night prior to the eclipse, I would drive to a good location in the path of the lunar shadow, and sleep in the back of my car. I had a sleeping bag with me to keep me warm, and enough lightweight clothing for the week — and not much else.

Oh, it was such a good plan, clean and simple, and that’s why my heart had so far to sink and my brain so ludicrous a calamity to contemplate when I checked my wallet, an hour before flight time, and saw a gaping black emptiness where my driver’s license was supposed to be. I was struck dumb. No license meant no car rental; no car meant no flexibility and no place to sleep. Sixteen years of driving and I had never lost it before; why, why, of all times, now, when it was to play a central role in a once-in-a-lifetime adventure?

I didn’t panic. I walked calmly back to the check-in counters, managed to get myself rescheduled for a flight on the following day, drove the three hours back to New Jersey, and started looking. It wasn’t in my car. Nor was it in the pile of unneeded items I’d removed from my wallet. Not in my suitcase, not under my bed, not in my office. As it was Sunday, I couldn’t get a replacement license. Hope dimmed, flickered, and went dark.

Deep breaths. Plan B?

I didn’t have a tent, and couldn’t easily have found one. But I did have a rain poncho, large enough to keep my sleeping bag off the ground. As long as it didn’t rain too hard, I could try, the night before the eclipse, to find a place to camp outdoors; with luck I’d find lodging for the other nights. I doubted this would be legal, but I was willing to take the chance. But what about my suitcase? I couldn’t carry that around with me into the wilderness. Fortunately, I knew a solution. For a year after college, I had studied music in France, and had often gone sightseeing by rail. On those trips I had commonly made use of the ubiquitous lockers at the train stations, leaving some luggage while I explored the nearby town. As for flexibility of location, that was unrecoverable; the big downside of Plan B was that I could no longer adjust to the weather. I’d just have to be lucky. I comforted myself with the thought that the worst that could happen to me would be a week of eating French food.

So the next day, carrying the additional weight of a poncho and an umbrella, but having in compensation discarded all inessential clothing and tourist information, I headed back to the airport, this time by bus. Without further misadventures, I was soon being carried across the Atlantic.

As usual I struggled to nap amid the loud silence of a night flight. But my sleeplessness was rewarded with one of those good omens that makes you think that you must be doing the right thing. As we approached the European coastline, and I gazed sleepily out my window, I suddenly saw a bright glowing light. It was the rising white tip of the thin crescent Moon.

Solar eclipses occur at New Moon, always. This is nothing but simple geometry; the Moon must place itself exactly between the Sun and the Earth to cause an eclipse, and that means the half of the Moon that faces us must be in shadow. (At Full Moon, the opposite is true; the Earth is between the Sun and the Moon, so the half of the Moon that faces us is in full sunlight. That’s when lunar eclipses can occur.) And just before a New Moon, the Moon is close to the Sun’s location in the sky. It becomes visible, as the Earth turns, just before the Sun does, rising as a morning crescent shortly before sunrise. (Similarly, we get an evening crescent just after a New Moon.)

There, out over the vast Atlantic, from a dark ocean of water into a dark sea of stars, rose the delicate thin slip of Luna the lover, on her way to her mystical rendezvous with Sol. Her crescent smiled at me and winked a greeting. I smiled back, and whispered, “see you in two days…” For totality is not merely the only time you can look straight at the Sun and see its crown. It is the only time you can see the New Moon.

We landed in Paris at 6:30 Monday morning, E-day-minus-two. I headed straight to the airport train station, and pored over rail maps and my road maps trying to guess a good location to use as a base. Eventually I chose a medium-sized town with the name Charleville-Mezieres. It was on the northern edge of the totality zone, at the end of a large spoke of the Paris-centered rail system, and was far enough from Paris, Brussels, and all large German towns that I suspected it might escape the worst of the crowds. It would then be easy, the night before the eclipse, to take a train back into the center of the zone, where totality would last the longest.

Two hours later I was in the Paris-East rail station and had purchased my ticket for Charleville-Mezieres. With ninety minutes to wait, I wandered around the station. It was evident that France had gone eclipse-happy. Every magazine had a cover story; every newspaper had a special insert; signs concerning the event were everywhere. Many of the magazines carried free eclipse glasses, with a black opaque metallic material for lenses that only the Sun can penetrate. Warnings against looking at the Sun without them were to be found on every newspaper front page. I soon learned that there had been a dreadful scandal in which a widely distributed shipment of imported glasses was discovered to be dangerously defective, leading the government to make a hurried and desperate attempt to recall them. There were also many leaflets advertising planned events in towns lying in the totality zone, and information about extra trains that would be running. A chaotic rush out of Paris was clearly expected.

Before noon I was on a train heading through the Paris suburbs into the farmlands of the Champagne region. The rocking of the train put me right to sleep, but the shrieking children halfway up the rail car quickly ended my nap. I watched the lovely sunlit French countryside as it rolled by. The Sun was by now well overhead — or rather, the Earth had rotated so that France was nearly facing the Sun head on. Sometimes, when the train banked on a turn, the light nearly blinded me, and I had to close my eyes.

With my eyelids shut, I thought about how I’d managed, over decades, to avoid ever once accidentally staring at the Sun for even a second… and about how almost every animal with eyes manages to do this during its entire life. It’s quite a feat, when you think about it. But it’s essential, of course. The Sun’s ferocious blaze is even worse than it appears, for it contains more than just visible light. It also radiates light too violet for us to see — ultraviolet — which is powerful enough to destroy our vision. Any animal lacking instincts powerful enough to keep its eyes off the Sun will go blind, soon to starve or be eaten. But humans are in danger during solar eclipses, because our intense curiosity can make us ignore our instincts. Many of us will suffer permanent eye damage, not understanding when and how it is safe to look at the Sun… which is almost, but not quite, never.

In fact the only time it is safe to look with the naked eye is during totality, when the Sun’s disk is completely blocked by the New Moon, and the world is dark. Then, and only then, can one see that the Sun is not a sphere, and that it has a sort of atmosphere, immense and usually unseen.

At the heart of the Sun, and source of its awesome power, is its nuclear furnace, nearly thirty million degrees hot and nearly five billion years old. All that heat gradually filters and boils out of the Sun’s core toward its visible surface, which is a mere six thousand degrees… still white-hot. Outside this region is a large irregular halo of material that is normally too dim to see against the blinding disk. The inner part of that halo is called the chromosphere; there, giant eruptions called “prominences” loop outward into space. The outer part of the halo is the “corona”, Latin for “crown.” The opportunity to see the Sun’s corona is one of the main reasons to seek totality.

Still very drowsy, but in a good mood, I arrived in Charleville. Wanting to leave my bags in the station while I looked for a hotel room, I searched for the luggage lockers. After three tiring trips around the station, I asked at a ticket booth. “Oh,” said the woman behind the desk, “we haven’t had them available since the Algerian terrorism of a few years ago.”

I gulped. This threatened plan B, for what was I to do with my luggage on eclipse day? I certainly couldn’t walk out into the French countryside looking for a place to camp while carrying a full suitcase and a sleeping bag! And even the present problem of looking for a hotel would be daunting. The woman behind the desk was sympathetic, but her only suggestion was to try one of the hotels near the station. Since the tourist information office was a mile away, it seemed the only good option, and I lugged my bags across the street.

Here, finally, luck smiled. The very first place I stopped at had a room for that night, reasonably priced and perfectly clean, if spartan. It was also available the night after the eclipse. My choice of Charleville had been wise. Unfortunately, even here, Eclipse Eve — Tuesday evening — was as bad as I imagined. The hotelière assured me that all of Charleville was booked (and my later attempts to find a room, even a last-minute cancellation, proved fruitless). Still, she was happy for me to leave my luggage at the hotel while I tramped through the French countryside. Thus was Plan B saved.

Somewhat relieved, I wandered around the town. Charleville is not unattractive, and the orange sandstone 16th century architecture of its central square is very pleasing to the eye. By dusk I was exhausted and collapsed on my bed. I slept long and deep, and awoke refreshed. I took a short sightseeing trip by train, ate a delicious lunch, and tried one more time to find a room in Charleville for Eclipse Eve. Failing once again, I resolved to camp in the heart of the totality zone.

But where? I had several criteria in mind. For the eclipse, I wanted to be far from any large town or highway, so that streetlights, often automatically triggered by darkness, would not spoil the experience. Also I wanted hills and farmland; I wanted to be at a summit, with no trees nearby, in order to have the best possible view. It didn’t take long to decide on a location. About five miles south of the unassuming town of Rethel, rebuilt after total destruction in the first world war, my map showed a high hill. It seemed perfect.

Fortunately, I learned just in time that this same high hill had attracted the attention of the local authorities, and they had decided to designate this very place the “official viewing site” in the region. A hundred thousand people were expected to descend on Rethel and take shuttles from the town to the “site.” Clearly this was not where I wanted to be!

So instead, when I arrived in Rethel, I walked in another direction. I aimed for an area a few miles west of town, quiet hilly farmland.

Yet again, my luck seemed to be on the wane. By four it was drizzling, and by five it was raining. Darkness would settle at around eight, and I had little time to find a site for unobtrusive camping, much less a dry one. The rain stopped, restarted, hesitated, spat, but refused to go away. An unending mass of rain clouds could be seen heading toward me from the west. I had hoped to use trees for some shelter against rain, but now the trees were drenched and dripping, even worse than the rain itself.

Still completely unsure what I would do, I continued walking into the evening. I must have cut a very odd figure, carrying an open umbrella, a sleeping bag, and a small black backpack. I took a break in a village square, taking shelter at a church’s side door, where I munched on French bread and cheese. Maybe one of these farmers would let me sleep in a dry spot in his barn, I thought to myself. But I still hadn’t reached the hills I was aiming for, so I kept walking.

After another mile, I came to a hilltop with a dirt farm track crossing the road. There, just off the road to the right, was a large piece of farm machinery. And underneath it, a large, flat, sheltered spot. Hideous, but I could sleep there. Since it wasn’t quite nightfall yet and I could see a hill on the other side of the road along the same track, one which looked like it might be good for watching the eclipse, I took a few minutes to explore it. There I found another piece of farm equipment, also with a sheltered underbelly. This one was much further from the road, looked unused, and presumably offered both safer and quieter shelter. It was sitting just off the dirt track in a fallow field. The field was of thick, sticky, almost hard mud, the kind you don’t slip in and which doesn’t ooze but which gloms onto the sides of your shoe.

And so it was that Eclipse Eve found me spreading my poncho in a friendly unknown farmer’s field, twisting my body so as not to hit my head on the metal bars of my shelter, carefully unwrapping my sleeping bag and removing my shoes so as not to cover everything in mud, brushing my teeth in bottled water, and bedding down for the night. The whole scene was so absurd that I found myself sporting a slightly manic grin and giggling. But still, I was satisfied. Despite the odds, I was in the zone at the appointed time; when I awoke the next morning I would be scarcely two miles from my final destination. If the clouds were against me, so be it. I had done my part.

I slept pretty well, considering both my excitement and the uneven ground. At daybreak I was surrounded by fog, but by 8 a.m. the fog was lifting, revealing a few spots of blue sky amid low clouds. My choice of shelter was also confirmed; my sleeping bag was dry, and across the road the other piece of machinery I had considered was already in use.

I packed up and started walking west again. The weather seemed uncertain, with three layers of clouds — low stratus, medium cumulus, and high cirrus — crossing over each other. Blue patches would appear, then close up. I trudged to the base of my chosen hill, then followed another dirt track to the top, where I was graced with a lovely view. The rolling landscape of fertile France stretched before me, blotched here and there with sunshine.  Again I had chosen well, better than I realized, as it turned out, for I was not alone on the hill. A Belgian couple had chosen it too — and they had a car…

There I waited. The minutes ticked by. The temperature fluctuated, and the fields changed color, as the Sun played hide and seek. I didn’t need these reminders of the Sun’s importance — that without its heat the Earth would freeze, and without its light, plants would not grow and the cycle of life would quickly end. I thought about how pre-scientific cultures had viewed the Sun. In cultures and religions around the world, the blazing disk has often been credited with divine power and regal authority. And why not? In the past century, we’ve finally learned what the Sun is made from and why it shines. But we are no less in awe than our ancestors, for the Sun is much larger, much older, and much more powerful than most of them imagined.

For a while, I listened to the radio. Crowds were assembling across Europe. Special events — concerts, art shows, contests — were taking place, organized by towns in the zone to coincide with the eclipse. This was hardly surprising. All those tourists had come for totality. But totality is brief, never more than a handful of minutes.  It’s the luck of geometry, the details of the orbits of the Earth and Moon, that set its duration. For my eclipse, the Moon’s shadow was only about a hundred miles wide. Racing along at three thousand miles per hour, it would darken any one location for at most two minutes. Now if a million people are expected to descend on your town for a two-minute event, I suppose it is a good idea to give them something else to do while they wait. And of course, the French cultural establishment loves this kind of opportunity. Multimedia events are their specialty, and they often give commissions to contemporary artists. I was particularly amused to discover later that an old acquaintance of mine — I met him in 1987 at the composers’ entrance exams for the Paris Conservatory — had been commissioned to write an orchestral piece, called “Eclipse,” for the festival in the large city of Reims. It was performed just before the moment of darkness.
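The "at most two minutes" figure follows directly from the essay's own round numbers: a shadow about a hundred miles wide, racing along at about three thousand miles per hour, takes 100/3000 of an hour to pass over any one spot. A one-line sketch of that arithmetic:

```python
# Round figures quoted in the essay, not precise eclipse data.
shadow_width_miles = 100
shadow_speed_mph = 3000

# Time for the shadow to cross its own width = longest possible totality
# at any single location on the center line.
max_totality_minutes = shadow_width_miles / shadow_speed_mph * 60
print(max_totality_minutes)  # 2.0
```

The actual duration at a given spot was shorter still away from the center line, where the shadow's chord is narrower than its full width.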

Finally, around 11:30, the eclipse began. The Moon nibbled a tiny notch out of the sun. I looked at it briefly through my eclipse glasses, and felt the first butterflies of anticipation. The Belgian couple, in their late forties, came up to the top of the hill and stood alongside me. They were Flemish, but the man spoke French, and we chatted for a while. It turned out he was a scientist also, and had spent some time in the United States, so we had plenty to talk about. But our discussion kept turning to the clouds, which showed no signs of dissipating. The Sun was often veiled by thin cirrus or completely hidden by thick cumulus. We kept a nervous watch.

Time crawled as the Moon inched across the brilliant disk. It passed the midway point and the Sun became a crescent. With only twenty minutes before totality, my Belgian friends conversed in Dutch. The man turned to me. “We have decided to drive toward that hole in the clouds back to the east,” he said in French. “It’s really not looking so good here. Do you want to come with us?” I paused to think. How far away was that hole? Would we end up back at the town? Would we get caught in traffic? Would we end up somewhere low? What were my chances if I stayed where I was? I hesitated, unsure. If I went with them, I was subject to their whims, not my own. But after looking at the oncoming clouds one more time, I decided my present location was not favorable. I joined them.

We descended the dirt track and turned left onto the road I’d taken so long to walk. It was completely empty. We kept one eye on where we were going and five eyes on the sky. After two miles, the crescent sun became visible through a large gap in the low clouds. There were still high thin clouds slightly veiling it, but the sky around it was a pale blue. We went a bit further, and then stopped… at the very same dirt track where I had slept the night before. A line of ten or fifteen cars now stretched along it, but there was plenty of room for our vehicle.

By now, with ten minutes to go, the light was beginning to change. When only five percent of the Sun remains, your eye can really tell. The blues become deeper, the whites become milkier, and everything is more subdued. Also it becomes noticeably cooler. I’d seen this light before, in New Mexico in 1994. I had gone there to watch an “annular” eclipse of the Sun. An annular eclipse occurs when the Moon passes directly in front of the Sun but is just a bit too far away from the Earth for its shadow to reach the ground. In such an eclipse, the Moon fails to completely block the Sun; a narrow ringlet, or “annulus”, often called the “ring of fire,” remains visible. That day I watched from a mountain top, site of several telescopes, in nearly clear skies. But imagine the dismay of the spectators as the four-and-a-half minutes of annularity were blocked by a five-minute cloud! Fortunately there was a bright spot. For a brief instant — no more than three seconds — the cloud became thin, and a perfect circle of light shone through, too dim to penetrate eclipse glasses but visible with the naked eye… a veiled, surreal vision.

On the dirt track in the middle of French fields, we started counting down the minutes. There was more and more tension in the air. I put faster speed film into my camera. The light became still milkier, and as the crescent became a fingernail, all eyes were focused either on the Sun itself or on a small but thick and dangerous-looking cloud heading straight for it. Except mine. I didn’t care if I saw the last dot of sunlight disappear. What I wanted to watch was the coming of Moon-shadow.

One of my motivations for seeking a hill was that I wanted to observe the approach of darkness. Three thousand miles an hour is just under a mile per second, so if one had a view extending out five miles or so, I thought, one could really see the edge coming. I expected it would be much like watching the shadow of a cloud coming toward me, with the darkness sweeping along the ground, only much darker and faster. I looked to the west and waited for the drama to unfold.
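The estimate behind that expectation can be sketched in a couple of lines, using the essay's figures (three thousand miles per hour, a sightline of about five miles from the hilltop):

```python
# Figures from the essay: shadow speed and an assumed five-mile sightline.
speed_mph = 3000
speed_miles_per_sec = speed_mph / 3600   # just under a mile per second

view_distance_miles = 5
# Seconds between the shadow edge appearing at the horizon and arriving.
approach_seconds = view_distance_miles / speed_miles_per_sec
print(round(speed_miles_per_sec, 2), round(approach_seconds, 1))
```

Six seconds or so of visible approach: brief, but long enough to watch the edge of darkness sweep in.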

And it did, but it was not what I was expecting. Even though observing the shadow is a common thing for eclipse watchers to do, nothing I had ever read about eclipses prepared me in the slightest for what I was about to witness. I’ve never seen it photographed, or even described. Maybe it was an effect of all the clouds around us. Or maybe others, just as I do, find it difficult to convey.

For how can one relate the sight of daylight sliding swiftly, like a sigh, to deep twilight? of the western sky, seen through scattered clouds, changing seamlessly and inexorably from blue to pink to slate gray to the last yellow of sunset? of colors rising up out of the horizon and spreading across the sky like water from a broken dyke flooding onto a field?

I cannot find the right combination of words to capture the sense of being swept up, of being overwhelmed, of being transfixed with awe, as one might be before the summoning of a great wave or a great wind by the command of a god, yet all in utter silence and great beauty. Reliving it as I write this brings a tear. In the end I have nothing to compare it to.

The great metamorphosis passed. The light stabilized. Shaken, I looked up.

And quickly looked away. I had seen a near-disk of darkness, the fuzzy whiteness of the corona, and some bright dots around the disk’s edge, one especially bright where the Sun still clearly shone through. Accidentally I had seen with my naked eyes the “diamond ring,” a moment when the last brilliant drop of Sun and the glistening corona are simultaneously visible. It’s not safe to look at. I glanced again. Still several bright dots. I glanced again. Still there — but the Sun had to be covered by now…

So I looked longer, and realized that the Sun was indeed covered, that those bright dots were there to stay. There it was. The eclipsed Sun, or rather, the dark disk of the New Moon, surrounded by the Sun’s crown, studded at its edge with seven bright pink jewels. It was bizarre, awe-inspiring, a spooky hallucination. It shimmered.

The Sun’s corona didn’t really resemble what I had seen in photographs, and I could immediately see why. The corona looked as though it were made of glistening white wispy hair, billowing outward like a mop of whiskers. It gleamed with a celestial light, a shine resembling that of well-lit tinsel. No camera could capture that glow, no photograph reproduce it.

But the greatest, most delightful surprise was the seven beautiful gems. I knew they had to be the great eruptions on the surface of the Sun, prominences, huge magnetic storms larger than our planet and more violent than anything else in the solar system. However, nobody ever told me they were bright pink! I had always assumed they were orange (silly of me, since the whole Sun looks orange if you look at it through an orange filter, which the photographs always do). They were arranged almost symmetrically around the Sun, with one of them actually well separated from its surface and halfway out into the lovely soft filaments of the corona. I explored them with my binoculars. The colors, the glistening timbre, the rich detail: it was a visual delight. The scene was living, vibrant, delicate and soft; by comparison, all the photographs and films seem dry, flat, deadened.

I was surprised at my calm. After the great rush of the shadow, the stasis of totality had caught me off guard.  Around me it was much lighter than I had expected. The sense was of late twilight, with a deep blue-purple sky; yet it was still bright enough to read by. The yellow light of late sunset stretched all the way around the horizon. The planet Venus was visible, but no stars peeked through the clouds. Perhaps longer eclipses have darker skies, a larger Moon-shadow putting daylight further away.

I had scarcely had time to absorb all of this when, just at the halfway point of totality, the dangerous-looking cumulus cloud finally arrived, and blotted out the view. A groan, but only a half-hearted one, emerged from the spectators; after all we’d seen what we’d come to see. I took in the colors emanating from the different parts of the sky, and then looked west again, waiting for the light to return. A thin red glow touched the horizon. I waited. Suddenly the red began to grow furiously. I yelled “Il revient!” — it is returning! — and then watched in awe as the reds became pinks, swarmed over us, turned yellow-white…

And then… it was daylight again. Normality, or a slightly muted version of it. The magical show was over, heavenly love had been consummated, we who had traveled far had been rewarded. The weather had been kind to us. There was a pause as we savored the experience, and waited for our brains to resume functioning. Then congratulations were passed around as people shook hands and hugged each other. I thanked my Belgian friends, who like me were smiling broadly. They offered me a ride back to town. I almost accepted, but stopped short, and instead thanked them again and told them I somehow wanted to be outside for a while longer. We exchanged addresses, said goodbyes, they drove off.

I started retracing my steps from the previous evening. As I walked back to the town of Rethel in the returning sunshine, the immensity of what I had seen began gradually to make its way through my skin into my blood, making me teary-eyed. I thought about myself, a scientist, educated and knowledgeable about the events that had just taken place, and tried to imagine what would have happened to me today if I had not had that knowledge and had found myself, unexpectedly, in the Moon’s shadow.

It was not difficult; I had only to imagine what I would feel if the sky suddenly, without any warning, turned a fiery red instead of blue and began to howl. It would have been a living nightmare. The terror that I would have felt would have penetrated my bones. I would have fallen on my knees in panic; I would have screamed and wept; I would have called on every deity I knew and others I didn’t know for help; I would have despaired; I would have thought death or hell had come; I would have assumed my life was about to end. The two minutes of darkness, filled with the screams and cries of my neighbors, would have been timeless, maddening. When the Sun just as suddenly returned, I would have collapsed onto the ground with relief, profusely and weepingly thanking all of the deities for restoring the world to its former condition, and would have rushed home to relatives and friends, hoping to find some comfort and solace.

I would have sought explanations. I would have been willing to consider anything: dragons eating the Sun, spirits seeking to punish our village or country for its transgressions, evil and spiteful monsters trying to freeze the Earth, gods warning us of terrible things to come in future. But above all, I could never, never have imagined that this brief spine-chilling extinction and transformation of the Sun was a natural phenomenon. Nothing so spectacular and sudden and horrifying could have been the work of mere matter. It would once and for all have convinced me of the existence of creatures greater and more powerful than human beings, if I had previously had any doubt.

And I would have been forever changed. No longer could I have entirely trusted the regularity of days and nights, of seasons, of years. For the rest of my life I would have always found myself glancing at the sky, wanting to make sure that all, for the moment, was well. For if the Sun could suddenly vanish for two minutes, perhaps the next time it could vanish for two hours, or two days… or two centuries. Or forever.

I pondered the impact that eclipses, both solar and lunar, have had throughout human history. They have shaped civilizations. Wars and slaughters were begun and ended on their appearance; they sent ordinary people to their deaths as appeasement sacrifices; new gods and legends were invoked to give meaning to them. The need to predict them, and the coincidences which made their prediction possible, helped give birth to astronomy as a mathematically precise science, in China, in Greece, in modern Europe — developments without which my profession, and even my entire technologically-based culture, might not exist.

It was an hour’s walk to Rethel, but that afternoon it was a long journey. It took me across the globe to nations ancient and distant. By the time I reached the town, I’d communed with my ancestors, reconsidered human history, and examined anew my tiny place in the universe.  If I’d been a bit calm during totality itself, I wasn’t anymore. What I’d seen was gradually filtering, with great potency, into my soul.

I took the train back to Charleville, and slept dreamlessly. The next two days were an opportunity to unwind, to explore, and to eat well. On my last evening I returned to Paris to visit my old haunts. I managed to sneak into the courtyard of the apartment house where I had had a one-room garret up five flights of stairs, with its spartan furnishings and its one window that looked over the roofs of Paris to the Eiffel Tower. I wandered past the old Music Conservatory, since moved to the northeast corner of town, and past the bookstore where I bought so much music. My favorite bakery was still open.

That night I slept in an airport hotel, and the next day flew happily home to the American continent. I never did find my driver’s license.

But psychological closure came already on the day following the eclipse. I spent that day in Laon, a small city perched magnificently atop a rocky hill that rises vertically out of the French plains. I wandered its streets and visited its sights — an attractive church, old houses, pleasant old alleyways, ancient walls and gates. As evening approached I began walking about, looking for a restaurant, and I came to the northwestern edge of town overlooking the new city and the countryside beyond. The clouds had parted, and the Sun, looking large and dull red, was low in the sky. I leaned on the city wall and watched as the turning Earth carried me, and Laon, and all of France, at hundreds of miles an hour, intent on placing itself between me and the Sun. Yet another type of solar eclipse, one we call “sunset.”

The ruddy disk touched the horizon. I remembered the wispy white mane and the brilliant pink jewels. In my mind the Sun had always been grand and powerful, life-giver and taker, essential and dangerous. It could blind, burn, and kill.  I respected it, was impressed and awed by it, gave thanks for it, swore at it, feared it. But in the strange light of totality, I had seen beyond its unforgiving, blazing sphere, and glimpsed a softer side of the Sun. With its feathery hair blowing in a dark sky, it had seemed delicate, even vulnerable. It is, I thought to myself, as mortal as we.

The distant French hills rose across its face. As it waned, I found myself feeling a warmth, even a tenderness — affection for this giant glowing ball of hydrogen, this protector of our planet, this lonely beacon in a vast emptiness… the only star you and I will ever know.

Filed under: Astronomy, History of Science Tagged: astronomy, earth, eclipse, moon, space, sun

by Matt Strassler at August 18, 2017 12:30 PM

Emily Lakdawalla - The Planetary Society Blog

NASA experiments will watch eclipse's effect on atmosphere
The upcoming solar eclipse isn’t just about watching the Moon block out the Sun. A suite of NASA-funded science experiments will study the unseen effects of the eclipse on Earth's atmosphere.

August 18, 2017 11:00 AM

Peter Coles - In the Dark


After yesterday’s terrible news, it seems apt to remember happier times.



by telescoper at August 18, 2017 10:54 AM

Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs - Part 4
This is the fourth part of Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as the one of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

read more

by Tommaso Dorigo at August 18, 2017 09:28 AM

August 17, 2017

Emily Lakdawalla - The Planetary Society Blog

Celebrating the 40th anniversaries of the Voyager launches
Sunday, August 20 marks the 40th anniversary of the launch of Voyager 2. Tuesday, September 5, will be the 40th anniversary for Voyager 1. Throughout the next three weeks, we'll be posting new and classic material in honor of the Voyagers. Here's a preview.

August 17, 2017 11:07 PM

Christian P. Robert - xi'an's og

Das Kapital [not a book review]

A rather bland article by Gareth Stedman Jones in Nature reminded me that the first volume of Karl Marx’ Das Kapital is 150 years old this year. Which makes it appear quite close in historical terms [just before the Franco-German war of 1870] and rather remote in scientific terms. I remember going painstakingly through the books in 1982 and 1983, mostly during weekly train trips between Paris and Caen, and not getting much out of it! Even with the help of a cartoon introduction I had received as a 1982 Xmas gift! I had no difficulty in reading the text per se, as opposed to my attempt at Kant’s Critique of Pure Reason the previous summer [along with the other attempt to windsurf!], as the discourse was definitely grounded in economics and not in philosophy. But the heavy prose did not deliver a convincing theory of the evolution of capitalism [and of its ineluctable demise]. While the fundamental argument of workers’ labour being an essential balance to investors’ capital for profitable production was clearly if extensively stated, the extrapolations on diminishing profits associated with decreasing labour input [and the resulting collapse] were murkier and sounded more ideological than scientific. Not that I claim any competence in the matter: my attempts at getting the concepts behind Marxist economics stopped at this point and I have not been seriously thinking about it since! But it still seems to me that the theory did not age very well, missing the increasing power of financial agents in running companies. And of course [unsurprisingly] the numerical revolution and its impact on the (des)organisation of work and the disintegration of proletariat as Marx envisioned it. For instance turning former workers into forced and poor entrepreneurs (Uber, anyone?!).
Not that the working conditions are particularly rosy for many, from a scarcity of low-skill jobs, to a nurtured competition between workers for existing jobs (leading to extremes like the scandalous zero hour contracts!), to minimum wages turned useless by the fragmentation of the working space and the explosion of housing costs in major cities, to the hopelessness of social democracies to get back some leverage on international companies…

Filed under: Statistics Tagged: book reviews, comics, Das Kapital, economics, Immanuel Kant, Karl Marx, London, Marxism, Nature, Paris, philosophy, political economics

by xi'an at August 17, 2017 10:17 PM

John Baez - Azimuth

Complex Adaptive System Design (Part 3)

It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:

Complex adaptive system design (part 1), Azimuth, 2 October 2016.

Complex adaptive system design (part 2), Azimuth, 18 October 2016.

A lot has happened since then, and I want to explain it.

I’m working with Metron Scientific Solutions to develop new techniques for designing complex networks.

The particular problem we began cutting our teeth on is a search and rescue mission where a bunch of boats, planes and drones have to locate and save people who fall overboard during a boat race in the Caribbean Sea. Subsequently the Metron team expanded the scope to other search and rescue tasks. But the real goal is to develop very generally applicable new ideas on designing and ‘tasking’ networks of mobile agents—that is, designing these networks and telling the agents what to do.

We’re using the mathematics of ‘operads’, in part because Spivak’s work on operads has drawn a lot of attention and raised a lot of hopes:

• David Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits.

An operad is a bunch of operations for sticking together smaller things to create bigger ones—I’ll explain this in detail later, but that’s the core idea. Spivak described some specific operads called ‘operads of wiring diagrams’ and illustrated some of their potential applications. But when we got going on our project, we wound up using a different class of operads, which I’ll call ‘network operads’.

Here’s our dream, which we’re still trying to make into a reality:

Network operads should make it easy to build a big network from smaller ones and have every agent know what to do. You should be able to ‘slap together’ a network, throwing in more agents and more links between them, and automatically have it do something reasonable. This should be more flexible than an approach where you need to know ahead of time exactly how many agents you have, and how they’re connected, before you can tell them what to do.

You don’t want a network to malfunction horribly because you forgot to hook it up correctly. You want to focus your attention on optimizing the network, not getting it to work at all. And you want everything to work so smoothly that it’s easy for the network to adapt to changing conditions.

To achieve this we’re using network operads, which are certain special ‘typed operads’. So before getting into the details of our approach, I should say a bit about typed operads. And I think that will be enough for today’s post: I don’t want to overwhelm you with too much information at once.

In general, a ‘typed operad’ describes ways of sticking together things of various types to get new things of various types. An ‘algebra’ of the operad gives a particular specification of these things and the results of sticking them together. For now I’ll skip the full definition of a typed operad and only highlight the most important features. A typed operad O has:

• a set T of types.

• collections of operations O(t_1,...,t_n ; t) where t_i, t \in T. Here t_1, \dots, t_n are the types of the inputs, while t is the type of the output.

• ways to compose operations. Given an operation
f \in O(t_1,\dots,t_n ;t) and n operations

g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1),\dots, g_n \in O(t_{n1},\dots,t_{n k_n};t_n)

we can compose them to get

f \circ (g_1,\dots,g_n) \in O(t_{11}, \dots, t_{nk_n};t)

These must obey some rules.

But if you haven’t seen operads before, you’re probably reeling in horror—so I need to rush in and save you by showing you the all-important pictures that help explain what’s going on!

First of all, you should visualize an operation f \in O(t_1, \dots, t_n; t) as a little gizmo like this:

It has n inputs at top and one output at bottom. Each input, and the output, has a ‘type’ taken from the set T. So, for example, if your operation takes two real numbers, adds them and spits out the closest integer, both input types would be ‘real’, while the output type would be ‘integer’.

The main thing we do with operations is compose them. Given an operation f \in O(t_1,\dots,t_n ;t), we can compose it with n operations

g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1), \quad \dots, \quad g_n \in O(t_{n1},\dots,t_{n k_n};t_n)

by feeding their outputs into the inputs of f, like this:

The result is an operation we call

f \circ (g_1, \dots, g_n)

Note that the input types of f have to match the output types of the g_i for this to work! This is the whole point of types: they forbid us from composing operations in ways that don’t make sense.

This avoids certain stupid mistakes. For example, you can take the square root of a positive number, but you may not want to take the square root of a negative number, and you definitely don’t want to take the square root of a hamburger. While you can land a plane on an airstrip, you probably don’t want to land a plane on a person.
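This type-matching rule is easy to mimic in code. Here is a minimal Python sketch of my own (the class and function names are inventions for illustration, not anything from the post): operations carry their input and output types, and composition refuses mismatches, just as the operad's typing does.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    """An abstract operation with typed inputs and one typed output."""
    name: str
    inputs: tuple   # input types (t_1, ..., t_n)
    output: str     # output type t

def compose(f, gs):
    """Compose f with operations gs, one per input of f, as in f∘(g_1,...,g_n).

    The output type of each g_i must match the i-th input type of f."""
    if len(gs) != len(f.inputs):
        raise TypeError("need one operation per input of f")
    for t, g in zip(f.inputs, gs):
        if g.output != t:
            raise TypeError(f"type mismatch: {g.output} != {t}")
    new_inputs = tuple(t for g in gs for t in g.inputs)
    return Operation(f"{f.name}∘({','.join(g.name for g in gs)})",
                     new_inputs, f.output)

# The rounding example from the text: add two reals, round to the nearest integer.
add = Operation("add", ("real", "real"), "real")
round_ = Operation("round", ("real",), "integer")
h = compose(round_, [add])
print(h.inputs, h.output)   # ('real', 'real') integer
```

Trying to feed `round_`'s integer output into `add`'s real-typed input raises a `TypeError`, which is exactly the "no square roots of hamburgers" discipline the types enforce.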

The operations in an operad are quite abstract: they aren’t really operating on anything. To render them concrete, we need another idea: operads have ‘algebras’.

An algebra A of the operad O specifies a set of things of each type t \in T such that the operations of O act on these sets. A bit more precisely, an algebra consists of:

• for each type t \in T, a set A(t) of things of type t

• an action of O on A, that is, a collection of maps

\alpha : O(t_1,...,t_n ; t) \times A(t_1) \times \cdots \times A(t_n) \to A(t)

obeying some rules.

In other words, an algebra turns each operation f \in O(t_1,...,t_n ; t) into a function that eats things of types t_1, \dots, t_n and spits out a thing of type t.
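A toy algebra for the rounding example in the text can be sketched as a pair of lookup tables (the dictionary design and names are my own, purely illustrative): each type gets a carrier set, and each abstract operation gets a concrete function.

```python
# Each type t gets a carrier set A(t); here a Python type stands in for the set.
carrier = {"real": float, "integer": int}

# Each abstract operation gets a concrete function on the carriers.
ops = {
    "add":   lambda x, y: x + y,           # O(real, real; real)
    "round": lambda x: int(round(x)),      # O(real; integer)
}

def act(op_name, *args):
    """The action α: interpret an abstract operation as a function on carriers."""
    return ops[op_name](*args)

result = act("round", act("add", 1.5, 1.2))
print(result, isinstance(result, carrier["integer"]))  # 3 True
```

A second, coarser algebra could interpret the same operation names differently; that freedom is what the post exploits later for fine-grained versus less detailed implementations.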

When we get to designing systems with operads, the fact that the same operad can have many algebras will be useful. Our operad will have operations describing abstractly how to hook up networks to form larger networks. An algebra will give a specific implementation of these operations. We can use one algebra that’s fairly fine-grained and detailed about what the operations actually do, and another that’s less detailed. There will then be a map from the first algebra to the second, called an ‘algebra homomorphism’, that forgets some fine-grained details.

There’s a lot more to say—all this is just the mathematical equivalent of clearing my throat before a speech—but I’ll stop here for now.

And as I do—since it also takes me time to stop talking—I should make it clear yet again that I haven’t even given the full definition of typed operads and their algebras! Besides the laws I didn’t write down, there’s other stuff I omitted. Most notably, there’s a way to permute the inputs of an operation in an operad, and operads have identity operations, one for each type.

To see the full definition of an ‘untyped’ operad, which is really an operad with just one type, go here:

• Wikipedia, Operad theory.

They just call it an ‘operad’. Note that they first explain ‘non-symmetric operads’, where you can’t permute the inputs of operations, and then explain operads, where you can.

If you’re mathematically sophisticated, you can easily guess the laws obeyed by a typed operad just by looking at this article and inserting the missing types. You can also see the laws written down in Spivak’s paper, but with some different terminology: he calls types ‘objects’, he calls operations ‘morphisms’, and he calls typed operads ‘symmetric colored operads’—or once he gets going, just ‘operads’.

You can also see the definition of a typed operad in Section 2.1 here:

• Donald Yau, Operads of wiring diagrams.

What I would call a typed operad with S as its set of types, he calls an ‘S-colored operad’.

I guess it’s already evident, but I’ll warn you that the terminology in this subject varies quite a lot from author to author: for example, a certain community calls typed operads ‘symmetric multicategories’. This is annoying at first but once you understand the subject it’s as ignorable as the fact that mathematicians have many different accents. The main thing to remember is that operads come in four main flavors, since they can either be typed or untyped, and they can either let you permute inputs or not. I’ll always be working with typed operads where you can permute inputs.

Finally, I’ll say that while the definition of operad looks lengthy and cumbersome at first, it becomes lean and elegant if you use more category theory.

Next time I’ll give you an example of an operad: the simplest ‘network operad’.

by John Baez at August 17, 2017 08:42 AM

August 16, 2017

Symmetrybreaking - Fermilab/SLAC

QuarkNet takes on solar eclipse science

High school students nationwide will study the effects of the solar eclipse on cosmic rays.

Group photo of students and teachers involved in QuarkNet

While most people are marveling at Monday’s eclipse, a group of researchers will be measuring its effects on cosmic rays—particles from space that collide with the earth’s atmosphere to produce muons, heavy cousins of the electron. But these researchers aren’t the usual PhD-holding suspects: They’re still in high school.

More than 25 groups of high school students and teachers nationwide will use small-scale detectors to find out whether the number of cosmic rays raining down on Earth changes during an eclipse. Although the eclipse event will last only three hours, this student experiment has been a months-long collaboration.

The cosmic ray detectors used for this experiment were provided as kits by QuarkNet, an outreach program that gives teachers and students opportunities to try their hands at high-energy physics research. Through QuarkNet, high school classrooms can participate in a whole range of physics activities, such as analyzing real data from the CMS experiment at CERN and creating their own experiments with detectors.

“Really active QuarkNet groups run detectors all year and measure all sorts of things that would sound crazy to a physicist,” says Mark Adams, QuarkNet’s cosmic ray studies coordinator. “It doesn’t really matter what the question is as long as it allows them to do science.”

And this year’s solar eclipse will give students a rare chance to answer a cosmic question: Is the sun a major producer of the cosmic rays that bombard Earth, or do they come from somewhere else?

“We wanted to show that, if the rate of cosmic rays changes a lot during the eclipse, then the sun is a big source of cosmic rays,” Adams says. “We sort of know that the sun is not the main source, but it’s a really neat experiment. As far as we know, no one has ever done this with cosmic ray muons at the surface.”
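To get a feel for the statistics behind "changes a lot", here is a rough sketch of how one might judge whether a measured rate change is significant. The counts are invented for illustration, not QuarkNet data; the point is just that Poisson counts fluctuate by about √N.

```python
import math

# Hypothetical muon counts in equal-length windows before and during totality.
baseline, during = 3600, 3550   # invented numbers, purely for illustration

# For Poisson counting, each count has statistical uncertainty sqrt(N),
# so the difference carries a combined 1-sigma error of sqrt(N1 + N2).
diff = during - baseline
sigma = math.sqrt(baseline + during)
print(f"change = {diff} ± {sigma:.0f}  ({abs(diff)/sigma:.1f} sigma)")
```

With these made-up numbers the change is well under one sigma, i.e. consistent with no effect, which is the outcome Adams describes as likely.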

Adams and QuarkNet teacher Nate Unterman will be leading a group of nine students and five adults to Missouri to the heart of the path of totality—where the moon will completely cover the sun—to take measurements of the event. Other QuarkNet groups will stay put, measuring what effect a partial eclipse might have on cosmic rays in their area.  

Most cosmic rays are likely high-energy particles from exploding stars deep in space, which are picked up via muons in QuarkNet detectors. But the likely result of the experiment—that cosmic rays don’t change their rate when the moon moves in front of the sun—doesn’t eclipse the excitement for the students in the collaboration.

“They’ve been working for months and months to develop the design for the measurements and the detectors,” Adams says. “That’s the great part—they’re not focused on what the answer is but the best way to find it.”

Photo of three students carrying a long detector while another holds the door
Mark Adams

by Leah Poffenberger at August 16, 2017 05:46 PM

Emily Lakdawalla - The Planetary Society Blog

Could the total solar eclipse reveal a comet?
Next week's solar eclipse will reveal the Sun's corona, nearby bright planets and stars, and, if we get extremely lucky, a comet!

August 16, 2017 11:00 AM

August 15, 2017

Symmetrybreaking - Fermilab/SLAC

Dark matter hunt with LUX-ZEPLIN

A video from SLAC National Accelerator Laboratory explains how the upcoming LZ experiment will search for the missing 85 percent of the matter in the universe.

Illustration of a cut-away view of the inside of the LZ detector

What exactly is dark matter, the invisible substance that accounts for 85 percent of all the matter in the universe but can’t be seen even with our most advanced scientific instruments?

Most scientists believe it’s made of ghostly particles that rarely bump into their surroundings. That’s why billions of dark matter particles might zip right through our bodies every second without us even noticing. Leading candidates for dark matter particles are WIMPs, or weakly interacting massive particles.

Scientists at SLAC National Accelerator Laboratory are helping to build and test one of the biggest and most sensitive detectors ever designed to catch a WIMP: the LUX-ZEPLIN or LZ detector. The following video explains how it works.

Dark Matter Hunt with LUX-ZEPLIN (LZ)


August 15, 2017 04:36 PM

Emily Lakdawalla - The Planetary Society Blog

A dispatch from the path of totality: the 2017 solar eclipse in Ravenna, Nebraska
Ravenna, population 1,400, sits on the plains of central Nebraska, and almost on the center line of the path of totality for the upcoming Great American Eclipse. Nebraska native Shane Pekny reports on how this small town is preparing for the big event.

August 15, 2017 11:00 AM

Emily Lakdawalla - The Planetary Society Blog

Bill Nye's top eclipse tip: Protect your eyes
Bill Nye, CEO of The Planetary Society, has some suggestions for staying safe during next week's solar eclipse.

August 15, 2017 11:00 AM

Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs - Part 3
This is the third part of Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as the one of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

read more

by Tommaso Dorigo at August 15, 2017 10:31 AM

John Baez - Azimuth

Norbert Blum on P versus NP

There’s a new paper on the arXiv that claims to solve a hard problem:

• Norbert Blum, A solution of the P versus NP problem.

Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. But these papers can still be fun to look at, at least if they’re not obviously wrong. It’s fun to hope that maybe today humanity has found another beautiful grain of truth.

I’m not an expert on the P versus NP problem, so I have no opinion on this paper. So don’t get excited: wait calmly by your radio until you hear from someone who actually works on this stuff.

I found the first paragraph interesting, though. Here it is, together with some highly non-expert commentary. Beware: everything I say could be wrong!

Understanding the power of negations is one of the most challenging problems in complexity theory. With respect to monotone Boolean functions, Razborov [12] was the first who could shown that the gain, if using negations, can be super-polynomial in comparision to monotone Boolean networks. Tardos [16] has improved this to exponential.

I guess a ‘Boolean network’ is like a machine where you feed in a string of bits and it computes new bits using the logical operations ‘and’, ‘or’ and ‘not’. If you leave out ‘not’ the Boolean network is monotone, since then making more inputs equal to 1, or ‘true’, is bound to make more of the output bits 1 as well. Blum is saying that including ‘not’ makes some computations vastly more efficient… but that this stuff is hard to understand.
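That guess about monotonicity can be checked on a toy example (the three-input circuit and the helper names below are my own, not Blum's): a function built only from 'and' and 'or' never flips its output from 1 to 0 when an input is turned from 0 to 1.

```python
from itertools import product

def monotone_net(x1, x2, x3):
    """A tiny Boolean network using only AND and OR -- no NOT gates."""
    return (x1 and x2) or x3

def leq(a, b):
    """Pointwise order on bit tuples: a <= b iff a_i <= b_i for every i."""
    return all(ai <= bi for ai, bi in zip(a, b))

# Monotonicity: setting more inputs to 1 can only keep or raise the output.
inputs = list(product([0, 1], repeat=3))
assert all(monotone_net(*a) <= monotone_net(*b)
           for a in inputs for b in inputs if leq(a, b))
print("monotone: OK")
# With a NOT gate the property fails: f(x) = not x drops from 1 to 0 as x rises.
```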

For the characteristic function of an NP-complete problem like the clique function, it is widely believed that negations cannot help enough to improve the Boolean complexity from exponential to polynomial.

A bunch of nodes in a graph are a clique if each of these nodes is connected by an edge to every other. Determining whether a graph with n vertices has a clique with more than k nodes is a famous problem: the clique decision problem.

For example, here’s a brute-force search for a clique with at least 4 nodes:
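(In place of the animation, here is what such a brute-force search might look like in Python; the function name and the example graph are my own illustration. It simply tries every k-subset of vertices, which is why it takes exponential time.)

```python
from itertools import combinations

def has_clique(edges, n, k):
    """Brute force: does the graph on vertices 0..n-1 with the given edge
    list contain a clique of k vertices?  Checks all C(n, k) subsets."""
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) in edge_set for pair in combinations(subset, 2))
        for subset in combinations(range(n), k)
    )

# A 4-clique on {0,1,2,3}, plus a pendant vertex 4 hanging off node 3.
edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3), (3,4)]
print(has_clique(edges, 5, 4))  # True
print(has_clique(edges, 5, 5))  # False
```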

The clique decision problem is NP-complete. This means that if you can solve it with a Boolean network whose complexity grows like some polynomial in n, then P = NP. But if you can’t, then P ≠ NP.

(Don’t ask me what the complexity of a Boolean network is; I can guess but I could get it wrong.)

I guess Blum is hinting that the best monotone Boolean network for solving the clique decision problem has a complexity that’s exponential in n. And then he’s saying it’s widely believed that not gates can’t reduce the complexity to a polynomial.

Since the computation of an one-tape Turing machine can be simulated by a non-monotone Boolean network of size at most the square of the number of steps [15, Ch. 3.9], a superpolynomial lower bound for the non-monotone network complexity of such a function would imply P ≠ NP.

Now he’s saying what I said earlier: if you show it’s impossible to solve the clique decision problem with any Boolean network whose complexity grows like some polynomial in n, then you’ve shown P ≠ NP. This is how Blum intends to prove P ≠ NP.

For the monotone complexity of such a function, exponential lower bounds are known [11, 3, 1, 10, 6, 8, 4, 2, 7].

Should you trust someone who claims they’ve proved P ≠ NP, but can’t manage to get their references listed in increasing order?

But until now, no one could prove a non-linear lower bound for the nonmonotone complexity of any Boolean function in NP.

That’s a great example of how helpless we are: we’ve got all these problems whose complexity should grow faster than any polynomial, and we can’t even prove their complexity grows faster than linear. Sad!

An obvious attempt to get a super-polynomial lower bound for the non-monotone complexity of the clique function could be the extension of the method which has led to the proof of an exponential lower bound of its monotone complexity. This is the so-called “method of approximation” developed by Razborov [11].

I don’t know about this. All I know is that Razborov and Rudich proved a whole bunch of strategies for proving P ≠ NP can’t possibly work. These strategies are called ‘natural proofs’. Here are some friendly blog articles on their result:

• Timothy Gowers, How not to prove that P is not equal to NP, 3 October 2013.

• Timothy Gowers, Razborov and Rudich’s natural proofs argument, 7 October 2013.

From these I get the impression that what Blum calls ‘Boolean networks’ may be what other people call ‘Boolean circuits’. But I could be wrong!


Razborov [13] has shown that his approximation method cannot be used to prove better than quadratic lower bounds for the non-monotone complexity of a Boolean function.

So, this method is unable to prove some NP problem can’t be solved in polynomial time and thus prove P ≠ NP. Bummer!

But Razborov uses a very strong distance measure in his proof for the inability of the approximation method. As elaborated in [5], one can use the approximation method with a weaker distance measure to prove a super-polynomial lower bound for the non-monotone complexity of a Boolean function.

This reference [5] is to another paper by Blum. And in the end, he claims to use similar methods to prove that the complexity of any Boolean network that solves the clique decision problem must grow faster than a polynomial.

So, if you’re trying to check his proof that P ≠ NP, you should probably start by checking that other paper!

The picture below, by Behnam Esfahbod on Wikicommons, shows the two possible scenarios. The one at left is the one Norbert Blum claims to have shown we’re in.

by John Baez at August 15, 2017 04:40 AM

August 14, 2017

Clifford V. Johnson - Asymptotia

A Skyline to Come?

I finished that short story project for that anthology I told you about and submitted the final files to the editor on Sunday. Hurrah. It'll appear next year and I'll give you a warning about when it is to appear once they announce the book. It was fun to work on this story. The sample above is a couple of process shots of me working (on my iPad) on an imagining of the LA skyline as it might look some decades from now. I've added several buildings among the ones that might be familiar. It is for the opening establishing shot of the whole book. There's one of San Francisco later on, by the way. (I learned more about the SF skyline and the Bay Bridge than I care to admit now...)

I will admit that I went a bit overboard with the art for this project! I intended to use a much rougher and looser style in both pencil work and colour, and of course ended up with far too much obsessing over precision and detail in the end (as you can also see here, here and here). As an interesting technical landmark [...] Click to continue reading this post

The post A Skyline to Come? appeared first on Asymptotia.

by Clifford at August 14, 2017 10:58 PM

August 12, 2017

Lubos Motl - string vacua and pheno

Arctic mechanism: a derivation of the multiple point criticality principle?
One of the ideas I found irresistible in my research during the last 3 weeks was the multiple point criticality principle mentioned in a recent blog post about a Shiu-Hamada paper.

Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something maximally special seems to happen over there.

This principle is supported by a reasonably impressive prediction of the fine-structure constant, the top quark mass, the Higgs boson mass, and perhaps the neutrino masses and/or the cosmological constant related to them.

In some sense, the principle modifies the naive "uniform measure" on the parameter space that is postulated by naturalness. We may say that the multiple point criticality principle not only modifies naturalness. It almost exactly negates it. The places with \(\theta=0\) where \(\theta\) is the distance from some phase transition are of measure zero, and therefore infinitely unlikely, according to naturalness. But the multiple point criticality principle says that they're really preferred. In fact, if there are several phase transitions and \(\theta_i\) measure the distances from several domain walls in the moduli space, the multiple point criticality principle wants to set all the parameters \(\theta_i\) equal to zero.

Is there an everyday life analogy for that? I think so. Look at the picture at the top and ignore the boat with the German tourist in it. What you see is the Arctic Ocean – with lots of water and ice over there. What is the temperature of the ice and the water? Well, it's about 0 °C, the melting point of water. In reality, the melting point is a bit different due to the salinity.

But in this case, there exists a very good reason to conclude that we're near the melting point. It's because we can see that the water and the ice co-exist. And the water may only exist above the melting point; and the ice may only exist beneath the melting point. The intersection of these two intervals is a narrow interval – basically the set containing the melting point only. If the water were much warmer than the melting point, it would have to cool quickly enough because the ice underneath is colder – it can't really be above the melting point.

(The heat needed for the ice to melt is equal to the heat needed to warm the same amount of water by some 80 °C if I remember well.)

How is it possible that the temperature 0 °C, although it's a special value of measure zero, is so popular in the Arctic Ocean? It's easy. If you study what's happening when you warm the ice – start with a body of ice only – you will ultimately get to the melting point and a part of ice will melt. You will obtain a mixture of the ice and water. Now, if you are adding additional heat, the ice no longer heats up. Instead, the extra heat will be used to transform an increasing fraction of the ice to the water – i.e. to melt the ice.

So the growth of the temperature stops at the melting point. Instead of the temperature, what the additional incoming heat increases is the fraction of the H2O molecules that have already adopted the liquid state. Only when the fraction increases to 100%, you get pure liquid water and the additional heating may increase the temperature above 0 °C.
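This plateau is easy to model. The toy calculation below is my own sketch, using textbook values (specific heats of roughly 2.1 and 4.19 J/(g·K) for ice and water, latent heat of fusion 334 J/g, which also reproduces the "some 80 °C" figure, since 334/4.19 ≈ 80): as heat is added to a gram of ice, the temperature climbs, sticks at 0 °C while the melt fraction absorbs the heat, then climbs again.

```python
# Toy model of heating 1 g of ice that starts at -20 °C.
# Textbook constants: specific heats in J/(g*K), latent heat in J/g.
C_ICE, C_WATER, L_FUSION = 2.1, 4.19, 334.0

def state(heat):
    """Return (temperature in °C, melted fraction) after adding `heat` joules."""
    q_warm_ice = C_ICE * 20            # heat to bring the ice from -20 °C to 0 °C
    if heat < q_warm_ice:
        return -20 + heat / C_ICE, 0.0
    heat -= q_warm_ice
    if heat < L_FUSION:                # on the plateau: temperature pinned at 0 °C,
        return 0.0, heat / L_FUSION    # extra heat melts ice instead of warming it
    return (heat - L_FUSION) / C_WATER, 1.0

for q in (0, 50, 200, 376, 460):
    t, frac = state(q)
    print(f"Q = {q:3d} J   T = {t:6.1f} °C   melted fraction = {frac:.2f}")
```

The melt fraction here plays the role Luboš assigns to the fraction of the multiverse occupied by each phase: it, rather than the temperature-like parameter, is what the added heat changes at the phase boundary.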

In theoretical physics, we want things like the top quark mass \(m_t\) to be analogous to the temperature \(T\) of the Arctic water. Can we find a similar mechanism in physics that would just explain why the multiple point criticality principle is right?

The easiest way is to take the analogy literally and consider the multiverse. The multiverse may be just like the Arctic Ocean. And parts of it may be analogous to the floating ice, parts of it may be analogous to the water underneath. There could be some analogy of the "heat transfer" that forces something like \(m_t\) to be nearly the same in the nearby parts of the multiverse. But the special values of \(m_t\) that allow several phases may occupy a finite fraction of the multiverse and what is varying in this region isn't \(m_t\) but rather the percentage of the multiverse occupied by the individual phases.

There may be regions of the multiverse where several phases co-exist and several parameters analogous to \(m_t\) appear to be fine-tuned to special values.

I am not sure whether an analysis of this sort may be quantified and embedded into a proper full-blown cosmological model. It would be nice. But maybe the multiverse isn't really needed. It seems to me that at these special values of the parameters where several phases co-exist, the vacuum states could naturally be superpositions of quantum states built on several classically very different configurations. Such a law would make it more likely that the cosmological constant is described by a seesaw mechanism, too.

If it's true and if the multiple-phase special points are favored, it's because of some "attraction of the eigenvalues". If you know random matrix theory, i.e. the statistical theory of many energy levels in the nuclei, you know that the energy levels tend to repel each other. It's because some Jacobian factor is very small in the regions where the energy eigenvalues approach each other. Here, we need the opposite effect. We need the values of parameters such as \(m_t\) to be attracted to the special values where phases may be degenerate.
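The eigenvalue repulsion mentioned here is easy to demonstrate numerically. A minimal sketch (my own illustration, using 2×2 matrices from the Gaussian orthogonal ensemble and NumPy): for uncorrelated levels, about 10% of normalized spacings would fall below 0.1, but repulsion suppresses small spacings to well under 1%.

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(n=100_000):
    """Normalized spacings between the two eigenvalues of random
    2x2 real symmetric (GOE) matrices."""
    a = rng.normal(size=(n, 2, 2))
    h = (a + a.transpose(0, 2, 1)) / 2.0   # symmetrize each matrix
    ev = np.linalg.eigvalsh(h)             # eigenvalues, sorted ascending
    s = ev[:, 1] - ev[:, 0]
    return s / s.mean()                    # rescale to unit mean spacing

s = goe_spacings()
print(f"fraction of spacings below 0.1: {np.mean(s < 0.1):.4f}")
```

The Wigner surmise predicts a density vanishing linearly at zero spacing, which is the small-Jacobian effect described above; the multiple point criticality principle would need the opposite, an enhancement at the degenerate points.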

So maybe even if you avoid any assumption about the existence of any multiverse, you may invent a derivation at the level of the landscape only. We normally assume that the parameter spaces of the low-energy effective field theory (or their parts allowed in the landscape, i.e. those draining the swamp) are covered more or less uniformly by the actual string vacua. We know that this can't quite be right. Sometimes we can't even say what the "uniform distribution" is supposed to look like.

But this assumption of uniformity could be flawed in very specific and extremely interesting ways. It could be that actual string vacua love to be degenerate – "almost equal" superpositions of vacua that look classically very different from each other. In general, there should be some tunneling between the vacua, and the tunneling contributes off-diagonal matrix elements (between different phases) to many parameters describing the low-energy physics of the vacua (coupling constants, the cosmological constant).

And because of the off-diagonal elements, the actual vacua we should find when we're careful aren't actually "straightforward quantum coherent states" built around some classical configurations. But very often, they may like to be superpositions – with non-negligible coefficients – of many phases. If that's so, even the single vacuum – in our visible Universe – could be analogous to the Arctic Ocean in my metaphor and an explanation of the multiple point criticality principle could exist.

If it were right qualitatively, it could be wonderful. One could try to look for a refinement of this Arctic landscape theory – a theory that tries to predict more realistic probability distributions on the low-energy effective field theories' parameter spaces, distributions that are non-uniform and at least morally compatible with the multiple point criticality principle. This kind of reasoning could even lead us to a calculation of some values of the parameters that are much more likely than others – and it could be the right ones which are compatible with our measurements.

A theory of the vacuum selection could exist. I tend to think that this kind of research hasn't been sufficiently pursued partly because of the left-wing bias of the research community. They may be impartial in many ways but the biases often show up even in faraway contexts. Leftists may instinctively think that non-uniform distributions are politically incorrect, so they prefer the uniformity of naturalness or the "typical vacua" in the landscape. I have always felt that these Ansätze are naive and on the wrong track – and that the truth is much closer to their negations. The apparent numerical success of the multiple point criticality principle is another reason to think so.

Note that while we're trying to calculate some non-uniform distributions, the multiple point criticality principle is a manifestation of egalitarianism and multiculturalism from another perspective – because several phases co-exist as almost equal ones. ;-)

by Luboš Motl at August 12, 2017 04:49 PM

August 11, 2017

The n-Category Cafe

Magnitude Homology in Sapporo

John and I are currently enjoying Applied Algebraic Topology 2017 in the city of Sapporo, on the northern Japanese island of Hokkaido.

I spoke about magnitude homology of metric spaces. A central concept in applied topology is persistent homology, which is also a homology theory of metric spaces. But magnitude homology is different.

It was brought into being one year ago on this very blog, principally by Mike Shulman, though Richard Hepworth and Simon Willerton had worked out a special case before. You can read a long post of mine about it from a year ago, which in turn refers back to a very long comments thread of an earlier post.

But for a short account, try my talk slides. They introduce both magnitude itself (including some exciting new developments) and magnitude homology. Both are defined in the wide generality of enriched categories, but I concentrated on the case of metric spaces.

Slide on magnitude homology

Of course, John’s favourite slide was the one shown.

by leinster at August 11, 2017 08:23 PM

The n-Category Cafe

A Graphical Calculus for Proarrow Equipments

guest post by David Myers

Proarrow equipments (which also go by the names “fibrant double categories” and “framed bicategories”) are wonderful and fundamental category-like objects. If categories are the abstract algebras of functions, then equipments are the abstract algebras of functions and relations. They are a fantastic setting to do formal category theory, which you can learn about in Mike’s post about them on this blog!

For my undergraduate thesis, I came up with a graphical calculus for working with equipments. I wasn’t the first to come up with it (if you’re familiar with both string diagrams and equipments, it’s basically the only sort of thing that you’d try), but I did prove it sound using a proof similar to Joyal and Street’s proof of the soundness of the graphical calculus for monoidal categories. You can see the full paper on the arXiv, or see the slides from a talk I gave about it at CT2017 here. Below the fold, I’ll show you the diagrams and a bit of what you can do with them.

What is a Double Category?

A double category is a category internal to the category of categories. Now, this is fun to say, but takes a bit of unpacking. Here is a more elementary definition together with the string diagrams:

Definition: A double category has

  • Objects A, B, C, …, which will be written as bounded plane regions of different colors.
  • Vertical arrows f, …, which we will just call arrows and write as vertical lines, directed downwards, dividing one plane region from another.
  • Horizontal arrows J, K, H, …, which we will just call proarrows and write as horizontal lines dividing one plane region from another.
  • 2-cells, …, which are represented as beads between the arrows and proarrows.

The usual square notation is on the left, and the string diagrams are on the right.

There are two ways to compose 2-cells: horizontally and vertically. These satisfy an interchange law saying that composing horizontally and then vertically is the same as composing vertically and then horizontally.

Note that when we compose 2-cells horizontally, we must compose the vertical arrows; therefore, the vertical arrows form a category. Similarly, when we compose 2-cells vertically, we must compose the horizontal proarrows; therefore, the horizontal proarrows form a category. Except this is not quite true: in most of our examples in the wild, the composition of proarrows will only be associative up to isomorphism, so they will form a bicategory. I'll just hand-wave this away for the rest of the post.

This is about all there is to the graphical calculus for double categories. Any deformation of a double diagram that keeps the vertical arrows vertical and the horizontal proarrows horizontal will describe an equal composite in any double category.

Here are some examples of double categories:

In many double categories that we meet “in the wild”, the arrows will be function-like and the proarrows relation-like. These double categories are called equipments. In these cases, we can turn functions into relations by taking their graphs. This can be realized in the graphical calculus by bending vertical arrows horizontal.

Companions, Conjoints, and Equipments

An arrow has a companion if there is a proarrow together with two 2-cells such that

= and = .

I call these the “kink identities”, because they are reminiscent of the “zig-zag identities” for adjunctions in string diagrams. We can think of the companion of an arrow as its graph, a subset of the product of its domain and codomain.

Similarly, an arrow is said to have a conjoint if there is a proarrow together with two 2-cells such that

= and = .

Definition: A proarrow equipment is a double category where every arrow has a conjoint and a companion.

The prototypical example of a proarrow equipment, and also the reason for the name, is the equipment of categories, functors, profunctors, and profunctor morphisms. In this equipment, companions are the restriction of the hom of the codomain by the functor on the left, and conjoints are the restriction of the hom of the codomain by the functor on the right.

In the equipment with objects sets, arrows functions, and proarrows relations, the companion and conjoint are the graph of a function as a relation from the domain to codomain or from the codomain to domain respectively.

The following lemma is a central elementary result of the theory of equipments:

Lemma (Spider Lemma): In an equipment, we can bend arrows. More formally, there is a bijective correspondence between diagrams of form of the left, and diagrams of the form of the right:

≈ .

Proof. The correspondence is given by composing the outermost vertical or horizontal arrows by their companion or conjoint (co)units, as suggested by the slight bends in the arrows above. The kink identities then ensure that these two processes are inverse to each other, giving the desired bijection.

In his post, Mike calls this the “fundamental lemma”. This is the engine humming under the graphical calculus; in short, the Spider Lemma says that we can bend vertical wires horizontal. We can use this bending to prove a classical result of category theory in a very general setting.

Hom-Set and Zig-Zag Adjunctions

It is a classical fact of category theory that an adjunction f ⊣ g : A ⇄ B may be defined using natural transformations η : id → fg and ε : gf → id (which we will call a zig-zag adjunction, after the coherence conditions they have to satisfy – also called the triangle equations), or by giving a natural isomorphism ψ : B(f, 1) ≅ A(1, g). This equivalence holds in any proarrow equipment, which we can now show quickly and intuitively with string diagrams.

Suppose we have an adjunction <semantics><annotation encoding="application/x-tex">\dashv</annotation></semantics> , given by the vertical cells and , satisfying the zig-zag identities

= and = .

By bending the unit and counit, we get the horizontal cells and . Bending the zig-zag identities shows that these maps are inverse to each other

= = = ,

and are therefore the natural isomorphism ≅ we wanted.

Going the other way, suppose is a natural isomorphism with inverse . That is,

= and = .

Then we can define a unit and counit by bending. These satisfy the zig-zag identities by pulling straight and using (1):

= = = ,

= = = .

Though this proof can be discovered graphically, it specializes to the usual argument in the case that the equipment is an equipment of enriched categories!

And Much, Much More!

In the paper, you’ll find that every deformation of an equipment diagram gives the same composite – the graphical calculus is sound. But you’ll also find an application of the calculus: a “Yoneda-style” embedding of every equipment into the equipment of categories enriched in it. The paper still definitely needs some work, so I welcome any feedback in the comments!

I hope these string diagrams make using equipments easier and more fun.

by john at August 11, 2017 05:23 AM

August 10, 2017

Symmetrybreaking - Fermilab/SLAC

Think FAST

The new Fermilab Accelerator Science and Technology facility at Fermilab looks to the future of accelerator science.

Scientists in laser safety goggles work in a laser lab

Unlike most particle physics facilities, the new Fermilab Accelerator Science and Technology facility (FAST) wasn’t constructed to find new particles or explain basic physical phenomena. Instead, FAST is a kind of workshop—a space for testing novel ideas that can lead to improved accelerator, beamline and laser technologies.

Historically, accelerator research has taken place on machines that were already in use for experiments, making it difficult to try out new ideas. Tinkering with a physicist’s tools mid-search for the secrets of the universe usually isn’t a great idea. By contrast, FAST enables researchers to study pieces of future high-intensity and high-energy accelerator technology with ease.

“FAST is specifically aiming to create flexible machines that are easily reconfigurable and that can be accessed on very short notice,” says Alexander Valishev, head of the department that manages FAST. “You can roll in one experiment and roll the other out in a matter of days, maybe months, without expensive construction and operation costs.”

This flexibility is part of what makes FAST a useful place for training up new accelerator scientists. If a student has an idea, or something they want to study, there’s plenty of room for experimentation.

“We want students to come and do their thesis research at FAST, and we already have a number of students working,” Valishev says. “We have already had a PhD awarded on the basis of work done at FAST, but we want more of that.”

Yellow cryomodule with RF distribution

This yellow cryomodule will house the superconducting cavities that take the beam’s energy from 50 to 300 MeV.

Courtesy of Fermilab

Small ring, bright beam

FAST will eventually include three parts: an electron injector, a proton injector and a particle storage ring called the Integrable Optics Test Accelerator, or IOTA. Although it will be small compared to other rings—only 40 meters long, while Fermilab’s Main Injector has a circumference of 3 kilometers—IOTA will be the centerpiece of FAST after its completion in 2019. And it will have a unique feature: the ability to switch from being an electron accelerator to a proton accelerator and back again.

“The sole purpose of this synchrotron is to test accelerator technology and develop that tech to test ideas and theories to improve accelerators everywhere,” says Dan Broemmelsiek, a scientist in the IOTA/FAST department.

One aspect of accelerator technology FAST focuses on is creating higher-intensity or “brighter” particle beams.

Brighter beams pack a bigger particle punch. A high-intensity beam could send a detector twice as many particles as is usually possible. Such an experiment could be completed in half the time, shortening the data collection period by several years.

IOTA will test a new concept for accelerators called integrable optics, which is intended to create a more concentrated, stable beam, possibly producing higher intensity beams than ever before.

“If this IOTA thing works, I think it could be revolutionary,” says Jamie Santucci, an engineering physicist working on FAST. “It’s going to allow all kinds of existing accelerators to pack in way more beam. More beam, more data.”

Photoelectron gun

The beam starts here: Once electrons are sent down the beamline, they pass through a set of solenoid magnets—the dark blue rings—before entering the first two superconducting cavities.

Courtesy of Fermilab

Maximum energy milestone

Although the completion of IOTA is still a few years away, the electron injector will reach a milestone this summer: producing an electron beam with the energy of 300 million electronvolts (MeV).

“The electron injector for IOTA is a research vehicle in its own right,” Valishev says. It provides scientists a chance to test superconducting accelerators, a key piece of technology for future physics machines that can produce intense acceleration at relatively low power.

“At this point, we can measure things about the beam, chop it up or focus it,” Broemmelsiek says. “We can use cameras to do beam diagnostics, and there’s space here in the beamline to put experiments to test novel instrumentation concepts.”

The electron beam’s previous maximum energy of 50 MeV was achieved by passing the beam through two superconducting accelerator cavities and has already provided opportunities for research. The arrival of the 300 MeV beam this summer—achieved by sending the beam through another eight superconducting cavities—will open up new possibilities for accelerator research, with some experiments already planned to start as soon as the beam is online.
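A quick back-of-the-envelope check of these numbers (assuming, simplistically, that the remaining energy gain is shared equally among the eight additional cavities):

```python
# Energy bookkeeping for the FAST electron injector, per the article:
# 50 MeV from the first two cavities, 300 MeV after eight more.
e_initial, e_final = 50, 300   # MeV
n_new_cavities = 8
gain = (e_final - e_initial) / n_new_cavities
print(f"average gain per additional cavity: {gain} MeV")
```

That works out to roughly 31 MeV per cavity, in line with the high-gradient superconducting technology the facility is meant to exercise.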

Yellow, red and black wires plugged into a device

Electronics for IOTA

Chip Edstrom

FAST forward

The third phase of FAST, once IOTA is complete, will be the construction of the proton injector.

“FAST is unique because we will specifically target creating high-intensity proton beams,” Valishev says.

This high-intensity proton beam research will directly translate to improving research into elusive particles called neutrinos, Fermilab’s current focus.

“In five to 10 years, you’ll be talking to a neutrino guy and they’ll go, ‘I don’t know what the accelerator guys did, but it’s fabulous. We’re getting more neutrinos per hour than we ever thought we would,’” Broemmelsiek says.

Creating new accelerator technology is often an overlooked area in particle physics, but the freedom to try out new ideas and discover how to build better machines for research is inherently rewarding for people who work at FAST.

“Our business is science, and we’re supposed to make science, and we work really hard to do that,” Broemmelsiek says. “But it’s also just plain ol’ fun.”

by Leah Poffenberger at August 10, 2017 01:00 PM

August 09, 2017

Tommaso Dorigo - Scientificblogging

Higgs Decays To Tau Leptons: CMS Sees Them First
I have recently been reproached by colleagues who are members of the competing ATLAS experiment for misusing the word "see" in this blog, in the context of searches for physics signals. That was because I reported that CMS recently produced a very nice result in which we measure the rate of H->bb decays in events where the Higgs boson recoils against an energetic jet; that signal is not statistically significant, so they could argue that CMS did not "see" anything, as I wrote in the blog title. 

read more

by Tommaso Dorigo at August 09, 2017 02:41 PM

August 08, 2017

ZapperZ - Physics and Physicists

Hyperfine Splitting of Anti-Hydrogen Is Just Like Ordinary Hydrogen
More evidence that the antimatter world is practically identical to our regular matter world. The ALPHA collaboration at CERN has reported the first ever measurement of the anti-hydrogen hyperfine spectrum, and it is consistent with that measured for hydrogen.

Now, they have used microwaves to flip the spin of the positron. This resulted not only in the first precise determination of the antihydrogen hyperfine splitting, but also the first antimatter transition line shape, a plot of the spin flip probability versus the microwave frequency.

“The data reveal clear and distinct signatures of two allowed transitions, from which we obtain a direct, magnetic-field-independent measurement of the hyperfine splitting,” the researchers said.

“From a set of trials involving 194 detected atoms, we determine a splitting of 1,420.4 ± 0.5 MHz, consistent with expectations for atomic hydrogen at the level of four parts in 10,000.”
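It's straightforward to check the quoted numbers against the well-known hyperfine splitting of ordinary hydrogen (the 21 cm line, 1420.405751768 MHz):

```python
# Compare ALPHA's antihydrogen hyperfine splitting with ordinary hydrogen.
H_HF = 1420.405751768            # MHz, hydrogen 21 cm line
ANTI_HF, ANTI_ERR = 1420.4, 0.5  # MHz, ALPHA's value and uncertainty

deviation = abs(ANTI_HF - H_HF)  # well inside the quoted uncertainty
relative = ANTI_ERR / H_HF       # the "four parts in 10,000" precision
print(f"deviation: {deviation:.3f} MHz (uncertainty +/- {ANTI_ERR} MHz)")
print(f"relative precision: {relative:.1e}")
```

The deviation is about 0.006 MHz, nearly a hundred times smaller than the quoted 0.5 MHz uncertainty, and 0.5/1420.4 is indeed roughly four parts in ten thousand.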

I am expecting a lot more studies of anti-hydrogen, especially now that they have a very reliable way of sustaining these atoms.

The paper is open access in Nature, so you should be able to read the entire thing for free.


by ZapperZ at August 08, 2017 03:20 PM

Symmetrybreaking - Fermilab/SLAC

A new search for dark matter 6800 feet underground

Prototype tests of the future SuperCDMS SNOLAB experiment are in full swing.

From left: SLAC's Tsuguo Aramaki, Paul Brink and Mike Racine are performing final adjustments to the SuperCDMS SNOLAB engineering

When an extraordinarily sensitive dark matter experiment goes online at one of the world’s deepest underground research labs, the chances are better than ever that it will find evidence for particles of dark matter—a substance that makes up 85 percent of all matter in the universe but whose constituents have never been detected.

The heart of the experiment, called SuperCDMS SNOLAB, will be one of the most sensitive detectors for hypothetical dark matter particles called WIMPs, short for “weakly interacting massive particles.” SuperCDMS SNOLAB is one of two next-generation experiments (the other one being an experiment called LZ) selected by the US Department of Energy and the National Science Foundation to take the search for WIMPs to the next level, beginning in the early 2020s.

“The experiment will allow us to enter completely unexplored territory,” says Richard Partridge, head of the SuperCDMS SNOLAB group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. “It’ll be the world’s most sensitive detector for WIMPs with relatively low mass, complementing LZ, which will look for heavier WIMPs.”  

The experiment will operate deep underground at Canadian laboratory SNOLAB inside a nickel mine near the city of Sudbury, where 6800 feet of rock provide a natural shield from high-energy particles from space, called cosmic rays. This radiation would not only cause unwanted background in the detector; it would also create radioactive isotopes in the experiment’s silicon and germanium sensors, making them useless for the WIMP search. That’s also why the experiment will be assembled from major parts at its underground location.

A detector prototype is currently being tested at SLAC, which oversees the efforts of the SuperCDMS SNOLAB project.

Colder than the universe

The only reason we know dark matter exists is that its gravity pulls on regular matter, affecting how galaxies rotate and light propagates. But researchers believe that if WIMPs exist, they could occasionally bump into normal matter, and these collisions could be picked up by modern detectors.

SuperCDMS SNOLAB will use germanium and silicon crystals in the shape of oversized hockey pucks as sensors for these sporadic interactions. If a WIMP hits a germanium or silicon atom inside these crystals, two things will happen: The WIMP will deposit a small amount of energy, causing the crystal lattice to vibrate, and it’ll create pairs of electrons and electron deficiencies that move through the crystal and alter its electrical conductivity. The experiment will measure both responses. 

“Detecting the vibrations is very challenging,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford. “Even the smallest amounts of heat cause lattice vibrations that would make it impossible to detect a WIMP signal. Therefore, we’ll cool the sensors to about one hundredth of a Kelvin, which is much colder than the average temperature of the universe.”

These chilly temperatures give the experiment its name: CDMS stands for “Cryogenic Dark Matter Search.” (The prefix “Super” indicates that the experiment is more sensitive than previous detector generations.)

The use of extremely cold temperatures will be paired with sophisticated electronics, such as transition-edge sensors that switch from a superconducting state of zero electrical resistance to a normal-conducting state when a small amount of energy is deposited in the crystal, as well as superconducting quantum interference devices, or SQUIDs, that measure these tiny changes in resistance.      

The experiment will initially have four detector towers, each holding six crystals. For each crystal material—silicon and germanium—there will be two different detector types, called high-voltage (HV) and interleaved Z-sensitive ionization phonon (iZIP) detectors. Future upgrades can further boost the experiment’s sensitivity by increasing the number of towers to 31, corresponding to a total of 186 sensors.
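The sensor counts quoted above are internally consistent, as a quick check confirms:

```python
# Detector arithmetic from the article: six crystals per tower.
crystals_per_tower = 6
initial_towers, upgraded_towers = 4, 31
print(initial_towers * crystals_per_tower)   # crystals in the initial setup
print(upgraded_towers * crystals_per_tower)  # sensors after the full upgrade
```

Thirty-one towers of six crystals each indeed give the quoted total of 186 sensors.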

Working hand in hand

The work under way at SLAC serves as a system test for the future SuperCDMS SNOLAB experiment. Researchers are testing the four different detector types, the way they are integrated into towers, their superconducting electrical connectors and the refrigerator unit that cools them down to a temperature of almost absolute zero.

“These tests are absolutely crucial to verify the design of these new detectors before they are integrated in the experiment underground at SNOLAB,” says Ken Fouts, project manager for SuperCDMS SNOLAB at SLAC. “They will prepare us for a critical DOE review next year, which will determine whether the project can move forward as planned.” DOE is expected to cover about half of the project costs, with the other half coming from NSF and a contribution from the Canadian Foundation for Innovation. 

Important work is progressing at all partner labs of the SuperCDMS SNOLAB project. Fermi National Accelerator Laboratory is responsible for the cryogenics infrastructure and the detector shielding—both will enable searching for faint WIMP signals in an environment dominated by much stronger unwanted background signals. Pacific Northwest National Laboratory will lend its expertise in understanding background noise in highly sensitive precision experiments. A number of US universities are involved in various aspects of the project, including detector fabrication, tests, data analysis and simulation.

The project also benefits from international partnerships with institutions in Canada, France, the UK and India. The Canadian partners are leading the development of the experiment’s data acquisition and will provide the infrastructure at SNOLAB. 

“Strong partnerships create a lot of synergy and make sure that we’ll get the best scientific value out of the project,” says Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, which consists of 109 scientists from 22 institutions, including numerous universities. “Universities have lots of creative students and principal investigators, and their talents are combined with the expertise of scientists and engineers at the national labs, who are used to successfully managing and building large projects.”

SuperCDMS SNOLAB will be the fourth generation of experiments, following CDMS-I at Stanford, CDMS-II at the Soudan mine in Minnesota, and a first version of SuperCDMS at Soudan, which completed operations in 2015.   

“Over the past 20 years we’ve been pushing the limits of our detectors to make them more and more sensitive for our search for dark matter particles,” says KIPAC’s Blas Cabrera, project director of SuperCDMS SNOLAB. “Understanding what constitutes dark matter is as fundamental and important today as it was when we started, because without dark matter none of the known structures in the universe would exist—no galaxies, no solar systems, no planets and no life itself.”

by Manuel Gnida at August 08, 2017 01:00 PM

John Baez - Azimuth

Applied Algebraic Topology 2017

In the comments on this blog post I’m taking some notes on this conference:

Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.

Unfortunately these notes will not give you a good summary of the talks—and almost nothing about applications of algebraic topology. Instead, I seem to be jotting down random cool math facts that I’m learning and don’t want to forget.

by John Baez at August 08, 2017 01:50 AM

August 07, 2017

CERN Bulletin


Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

August 07, 2017 04:08 PM

CERN Bulletin

Yoga Club

The Yoga Club's activities resume on 1 September 

Yoga, Sophrology, Tai Chi, Meditation

Are you looking for well-being, serenity,
physical fitness, flexibility of body and mind?

Do you want to reduce your stress? 

Join the Yoga Club! 

Classes every day of the week,
10 different teachers

August 07, 2017 03:08 PM

CERN Bulletin

Cine Club

Wednesday 9 August 2017 at 20.00
CERN Council Chamber

The Fifth Element

Directed by Luc Besson
France, 1997, 126 min

Two hundred and fifty years in the future, life as we know it is threatened by the arrival of Evil. Only the Fifth Element can stop the Evil from extinguishing life, as it tries to do every five thousand years. She is assisted by a former elite commando turned cab driver, Korben Dallas, who is, in turn, helped by Prince/Arsenio clone, Ruby Rhod. Unfortunately, Evil is being assisted by Mr. Zorg, who seeks to profit from the chaos that Evil will bring, and his alien mercenaries.

Original version English; French subtitles

*  *  *  *  *  *  *  *

Wednesday 16 August 2017 at 20.00
CERN Council Chamber

Mad Max 2 - The Road Warrior

Directed by George Miller
Australia, 1982, 94 min

A former Australian policeman now living in the post-apocalyptic Australian outback as a warrior agrees to help a community of survivors living in a gasoline refinery to defend them and their gasoline supplies from evil barbarian warriors.

Original version English; French subtitles

*  *  *  *  *  *  *  *

Wednesday 23 August 2017 at 20.00
CERN Council Chamber

THX 1138

Directed by George Lucas
USA, 1971, 86 min

The human race has been relocated to an underground city beneath the Earth's surface. There, the population is entertained by holographic TV broadcasting sex and violence, and a robotic police force enforces the law. In the underground city, all citizens are drugged to control their emotions and behaviour, and sex is a crime. Factory worker THX 1138 stops taking the drugs and breaks the law when he finds himself falling in love with his room-mate LUH 3417; he is imprisoned when LUH 3417 becomes pregnant. Escaping from jail with illegal programmer SEN 5241 and a hologram named SRT, THX 1138 goes in search of LUH 3417 and escapes to the surface, whilst being pursued by robotic policemen.

Original version English; French subtitles

August 07, 2017 03:08 PM

CERN Bulletin

Offers for our members

Summer is here, enjoy our offers for the water parks!


Tickets "Zone terrestre": 24 € instead of 30 €.

Access to Aqualibi: 5 € instead of 6 € on presentation of your ticket purchased at the Staff Association.

Bonus! Free for children under 100 cm, with limited access to the attractions.

Free car park.

*  *  *  *  *  *  *  *


Day ticket:
-  Children: 33 CHF instead of 39 CHF
-  Adults : 33 CHF instead of 49 CHF

Bonus! Free for children under 5 years old.

August 07, 2017 02:08 PM

CERN Bulletin

Golf Club

Would you like to learn a new sport and meet new people?

The CERN Golf Club organises golf lessons for beginners starting in August or September.

The lesson series consists of six weekly lessons of 1h30 each, in groups of 6 people, given by the instructor Cedric Steinmetz at the Jiva Hill golf course in Crozet.

The cost for the golf lessons is 40 euros for CERN employees or family members plus the golf club membership fee of 30 CHF.

If you are interested in participating in these lessons or need more details, please contact us by email at:

August 07, 2017 02:08 PM

August 05, 2017

The n-Category Cafe

Instantaneous Dimension of Finite Metric Spaces via Magnitude and Spread

In June I went to the following conference.

This was held at the Będlewo Conference Centre which is run by the Polish Academy of Sciences’ Institute of Mathematics. Like Oberwolfach it is kind of in the middle of nowhere, being about half an hour’s bus ride from Poznan. (As our excursion guide told us, Poznan is 300km from anywhere: 300 km from Warsaw, 300 km from Berlin, 300 km from the sea and 300 km from the mountains.) You get to eat and drink in the palace pictured below; the seminar rooms and accommodation are in a somewhat less grand building out of shot of the photo.

Bedlewo palace

I gave a 20-minute, magnitude-related talk. You can download the slides below. Do try the BuzzFeed-like quiz at the end. How many of the ten spaces can you identify just from their dimension profile?

To watch the animation I think that you will have to use Acrobat Reader. If you don't want to use that, there's a movie-free version.

Here’s the abstract.

Some spaces seem to have different dimensions at different scales. A long thin strip might appear one-dimensional at a distance, then two-dimensional when zoomed in on, but when zoomed in on even closer it is seen to be made of a finite array of points, so at that scale it seems zero-dimensional. I will present a way of quantifying this phenomenon.

The main idea is to think of dimension as corresponding to growth rate of size: when you double distances, a line will double in size and a square will quadruple in size. You then just need some good notions of size of metric spaces. One such notion is ‘magnitude’, which was introduced by Leinster using category-theoretic ideas, but was found to have links to many other areas of maths such as biodiversity and potential theory. There’s a closely related, but computationally more tractable, family of notions of size called ‘spreads’ which I introduced following connections with biodiversity.

Meckes showed that the asymptotic growth rate of the magnitude of a metric space is the Minkowski dimension (i.e. the usual dimension for squares and lines and the usual fractal dimension for things like Cantor sets). But this is zero for finite metric spaces. However, by considering growth rate non-asymptotically you get interesting looking results for finite metric spaces, such as the phenomenon described in the first paragraph.

I have blogged about instantaneous dimension before at this post. One connection with applied topology is that, as for persistent homology, one is considering what happens to a metric space as you scale the metric.

The talk was in the smallest room of three parallel talks, so I had a reasonably small audience. However, it was very nice that almost everyone who was in the talk came up and spoke to me about it afterwards; some even told me how I could calculate magnitude of large metric spaces much faster! For instance Brad Nelson showed me how you can use iterative methods, such as the Krylov subspace method, for solving large linear systems numerically. This is much faster than just naively asking Maple to solve the linear system.
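The linear-system computation referred to above can be sketched in a few lines (a minimal illustration of the standard weighting definition of magnitude, not code from the talk; the `instantaneous_dimension` helper and the strip example are my own naming):

```python
import numpy as np

def magnitude(D, t):
    """Magnitude of the finite metric space with distance matrix D, scaled by t.

    Form the similarity matrix Z_ij = exp(-t * d(i, j)) and solve Z w = 1;
    the magnitude |tX| is the sum of the weighting w (equivalently, the sum
    of the entries of Z^{-1} -- this solve is what iterative Krylov methods
    can speed up for large spaces).
    """
    Z = np.exp(-t * np.asarray(D, dtype=float))
    w = np.linalg.solve(Z, np.ones(len(Z)))
    return w.sum()

def instantaneous_dimension(D, t, h=1e-4):
    """Instantaneous dimension at scale t: the growth rate
    d(log |tX|) / d(log t), estimated by a central difference."""
    up = np.log(magnitude(D, t * np.exp(h)))
    down = np.log(magnitude(D, t * np.exp(-h)))
    return (up - down) / (2 * h)

# A 50 x 2 grid of unit-spaced points: a "long thin strip".
pts = np.array([(i, j) for i in range(50) for j in range(2)], dtype=float)
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Sweeping the scale shows the dimension profile: at very large t the
# points resolve individually and the dimension drops towards zero.
for t in (0.01, 1.0, 100.0):
    print(t, instantaneous_dimension(D, t))
```

For a sanity check, the two-point space with separation \(d\) has magnitude \(2/(1+e^{-td})\), which the solve reproduces exactly.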

Anyway, do say below how well you did in the quiz!

by willerton ( at August 05, 2017 05:31 PM

Clifford V. Johnson - Asymptotia

The Big USC News You Haven’t Heard…

So here's some big USC news that you're probably not hearing about elsewhere. I think it's the best thing that's happened on campus for a long time, and it's well worth noting. As of today (4th August, when I wrote this), there's a Trader Joe's on campus!

It opened (relatively quietly) today and I stopped by on my way home to pick up a few things - something I've fantasized about doing for some time. It's a simple thing but it's also a major thing in my opinion. Leaving aside the fact that I can now sometimes get groceries on the way home (with a subway stop just a couple of blocks away) - and also now more easily stock up my office with long workday essentials like Scottish shortbread and sardines in olive oil, there's another reason this is big news. This part of the city (and points south) simply doesn't have as many good options (when it comes to healthy food) as other parts of the city. It is still big news when a grocery store like this opens south of the 10 freeway. In fact, apart from over on the West side (where the demographic changes significantly), there were *no* Trader Joe's stores south of the 10 until this one opened today**. (Yes, in 2017 - I can wait while you check your calendar.) I consider this at least as significant as (if not more significant than) the Whole Foods opening in downtown at [...] Click to continue reading this post

The post The Big USC News You Haven’t Heard… appeared first on Asymptotia.

by Clifford at August 05, 2017 01:25 PM

The n-Category Cafe

The Rise and Spread of Algebraic Topology

People have been using algebraic topology in data analysis these days, so we’re starting to see conferences like this:

I’m giving the first talk at this one. I’ve done a lot of work on applied category theory, but only a bit on applied algebraic topology. It was tempting to smuggle in some categories, operads and props under the guise of algebraic topology. But I decided it would be more useful, as a kind of prelude to the conference, to say a bit about the overall history of algebraic topology, and its inner logic: how it was inevitably driven to categories, and then 2-categories, and then ∞-categories.

This may be the least ‘applied’ of all the talks at this conference, but I’m hoping it will at least trigger some interesting thoughts. We don’t want the ‘applied’ folks to forget the grand view that algebraic topology has to offer!

Here are my talk slides:

Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as ‘the same’ if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of ‘robustness’ in applications will influence algebraic topology.

I thank Mike Shulman for some help with model categories and quasicategories. Any mistakes are, of course, my own fault.

by john ( at August 05, 2017 08:12 AM

John Baez - Azimuth

The Rise and Spread of Algebraic Topology

People have been using algebraic topology in data analysis these days, so we’re starting to see conferences like this:

Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.

I’m giving the first talk at this one. I’ve done a lot of work on applied category theory, but only a bit on applied algebraic topology. It was tempting to smuggle in some categories, operads and props under the guise of algebraic topology. But I decided it would be more useful, as a kind of prelude to the conference, to say a bit about the overall history of algebraic topology, and its inner logic: how it was inevitably driven to categories, and then 2-categories, and then ∞-categories.

This may be the least ‘applied’ of all the talks at this conference, but I’m hoping it will at least trigger some interesting thoughts. We don’t want the ‘applied’ folks to forget the grand view that algebraic topology has to offer!

Here are my talk slides:

The rise and spread of algebraic topology.

Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as ‘the same’ if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of ‘robustness’ in applications will influence algebraic topology.

I thank Mike Shulman for some help with model categories and quasicategories. Any mistakes are, of course, my own fault.

by John Baez at August 05, 2017 07:53 AM

August 04, 2017

Clifford V. Johnson - Asymptotia

Future Crowds…

Yeah, I still hate doing crowd scenes. (And the next panel is an even wider shot. Why do I do this to myself?)

Anyway, this is a glimpse of the work I'm doing on the final colour for a short science fiction story I wrote and drew for an anthology collection to appear soon. I mentioned it earlier. (Can't say more yet because it's all hush-hush still, involving lots of fancy writers I've really no business keeping company with.) I've [...] Click to continue reading this post

The post Future Crowds… appeared first on Asymptotia.

by Clifford at August 04, 2017 08:30 PM

Lubos Motl - string vacua and pheno

T2K: a two-sigma evidence supporting CP-violation in neutrino sector
Let me write a short blog post by a linker, not a thinker:
T2K presents hint of CP violation by neutrinos
The strange acronym T2K stands for Tokai to Kamioka. So the T2K experiment is located in Japan but the collaboration is heavily multi-national. It works much like the older K2K, KEK to Kamioka. Indeed, it's no coincidence that Kamioka sounds like Kamiokande. Average Japanese people probably tend to know the former, average physicists tend to know the latter. ;-)

Dear physicists, Kamiokande was named after Kamioka, not vice versa! ;-)

Muon neutrinos are created at the source.

These muon neutrinos go under ground through 295 kilometers of rock and they have the opportunity to change themselves into electron neutrinos.

In 2011, T2K claimed evidence for neutrino oscillations powered by \(\theta_{13}\), the last and least "usual" real angle in the mixing matrix. In Summer 2017, we still believe that this angle is nonzero, like the other two, \(\theta_{12}\) and \(\theta_{23}\), and F-theory, a version of string theory, had predicted its approximate magnitude rather correctly.

In 2013, they found more than 7-sigma evidence for electron-muon neutrino oscillations and received a Breakthrough Prize for that.

By some physical and technical arrangements, they are able to look at the oscillations of antineutrinos as well and measure all the processes. The handedness (left-handed or right-handed) of the neutrinos we know is correlated with their being neutrinos or antineutrinos. But this correlation makes it possible to conserve the CP-symmetry. If you replace neutrinos with antineutrinos and reflect all the reality and images in the mirror, so that left-handed become right-handed, the allowed left-handed neutrinos become the allowed right-handed antineutrinos so everything is fine.

But we know that the CP-symmetry is also broken by elementary particles in Nature – even though the spectrum of known particles and their allowed polarizations doesn't make this breaking unavoidable. The only experimentally confirmed source of CP-violation we know is the complex phase in the CKM matrix describing the relationship between upper-type and lower-type quark mass eigenstates.

Well, T2K has done some measurement and they have found some two-sigma evidence – deviation from the CP-symmetric predictions – supporting the claim that a similar CP-violating phase \(\delta_{CP}\), or another CP-violating effect, is nonzero even in the neutrino sector. So if it's true, the neutrinos' masses are qualitatively analogous to the quark masses. They have all the twists and phases and violations of naive symmetries that are allowed by the basic consistency.
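For reference, the phase enters the lepton mixing matrix the same way the CKM phase enters for quarks; in the standard PDG-style parametrization (with \(c_{ij}=\cos\theta_{ij}\), \(s_{ij}=\sin\theta_{ij}\)):

\[
U_{\rm PMNS} = \begin{pmatrix}
c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta_{CP}} \\
-s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta_{CP}} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta_{CP}} & s_{23} c_{13} \\
s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta_{CP}} & -c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta_{CP}} & c_{23} c_{13}
\end{pmatrix}
\]

CP is conserved in oscillations exactly when \(\delta_{CP}=0\) or \(\pi\) (possible Majorana phases aside), so a nonzero \(\sin\delta_{CP}\) is precisely the kind of effect T2K is probing.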

Needless to say, the two-sigma evidence is very weak. Most such "weak caricatures of a discovery" eventually turn out to be coincidences and flukes. If they managed to collect 10 times more data and the two-sigma deviation would really follow from a real effect, a symmetry breaking, then it would be likely enough to discover the CP-violation in the neutrino sector at 5 sigma – which is considered sufficient evidence for experimental physicists to brag, get drunk, scream "discovery, discovery", accept a prize, and get drunk again (note that the 5-sigma process has 5 stages).

Ivan Mládek, Japanese [people] in [Czech town of] Jablonec, "Japonci v Jablonci". Japanese men are walking through a Jablonec bijou exhibition and buying corals for the government and the king. The girl sees that one of them has a crush on her. He gives her corals and she's immediately his. I don't understand it, you, my Japanese boy, even though you are not a man of Jablonec, I will bring you home. I will feed you nicely, to turn you into a man, and I won't let you leave to any Japan after that again. Visual arts by fifth-graders.

So while I think that most two-sigma claims ultimately fade away, this particular candidate for a discovery sounds mundane enough so that it could be true and 2 sigma could be enough for you to believe it is true. Theoretically speaking, there is no good reason to think that the complex phase should be absent in the neutrino sector. If quarks and leptons differ in such aspects, I think that neutrinos tend to have larger and more generic angles than the quarks, not vice versa.

by Luboš Motl ( at August 04, 2017 05:34 PM

ZapperZ - Physics and Physicists

First Observation of Neutrinos Bouncing Off Atomic Nucleus
An amazing feat out of Oak Ridge.

And it’s really difficult to detect these gentle interactions. Collar’s group bombarded their detector with trillions of neutrinos per second, but over 15 months, they only caught a neutrino bumping against an atomic nucleus 134 times. To block stray particles, they put 20 feet of steel and a hundred feet of concrete and gravel between the detector and the neutrino source. The odds that the signal was random noise is less than 1 in 3.5 million—surpassing particle physicists’ usual gold standard for announcing a discovery. For the first time, they saw a neutrino nudge an entire atomic nucleus.
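The quoted "1 in 3.5 million" is the one-sided Gaussian tail probability behind the conventional discovery threshold, which can be checked with the Python standard library (my own illustration, not from the article):

```python
from statistics import NormalDist

# Quoted odds that the signal was random noise: about 1 in 3.5 million.
p = 1 / 3.5e6

# Convert the tail probability to the equivalent number of standard
# deviations ("sigma") of a normal distribution -- the convention behind
# the particle physicists' 5-sigma gold standard mentioned above.
sigma = NormalDist().inv_cdf(1 - p)
print(f"about {sigma:.2f} sigma")
```

Running this gives a value right at the 5-sigma threshold, matching the article's claim.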

Currently, the entire paper is available from the Science website.


by ZapperZ ( at August 04, 2017 12:58 AM

August 03, 2017

Lubos Motl - string vacua and pheno

Dark Energy Survey rivals the accuracy of Planck
Yesterday, the Fermilab brought us the press release
Dark Energy Survey reveals most accurate measurement of dark matter structure in the universe
celebrating a new result by the Dark Energy Survey (DES), a multinational collaboration studying dark matter and dark energy using a telescope in Chile, at an altitude of 2,200 meters.

DES wants to produce similar results as Planck – the modern sibling of WMAP and COBE, a satellite that studies the cosmic microwave background temperature in various directions very finely – but its method is very different. The DES telescope looks at things in the infrared – but it is looking at "regular things" such as the number of galaxy clusters, weak gravitational lensing, type IA supernovae, and baryon acoustic oscillations.

It sounds incredible to me but the DES transnational team is capable of detecting tiny distortions of the images of distant galaxies that are caused by gravitational lensing and by measuring how much distortion there is in a given direction, they determine the density of dark matter in that direction.

At the end, they determine some of the same cosmological parameters as Planck, e.g. that dark energy makes up about 70 percent of the energy density of our Universe on average. And especially if you focus on a two-dimensional plane, you may see a slight disagreement between Planck's measurement based on the CMB and "pretty much all other methods" to measure the cosmological parameters.

Planck (the blue blob) implies a slightly higher fraction of the matter in the Universe, perhaps 30-40 percent, and a slightly higher clumpiness of matter than DES, whose fraction of the matter is between 24-30 percent. Meanwhile, all the measurements aside from the truly historically "pure CMB" Planck measurement – which includes DES and Planck's own analysis of clusters – seem to be in a better agreement with each other.

So it's disappointing that cosmology still allows us to measure the fraction of matter just as "something between 25 and 40 percent or so" – the accuracy is lousier than we used to say. On the other hand, the disagreement is just 1.4-2.3 sigma, depending on what is exactly considered and how. This is a very low signal-to-noise ratio – the disagreement is very far from a discovery (we often like 5 sigma).

More importantly, even if the disagreement could be calculated to be 4 sigma or something like that, what's troubling is that such a disagreement gives us almost no clue about "how we should modify our standard cosmological model" to improve the fit. An extra sterile neutrino could be the thing we need. Or some cosmic strings added to the Universe. Or a modified profile for some galactic dark matter. But maybe some holographic MOND-like modification of gravity is desirable. Or a different model of dark energy – some variable cosmological constant. Or something totally different – if you weren't impressed by the fundamental diversity of the possible explanations I have mentioned.

The disagreement in one or two parameters is just way too little information to give us (by us, I mean the theorists) useful clues. So even if I can imagine that in some distant future, perhaps in the year 2200, people will already agree that our model of the cosmological constant was seriously flawed in some way I can't imagine now, the observations provide us with no guide telling us where we should go from here.

Aside from the DES telescope, Chile has similar compartments and colors on their national flag as Czechia and they also have nice protocol pens with pretty good jewels that every wise president simply has to credibly appreciate. When I say "credibly", it means not just by words and clichés but by acts, too.

So even if the disagreement were 4 sigma, I just wouldn't switch to a revolutionary mode – partly because the statistical significance isn't quite persuasive, partly because I don't know what kind of a revolution I should envision or participate in.

That's why I prefer to interpret the result of DES as something that isn't quite new or ground-breaking but that still shows how nontrivially we understand the life of the Universe that has been around for 13.800002017 ;-) billion years so far and how very different ways to interpret the fields in the Universe seem to yield (almost) the same outcome.

You may look for some interesting relevant tweets by cosmologist Shaun Hotchkiss.

by Luboš Motl ( at August 03, 2017 04:03 PM

Symmetrybreaking - Fermilab/SLAC

Our clumpy cosmos

The Dark Energy Survey reveals the most accurate measurement of dark matter structure in the universe.

Milky Way galaxy rising over the Dark Energy Camera in Chile

Imagine planting a single seed and, with great precision, being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right.

If you think of the seed as the early universe, and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. In a presentation today at the American Physical Society Division of Particles and Fields meeting at the US Department of Energy’s (DOE) Fermi National Accelerator Laboratory, DES scientists will unveil the most accurate measurement ever made of the present large-scale structure of the universe.

These measurements of the amount and “clumpiness” (or distribution) of dark matter in the present-day cosmos were made with a precision that, for the first time, rivals that of inferences from the early universe by the European Space Agency’s orbiting Planck observatory. The new DES result (the tree, in the above metaphor) is close to “forecasts” made from the Planck measurements of the distant past (the seed), allowing scientists to understand more about the ways the universe has evolved over 14 billion years.

“This result is beyond exciting,” says Scott Dodelson of Fermilab, one of the lead scientists on this result. “For the first time, we’re able to see the current structure of the universe with the same clarity that we can see its infancy, and we can follow the threads from one to the other, confirming many predictions along the way.”

Most notably, this result supports the theory that 26 percent of the universe is in the form of mysterious dark matter and that space is filled with an also-unseen dark energy, which makes up 70 percent and is causing the accelerating expansion of the universe.

Paradoxically, it is easier to measure the large-scale clumpiness of the universe in the distant past than it is to measure it today. In the first 400,000 years following the Big Bang, the universe was filled with a glowing gas, the light from which survives to this day. Planck’s map of this cosmic microwave background radiation gives us a snapshot of the universe at that very early time. Since then, the gravity of dark matter has pulled mass together and made the universe clumpier over time. But dark energy has been fighting back, pushing matter apart. Using the Planck map as a start, cosmologists can calculate precisely how this battle plays out over 14 billion years.

“The DES measurements, when compared with the Planck map, support the simplest version of the dark matter/dark energy theory,” says Joe Zuntz, of the University of Edinburgh, who worked on the analysis. “The moment we realized that our measurement matched the Planck result within 7 percent was thrilling for the entire collaboration.”

map of dark matter is made from gravitational lensing measurements of 26 million galaxies

This map of dark matter is made from gravitational lensing measurements of 26 million galaxies in the Dark Energy Survey. The map covers about 1/30th of the entire sky and spans several billion light-years in extent. Red regions have more dark matter than average, blue regions less dark matter.

Chihway Chang of the Kavli Institute for Cosmological Physics at the University of Chicago and the DES collaboration.

The primary instrument for DES is the 570-megapixel Dark Energy Camera, one of the most powerful in existence, able to capture digital images of light from galaxies eight billion light-years from Earth. The camera was built and tested at Fermilab, the lead laboratory on the Dark Energy Survey, and is mounted on the National Science Foundation’s 4-meter Blanco telescope, part of the Cerro Tololo Inter-American Observatory in Chile, a division of the National Optical Astronomy Observatory. The DES data are processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Scientists on DES are using the camera to map an eighth of the sky in unprecedented detail over five years. The fifth year of observation will begin in August. The new results released today draw from data collected only during the survey’s first year, which covers 1/30th of the sky.

“It is amazing that the team has managed to achieve such precision from only the first year of their survey,” says National Science Foundation Program Director Nigel Sharp. “Now that their analysis techniques are developed and tested, we look forward with eager anticipation to breakthrough results as the survey continues.”

DES scientists used two methods to measure dark matter. First, they created maps of galaxy positions as tracers, and second, they precisely measured the shapes of 26 million galaxies to directly map the patterns of dark matter over billions of light-years using a technique called gravitational lensing.

To make these ultra-precise measurements, the DES team developed new ways to detect the tiny lensing distortions of galaxy images, an effect not even visible to the eye, enabling revolutionary advances in understanding these cosmic signals. In the process, they created the largest guide to spotting dark matter in the cosmos ever drawn (see image). The new dark matter map is 10 times the size of the one DES released in 2015 and will eventually be three times larger than it is now.

“It’s an enormous team effort and the culmination of years of focused work,” says Erin Sheldon, a physicist at the DOE’s Brookhaven National Laboratory, who co-developed the new method for detecting lensing distortions.

These results and others from the first year of the Dark Energy Survey will be released today online and announced during a talk by Daniel Gruen, NASA Einstein fellow at the Kavli Institute for Particle Astrophysics and Cosmology at DOE’s SLAC National Accelerator Laboratory, at 5 pm Central time. The talk is part of the APS Division of Particles and Fields meeting at Fermilab and will be streamed live.

The results will also be presented by Kavli fellow Elisabeth Krause of the Kavli Insitute for Particle Astrophysics and Cosmology at SLAC at the TeV Particle Astrophysics Conference in Columbus, Ohio, on Aug. 9; and by Michael Troxel, postdoctoral fellow at the Center for Cosmology and AstroParticle Physics at Ohio State University, at the International Symposium on Lepton Photon Interactions at High Energies in Guanzhou, China, on Aug. 10. All three of these speakers are coordinators of DES science working groups and made key contributions to the analysis.

“The Dark Energy Survey has already delivered some remarkable discoveries and measurements, and they have barely scratched the surface of their data,” says Fermilab Director Nigel Lockyer. “Today’s world-leading results point forward to the great strides DES will make toward understanding dark energy in the coming years.”

A version of this article was published by Fermilab.

August 03, 2017 02:37 PM

August 02, 2017

ZapperZ - Physics and Physicists

RHIC Sees Another First
The quark-gluon plasma created at Brookhaven's Relativistic Heavy Ion Collider (RHIC) continues to produce a rich body of information. They have now announced that the quark-gluon plasma has produced the most rapidly-spinning fluid ever produced.

Collisions with heavy ions—typically gold or lead—put lots of protons and neutrons in a small volume with lots of energy. Under these conditions, the neat boundaries of those particles break down. For a brief instant, quarks and gluons mingle freely, creating a quark-gluon plasma. This state of matter has not been seen since an instant after the Big Bang, and it has plenty of unusual properties. "It has all sorts of superlatives," Ohio State physicist Mike Lisa told Ars. "It is the most easily flowing fluid in nature. It's highly explosive, much more than a supernova. It's hotter than any fluid that's known in nature."
We can now add another superlative to the quark-gluon plasma's list of "mosts:" it can be the most rapidly spinning fluid we know of. Much of the study of the material has focused on the results of two heavy ions smacking each other head-on, since that puts the most energy into the resulting debris, and these collisions spit the most particles out. But in many collisions, the two ions don't hit each other head-on—they strike a more glancing blow.

It is a fascinating article, and you may read the significance of this study, especially in relation to how it informs us on certain aspect of QCD symmetry.

But if you know me, I never fail to try to point something out that is more general in nature, and something that the general public should take note of. I like this statement in the article very much, and I'd like to highlight it here:

But a logical "should" doesn't always equal a "does," so it's important to confirm that the resulting material is actually spinning. And that's a rather large technical challenge when you're talking about a glob of material roughly the same size as an atomic nucleus.

This is what truly distinguishes science from other aspects of our lives. There are many instances, especially in politics, social policies, etc., where certain assertions are made and appear to be "obvious" or "logical", and yet these are simply statements made without any valid evidence to support them. I can think of many ("Illegal immigrants taking away jobs", or "gay marriages undermine traditional marriages", etc...etc). Yet, no matter how "logical" these may appear to be, they are simply statements that are devoid of evidence to support them. Still, whenever they are uttered, many in the public accept them as FACTS or valid, without seeking or requiring evidence to support them. One may believe that "A should cause B", but DOES IT REALLY?

Luckily, this is NOT how it is done in science. No matter how obvious something seems, or how well verified it is, there are always new boundaries to push and ideas to retest, even ones known to be true under certain conditions. And experimental evidence is the ONLY standard that will settle and verify any assertion.

This is why everyone should learn science, not just for the material, but to understand the methodology and technique. It is too bad they don't require politicians to have such skills.


by ZapperZ at August 02, 2017 10:45 PM

August 01, 2017

Symmetrybreaking - Fermilab/SLAC

Tuning in for science

The sprawling Square Kilometer Array radio telescope hunts signals from one of the quietest places on Earth.

133 dishes across the Great Karoo, 400,000 square kilometers

When you think of radios, you probably think of noise. But the primary requirement for building the world’s largest radio telescope is keeping things almost perfectly quiet.

Radio signals are constantly streaming to Earth from a variety of sources in outer space. Radio telescopes are powerful instruments that can peer into the cosmos—through clouds and dust—to identify those signals, picking them up like a signal from a radio station. To do it, they need to be relatively free from interference emitted by cell phones, TVs, radios and their kin.

That’s one reason the Square Kilometer Array is under construction in the Great Karoo, 400,000 square kilometers of arid, sparsely populated South African plain, along with a component in the Outback of Western Australia. The Great Karoo is also a prime location because of its high altitude—radio waves can be absorbed by atmospheric moisture at lower altitudes. SKA currently covers some 1320 square kilometers of the landscape.

Even in the Great Karoo, scientists need careful filtering of environmental noise. Effects from different levels of radio frequency interference (RFI) can range from “blinding” to actually damaging the instruments. Through South Africa’s Astronomy Geographic Advantage Act, SKA is working toward “radio protection,” which would dedicate segments of the bandwidth for radio astronomy while accommodating other private and commercial RF service requirements in the region.

“Interference affects observational data and makes it hard and expensive to remove or filter out the introduced noise,” says Bernard Duah Asabere, Chief Scientist of the Ghana team of the African Very Long Baseline Interferometry Network (African VLBI Network, or AVN), one of the SKA collaboration groups in eight other African nations participating in the project.

SKA “will tackle some of the fundamental questions of our time, ranging from the birth of the universe to the origins of life,” says SKA Director-General Philip Diamond. Among the targets: dark energy, Einstein’s theory of gravity and gravitational waves, and the prevalence of the molecular building blocks of life across the cosmos.

SKA-South Africa can detect radio spectrum frequencies from 350 megahertz to 14 gigahertz. Its partner Australian component will observe the lower-frequency scale, from 50 to 350 megahertz. Visible light, for comparison, has frequencies ranging from 400 to 800 million megahertz. SKA scientists will process radiofrequency waves to form a picture of their source.
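For a sense of scale, those band edges translate into wavelengths via lambda = c/f. A quick back-of-the-envelope sketch (the frequencies are the ones quoted above; the helper function is just for illustration):

```python
# Convert the quoted SKA band edges to wavelengths (lambda = c / f).
C = 299_792_458.0  # speed of light in m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

for label, freq in [("SKA-South Africa, low edge (350 MHz)", 350e6),
                    ("SKA-South Africa, high edge (14 GHz)", 14e9),
                    ("SKA Australia, low edge (50 MHz)", 50e6)]:
    print(f"{label}: {wavelength_m(freq):.3f} m")
# 350 MHz -> ~0.857 m, 14 GHz -> ~0.021 m, 50 MHz -> ~5.996 m
```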

A precursor instrument to SKA, called MeerKAT (named for the squirrel-sized critters indigenous to the area), is under construction in the Karoo. This array of 16 dishes in South Africa achieved first light on June 19, 2016. MeerKAT focused on 0.01 percent of the sky for 7.5 hours and saw 1300 galaxies—nearly double the number previously known in that segment of the cosmos. 

Since then, MeerKAT has reached another milestone: 32 antennas are now fully integrated and are being commissioned for science operations. MeerKAT will reach its full array of 64 dishes early next year, making it one of the world’s premier radio telescopes. It will eventually be folded into SKA Phase 1, for which an additional 133 dishes will be built, bringing the total number of antennas for SKA Phase 1 in South Africa to 197 by 2023.

On completion of SKA Phase 2 by 2030, the detection area of the receiver dishes will exceed 1 square kilometer, or about 11,000,000 square feet. Its huge size will make it 50 times more sensitive than any other radio telescope, and it is expected to operate for 50 years.

SKA is managed by a 10-nation consortium, including the UK, China, India and Australia as well as South Africa, and receives support from another 10 countries, including the US. The project is headquartered at Jodrell Bank Observatory in the UK.

The full SKA will use radio dishes across Africa and Australia, and collaboration members say it will have a farther reach and more detailed images than any existing radio telescope.

In preparation for the SKA, South Africa and its partner countries developed AVN to establish a network of radiotelescopes across the African continent. One of its projects is the refurbishing of redundant 30-meter-class antennas, or building new ones across the partner countries, to operate as networked radio telescopes.

The first project of its kind is the AVN Ghana project, where an idle 32-meter diameter dish has been refurbished and revamped with a dual receiver system at 5 and 6.7 gigahertz central frequencies for use as a radio telescope. The dish was previously owned and operated by the government and the company Vodafone Ghana as a telecommunications facility. Now it will explore celestial objects such as extragalactic nebulae, pulsars and other RF sources in space, including masers in molecular clouds.

Asabere’s group will be able to tap into areas of SKA’s enormous database (several supercomputers’ worth) over the Internet. So will groups in Botswana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. SKA is also offering extensive outreach in participating countries and has already awarded 931 scholarships, fellowships and grants.

Other efforts in Ghana include introducing astronomy in the school curricula, training students in astronomy and related technologies, doing outreach in schools and universities, receiving visiting students at the telescope site and hosting programs such as the West African International Summer School for Young Astronomers taking place this week.

Asabere, who earned his advanced degrees in Sweden (Chalmers University of Technology) and South Africa (University of Johannesburg), would like to see more students trained in Ghana and would like to get more researchers on board. He also hopes for the construction of the needed infrastructure, more local and foreign partnerships and strong governmental backing.

“I would like the opportunity to practice my profession on my own soil,” he says.

That day might not be far beyond the horizon. The Leverhulme-Royal Society Trust and Newton Fund in the UK are co-funding extensive human capital development programs in the SKA-AVN partner countries. A seven-member Ghanaian team, for example, has undergone training in South Africa and has been instructed in all aspects of the project, including the operation of the telescope. 

Several PhD students and one MSc student from Ghana have received SKA-SA grants to pursue further education in astronomy and engineering. The Royal Society has awarded funding in collaboration with Leeds University to train two PhDs and 60 young aspiring scientists in the field of astrophysics.

Based on the success of the Leverhulme-Royal Society program, a joint UK-South Africa Newton Fund intervention (DARA—the Development in Africa with Radio Astronomy) has since been initiated in other partner countries to grow high technology skills that could lead to broader economic development in Africa. 

As SKA seeks answers to complex questions over the next five decades, there should be plenty of opportunities for science throughout the Southern Hemisphere. Though it lives in one of the quietest places, SKA hopes to be heard loud and clear.

by Mike Perricone at August 01, 2017 01:50 PM

July 31, 2017

Symmetrybreaking - Fermilab/SLAC

An underground groundbreaking

A physics project kicks off construction a mile underground.

Fourteen shovelers mark the start of LBNF construction.

For many government officials, groundbreaking ceremonies are probably old hat—or old hardhat. But how many can say they’ve been to a groundbreaking that’s nearly a mile underground?

A group of dignitaries, including a governor and four members of Congress, now have those bragging rights. On July 21, they joined scientists and engineers 4850 feet beneath the surface at the Sanford Underground Research Facility to break ground on the Long-Baseline Neutrino Facility (LBNF).

LBNF will house massive, four-story-high detectors for the Deep Underground Neutrino Experiment (DUNE) to learn more about neutrinos—invisible, almost massless particles that may hold the key to how the universe works and why matter exists.  Fourteen shovels full of dirt marked the beginning of construction for a project that could be, well, groundbreaking.

The Sanford Underground Research Facility in Lead, South Dakota resides in what was once the deepest gold mine in North America, which has been repurposed as a place for discovery of a different kind.

“A hundred years ago, we mined gold out of this hole in the ground. Now we’re going to mine knowledge,” said US Representative Kristi Noem of South Dakota in an address at the groundbreaking.

Transforming an old mine into a lab is more than just a creative way to reuse space. On the surface, cosmic rays from the sun constantly bombard us, causing cosmic noise in the sensitive detectors scientists use to look for rare particle interactions. But underground, shielded by nearly a mile of rock, there’s cosmic quiet. Cosmic rays are rare, making it easier for scientists to see what’s going on in their detectors without being clouded by interference.

Going down?

It may be easier to analyze data collected underground, but entering the subterranean science facility can be a chore. Nearly 60 people took a trip underground to the groundbreaking site, requiring some careful elevator choreography.

Before venturing into the deep below, reporters and representatives alike donned safety glasses, hardhats and wearable flashlights. They received two brass tags engraved with their names—one to keep and another to hang on a corkboard—a process called “brassing in.” This helps keep track of who’s underground in case of emergency.

The first group piled into the open-top elevator, known as a cage, to begin the descent. As the cage glides through a mile of mountain, it’s easy to imagine what it must have been like to be a miner back when Sanford Lab was the Homestake Mine. What’s waiting below may have changed, but the method of getting there hasn’t: The winch lowering the cage at 500 feet a minute is 80 years old and still works perfectly.

The ride to the 4850-level takes about 10 minutes in the cramped cage—it fits 35, but even with 20 people it feels tight. Water drips in through the ceiling as the open elevator chugs along, occasionally passing open mouths in the rock face of drifts once mined for gold.

 “When you go underground, you start to think ‘It has never rained in here. And there’s never been daylight,’” says Tim Meyer, Chief Operating Officer of Fermilab, who attended the groundbreaking. “When you start thinking about being a mile below the surface, it just seems weird, like you’re walking through a piece of Swiss cheese.”

Where the cage stops at the 4850-level would be the destination of most elevator occupants on a normal day, since the shaft ends near the entrance of clean research areas housing Sanford Lab experiments. But for the contingent traveling to the future site of LBNF/DUNE on the other end of the mine, the journey continued, this time in an open-car train. It’s almost like a theme-park ride as the motor (as it’s usually called by Sanford staff) clips along through a tunnel, but fortunately, no drops or loop-the-loops are involved.

“The same rails now used to transport visitors and scientists were once used by the Homestake miners to remove gold from the underground facility,” says Jim Siegrist, Associate Director of High Energy Physics at the Department of Energy. “During the ride, rock bolts and protective screens attached to the walls were visible by the light of the headlamp mounted on our hardhats.”

After a 15-minute ride, the motor reached its destination and it was business as usual for a groundbreaking ceremony: speeches, shovels and smiling for photos. A fresh coat of white paint (more than 100 gallons worth) covered the wall behind the officials, creating a scene that almost could have been on the surface.

“Celebrating the moment nearly a mile underground brought home the enormity of the task and the dedication required for such precise experiments,” says South Dakota Governor Dennis Daugaard. “I know construction will take some time, but it will be well worth the wait for the Sanford Underground Research Facility to play such a vital role in one of the most significant physics experiments of our time."

What’s the big deal?

The process to reach the groundbreaking site is much more arduous than reaching most symbolic ceremonies, so what would possess two senators, two representatives, a White House representative, a governor and delegates from three international science institutions (to mention a few of the VIPs) to make the trip? Only the beginning of something huge—literally.

“This milestone represents the start of construction of the largest mega-science project in the United States,” said Mike Headley, executive director of Sanford Lab.  

The 14 shovelers at the groundbreaking made the first tiny dent in the excavation site for LBNF, which will require the extraction of more than 870,000 tons of rock to create huge caverns for the DUNE detectors. These detectors will catch neutrinos sent 800 miles through the earth from Fermi National Accelerator Laboratory in the hopes that they will tell us something more about these strange particles and the universe we live in.

“We have the opportunity to see truly world-changing discovery,” said US Representative Randy Hultgren of Illinois. “This is unique—this is the picture of incredible discovery and experimentation going into the future.”

by Leah Poffenberger at July 31, 2017 07:35 PM

July 30, 2017

John Baez - Azimuth

A Compositional Framework for Reaction Networks

For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:

where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.

Our paper on this stuff just got accepted, and it should appear soon:

• John Baez and Blake Pollard, A compositional framework for reaction networks, to appear in Reviews in Mathematical Physics.

But thanks to the arXiv, you don’t have to wait: beat the rush, click and download now!

Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:

• John Baez, The mathematics of open reaction networks.

• Blake Pollard, Black-boxing open reaction networks.

But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!

In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors!

Lawvere had the idea that the process of assigning ‘meaning’ to expressions could be seen as a functor. This idea has caught on in theoretical computer science: it’s called ‘functorial semantics’. So, what we’re doing here is applying functorial semantics to chemistry.

Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.

But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.

Here’s the idea of our paper.

The idea

Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:

S + I \stackrel{\iota}{\longrightarrow} 2 I  \qquad  I \stackrel{\rho}{\longrightarrow} R \stackrel{\lambda}{\longrightarrow} S

We see here three types of entity, called species:

S: susceptible,
I: infected,
R: resistant.

We also have three ‘reactions’:

\iota : S + I \to 2 I: infection, in which a susceptible individual meets an infected one and becomes infected;
\rho : I \to R: recovery, in which an infected individual gains resistance to the disease;
\lambda : R \to S: loss of resistance, in which a resistant individual becomes susceptible.

In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.

If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is

\begin{array}{ccl} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R - r_\iota S I \\ \\ \displaystyle{\frac{d I}{d t}} &=&  r_\iota S I - r_\rho I \\  \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}

Here r_\iota, r_\rho and r_\lambda are the rate constants for the three reactions, and S, I, R now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:

S, I, R: \mathbb{R} \to [0,\infty)

The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
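To make the law of mass action concrete, here is a minimal Python sketch that builds the SIRS rate equation exactly as written above and integrates it with a forward-Euler step. The rate constants, initial concentrations and step size are illustrative choices of mine, not values from the post:

```python
# The SIRS rate equation from the post, built via the law of mass action
# and integrated with a simple forward-Euler step. All numerical values
# here are illustrative, not taken from the post or the paper.

def sirs_derivatives(S, I, R, r_iota, r_rho, r_lam):
    """Right-hand side of the SIRS rate equation (law of mass action)."""
    dS = r_lam * R - r_iota * S * I
    dI = r_iota * S * I - r_rho * I
    dR = r_rho * I - r_lam * R
    return dS, dI, dR

def integrate_sirs(S, I, R, rates, dt=0.01, steps=10000):
    """Crude forward-Euler integration of the rate equation."""
    r_iota, r_rho, r_lam = rates
    for _ in range(steps):
        dS, dI, dR = sirs_derivatives(S, I, R, r_iota, r_rho, r_lam)
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

S, I, R = integrate_sirs(0.99, 0.01, 0.0, rates=(0.5, 0.1, 0.05))
print(round(S + I + R, 6))  # -> 1.0 (total concentration is conserved)
```

Since the three right-hand sides sum to zero, the total concentration S + I + R is conserved, which gives a handy sanity check on the integration.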

But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1960’s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.

In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out from those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.

Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.

You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:

Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.

While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.

Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:

The box at left is a set X of ‘inputs’ (which happens to be empty), while the box at right is a set Y of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from X to Y. In our paper, we show how to treat it as a morphism f : X \to Y in a category we call \textrm{RxNet}.

Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:

\begin{array}{ccr} \displaystyle{\frac{d S}{d t}} &=&  - r_\iota S I - o_1 \\ \\ \displaystyle{\frac{d I}{d t}} &=&  r_\iota S I - o_2  \end{array}

Here o_1, o_2 : \mathbb{R} \to \mathbb{R} are arbitrary smooth functions describing outflows as a function of time.

Given another open Petri net g: Y \to Z, for example this:

it will have its own open rate equation, in this case

\begin{array}{ccc} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R + i_2 \\ \\ \displaystyle{\frac{d I}{d t}} &=& - r_\rho I + i_1 \\  \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R  \end{array}

Here i_1, i_2: \mathbb{R} \to \mathbb{R} are arbitrary smooth functions describing inflows as a function of time. Now for a tiny bit of category theory: we can compose f and g by gluing the outputs of f to the inputs of g. This gives a new open Petri net gf: X \to Z, as follows:

But this open Petri net gf has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.

As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of f with the inflows of g (setting i_1 = o_1 and i_2 = o_2) and then add the right hand sides of their open rate equations.
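That gluing procedure is easy to check numerically. Below is a small Python sketch of the two open rate equations above; the dict-based encoding is my own, and I pair each outflow with the inflow attached to the same species (the figures in the post fix the actual correspondence of the labels o_i and i_j). The identified flow terms cancel in the sum, recovering the closed SIRS rate equation:

```python
# Composing the two open rate equations: gluing identifies each outflow
# of f with the matching inflow of g, so the flow terms cancel when the
# right-hand sides are added. Encoding and label pairing are my own
# illustrative choices, as are the rate constants.

r_iota, r_rho, r_lam = 0.5, 0.1, 0.05

def f_rhs(c, out_S, out_I):
    """Open rate equation of f (infection), with outflows from S and I."""
    return {'S': -r_iota * c['S'] * c['I'] - out_S,
            'I':  r_iota * c['S'] * c['I'] - out_I,
            'R':  0.0}

def g_rhs(c, in_S, in_I):
    """Open rate equation of g (recovery and loss of resistance)."""
    return {'S':  r_lam * c['R'] + in_S,
            'I': -r_rho * c['I'] + in_I,
            'R':  r_rho * c['I'] - r_lam * c['R']}

def composite_rhs(c):
    """Glue f's outputs to g's inputs: the identified flows cancel."""
    flow_S, flow_I = 0.123, 0.456   # arbitrary values; they cancel below
    fs = f_rhs(c, flow_S, flow_I)
    gs = g_rhs(c, flow_S, flow_I)
    return {k: fs[k] + gs[k] for k in 'SIR'}

def closed_sirs_rhs(c):
    """The ordinary (closed) SIRS rate equation, for comparison."""
    return {'S': r_lam * c['R'] - r_iota * c['S'] * c['I'],
            'I': r_iota * c['S'] * c['I'] - r_rho * c['I'],
            'R': r_rho * c['I'] - r_lam * c['R']}

c = {'S': 0.7, 'I': 0.2, 'R': 0.1}
print(all(abs(composite_rhs(c)[k] - closed_sirs_rhs(c)[k]) < 1e-12
          for k in 'SIR'))  # -> True
```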

The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

from \textrm{RxNet} to a category \textrm{Dynam} where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on \mathbb{R}^n, which can be used to define a system of first-order ordinary differential equations in n variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.

In fact, we prove that \textrm{RxNet} and \textrm{Dynam} are symmetric monoidal categories and that \diamond is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.

Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:

We are mostly interested in cospans in \mathrm{FinSet}, the category of finite sets and functions between these. The set S, the so-called apex of the cospan, is the set of states of an open system. The sets X and Y are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms i: X \to S and o: Y \to S, describe how these inputs and outputs are included in the system. In our application, S is the set of species of a Petri net.

For example, we may take this reaction network:

A+B \stackrel{\alpha}{\longrightarrow} 2C \quad \quad C \stackrel{\beta}{\longrightarrow} D

treat it as a Petri net with S = \{A,B,C,D\}:

and then turn that into an open Petri net by choosing any finite sets X,Y and maps i: X \to S, o: Y \to S, for example like this:

(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)

An open Petri net can thus be seen as a cospan of finite sets whose apex S is ‘decorated’ with some extra information, namely a Petri net with S as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.
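Here’s a toy Python encoding of that idea, with the SIRS network split into the two open Petri nets discussed above. For simplicity I glue boundary points positionally and assume the identified species already share a name (a special case of the general pushout of cospans); everything here is my own illustrative encoding, not the paper’s formalism:

```python
# A toy encoding of open Petri nets and their composition by gluing.
# Species identification is simplified to a name match, which sidesteps
# the general pushout construction; this is a sketch, not the paper's
# decorated-cospan machinery.

class OpenPetriNet:
    def __init__(self, species, transitions, inputs, outputs):
        self.species = set(species)          # set of species names
        self.transitions = list(transitions) # (name, inputs_dict, outputs_dict)
        self.inputs = list(inputs)           # boundary points (species names)
        self.outputs = list(outputs)

def compose(f, g):
    """Glue f's output boundary to g's input boundary, positionally."""
    assert len(f.outputs) == len(g.inputs)
    for a, b in zip(f.outputs, g.inputs):
        assert a == b, "toy version: glued species must share a name"
    return OpenPetriNet(f.species | g.species,
                        f.transitions + g.transitions,
                        f.inputs, g.outputs)

# f: the infection reaction S + I -> 2I, with S and I on its output boundary
f = OpenPetriNet({'S', 'I'},
                 [('infection', {'S': 1, 'I': 1}, {'I': 2})],
                 inputs=[], outputs=['S', 'I'])
# g: recovery I -> R and loss of resistance R -> S
g = OpenPetriNet({'S', 'I', 'R'},
                 [('recovery', {'I': 1}, {'R': 1}),
                  ('loss', {'R': 1}, {'S': 1})],
                 inputs=['S', 'I'], outputs=[])

sirs = compose(f, g)
print(sorted(sirs.species), len(sirs.transitions))  # -> ['I', 'R', 'S'] 3
```

The composite has empty input and output boundaries, so it is an ordinary (closed) Petri net: the SIRS network.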

We call the functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)

We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor

\square: \textrm{Dynam} \to \mathrm{SemiAlgRel}

(yeah, the box here should be black, and in our paper it is). Here \mathrm{SemiAlgRel} is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.
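A tiny worked example (my own, not from the paper) shows what black-boxing records. Take the open reaction A → B with rate constant r, an inflow i into A and an outflow o from B. Setting the open rate equation to zero gives the steady-state relation between the boundary quantities:

```python
# Black-boxing the open reaction A -> B (rate constant r) with inflow i
# into A and outflow o from B. In steady state:
#   dA/dt = i - r*A = 0   =>  A = i / r
#   dB/dt = r*A - o = 0   =>  o = r*A = i
# so the black box records the relation o = i, A = i/r between the
# boundary flows and concentrations. This toy example is mine.

def steady_state(i, r):
    """Solve the steady-state conditions of the toy open system."""
    A = i / r        # from dA/dt = 0
    o = r * A        # from dB/dt = 0
    return A, o

A, o = steady_state(i=2.0, r=0.5)
print(A, o)  # -> 4.0 2.0
```

In this linear case the relation is cut out by polynomial equations alone; inequalities enter for more general networks, which is where the Tarski–Seidenberg theorem mentioned below earns its keep.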

That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of \mathbb{R}^{n+1} defined by polynomial equations and inequalities can be projected down onto \mathbb{R}^n, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!

Structure of the paper

Okay, now you’re ready to read our paper! Here’s how it goes:

In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category \textrm{RNet} where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category \textrm{RxNet} where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category \textrm{Dynam} of open dynamical systems. In Section 7 we construct the gray-boxing functor

\diamond: \textrm{RxNet} \to \textrm{Dynam}

In Section 8 we construct the black-boxing functor

\square: \textrm{Dynam} \to \mathrm{SemiAlgRel}

We show both of these are symmetric monoidal functors.

Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….

by John Baez at July 30, 2017 03:22 PM

July 28, 2017

Clifford V. Johnson - Asymptotia

I Went Walking, and…

Well, that was nice. Was out for a walk with my son and ran into Walter Isaacson. (The Aspen Center for Physics, which I'm currently visiting, is next door to the Aspen Institute. He's the president and CEO of it.) He wrote the excellent Einstein biography that was the official book of the Genius series I worked on as science advisor. We chatted, and it turns out we have mutual friends and acquaintances.

He was pleased to hear that they got a science advisor on board and that the writers (etc) did such a good job with the science. I also learned that he has a book on Leonardo da Vinci coming out [...] Click to continue reading this post

The post I Went Walking, and… appeared first on Asymptotia.

by Clifford at July 28, 2017 08:32 PM

July 27, 2017

Tommaso Dorigo - Scientificblogging

An ATLAS 240 GeV Higgs-Like Fluctuation Meets Predictions From Independent Researcher
A new analysis by the ATLAS collaboration, based on the data collected in 13 TeV proton-proton collisions delivered by the LHC in 2016, finds an excess of X-->4 lepton events at a mass of 240 GeV, with a local significance of 3.6 standard deviations. The search, which targeted objects of similar phenomenology to the 125 GeV Higgs boson discovered in 2012, is published in ATLAS CONF-2017-058. Besides the 240 GeV excess, another one at 700 GeV is found, with the same statistical significance.

read more

by Tommaso Dorigo at July 27, 2017 11:24 AM

July 26, 2017

Symmetrybreaking - Fermilab/SLAC

Angela Fava: studying neutrinos around the globe

This experimental physicist has followed the ICARUS neutrino detector from Gran Sasso to Geneva to Chicago.

Photo of Angela Fava giving a talk at the Fermilab User's Meeting

Physicist Angela Fava has been at the enormous ICARUS detector’s side for over a decade. As an undergraduate student in Italy in 2006, she worked on basic hardware for the neutrino hunting experiment: tightening bolts and screws, connecting and reconnecting cables, learning how the detector worked inside and out.

ICARUS (short for Imaging Cosmic And Rare Underground Signals) first began operating for research in 2010, studying a beam of neutrinos created at European laboratory CERN and launched straight through the earth hundreds of miles to the detector’s underground home at INFN Gran Sasso National Laboratory.

In 2014, the detector moved to CERN for refurbishing, and Fava relocated with it. In June ICARUS began a journey across the ocean to the US Department of Energy’s Fermi National Accelerator Laboratory to take part in a new neutrino experiment. When it arrives today, Fava will be waiting.

Fava will go through the installation process she helped with as a student, this time as an expert.

Photo of a shipping container with the words
Noemi Caraban Gonzalez, Julien Marius Ordan, CERN

Journey to ICARUS

As a child growing up between Venice and the Alps, Fava always thought she would pursue a career in math. But during a one-week summer workshop before her final year of high school in 2000, she was drawn to experimental physics.

At the workshop, she realized she had more in common with physicists. Around the same time, she read about new discoveries related to neutral, rarely interacting particles called neutrinos. Scientists had recently been surprised to find that the extremely light particles actually had mass and that different types of neutrinos could change into one another. And there was still much more to learn about the ghostlike particles.

At the start of college in 2001, Fava immediately joined the University of Padua neutrino group. For her undergraduate thesis research, she focused on the production of hadrons, making measurements essential to studying the production of neutrinos. In 2004, her research advisor Alberto Guglielmi and his group joined the ICARUS collaboration, and she’s been a part of it ever since.

Fava jests that the relationship actually started much earlier: “ICARUS was proposed for the first time in 1983, which is the year I was born. So we are linked from birth.”

Fava remained at the University of Padua in the same research group for her graduate work. During those years, she spent about half of her time at the ICARUS detector, helping bring it to life at Gran Sasso.

Once all the bolts were tightened and the cables were attached, ICARUS scientists began to pursue their goal of using the detector to study how neutrinos change from one type to another.

During operation, Fava switched gears to create databases to store and log the data. She wrote code to automate the data acquisition system and triggering, which differentiates between neutrino events and background such as passing cosmic rays. “I was trying to take part in whatever activity was going on just to learn as much as possible,” she says.

That flexibility is a trait that Claudio Silverio Montanari, the technical director of ICARUS, praises. “She has a very good capability to adapt,” he says. “Our job, as physicists, is putting together the pieces and making the detector work.”

Photo of the ICARUS shipping container being transported by truck
Caraban Gonzalez, Noemi Ordan, Julien Marius, CERN

Changing it up

Adapting to changing circumstances is a skill both Fava and ICARUS have in common. When scientists proposed giving the detector an update at CERN and then using it in a suite of neutrino experiments at Fermilab, Fava volunteered to come along for the ride.

Once installed and operating at Fermilab, ICARUS will be used to study neutrinos from a source a few hundred meters away from the detector. In its new iteration, ICARUS will search for sterile neutrinos, a hypothetical kind of neutrino that would interact even more rarely than standard neutrinos. While hints of these low-mass particles have cropped up in some experiments, they have not yet been detected.

At Fermilab, ICARUS also won’t be buried below more than half a mile of rock, a feature of the INFN setup that shielded it from cosmic radiation from space. That means the triggering system will play an even bigger role in this new experiment, Fava says.

“We have a great challenge ahead of us.” She’s up to the task.

by Liz Kruesi at July 26, 2017 04:09 PM

Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs - Part 2
This is the second part of a section taken from Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as the one of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

read more

by Tommaso Dorigo at July 26, 2017 09:58 AM

July 25, 2017

Symmetrybreaking - Fermilab/SLAC

Turning plots into stained glass

Hubert van Hecke, a heavy-ion physicist, transforms particle physics plots into works of art.

Stained glass image inspired by Fibonacci numbers

At first glance, particle physicist Hubert van Hecke’s stained glass windows simply look like unique pieces of art. But there is much more to them than pretty shapes and colors. A closer look reveals that his creations are actually renditions of plots from particle physics experiments.  

Van Hecke learned how to create stained glass during his undergraduate years at Louisiana State University. “I had an artistic background—my father was a painter, so I thought, if I need a humanities credit, I'll just sign up for this,” van Hecke recalls. “So in order to get my physics bachelor's, I took stained glass.” 

Over the course of two semesters, van Hecke learned how to cut pieces of glass from larger sheets, puzzle them together, then solder and caulk the joints. “There were various assignments that gave you an enormous amount of elbow room,” he says. “One of them was to do something with Fibonacci numbers, and one was to pick your favorite philosopher and make a window related to their work.” 

Van Hecke continued to create windows and mirrors throughout graduate school but stopped for many years while working as a full-time heavy-ion physicist at Los Alamos National Laboratory and raising a family. Only recently did he return to his studio—this time, to create pieces inspired by physics. 

“I had been thinking about designs for a long time—then it struck me that occasionally, you see plots that are interesting, beautiful shapes,” van Hecke says. “So I started collecting pictures as I saw them.”

His first plot-based window, a rectangle-shaped piece with red, orange and yellow glass, was inspired by the results of a neutrino flavor oscillation study from the MiniBooNE experiment at Fermi National Accelerator Laboratory. He created two pieces after that: one from a plot generated during the hunt for the Higgs boson at the Tevatron, also at Fermilab, and the other based on an experiment with quarks and gluons. 

According to van Hecke, what inspires him about these plots is “purely the shapes.” 

“In terms of the physics, it's what I run across—for example, I see talks about heavy ion physics, elementary particle physics, and neutrinos, [but] I haven't really gone out and searched in other fields,” he says. “Maybe there are nice plots in biology or astronomy.”

Although van Hecke has not yet displayed his pieces publicly, if he does one day, he plans to include explanations for the phenomena the plots illustrate, such as neutrinos and the Standard Model, as a unique way to communicate science. 

But before that, van Hecke plans to create more stained glass windows. As of two months ago, he is semiretired—and in between runs to Fermilab, where he is helping with the effort to use Argonne National Laboratory's SeaQuest experiment to search for dark photons, he hopes to spend more time in the studio creating the pieces left on the drawing board, which include plots found in experiments investigating the Standard Model, neutrinoless double beta decay and dark matter interactions. 

“I hope to make a dozen or more,” he says. “As I bump into plots, I'll collect them and hopefully, turn them all into windows.” 

by Diana Kwon at July 25, 2017 01:00 PM

Lubos Motl - string vacua and pheno

Wrong turns, basins, GUT critics, and creationists
A notorious holy warrior against physics recently summarized a talk by Nima Arkani-Hamed as follows:
I think Arkani-Hamed is right to identify the 1974 GUT hypothesis as the starting point that led the field into this wrong basin.
As far as I can see, Nima has never made a discovery – or claimed a discovery – that would show that grand unification was wrong or the center of a "wrong basin". Instead, Nima made the correct general point that if you try to improve your state-of-the-art theoretical picture gradually and by small changes that look like improvements, you may find a local minimum (or optimum) but that may be different from the global minimum (or optimum) – from the correct theory. So sometimes one needs to make big steps.
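The local-versus-global point is easy to demonstrate numerically (a toy example of mine, not anything from Nima's actual talk): greedy small-step improvement converges to the nearest basin, which need not contain the global optimum.

```python
# Gradient descent on f(x) = (x**2 - 1)**2 + 0.3*x, a double well whose two
# minima are not degenerate. Small improving steps land in whichever basin
# you start in, so the result depends on the starting point.
def grad(x):                        # f'(x) = 4*x*(x**2 - 1) + 0.3
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):          # repeated small "improvements"
        x -= lr * grad(x)
    return x

print(descend(+0.8))   # settles near x ~ +0.96: a local, not global, minimum
print(descend(-0.8))   # settles near x ~ -1.04: the global minimum
```

Escaping the wrong basin requires a big jump, not a better small step — which is precisely Nima's general point.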

Is grand unification correct? Are three non-gravitational forces that we know merged into one at a high energy scale? My answer is that we don't know at this moment – the picture has appealing properties, especially in combination with SUSY, but nothing is firmly established and pictures without it may be good enough, too – and I am rather confident that Nima agrees with this answer, Peter W*it's classic lies notwithstanding. Even if we take the latest stringy constructions and insights for granted, there exist comparably attractive compactifications where the electroweak and strong forces are unified at a higher scale; and compactifications where they aren't. String theory always morally unifies all forces, including gravity, but this type of unification is more general and may often be non-unification according to the technical, specific, field-theoretical definition of unification.

Nevertheless, W*it made this untrue statement in his blog post and the discussion started among the crackpots who visit that website: Was grand unification the first "wrong turn"?

Funnily enough, the N*t Even Wr*ng crackpots are divided into two almost equally large camps. In fact, if this community ever managed to discuss at least this basic technical question – what was the first wrong turn in theoretical physics – their estimated thresholds would fill a nearly perfect continuum. For many of them, Einstein's relativity was already the collapse of physics. For others, it was quantum mechanics. Another group would pick quantum field theory. Another group would pick renormalization. One more clique would pick the confining QCD. Those would be the groups that deny the theories that are rather clearly experimentally established.

But nothing special would happen at that place. There would be "more moderate" groups that would identify the grand unification as the first wrong turn, or supersymmetric field theories as the first wrong turn, or bosonic string theory, or superstring theory, or non-perturbative string theory, or M-theory, or the flux vacua, or something else.

I've met members of every single one of these groups. Needless to say, as we go towards more far-reaching or newer ideas that haven't been experimentally established, we're genuinely increasingly uncertain whether they're right. But because we can't rule out these ideas, they unavoidably keep on reappearing in research and proposed new theories. It can't be otherwise!

In May, I pointed out that the criticisms of inflation are silly because the true breakthrough of inflation was to notice a mechanism that is really "generic" in the kind of theories we normally use and that have been successfully tested (presence of scalar fields; existence of points away from the minima of the potential; de-Sitter-like cosmic expansion at these places of the configuration space), and that seems to be damn useful to improve certain perceived defects of the Big Bang Theory. Although people aren't 100.000% sure about inflation and especially its technical details, they have eaten the forbidden apple and figured out that the taste is so good that they keep on returning to the tree and pick some fruits from it.

To a large extent, exactly the same comment may be made about grand unification, supersymmetry, string theory, and all these other ideas that the crackpots often like to attack as heresies. Even though we're not 100% certain that any of these ideas holds in the Universe around us, we are 100% sure that because these possible theories and new structures have already been theoretically discovered and they seem to make lots of sense as parts of our possible explanation of physical phenomena, a community of honest theoretical physicists simply cannot outlaw or erase these possibilities again. To ban them would mean to lie into our eyes.

That's exactly what the N*t Even Wrong crackpots want to do – they would love to ban much of theoretical physics, although they haven't agreed whether the ban would apply to all physics after 1900, 1905, 1915, 1925, 1945, 1973, 1977, 1978, 1984, 1995, 2000, 2003, or another number. ;-) But they're obsessed with bans on ideas just like the Catholic Inquisition was obsessed with bans on ideas. This approach is fundamentally incompatible with the scientific approach to our knowledge.

New evidence – or a groundbreaking new theory or experiment(s) – may emerge that will make some or all ideas studied e.g. since the 1970s irrelevant for physics of the world around us. But because such an event hasn't taken place yet, physicists simply can't behave as if it has already taken place. In particular, no new physics beyond the Standard Model has been discovered yet, which makes it clear that all conceivable theories of physics beyond the Standard Model suffer from the same drawback, namely their not having been proven yet.

By the way, the disagreement about the identification of the "first wrong turn" is completely analogous to the "continuum of creationist and intelligent design theories" as it was discussed by Eugenie Scott, an anti-creationist activist.

Just like you can ask what was the first wrong turn in high energy physics, you may ask what is the first or most modest claim by Darwin's theory that is wrong – or the most recent event in the Darwinian picture of the history of species that couldn't happen according to the Darwinian story.

If you collect the answers from the critics of evolution, you will find out that they're equally split as Peter W*it's fellow crackpots. In fact, the hypothesized "first wrong statement" of the standard picture of the history of Earth and life may be anything and all the choices of these wrong statements fill a continuum – they cover all statements of cosmology, geology, biology, macroevolution, and microevolution that have ever been made.

Some people deny that the Universe is more than thousands of years old. Others do accept it but they don't accept that life on Earth is old. Some people accept that but they claim that many "kinds" of animals and plants had to be born simultaneously and independently because they're too different.

In general, "kinds" are supposed to be more general, larger, and more Biblical taxonomic groups than "species" – although "kinds" isn't one of the groups that are used by the conventional scientific taxonomy. However, when you ask how large these "kinds" groups are (questions like whether horses belong to the same "kind" as zebras), various critics of evolution will give you all conceivable answers. Some of them will say that "kinds" are just somewhat bigger than scientific species (those critics of evolution are the most radical ones and many of their statements may really be falsified "almost in the lab"), others will say that they are substantially bigger. Another group will say that "kinds" are vastly larger and they will "only" ban the evolution that would relate birds and lizards or dinosaurs and mammals etc. These "most moderate intelligent designers" might tell you the same thing as the evolutionists concerning the evolution of all vertebrates, for example, but they still leave some of the "largest division of organisms" to an intelligent creator.

The actual reason for the absence of an agreed upon boundary is obviously the absence of any evidence for any such boundary. In fact, it looks almost certain that no such boundary actually exists – and all life on Earth indeed has a common origin.

Summary: continuum of alternative theories shows that none of them is defensible

Again, to summarize, critics of theoretical physics just like critics of evolution form a continuum.

All of them have to believe in some very important new "boundaries" but any specific location of such a boundary looks absolutely silly and unjustified. Some critics of the evolutionary biology say that zebras and horses may have a common ancestor but zebras and llamas can't. Does it make any sense? Why would you believe that two completely analogous differences – zebra-horse and zebra-llama differences – must have totally, qualitatively, metaphysically different explanations? Such a theory looks extremely awkward and inefficient. Once Nature has mechanisms to create zebras and horses from a common ancestor, why shouldn't the same mechanism be enough to explain the rise of llamas and zebras from common ancestors, too?

The case of the critics of physics is completely analogous. If grand unification were the first wrong turn, how do you justify that the group \(SU(3)\times SU(2)\times U(1)\) is "allowed" to be studied in physics, while \(SO(10)\) is already blasphemous or "unscientific" (their word for "blasphemous")? It doesn't make the slightest sense. They're two groups and both of them admit models that are consistent with everything we know. \(SO(10)\) is really simpler and prettier – while its models arguably have to use an uglier (and more extended) spectrum of matter (the new Higgs bosons etc.).
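The claim that \(SO(10)\) is "simpler and prettier" can be made quantitative with standard Lie-group dimension counts (textbook facts, nothing specific to any GUT model): the whole Standard Model gauge group fits inside a single simple group whose adjoint is much larger.

```python
# Dimension count for the gauge groups mentioned above:
# dim SU(n) = n^2 - 1,  dim SO(n) = n(n-1)/2,  dim U(1) = 1.
def dim_su(n):
    return n * n - 1

def dim_so(n):
    return n * (n - 1) // 2

dim_sm = dim_su(3) + dim_su(2) + 1      # SU(3) x SU(2) x U(1): 8 + 3 + 1
print("dim SM gauge group:", dim_sm)    # -> 12
print("dim SO(10):        ", dim_so(10))  # -> 45
```

One simple group with one gauge coupling, versus a product of three factors with three independent couplings: that is the aesthetic (and predictive) appeal of grand unification.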

Well, the only rational conclusion is that the efforts to postulate any "red lines" of this kind are utterly stupid. Biologists must be allowed to study arbitrarily deep evolutionary processes and theoretical high energy physicists must be allowed to study all ideas that have ever emerged, look tantalizing, and haven't been ruled out. And critics of theoretical physics must be acknowledged to be intellectually inconsequential deluded animals.

by Luboš Motl at July 25, 2017 09:31 AM

July 21, 2017

Lubos Motl - string vacua and pheno

Does weak gravity conjecture predict neutrino type, masses and cosmological constant?
String cosmologist Gary Shiu and his junior collaborator Yuta Hamada (Wisconsin) released a rather fascinating hep-th preprint today
Weak Gravity Conjecture, Multiple Point Principle and the Standard Model Landscape
They are combining some of the principles that are seemingly most abstract, most stringy, and use them in such a way that they seem to deduce an estimate for utterly observable quantities such as a realistic magnitude of neutrino masses, their being Dirac, and a sensible estimate for the cosmological constant, too.

What have they done?

In 2005, when I watched him happily, Cumrun Vafa coined the term swampland for the "lore" that was out there but wasn't clearly articulated before that. Namely the lore that even in the absence of the precise identified vacuum of string theory, string theory seems to make some general predictions and ban certain things that would be allowed in effective quantum field theories. According to Vafa, the landscape may be large but it is still just an infinitely tiny, precious fraction embedded in a much larger and less prestigious region, the swampland, the space of possible effective field theories which is full of mud, feces, and stinking putrefying corpses of critics of string theory such as Mr Šmoits. Vafa's paper is less colorful but be sure that this is what he meant. ;-)

The weak gravity conjecture – the hypothesis (justified by numerous very different and complementary pieces of evidence) that consistency of quantum gravity really demands gravity among elementary particles to be weaker than other forces – became the most well-known example of the swampland reasoning. But Cumrun and his followers have pointed out several other general predictions that may be made in string theory but not without it.

Aside from the weak gravity conjecture, Shiu and Hamada use one particular observation: that theories of quantum gravity (=string/M-theory in the most general sense) should be consistent not only in their original spacetime but it should also be possible to compactify them while preserving the consistency.

Shiu and Hamada use this principle for the Core Theory, as Frank Wilczek calls the Standard Model combined with gravity. Well, it's only the Standard Model part that is "really" exploited by Shiu and Hamada. However, the fact that the actual theory also contains quantum gravity is needed to justify the application of the quantum gravity anti-swampland principle. Their point is highly creative. When the surrounding Universe including the Standard Model is a vacuum of string/M-theory, some additional operations – such as extra compactification – should be possible with this vacuum.

On top of these swampland things, Shiu and Hamada also adopt another principle, Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle. The principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something special seems to happen over there. This principle has been used to argue that the fine-structure constant should be around \(\alpha\approx 1/(136.8\pm 9)\), the top quark mass should be \(m_t\approx 173\pm 5 \GeV\), the Higgs mass should be \(m_h\approx 135\pm 9 \GeV\), and so on. The track record of this principle looks rather impressive to me. In some sense, this principle isn't just inequivalent to naturalness; it is close to its opposite. Naturalness could favor points in the bulk of a "single phase"; the multiple criticality principle favors points in the parameter space that are of "measure zero" to a maximal power, in fact.
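The degeneracy condition at the heart of the principle is easy to illustrate with a toy one-dimensional potential (my own sketch, not anything from the Froggatt–Nielsen papers or from Shiu and Hamada): tune a coupling until two vacua become exactly degenerate, and the coupling is pinned to a special, measure-zero value.

```python
# Tune b in V(x) = x**2 - b*x**3 + x**4 until the second minimum is exactly
# degenerate with the vacuum at x = 0. Analytically, V = x**2 * (1 - x)**2
# at b = 2, so the "multiple point" coupling is b = 2.
def second_min(b, lo=0.2, hi=3.0, n=20000):
    # depth of the potential away from the x = 0 vacuum, on a grid
    vals = []
    for i in range(n + 1):
        x = lo + (hi - lo) * i / n
        vals.append(x * x - b * x**3 + x**4)
    return min(vals)

def tune(blo=1.0, bhi=3.0, iters=50):
    # bisect on the depth: positive means the second minimum lies above V = 0
    for _ in range(iters):
        b = 0.5 * (blo + bhi)
        if second_min(b) > 0:
            blo = b
        else:
            bhi = b
    return 0.5 * (blo + bhi)

print(round(tune(), 4))   # -> 2.0, the degenerate coupling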

Fine. So Shiu and Hamada take our good old Standard Model and compactify one or two spatial dimensions on a circle \(S^1\) or the torus \(T^2\) because you shouldn't be afraid of doing such things with the string theoretical vacua, and our Universe is one of them. When they compactify it, they find out that aside from the well-known modest Higgs vev, there is also a stationary point where the Higgs vev is Planckian.

So they analyze the potential as the function of the scalar fields and find out that depending on the unknown facts about the neutrinos, these extra stationary points may be unstable because of various new instabilities. Now, they also impose the multiple point criticality principle and demand our 4-dimensional vacuum to be degenerate with the 3-dimensional compactification – where one extra spatial dimension becomes a short circle. This degeneracy is an unusual, novel, stringy application of the multiple criticality principle that was previously used for boring quantum field theories only.

This degeneracy basically implies that the neutrino masses must be of order \(1-10\meV\). Obviously, they knew in advance that they wanted to get a similar conclusion because this conclusion seems to be most consistent with our knowledge about neutrinos. And neutrinos should be Dirac fermions, not Majorana fermions. Dirac neutrinos are needed for the spin structure to disable a decay by Witten's bubble of nothing. On top of that, the required vacua only exist if the cosmological constant is small enough, so they have a new justification for the smallness of the cosmological constant that must be comparable to the fourth power of these neutrino masses, too – and as you may know, this is a good approximate estimate of the cosmological constant, too.
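The order-of-magnitude coincidence in that last sentence is worth checking by hand (my arithmetic, using the standard observed value, not a number from the paper): the observed dark-energy density corresponds to an energy scale \(\rho_\Lambda^{1/4}\approx 2.3\meV\), so a cosmological constant comparable to \(m_\nu^4\) indeed wants neutrino masses of a few meV.

```python
# Compare m_nu**4 against the observed dark-energy density for candidate
# neutrino masses in the 1-10 meV window quoted above.
RHO_QUARTER_MEV = 2.3                 # observed (rho_Lambda)^(1/4) in meV (approximate)

def lambda_ratio(m_nu_meV):
    """m_nu**4 in units of the observed dark-energy density."""
    return (m_nu_meV / RHO_QUARTER_MEV) ** 4

for m_nu in (1.0, 3.0, 10.0):
    print(f"m_nu = {m_nu:5.1f} meV -> m_nu^4 / rho_Lambda ~ {lambda_ratio(m_nu):.3g}")
```

The ratio swings through roughly four orders of magnitude across that window because of the fourth power, but a mass of a few meV lands within a factor of a few of the observed value.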

Note that back in 1994, Witten still believed that the cosmological constant had to be zero and he used a compactification of our 4D spacetime down to 3D to get an argument. In some sense, Shiu and Hamada are doing something similar – they don't cite that paper by Witten, however – except that their setup is more advanced and it produces a conclusion that is compatible with the observed nonzero cosmological constant.

Jožin from the Swampland mainly eats the inhabitants of Prague. And who could have thought? He can only be dealt with effectively with the help of a crop duster.

So although these principles are abstract and at least some of them seem unproven or even "not sufficiently justified", there seems to be something correct about them because Shiu and Hamada may extract rather realistic conclusions out of these principles. But if they are right, I think that they did much more than an application of existing principles. They applied them in truly novel, creative ways.

If their apparent success were more than just a coincidence, I would love to understand the deeper reasons why the multiple criticality principle is right and many other things that are needed for a satisfactory explanation why this "had to work".

by Luboš Motl at July 21, 2017 02:30 PM

Symmetrybreaking - Fermilab/SLAC

Watch the underground groundbreaking

This afternoon, watch a livestream of the start of excavation for the future home of the Deep Underground Neutrino Experiment.

Photo of the Yates surface facilities at Sanford Lab, a white building surrounded by tree-covered mountains

Today in South Dakota, dignitaries, scientists and engineers will mark the start of construction of the future home of America's flagship neutrino experiment with a groundbreaking ceremony.

Participants will hold shovels and give speeches. But this will be no ordinary groundbreaking. It will take place a mile under the earth at Sanford Underground Research Facility, the deepest underground physics lab in the United States.

The groundbreaking will celebrate the beginning of excavation for the Long-Baseline Neutrino Facility, which will house the Deep Underground Neutrino Experiment. When complete, LBNF/DUNE will be the largest experiment ever built in the US to study the properties of mysterious particles called neutrinos. Unlocking the mysteries of these particles could help explain more about how the universe works and why matter exists at all.

Watch the underground groundbreaking at 2:20 p.m. Mountain Time (3:20 p.m. Central) via livestream.

by Kathryn Jepsen at July 21, 2017 01:00 PM

July 20, 2017

Axel Maas - Looking Inside the Standard Model

Getting better
Numerical simulations are one of the main tools in our research. E.g. the research of the previous entry would have been impossible without them.

Numerical simulations require computers to run them. And even though computers become continuously more powerful, they are limited in the end. Not to mention that they cost money to buy and to use. Yes, also using them is expensive. Think of the electricity bill or even having space available for them.

So, to reduce the costs, we need to use them efficiently. That is good for us, because we can do more research in the same time. And that means that we as a society can make scientific progress faster. But it also reduces financial costs, which in fundamental research almost always means the taxpayer's money. And it reduces the environmental stress we impose by having and running the computers. That is also something which should not be forgotten.

So what does efficiently mean?

Well, we need to write our own computer programs. What we do, nobody has done before us. Most of what we do is really at the edge of what we understand. So nobody was here before us who could have provided us with computer programs. We write them ourselves.

For that to be efficient, we need three important ingredients.

The first seems to be quite obvious. The programs should be correct before we use them to make a large-scale computation. It would be very wasteful to run on a hundred computers for several months, just to figure out it was all for naught, because there was an error. Of course, we need to test them somewhere, but this can be done with much less effort. This actually takes quite some time, and is very annoying. But it needs to be done.

The next two issues seem to be the same, but are actually subtly different. We need to have fast and optimized algorithms. The important difference is: The quality of the algorithm decides how fast it can be in principle. The actual optimization decides to which extent it uses this potential.

The latter point is something which requires a substantial amount of experience with programming. It is not something which can be learned theoretically. And it is more of a craftsmanship than anything else. Being good at optimization can make a program a thousand times faster. So, this is one reason why we try to teach students programming early, so that they can acquire the necessary experience before they enter research in their thesis work. Though there is still research work today which can be done without computers, it has become markedly less over the decades. It will never completely vanish, though. But it may well become a comparatively small fraction.

But whatever optimization can do, it can do only so much without good algorithms. And now we enter the main topic of this entry.

It is not only the code which we develop by ourselves. It is also the algorithms. Because again, they are new. Nobody did this before. So it is also up to us to make them efficient. But to really write a good algorithm requires knowledge about its background. This is called domain-specific knowledge: knowing the scientific background. One more reason why you cannot get it off-the-shelf. Thus, calculating something new in research using computer simulations usually means sitting down and writing a new algorithm.
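The division of labor between algorithm quality and optimization can be seen in a standard textbook contrast (my example, not one of the simulation codes discussed here): the algorithm sets the ceiling, and no amount of polishing a quadratic-time loop will match a linear-time approach at scale.

```python
# Two ways to answer "do any two list elements sum to a target?"
import time

def has_pair_quadratic(xs, target):
    # O(n^2): compare every pair of elements
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] + xs[j] == target:
                return True
    return False

def has_pair_linear(xs, target):
    # O(n): one pass, remembering what we have seen in a hash set
    seen = set()
    for x in xs:
        if target - x in seen:
            return True
        seen.add(x)
    return False

xs = list(range(3000))
for f in (has_pair_quadratic, has_pair_linear):
    t0 = time.perf_counter()
    found = f(xs, -1)               # worst case: no pair sums to -1
    print(f.__name__, found, f"{time.perf_counter() - t0:.3f}s")
```

Optimizing the inner loop of the quadratic version might buy a constant factor; switching algorithms changes the scaling itself — which is exactly why new, fast algorithms are research in their own right.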

But even once an algorithm is written down, this does not mean that it is already the fastest possible one. This, too, requires experience on the one hand, but even more so it is something new. And it is thus research as well to make it fast. So algorithms can be, and need to be, made better.

Right now I am supervising two bachelor theses where exactly this is done. The algorithms are indeed directly those which are involved with the research mentioned in the beginning. While both are working on the same algorithm, they do it with quite different emphasis.

The aim of one project is to make the algorithm faster without changing its results. It is a classical case of improving an algorithm. If successful, it will make it possible to push the boundaries of what projects can be done. Thus, it makes computer simulations more efficient, and so allows us to do more research. One goal reached. Unfortunately, the 'if' already tells you that, as always with research, there is never a guarantee that it is possible. But if this kind of research should continue, it is necessary. The only alternative is waiting a decade for the computers to become faster, and doing something different in the time in between. Not a very interesting option.

The other one is a little bit different. Here, the algorithm should be modified to serve a slightly different goal. It is not a fundamentally different goal, but a subtly different one. Thus, while it does not create a fundamentally new algorithm, it still does create something new. Something which will make a different kind of research possible. Without the modification, the other kind of research may not be possible for some time to come. But just as it is not possible to guarantee that an algorithm can be made more efficient, it is also not always possible that an algorithm with any reasonable amount of potential can be created at all. So this is also true research.

Thus, it remains exciting of what both theses will ultimately lead to.

So, as you see, behind the scenes research is quite full of the small things which make the big things possible. Both of these projects are probably closer to our everyday work than most of the things I have been posting before. The everyday work in research is quite often a grind. But, as always, this is what makes the big things ultimately possible. Without such projects as these two theses, our progress would be slowed down to a snail's pace.

by Axel Maas at July 20, 2017 03:38 PM

Andrew Jaffe - Leaves on the Line

Python Bug Hunting

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here. (See also updates below.)

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)

A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when I wanted to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, there’s a text file located in ~/.jupyter/ that you can edit to explicitly set the browser that you want jupyter to use, by including the line

c.NotebookApp.browser = u'Safari'

(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
0:33: execution error: "" doesn't understand the "open location" message. (-1708)

So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location ""'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location ""' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type open location "" into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it runs just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.


by Andrew at July 20, 2017 08:45 AM

July 19, 2017

Axel Maas - Looking Inside the Standard Model

Tackling ambiguities
I have recently published a paper with a rather lengthy and abstract title. In this entry I want to shed a little light on what is going on.

The paper is actually about a problem which has occupied me for more than a decade by now: how to really define what we mean when we talk about gluons. The reason for this problem is a certain ambiguity. This ambiguity arises because it is often much more convenient to have auxiliary additional stuff around to make calculations simple. But then you have to deal with this additional stuff. In a paper last year I noted that the amount of stuff is much larger than originally anticipated. So you have to deal with even more stuff.

The aim of the research leading to the paper was to make progress with that.

So what did I do? To understand this, it is first necessary to say a few words about how we describe gluons. We describe them by mathematical functions. The simplest such function makes, loosely speaking, a statement about how probable it is that a gluon moves from one point to another. Since a fancy word for moving is propagating, this function is called a propagator.

So the first question I posed was whether the ambiguity in dealing with the stuff affects this. You may ask whether this should happen at all. Is a gluon not a particle? Should this not be free of ambiguities? Well, yes and no. A particle which we actually detect should be free of ambiguities. But gluons are not detected. Gluons are, in fact, never seen directly. They are confined. This is a very peculiar feature of the strong force, and one which is not yet satisfactorily understood. But it is experimentally well established.

Since something therefore happens to gluons before we can observe them, there is a way out. If the gluon is ambiguous, then this ambiguity has to be canceled by whatever happens to it. Then whatever we detect is not ambiguous. But cancellations are fickle things. If you are not careful in your calculations, something is left uncanceled. And then your results become ambiguous. This has to be avoided. Of course, this is purely a problem for us theoreticians. The experimentalists never have this problem. A long time ago I actually wrote a paper on this together with a few other people, showing how it may proceed.

So, the natural first step is to figure out what you have to cancel, and therefore to map the ambiguity in its full extent. The possibilities discussed for decades look roughly like this:

As you see, at short distances there is (essentially) no ambiguity. This is actually quite well understood. It is a feature very deeply embedded in the strong interaction. It has to do with the fact that, despite its name, the strong interaction makes itself less and less felt the shorter the distance. And for weak effects we have very precise tools, so we understand this regime.

On the other hand, at long distances - well, there for a long time we did not know for sure what was going on, not even qualitatively. But, finally, over the decades, we were able to at least partly constrain the behavior. Now, I tested a large part of the remaining range of ambiguities. In the end, it indeed mattered little. There is almost no effect of the ambiguity left on the behavior of the gluon. So, it seems we have this under control.

Or do we? One of the important things in research is that it is never sufficient to confirm your result by looking at just a single thing. Either your explanation fits everything we see and measure, or it cannot be the full story. It may even be wrong, and the agreement with part of the observations is just a lucky coincidence. Well, actually not lucky. Rather terrible, since this misguides you.

Of course, doing it all in one go is a horrendous amount of work, and so you work on a few things at a time. Preferably, you first work on those where the most problems are expected. It is just that ultimately you need to have covered everything. You cannot stop and claim victory before you have.

So I did, and in the paper I looked at a handful of other quantities. And indeed, in some of them effects remain. Especially, if you look at how strong the strong interaction is, depending on the distance at which you measure it, something remains:

The effects of the ambiguity are thus not qualitative. So it does not change our qualitative understanding of how the strong force works. But there remains some quantitative effect, which we need to take into account.

There is one more important side effect. When I calculated the effects of the ambiguity, I learned also to control how the ambiguity manifests. This does not alter that there is an ambiguity, nor that it has consequences. But it allows others to reproduce how I controlled the ambiguity. This is important because now two results from different sources can be put together, and when using the same control they will fit such that for experimental observables the ambiguity cancels. And thus we have achieved the goal.

To be fair, however, this is currently at the level of operative control. It is not yet a mathematically well-defined and proven procedure. As in so many cases, this still needs to be developed. But having operative control makes it easier to develop rigorous control than starting without it. So, progress has been made.

by Axel Maas at July 19, 2017 08:00 AM

Axel Maas - Looking Inside the Standard Model

Using evolution for particle physics
(I will start to illustrate the entries with some simple sketches. I am not very experienced with this, and thus they will be quite basic. But as I make more of them I should gain experience, and they should eventually become better.)

This entry will be on the recently started bachelor thesis of Raphael Wagner.

He is addressing the following problem. One of the mainstays of our research is computer simulations. But our computer simulations are not exact. They work by simulating a physical system many times with different starts. The final result is then an average over all the simulations. There is an (almost) infinite number of possible starts. Thus, we cannot include them all. As a consequence, our average is not the exact value we are looking for. Rather, it is an estimate. We can also estimate the range around our estimate in which the real result should lie.

This is sketched in the following picture

The black line is our estimate and the red lines give the range where the true value should be. From left to right some parameter runs. In the case of the thesis, the parameter is the time. The value is roughly the probability for a particle to survive this time. So we have an estimate for the survival probability.

Fortunately, we know a little more. From quite basic principles we know that this survival probability cannot depend on the time in an arbitrary way. Rather, it has a particular mathematical form. This function depends only on a very small set of numbers. The most important one is the mass of the particle.

What we then do is to start with some theory. We simulate it. And then we extract from such a survival probability the masses of the particles. Yes, we do not know them beforehand. This is because the masses of particles are changed in a quantum theory by quantum effects. It is these effects which we simulate, to get a final value for the masses.

Up to now, we have determined the mass in a very simple-minded way: we just look for the numbers in the mathematical function which bring it closest to the data. That seems reasonable. Unfortunately, the function is not so simple. Thus, you can show mathematically that this does not necessarily give the best result. You can imagine this in the following way: imagine you want to find the deepest valley in an area. Surely, walking downhill will get you into a valley. But walking only downhill will usually not get you into the deepest one:

But this is the way we determine the numbers so far. So there may be other options.

There is a different possibility. In the picture of the hills, you could instead deploy a number of ants, of which some prefer to walk up, some down, and some switch between the two. The ants live, die, and reproduce. Now, if you give the ants more to eat when they live in a deeper valley, at some point evolution will bring the population to live in the deepest valley:

And then you have what you want.

This is called a genetic algorithm. It is used in many areas of engineering. The processor of the computer or smartphone you use to read this has likely been optimized using such algorithms.

The bachelor thesis is now to apply the same idea to find better estimates for the masses of the particles in our simulations. This requires understanding what would be the equivalent of the depth of the valley and the food for the ants, and how long we let evolution run its course. Then, we only have to monitor the (virtual) ants to find our prize.
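As a toy illustration of this idea, here is a minimal genetic-algorithm sketch in Python. Everything in it is my own illustration, not code from the thesis: the survival probability is assumed to take a single-exponential form A·exp(−m·t), and the population size, mutation rate, and data values are arbitrary choices.

```python
import math
import random

def correlator(t, amplitude, mass):
    # Assumed form of the survival probability: A * exp(-m * t)
    return amplitude * math.exp(-mass * t)

def make_data(true_amplitude=1.0, true_mass=0.5, n_times=20, noise=0.01, seed=1):
    # Fake "simulation" output: the true curve with a little relative noise
    rng = random.Random(seed)
    return [(t, correlator(t, true_amplitude, true_mass) * (1 + rng.gauss(0, noise)))
            for t in range(n_times)]

def fitness(params, data):
    # A deeper valley means a smaller squared deviation from the data,
    # so we negate it: higher fitness = better fit
    amplitude, mass = params
    return -sum((correlator(t, amplitude, mass) - value) ** 2 for t, value in data)

def evolve(data, pop_size=40, generations=200, seed=2):
    rng = random.Random(seed)
    # The "ants": random initial guesses for (amplitude, mass)
    population = [(rng.uniform(0.1, 2.0), rng.uniform(0.05, 2.0))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, data), reverse=True)
        survivors = population[: pop_size // 2]     # the best-fed half survives
        children = [(a * (1 + rng.gauss(0, 0.05)),  # mutated offspring
                     m * (1 + rng.gauss(0, 0.05)))
                    for a, m in survivors]
        population = survivors + children
    return max(population, key=lambda p: fitness(p, data))

data = make_data()
best_amplitude, best_mass = evolve(data)
```

Running this recovers a mass close to the 0.5 used to generate the toy data; the real problem replaces the fake data with actual simulation output and a more complicated fit function.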

by Axel Maas at July 19, 2017 07:53 AM

July 18, 2017

Symmetrybreaking - Fermilab/SLAC

Shaking the dark matter paradigm

A theory about gravity challenges our understanding of the universe.

Gravity vs. Dark Matter Reflection alternatives with dark matter on the left and no dark matter on the right.

For millennia, humans held a beautiful belief. Our planet, Earth, was at the center of a vast universe, and all of the planets and stars and celestial bodies revolved around us. This geocentric model, though it had floated around since the 6th century BCE, was written in its most elegant form by Claudius Ptolemy in 140 AD.

When this model encountered problems, such as the retrograde motions of planets, scientists reworked the data to fit the model by coming up with phenomena such as epicycles, or mini orbits.

It wasn’t until 1543, 1400 years later, that Nicolaus Copernicus set in motion a paradigm shift that would give way to centuries of new discoveries. According to Copernicus’ radical theory, Earth was not the center of the universe but simply one of a long line of planets orbiting around the sun.

But even as evidence that we lived in a heliocentric system piled up and scientists such as Galileo Galilei perfected the model, society held onto the belief that the entire universe orbited around Earth until the early 19th century.

To Erik Verlinde, a theoretical physicist at the University of Amsterdam, the idea of dark matter is the geocentric model of the 21st century. 

“What people are doing now is allowing themselves free parameters to sort of fit the data,” Verlinde says. “You end up with a theory that has so many free parameters it's hard to disprove.”

Dark matter, an as-yet-undetected form of matter that scientists believe makes up more than a quarter of the mass and energy of the universe, was first theorized when scientists noticed that stars at the outer edges of galaxies and galaxy clusters were moving much faster than Newton’s theory of gravity said they should. Up until this point, scientists have assumed that the best explanation for this is that there must be missing mass in the universe holding those fast-moving stars in place in the form of dark matter. 

But Verlinde has come up with a set of equations that explains these galactic rotation curves by viewing gravity as an emergent force — a result of the quantum structure of space.

The idea is related to dark energy, which scientists think is the cause for the accelerating expansion of our universe. Verlinde thinks that what we see as dark matter is actually just interactions between galaxies and the sea of dark energy in which they’re embedded.

“Before I started working on this I never had any doubts about dark matter,” Verlinde says. “But then I started thinking about this link with quantum information and I had the idea that dark energy is carrying more of the dynamics of reality than we realize.”

Verlinde is not the first theorist to come up with an alternative to dark matter. Many feel that his theory echoes the sentiment of physicist Mordehai Milgrom’s equations of “modified Newtonian dynamics,” or MOND. Just as Einstein modified Newton’s laws of gravity to fit to the scale of planets and solar systems, MOND modifies Einstein’s laws of gravity to fit to the scale of galaxies and galaxy clusters.

Verlinde, however, makes the distinction that he’s not deriving the equations of MOND; rather, he’s deriving what he calls a “scaling relation,” or a volume effect of space-time that only becomes important at large distances.

Stacy McGaugh, an astrophysicist at Case Western Reserve University, says that while MOND is primarily the notion that the effective force of gravity changes with acceleration, Verlinde’s ideas are more of a ground-up theoretical work.

“He's trying to look at the structure of space-time and see if what we call gravity is a property that emerges from that quantum structure, hence the name emergent gravity,” McGaugh says. “In principle, it's a very different approach that doesn't necessarily know about MOND or have anything to do with it.”

One of the appealing things about Verlinde’s theory, McGaugh says, is that it naturally produces evidence of MOND in a way that “just happens.” 

“That's the sort of thing that one looks for,” McGaugh says. “There needs to be some basis of why MOND happens, and this theory might provide it.”

Verlinde’s ideas have been greeted with a fair amount of skepticism in the scientific community, in part because, according to Kathryn Zurek, a theoretical physicist at the US Department of Energy’s Lawrence Berkeley National Laboratory, his theory leaves a lot unexplained. 

“Theories of modified gravity only attempt to explain galactic rotation curves [those fast-moving stars],” Zurek says. “As evidence for dark matter, that's only one very small part of the puzzle. Dark matter explains a whole host of observations from the time of the cosmic microwave background when the universe was just a few hundred thousand years old through structure formation all the way until today.”


Inline: Shaking the dark matter paradigm
Illustration by Ana Kova

Zurek says that in order for scientists to start lending weight to his claims, Verlinde needs to build the case around his theory and show that it accommodates a wider range of observations. But, she says, this doesn’t mean that his ideas should be written off.

“One should always poke at the paradigm,” Zurek says, “even though the cold dark matter paradigm has been hugely successful, you always want to check your assumptions and make sure that you're not missing something that could be the tip of the iceberg.”

McGaugh had a similar crisis of faith in dark matter when he was working on an experiment in which MOND’s predictions were the only ones borne out by his data. He had been making observations of low-surface-brightness galaxies, in which stars are spread more thinly than in galaxies such as the Milky Way, where the stars are crowded relatively close together.

McGaugh says his results did not make sense to him in the standard dark matter context, and it turned out that the properties that were confusing to him had already been predicted by Milgrom’s MOND equations in 1983, before people had even begun to take seriously the idea of low-surface-brightness galaxies.

Although McGaugh’s experience caused him to question the existence of dark matter and instead argue for MOND, others have not been so quick to join the cause.

“We subscribe to a particular paradigm and most of our thinking is constrained within the boundaries of that paradigm, and so if we encounter a situation in which there is a need for a paradigm shift, it's really hard to think outside that box,” McGaugh says. “Even though we have rules for the game as to when you're supposed to change your mind and we all in principle try to follow that, in practice there are some changes of mind that are so big that we just can't overcome our human nature.”

McGaugh says that many of his colleagues believe that there’s so much evidence for dark matter that it’s a waste of time to consider any alternatives. But he believes that all of the evidence for dark matter might instead be an indication that there is something wrong with our theories of gravity. 

“I kind of worry that we are headed into another thousand years of dark epicycles,” McGaugh says.

But according to Zurek, if MOND came up with anywhere near the evidence that has been amassed for the dark matter paradigm, people would be flocking to it. The problem, she says, is that at the moment MOND just does not come anywhere near to passing the number of tests that cold dark matter has. She adds that there are some physicists who argue that the cold dark matter paradigm can, in fact, explain those observations about low-surface-brightness galaxies.

Recently, Case Western held a workshop wherein they gathered together representatives from different communities, including those working on dark matter models, to discuss dwarf galaxies and the external field effect, which is the notion that very low-density objects will be affected by what’s around them. MOND predicts that the dynamics of a small satellite galaxy will depend on its proximity to its giant host in a way that doesn't happen with dark matter.

McGaugh says that in attendance at the workshop were a group of more philosophically inclined people who use a set of rules to judge theories, which they’ve put together by looking back at how theories have developed in the past. 

“One of the interesting things that came out of that was that MOND is doing better on that score card,” he says. “It’s more progressive in the sense that it's making successful predictions for new phenomena whereas in the case of dark matter we've had to repeatedly invoke ad hoc fixes to patch things up.”

Verlinde’s ideas, however, didn’t come up much within the workshop. While McGaugh says that the two theories are closely enough related that he would hope the same people pursuing MOND would be interested in Verlinde’s theory, he added that not everyone shares that attitude. Many are waiting for more theoretical development and further observational tests.

“The theory needs to make a clear prediction so that we can then devise a program to go out and test it,” he says. “It needs to be further worked out to get beyond where we are now.”

Verlinde says he realizes that he still needs to develop his ideas further and extend them to explain things such as the formation of galaxies and galaxy clusters. Although he has mostly been working on this theory on his own, he recognizes the importance of building a community around his ideas.

Over the past few months, he has been giving presentations at different universities, including Princeton, Harvard, Berkeley, Stanford, and Caltech. There is currently a large community of people working on ideas of quantum information and gravity, he says, and his main goal is to get more people, in particular string theorists, to start thinking about his ideas to help him improve them.

“I think that when we understand gravity better and we use those equations to describe the evolution of the universe, we may be able to answer questions more precisely about how the universe started,” Verlinde says. “I really think that the current description is only part of the story and there's a much deeper way of understanding it—maybe an even more beautiful way.”


by Ali Sundermier at July 18, 2017 01:00 PM

July 16, 2017

Matt Strassler - Of Particular Significance

Ongoing Chance of Northern (or Southern) Lights

As forecast, the cloud of particles from Friday’s solar flare (the “coronal mass ejection”, or “CME”) arrived at our planet a few hours after my last post, early in the morning New York time. If you’d like to know how I knew that it had reached Earth, and how I know what’s going on now, scroll down to the end of this post and I’ll show you the data I was following, which is publicly available at all times.

So far the resulting auroras have stayed fairly far north, and so I haven’t seen any — though they were apparently seen last night in Washington and Wyoming, and presumably easily seen in Canada and Alaska. [Caution: sometimes when people say they’ve been “seen”, they don’t quite mean that; I often see lovely photos of aurora that were only visible to a medium-exposure camera shot, not to the naked eye.]  Or rather, I should say that the auroras have stayed fairly close to the Earth’s poles; they were also seen in New Zealand.

Russia and Europe have a good opportunity this evening. As for the U.S.? The storm in the Earth’s magnetic field is still going on, so tonight is still a definite possibility for northern states. Keep an eye out! Look for what is usually a white or green-hued glow, often in swathes or in stripes pointing up from the northern horizon, or even overhead if you’re lucky.  The stripes can move around quite rapidly.

Now, here’s how I knew all this.  I’m no expert on auroras; that’s not my scientific field at all.   But the U.S. Space Weather Prediction Center at the National Oceanic and Atmospheric Administration, which needs to monitor conditions in space in case they should threaten civilian and military satellites or even installations on the ground, provides a wonderful website with lots of relevant data.

The first image on the site provides the space weather overview; a screenshot from the present is shown below, with my annotations.  The upper graph indicates a blast of x-rays (a form of light not visible to the human eye) which is generated when the solar flare, the magnetically-driven explosion on the sun, first occurs.  Then the slower cloud of particles (protons, electrons, and other atomic nuclei, all of which have mass and therefore can’t travel at light’s speed) takes a couple of days to reach Earth.  Its arrival is shown by the sudden jump in the middle graph.  Finally, the lower graph measures how active the Earth’s magnetic field is.  The only problem with that plot is that it tends to be three hours out of date, so beware of that! A “Kp index” of 5 shows significant activity; 6 means that auroras are likely to be moving away from the poles; and 7 or 8 means that the chances in a place like the northern half of the United States are pretty good.  So far, 6 has been the maximum generated by the current flare, but things can fluctuate a little, so 6 or 7 might occur tonight.  Keep an eye on that lower plot; if it drops back down to 4, forget it, but if it’s up at 7, take a look for sure!
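Those Kp thresholds amount to a simple lookup. As a sketch (the threshold values are the ones from the text; the function name and the exact phrasing of each outlook are mine):

```python
def aurora_outlook(kp_index):
    """Rough aurora outlook for a given Kp index, following the thresholds above."""
    if kp_index >= 7:
        return "pretty good chance in the northern half of the United States"
    if kp_index == 6:
        return "auroras likely moving away from the poles"
    if kp_index == 5:
        return "significant activity, but probably still near the poles"
    return "little chance away from the poles"
```

So a reading of 6 or 7 tonight would be worth stepping outside for; a 4 would not.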


Also on the site is data from the ACE satellite.  This satellite sits 950 thousand miles [1.5 million kilometers] from Earth, between Earth and the Sun, which is 93 million miles [150 million kilometers] away.  At that vantage point, it gives us (and our other satellites) a little early warning, of up to an hour, before the cloud of slow particles from a solar flare arrives.  That provides enough lead-time to turn off critical equipment that might otherwise be damaged.  And you can see, in the plot below, how at a certain time in the last twenty-four hours the readings from the satellite, which had been tepid before, suddenly started fluctuating wildly.  That was the signal that the particle cloud had struck the satellite, and would arrive shortly at our location.
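That "up to an hour" of lead time follows from simple arithmetic. A sketch (the two cloud speeds are typical values I am assuming for illustration, not figures from this post):

```python
ACE_DISTANCE_KM = 1.5e6  # ACE sits about 1.5 million km sunward of Earth

def warning_minutes(cloud_speed_km_s):
    # Time for a particle cloud to cross the ACE-to-Earth distance
    return ACE_DISTANCE_KM / cloud_speed_km_s / 60

slow_cloud = warning_minutes(400)    # a slow ~400 km/s cloud
fast_cloud = warning_minutes(1000)   # a fast ~1000 km/s cloud
```

A slow cloud gives roughly an hour of warning; a fast one, closer to twenty-five minutes.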


It’s a wonderful feature of the information revolution that you can get all this scientific data yourself, and not wait around hoping for a reporter or blogger to process it for you.  None of this was available when I was a child, and I missed many a sky show.  A big thank you to NOAA, and to the U.S. taxpayers who make their work possible.



Filed under: Astronomy Tagged: astronomy, auroras, space

by Matt Strassler at July 16, 2017 09:09 PM

July 15, 2017

Matt Strassler - Of Particular Significance

Lights in the Sky (maybe…)

The Sun is busy this summer. The upcoming eclipse on August 21 will turn day into deep twilight and transfix millions across the United States.  But before we get there, we may, if we’re lucky, see darkness transformed into color and light.

On Friday July 14th, a giant sunspot in our Sun’s upper regions, easily visible if you project the Sun’s image onto a wall, generated a powerful flare.  A solar flare is a sort of magnetically powered explosion; it produces powerful electromagnetic waves and often, as in this case, blows a large quantity of subatomic particles from the Sun’s corona. The latter is called a “coronal mass ejection.” It appears that the cloud of particles from Friday’s flare is large, and headed more or less straight for the Earth.

Light, visible and otherwise, is an electromagnetic wave, and so the electromagnetic waves generated in the flare — mostly ultraviolet light and X-rays — travel through space at the speed of light, arriving at the Earth in eight and a half minutes. They cause effects in the Earth’s upper atmosphere that can disrupt radio communications, or worse.  That’s another story.

But the cloud of subatomic particles from the coronal mass ejection travels a few hundred times slower than light, and it takes about two or three days to reach the Earth.  The wait is on.
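Both travel times can be checked with one-line arithmetic. A sketch (the factor of 300 is just one reading of "a few hundred times slower"):

```python
AU_KM = 149.6e6      # Earth-Sun distance in km (about 93 million miles)
C_KM_S = 299_792     # speed of light in km/s

light_minutes = AU_KM / C_KM_S / 60        # the flare's light: about 8.3 minutes
cloud_speed = C_KM_S / 300                 # the particle cloud: roughly 1000 km/s
cloud_days = AU_KM / cloud_speed / 86400   # about 1.7 days, of the order quoted
```

A somewhat slower cloud pushes the arrival toward the three-day end of the range.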

Bottom line: a huge number of high-energy subatomic particles may arrive in the next 24 to 48 hours. If and when they do, the electrically charged particles among them will be trapped in, and shepherded by, the Earth’s magnetic field, which will drive them spiraling into the atmosphere close to the Earth’s polar regions. And when they hit the atmosphere, they’ll strike atoms of nitrogen and oxygen, which in turn will glow. Aurora Borealis, Northern Lights.

So if you live in the upper northern hemisphere, including Europe, Canada and much of the United States, keep your eyes turned to the north (and to the south if you’re in Australia or southern South America) over the next couple of nights. Dark skies may be crucial; the glow may be very faint.

You can also keep abreast of the situation, as I will, using NOAA data, available for instance at

The plot on the upper left of that website, an example of which is reproduced below, shows three types of data. The top graph shows the amount of X-rays impacting the atmosphere; the big jump on the 14th is Friday’s flare. And if and when the Earth’s magnetic field goes nuts and auroras begin, the bottom plot will show the so-called “Kp Index” climbing to 5, 6, or hopefully 7 or 8. When the index gets that high, there’s a much greater chance of seeing auroras much further away from the poles than usual.

The latest space weather overview plot

Keep an eye also on the data from the ACE satellite, lower down on the website; it’s placed to give Earth an early warning, so when its data gets busy, you’ll know the cloud of particles is not far away.

Wishing you all a great sky show!

Filed under: LHC News

by Matt Strassler at July 15, 2017 09:54 PM

July 13, 2017

Symmetrybreaking - Fermilab/SLAC

SLAC accelerator plans appear in Smithsonian art exhibit

The late artist June Schwarcz found inspiration in some unusual wrapping paper her husband brought home from the lab.


Photograph of June Schwarcz at home

Leroy Schwarcz, one of the first engineers hired to build SLAC National Accelerator Laboratory’s original 2-mile-long linear accelerator, thought his wife might like to use old mechanical drawings of the project as wrapping paper. So, he brought them home.

His wife, acclaimed enamelist June Schwarcz, had other ideas.

Today, works called SLAC Drawing III, VII and VIII, created in 1974 and 1975 from electroplated copper and enamel, form a unique part of a retrospective at the Smithsonian’s Renwick Gallery in Washington, D.C.

Among the richly formed and boldly textured and colored vessels that make up the majority of June’s oeuvre, the SLAC-inspired panels stand out for their fidelity to the mechanical design of their inspiration. 

The description next to the display at the gallery describes the “SLAC Blueprints” as resembling “ancient pictographs drawn on walls of a cave or glyphs carved in stone.” The designs appear to depict accelerator components, such as electromagnets and radio frequency structures.

According to Harold B. Nelson, who curated the exhibit with Bernard N. Jazzar, “The panels are quite unusual in the subtle color palette she chose; in her use of predominantly opaque enamels; in her reliance on a rectilinear, geometric format for her compositions; and in her reference in the work to machines, plans, numbers, and mechanical parts. 

“We included them because they are extremely beautiful and visually powerful. Together they form an important group within her body of work.”

Making history

June and Leroy Schwarcz met in the late 1930s and were married in 1943. Two years later they moved to Chicago where Leroy would become chief mechanical engineer for the University of Chicago’s synchrocyclotron, which was at the time the highest-energy proton accelerator in the world.

Having studied art and design at the Pratt Institute in Brooklyn several years earlier, June found her way into a circle of notable artists in Chicago, including Bauhaus legend László Moholy-Nagy, founder of Chicago’s Institute of Design.

Around 1954, June was introduced to enameling and shortly thereafter began to exhibit her art. She and her husband had two children and relocated several times during the 1950s for Leroy’s work. In 1958 they settled in Sausalito, California, where June set up her studio in the lower level of their hillside home. 

In 1961, Leroy became the first mechanical engineer hired by Stanford University to work on “Project M,” which would become the famous 2-mile-long linear accelerator at SLAC. He oversaw the engineers during early design and construction of the linac, which eventually enabled Nobel-winning particle physics research.

June and Leroy’s daughter, Kim Schwarcz, who made a living as a glass blower and textile artist until the mid 1980s and occasionally exhibited with her mother, remembers those early days at the future lab.

“Before SLAC was built, the offices were in Quonset huts, and my father used to bring me down, and I would bicycle all over the campus,” she recalled. “Pief was a family friend and so was Bob Mozley. Mom introduced Bob to his future wife…It was a small community and a really nice community.” 

W.K.H. “Pief” Panofsky was the first director of SLAC; he and Mozley were renowned SLAC physicists and national arms control experts.

Finding beauty

Kim was not surprised that her mother made art based on the SLAC drawings. She remembers June photographing the foggy view outside their home and getting inspiration from nature, ethnic art and Japanese clothing.

“She would take anything and make something out of it,” Kim said. “She did an enamel of an olive oil can once and a series called Adam’s Pants that were based on the droopy pants my son wore as a teen.”

But the fifteen SLAC-inspired compositions were unique and a family favorite; Kim and her brother Carl both own some of them, and others are at museums.

In a 2001 oral history interview with the Smithsonian Institution's Archives of American Art, June explained the detailed work involved in creating the SLAC drawings by varnishing, scribing, electroplating and enameling a copper sheet: “I'm primarily interested in having things that are beautiful, and of course, beauty is a complicated thing to devise, to find.”

Engineering art

Besides providing inspiration in the form of technical drawings, Leroy was influential in June’s career in other ways.

Around 1962 he introduced her to Jimmy Pope at the SLAC machine shop, who showed June how to do electroplating, a signature technique of her work. Electroplating involves using an electric current to deposit a coating of metal onto another material. She used it to create raised surfaces and to transform thin sheets of copper—which she stitched together using copper wire—into substantial, free-standing vessel-like forms. She then embellished these sculptures with colored enamel.

Leroy built a 30-gallon plating bath and other tools for June’s art-making at their shared workshop. 

“Mom was tiny, 5 feet tall, and she had these wobbly pieces on the end of a fork that she would put into a hot kiln. It was really heavy. Dad made a stand so she could rest her arm and slide the piece in,” Kim recalls.

“He was very inventive in that way, and very creative himself,” she said. “He did macramé in the 1960s, made wooden spoons and did scrimshaw carvings on bone that were really good.”

Kim remembers the lower-level workshop as a chaotic and inventive space. “For the longest time, there was a wooden beam in the middle of the workshop we would trip over. It was meant for a boat dad wanted to build—and eventually did build after he retired,” she said.

At SLAC Leroy’s work was driven by his “amazingly good intuition,” according to a tribute written by Mozley upon his colleague’s death in 1993. Even when he favored crude drawings to exact math, “his intuitive designs were almost invariably right,” he wrote. 

After the accelerator was built, Leroy turned his attention to the design, construction and installation of a streamer chamber scientists at SLAC used as a particle detector. In 1971 he took a leave of absence from the California lab to go back to Chicago and move the synchrocyclotron’s 2000-ton magnet from the university to Fermi National Accelerator Laboratory. 

“[Leroy] was the only person who could have done this because, although drawings existed, knowledge of the assembly procedures existed only in the minds of Leroy and those who had helped him put the cyclotron together,” Mozley wrote.

Beauty on display

June continued making art at her Sausalito home studio up until two weeks before her death in 2015 at the age of 97. A 2007 video shows the artist at work there 10 years prior to her passing. 

After Leroy died, her own art collection expanded on the shelves and walls of her home.

“As a kid, the art was just what mom did, and it never changed,” Kim remembers. “She couldn’t wait for us to go to school so she could get to work, and she worked through health challenges in later years.”

The Smithsonian exhibit is a unique collection of June’s celebrated work, with its traces of a shared history with SLAC and one of the lab’s first mechanical engineers.

“June had an exceptionally inquisitive mind, and we think you get a sense of the rich breadth of her vision in this wonderful body of work,” says curator Jazzar.

June Schwarcz: Invention and Variation is the first retrospective of the artist’s work in 15 years and includes almost 60 works. The exhibit runs through August 27 at the Smithsonian American Art Museum Renwick Gallery. 

Editor's note: Some of the information from this article was derived from an essay written by Jazzar and Nelson that appears in a book based on the exhibition with the same title.

by Angela Anderson at July 13, 2017 01:00 PM

July 12, 2017

Marco Frasca - The Gauge Connection

Something to say but not yet…

Last week I was in Montpellier to attend the QCD 17 Conference, hosted at the CNRS, whose main organizer is Stephan Narison. Many people from CERN took part, presenting new results very close to the main summer conferences. This year, QCD 17 was held in conjunction with EPSHEP 2017, where the new results from the LHC were first presented. This meant that the contents of the talks at the two conferences overlapped within a matter of a few hours.

On Friday, the last day of the conference, I posted the following tweet after attending the talk by Shunsuke Honda on behalf of ATLAS at QCD 17:

and the reason was this slide

The title of the talk was “Cross sections and couplings of the Higgs Boson from ATLAS”. As you can read from it, there is a deviation of about 2 sigma from the Standard Model for the Higgs decaying to ZZ (4l) in the VBF production channel. They can still claim agreement, but it is interesting anyway (maybe we are missing something?). The previous day at EPSHEP 2017, Ruchi Gupta, on behalf of ATLAS, presented an identical talk with the title “Measurement of the Higgs boson couplings and properties in the diphoton, ZZ and WW decay channels using the ATLAS detector”, and the slide was the following:

The result is still there, but with a somewhat more sober presentation. What does this mean? Presently, not much. We are still within the Standard Model, even if something seems to be peeping out. In order to claim a discovery, this effect should be seen with a smaller error and at CMS too. The implication would be a more complex spectrum for the Higgs sector, with a possible new understanding of naturalness if such a spectrum had no formal upper bound. People at CERN have promised more data in the coming weeks. Let us see what happens to this small effect.

Filed under: Conference, Particle Physics, Physics Tagged: ATLAS, CERN, Higgs decay

by mfrasca at July 12, 2017 12:56 PM

July 11, 2017

Symmetrybreaking - Fermilab/SLAC

A new model for standards

In an upcoming refresh, particle physics will define units of measurement such as the meter, the kilogram and the second.

Image of yellow ruler background with moon and red and Planck graphics

While America remains obstinate about using Imperial units such as miles, pounds and degrees Fahrenheit, most of the world has agreed that using units that are actually divisible by 10 is a better idea. The metric system, also known as the International System of Units (SI), is the most comprehensive and precise system for measuring the universe that humans have developed. 

In 2018, the 26th General Conference on Weights and Measures will convene and likely adopt revised definitions for the seven base metric system units for measuring: length, mass, time, temperature, electric current, luminosity and quantity.

The modern metric system owes its precision to particle physics, which has the tools to investigate the universe more precisely than any microscope. Measurements made by particle physicists can be used to refine the definitions of metric units. In May, a team of German physicists at the Physikalisch-Technische Bundesanstalt made the most precise measurements yet of the Boltzmann constant, which will be used to define units of temperature.

Since the metric system was established in the 1790s, scientists have attempted to give increasingly precise definitions to these units. The next update will define every base unit using fundamental constants of the universe that have been derived by particle physics.

meter (distance): 

Starting in 1799, the meter was defined by a prototype meter bar, which was just a platinum bar. Physicists eventually realized that distance could be defined by the speed of light, which has been measured to an accuracy of one part in a billion using an interferometer (interestingly, the same type of detector the LIGO collaboration used to discover gravitational waves). The meter is currently defined as the distance traveled by light (in a vacuum) in 1/299,792,458 of a second, and will remain effectively unchanged in 2018.
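
The definition can be checked with a few lines of arithmetic. A minimal sketch (the function name is mine; the only physics input is the number quoted above):

```python
# The meter is defined so that light in vacuum travels exactly
# 299,792,458 meters in one second.
C = 299_792_458  # speed of light in m/s, exact by definition

def light_travel_distance(seconds):
    """Distance in meters that light travels in vacuum in the given time."""
    return C * seconds

# One meter corresponds to 1/299,792,458 of a second of light travel...
print(light_travel_distance(1 / 299_792_458))  # ~1.0 m
# ...and in one nanosecond light covers roughly 30 centimeters.
print(light_travel_distance(1e-9))  # ~0.3 m
```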

kilogram (mass):

For over a century, the standard kilogram has been a small platinum-iridium cylinder housed at the International Bureau of Weights and Measures in France. But even its precise mass fluctuates due to factors such as accumulation of microscopic dust. Scientists hope to redefine the kilogram in 2018 by setting the value of Planck’s constant to exactly 6.626070040×10⁻³⁴ kilograms times meters squared per second. Planck’s constant, the fundamental constant of quantum mechanics, relates a photon’s energy to its frequency. This fundamental value, which is represented with the letter h, is integral to calculating energies in particle physics.
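
Planck’s constant is most familiar from the photon energy relation E = h·f. A small illustrative sketch using the fixed value above (the example frequency is simply a convenient choice, borrowed from the candela definition below):

```python
H = 6.626070040e-34  # Planck constant in J*s (kg*m^2/s), the proposed fixed value

def photon_energy(frequency_hz):
    """Energy in joules of a single photon of the given frequency, E = h*f."""
    return H * frequency_hz

# A green-light photon (~540 THz) carries roughly 3.6e-19 joules.
print(photon_energy(540e12))
```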

second (time):

The earliest seconds were defined as divisions of time between full moons. Later, seconds were defined by solar days, and eventually by the time it took Earth to revolve around the sun. Today, seconds are defined by atomic time, which is precise to 1 part in 10 billion. Atomic time is calculated from the periods of radiation emitted by atoms, a measurement that relies heavily on particle physics techniques. One second is currently defined as 9,192,631,770 periods of the radiation from a transition in the cesium-133 atom and will remain effectively unchanged.

kelvin (temperature):

Kelvin is the temperature scale that starts at the coldest possible state of matter. Currently, a kelvin is defined by the triple point of water—where water can exist as a solid, liquid and gas. The triple point is 273.16 kelvin, so a single kelvin is 1/273.16 of the triple point. But because water can never be completely pure, impurities can influence the triple point. In 2018 scientists hope to redefine the kelvin by setting the value of Boltzmann’s constant to exactly 1.38064852×10⁻²³ joules per kelvin. Boltzmann’s constant links the movement of particles in a gas (the average kinetic energy) to the temperature of the gas. Denoted by the symbol k, the Boltzmann constant is ubiquitous throughout physics calculations that involve temperature and entropy.
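
To see concretely what Boltzmann’s constant links, here is a minimal sketch of the textbook ideal-gas relation E = (3/2)kT, using the fixed value quoted above (the function name is mine):

```python
K_B = 1.38064852e-23  # Boltzmann constant in J/K, the proposed fixed value

def mean_kinetic_energy(temperature_kelvin):
    """Average translational kinetic energy per particle of an ideal gas,
    E = (3/2) * k * T."""
    return 1.5 * K_B * temperature_kelvin

# At the triple point of water (273.16 K), each gas particle carries
# roughly 5.7e-21 joules of translational kinetic energy on average.
print(mean_kinetic_energy(273.16))
```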

ampere (electric current):

André-Marie Ampère, who is often considered the father of electrodynamics, has the honor of having the basic unit of electric current named after him. Right now, the ampere is defined by the amount of current required to produce a force of 2×10⁻⁷ newtons for each meter between two parallel conductors of infinite length. Naturally, it’s a bit hard to come by things of infinite length, so the proposed definition instead defines the ampere by the fundamental charge of a particle. This new definition would rely on the charge of the electron, which will be set to 1.6021766208×10⁻¹⁹ ampere-seconds.
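
Pinning the ampere to the electron charge makes current literally countable: a current is just so many elementary charges per second. A quick sketch (function name mine, value from the text):

```python
E_CHARGE = 1.6021766208e-19  # elementary charge in coulombs (ampere-seconds)

def electrons_per_second(current_amperes):
    """Number of elementary charges passing a point each second
    for a given current."""
    return current_amperes / E_CHARGE

# One ampere corresponds to about 6.24e18 electrons per second.
print(f"{electrons_per_second(1.0):.3e}")
```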

candela (luminosity):

The last of the base SI units to be established, the candela measures luminosity—what we typically refer to as brightness. Early standards for the candela used a phenomenon from quantum mechanics called “black body radiation.” This is the light that all objects radiate as a function of their heat. Currently, the candela is defined more fundamentally as 1/683 watt per steradian at a frequency of 540×10¹² hertz, a definition which will remain effectively unchanged. Hard to picture? A candle, conveniently, emits about one candela of luminous intensity.

mole (quantity):

Different from all the other base units, the mole measures quantity alone. Over hundreds of years, scientists starting with Amedeo Avogadro worked to better understand how the number of atoms was related to mass, leading to the current definition of the mole: the number of atoms in 12 grams of carbon-12. This number, which is known as Avogadro’s constant and used in many calculations of mass in particle physics, is about 6×10²³. To make the mole more precise, the new definition would set Avogadro’s constant to exactly 6.022140857×10²³ per mole, decoupling it from the kilogram.
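
A minimal sketch of how the fixed Avogadro constant gets used in practice (the function name is mine; the 12-gram carbon-12 example is the historical definition quoted above):

```python
N_A = 6.022140857e23  # Avogadro constant, proposed exact value, per mole

def atoms_in_sample(mass_grams, molar_mass_grams_per_mol):
    """Number of atoms in a sample: moles of substance times Avogadro's
    constant."""
    return (mass_grams / molar_mass_grams_per_mol) * N_A

# The historical definition: 12 grams of carbon-12 is exactly one mole.
print(atoms_in_sample(12.0, 12.0))  # equals N_A
```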

by Daniel Garisto at July 11, 2017 03:42 PM

July 10, 2017

Jon Butterworth - Life and Physics

July 07, 2017

Symmetrybreaking - Fermilab/SLAC

Quirks of the arXiv

Sometimes, physics papers turn funny.

Header: Quirks of the arXiv

Since it went up in 1991, the arXiv (pronounced like the word “archive”) has been a hub for scientific papers in quantitative fields such as physics, math and computer science. Many of its million-plus papers are serious products of intense academic work that are later published in peer-reviewed journals. Still, some manage to have a little more character than the rest. For your consideration, we’ve gathered seven of the quirkiest physics papers on the arXiv.

Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?

In 2011, an experiment appeared to find particles traveling faster than the speed of light. To spare readers uninterested in lengthy calculations demonstrating the unlikeliness of this probably impossible phenomenon, the abstract for this analysis cut to the chase.

Paper Thumbnail

Quantum Tokens for Digital Signatures

Sometimes the best way to explain something is to think about how you might explain it to a child—for example, as a fairy tale.

Paper Thumbnail

A dialog on quantum gravity

Unless you’re intimately familiar with string theory and loop quantum gravity, this Socratic dialogue is like Plato’s Republic: It’s all Greek to you.

Paper Thumbnail

The Proof of Innocence

Pulled over after he was apparently observed failing to halt at a stop sign, the author of this paper, Dmitri Krioukov, was determined to prove his innocence—as only a scientist would.

Using math, he demonstrated that, to a police officer measuring the angular speed of Krioukov’s car, a brief obstruction from view could cause an illusion that the car did not stop. Krioukov submitted his proof to the arXiv; the judge ruled in his favor.
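
The geometric heart of the argument is that angular speed, unlike linear speed, spikes as a car passes the observer: for an observer at perpendicular distance r0 from the road, a car at position x moving at speed v subtends an angular speed of v·r0/(r0² + x²). A small sketch of that relation (the numbers are invented for illustration, not taken from Krioukov’s paper):

```python
def angular_speed(v, x, r0):
    """Angular speed in rad/s of a car at position x (meters along a straight
    road) moving at linear speed v (m/s), as seen by an observer at
    perpendicular distance r0 (m): omega = v * r0 / (r0**2 + x**2)."""
    return v * r0 / (r0**2 + x**2)

# Far down the road, the angular speed is tiny even at full linear speed...
far = angular_speed(v=10.0, x=100.0, r0=10.0)
# ...and it spikes roughly a hundredfold as the car passes the observer.
near = angular_speed(v=10.0, x=0.0, r0=10.0)
print(near / far)
```

That sharp spike is why a brief stop near x = 0, hidden behind an obstruction, can leave an observed angular-speed profile that looks like constant motion.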

Paper Thumbnail

Quantum weak coin flipping with arbitrarily small bias

Not many papers in the arXiv illustrate their point with a tale involving human sacrifice. There’s something about quantum informatics that brings out the weird side of physicists.

Paper Thumbnail
Paper Thumbnail

10 = 6 + 4

A theorist calculated an alternative decomposition of 10 dimensions into 6 spacetime dimensions with local Conformal symmetry and 4-dimensional compact Internal Symmetry Space. For the title of his paper, he decided to go with something a little simpler.

Paper Thumbnail

Would Bohr be born if Bohm were born before Born?

This tricky tongue-twisting treatise theorizes a tangential timeline to testify that taking up quantum theories turns on timeliness.

Paper Thumbnail

by Daniel Garisto at July 07, 2017 01:00 PM

July 03, 2017

Symmetrybreaking - Fermilab/SLAC

When was the Higgs actually discovered?

The announcement on July 4 was just one part of the story. Take a peek behind the scenes of the discovery of the Higgs boson.

Photo from the back of a crowded conference room on the day of the Higgs announcement

Joe Incandela sat in a conference room at CERN and watched with his arms folded as his colleagues presented the latest results on the hunt for the Higgs boson. It was December 2011, and they had begun to see the very thing they were looking for—an unexplained bump emerging from the data.

“I was far from convinced,” says Incandela, a professor at the University of California, Santa Barbara and the former spokesperson of the CMS experiment at the Large Hadron Collider.

For decades, scientists had searched for the elusive Higgs boson: the holy grail of modern physics and the only piece of the robust and time-tested Standard Model that had yet to be found.

The construction of the LHC was motivated in large part by the absence of this fundamental component from our picture of the universe. Without it, physicists couldn’t explain the origin of mass or the divergent strengths of the fundamental forces.

“Without the Higgs boson, the Standard Model falls apart,” says Matthew McCullough, a theorist at CERN. “The Standard Model was fitting the experimental data so well that most of the theory community was convinced that something playing the role of Higgs boson would be discovered by the LHC.”

The Standard Model predicted the existence of the Higgs but did not predict what the particle’s mass would be. Over the years, scientists had searched for it across a wide range of possible masses. By 2011, there was only a tiny region left to search; everything else had been excluded by previous generations of experimentation. If the predicted Higgs boson were anywhere, it had to be there, right where the LHC scientists were looking.

But Incandela says he was skeptical about these preliminary results. He knew that the Higgs could manifest itself in many different forms, and this particular channel was extremely delicate.

“A tiny mistake or an unfortunate distribution of the background events could make it look like a new particle is emerging from the data when in reality, it’s nothing,” Incandela says.

A common mantra in science is that extraordinary claims require extraordinary evidence. The challenge isn’t just collecting the data and performing the analysis; it’s deciding if every part of the analysis is trustworthy. If the analysis is bulletproof, the next question is whether the evidence is substantial enough to claim a discovery. And if a discovery can be claimed, the final question is what, exactly, has been discovered? Scientists can have complete confidence in their results but remain uncertain about how to interpret them.

In physics, it’s easy to say what something is not but nearly impossible to say what it is. A single piece of corroborated, contradictory evidence can discredit an entire theory and destroy an organization’s credibility.

“We’ll never be able to definitively say if something is exactly what we think it is, because there’s always something we don’t know and cannot test or measure,” Incandela says. “There could always be a very subtle new property or characteristic found in a high-precision experiment that revolutionizes our understanding.”

With all of that in mind, Incandela and his team made a decision: From that point on, everyone would refine their scientific analyses using special data samples and a patch of fake data generated by computer simulations covering the interesting areas of their analyses. Then, when they were sure about their methodology and had enough data to make a significant observation, they would remove the patch and use their algorithms on all the real data in a process called unblinding.

“This is a nice way of providing an unbiased view of the data and helps us build confidence in any unexpected signals that may be appearing, particularly if the same unexpected signal is seen in different types of analyses,” Incandela says.
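
In code terms, one common variant of blinding (masking the signal region outright, rather than patching in simulated data as the text describes) looks roughly like this sketch; the mass window and toy events are invented for illustration:

```python
# Hedged illustration of a blind analysis: hide a window of the mass spectrum
# while selection cuts are developed, then "unblind" once the method is frozen.
BLIND_WINDOW = (120.0, 130.0)  # GeV; illustrative window around the bump

def blinded(event_masses, window=BLIND_WINDOW):
    """Return only events outside the blinded mass window (the sidebands)."""
    lo, hi = window
    return [m for m in event_masses if not (lo <= m <= hi)]

masses = [95.2, 110.7, 124.8, 125.3, 125.9, 140.1]  # toy event masses in GeV
sidebands = blinded(masses)  # what analysts see while tuning the analysis
print(sidebands)  # the three events inside 120-130 GeV are hidden

# Unblinding: once the methodology is frozen, run on the full dataset.
full_dataset = masses
```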

A few weeks before July 4, all the different analysis groups met with Incandela to present a first look at their unblinded results. This time the bump was very significant and showing up at the same mass in two independent channels.

“At that point, I knew we had something,” Incandela says. “That afternoon we presented the results to the rest of the collaboration. The next few weeks were among the most intense I have ever experienced.”

Meanwhile, the other general-purpose experiment at the LHC, ATLAS, was hot on the trail of the same mysterious bump.

Andrew Hard was a graduate student at the University of Wisconsin-Madison, working on the ATLAS Higgs analysis with his PhD thesis advisor, Sau Lan Wu.

“Originally, my plan had been to return home to Tennessee and visit my parents over the winter holidays,” Hard says. “Instead, I came to CERN every day for five months—even on Christmas. There were a few days when I didn't see anyone else at CERN. One time I thought some colleagues had come into the office, but it turned out to be two stray cats fighting in the corridor.”

Hard was responsible for writing the code that selected and calibrated the particles of light the ATLAS detector recorded during the LHC’s high-energy collisions. According to predictions from the Standard Model, the Higgs can transform into two of these particles when it decays, so scientists on both experiments knew that this project would be key to the discovery process.

“We all worked harder than we thought we could,” Hard says. “People collaborated well and everyone was excited about what would come next. All in all, it was the most exciting time in my career. I think the best qualities of the community came out during the discovery.”

At the end of June, Hard and his colleagues synthesized all of their work into a single analysis to see what it revealed. And there it was again—that same bump, this time surpassing the statistical threshold the particle physics community generally requires to claim a discovery.
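
That threshold is the community’s famous “five sigma” convention: the chance that background alone fluctuates up by at least that much is below about three in ten million. Converting sigmas to that tail probability is a one-liner with the normal distribution (a sketch, assuming the usual one-sided Gaussian convention):

```python
from math import erf, sqrt

def one_sided_p_value(n_sigma):
    """One-sided tail probability of a standard normal distribution:
    the chance of a background fluctuation at least n_sigma large."""
    return 0.5 * (1 - erf(n_sigma / sqrt(2)))

# Five sigma corresponds to a p-value of about 2.9e-7.
print(f"{one_sided_p_value(5):.1e}")
```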

“Soon everyone in the group started running into the office to see the number for the first time,” Hard says. “The Wisconsin group took a bunch of photos with the discovery plot.”

Hard had no idea whether CMS scientists were looking at the same thing. At this point, the experiments were keeping their latest results secret—with the exception of Incandela, Fabiola Gianotti (then ATLAS spokesperson) and a handful of CERN’s senior management, who regularly met to discuss their progress and results.

“I told the collaboration that the most important thing was for each experiment to work independently and not worry about what the other experiment was seeing,” Incandela says. “I did not tell anyone what I knew about ATLAS. It was not relevant to the tasks at hand.”

Still, rumors were circulating around theoretical physics groups both at CERN and abroad. McCullough, then a postdoc at the Massachusetts Institute of Technology, was avidly following the progress of the two experiments.

“We had an update in December 2011 and then another one a few months later in March, so we knew that both experiments were seeing something,” he says. “When this big excess showed up in July 2012, we were all convinced that it was the guy responsible for curing the ails of the Standard Model, but not necessarily precisely that guy predicted by the Standard Model. It could have properties mostly consistent with the Higgs boson but still be not absolutely identical.”

The week before announcing what they’d found, Hard’s analysis group had daily meetings to discuss their results. He says they were excited but also nervous and stressed: Extraordinary claims require extraordinary confidence.

“One of our meetings lasted over 10 hours, not including the dinner break halfway through,” Hard says. “I remember getting in a heated exchange with a colleague who accused me of having a bug in my code.”

After both groups had independently and intensely scrutinized their Higgs-like bump through a series of checks, cross-checks and internal reviews, Incandela and Gianotti decided it was time to tell the world.

“Some people asked me if I was sure we should say something,” Incandela says. “I remember saying that this train has left the station. This is what we’ve been working for, and we need to stand behind our results.”

On July 4, 2012, Incandela and Gianotti stood before an expectant crowd and, one at a time, announced that decades of searching and generations of experiments had finally culminated in the discovery of a particle “compatible with the Higgs boson.”

Science journalists rejoiced and rushed to publish their stories. But was this new particle the long-awaited Higgs boson? Or not?

Discoveries in science rarely happen all at once; rather, they build slowly over time. And even when the evidence overwhelmingly points in a clear direction, scientists will rarely speak with superlatives or make definitive claims.

“There is always a risk of overlooking the details,” Incandela says, “and major revolutions in science are often born in the details.”

Immediately after the July 4 announcement, theorists from around the world issued a flurry of theoretical papers presenting alternative explanations and possible tests to see if this excess really was the Higgs boson predicted by the Standard Model or just something similar.

“A lot of theory papers explored exotic ideas,” McCullough says. “It’s all part of the exercise. These papers act as a straw man so that we can see just how well we understand the particle and what additional tests need to be run.”

For the next several months, scientists continued to examine the particle and its properties. The more data they collected and the more tests they ran, the more the discovery looked like the long-awaited Higgs boson. By March, both experiments had twice as much data and twice as much evidence.

“Amongst ourselves, we called it the Higgs,” Incandela says, “but to the public, we were more careful.”

It was increasingly difficult to keep qualifying their statements about it, though. “It was just getting too complicated,” Incandela says. “We didn’t want to always be in this position where we had to talk about this particle like we didn’t know what it was.”

On March 14, 2013—nine months and 10 days after the original announcement—CERN issued a press release quoting Incandela as saying, “to me, it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is.”​

To this day, scientists are open to the possibility that the Higgs they found is not exactly the Higgs they expected.

“We are definitely, 100 percent sure that this is a Standard-Model-like Higgs boson,” Incandela says. “But we’re hoping that there’s a chink in that armor somewhere. The Higgs is a sign post, and we’re hoping for a slight discrepancy which will point us in the direction of new physics.”

by Sarah Charley at July 03, 2017 09:50 PM




Last updated:
August 21, 2017 09:06 PM
All times are UTC.
