Particle Physics Planet

October 21, 2017

Tommaso Dorigo - Scientificblogging

What Is Statistical Significance?
Yesterday, October 20, was the international day of Statistics. I took inspiration from it to select a clip from chapter 7 of my book "Anomaly! Collider physics and the quest for new phenomena at Fermilab" which attempts to explain how physicists use the concept of statistical significance to give a quantitative meaning to their measurements of new effects. I hope you will enjoy it....
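As a concrete footnote to the excerpt (my own illustration, not from the book): physicists usually translate a p-value into a significance by quoting the number of standard deviations of a one-sided Gaussian tail. With only the Python standard library:

```python
from statistics import NormalDist

def sigma_to_p_value(z):
    """Tail probability beyond z standard deviations (one-sided Gaussian)."""
    return 1.0 - NormalDist().cdf(z)

def p_value_to_sigma(p):
    """Significance z such that P(Z > z) = p for a standard normal Z."""
    return NormalDist().inv_cdf(1.0 - p)

print(sigma_to_p_value(5.0))   # about 2.9e-7: the "5 sigma" discovery threshold
print(p_value_to_sigma(0.05))  # about 1.64
```

The conventional "5 sigma" discovery threshold thus corresponds to a one-sided p-value of roughly 2.9e-7.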



by Tommaso Dorigo at October 21, 2017 10:36 AM

October 20, 2017

Emily Lakdawalla - The Planetary Society Blog

#DPS2017: Progress report on Mars Reconnaissance Orbiter images of comet C/2013 A1 Siding Spring
Three years ago, on October 19, 2014, comet C/2013 A1 Siding Spring passed within 138,000 kilometers of Mars. At the 2017 meeting of the Division for Planetary Sciences of the American Astronomical Society, we heard a progress report on Mars orbiter imaging of the comet's nucleus.

October 20, 2017 10:19 PM

Christian P. Robert - xi'an's og

end of my dream! [aka no free lunch]

In June, last June, I received a collection of nominal tickets for the National Theatre next door, Les Gémeaux, which animates a rich and diverse scene, attracting to my suburb spectators from all over Paris. A scene that I do not attend as much as I would like, except for a few contemporary dance shows now and then… As I had not ordered these tickets, I was rather pleasantly surprised and wondered at the source of this princely gift, ruling out close family and anonymous sponsors! And the size of it was way too much for a commercial teaser. The most likely explanation was thus a mistaken delivery to the wrong X. Nonetheless, being at home for the first concert, I went to check whether or not my allotted seats were occupied. And unsurprisingly they were. (I did not push the cheek as far as engaging in a discussion with my homonym!) Missing an Austrian theme with Mozart and Haydn. (But I have regular tickets for next weekend!)

Filed under: pictures Tagged: Austria, concert, contemporary dance, Joseph Haydn, Les Gémeaux, Sceaux, Théatre National, Wolfgang Amadeus Mozart

by xi'an at October 20, 2017 10:17 PM

Peter Coles - In the Dark

Prize Port

Way back in July I went to the Third Day’s Play of the Test Match between England and South Africa. On the way to the ground I bought a copy of Money Week, a publication I had never read before, as requested by my old friend and regular commenter on this blog, Anton, who got the tickets for the match. During a lull in the play we did the crossword in the magazine, which was a not-too-difficult thematic puzzle. When I got back to Cardiff I posted off the solution, as a prize of a bottle of vintage port was on offer. Not hearing any more, I forgot about it.

When I got back from India this week there was a card waiting for me saying that ParcelForce had attempted to deliver a package while I was away. This morning I went to the main post office to sign for it and pick it up. This is what I got:

I don’t know why it took so long to deliver the prize but it makes a nice change from the usual dictionaries!

by telescoper at October 20, 2017 12:49 PM

October 19, 2017

Christian P. Robert - xi'an's og

WBIC, practically

“Thus far, WBIC has received no more than a cursory mention by Gelman et al. (2013)”

I had missed this 2015 paper by Nial Friel and co-authors on a practical investigation of Watanabe’s WBIC, where WBIC stands for widely applicable Bayesian information criterion. The thermodynamic integration approach explored by Nial and some co-authors for the approximation of the evidence, which produces the log-evidence as an integral between temperatures t=0 and t=1 of a powered evidence, is eminently suited for WBIC, as the widely applicable Bayesian information criterion is associated with the specific temperature t⁰ that makes the power posterior equidistant, Kullback-Leibler-wise, from the prior and posterior distributions. And the expectation of the log-likelihood under this very power posterior is equal to the (genuine) log-evidence. In fact, WBIC is often associated with the sub-optimal temperature 1/log(n), where n is the (effective?) sample size. (By comparison, if my minimalist description is unclear!, thermodynamic integration requires a whole range of temperatures and associated MCMC runs.) In an ideal Gaussian setting, WBIC improves considerably over thermodynamic integration, the larger the sample the better. In more realistic settings, though, including a simple regression and a logistic [Pima Indians!] model comparison, thermodynamic integration may do better for a given computational cost, although the paper is unclear about these costs. The paper also runs a comparison with harmonic mean and nested sampling approximations. Since the integral of interest involves a power of the likelihood, I wonder if a safe version of the harmonic mean resolution can be derived from simulations of the genuine posterior. Provided the exact temperature t⁰ is known…
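To make the comparison concrete, here is a toy illustration (mine, not the authors' code) in a conjugate normal model, where the power posterior is Gaussian and every quantity, including the exact log-evidence, is available in closed form:

```python
import math

# Toy conjugate model: y_i ~ N(theta, 1), prior theta ~ N(0, 1).
# The power posterior at temperature t is N(m_t, v_t) with precision 1 + t*n,
# so E_t[log L] is exact and both TI and WBIC can be computed without MCMC.
y = [0.8, -0.3, 1.2, 0.5, 0.1, 0.9, -0.6, 1.5, 0.2, 0.7]
n, S, Q = len(y), sum(y), sum(v * v for v in y)

def expected_loglik(t):
    """E[log L(theta)] under the power posterior at temperature t."""
    lam = 1.0 + t * n   # power-posterior precision
    m = t * S / lam     # power-posterior mean
    v = 1.0 / lam       # power-posterior variance
    # E[sum_i (y_i - theta)^2] = Q - 2*S*m + n*(m^2 + v)
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * (Q - 2 * S * m + n * (m * m + v))

# Thermodynamic integration: log-evidence = integral over t in [0,1] (trapezoid).
K = 2000
vals = [expected_loglik(k / K) for k in range(K + 1)]
ti_logZ = sum((vals[k] + vals[k + 1]) / (2 * K) for k in range(K))

# WBIC: a single power posterior at the temperature t* = 1/log(n).
wbic = expected_loglik(1.0 / math.log(n))

# Exact log-evidence: marginally y ~ N(0, I + 11').
exact = (-0.5 * n * math.log(2 * math.pi) - 0.5 * math.log(1 + n)
         - 0.5 * (Q - S * S / (1 + n)))
print(ti_logZ, wbic, exact)
```

On this toy example the trapezoidal TI estimate recovers the exact log-evidence to several decimal places, while the single-temperature WBIC lands within a fraction of a unit of it; the paper quantifies this trade-off in more realistic models where MCMC runs are needed at each temperature.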

Filed under: Statistics Tagged: Bayes factor, Bayesian model selection, evidence, harmonic mean estimator, MCMC, nested sampling, Pima Indians, power posterior, thermodynamic integration, WBIC

by xi'an at October 19, 2017 10:17 PM

Clifford V. Johnson - Asymptotia

New Trailer!

No, not the new Black Panther trailer, not the new trailer for The Last Jedi (although those are awesome)… No, this is a trailer for the book. You get to look inside for the first time: I may make a second trailer soon. The book appears mid-November. More at … Click to continue reading this post

The post New Trailer! appeared first on Asymptotia.

by Clifford at October 19, 2017 09:12 PM

Peter Coles - In the Dark

Determining the Hubble Constant the Bernard Schutz way

In my short post about Monday’s announcement of the detection of a pair of coalescing neutron stars (GW170817), I mentioned that one of the results that caught my eye in particular was the paper about using such objects to determine the Hubble constant.

Here is the key result from that paper, i.e. the posterior distribution of the Hubble constant H0 given the data from GW170817:

You can also see the latest determinations from other methods, which appear to be in (slight) tension; you can read more about this here. Clearly the new result from GW170817 yields a fairly broad range for H0 but, as I said in my earlier post, it’s very impressive to be straddling the target with the first salvo.

Anyway, I just thought I’d mention here that the method of measuring the Hubble constant using coalescing binary neutron stars was invented by none other than Bernard Schutz of Cardiff University, who works in the Data Innovation Institute (as I do). The idea was first published in September 1986 in a Letter to Nature. Here is the first paragraph:

I report here how gravitational wave observations can be used to determine the Hubble constant, H0. The nearly monochromatic gravitational waves emitted by the decaying orbit of an ultra-compact, two-neutron-star binary system just before the stars coalesce are very likely to be detected by the kilometre-sized interferometric gravitational wave antennas now being designed [1–4]. The signal is easily identified and contains enough information to determine the absolute distance to the binary, independently of any assumptions about the masses of the stars. Ten events out to 100 Mpc may suffice to measure the Hubble constant to 3% accuracy.

In the paper, Bernard points out that a binary coalescence — such as the merger of two neutron stars — is a self-calibrating ‘standard candle’, which means that it is possible to infer the distance directly without using the cosmic distance ladder. The key insight is that the rate at which the binary’s frequency changes is directly related to the amplitude of the gravitational waves it produces, i.e. how ‘loud’ the GW signal is. Just as the observed brightness of a star depends on both its intrinsic luminosity and how far away it is, the strength of the gravitational waves received at LIGO depends on both the intrinsic loudness of the source and how far away it is. By observing the waves with detectors like LIGO and Virgo, we can determine both the intrinsic loudness of the gravitational waves and their loudness at the Earth. This allows us to determine the distance to the source directly.
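In barest outline, the arithmetic at low redshift then reduces to Hubble’s law. A toy sketch (the round numbers below are illustrative, of roughly the size reported for GW170817, not the paper’s actual fit):

```python
# Standard-siren sketch: at low redshift, H0 = v / d.
d_mpc = 44.0    # luminosity distance inferred from the GW amplitude, in Mpc
v_kms = 3000.0  # Hubble-flow velocity of the host galaxy, in km/s

H0 = v_kms / d_mpc  # in km/s/Mpc
print(round(H0, 1))
```

The real analysis also marginalises over the binary’s inclination and the host galaxy’s peculiar velocity, which is partly why the published posterior for H0 is so broad.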

It may have taken 31 years to get a measurement, but hopefully it won’t be long before there are enough detections to provide greater precision – and hopefully accuracy! – than the current methods can manage!

Above all, congratulations to Bernard for inventing a method which has now been shown to work very well!

by telescoper at October 19, 2017 11:49 AM

October 18, 2017

Christian P. Robert - xi'an's og

Langevin on a wrong bend

Arnak Dalalyan and Avetik Karagulyan (CREST) arXived a paper the other week on a focussed study of the Langevin algorithm [not MALA] when the gradient of the target is incorrect. With the following improvements [quoting non-verbatim from the paper]:

  1. a varying-step Langevin that reduces the number of iterations for a given Wasserstein precision, compared with recent results by e.g. Alain Durmus and Éric Moulines;
  2. an extension of convergence results for error-prone evaluations of the gradient of the target (i.e., the gradient is replaced with a noisy version, under some moment assumptions that do not include unbiasedness);
  3. a new second-order sampling algorithm termed LMCO’, with improved convergence properties.

What is particularly interesting to me in this setting is the use in all these papers of a discretised Langevin diffusion (a.k.a., random walk with a drift induced by the gradient of the log-target) without the original Metropolis correction. The results rely on an assumption of [strong?] log-concavity of the target, with “user-friendly” bounds on the Wasserstein distance depending on the constants appearing in this log-concavity constraint. And so does the adaptive step. (In the case of the noisy version, the bias and variance of the noise also matter. As pointed out by the authors, there is still applicability to scaling MCMC for large samples. Beyond pseudo-marginal situations.)

“…this, at first sight very disappointing behavior of the LMC algorithm is, in fact, continuously connected to the exponential convergence of the gradient descent.”

The paper concludes with an interesting mise en parallèle of Langevin algorithms and of gradient descent algorithms, since the convergence rates are the same.
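For readers unfamiliar with the algorithm under discussion, a minimal unadjusted Langevin sketch (mine, for a one-dimensional standard normal target) shows both the recursion and the discretisation bias that the Metropolis correction would remove:

```python
import math, random

random.seed(42)

# Unadjusted Langevin (ULA) for a standard normal target:
#   x' = x + (h/2) * grad log pi(x) + sqrt(h) * N(0, 1),
# with no Metropolis accept/reject step.
def grad_log_pi(x):
    return -x  # log pi(x) = -x^2/2 up to a constant

h = 0.05
x = 0.0
samples = []
for _ in range(200_000):
    x += 0.5 * h * grad_log_pi(x) + math.sqrt(h) * random.gauss(0.0, 1.0)
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Without the correction the chain is slightly biased: for this target the
# stationary variance is 1/(1 - h/4), about 1.013, rather than exactly 1.
print(round(mean, 3), round(var, 3))
```

Shrinking h removes the bias but lengthens the autocorrelation time, which is exactly the step-size trade-off that the non-asymptotic Wasserstein bounds in these papers make precise.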

Filed under: Books, Statistics Tagged: CREST, Hastings-Metropolis sampler, Langevin diffusion, MALA, pseudo-marginal MCMC, scalable MCMC, stochastic gradient descent, Wasserstein distance

by xi'an at October 18, 2017 10:17 PM

Peter Coles - In the Dark

Back to Blighty

Just a brief post today to say that I got back safe and sound last night. I was up at 4am Tuesday Pune time (which was 11.30pm Monday UK time) and finally got to bed at about 11.30pm UK time last night, so apart from about 45 minutes’ doze on the flight I had been awake for 24 hours. Not surprisingly, I slept in this morning!

After another white-knuckle taxi ride (with the added complication of thick fog) I got to Mumbai airport in good time. The flight itself was almost empty. Not only did I get a row of seats to myself in economy class, but the two rows in front and the two rows behind were also unoccupied. I’m not entirely sure why the flight was so underbooked – as the outbound flight was absolutely crammed – but it may be that the festival of Diwali takes place this week (on Thursday or even today in some regions). Relatively few people are probably leaving India at this time compared with the many coming home to celebrate with friends and family. It’s a nice coincidence that Monday’s announcement of the simultaneous detection of gravitational waves and electromagnetic radiation came so close to the Festival of Light, traditionally celebrated with fireworks and gifts of gold!

Despite liberal helpings of wine from the drinks trolley and the ability to lie down across three seats I still didn’t really sleep. I just don’t have the knack for sleeping on planes. Still, I did get to watch the film The Imitation Game which I hadn’t seen before and thought was very good.

We arrived back on schedule without the (usually) obligatory air traffic delays around Heathrow and, the arrivals hall being empty, I was out of the airport less than half an hour after landing. That’s a bit of a record for me!

Anyway, I’ve various things to catch up on now that I’m back so I’ll try to get on with them. I’ll just end by thanking my hosts at IUCAA again for their hospitality and, while I’m at it, send a Happy Diwali message in Marathi to them and anyone else celebrating at this special time:


by telescoper at October 18, 2017 01:23 PM

Lubos Motl - string vacua and pheno

Anti-inflation quacks supported not by science but by special anti-science social interests
Sabine Hossenfelder wrote a rant:
I totally mean it: Inflation never solved the flatness problem.
I would personally never allow a student to get a degree in theoretical physics or something like that if she were a quack like her who just doesn't get the most fundamental ideas in the field – and she doesn't. But there are good reasons why she can "totally mean it". Quacks like her may do well because while her total scientific incompetence is a minus from the viewpoint of the actual experts, it's a plus from the viewpoint of the bad people "around" science.

For example, George Soros just gave $18 billion (80% of his wealth) to his "charities". You can be sure that a part of this money will be used to attack science, just like it was used in recent years. Obnoxious antiscientific whores aren't that bad, are they? In fact, they are good, discriminated against by the evil white male scientists, so why don't we turn statements like "inflation never solved the flatness problem" into a virtue that should be rewarded?

Just try to appreciate how much evil may be done with $18 billion that is sent to carefully, ideologically picked corners.

The whole system underlying science and other meritocratic human activities – at least those whose importance isn't "immediately" impacting the well-being of the ordinary Joe – is collapsing as the people allow scum like George Soros to create "compensating" anti-meritocratic structures that switch the evolution into the reverse: What sucks gets to the top.

Cosmic inflation is also about the "reversal of the dynamics" and that's why it's unquestionable for an expert – or an intelligent student in fields related to cosmology – that it must be considered a vital part of our current understanding of the history of the Universe.

I have explained why inflation solves – and is basically needed to solve – the flatness problem e.g. in my article celebrating Alan Guth or an answer at Physics Stack Exchange. Hossenfelder mixes inflation and initial conditions in weird and illogical ways.

But the cosmic inflation really is needed and it is needed to prepare viable initial conditions for the subsequent evolution according to the equations of the big bang theory. How does it work?

General relativity allows you to derive that in a uniform, isotropic Universe, the flat \(\mathbb{R}^3\) slices may survive in time, during the expansion, if the matter density is "critical". So the matter density divided by this critical density (a function of Hubble's constant etc.) is called \(\Omega\), and \(\Omega=1\) means that the Universe is flat at a given moment and stays flat despite the expansion.

But it's never exactly flat and we may study how \(\Omega-1\) evolves with time. Pretty much by the definition of \(\Omega\) I mentioned, we may know that\[

\Omega - 1 = \frac{k}{a^2H^2}

\] And by Einstein's equations, we may study how \(|\Omega-1|\) increases or decreases with time. We find out that today, in the "normal" big bang expansion, it increases with time. The Universe is getting less flat as it gets older. Because it's still rather flat today, \(|\Omega-1|\leq 0.01\) or so, we may see that it was extremely flat seconds after the big bang, perhaps with\[

|\Omega-1| \approx 10^{-{\rm dozens}}

If you dig deeper and study increasingly early moments of our Universe, you will see – just because of Einstein's equations – that the Universe was increasingly unnaturally flat, and it was almost precisely flat at each corner. It means that our big bang theory only works if you supplement it with initial conditions that must be special – they must respect the extremely precise flatness at each point of the Universe.

Now, you should pick your initial conditions and/or logic to estimate whether one choice of them is likely or not. But because the value of \(|\Omega-1|\) has to be this unnaturally tiny, and it must hold independently at each of the zillions of places of the Universe, the normally calculated probability according to any sensible framework will be something like\[

10^{-{\rm dozens} \times N}

\] where \(N\) is the number of "independent" regions of the Universe that you need – you needed to impose the unlikely flatness for each of them. The probability is therefore insanely low. When some probability is this low, you could use it to disprove the theory – disprove the big bang theory. That's how we disprove theories. A scientific hypothesis is disproved if you can show that it depends on events that are predicted to be immensely unlikely. An immensely unlikely assumption that is needed may be translated to a very low probability that the hypothesis itself is right. That's how Bayesian reasoning and science work.

So you had better have some extra piece of the theory that says that these very flat initial conditions aren't really that insanely unlikely. And inflation is that extra piece – and, up to plagiarism and small modifications, and up to proposals whose success in achieving flatness hasn't been understood by many people (string gas cosmology), it's still generally considered to be the only known piece that can play this role.

How does inflation solve the flatness problem? It simply adds the terms to the expansion coming from the "temporary cosmological constant" \(V(\phi)\), the inflaton potential, and from the kinetic term of the inflaton, \((\dot\phi)^2\). And when the inflaton sits or moves far enough from the minimum of \(V(\phi)\), you will be able to see that there are new terms that decide about the time evolution of \(|\Omega-1|\). You remember I wrote that \(|\Omega-1|\) is increasing with time today? OK, so inflation simply adds new terms that reverse the overall evolution. As the Universe is getting older, \(|\Omega-1|\) is getting smaller during inflation – the Universe is getting flatter. It's doing so everywhere where the corresponding condition for the inflaton, basically \(V(\phi)\gt (\dot\phi)^2 \), is obeyed.

So from Einstein's equations applied to the well tested ordinary big bang expansion, we know that a second after the beginning, the Universe was extremely flat – even flatter than today, by the smallness of \(|\Omega-1|\). Why was it so flat? Because there was a previous era that made it flat. It made it almost precisely flat by a mechanism that is pretty much as simple and straightforward as the mechanism that makes it less flat today. There is just a new term with the opposite sign.

To some extent, some inflation or its "generalization" had to be there because we almost directly observe it. It's like a hot soup. The soup is too hot so you're waiting for it to get colder so that you don't burn your mouth when you eat it. While you're waiting, you may ask why it was so hot to start with. Well, you correctly guess, it's probably because someone heated it up. Some cook or heater or somebody like that. It's exactly the same with the flatness. We see that the flatness goes down – becomes less perfect (like cooling soup). Why was it so immensely flat (hot soup) in the past? Because there was a previous process that just made it flat.

What's so impenetrable about these basic ideas that people pretending to deserve their physics PhDs brag that "they really mean it" that this explanation doesn't work in some weird way? How could it not work? Her text and her title are exactly as indefensible as a comment from a visitor to a restaurant:
I totally mean it: heaters and ovens have never really solved the problem why the food got hot before the waitress brought us the food!
Oh, really? So what's the explanation that the soup was hot? Why would a sane and honest person deny even the very modest fact that we have quite some evidence that some mechanism has heated the soup before the waiter brought it to us?

You know, almost everyone who has at least "some" credentials in cosmology knows that Guth, Linde, and pals – and interpreters of those discoveries such as your humble correspondent – are right while Hossenfelder is just totally wrong. The evidence is totally on our side and the likes of Hossenfelder are demonstrably wrong. But in our intellectually deteriorating society, this fact is becoming increasingly irrelevant. The likes of Hossenfelder have been building a whole network of alternative institutions of fake science, one that has almost completely merged with the Soros-style network and the PC media.

I think that the letter of the top inflation researchers symbolized this immense gap. Pretty much everyone who has done something important in that field or related fields understands the case for inflation, the depth of the inflationary ideas, and their unmatched ability to explain the strange and seemingly unlikely features of the initial conditions that the normal big bang expansion requires. But you could see that all these top physicists and Nobel prize winners etc. didn't seem to make almost any impact on the broader "community" of the writers about science and stuff like that. An abyss has been growing for decades – and the top scientists' decision to avoid nontrivial interactions with the media has helped the growth. It's time to realize that this abyss is real and has become dangerous for the very survival of science.

by Luboš Motl at October 18, 2017 07:20 AM

October 17, 2017

Matt Strassler - Of Particular Significance

The Significance of Yesterday’s Gravitational Wave Announcement: an FAQ

Yesterday’s post on the results from the LIGO/VIRGO network of gravitational wave detectors was aimed at getting information out, rather than providing the pedagogical backdrop.  Today I’m following up with a post that attempts to answer some of the questions that my readers and my personal friends asked me.  Some wanted to understand better how to visualize what had happened, while others wanted more clarity on why the discovery was so important.  So I’ve put together a post which (1) explains what neutron stars and black holes are and what their mergers are like, (2) clarifies why yesterday’s announcement was important — and there were many reasons, which is why it’s hard to reduce it all to a single soundbite — and (3) answers some miscellaneous questions at the end.

First, a disclaimer: I am *not* an expert in the very complex subject of neutron star mergers and the resulting explosions, called kilonovas.  These are much more complicated than black hole mergers.  I am still learning some of the details.  Hopefully I’ve avoided errors, but you’ll notice a few places where I don’t know the answers … yet.  Perhaps my more expert colleagues will help me fill in the gaps over time.

Please, if you spot any errors, don’t hesitate to comment!!  And feel free to ask additional questions whose answers I can add to the list.


What are neutron stars and black holes, and how are they related?

Every atom is made from a tiny atomic nucleus, made of neutrons and protons (which are very similar), and loosely surrounded by electrons. Most of an atom is empty space, so it can, under extreme circumstances, be crushed — but only if every electron and proton convert to a neutron (which remains behind) and a neutrino (which heads off into outer space.) When a giant star runs out of fuel, the pressure from its furnace turns off, and it collapses inward under its own weight, creating just those extraordinary conditions in which the matter can be crushed. Thus: a star’s interior, with a mass one to several times the Sun’s mass, is all turned into a several-mile(kilometer)-wide ball of neutrons — the number of neutrons approaching a 1 with 57 zeroes after it.
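The "1 with 57 zeroes" is just the star's mass divided by the neutron mass; a quick order-of-magnitude check with round numbers:

```python
# Number of neutrons in a neutron star of ~1.4 solar masses.
M_sun = 1.989e30       # kg
m_neutron = 1.675e-27  # kg

n_neutrons = 1.4 * M_sun / m_neutron
print(f"{n_neutrons:.2e}")  # order 1e57, i.e. a 1 with 57 zeroes
```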

If the star is big but not too big, the neutron ball stiffens and holds its shape, and the star explodes outward, blowing itself to pieces in what is called a core-collapse supernova. The ball of neutrons remains behind; this is what we call a neutron star. It’s a ball of the densest material that we know can exist in the universe — a pure atomic nucleus many miles (kilometers) across. It has a very hard surface; if you tried to go inside a neutron star, your experience would be a lot worse than running into a closed door at a hundred miles per hour.

If the star is very big indeed, the neutron ball that forms may immediately (or soon) collapse under its own weight, forming a black hole. A supernova may or may not result in this case; the star might just disappear. A black hole is very, very different from a neutron star. Black holes are what’s left when matter collapses irretrievably upon itself under the pull of gravity, shrinking down endlessly. While a neutron star has a surface that you could smash your head on, a black hole has no surface — it has an edge that is simply a point of no return, called a horizon. In Einstein’s theory, you can just go right through, as if passing through an open door. You won’t even notice the moment you go in. [Note: this is true in Einstein’s theory. But there is a big controversy as to whether the combination of Einstein’s theory with quantum physics changes the horizon into something novel and dangerous to those who enter; this is known as the firewall controversy, and would take us too far afield into speculation.]  But once you pass through that door, you can never return.

Black holes can form in other ways too, but not those that we’re observing with the LIGO/VIRGO detectors.

Why are their mergers the best sources for gravitational waves?

One of the easiest and most obvious ways to make gravitational waves is to have two objects orbiting each other.  If you put your two fists in a pool of water and move them around each other, you’ll get a pattern of water waves spiraling outward; this is in rough (very rough!) analogy to what happens with two orbiting objects, although, since the objects are moving in space, the waves aren’t in a material like water.  They are waves in space itself.

To get powerful gravitational waves, you want objects each with a very big mass that are orbiting around each other at very high speed. To get the fast motion, you need the force of gravity between the two objects to be strong; and to get gravity to be as strong as possible, you need the two objects to be as close as possible (since, as Isaac Newton already knew, gravity between two objects grows stronger when the distance between them shrinks.) But if the objects are large, they can’t get too close; they will bump into each other and merge long before their orbit can become fast enough. So to get a really fast orbit, you need two relatively small objects, each with a relatively big mass — what scientists refer to as compact objects. Neutron stars and black holes are the most compact objects we know about. Fortunately, they do indeed often travel in orbiting pairs, and do sometimes, for a very brief period before they merge, orbit rapidly enough to produce gravitational waves that LIGO and VIRGO can observe.

Why do we find these objects in pairs in the first place?

Stars very often travel in pairs… they are called binary stars. They can start their lives in pairs, forming together in large gas clouds, or even if they begin solitary, they can end up pairing up if they live in large densely packed communities of stars where it is common for multiple stars to pass nearby. Perhaps surprisingly, their pairing can survive the collapse and explosion of either star, leaving two black holes, two neutron stars, or one of each in orbit around one another.

What happens when these objects merge?

Not surprisingly, there are three classes of mergers which can be detected: two black holes merging, two neutron stars merging, and a neutron star merging with a black hole. The first class was observed in 2015 (and announced in 2016), the second was announced yesterday, and it’s a matter of time before the third class is observed. The two objects may orbit each other for billions of years, very slowly radiating gravitational waves (an effect observed in the 70’s, leading to a Nobel Prize) and gradually coming closer and closer together. Only in the last day of their lives do their orbits really start to speed up. And just before these objects merge, they begin to orbit each other once per second, then ten times per second, then a hundred times per second. Visualize that if you can: objects a few dozen miles (kilometers) across, a few miles (kilometers) apart, each with the mass of the Sun or greater, orbiting each other 100 times each second. It’s truly mind-boggling — a spinning dumbbell beyond the imagination of even the greatest minds of the 19th century. I don’t know any scientist who isn’t awed by this vision. It all sounds like science fiction. But it’s not.
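That "100 times each second" follows from Newtonian gravity alone: Kepler's third law with two 1.4-solar-mass objects roughly 100 km apart (round numbers, relativistic corrections ignored):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30  # kg

def orbital_frequency(m1, m2, separation_m):
    """Newtonian orbital frequency (orbits per second) for a circular binary."""
    return math.sqrt(G * (m1 + m2) / separation_m ** 3) / (2 * math.pi)

f = orbital_frequency(1.4 * M_sun, 1.4 * M_sun, 100e3)
print(round(f))  # on the order of a hundred orbits per second
```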

How do we know this isn’t science fiction?

We know, if we believe Einstein’s theory of gravity (and I’ll give you a very good reason to believe in it in just a moment.) Einstein’s theory predicts that such a rapidly spinning, large-mass dumbbell formed by two orbiting compact objects will produce a telltale pattern of ripples in space itself — gravitational waves. That pattern is both complicated and precisely predicted. In the case of black holes, the predictions go right up to and past the moment of merger, to the ringing of the larger black hole that forms in the merger. In the case of neutron stars, the instants just before, during and after the merger are more complex and we can’t yet be confident we understand them, but during tens of seconds before the merger Einstein’s theory is very precise about what to expect. The theory further predicts how those ripples will cross the vast distances from where they were created to the location of the Earth, and how they will appear in the LIGO/VIRGO network of three gravitational wave detectors. The prediction of what to expect at LIGO/VIRGO thus involves not just one prediction but many: the theory is used to predict the existence and properties of black holes and of neutron stars, the detailed features of their mergers, the precise patterns of the resulting gravitational waves, and how those gravitational waves cross space. That LIGO/VIRGO have detected the telltale patterns of these gravitational waves, and that these wave patterns agree with Einstein’s theory in every detail, is the strongest evidence ever obtained that there is nothing wrong with Einstein’s theory when used in these combined contexts. That then in turn gives us confidence that our interpretation of the LIGO/VIRGO results is correct, confirming that black holes and neutron stars really exist and really merge.
(Notice the reasoning is slightly circular… but that’s how scientific knowledge proceeds, as a set of detailed consistency checks that gradually and eventually become so tightly interconnected as to be almost impossible to unwind.  Scientific reasoning is not deductive; it is inductive.  We do it not because it is logically ironclad but because it works so incredibly well — as witnessed by the computer, and its screen, that I’m using to write this, and the wired and wireless internet and computer disk that will be used to transmit and store it.)


What makes it difficult to explain the significance of yesterday’s announcement is that it consists of many important results piled up together, rather than a simple takeaway that can be reduced to a single soundbite. (That was also true of the black hole mergers announcement back in 2016, which is why I wrote a long post about it.)

So here is a list of important things we learned.  No one of them, by itself, is earth-shattering, but each one is profound, and taken together they form a major event in scientific history.

First confirmed observation of a merger of two neutron stars: We’ve known these mergers must occur, but there’s nothing like being sure. And since these things are too far away and too small to see in a telescope, the only way to be sure these mergers occur, and to learn more details about them, is with gravitational waves.  We expect to see many more of these mergers in coming years as gravitational wave astronomy increases in its sensitivity, and we will learn more and more about them.

New information about the properties of neutron stars: Neutron stars were proposed almost a hundred years ago and were confirmed to exist in the 1960s and 70s.  But their precise details aren’t known; we believe they are like a giant atomic nucleus, but they’re so vastly larger than ordinary atomic nuclei that we can’t be sure we understand all of their internal properties, and there are debates in the scientific community that can’t be easily settled… until, perhaps, now.

From the detailed pattern of the gravitational waves of this one neutron star merger, scientists have already learned two things. First, we confirm that Einstein’s theory correctly predicts the basic pattern of gravitational waves from orbiting neutron stars, as it does for orbiting and merging black holes. Unlike black holes, however, there are more questions about what happens to neutron stars when they merge. The jury is still out on what happened to this pair after they merged — did they form a neutron star, an unstable neutron star that, slowing its spin, eventually collapsed into a black hole, or a black hole straightaway?

But something important was already learned about the internal properties of neutron stars. The stresses of being whipped around at such incredible speeds would tear you and me apart, and would even tear the Earth apart. We know neutron stars are much tougher than ordinary rock, but how much tougher? If they were too flimsy, they’d have broken apart at some point during LIGO/VIRGO’s observations, and the simple pattern of gravitational waves that was expected would have suddenly become much more complicated. That didn’t happen until perhaps just before the merger.   So scientists can use the simplicity of the pattern of gravitational waves to infer some new things about how stiff and strong neutron stars are.  More mergers will improve our understanding.  Again, there is no other simple way to obtain this information.

First visual observation of an event that produces both immense gravitational waves and bright electromagnetic waves: Black hole mergers aren’t expected to create a brilliant light display, because, as I mentioned above, they’re more like open doors to an invisible playground than they are like rocks, so they merge rather quietly, without a big bright and hot smash-up.  But neutron stars are big balls of stuff, and so the smash-up can indeed create lots of heat and light of all sorts, just as you might naively expect.  By “light” I mean not just visible light but all forms of electromagnetic waves, at all wavelengths (and therefore at all frequencies).  Scientists divide up the range of electromagnetic waves into categories. These categories are radio waves, microwaves, infrared light, visible light, ultraviolet light, X-rays, and gamma rays, listed from lowest frequency and largest wavelength to highest frequency and smallest wavelength.  (Note that these categories and the dividing lines between them are completely arbitrary, but the divisions are useful for various scientific purposes.  The only fundamental difference between yellow light, a radio wave, and a gamma ray is the wavelength and frequency; otherwise they’re exactly the same type of thing, a wave in the electric and magnetic fields.)

So if and when two neutron stars merge, we expect both gravitational waves and electromagnetic waves, the latter of many different frequencies created by many different effects that can arise when two huge balls of neutrons collide.  But just because we expect them doesn’t mean they’re easy to see.  These mergers are pretty rare — perhaps one every hundred thousand years in each big galaxy like our own — so the ones we find using LIGO/VIRGO will generally be very far away.  If the light show is too dim, none of our telescopes will be able to see it.

But this light show was plenty bright.  Gamma ray detectors out in space detected it instantly, confirming that the gravitational waves from the two neutron stars led to a collision and merger that produced very high frequency light.  Already, that’s a first.  It’s as though one had seen lightning for years but never heard thunder; or as though one had observed the waves from hurricanes for years but never observed one in the sky.  Seeing both allows us a whole new set of perspectives; one plus one is often much more than two.

Over time — hours and days — effects were seen in visible light, ultraviolet light, infrared light, X-rays and radio waves.  Some were seen earlier than others, which itself is a story, but each one contributes to our understanding of what these mergers are actually like.

Confirmation of the best guess concerning the origin of “short” gamma ray bursts:  For many years, bursts of gamma rays have been observed in the sky.  Among them, there seems to be a class of bursts that are shorter than most, typically lasting just a couple of seconds.  They come from all across the sky, indicating that they come from distant intergalactic space, presumably from distant galaxies.  Among other explanations, the most popular hypothesis concerning these short gamma-ray bursts has been that they come from merging neutron stars.  The only way to confirm this hypothesis is with the observation of the gravitational waves from such a merger.  That test has now been passed; it appears that the hypothesis is correct.  That in turn means that we have, for the first time, both a good explanation of these short gamma ray bursts and, because we know how often we observe these bursts, a good estimate as to how often neutron stars merge in the universe.

First distance measurement to a source using both a gravitational wave measure and a redshift in electromagnetic waves, allowing a new calibration of the distance scale of the universe and of its expansion rate:  The pattern over time of the gravitational waves from a merger of two black holes or neutron stars is complex enough to reveal many things about the merging objects, including a rough estimate of their masses and the orientation of the spinning pair relative to the Earth.  The overall strength of the waves, combined with the knowledge of the masses, reveals how far the pair is from the Earth.  That by itself is nice, but the real win comes when the discovery of the object using visible light, or in fact any light with frequency below gamma-rays, can be made.  In this case, the galaxy that contains the neutron stars can be determined.

Once we know the host galaxy, we can do something really important.  We can, by looking at the starlight, determine how rapidly the galaxy is moving away from us.  For distant galaxies, the speed at which the galaxy recedes should be related to its distance because the universe is expanding.

How rapidly the universe is expanding has been recently measured with remarkable precision, but the problem is that there are two different methods for making the measurement, and they disagree.   This disagreement is one of the most important problems for our understanding of the universe.  Maybe one of the measurement methods is flawed, or maybe — and this would be much more interesting — the universe simply doesn’t behave the way we think it does.

What gravitational waves do is give us a third method: the gravitational waves directly provide the distance to the galaxy, and the electromagnetic waves directly provide the speed of recession.  There is no other way to make this type of joint measurement directly for distant galaxies.  The method is not accurate enough to be useful in just one merger, but once dozens of mergers have been observed, the average result will provide important new information about the universe’s expansion.  When combined with the other methods, it may help resolve this all-important puzzle.
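As a concrete (and deliberately naive) sketch of this joint measurement, here is the arithmetic with approximate publicly reported values for this event. The distance (~40 Mpc) and host-galaxy redshift (~0.0098) below are assumptions for illustration only, and the estimate ignores the galaxy's own peculiar motion:

```python
# Naive "standard siren" Hubble-constant estimate (illustrative numbers,
# not a substitute for the collaborations' careful analysis).
c_km_s = 299792.458        # speed of light, km/s
distance_mpc = 40.0        # distance from the gravitational-wave amplitude (assumed)
redshift = 0.0098          # recession redshift of the host galaxy (assumed)

velocity_km_s = c_km_s * redshift     # recession speed from the redshift
H0 = velocity_km_s / distance_mpc     # Hubble constant, in km/s per Mpc
print(f"H0 ~ {H0:.0f} km/s/Mpc")
```

The host galaxy's motion relative to the overall cosmic expansion biases this simple ratio, which is one reason a single merger gives only a rough value; averaging over many mergers beats the scatter down.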

Best test so far of Einstein’s prediction that the speed of light and the speed of gravitational waves are identical: Since gamma rays from the merger and the peak of the gravitational waves arrived within two seconds of one another after traveling 130 million years — that is, about 4 thousand million million seconds — we can say that the speed of light and the speed of gravitational waves are both equal to the cosmic speed limit to within one part in 2 thousand million million.  Such a precise test requires the combination of gravitational wave and gamma ray observations.
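The arithmetic behind that bound is simple enough to check directly (order-of-magnitude values only):

```python
# Order-of-magnitude check of the speed comparison in the paragraph above.
SECONDS_PER_YEAR = 3.156e7                   # roughly pi x 10^7 seconds
travel_time_s = 130e6 * SECONDS_PER_YEAR     # about 4.1e15 seconds of travel
arrival_gap_s = 2.0                          # gamma rays vs. gravitational-wave peak

# If both signals left the source together, the fractional speed difference
# can be at most the arrival gap divided by the travel time:
fractional_difference = arrival_gap_s / travel_time_s
print(f"travel time ~ {travel_time_s:.1e} s")
print(f"|v_light - v_gw| / c  <  ~{fractional_difference:.0e}")
```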

Efficient production of heavy elements confirmed:  It’s long been said that we are star-stuff, or stardust, and it’s been clear for a long time that it’s true.  But there’s been a puzzle when one looks into the details.  While it’s known that all the chemical elements from hydrogen up to iron are formed inside of stars, and can be blasted into space in supernova explosions to drift around and eventually form planets, moons, and humans, it hasn’t been quite as clear how the other elements with heavier atoms — atoms such as iodine, cesium, gold, lead, bismuth, uranium and so on — predominantly formed.  Yes they can be formed in supernovas, but not so easily; and there seem to be more atoms of heavy elements around the universe than supernovas can explain.  There are many supernovas in the history of the universe, but the efficiency for producing heavy chemical elements is just too low.

It was proposed some time ago that the mergers of neutron stars might be a suitable place to produce these heavy elements.  Even though these mergers are rare, they might be much more efficient, because the nuclei of heavy elements contain lots of neutrons and, not surprisingly, a collision of two neutron stars would produce lots of neutrons in its debris, suitable perhaps for making these nuclei.   A key indication that this is going on would be the following: if a neutron star merger could be identified using gravitational waves, and if its location could be determined using telescopes, then one would observe a pattern of light that would be characteristic of what is now called a “kilonova” explosion.   Warning: I don’t yet know much about kilonovas and I may be leaving out important details. A kilonova is powered by the process of forming heavy elements; most of the nuclei produced are initially radioactive — i.e., unstable — and they break down by emitting high energy particles, including the particles of light (called photons) which are in the gamma ray and X-ray categories.  The resulting characteristic glow would be expected to have a pattern of a certain type: it would be initially bright but would dim rapidly in visible light, with a long afterglow in infrared light.  The reasons for this are complex, so let me set them aside for now.  The important point is that this pattern was observed, confirming that a kilonova of this type occurred, and thus that, in this neutron star merger, enormous amounts of heavy elements were indeed produced.  So we now have a lot of evidence, for the first time, that almost all the heavy chemical elements on and around our planet were formed in neutron star mergers.  Again, we could not know this if we did not know that this was a neutron star merger, and that information comes only from the gravitational wave observation.


Did the merger of these two neutron stars result in a new black hole, a larger neutron star, or an unstable rapidly spinning neutron star that later collapsed into a black hole?

We don’t yet know, and maybe we won’t know.  Some scientists involved appear to be leaning toward the possibility that a black hole was formed, but others seem to say the jury is out.  I’m not sure what additional information can be obtained over time about this.

If the two neutron stars formed a black hole, why was there a kilonova?  Why wasn’t everything sucked into the black hole?

Black holes aren’t vacuum cleaners; they pull things in via gravity just the same way that the Earth and Sun do, and don’t suck things in some unusual way.  The only crucial thing about a black hole is that once you go in you can’t come out.  But just as when trying to avoid hitting the Earth or Sun, you can avoid falling in if you orbit fast enough or if you’re flung outward before you reach the edge.

The point in a neutron star merger is that the forces at the moment of merger are so intense that one or both neutron stars are partially ripped apart.  The material that is thrown outward in all directions, at an immense speed, somehow creates the bright, hot flash of gamma rays and eventually the kilonova glow from the newly formed atomic nuclei.  Those details I don’t yet understand, but I know they have been carefully studied both with approximate equations and in computer simulations such as this one and this one.  However, the accuracy of the simulations can only be confirmed through the detailed studies of a merger, such as the one just announced.  It seems, from the data we’ve seen, that the simulations did a fairly good job.  I’m sure they will be improved once they are compared with the recent data.




Filed under: Astronomy, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO, neutron stars

by Matt Strassler at October 17, 2017 04:03 PM

Tommaso Dorigo - Scientificblogging

Merging Neutron Stars: Why It's A Breakthrough, And Why You Should Stand Your Ground
Like many others, I listened to yesterday's (10/16/17) press release at the NSF without a special prior insight in the physics of neutron star mergers, or in the details of the measurements we can extract from the many observations that the detected event made possible. My knowledge of astrophysics is quite incomplete and piecemeal, so in some respects I could be considered a "layman" listening to a science outreach seminar.

Yet, of course, as a physicist I have a good basic understanding of the processes at the heart of the radiation emissions that took place two hundred million years ago in that faint, otherwise inconspicuous galaxy in Hydra. 

read more

by Tommaso Dorigo at October 17, 2017 12:16 PM

October 16, 2017

Peter Coles - In the Dark

GW News Day

Well, it has certainly been an eventful last day in India!

Over a hundred people gathered at IUCAA to see this evening’s press conference, which basically confirmed most of the rumours that had been circulating that a Gamma Ray Burst had been detected in both GW and EM radiation. I won’t write in detail about today’s announcement because (a) a really useful page of resources has been prepared by the group at IUCAA. Check out the fact sheet and (b) I haven’t really had time to digest all the science yet.

I will mention a couple of things, however. One is that the signal-to-noise ratio of this detection is a whopping 32.4, a value that astronomers can usually only dream of! The other is that neutron star coalescences offer the possibility of bypassing the traditional `distance ladder’ approaches to get an independent measurement of the Hubble constant. The value obtained is in the range 62 to 107 km s-1 Mpc-1, which is admittedly fairly broad, but is based on only one observation of this type. It is extremely impressive to be straddling the target with the very first salvo.

The LIGO collaboration is over a thousand people. Add to that the staff of no fewer than seventy observatories (including seven in space). With all that’s going on in the world, it’s great to see what humans of different nations across the globe can do when they come together and work towards a common goal. Scientific results of this kind will be remembered long after the silly ramblings of our politicians and other fools have been forgotten.

I took part in a panel discussion after the results were presented, but sadly I won’t be here to see tomorrow’s papers. I hope people will save cuttings or post weblinks if there are any articles!

UPDATE: Here is a selection of the local press coverage.

Indian LIGO


As if these thrilling science results weren’t enough I finally managed to meet my old friend and former collaborator Varun Sahni (who was away last week). An invitation to dinner at his house was not to be resisted on my last night here, which explains why I didn’t write a post immediately after the press conference. Still, I’ve got plenty of papers to read on the plane tomorrow so maybe I’ll do something when I get back.

Tomorrow morning I get up early to return to Mumbai for the flight home, and am not likely to be online again until Wednesday UK time.

Thanks to all at IUCAA (and TIFR) for making my stay so pleasant and interesting. It’s been 23 years since I was last here. I hope it’s not so long before I’m back again!

by telescoper at October 16, 2017 07:25 PM

Sean Carroll - Preposterous Universe

Standard Sirens

Everyone is rightly excited about the latest gravitational-wave discovery. The LIGO observatory, recently joined by its European partner VIRGO, had previously seen gravitational waves from coalescing black holes. Which is super-awesome, but also a bit lonely — black holes are black, so we detect the gravitational waves and little else. Since our current gravitational-wave observatories aren’t very good at pinpointing source locations on the sky, we’ve been completely unable to say which galaxy, for example, the events originated in.

This has changed now, as we’ve launched the era of “multi-messenger astronomy,” detecting both gravitational and electromagnetic radiation from a single source. The event was the merger of two neutron stars, rather than black holes, and all that matter coming together in a giant conflagration lit up the sky in a large number of wavelengths simultaneously.

Look at all those different observatories, and all those wavelengths of electromagnetic radiation! Radio, infrared, optical, ultraviolet, X-ray, and gamma-ray — soup to nuts, astronomically speaking.

A lot of cutting-edge science will come out of this, see e.g. this main science paper. Apparently some folks are very excited by the fact that the event produced an amount of gold equal to several times the mass of the Earth. But it’s my blog, so let me highlight the aspect of personal relevance to me: using “standard sirens” to measure the expansion of the universe.

We’re already pretty good at measuring the expansion of the universe, using something called the cosmic distance ladder. You build up distance measures step by step, determining the distance to nearby stars, then to more distant clusters, and so forth. Works well, but of course is subject to accumulated errors along the way. This new kind of gravitational-wave observation is something else entirely, allowing us to completely jump over the distance ladder and obtain an independent measurement of the distance to cosmological objects. See this LIGO explainer.

The simultaneous observation of gravitational and electromagnetic waves is crucial to this idea. You’re trying to compare two things: the distance to an object, and the apparent velocity with which it is moving away from us. Usually velocity is the easy part: you measure the redshift of light, which is easy to do when you have an electromagnetic spectrum of an object. But with gravitational waves alone, you can’t do it — there isn’t enough structure in the spectrum to measure a redshift. That’s why the exploding neutron stars were so crucial; in this event, GW170817, we can for the first time determine the precise redshift of a distant gravitational-wave source.

Measuring the distance is the tricky part, and this is where gravitational waves offer a new technique. The favorite conventional strategy is to identify “standard candles” — objects for which you have a reason to believe you know their intrinsic brightness, so that by comparing to the brightness you actually observe you can figure out the distance. To discover the acceleration of the universe, for example,  astronomers used Type Ia supernovae as standard candles.

Gravitational waves don’t quite give you standard candles; every one will generally have a different intrinsic gravitational “luminosity” (the amount of energy emitted). But by looking at the precise way in which the source evolves — the characteristic “chirp” waveform in gravitational waves as the two objects rapidly spiral together — we can work out precisely what that total luminosity actually is. Here’s the chirp for GW170817, compared to the other sources we’ve discovered — much more data, almost a full minute!
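At leading order in Einstein's theory, the rate at which the chirp frequency climbs fixes a particular combination of the two masses (the "chirp mass"), and from that the intrinsic gravitational luminosity follows. Here is a sketch of that inversion; the frequency and its rate of change below are made-up illustrative values of roughly the right size for a binary neutron star, not measured GW170817 numbers:

```python
import math

# Leading-order chirp relation:
#   f_dot = (96/5) * pi**(8/3) * (G*Mc/c**3)**(5/3) * f**(11/3)
# Inverting it gives the chirp mass Mc from a measured (f, f_dot) pair.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def chirp_mass(f, f_dot):
    """Chirp mass (kg) from GW frequency f (Hz) and its time derivative f_dot (Hz/s)."""
    k = (5.0 / 96.0) * math.pi ** (-8.0 / 3.0) * f ** (-11.0 / 3.0) * f_dot
    return k ** (3.0 / 5.0) * c ** 3 / G

# Illustrative numbers, roughly right for a binary neutron star near 100 Hz:
mc_solar = chirp_mass(f=100.0, f_dot=17.0) / M_SUN
print(f"chirp mass ~ {mc_solar:.2f} solar masses")
```

The point of the design is that nothing about the source's distance enters this relation, so the waveform tells you the intrinsic "loudness" directly; comparing it with the observed wave amplitude then yields the distance.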

So we have both distance and redshift, without using the conventional distance ladder at all! This is important for all sorts of reasons. An independent way of getting at cosmic distances will allow us to measure properties of the dark energy, for example. You might also have heard that there is a discrepancy between different ways of measuring the Hubble constant, which either means someone is making a tiny mistake or there is something dramatically wrong with the way we think about the universe. Having an independent check will be crucial in sorting this out. Just from this one event, we are able to say that the Hubble constant is 70 kilometers per second per megaparsec, albeit with large error bars (+12, -8 km/s/Mpc). That will get much better as we collect more events.

So here is my (infinitesimally tiny) role in this exciting story. The idea of using gravitational-wave sources as standard sirens was put forward by Bernard Schutz all the way back in 1986. But it’s been developed substantially since then, especially by my friends Daniel Holz and Scott Hughes. Years ago Daniel told me about the idea, as he and Scott were writing one of the early papers. My immediate response was “Well, you have to call these things `standard sirens.'” And so a useful label was born.

Sadly for my share of the glory, my Caltech colleague Sterl Phinney also suggested the name simultaneously, as the acknowledgments to the paper testify. That’s okay; when one’s contribution is this extremely small, sharing it doesn’t seem so bad.

By contrast, the glory attaching to the physicists and astronomers who pulled off this observation, and the many others who have contributed to the theoretical understanding behind it, is substantial indeed. Congratulations to all of the hard-working people who have truly opened a new window on how we look at our universe.

by Sean Carroll at October 16, 2017 03:52 PM

Matt Strassler - Of Particular Significance

A Scientific Breakthrough! Combining Gravitational and Electromagnetic Waves

Gravitational waves are now the most important new tool in the astronomer’s toolbox.  Already they’ve been used to confirm that large black holes — with masses ten or more times that of the Sun — and mergers of these large black holes to form even larger ones, are not uncommon in the universe.   Today it goes a big step further.

It’s long been known that neutron stars, remnants of collapsed stars that have exploded as supernovas, are common in the universe.  And it’s been known almost as long that sometimes neutron stars travel in pairs.  (In fact that’s how gravitational waves were first discovered, indirectly, back in the 1970s.)  Stars often form in pairs, and sometimes both stars explode as supernovas, leaving their neutron star relics in orbit around one another.  Neutron stars are small — just ten or so kilometers (about six miles) across.  According to Einstein’s theory of gravity, a pair of stars should gradually lose energy by emitting gravitational waves into space, and slowly but surely the two objects should spiral in on one another.   Eventually, after many millions or even billions of years, they collide and merge into a larger neutron star, or into a black hole.  This collision does two things.

  1. It makes some kind of brilliant flash of light — electromagnetic waves — whose details are only guessed at.  Some of those electromagnetic waves will be in the form of visible light, while much of the light will be in invisible forms, such as gamma rays.
  2. It makes gravitational waves, whose details are easier to calculate and which are therefore distinctive, but couldn’t have been detected until LIGO and VIRGO started taking data, LIGO over the last couple of years, VIRGO over the last couple of months.

It’s possible that we’ve seen the light from neutron star mergers before, but no one could be sure.  Wouldn’t it be great, then, if we could see gravitational waves AND electromagnetic waves from a neutron star merger?  It would be a little like seeing the flash and hearing the sound from fireworks — seeing and hearing is better than either one separately, with each one clarifying the other.  (Caution: scientists are often speaking as if detecting gravitational waves is like “hearing”.  This is only an analogy, and a vague one!  It’s not at all the same as acoustic waves that we can hear with our ears, for many reasons… so please don’t take it too literally.)  If we could do both, we could learn about neutron stars and their properties in an entirely new way.

Today, we learned that this has happened.  LIGO, with the world’s first two gravitational wave observatories, detected the waves from two merging neutron stars, 130 million light years from Earth, on August 17th.  (Neutron star mergers last much longer than black hole mergers, so the two are easy to distinguish; and this one was so close, relatively speaking, that it was seen for a long while.)  VIRGO, with the third detector, allows scientists to triangulate and determine roughly where mergers have occurred.  They saw only a very weak signal, but that was extremely important, because it told the scientists that the merger must have occurred in a small region of the sky where VIRGO has a relative blind spot.  That told scientists where to look.

The merger was detected for more than a full minute… to be compared with black holes whose mergers can be detected for less than a second.  It’s not exactly clear yet what happened at the end, however!  Did the merged neutron stars form a black hole or a neutron star?  The jury is out.

At almost exactly the moment at which the gravitational waves reached their peak, a blast of gamma rays — electromagnetic waves of very high frequencies — was detected by a different scientific team, the one from FERMI. FERMI detects gamma rays from the distant universe every day, and a two-second gamma-ray burst is not unusual.  And INTEGRAL, another gamma ray experiment, also detected it.   The teams communicated within minutes.   The FERMI and INTEGRAL gamma ray detectors can only indicate the rough region of the sky from which their gamma rays originate, and LIGO/VIRGO together also only give a rough region.  But the scientists saw that those regions overlapped.  The evidence was clear.  And with that, astronomy entered a new, highly anticipated phase.

Already this was a huge discovery.  Brief gamma-ray bursts have been a mystery for years.  One of the best guesses as to their origin has been neutron star mergers.  Now the mystery is solved; that guess is apparently correct. (Or is it?  Probably, but the gamma ray discovery is surprisingly dim, given how close it is.  So there are still questions to ask.)

Also confirmed by the fact that these signals arrived within a couple of seconds of one another, after traveling for over 100 million years from the same source, is that, indeed, the speed of light and the speed of gravitational waves are exactly the same — both of them equal to the cosmic speed limit, just as Einstein’s theory of gravity predicts.

Next, these teams quickly told their astronomer friends to train their telescopes in the general area of the source. Dozens of telescopes, from every continent and from space, and looking for electromagnetic waves at a huge range of frequencies, pointed in that rough direction and scanned for anything unusual.  (A big challenge: the object was near the Sun in the sky, so it could be viewed in darkness only for an hour each night!) Light was detected!  At all frequencies!  The object was very bright, making it easy to find the galaxy in which the merger took place.  The brilliant glow was seen in gamma rays, ultraviolet light, infrared light, X-rays, and radio.  (Neutrinos, particles that can serve as another way to observe distant explosions, were not detected this time.)

And with so much information, so much can be learned!

Most important, perhaps, is this: from the pattern of the spectrum of light, the conjecture seems to be confirmed that the mergers of neutron stars are important sources, perhaps the dominant one, for many of the heavy chemical elements — iodine, iridium, cesium, gold, platinum, and so on — that are forged in the intense heat of these collisions.  It used to be thought that the same supernovas that form neutron stars in the first place were the most likely source.  But now it seems that this second stage of neutron star life — merger, rather than birth — is just as important.  That’s fascinating, because neutron star mergers are much more rare than the supernovas that form them.  There’s a supernova in our Milky Way galaxy every century or so, but it’s tens of millennia or more between these “kilonovas”, created in neutron star mergers.

If there’s anything disappointing about this news, it’s this: almost everything that was observed by all these different experiments was predicted in advance.  Sometimes it’s more important and useful when some of your predictions fail completely, because then you realize how much you have to learn.  Apparently our understanding of gravity, of neutron stars, and of their mergers, and of all sorts of sources of electromagnetic radiation that are produced in those mergers, is even better than we might have thought. But fortunately there are a few new puzzles.  The X-rays were late; the gamma rays were dim… we’ll hear more about this shortly, as NASA is holding a second news conference.

Some highlights from the second news conference:

  • New information about neutron star interiors, which affects how large they are and therefore how exactly they merge, has been obtained
  • The first ever visual-light image of a gravitational wave source, from the Swope telescope, at the outskirts of a distant galaxy; the galaxy’s center is the blob of light, and the arrow points to the explosion.

  • The theoretical calculations for a kilonova explosion suggest that debris from the blast should rather quickly block the visual light, so the explosion dims quickly in visible light — but infrared light lasts much longer.  The observations by the visible and infrared light telescopes confirm this aspect of the theory; and you can see evidence for that in the picture above, where four days later the bright spot is both much dimmer and much redder than when it was discovered.
  • Estimate: the total mass of the gold and platinum produced in this explosion is vastly larger than the mass of the Earth.
  • Estimate: these neutron stars were formed about 10 or so billion years ago.  They’ve been orbiting each other for most of the universe’s history, and ended their lives just 130 million years ago, creating the blast we’ve so recently detected.
  • Big Puzzle: all of the gamma-ray bursts seen up to now have shone in ultraviolet light and X-rays as well as gamma rays.   But X-rays didn’t show up this time, at least not initially.  This was a big surprise.  It took 9 days for the Chandra telescope to observe X-rays, which were too faint for any other X-ray telescope.  Does this mean that the two neutron stars created a black hole, which then created a jet of matter that points not quite directly at us but off-axis, and shines by illuminating the matter in interstellar space?  This had been suggested as a possibility twenty years ago, but this is the first time there’s been any evidence for it.
  • One more surprise: it took 16 days for radio waves from the source to be discovered, with the Very Large Array, the most powerful existing radio telescope.  The radio emission has been growing brighter since then!  As with the X-rays, this seems also to support the idea of an off-axis jet.
  • Nothing quite like this gamma-ray burst has been seen — or rather, recognized — before.  When a gamma-ray burst doesn’t have an X-ray component showing up right away, it simply looks odd and a bit mysterious.  It’s harder to observe than most bursts, because without a jet pointing right at us, its afterglow fades quickly.  Moreover, a jet pointing at us is bright, so it blinds us to the more detailed and subtle features of the kilonova.  But this time, LIGO/VIRGO told scientists that “Yes, this is a neutron star merger”, leading to detailed study at all electromagnetic frequencies, including patient study over many days of the X-rays and radio.  In other cases those observations would have stopped after just a short time, and the whole story couldn’t have been properly interpreted.



Filed under: Astronomy, Gravitational Waves

by Matt Strassler at October 16, 2017 03:10 PM

Peter Coles - In the Dark

Homes from Home in Pune

Since I’m coming back tomorrow, I thought I’d wander around this morning and take a few pictures of where I’ve been staying for most of the last 10 days or so. First, this is a snap of the housing complex which contains my guest apartment.

I’m actually in the first building on the right. Here is the front door.

The faculty at both IUCAA (Pune) and TIFR (Mumbai) live in housing areas provided by their respective institutions, so they form quite a close-knit community. Some of the senior staff at IUCAA are housed just round the corner from my place.

IUCAA is on the Pune University campus (except that it has its own entrance from the main road that runs along the northern edge of the campus, where there is a security post). There are a few of these around the IUCAA site itself, so it is very secure and quite private. The campus is large, with many tree-lined roads. At its heart, on a small hill, you can find this building:

This is (or was) the Raj Bhavan (‘Government House’), and it was essentially the Governor of Maharashtra’s residence during the monsoon season. Built in 1866, it was a sort of home-from-home for when Bombay (the state capital) became too unbearable.

When I was last here in 1994, this was the Main Building of the University and was quite busy. Now, however, it seems to be disused and is in a state of some disrepair; the gardens also need a bit of love and attention. There are many new buildings around the University of Pune campus (including a modern administration block nearby). Since this building is a relic of the old colonial days, it may be that it will be demolished to make way for something that better suits modern India. By the way, there’s a stone slab just next to the site of this building that displays the preamble to the Constitution of India, as adopted in 1949.

Anyway, this afternoon and evening promise to be quite busy. There is a press conference at IUCAA at 6.30pm about the gravitational waves discovery I mentioned a few days ago. There will be presentations before a viewing of the live feed from Washington DC then there’ll be a panel answering questions from the press. They’ve asked me to be on the panel, so I might appear in the India media, but as I’m leaving first thing tomorrow morning I probably won’t see any of the coverage!

by telescoper at October 16, 2017 07:57 AM

October 14, 2017

ZapperZ - Physics and Physicists

Lazy Reporting And Taking Way Too Much Credit
It is not surprising that whenever a major discovery is made or a major award is given, many people and institutions want to ride the coattails and be a part of it. I understand that.

But sometimes it is stretched a bit waaaay too much, especially when the report itself sounds very lazy and weak.

The recent announcement of the Nobel Prize in physics being awarded to three figures who were instrumental in the discovery of gravitational waves seems to be one such case. I stumbled across a news article from what I believe is a local newspaper called the "Gonzales Weekly Citizen". The headline said:

LSU scientists win Nobel Prize in Physics

Of course, that piqued my interest, since I didn't know that any of the three men awarded the prize were associated with LSU (Louisiana State University, for those who are not familiar with it).

Now, it seems that the reporter is playing fast and loose. Rainer Weiss is listed as an "adjunct professor" in the LSU physics department. We all know that an adjunct professor is little more than a "contractor": that person is not considered a staff member, but rather is hired on a per-term or per-contract basis. In most cases, the person is primarily associated with another institution rather than the one where he/she is an adjunct professor.

In fact, in this case, Rainer Weiss is better known for his association with MIT than with anywhere else. That is what is listed in all the news reports about this award. Indeed, if you look at the Nobel Prize page that announced the award, the profile on Weiss says:

Affiliation at the time of the award: LIGO/VIRGO Collaboration, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA

No mention of LSU. In fact, the LIGO project itself is a consortium of many universities, jointly administered by MIT and Caltech. One of the facilities may be in Louisiana, and LSU is involved in the project, but that's about it. They should be proud of their contribution to the project, but to overplay it to this level is not quite right.

So this news report is misleading at best!

But that's not all! There's a certain level of laziness in the reporting.

LSU adjunct professor and MIT professor Emeritus Rainer Weiss and California professor Emeritus Kip Thorne are co-founders of the collaboration. Weiss won half of the prize, and the other half went to the California Institute of Technology professors involved.

I'm sorry, but they could not even be bothered to mention Barry Barish's name? He's relegated to being part of the "... California Institute of Technology professors involved." REALLY!

As I said, rather lazy reporting.


by ZapperZ at October 14, 2017 03:56 PM

Clifford V. Johnson - Asymptotia

Coming Soon…

Adding sound to trailer!! To appear soon. So you'll get to look inside the new book!


#thedialoguesbook #sketching #drawing #sketches #graphicnovels

The post Coming Soon… appeared first on Asymptotia.

by Clifford at October 14, 2017 01:58 AM

October 13, 2017

Sean Carroll - Preposterous Universe

Mind-Blowing Quantum Mechanics

Trying to climb out from underneath a large pile of looming (and missed) deadlines, and in the process I’m hoping to ramp back up the real blogging. In the meantime, here are a couple of videos to tide you over.

First, an appearance a few weeks ago on Joe Rogan’s podcast. Rogan is a professional comedian and mixed-martial arts commentator, but has built a great audience for his wide-ranging podcast series. One of the things that makes him a good interviewer is his sincere delight in the material, as evidenced here by noting repeatedly that his mind had been blown. We talked for over two and a half hours, covering cosmology and quantum mechanics but also some bits about AI and pop culture.

And here’s a more straightforward lecture, this time at King’s College in London. The topic was “Extracting the Universe from the Wave Function,” which I’ve used for a few talks that ended up being pretty different in execution. This one was aimed at undergraduate physics students, some of whom hadn’t even had quantum mechanics. So the first half is a gentle introduction to many-worlds theory and why it’s the best version of quantum mechanics, and the second half tries to explain our recent efforts to emerge space itself out of quantum entanglement.

I was invited to King’s by Eugene Lim, one of my former grad students and now an extremely productive faculty member in his own right. It’s always good to see your kids grow up to do great things!

by Sean Carroll at October 13, 2017 03:01 PM

Lubos Motl - string vacua and pheno

Calls to dumb down science at Wikipedia have to be dismissed
Journalists, i.e. pompous fools, love to pretend they understand things even if they don't have the slightest clue

I have started hundreds of science and hundreds of non-science articles at Wikipedia, edited thousands of others, and actually gained some automatic administrator privileges that allow me to edit certain articles when most regular people can't. Wikipedia isn't perfect but it's been immensely helpful to me – and, I think, to many of you – many times. Well, it's fashionable to sling mud at Wikipedia but scientists use Wikipedia more than they admit. A project like that had to be created, but I am still grateful to Jimbo Wales for actually turning the vision into reality some years ago – Wikipedia is currently the fifth most visited website in the world.

Now, Wikipedia isn't perfect and in many cases, its texts are biased if not downright untrue. I think it's obvious that politically flavored articles are mostly left-leaning. Whenever a topic has been politicized, you should be careful and realize that someone could have hidden some key information or promoted some fishy memes. In particular, whenever you read an article related to the debate about climate change, it is very likely that William Connolley, an official at the U.K. Green Party, has "touched" it. In recent years, however, his vegetarian diet has basically destroyed his brain so he is no longer able to write a comprehensible sentence.

I would say that in most cases, the key facts and definitions are included in the important enough articles and if there's some bias, it's just the bias in the tone in which the article is written. When it is so, a sensible reader such as you may still extract the useful information and rephrase it in a neutral way which removes all the left-wing flavor.

Hours ago, journalist Michael Byrne at Motherboard.Vice.Com claimed that
Wikipedia’s Science Articles Are Elitist
His subtitle says
Maybe Wikipedia readers shouldn’t need science degrees to digest articles about basic topics. Just an idea.
Well, it's an extremely stupid and pernicious idea. Articles about scientific topics such as those he mentioned are written in the elitist, rigorous enough, jargon-dependent style because they're articles about objects and concepts that are being used by scientists, an elite, and science needs a certain amount of rigor and jargon. You don't need the actual degrees to understand specialized science articles but you need the same skills or knowledge that could bring you an actual degree if you wanted to get one.

If you don't have the skills or knowledge that are necessary for people to get science degrees, you shouldn't be surprised that you can't understand articles about science.

Byrne wrote his rant because his brain had trouble with articles about nonribosomal peptides, electroweak interaction, and graphene. If you're seriously interested in these topics – like a student or a layman who has simply gotten to a comparable level – you will appreciate the comprehensive and comprehensible "real deal" information you may learn about these topics.

In the subtitle of his tirade, Byrne has incredibly claimed that Wikipedia's articles about "basic topics" should be addressed to laymen without degrees, but his first example was "nonribosomal peptides". He must surely be joking, right? In which universe may "nonribosomal peptides" be considered a "basic topic"? It's a term in biochemistry or molecular biology that is composed of two words, so it's surely not a "basic topic". Even most of the one-word terms in molecular biology could be said not to be "basic topics".

Who is actually expected to read such an article and find it useful? It's some undergraduate or graduate student who writes a paper about some related topic but isn't an expert in peptides or anything of the sort. So he may quickly become a "superficial expert" by reading the article. Or a researcher in any adjacent field. Or a journalist who is really, really good and capable of going beneath the surface – a science journalist who is working on himself or herself and tries to be closer to the actual scientists than to the laymen. A meaningful reader of the article about "nonribosomal peptides" must be someone whose skills place him on the scientific side of an aisle, a person who is at least capable of studying new texts about science.

Obviously, you're not supposed to be an expert in "nonribosomal peptides" if you find this article really useful (but even some experts may find it useful, I am convinced) but you must be in the experts' "universality class". You must have been able to learn about some "comparably difficult" concepts in the past. If you never have, you shouldn't expect to understand "nonribosomal peptides". No people working in a "regular occupation" may ever encounter "nonribosomal peptides" in any meaningful or natural way.

Try to emulate how a reader who is really not a molecular biologist reads the article about nonribosomal peptides if he accidentally gets there. It starts by:
Nonribosomal peptides (NRP) are a class of peptide secondary metabolites, usually produced by microorganisms like bacteria and fungi.
OK, first, we see that they may be written as NRP, an acronym. It's nice that molecular biologists can use acronyms. "N" stands for "non", "R" stands for "ribosomal", and "P" stands for "peptides". Cool, we learned something about the language but nothing about the substance so far. OK, next we learn that they're a "class". It's something like your classmates. If you don't know what a class is, Mr Byrne, you may imagine a "bunch", and that's good enough. ;-)

After the word "of", things get tougher. NRPs are "peptide secondary metabolites". A typical person outside molecular biology doesn't know what it means because "peptide secondary metabolites" sound as difficult as NRPs. However, if you know some molecular biology, or at least the typical "strategy" how definitions of complex terms are written everywhere in science, you will agree with me that progress has actually been accomplished.

"Peptides" are more elementary than "nonribosomal peptides". If you click at peptides, you will immediately learn that they're some chains of amino acid monomers. OK, some chains in biological molecules. You may extract some other words that you're intrigued by on the peptide page. But you will probably be intimidated soon – you're not really interested in peptides, are you? OK, similarly, you may click at the phrase secondary metabolite to see that they're some food-like compounds that aren't quite vital for growth but without them, you will feel unhealthy after some time. The articles are written in more science-like words than the words I am using but if you keep on clicking, you may learn the meaning of all these things "in layman's terms" after several clicks.

When you're not a real biochemist, not even someone who is working on becoming one, de iure or de facto, you will give up soon. And that's how things should be. There exists absolutely no logical justification why a random layman or outsider should read, or should be able or willing to read, whole long encyclopedic entries about things like "nonribosomal peptides".

Mr Byrne indicates that when he misunderstands a text, the problem must be on the side of the writers. But this arrogant attitude is a result of the society's treatment of these spoiled brats, mediocre liars and filth generally known as the journalists. I am oversimplifying things now, so that Mr Byrne has a chance to understand, but my oversimplification is very accurate for Mr Byrne himself. In reality, average journalists aren't any elite. Just look at the undergraduates' IQ in different fields. Physicists at the top of the table had 130 while "communication" was the third layer from the bottom and they were at 112. That's less than one standard deviation above the average Joe. Chemists had 124. You can see that the IQ of the "communication folks" is exactly in between "chemists" and the "average Joe". What a surprise that a journalist may have about 50% of the problems with a text about biochemistry that the average Joe has – and this amount of problems may still feel as "too much".
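The arithmetic behind that comparison is easy to check. A quick illustration (assuming the conventional IQ scale with mean 100 and standard deviation 15):

```python
# Figures quoted above: undergraduate IQ by major,
# on the conventional scale (mean 100, standard deviation 15).
mean_iq, sd = 100, 15
physics, chemistry, communication = 130, 124, 112

# "Communication" sits less than one standard deviation above the average Joe:
z_comm = (communication - mean_iq) / sd
print(z_comm)  # 0.8

# and exactly in between chemists and the average Joe:
midpoint = (chemistry + mean_iq) / 2
print(midpoint)  # 112.0
```

So the "50% of the problems" remark is just the observation that 112 is the midpoint of 100 and 124.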

The electroweak interaction is another article that Mr Byrne criticizes. I obviously find it elementary because that's the field in which I was trained – and I was interested in since my teenage years. But even for other people, is it so hard? Most people start at the beginning:
In particle physics, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction.
Now, which word is unclear to you? It is the "unified description". It means that one can describe two things. And they're described in a unified way, i.e. simultaneously. Unified description of what? Two of four "fundamental interactions". You may click at "fundamental interactions" if you don't understand the phrase – that's the beauty of the hypertext and web. But frankly, I don't think that people who realize that they have trouble with "fundamental interactions" should read an article about "electroweak interaction".

A person who thinks like a physicist understands – or has rediscovered – that the world must apparently run on some most elementary processes of some kind, and those are naturally termed the fundamental interactions. The phrase is really self-explanatory. And yes, the two interactions that are unified are the electromagnetic and weak force. Again, you may click to learn what they are. And so on.

My general point is that the words used even in the first sentence of such articles may still be difficult and require extra explanations. Nevertheless, progress is made in the explanation because these difficult terms are more elementary than the terms that one is trying to explain. And that's how science and its definitions work. One is "reducing" a complex thing – or a complex, composite concept – to simpler ones.

But one usually doesn't reduce it "directly" to concepts that are clear to the layman. That's simply not how science works. Sciences – especially physics but even biochemistry, as we have seen – are skyscrapers with many floors that are built on top of each other. If you don't have a clue what a peptide or even an organic molecule is, you simply shouldn't expect to immediately understand what a "nonribosomal peptide" is. This expectation is exactly as foolish as attempts to build the 7th floor of a skyscraper at the moment when 3rd and 4th floors are not yet built. It's just not possible. It's obvious that it's not possible and it's obvious why it's not possible.

I think that everyone with common sense understands that not all problems – e.g. problems with the understanding of a scientific term – can be resolved or are resolved "immediately". But the actual reason why Mr Byrne tries to present this mundane fact as a "scandal" is that he is being reminded that his expertise isn't just one floor beneath those who can edit Wikipedia articles on physics or biochemistry – and not all of these people have to be the global elite. He is obviously several floors beneath these people and he simply can't swallow this fact. He thinks that as a science journalist, he has the right to be assured by everyone – and every page on the Internet – that he is on par with the actual researchers or at most one floor beneath them. But he's obviously not.

Another example of a difficult text that Mr Byrne picked is graphene. He compared it with the entry at the Encyclopedia Britannica which seems clearer to Mr Byrne. The first sentences say:
Graphene is an allotrope of carbon in the form of a two-dimensional, atomic-scale, hexagonal lattice in which one atom forms each vertex.

Graphene, a two-dimensional form of crystalline carbon, either a single layer of carbon atoms forming a honeycomb (hexagonal) lattice or several coupled layers of this honeycomb structure.
Which one is clearer? They're about the same. The first, Wikipedia text talks about a "hexagonal lattice" while the second, from Britannica, talks about "this honeycomb structure", though it also uses the adjective "hexagonal" and the noun "lattice". Both say that graphene is "two-dimensional". So in the end, the only difference is that the Wikipedia text has omitted laymen's words such as "honeycomb" and is more concise in general; and it has used the difficult term "allotrope".

But is it really a problem that you read a sentence with a word such as "allotrope"? I don't use the word myself. It may even be said that I strictly speaking didn't understand the word even passively. And I would bet that a big portion of the world's graphene experts don't actually actively use the word, either. But the meaning is clear from the context. On top of that, you can click at allotropes of carbon to immediately see a picture of various shapes of the infinite networks of carbon atoms, so you know what they mean without reading a single word on that page! But yes, sometimes, to be sure, you need at least to click a few times and to look at a few pictures.

Why should it be even simpler to get the meaning of the words? If "allotropes" were replaced by childish words everywhere, what benefits could compensate for the reduced rigor and precision of the pages?

What is actually happening is that Mr Byrne is part of a degenerate culture of people who don't really understand a damn thing but constantly pretend that they understand everything, or at least everything that is important. It's just bad. It's a reason why lots of important questions are being decided by self-confident morons such as Mr Byrne these days. Nonribosomal peptides, electroweak interaction, and graphene aren't terms from the everyday life of an average person, so it should be absolutely unsurprising that it's nontrivial to understand – even superficially – what these terms actually mean.

For a regular person, and especially an educated one, it's much more important to understand that things may be hard and that there are lots of things that experts may understand while he doesn't than to understand any particular thing, e.g. what is roughly a peptide. For decades, the laymen and the journalists were dishonestly told "you can actually understand and work on things easily as well" – because popularizers of science wanted to look folksy. The result is that science itself has become endangered by these stupid yet overly self-confident laymen. They basically demand scientific encyclopedic entries to be removed from the Internet, and they're suggesting many worse things, too. This "anti-elitist" process has had several stages. Populist scientists have persuaded journalists that they are "basically experts" if they learn some childish caricatures of science. And these journalists have assured their readers that the readers are also "basically experts" once they learn some even more oversimplified or misleading slogans – in extreme examples, slogans like "climate change is real". The result is that no one has a clue but everyone thinks he is very smart and demands that everyone else be equally "smart".

Folks like Mr Byrne don't have a clue. They're not capable of reading introductory texts that were written for the broader but intelligent public. But the likes of Mr Byrne nevertheless feel self-confident enough to "teach" everyone about climate change, peptides, graphene, quantum mechanics, and sometimes even electroweak theory if not string theory (I found his text because of a breathtakingly idiotic sentence that includes "string theory"). I am sorry, but you should not. If you're not capable of reading the Wikipedia introductions to these concepts, you shouldn't pretend in front of your readers that you know what you're talking about, because you don't. If you pretend to understand while you can't swallow even the introductory articles, you are deceiving your readers.

Mostly off-topic, inflation wars: The Backreaction has declared that "cosmic inflation is no longer science". I won't write about the inflation wars because she doesn't really bring anything new and she is reacting to events that everyone discussed half a year ago or more. But let me just say that the very idea that "suddenly" a part of science ceases to be science proves that she's not thinking as a scientist. Either something is and always has been science, or it isn't and never was science. Her changing moods are a "female contribution" to science – as Sheldon Cooper would point out, we learn much more about the stages of her menstrual cycle than about cosmology from her article.

Her claims (and those by Steinhardt et al.) that the usual justifications for inflation aren't real or aren't serious show that she is exactly one of the superficial readers of simplified encyclopedia entries and slogans – such as those that Mr Byrne wanted to have above – but she has never really understood how (almost) any of these ideas work. The statement that flatness would be doubly exponentially unlikely without a preceding phase such as inflation is a fact. One may derive it just like \(11\times 11 = 121\). And it's about the same with all the other profound facts she tries to mock. If someone claims to be a physicist in a nearby field and doesn't get these facts, it's too bad. When the media and some other segments of the society play the game that similar fake scientists are real, it's harmful to the whole society.
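The flatness argument can be sketched in a few lines (a standard textbook derivation, given here only as an illustration):

```latex
% Friedmann equation, rearranged to isolate the deviation from flatness:
\[
  \Omega(a) - 1 = \frac{k}{a^{2}H^{2}} .
\]
% In radiation domination H^2 \propto a^{-4}, so |\Omega - 1| \propto a^2;
% in matter domination H^2 \propto a^{-3}, so |\Omega - 1| \propto a.
% Either way the deviation grows with time, so running today's bound
% |\Omega_0 - 1| \lesssim 10^{-2} backwards to the Planck epoch gives
\[
  \bigl|\Omega - 1\bigr|_{t_{\mathrm{Pl}}} \lesssim 10^{-60},
\]
% an absurdly fine-tuned initial condition.  During inflation, H is
% nearly constant while a \propto e^{Ht}, so after N e-folds
\[
  \bigl|\Omega - 1\bigr| \propto e^{-2N},
\]
% and N \gtrsim 60 drives the curvature term to negligible values,
% making the observed flatness natural rather than doubly-exponentially
% unlikely.
```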

by Luboš Motl at October 13, 2017 01:27 PM

October 11, 2017

Tommaso Dorigo - Scientificblogging

Trevor Hastie Lectures In Padova
Trevor Hastie, the Stanford University guru of Statistical Learning (he coined the term together with his colleagues Tibshirani and Friedman), is in Padova this week, where he is giving a short course on his pet topic and a seminar. I am happy to report this, as it was made possible in part by AMVA4NewPhysics, the European Union-funded network of which I am the project coordinator. But most of the merit goes to Prof. Giovanna Menardi, PI of the Padova node of the network, who organized it... And of course I am happy because I am learning from his insightful lectures!

(Above, prof. Menardi introduces the lectures).

read more

by Tommaso Dorigo at October 11, 2017 05:33 PM

Emily Lakdawalla - The Planetary Society Blog

Planetary Society-funded telescopes help find ring around Haumea, a distant dwarf planet
Haumea has a ring! Two telescopes used in the discovery—one in Slovenia, and one in Italy—received funding from The Planetary Society's Shoemaker Near Earth Object (NEO) Grant program, which helps amateur astronomers find, track and characterize near-Earth asteroids.

October 11, 2017 05:15 PM

Emily Lakdawalla - The Planetary Society Blog

American R&D Policy and the Push for Small Planetary Missions at NASA
Planetary Society Policy Adviser Jason Callahan summarizes the paper he presented at the 2017 International Astronautical Congress in Australia, where he examined NASA's low-cost Discovery program and how federal policies directed at higher education initially bolstered planetary science into a viable field.

October 11, 2017 04:00 PM

ZapperZ - Physics and Physicists

Electron Is Still A Point Particle
There have been experiments to measure the electric dipole moment of the electron, which, if found, would indicate that (i) the electron has an internal structure and (ii) consequently it isn't the point particle we have been assuming within QED. So far, none of the experiments has found one, and each new measurement improves on the precision of the previous one.

Chalk this one up as following the same trend [1]. This time, they measured the electron dipole moment with a different technique, using trapped molecular ions. The result is an even more precise measurement, lowering the upper bound on the dipole moment compared with the previous result.

Electron is still a spherical cow!


[1] W. B. Cairncross et al., Phys. Rev. Lett. 119, 153001 (2017).

by ZapperZ at October 11, 2017 12:36 PM

Clifford V. Johnson - Asymptotia

In Flight…

Over on Instagram - follow me there @asymptotia for lots of activity - I showed a couple of montages of sketches I did on the flight out to New Orleans and on the return portion. It is a bit of practice I like to do, which I have not done in a long time. It is part of the practice I did a lot when I was gearing up to do final drawing on the book:

[...]

The post In Flight… appeared first on Asymptotia.

by Clifford at October 11, 2017 05:33 AM

October 10, 2017

Emily Lakdawalla - The Planetary Society Blog

Cassini’s Last Dance With Saturn: The Farewell Mosaic
Amateur image processor Ian Regan shares the story of processing Cassini's final images of the ringed planet.

October 10, 2017 12:00 PM

CERN Bulletin

Thank you, Philippe, and all the best for the journey ahead!

Philippe Trilhe, our friend and colleague in the Staff Association, will be going on leave before embarking on retirement. The Staff Association expresses a heartfelt thank you to Philippe for his unwavering commitment of 27 years!

Let us begin with a few important dates that have marked his journey within the Staff Association:

  • 1986: Philippe arrives at CERN and joins the Staff Association. Let us recall that, at the time, joining the Staff Association was automatic and obligatory for all CERN staff members.
  • 1994: Philippe stands in the elections and begins his first term as a staff delegate. Over time, Philippe discovers the different internal commissions of the Staff Association, including the Individual Cases Commission and the Commission on Legal Matters, as well as the official bodies of CERN, such as the Restaurant Supervisory Committee (CSR). According to Philippe, “It is a good thing to take part in several commissions and discover the work that they do… This way, everyone can find their place within the Staff Association”.
  • 1996: Philippe is invited to take part in the Executive Committee (CE) of the Staff Association.
  • 1998: Philippe joins the Steering Committee of the Nursery School as Treasurer. He held this position for a year, and was then appointed President of the Committee, a position he held until 30 September 2017.

Presidency of the EVEE Steering Committee

The Children’s Day-Care Centre and School (EVEE) of the CERN Staff Association, often called the Nursery School, is a private structure under Swiss law intended for children from 4 months to 6 years old.

For 18 years, Philippe was responsible for the operation of the CERN Staff Association’s Nursery School, running it like a consummate business manager.

Supported by the employer, the Staff Association, Philippe contributed significantly to the development of the Nursery School by increasing the services offered to families in response to their expectations. This led to the creation of:

  • a canteen allowing children to eat on site at the structure in 2008;
  • a crèche for children from 4 months old in 2013;
  • a month-long summer camp (every year in July) in 2015.

As the manager of the structure, Philippe wanted to extend the concertation process, as applied at CERN, to the Steering Committee, in the framework of discussions between employees and the employer. That is why this Concertation Committee exists today.

Concertation is important for us, delegates of the Association, and I am proud to say that in the past 18 years, the Children’s Day-Care Centre and School has never been shaken by industrial action.

Moreover, inspired by the collective agreements in force in the Canton, Philippe always strived to ensure that the prevailing financial and social conditions in Geneva be applied to the structure.

And yet, even though the management duties were exciting, it has not always been an easy task!

Indeed: “There are certain decisions that employers must take that are not easy. Since I had both my role as a CERN staff delegate and the management position to which the employer appointed me, I was often torn between the wellbeing of the personnel and my role. It can easily feel like schizophrenia. Thankfully, there have been great moments, too… The best memory I have from my term is from the inauguration of the crèche in 2013, where we received warm words from the then Director-General, Rolf Heuer, and the President of the Council, Professor Agnieszka Zalewska.”

When asked about his commitment to the Staff Association and the reasons why he became a delegate, Philippe told us: “It is first and foremost a personal endeavour. If you want to be a delegate, you already have inspiration within you, something deeply altruistic. It is advisable to stay a long time to know the issues thoroughly. Go ahead, if you are fully confident but do not expect anything in return. Then again, you might just find personal fulfilment, professional networks, and friendship with all delegates.”

The Staff Association would like to warmly thank Philippe for his commitment through all these years, for the EVEE structure and as a representative of the CERN personnel.

Philippe left CERN at the end of September, and thus his career at CERN draws to a close.

We wish him all the best and every success in his new life!

October 10, 2017 11:10 AM

CERN Bulletin

Thank you for attending our public meetings on 5 and 6 October!

Many thanks to all of you, who accepted the Staff Association’s invitation and participated in our Public Meetings on Thursday 5 and Friday 6 October.

It is important that the personnel as a whole stays informed and interacts with their representatives at the Staff Association on issues that affect us all.

The presentation slides, in English, and the recordings of the meetings, in English and in French, are now available in Indico:

Keep informed! Consult your colleagues, and share and discuss with them the information provided by the Staff Association on topics such as: the CERN Health Insurance Scheme, the Pension Fund, the first MERIT exercise, promotions, the Kindergarten (EVEE), the Elections to the Staff Council.

We would like to remind you that the Staff Association is the statutory body for the collective representation of all CERN personnel, whether they are Staff Members, Fellows, Associates, Users, etc.

October 10, 2017 11:10 AM

October 09, 2017

Alexey Petrov - Symmetry factor

Non-linear teaching

I wanted to share some ideas about a teaching method I am trying to develop and implement this semester. Please let me know if you’ve heard of someone doing something similar.

This semester I am teaching our undergraduate mechanics class. This is the first time I am teaching it, so I started looking into the possibility of shaking things up and maybe applying some new method of teaching. And there are plenty on offer: flipped classroom, peer instruction, Just-in-Time teaching, etc. They all aim to “move away from the inefficient old model” where the professor lectures and students take notes. I have things to say about that, but not in this post. It suffices to say that most of those approaches essentially try to make students work (both with the lecturer and their peers) in class and outside of it. At the same time, those methods attempt to “compartmentalize teaching”, i.e. make large classes “smaller” by bringing up each individual student’s contribution to class activities (by using “clickers”, small discussion groups, etc.). For several reasons those approaches did not fit my goal this semester.

Our Classical Mechanics class is a gateway class for our physics majors. It is the first class they take after they are done with general physics lectures. So the students are already familiar with the (simpler version of the) material they are going to be taught. The goal of this class is to start molding physicists out of students: they learn to simplify problems so physics methods can be properly applied (that’s how “a Ford Mustang improperly parked at the top of the icy hill slides down…” turns into “a block slides down the incline…”), learn to always derive the final formula before plugging in numbers, look at the asymptotics of their solutions as a way to see if the solution makes sense, and many other wonderful things.

So with all that I started doing something I’d like to call non-linear teaching. The gist of it is as follows. I give a lecture (and don’t get me wrong, I do make my students talk and work: I ask questions, we do “duels” (students argue different sides of a question), etc — all of that can be done efficiently in a class of 20 students). But instead of one homework with 3-4 problems per week I have two types of homework assignments for them: short homeworks and projects.

Short homework assignments are single-problem assignments given after each class that must be done by the next class. They are designed such that a student needs to re-derive material we discussed previously in class, with a small new twist added. For example, in the block-down-the-incline problem discussed in class, I ask them to choose the coordinate axes in a different way and prove that the result is independent of the choice of coordinate system. Or I ask them to find the angle at which one should throw a stone to get the maximal possible range (including air resistance), etc. This way, instead of doing an assignment at the last minute at the end of the week, students have to work out what they just learned in class every day! More importantly, I get to change how I teach. Depending on how they did on the previous short homework, I adjust the material (both speed and volume) discussed in class. I also design examples for future sections in such a way that I can revisit parts of a topic that students found hard previously. Hence, instead of a linear progression through the course, we move along something akin to helical motion, returning to and spending more time on topics that students find more difficult. That's why my teaching is "non-linear".
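As a sketch of the air-resistance twist mentioned above (illustrative only: the launch speed `v0` and drag coefficient `k` are made-up values, not numbers from the course), a few lines of numerical integration show that quadratic drag pushes the optimal launch angle below the textbook 45°:

```python
import math

def flight_range(angle_deg, v0=20.0, k=0.1, g=9.81, dt=1e-4):
    """Horizontal range of a projectile with quadratic air drag,
    a = (0, -g) - k * |v| * v, integrated with a simple Euler scheme
    until the projectile returns to launch height."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0 or vy > 0.0:
        speed = math.hypot(vx, vy)
        ax, ay = -k * speed * vx, -g - k * speed * vy
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
    return x

# Scan integer launch angles; with k = 0 the optimum is the textbook
# 45 degrees, while any positive drag shifts it to a lower angle.
best_angle = max(range(1, 90), key=flight_range)
print(best_angle)
```

For `k = 0` the scan recovers 45° and the drag-free range \(v_0^2 \sin 2\theta / g\), which is a useful sanity check before trusting the result with drag turned on.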

Project homework assignments are designed to develop understanding of how topics in a given chapter relate to each other. There are as many project assignments as there are chapters. Students get two weeks to complete them.

Overall, students solve exactly the same number of problems they would in a normal lecture class. Yet, those problems are scheduled in a different way. Under my scheme, students are forced to learn by constantly re-working what was just discussed in lecture. And for me, I can quickly react (by adjusting lecture material and speed) using the constant feedback I get from students in the form of short homeworks. Win-win!

I will do benchmarking at the end of the class by comparing my class performance to aggregate data from previous years. I’ll report on it later. But for now I would be interested to hear your comments!


by apetrov at October 09, 2017 09:45 PM

CERN Bulletin

Staff Association membership is free of charge for the rest of 2017

Starting from September 1st, membership of the Staff Association is free for all new members for the period up to the end of 2017.

This is to allow you to participate in the Staff Council elections.

Indeed, only Employed Members of the Personnel (MPE: staff and fellows) and Associated Members of the Personnel (MPA), who are members of the Staff Association, can:

  • stand for election and become a delegate of the personnel;
  • vote and elect their representatives to the Staff Council.

Do not hesitate any longer; join now!

October 09, 2017 03:10 PM

CERN Bulletin

Cine club

Wednesday 11 October 2017 at 20:00
CERN Council Chamber


Cabaret

Directed by Bob Fosse
USA, 1972, 124 minutes

In 1930s Berlin, American Sally Bowles works as a singer in the Kit Kat Club. At her rooming house she meets Englishman Brian Roberts, who has come to Berlin to improve his German. He hopes to pay his expenses by giving English language lessons. Sally is unconventional, and she and Brian have a number of adventures together. The romp continues with several of their friends, including the very rich Maximilian von Heune. Life takes a sudden turn for Sally, however, and throughout it all the rise of Nazism casts a shadow over everyone.

Original version English / German; English subtitles.


Wednesday 18 October 2017 at 20:00
CERN Council Chamber

Les demoiselles de Rochefort

Directed by Jacques Demy
France, 1967, 125 minutes

Two sisters leave their small seaside town of Rochefort in search of romance. Hired as carnival singers, one falls for an American musician, while the other must search for her ideal partner.

Original version French / English; English subtitles.


Special event - Ciné Concert - on 17 October !

October 09, 2017 03:10 PM

CERN Bulletin

CERN Scuba Diving Club

Interested in scuba diving? Fancy a fun trial dive?

Like every year, the CERN Scuba Diving Club is organizing two free trial dive sessions.

Where? Varembé Swimming Pool, Avenue Giuseppe Motta 46, 1202 Genève

When? 25th October and 1st November at 19:15 (one session per participant)

Price? Trial dives are FREE! Swimming pool entrance: 5.40 CHF.

What to bring? Swimwear, towel, shower necessities and a padlock – diving equipment will be provided by the CSC.

For more information and to subscribe, follow the link below:

Looking forward to meeting you!

October 09, 2017 03:10 PM

Emily Lakdawalla - The Planetary Society Blog

Meet VOX, a proposed mission to uncover the secrets of Venus
Van Kane brings us newly released details of the Venus Origins eXplorer (VOX), one of NASA's 12 New Frontiers mission proposals.

October 09, 2017 11:00 AM

October 07, 2017

Lubos Motl - string vacua and pheno

Five homicides by Ethan Siegel
Ethan Siegel is a trained astrophysicist who writes some popular pieces on science, currently for Forbes.

Many of his texts about sufficiently elementary physics are excellent – or at least very good high school term papers. However, he sometimes writes about state-of-the-art fundamental or particle physics, and all these texts are complete garbage. Every expert must see that Siegel isn't one of them; he just doesn't understand the basic things, and his knowledge doesn't exceed that of an average layman who has read several popular books on physics.

It's too bad that over 99% of his readers are totally incapable of figuring out that they're served complete junk and the self-confident tone with which Siegel writes about these matters that are way outside his expertise is a part of his scam.

That's also the case of his new essay
Five Brilliant Ideas For New Physics That Need To Die, Already.
What he doesn't appreciate is that in science, brilliant ideas and theories may only die when they're replaced with more brilliant ones or, ideally, when they're actually falsified experimentally. None of the five victims of his murders are "quite" falsified as of today although this claim is more obvious for some of them than for others. Siegel has described his planned murder of (or the global ban on)
  1. Proton decay
  2. Modified gravity
  3. Supersymmetry
  4. Technicolor
  5. WIMP dark matter
Siegel basically wants to murder almost all of physics.

First, proton decay. Siegel says that the protons don't decay because we haven't seen a proton decay yet. Holy cow. In reality, the absence of evidence doesn't mean the evidence of absence. I think that even the people without a PhD in particle physics should be capable of understanding the previous sentence. We have only experimentally observed that the proton lifetime exceeds \(10^{34}\) years or so.

However, sensible typical models give some \(10^{34}\)-\(10^{36}\) years now, with natural enough "record breakers" giving \(10^{36}\) years among the non-supersymmetric models and \(10^{39}\) years among the supersymmetric ones. So the proton decay is obviously alive and kicking. One may discuss whether orders of magnitude should be added to the price of the huge experiments that are trying to find a decaying proton but to say that we have proven that the protons don't decay is just a complete lie or an embarrassing stupidity, depending on whether the speaker realizes that it's bullšit or not.
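To make the scale of such experiments concrete, here is a back-of-the-envelope sketch. The 50-kiloton water detector mass (roughly Super-Kamiokande-like) is an illustrative assumption, not a figure from the text; the lifetime is taken at the quoted bound:

```python
# Back-of-the-envelope: why proton-decay searches need kiloton-scale detectors.
# Assumptions (illustrative): a 50-kiloton water detector and a proton
# lifetime right at the quoted bound of ~1e34 years.

AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0      # grams per mole of H2O
PROTONS_PER_MOLECULE = 10      # 2 hydrogen protons + 8 protons bound in oxygen

detector_mass_g = 50e3 * 1e6   # 50 kilotons expressed in grams
lifetime_years = 1e34

n_protons = (detector_mass_g / WATER_MOLAR_MASS_G) * AVOGADRO * PROTONS_PER_MOLECULE
decays_per_year = n_protons / lifetime_years  # valid since t << lifetime

print(f"protons monitored:    {n_protons:.2e}")
print(f"expected decays/year: {decays_per_year:.2f}")
```

Even with some \(10^{34}\) protons under watch, a lifetime at the current bound yields only a couple of candidate decays per year, which is why improving the bound by an order of magnitude requires decades of running or a much larger detector.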

If one goes beyond the phenomenological model building and uses a really state-of-the-art top-down perspective that includes quantum gravity and string theory, it seems much more likely that the proton has to be unstable. Why? Because a stable proton would mean that the baryon number is strictly conserved. However, it would mean – via Noether's theorem – that the corresponding \(U(1)_B\) symmetry is an exact symmetry in Nature. Now, in quantum gravity, all symmetries are local, so there had better be an unbroken \(U(1)_B\) gauge symmetry analogous to the electromagnetic symmetry.

If that existed with a natural value of the gauge coupling constant, it would be easy to see it. We obviously don't see it – a new type of a Coulomb-like force etc. So you could say that the coupling constant of this gauge symmetry could be tiny. But tiny gauge couplings are "mostly banned" as well by the weak gravity conjecture which has gained credibility since we announced it more than a decade ago. In particular, when a star collapses to a black hole which evaporates, the baryon number that was positive to start with almost certainly drops to zero because black holes mostly emit photons and gravitons, not protons. So it just looks unlikely that the proton may be perfectly stable. These arguments don't imply that an experiment is guaranteed to see a decaying proton by 2020 or any particular moment. But the arguments surely do imply that Siegel speaks like an utterly uninformed layman if he concludes that we know that the proton is stable.

Second, modified gravity. He says that every known variant of modified gravity disagrees with something we observe. I sort of agree with this appraisal. But that doesn't quite mean that there's no conceivable version of modified gravity that will be written down in the future and that agrees with everything. In recent years, there has been a mixed inflow of new arguments and evidence. Some of them have weakened the case for modified gravity, some have strengthened it. Concerning the latter, I could mention the hype about Erik Verlinde's models. I think that this hype is unjustified but it's true that there exist possible scenarios within schemes that are natural in top-down physics that could justify some sort of MOND or derive MOND from holography, as I argued many years before Erik Verlinde.

To say the least, modified gravity remains a major "frontrunner" among the most justifiable and smooth theories of dark matter – or theories that explain the same phenomena that are sometimes attributed to dark matter. To say that all people must stop thinking about modified gravity is just crazy.

Third, supersymmetry. Siegel would like to euthanize supersymmetry against its will as well. As I mentioned, the usual name describing the euthanasia against the victim's will is a murder or homicide.

He says that it's been proven that SUSY cannot explain the hierarchy problem i.e. the lightness of the Higgs boson relatively to the Planck scale. There are at least two problems with this statement. First, it's untrue. Second, even if it were true, it just wouldn't "kill" all the reasons why sensible people believe that supersymmetry must exist in Nature.

Concerning the current state of supersymmetry and naturalness, we have surely found from the LHC that the incorporation of supersymmetry in Nature, if it's exploited by Mother Nature at all, is less natural than a median supersymmetry researcher assumed a decade ago. However, it's simply not true that supersymmetry is no longer compatible with naturalness. Naturalness is fuzzy, somewhat dependent on personal preferences, subjective degree of tolerance etc. But most papers that study this seriously simply don't conclude that natural supersymmetry has been ruled out.

There's a lot of viable models with various superpartner masses below \(1\TeV\). On top of that, it's simply not true that all or most superpartner masses need to be this light for naturalness. Two random papers. This April 2016 paper by Baer and three co-authors estimates that one needs to collect 3,000 inverse femtobarns of data at the LHC, about 30 times more than what has been collected, to decide about the fate of natural SUSY. So they obviously concluded that the viability of natural SUSY will remain an open question for a long time.

A random similar paper – there are really many papers like that and they represent a bulk of recent literature about SUSY and naturalness: Dutta and Mimura, August 2016. They argue that MSSM with the electroweak supersymmetry breaking is still alive and may be alive even if \(\mu\), the Higgsino mass, is large. So the LHC has simply not excluded natural supersymmetry, let alone supersymmetry as a whole.

On top of that, nothing has changed about the other reasons to find SUSY attractive – its almost necessity in superstringy models of the Universe; gauge coupling unification; WIMP dark matter candidate, and others.

The LHC has shown that the range of validity of the Standard Model is way wider than a median particle physicist expected a decade ago (it hasn't deviated from my expectations much so far because I was always skeptical about claims that new physics had to be around the corner – it always looked like phenomenologists' bias to me, bias based on their desire to get the Nobel prize as soon as possible). But some physics is still to be expected at some moment of the future and supersymmetry remains a top candidate for the big idea that will be tested – and a big idea that fundamental physicists will treat very seriously even if the evidence will continue to be absent.

To try to "kill" supersymmetry is just another proof of someone's complete incompetence as a particle physicist. It's still equally important in theoretical studies; and even in phenomenology, it still has about the same fraction of the interest, and for good reasons.

Fourth, technicolor. That's the victim I am least eager to defend because technicolor looked badly motivated for a very long time. But to see it more positively, it has remained one of the phenomenological ideas that the model builders are naturally reviving in one way or another – one of the top ten ideas that are "not supersymmetry". Even if the justification for this idea is weak and it has never been really strong, you simply cannot "kill" or "ban" ideas in the top ten just because you love killing at a random moment for no good reason, as Siegel does.

Fifth, WIMP dark matter. It's incredible in combination with Siegel's proposed murder of MOND. If he wants to ban physicists' research of both WIMP and MOND, what is really his explanation of dark matter? Dark matter may be explained by some MOND or composed of WIMPs, SIMPs, MACHOs, planetary-size black holes, axions, or two or three other major "types of particles". But one of these explanations has to be right. Or a different one, but one can't switch to a new explanation before a sensible new one is actually proposed.

According to numerous people, WIMP and MOND could still be the top 2 available explanations for these phenomena at this moment – and Siegel would like to "kill" i.e. ban both of them? What is his alternative? No dark matter particle has been found directly even though it could have happened a priori. But some explanation has to be right and the probabilities of either answer haven't changed much by the null results of the experiments. In particular, WIMP remains an important plausible paradigm that researchers simply cannot ban because they don't really have viable replacements that would be in a demonstrably better shape.

His desire to make these important theories "die" without a falsification reflects his incompetence in fundamental physics and perhaps his unstoppable desire to murder for no good reason. Maybe Ethan Siegel needs to find a good psychiatrist.

by Luboš Motl ( at October 07, 2017 11:23 AM

October 06, 2017

John Baez - Azimuth

Vladimir Voevodsky, 1966 — 2017

Vladimir Voevodsky died last week. He won the Fields Medal in 2002 for proving the Milnor conjecture in a branch of algebra known as algebraic K-theory. He continued to work on this subject until he helped prove the more general Bloch–Kato conjecture in 2010.

Proving these results—which are too technical to easily describe to nonmathematicians!—required him to develop a dream of Grothendieck: the theory of motives. Very roughly, this is a way of taking the space of solutions of some polynomial equations and chopping it apart into building blocks. But this process of ‘chopping’ and also these building blocks, called ‘motives’, are very abstract—nothing easy to visualize.

It’s a bit like how a proton is made of quarks. You never actually see a quark in isolation, so you have to think very hard to realize they are there at all. But once you know this, a lot of things become clear.

This is wonderful, profound mathematics. But in the process of proving the Bloch-Kato conjecture, Voevodsky became tired of this stuff. He wanted to do something more useful… and more ambitious. He later said:

It was very difficult. In fact, it was 10 years of technical work on a topic that did not interest me during the last 5 of these 10 years. Everything was done only through willpower.

Since the autumn of 1997, I already understood that my main contribution to the theory of motives and motivic cohomology was made. Since that time I have been very consciously and actively searching. I was looking for a topic that I would deal with after I fulfilled my obligations related to the Bloch-Kato hypothesis.

I quickly realized that if I wanted to do something really serious, then I should make the most of my accumulated knowledge and skills in mathematics. On the other hand, seeing the trends in the development of mathematics as a science, I realized that the time is coming when the proof of yet another conjecture won’t have much of an effect. I realized that mathematics is on the verge of a crisis, or rather, two crises.

The first is connected with the separation of “pure” and applied mathematics. It is clear that sooner or later there will be a question about why society should pay money to people who are engaged in things that do not have any practical applications.

The second, less obvious, is connected with the complication of pure mathematics, which leads to the fact that, sooner or later, the articles will become too complicated for detailed verification and the process of accumulating undetected errors will begin. And since mathematics is a very deep science, in the sense that the results of one article usually depend on the results of many and many previous articles, this accumulation of errors for mathematics is very dangerous.

So, I decided, you need to try to do something that will help prevent these crises. For the first crisis, this meant that it was necessary to find an applied problem that required for its solution the methods of pure mathematics developed in recent years or even decades.

He looked for such a problem. He studied biology and found an interesting candidate. He worked on it very hard, but then decided he’d gone down a wrong path:

Since childhood I have been interested in natural sciences (physics, chemistry, biology), as well as in the theory of computer languages, and since 1997, I have read a lot on these topics, and even took several student and post-graduate courses. In fact, I “updated” and, to a very large extent, deepened the knowledge I already had. All this time I was looking for open problems that I recognized would be of interest to me and to which I could apply modern mathematics.

As a result, I chose, incorrectly as I now understand, the problem of recovering the history of populations from their modern genetic composition. I took on this task for a total of about two years, and in the end, already by 2009, I realized that what I was inventing was useless. In my life, so far, it was perhaps the greatest scientific failure. A lot of work was invested in the project, which completely failed. Of course, there was some benefit: I learned a lot of probability theory, which I had known badly, and also learned a lot about demography and demographic history.

But he bounced back! He came up with a new approach to the foundations of mathematics, and helped organize a team at the Institute for Advanced Study in Princeton to develop it further. This approach is now called homotopy type theory or univalent foundations. It’s fundamentally different from set theory. It treats the fundamental concept of equality in a brand new way! And it’s designed to be done with the help of computers.
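For a taste of "equality treated in a new way", here is a minimal Lean 4 sketch. It is illustrative only: Lean's built-in `=` lives in proof-irrelevant `Prop`, unlike the identity types of homotopy type theory, where different proofs of equality can be genuinely different "paths".

```lean
-- In type theory, "a = b" is itself a type, and a proof h : a = b is an
-- inhabitant of it.  Two standard moves, both shadows of homotopical ideas:

-- Transport: a path a = b lets us carry a proof of P a over to P b.
def transportAlong {α : Type} (P : α → Prop) {a b : α}
    (h : a = b) (pa : P a) : P b :=
  h ▸ pa

-- Congruence: a function f maps a path a = b to a path f a = f b,
-- just as a continuous map sends paths between points to paths.
example {α β : Type} (f : α → β) {a b : α} (h : a = b) : f a = f b :=
  congrArg f h
```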

It seems he started down this new road when the mathematician Carlos Simpson pointed out a serious mistake in a paper he’d written.

I think it was at this moment that I largely stopped doing what is called “curiosity-driven research” and started to think seriously about the future. I didn’t have the tools to explore the areas where curiosity was leading me and the areas that I considered to be of value and of interest and of beauty.

So I started to look into what I could do to create such tools. And it soon became clear that the only long-term solution was somehow to make it possible for me to use computers to verify my abstract, logical, and mathematical constructions. The software for doing this has been in development since the sixties. At the time, when I started to look for a practical proof assistant around 2000, I could not find any. There were several groups developing such systems, but none of them was in any way appropriate for the kind of mathematics for which I needed a system.

When I first started to explore the possibility, computer proof verification was almost a forbidden subject among mathematicians. A conversation about the need for computer proof assistants would invariably drift to Gödel’s incompleteness theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was. Among the very few mathematicians who persisted in trying to advance the field of computer verification in mathematics during this time were Tom Hales and Carlos Simpson. Today, only a few years later, computer verification of proofs and of mathematical reasoning in general looks completely practical to many people who work on univalent foundations and homotopy type theory.

The primary challenge that needed to be addressed was that the foundations of mathematics were unprepared for the requirements of the task. Formulating mathematical reasoning in a language precise enough for a computer to follow meant using a foundational system of mathematics not as a standard of consistency to establish a few fundamental theorems, but as a tool that can be employed in everyday mathematical work. There were two main problems with the existing foundational systems, which made them inadequate. Firstly, existing foundations of mathematics were based on the languages of predicate logic and languages of this class are too limited. Secondly, existing foundations could not be used to directly express statements about such objects as, for example, the ones in my work on 2-theories.

Still, it is extremely difficult to accept that mathematics is in need of a completely new foundation. Even many of the people who are directly connected with the advances in homotopy type theory are struggling with this idea. There is a good reason: the existing foundations of mathematics—ZFC and category theory—have been very successful. Overcoming the appeal of category theory as a candidate for new foundations of mathematics was for me personally the most challenging.

Homotopy type theory is now a vital and exciting area of mathematics. It’s far from done, and to make it live up to Voevodsky’s dreams will require brand new ideas—not just incremental improvements, but actual sparks of genius. For some of the open problems, see Mike Shulman’s comment on the n-Category Café, and some replies to that.

I only met him a few times, but as far as I can tell Voevodsky was a completely unpretentious person. You can see that in the picture here.

He was also a very complex person. For example, you might not guess that he took great wildlife photos:

You also might not guess at this side of him:

In 2006-2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years, perhaps, can be compared most closely to what happened to Carl Jung in 1913-14. Jung called it “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007 when I spent 9 days (7 of them in the Mormon capital of Salt Lake City), never falling asleep for all these days.

Almost from the very beginning, I found that many of these phenomena (voices, visions, various sensory hallucinations), I could control. So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditorial, visual and then tactile spaces that appeared around me (by themselves or by invoking them). I must say, probably, to avoid possible speculations on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.

Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.

Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.

I managed to get out of this state by forcing myself to do mathematics again. By the middle of spring 2008 I could already function more or less normally, and I even went to Salt Lake City to look at the places where I had wandered, not knowing where I was, in the spring of 2007.

In short, he was a genius akin to Cantor or Grothendieck, at times teetering on the brink of sanity, yet gripped by an immense desire for beauty and clarity, engaging in struggles that gripped his whole soul. From the fires of this volcano, truly original ideas emerge.

This last quote, and the first few quotes, are from some interviews in Russian, done by Roman Mikhailov, which Mike Stay pointed out to me. I used Google Translate and polished the results a bit:

Интервью Владимира Воеводского (часть 1), 1 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 1).

Интервью Владимира Воеводского (часть 2), 5 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 2).

The quote about the origins of ‘univalent foundations’ comes from his nice essay here:

• Vladimir Voevodsky, The origins and motivations of univalent foundations, 2014.

There’s also a good obituary of Voevodsky explaining its relation to Grothendieck’s idea in simple terms:

• Institute for Advanced Studies, Vladimir Voevodsky 1966–2017, 4 October 2017.

The photograph of Voevodsky is from Andrej Bauer’s website:

• Andrej Bauer, Photos of mathematicians.

To learn homotopy type theory, try this great book:

Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study.

by John Baez at October 06, 2017 06:41 PM

October 05, 2017

Symmetrybreaking - Fermilab/SLAC

A radio for dark matter

Instead of searching for dark matter particles, a new device will search for dark matter waves.

Researchers are testing a prototype “radio” that could let them listen to the tune of mysterious dark matter particles. 

Dark matter is an invisible substance thought to be five times more prevalent in the universe than regular matter. According to theory, billions of dark matter particles pass through the Earth each second. We don’t notice them because they interact with regular matter only very weakly, through gravity.

So far, researchers have mostly been looking for dark matter particles. But with the dark matter radio, they want to look for dark matter waves.

Direct detection experiments for dark matter particles use large underground detectors. Researchers hope to see signals from dark matter particles colliding with the detector material. However, this only works if dark matter particles are heavy enough to deposit a detectable amount of energy in the collision.

“If dark matter particles were very light, we might have a better chance of detecting them as waves rather than particles,” says Peter Graham, a theoretical physicist at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “Our device will take the search in that direction.”

The dark matter radio makes use of a bizarre concept of quantum mechanics known as wave-particle duality: Every particle can also behave like a wave. 

Take, for example, the photon: the massless fundamental particle that carries the electromagnetic force. Streams of them make up electromagnetic radiation, or light, which we typically describe as waves—including radio waves. 

The dark matter radio will search for dark matter waves associated with two particular dark matter candidates.  It could find hidden photons—hypothetical cousins of photons with a small mass. Or it could find axions, which scientists think can be produced out of light and transform back into it in the presence of a magnetic field.

“The search for hidden photons will be completely unexplored territory,” says Saptarshi Chaudhuri, a Stanford graduate student on the project. “As for axions, the dark matter radio will close gaps in the searches of existing experiments.”

Intercepting dark matter vibes

A regular radio intercepts radio waves with an antenna and converts them into sound. What you hear depends on the station. A listener chooses a station by adjusting an electric circuit, in which electricity can oscillate with a certain resonant frequency. If the circuit’s resonant frequency matches the station’s frequency, the radio is tuned in and the listener can hear the broadcast.

The dark matter radio works the same way. At its heart is an electric circuit with an adjustable resonant frequency. If the device were tuned to a frequency that matched the frequency of a dark matter particle wave, the circuit would resonate. Scientists could measure the frequency of the resonance, which would reveal the mass of the dark matter particle. 

The idea is to do a frequency sweep by slowly moving through the different frequencies, as if tuning a radio from one end of the dial to the other.
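
The tuning described above works like any resonant circuit. Here is a minimal sketch, assuming an idealized LC resonator with a high quality factor; the actual circuit parameters of the dark matter radio are not given in the article, so the component values below are purely illustrative:

```python
import math

def resonant_frequency(inductance, capacitance):
    """Resonant frequency of an ideal LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

def response(f_drive, f_res, q_factor=1e4):
    """Normalized amplitude of a driven resonator with quality factor Q."""
    detuning = f_drive / f_res - f_res / f_drive
    return 1.0 / math.sqrt(1.0 + (q_factor * detuning) ** 2)

# Sweep a narrow band and find where the circuit "rings up".
f_res = resonant_frequency(1e-3, 1e-9)  # 1 mH and 1 nF give roughly 159 kHz
freqs = [f_res * (0.99 + 0.0001 * i) for i in range(200)]
best = max(freqs, key=lambda f: response(f, f_res))
```

A full search would step the circuit through many such bands, dwelling at each setting long enough to notice a persistent resonance.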

The electric signal from dark matter waves is expected to be very weak. Therefore, Graham has partnered with a team led by another KIPAC researcher, Kent Irwin. Irwin’s group is developing highly sensitive magnetometers known as superconducting quantum interference devices, or SQUIDs, which they’ll pair with extremely low-noise amplifiers to hunt for potential signals.

In its final design, the dark matter radio will search for particles in a mass range of trillionths to millionths of an electronvolt. (One electronvolt is about a billionth of the mass of a proton.) This is somewhat problematic because this range includes kilohertz to gigahertz frequencies—frequencies used for over-the-air broadcasting. 
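
The link between a particle mass and a wave frequency is the relation f = mc²/h. A quick back-of-the-envelope conversion (the experiment's exact band edges are the article's numbers, not derived here):

```python
PLANCK_EV_S = 4.135667696e-15  # Planck constant h in eV*s

def mass_to_frequency(mass_ev):
    """Compton frequency f = m*c^2 / h for a particle mass expressed in eV."""
    return mass_ev / PLANCK_EV_S

low = mass_to_frequency(1e-12)   # trillionths of an eV -> hundreds of Hz
high = mass_to_frequency(1e-6)   # millionths of an eV  -> hundreds of MHz
```

The endpoints land near the broadcast band at the order-of-magnitude level, which is why radio-frequency shielding matters for this search.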

“Shielding the radio from unwanted radiation is very important and also quite challenging,” Irwin says. “In fact, we would need a several-yards-thick layer of copper to do so. Fortunately we can achieve the same effect with a thin layer of superconducting metal.”

One advantage of the dark matter radio is that it does not need to be shielded from cosmic rays. Whereas direct detection searches for dark matter particles must operate deep underground to block out particles falling from space, the dark matter radio can operate in a university basement.

The researchers are now testing a small-scale prototype at Stanford that will scan a relatively narrow frequency range. They plan on eventually operating two independent, full-size instruments at Stanford and SLAC.

“This is exciting new science,” says Arran Phipps, a KIPAC postdoc on the project. “It’s great that we get to try out a new detection concept with a device that is relatively low-budget and low-risk.” 

The dark matter disc jockeys are taking the first steps now and plan to conduct their dark matter searches over the next few years. Stay tuned for future results.

by Manuel Gnida at October 05, 2017 01:23 PM

John Baez - Azimuth

Azimuth Backup Project (Part 5)

I haven’t spoken much about the Azimuth Climate Data Backup Project, but it’s going well, and I’ll be speaking about it soon, here:

International Open Access Week, Wednesday 25 October 2017, 9:30–11:00 a.m., University of California, Riverside, Orbach Science Library, Room 240.

“Open in Order to Save Data for Future Research” is the 2017 event theme.

Open Access Week is an opportunity for the academic and research community to learn about the potential benefits of sharing what they’ve learned with colleagues, and to help inspire wider participation in helping to make “open access” a new norm in scholarship, research and data planning and preservation.

The Open Access movement is made of up advocates (librarians, publishers, university repositories, etc.) who promote the free, immediate, and online publication of research.

The program will provide information on issues related to saving open data, including climate change and scientific data. The panelists also will describe open access projects in which they have participated to save climate data and to preserve end-of-term presidential data, information likely to be utilized by the university community for research and scholarship.

The program includes:

• Brianna Marshall, Director of Research Services, UCR Library: Brianna welcomes guests and introduces panelists.

• John Baez, Professor of Mathematics, UCR: John will describe his activities to save US government climate data through his collaborative effort, the Azimuth Climate Data Backup Project. All of the saved data is now open access for everyone to utilize for research and scholarship.

• Perry Willett, Digital Preservation Projects Manager, California Digital Library: Perry will discuss the open data initiatives in which CDL participates, including the end-of-term presidential web archiving that is done in partnership with the Library of Congress, Internet Archive and University of North Texas.

• Kat Koziar, Data Librarian, UCR Library: Kat will give an overview of DASH, the UC system data repository, and provide suggestions for researchers interested in making their data open.

This will be the eighth International Open Access Week program hosted by the UCR Library.

The event is free and open to the public. Light refreshments will be served.

by John Baez at October 05, 2017 06:02 AM

Clifford V. Johnson - Asymptotia

Kitchen Capers…

It's time for a visit to the kitchen. This time the result was a spontaneous coconut tart, made from childhood memories... (and a bit of pastry). Here are two composite photos and their captions (from my instagram account) to tell the story:

[...] Click to continue reading this post

The post Kitchen Capers… appeared first on Asymptotia.

by Clifford at October 05, 2017 05:09 AM

October 03, 2017

Clifford V. Johnson - Asymptotia

Lasers and Gravitational Waves

Today’s Nobel Prize in physics has an interesting wrinkle to it. I summarised it in the extract above from a certain forthcoming book*. Click for a larger view. Congratulations to the winners Rainer Weiss, Barry C Barish and Kip S Thorne! There are some excellent descriptions (either for layperson level … Click to continue reading this post

The post Lasers and Gravitational Waves appeared first on Asymptotia.

by Clifford at October 03, 2017 08:19 PM

ZapperZ - Physics and Physicists

2017 Physics Nobel Prize Goes To Gravitational Wave Discovery
To say that this is a no-brainer and no surprise is an understatement.

The 2017 Nobel Prize in Physics goes to 3 central figures who made LIGO possible and the eventual discovery of gravitational waves in 2015.

The Nobel Prize in Physics 2017 was divided, one half awarded to Rainer Weiss, the other half jointly to Barry C. Barish and Kip S. Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves".

Congratulations to all of them!


by ZapperZ at October 03, 2017 12:24 PM

Symmetrybreaking - Fermilab/SLAC

Nobel recognizes gravitational wave discovery

Scientists Rainer Weiss, Kip Thorne and Barry Barish won the 2017 Nobel Prize in Physics for their roles in creating the LIGO experiment.

Illustration depicting two black holes circling one another and producing gravitational waves

Three scientists who made essential contributions to the LIGO collaboration have been awarded the 2017 Nobel Prize in Physics.

Rainer Weiss will share the prize with Kip Thorne and Barry Barish for their roles in the discovery of gravitational waves, ripples in space-time predicted by Albert Einstein. Weiss and Thorne conceived of LIGO, and Barish is credited with reviving the struggling experiment and making it happen.

“I view this more as a thing that recognizes the work of about 1000 people,” Weiss said during a Q&A after the announcement this morning. “It’s really a dedicated effort that has been going on, I hate to tell you, for as long as 40 years, people trying to make a detection in the early days and then slowly but surely getting the technology together to do it.”

Another founder of LIGO, scientist Ronald Drever, died in March. Nobel Prizes are not awarded posthumously.

According to Einstein’s general theory of relativity, powerful cosmic events release energy in the form of waves traveling through the fabric of existence at the speed of light. LIGO detects these disturbances when they disrupt the symmetry between the passages of identical laser beams traveling identical distances.

The setup for the LIGO experiment looks like a giant L, with each side stretching about 2.5 miles long. Scientists split a laser beam and shine the two halves down the two sides of the L. When each half of the beam reaches the end, it reflects off a mirror and heads back to the place where its journey began.

Normally, the two halves of the beam return at the same time. When there’s a mismatch, scientists know something is going on. Gravitational waves compress space-time in one direction and stretch it in another, giving one half of the beam a shortcut and sending the other on a longer trip. LIGO is sensitive enough to notice a difference between the arms as small as one-thousandth of the diameter of an atomic nucleus.
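
That sensitivity corresponds to a famously tiny dimensionless strain h = ΔL/L. A rough order-of-magnitude check, taking a nucleus diameter of about 10⁻¹⁵ m (an illustrative figure, not a number from the article):

```python
NUCLEUS_DIAMETER_M = 1e-15  # typical atomic nucleus diameter, order of magnitude
ARM_LENGTH_M = 4000.0       # each LIGO arm is about 2.5 miles, roughly 4 km

delta_l = NUCLEUS_DIAMETER_M / 1000.0  # smallest detectable arm-length change
strain = delta_l / ARM_LENGTH_M        # dimensionless strain h = delta_L / L
```

The result, a strain of a few times 10⁻²², is the scale of space-time distortion the interferometer must resolve.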

Scientists on LIGO and their partner collaboration, called Virgo, reported the first detection of gravitational waves in February 2016. The waves were generated in the collision of two black holes with 29 and 36 times the mass of the sun 1.3 billion years ago. They reached the LIGO experiment as scientists were conducting an engineering test.

“It took us a long time, something like two months, to convince ourselves that we had seen something from outside that was truly a gravitational wave,” Weiss said.

LIGO, which stands for Laser Interferometer Gravitational-Wave Observatory, consists of two of these pieces of equipment, one located in Louisiana and another in Washington state.

The experiment is operated jointly by Weiss’s home institution, MIT, and Barish and Thorne’s home institution, Caltech. The experiment has collaborators from more than 80 institutions from more than 20 countries. A third interferometer, operated by the Virgo collaboration, recently joined LIGO to make the first joint observation of gravitational waves.

by Kathryn Jepsen at October 03, 2017 10:42 AM

October 02, 2017

The n-Category Cafe

Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Vladimir Voevodsky died this Saturday. He was 51.

I met him a couple of times, and have a few stories, but I think I’ll just quote the Institute for Advanced Studies obituary and open up the thread for memories and conversation.

The Institute for Advanced Study is deeply saddened by the passing of Vladimir Voevodsky, Professor in the School of Mathematics.

Voevodsky, a truly extraordinary and original mathematician, made many contributions to the field of mathematics, earning him numerous honors and awards, including the Fields Medal.

Celebrated for tackling the most difficult problems in abstract algebraic geometry, Voevodsky focused on the homotopy theory of schemes, algebraic K-theory, and interrelations between algebraic geometry and algebraic topology. He made one of the most outstanding advances in algebraic geometry in the past few decades by developing new cohomology theories for algebraic varieties. Among the consequences of his work are the solutions of the Milnor and Bloch-Kato Conjectures.

More recently he became interested in type-theoretic formalizations of mathematics and automated proof verification. He was working on new foundations of mathematics based on homotopy-theoretic semantics of Martin-Löf type theories. His new “Univalence Axiom” has had a dramatic impact in both mathematics and computer science.

A gathering to celebrate Voevodsky’s life and legacy is being planned and more information will be available soon.

by john at October 02, 2017 12:28 AM

October 01, 2017

Tommaso Dorigo - Scientificblogging

The Physics Of Boson Pairs
At 10:00 AM this morning, my smartphone alerted me that in two months I will have to deliver a thorough review on the physics of boson pairs - a 50 page thing which does not yet even exist in the world of ideas. So I have better start planning carefully my time in the next 60 days, to find at least two clean weeks where I may cram in the required concentration. That will be the hard part!

read more

by Tommaso Dorigo at October 01, 2017 09:59 AM

September 28, 2017

The n-Category Cafe

Applied Category Theory at UCR (Part 2)

I’m running a special session on applied category theory, and now the program is available:

This is going to be fun.

My former student Brendan Fong is now working with David Spivak at M.I.T., and they’re both coming. My collaborator John Foley at Metron is also coming: we’re working on the CASCADE project for designing networked systems.

Dmitry Vagner is coming from Duke: he wrote a paper with David and Eugene Lerman on operads and open dynamical systems. Christina Vasilakopoulou, who has worked with David and Patrick Schultz on dynamical systems, has just joined our group at UCR, so she will also be here. And the three of them have worked with Ryan Wisnesky on algebraic databases. Ryan will not be here, but his colleague Peter Gates will: together with David they have a startup called Categorical Informatics, which uses category theory to build sophisticated databases.

That’s not everyone — for example, most of my students will be speaking at this special session, and other people too — but that gives you a rough sense of some people involved. The conference is on a weekend, but John Foley and David Spivak and Brendan Fong and Dmitry Vagner are staying on for longer, so we’ll have some long conversations… and Brendan will explain decorated corelations in my Tuesday afternoon network theory seminar.

Wanna see what the talks are about?

Here’s the program. Click on talk titles to see abstracts. For a multi-author talk, the person with the asterisk after their name is doing the talking. All the talks will be in Room 268 of the Highlander Union Building or ‘HUB’.

Saturday November 4, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
A higher-order temporal logic for dynamical systems.
David I. Spivak, M.I.T.

10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams.
Dmitry Vagner*, Duke University
David I. Spivak, M.I.T.
Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.
Abstract dynamical systems.
Christina Vasilakopoulou*, University of California, Riverside
David Spivak, M.I.T.
Patrick Schultz, M.I.T.

Saturday November 4, 2017, 3:00 p.m.-5:50 p.m.

3:00 p.m.
Black boxes and decorated corelations.
Brendan Fong, M.I.T.

4:00 p.m.
Compositional modelling of open reaction networks.
Blake S. Pollard*, University of California, Riverside
John C. Baez, University of California, Riverside

4:30 p.m.
A bicategory of coarse-grained Markov processes.
Kenny Courser, University of California, Riverside

5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics.
Daniel Michael Cicala, University of California, Riverside

5:30 p.m.
Open systems in classical mechanics.
Adam Yassine, University of California Riverside

Sunday November 5, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
Controllability and observability: diagrams and duality.
Jason Erbele, Victor Valley College

9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations.
Brandon Coya, University of California, Riverside

10:00 a.m.
Compositional design and tasking of networks.
John D. Foley*, Metron, Inc.
John C. Baez, University of California, Riverside
Joseph Moeller, University of California, Riverside
Blake S. Pollard, University of California, Riverside

10:30 a.m.
Operads for modeling networks.
Joseph Moeller*, University of California, Riverside
John Foley, Metron Inc.
John C. Baez, University of California, Riverside
Blake S. Pollard, University of California, Riverside

Sunday November 5, 2017, 2:00 p.m.-4:50 p.m.

2:00 p.m.
Reeb graph smoothing via cosheaves.
Vin de Silva, Department of Mathematics, Pomona College

3:00 p.m.
Knowledge representation in bicategories of relations.
Evan Patterson*, Stanford University, Statistics Department

3:30 p.m.
The multiresolution analysis of flow graphs.
Steve Huntsman*, BAE Systems

4:00 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL).
Peter Y. Gates*, Categorical Informatics
Ryan Wisnesky, Categorical Informatics

by john at September 28, 2017 10:25 PM

Symmetrybreaking - Fermilab/SLAC

Conjuring ghost trains for safety

A Fermilab technical specialist recently invented a device that could help alert oncoming trains to large vehicles stuck on the tracks.

Photo of a train traveling along the tracks

Browsing YouTube late at night, Fermilab Technical Specialist Derek Plant stumbled on a series of videos that all begin the same way: a large vehicle—a bus, semi or other low-clearance vehicle—is stuck on a railroad crossing. In the end, the train crashes into the stuck vehicle, destroying it and sometimes even derailing the train. According to the Federal Railroad Administration, hundreds of vehicles are struck this way every year by trains, which can take over a mile to stop.

“I was just surprised at the number of these that I found,” Plant says. “For every accident that’s videotaped, there are probably many more.”

Inspired by a workplace safety class that preached a principle of minimizing the impact of accidents, Plant set about looking for solutions to the problem of trains hitting stuck vehicles.

Railroad tracks are elevated for proper drainage, and the humped profile of many crossings can cause a vehicle to bottom out. “Theoretically, we could lower all the crossings so that they’re no longer a hump. But there are 200,000 crossings in the United States,” Plant says. “Railroads and local governments are trying hard to minimize the number of these crossings by creating overpasses, or elevating roadways. That’s cost-prohibitive, and it’s not going to happen soon.”

Other solutions, such as re-engineering the suspension on vehicles likely to get stuck, seemed equally improbable.

After studying how railroad signaling systems work, Plant came up with an idea: to fake the presence of a train. His invention was developed in his spare time using techniques and principles he learned over his almost two decades at Fermilab. It is currently in the patent application process and being prosecuted by Fermilab’s Office of Technology Transfer.

“If you cross over a railroad track and you look down the tracks, you’ll see red or yellow or green lights,” he says. “Trains have traffic signals too.”

These signals are tied to signal blocks—segments of the tracks that range from a mile to several miles in length. When a train is on the tracks, its metal wheels and axle connect both rails, forming an electric circuit through the tracks to trigger the signals. These signals inform other trains not to proceed while one train occupies a block, avoiding pileups.
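
The track-circuit logic just described can be reduced to a toy model: anything that bridges the two rails with low enough resistance registers as a train. The threshold value below is purely illustrative, not a real signaling specification:

```python
def block_occupied(shunt_resistance_ohms, threshold_ohms=0.25):
    """A track circuit reads a block as occupied when something with low
    enough resistance (a train's wheels and axles, or a deliberate shunt)
    bridges the two rails and closes the circuit."""
    return shunt_resistance_ohms <= threshold_ohms

# A train's metal wheelset is an excellent conductor, so the signal drops to red:
signal = "red" if block_occupied(0.06) else "green"
```

Anything else that closes the same circuit is indistinguishable from a train to the signaling system, which is exactly the loophole Plant's invention exploits.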

Plant thought, “What if other vehicles could trigger the same signal in an emergency?” By faking the presence of a train, a vehicle stuck on the tracks could give advanced warning for oncoming trains to stop and stall for time. Hence the name of Plant’s invention: the Ghost Train Generator.

To replicate the train’s presence, Plant knew he had to create a very strong electric current between the rails. The most straightforward way to do this is with massive amounts of metal, as a train does. But for the Ghost Train Generator to be useful in a pinch, it needs to be small, portable and easily applied. The answer to achieving these features lies in strong magnets and special wire.

“Put one magnet on one rail and one magnet on the other and the device itself mimics—electrically—what a train would look like to the signaling system,” he says. “In theory, this could be carried in vehicles that are at high risk for getting stuck on a crossing: semis, tour buses and first-response vehicles,” Plant says. “Keep it just like you would a fire extinguisher—just behind the seat or in an emergency compartment.”

Once the device is deployed, the train would receive the signal that the tracks were obstructed and stop. Then the driver of the stuck vehicle could call for emergency help using the hotline posted on all crossings.

Plant compares the invention to a seatbelt.

“Is it going to save your life 100 percent of the time? Nope, but smart people wear them,” he says. “It’s designed to prevent a collision when a train is more than two minutes from the crossing.”

And like a seatbelt, part of what makes Plant’s invention so appealing is its simplicity.

“The first thing I thought was that this is a clever invention,” says Aaron Sauers from Fermilab’s technology transfer office, who works with lab staff to develop new technologies for market. “It’s an elegant solution to an existing problem. I thought, ‘This technology could have legs.’”

The organizers of the National Innovation Summit seem to agree.  In May, Fermilab received an Innovation Award from TechConnect for the Ghost Train Generator. The invention will also be featured as a showcase technology in the upcoming Defense Innovation Summit in October.

The Ghost Train Generator is currently in the pipeline to receive a patent with help from Fermilab, and its prospects are promising, according to Sauers. It is a nonprovisional patent application, which has specific claims and can be licensed. If the application passes muster and a patent is granted, Plant will receive a portion of the royalties it generates for Fermilab.

Fermilab encourages a culture of scientific innovation and exploration beyond the field of particle physics, according to Sauers, who noted that Plant’s invention is just one of a number of technology transfer initiatives at the lab.

Plant agrees—Fermilab’s environment helped motivate his efforts to find a solution for railroad crossing accidents.

“It’s just a general problem-solving state of mind,” he says. “That’s the philosophy we have here at the lab.”

Editor's note: A version of this article was originally published by Fermilab.

by Daniel Garisto at September 28, 2017 05:33 PM

Symmetrybreaking - Fermilab/SLAC

Fermilab on display

The national laboratory opened usually inaccessible areas of its campus to thousands of visitors to celebrate 50 years of discovery.

Fermi National Accelerator Laboratory’s yearlong 50th anniversary celebration culminated on Saturday with an Open House that drew thousands of visitors despite the unseasonable heat.

On display were areas of the lab not normally open to guests, including neutrino and muon experiments, a portion of the accelerator complex, lab spaces and magnet and accelerator fabrication and testing areas, to name a few. There were also live links to labs around the world, including CERN, a mountaintop observatory in Chile, and the mile-deep Sanford Underground Research Facility that will house the international neutrino experiment, DUNE.

But it wasn’t all physics. In addition to hands-on demos and a STEM fair, visitors could also learn about Fermilab’s art and history, walk the prairie trails or hang out with the ever-popular bison. In all, some 10,000 visitors got to go behind the scenes at Fermilab, shuttled around on 80 buses and welcomed by 900 Fermilab workers eager to explain their roles at the lab. Below, see a few of the photos captured as Fermilab celebrated 50 years of discovery.

by Lauren Biron at September 28, 2017 03:47 PM

September 27, 2017

Matt Strassler - Of Particular Significance

LIGO and VIRGO Announce a Joint Observation of a Black Hole Merger

Welcome, VIRGO!  Another merger of two big black holes has been detected, this time by both LIGO’s two detectors and by VIRGO as well.

Aside from the fact that this means that the VIRGO instrument actually works, which is great news, why is this a big deal?  By adding a third gravitational wave detector, built by the VIRGO collaboration, to LIGO’s Washington and Louisiana detectors, the scientists involved in the search for gravitational waves now can determine fairly accurately the direction from which a detected gravitational wave signal is coming.  And this allows them to do something new: to tell their astronomer colleagues roughly where to look in the sky, using ordinary telescopes, for some form of electromagnetic waves (perhaps visible light, gamma rays, or radio waves) that might have been produced by whatever created the gravitational waves.

The point is that with three detectors, one can triangulate.  The gravitational waves travel for billions of years, traveling at the speed of light, and when they pass by, they are detected at both LIGO detectors and at VIRGO.  But because it takes light a few thousandths of a second to travel the diameter of the Earth, the waves arrive at slightly different times at the LIGO Washington site, the LIGO Louisiana site, and the VIRGO site in Italy.  The precise timing tells the scientists what direction the waves were traveling in, and therefore roughly where they came from.  In a similar way, using the fact that sound travels at a known speed, the times that a gunshot is heard at multiple locations can be used by police to determine where the shot was fired.
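
The triangulation idea can be made concrete for a single pair of detectors: a plane wave crossing a baseline of length d at angle θ arrives with a time difference Δt = (d/c)·cos θ. A minimal sketch, where the 3000 km LIGO site separation is a rounded, illustrative figure:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def arrival_angle_deg(delay_s, baseline_m):
    """Angle between the wave's propagation direction and the detector
    baseline, inverted from delta_t = (d / c) * cos(theta)."""
    return math.degrees(math.acos(SPEED_OF_LIGHT * delay_s / baseline_m))

# Two detectors roughly 3000 km apart can never see a delay longer than d/c,
# about 10 milliseconds -- the "few thousandths of a second" mentioned above:
baseline = 3.0e6
max_delay = baseline / SPEED_OF_LIGHT
theta = arrival_angle_deg(0.005, baseline)  # a 5 ms delay
```

One pair of detectors only constrains the source to a ring on the sky at fixed θ; adding a third, non-collinear detector (VIRGO) intersects the rings and shrinks the region dramatically.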

You can see the impact in the picture below, which is an image of the sky drawn as a sphere, as if seen from outside the sky looking in.  In previous detections of black hole mergers by LIGO’s two detectors, the scientists could only determine a large swath of sky where the observed merger might have occurred; those are the four colored regions that stretch far across the sky.  But notice the green splotch at lower left.  That’s the region of sky where the black hole merger announced today occurred.  The fact that this region is many times smaller than the other four reflects what including VIRGO makes possible.  It’s a small enough region that one can search using an appropriate telescope for something that is making visible light, or gamma rays, or radio waves.

Skymap of the LIGO/Virgo black hole mergers.

Image credit: LIGO/Virgo/Caltech/MIT/Leo Singer (Milky Way image: Axel Mellinger)


While a black hole merger isn’t expected to be observable by other telescopes, and indeed nothing was observed by other telescopes this time, other events that LIGO might detect, such as a merger of two neutron stars, may create an observable effect. We can hope for such exciting news over the next year or two.

Filed under: Astronomy, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO

by Matt Strassler at September 27, 2017 05:50 PM

Lubos Motl - string vacua and pheno

Proposed experiments seeking hidden sectors, millicharged particles
Off-topic I, LIGO: Virgo, the Italian sister (guess why she isn't a brother and what's the name of her bed partner), has finally joined her two LIGO brothers and they observed the GW170814 black hole merger together. Her signal is clearly weaker but it's there. Three detectors allowed precise localization of the event in both directions and will allow them to test new consistency conditions that follow from GR. See LIGO's Twitter for more.

Off-topic II, quantum computing: a new PLB article proposes faster hardware for quantum computers, with some photon pulses running around a room many times. See Science Alert for a summary. Because the qubit is embedded in an infinite-dimensional Hilbert space, the scheme may be easily made fault-tolerant.

Bob Henderson wrote about two proposed experiments to search for new (particle) physics outside the LHC's detectors:
How the Hidden Higgs Could Reveal Our Universe’s Dark Sector (Quanta Magazine)
There may be new Higgs-like bosons, superpartners predicted by supersymmetry, or completely new things – but no physics beyond the Standard Model has been found as of today.

In particular, Henderson mentions a 2014 paper proposing the milliQan experiment and a 2016 paper proposing MATHUSLA (appeared in PLB).

MilliQan is clearly a distortion of the name Millikan, who performed the famous oil droplet experiments to find the value of the elementary charge. Helpfully enough, Millikan's name starts with "milli" which is 1/1,000. ;-) Consequently, milliQ may be interpreted as a "tiny charge, about 1/1,000 of the charge of the electron" or another elementary particle.

The milliQan experiment should be sensitive to \(1\GeV\) particles, plus minus some 1-2 orders of magnitude, whose charges are between 0.001 and 0.1 of the electron charge. It's fun that one may propose such a thing and build it. But do I believe that such fractional charges exist? I would bet No. I really think that they don't exist at all, because of some theoretical principles. In particular, grand unification and similar schemes would ban such a fragmentation of the elementary charge. On top of that, I think that the Millikan experiment by itself is a rather good empirical answer to the question and the answer it gave was that the electron charge is the elementary one.

But even if one assumes that this intuition is completely wrong and such particles are allowed, would they be found by this experiment? Would they have masses close to \(1\GeV\), plus minus two orders of magnitude? Clearly, this condition reduces the probability of a find further. While the LHC hasn't found anything yet, it's still totally reasonable to imagine that new excesses suddenly start to grow in 2017 and a 5-sigma discovery will be made in a few years.

For this new experiment, it looks much less likely to me. But of course, if it's cheaper than say 1% of the LHC, it seems sensible to me to pay for such an experiment.

MATHUSLA, the other experiment that Henderson begins with, is named after a monster that is rumored to have lived 1,000 years ago. Well, even rumors about monsters living at the present are usually false – what about monsters a millennium ago? ;-) At any rate, MATHUSLA is supposed to be a big, barn-like experiment in which some very long-lived particles – which are invisible inside the LHC and escape the LHC – are encouraged to decay to ordinary particles by the hay. I didn't quite understand whether the interior of the barn is important and how.

And the ordinary particles that result from such decays of the long-lived new hypothetical particles – whose lifetime times the speed of light is between millimeters and kilometers – are detected on the roof of the MATHUSLA barn. This kind of experiment is meant to be sensitive particularly to models with very extended, huge hidden sectors with many particles and/or their copies.

Those models are academically plausible and some of the arguments that they may provide us with new ways to solve the hierarchy problem are plausible after a few bottles of wine. Sorry, I can't accept them without the wine because the addition of an unnaturally large number of sectors is a sort of fine-tuning by itself. But I simply cannot get rid of the feeling that such experiments addressing such models are pure random guesswork. I don't really see the great new possibilities that the authors have discovered. In other words, I don't know what it would mean to "independently rediscover those things" and I don't understand how I could be proud about such a rediscovery. You may clearly generalize existing models in many ways – change the number of colors, factors of the gauge group, or generations, and many similar things, from two to three, to five, to ten factorial, to infinity. I think that the last two possibilities aren't more natural or attractive than those smaller numbers at the beginning. At least without extra arguments, they don't seem to be.

Moreover, even if the experiment found something, I don't think it's clear at all that one should say that it provides us with evidence in favor of the particular models with extended hidden sectors that are being used to justify the experiment. There could also be "more minimal" models that incorporate such a new particle – which could be added to the Standard Model separately, in the well-known "who ordered that" way.

But of course, I can be wrong about all these guesses. I can misunderstand something important. I can be like Sheldon who was asked by Howard what's the name of the astronaut who will go to outer space with Howard's toilet. "Mohammed Lee," Sheldon answered because in the case of ignorance, the combination of the most frequent first name and most frequent last name gave him a mathematical edge. It turned out that the name was "Howard Wolowitz". Sheldon wouldn't have guessed it even if he had a million attempts. ;-)

by Luboš Motl at September 27, 2017 05:26 PM

September 26, 2017

Symmetrybreaking - Fermilab/SLAC

Shining with possibility

As Jordan-based SESAME nears its first experiments, members are connecting in new ways.

Header: A new light

Early in the morning, physicist Roy Beck Barkai boards a bus in Tel Aviv bound for Jordan. By 10:30 a.m., he is on site at SESAME, a new scientific facility where scientists plan to use light to study everything from biology to archaeology. He is back home by 7 p.m., in time to have dinner with his children.

Before SESAME opened, the closest facility like it was in Italy. Beck Barkai often traveled for two days by airplane, train and taxi for a day or two of work—an inefficient and expensive process that limited his ability to work with specialized equipment from his home lab and required him to spend days away from his family.  

“For me, having the ability to kiss them goodbye in the morning and just before they went to sleep at night is a miracle,” Beck Barkai says. “It felt like a dream come true. Having SESAME at our doorstep is a big plus."

SESAME, also known as the International Centre for Synchrotron-Light for Experimental Science and Applications in the Middle East, opened its doors in May and is expected to host its first beams of synchrotron light this year. Scientists from around the world will be able to apply for time to use the facility’s powerful light source for their experiments. It’s the first synchrotron in the region. 

Beck Barkai says SESAME provides a welcome dose of convenience, as scientists in the region can now drive to a research center instead of flying with sensitive equipment to another country. It’s also more cost-effective.

Located in Jordan to the northwest of the city of Amman, SESAME was built by a collaboration made up of Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, Turkey and the Palestinian Authority—a partnership members hope will improve relations among the eight neighbors.

“SESAME is a very important step in the region,” says SESAME Scientific Advisory Committee Chair Zehra Sayers. “The language of science is objective. It’s based on curiosity. It doesn’t need to be affected by the differences in cultural and social backgrounds. I hope it is something that we will leave the next generations as a positive step toward stability.”

Inline_1: A new light
Artwork by Ana Kova

Protein researcher and University of Jordan professor Areej Abuhammad says she hopes SESAME will provide an environment that encourages collaboration. 

“I think through having the chance to interact, the scientists from around this region will learn to trust and respect each other,” she says. “I don’t think that this will result in solving all the problems in the region from one day to the next, but it will be a big step forward.”

The $100 million center is a state-of-the-art research facility that should provide some relief to scientists seeking time at other, overbooked facilities. SESAME plans to eventually host 100 to 200 users at a time. 

SESAME’s first two beamlines will open later this year. About twice per year, SESAME will announce calls for research proposals, the next of which is expected for this fall. Sayers says proposals will be evaluated for originality, preparedness and scientific quality. 

Groups of researchers hoping to join the first round of experiments submitted more than 50 applications. Once the lab is at full operation, Sayers says, the selection committee expects to receive four to five times more than that.

Opening up a synchrotron in the Middle East means that more people will learn about these facilities and have a chance to use them. Because some scientists in the region are new to using synchrotrons or writing the style of applications SESAME requires, Sayers asked the selection committee to provide feedback with any rejections. 

Abuhammad is excited for the learning opportunity SESAME presents for her students—and for the possibility that experiences at SESAME will spark future careers in science. 

She plans to apply for beam time at SESAME to conduct protein crystallography, a field that involves peering inside proteins to learn about their function and aid in pharmaceutical drug discovery. 

Another scientist vying for a spot at SESAME is Iranian chemist Maedeh Darzi, who studies the materials of ancient manuscripts and how they degrade. Synchrotrons are of great value to archaeologists because they minimize the damage to irreplaceable artifacts. Instead of cutting them apart, scientists can take a less damaging approach by probing them with particles. 

Darzi sees SESAME as a chance to collaborate with scientists from the Middle East and to promote science, peace and friendship. For her and others, SESAME could be a place where particles put things back together.

by Signe Brewster at September 26, 2017 02:13 PM

September 23, 2017

Lubos Motl - string vacua and pheno

Pariah moonshine
Erica Klarreich wrote an insightful review
Moonshine Link Discovered for Pariah Symmetries (Quanta Mag.)
of a new paper by Duncan, Mertens, and Ono in Nature,
Pariah moonshine (full paper, HTML).
That discovery is a counterpart of the monstrous and umbral moonshine – but instead of the monster group and umbral/mock modular forms, it deals with a pariah group and weight 3/2 modular forms.

The historical bottles of Old Hunter's, a Czech whiskey, indicate that the hunter was getting younger as a function of time. ;-)

The paper was originally sent to me by Willie Soon – who wasn't the only one who was entertained by the terminology. This portion of mathematics really uses very weird or comical jargon, maybe one that is over the edge. But I believe that the playful names ultimately reflect the unusual degree of excitement among the mathematicians and mathematical physicists who study these things – and I believe that this excitement is absolutely justified.

I don't want to cover their discoveries in detail but it may be a good idea to remind you of the three kinds of moonshine and how big a portion of ideas they cover.

OK, in all cases, there's something like a big group or a higher-dimensional geometry with some topology on one side; and some possibly generalized modular form on the other side. A modular form is basically a holomorphic function of usually one complex variable \(\tau\) that has some simple enough behavior if you change \(\tau\to \tau+1\) or \(\tau\to -1/\tau\), if you allow me this non-rigorous but arguably very helpful heuristic definition.

Some large, integer coefficients seem to be the same on both sides. Because the two sides look like coming from completely different corners of mathematics, you're tempted to think that the match must be a coincidence. Except that, as you may realize already shortly before you drink a bottle of moonshine that you win for your proof, the agreement isn't a coincidence at all. There exists a mathematical explanation why the numbers in the different corners of mathematics have to be equal.

This explanation is generally referred to as moonshine.

The first and still deepest – I hope that experts agree – example of moonshine is the monstrous moonshine which includes the monster group as the main actor on the geometric side. (Check for my introduction to moonshine at Quora if you're intrigued.) The monster group is the largest among the truly exceptional, so-called sporadic, finite groups. It has almost \(10^{54}\) elements. The order is the product of powers of all supersingular primes – there is a finite number of supersingular primes (all primes up to 71 except for 37,43,53,61,67) which are "more prime" than other primes.

The monster group has irreducible representations. The smallest one is obviously the 1-dimensional trivial one that doesn't transform at all. The next one is 196,883-dimensional. It turns out that the \(j\)-invariant, a unique (up to \(SL(2,\CC)\) transformations) holomorphic function mapping the fundamental region of the modular group to the whole complex plane (therefore a weight-zero modular form), may be expanded as\[

j(\tau) = {1 \over q} + 744 + 196884 q + 21493760 q^2 + \dots

\] for \(\tau\to i\infty\) where \(q=\exp(2\pi i \tau)\). The coefficient 196,884 looks similar to the dimension of the irrep of the monster group and indeed, it's no coincidence. There exists a string theory that explains it – basically the dynamics of ordinary string theory's strings propagating on a 24-dimensional torus defined as a quotient of \(\RR^{24}\) by the Leech lattice, the nicest among the 24 different even self-dual 24-dimensional lattices (it's nicest because it happens to have no sites whose squared length is two, the positive minimum allowed by the even self-dual condition).

One can show that the partition sum (on the torus) of this string theory has to be basically the \(j\)-function. How? Partition sums in string theory basically have to be modular functions and due to the absence of the sites I mentioned and some other simple arguments, the function must be weight-zero and those are basically unique. At the same moment, the string theory may be proven to have a discrete symmetry which is a stringy extension of the "obvious" isometry group of the Leech lattice. The automorphism group of the Leech lattice is the Conway \({\rm CO}_0\) group and string theory enhances it to the full monster group \(M\).

So all the coefficients in the \(j\)-functions have to count some degeneracy in a string theory but because the string theory has a monster group symmetry, all these degeneracies have to come in full representations of the monster group. At low energy levels, the smallest representations appear, and \(196,884\) in the expansion of the \(j\)-invariant has to be \(1+196,883\), the dimension of the direct sum of the two smallest irreps. The numbers at higher levels are some simple combinations of lower-dimensional irreps of the monster group, too.
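The numerology in the previous paragraphs is easy to check in a few lines. Here is a minimal sketch (mine, not from any of the cited papers) using only the published \(j\)-function coefficients and the dimensions of the three smallest monster irreps:

```python
# Toy numerology check of the moonshine matching described above (my sketch,
# not from the cited papers). The j-function coefficients and the monster
# irrep dimensions below are standard published values.

j_coeffs = [196884, 21493760]           # coefficients of q^1, q^2 in j(tau)
monster_irreps = [1, 196883, 21296876]  # three smallest irrep dimensions

# q^1 coefficient = trivial irrep + smallest nontrivial irrep
assert j_coeffs[0] == monster_irreps[0] + monster_irreps[1]

# q^2 coefficient = sum of the three smallest irrep dimensions
assert j_coeffs[1] == sum(monster_irreps)

print("j-function coefficients decompose into monster irrep dimensions")
```

The same pattern continues at higher orders: each coefficient is a small non-negative combination of monster irrep dimensions.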

Great, so string theory on the Leech torus – in some sense, the coolest way to compactify all 24 transverse dimensions of the bosonic string – explains the monstrous moonshine.

Then there is another moonshine, umbral moonshine. It is a generalization that replaces the monster group by the Mathieu group \(M_{24}\) or by something that is slightly more general. The relevant string theory includes strings propagating on the K3 surfaces. Because the "total" homology of the K3 surfaces is 24-dimensional, this moonshine is also related to 24-dimensional lattices – but all 24 even self-dual ones (Niemeier lattices), not just the Leech lattice.

The generalization of this Mathieu moonshine is called "umbral moonshine" because "umbral" is a Latin adjective derived from "shadows" and the relevant modular functions aren't really true modular functions but mock modular functions, some generalizations – and generalizations like these mock ones may be called "shadows". See TRF blog posts with "umbral".

The integers that appear in the umbral moonshine are typically smaller but there are many of them and they may be matched in between the two sides in some way.

The newest type of moonshine which we discuss here is the pariah moonshine. So far the string theory compactification isn't really known, if I understand correctly, and there should be one, so the readers are urged to immediately find it! What is "pariah" about it?

Well, look at this list of sporadic groups. They were named after the generalized mathematicians who discovered them – a generalized mathematician is defined as either a smart man, a monster, or a baby monster. ;-) The mathematicians after whom these exceptional finite simple groups which don't fall into any infinite families – the sporadic groups – are named are Mathieu (5 of them), Janko (4), Conway (3), Fischer (3), Higman-Sims, McLaughlin, Held, Rudvalis, Suzuki, O'Nan, Harada-Norton, Lyons, Thompson, baby monster, and monster (equivalently Fischer-Griess). That's 26 sporadic groups in total.

The Tits group is sometimes counted as the 27th sporadic group – it's "almost" a group of the Lie type, because of some subtleties.

OK, the diagram above shows the 26 full-blown sporadic groups. The relationships indicate how you can get the groups from each other as subquotients. Most of them end up in a herd, underpinned by the monster \(M\) at the top, and those are called "the happy family" which has 20 members. The remaining 6 sporadic groups are the four groups on the right bottom side plus the Lyons and Janko-4 groups on the top sides. These 6 groups were named the pariah groups – the opposite of the happy family had to be invented. All the 26 sporadic groups are exceptions or renegades of a sort – the pariah groups are arguably even more blasphemous; they're the renegades among heretics. ;-)

The new paper in Nature is mostly about the O'Nan sporadic group – although some related constructions probably exist for \(M_{11}\) and \(M_{23}\) near the bottom, descendants of the monster in the happy family. The authors chose the adjective "pariah" for the moonshine but they didn't immediately say something about all the pariah groups, I think. So maybe a better title could have been "O'Nan moonshine" – which sounds like some truly original Irish whiskey. ;-) (Recall that the Irish whiskey is spelled with "e" while the Scottish whisky is spelled without "e". Michael O'Nan was American and died at Princeton on July 31st. Later, I learned that the authors did use the O'Nan moonshine phrase on the arXiv.)

The counterpart of the monster group's irrep's dimension 196,883 happens to be 26,752 (see the full list) for the O'Nan group and you may see it as a coefficient in weight-3/2 modular functions. That's a basic connection that this pariah moonshine is all about.

In some very vague sense, these moonshines are analogous to each other – recently, weight-1/2 modular functions were connected with the Thompson group, a member of the monster's happy family – but there seem to be rather deep technical, almost groundbreaking differences between the individual members of the community. What I want to say is that it would be totally wrong to impose some "egalitarianism" between the 26 sporadic groups and think that the knowledge about their moonshines is composed of 26 copies of the same clichés. Each of them is very different, has connections to somewhat different parts of mathematics, group theory, and string theory. Each of them has a slightly different story. Affirmative action would be totally indefensible here.

Above, I wrote that there were basically 3-4 moonshines – if you attach the major adjectives such as monstrous, Mathieu, umbral, and pariah. But with a finer technical classification, the number is larger. By 2013, Miranda Cheng, John Duncan, and Jeff Harvey conjectured the existence of 23 new moonshines – and Michael Griffin proved their existence two years later.

These moonshines are a deep wisdom in mathematics – mental wormholes that connect completely different regions of the realm of mathematical ideas. They may also be interpreted as dualities in string theory. In particular, the monstrously symmetric compactification of string theory defines the CFT that is holographically dual to pure gravity in the three-dimensional anti de Sitter space. Similarly, the other sporadic groups – the most complicated and exceptional groups in the theory of finite groups – are being connected to some of the simplest and most fundamental geometric compactifications of string theory. I think that there clearly seems to be a new complementarity here – a fundamental, structureless compactification of string theory is linked to the largest sporadic groups. You make the stringy compactification a little bit more arbitrary and the corresponding moonshine's group gets smaller or more regular.

Very generally, I have often said and some other people have said that string theory isn't just a theory of everything in physics. It may also provide us with a complete classification of all ideas in mathematics that are really worth something. I think that the moonshines and perhaps some related ideas are very particular and important manifestations of this power of string theory to classify and connect all profound mathematical ideas.

Again, I write it despite the fact that the stringy compactification explaining the pariah moonshine hasn't been found yet – in this sense, the proof of the pariah moonshine seems to be non-stringy, at least so far. The stringy compactification for this case could be a minor variation of the known ones – and maybe of M-theory or F-theory which would already be cool – but it could be more fundamental and more original, too. We should better find it or whatever replaces it. (Don't expect it to be a loop quantum gravity or another "very non-stringy" or crackpot-powered beasts, however LOL.)

By the way, at this moment, the Quanta Magazine review has only two comments – and both of them say that the readers have no chance to understand it. I am afraid that it's really something like that proverbial "twelve people in the world" who can really work on these things technically and even the number of people who understand it at the level of your humble correspondent is of order 100 in the world. I have mixed feelings about the question whether the number should be substantially higher, and whether it makes any sense to write popular reviews (and give popular talks about moonshine, as I did in the summer) at all.

There are some general prejudices that the laymen have about the "details of mathematical structures". Some people are numerologists – they are obsessed with seeing special numbers like 137 everywhere and invent lots of new moonshines which are rubbish, of course. They're enthusiastic cranks – their fantasy runs too hot. But most laymen are the exact opposite. They are too cold and they believe that there's nothing special to be seen anywhere. They basically think that all numbers are created morally equal. If something can be done with 248, it can be done with 252, too. Well, it isn't the case. The truth of mathematics is somewhere in between. Lots of very particular statements and structures exist that are tightly connected to some special integers and simply cannot be generalized to all integers. All the numbers describing the sporadic groups – and moonshines – are great examples of these exclusive rights and inequality between the numbers and choices in mathematics. I think that this is the kind of a very general, moral conclusion that the mathematical research has achieved and that hasn't been communicated to the public yet – a reason to think that the popularization isn't really working at all.

by Luboš Motl at September 23, 2017 10:36 AM

September 22, 2017

ZapperZ - Physics and Physicists

Common Mistakes By Students In Intro Physics
Rhett Allain has listed 3 common mistakes and misunderstandings made by students in intro kinematics physics courses.

I kinda agree with all of them, and I've seen them myself. In fact, when I teach "F=ma" and try to impress upon them its validity, I ask them: if it is true, why do you need to keep your foot on the gas pedal to keep the vehicle moving at constant speed while driving? That experience appears to indicate that a constant "F" produces a constant "speed", and thus, "a=0".

Tackling this is important, because the students already have a set of understandings of how the world around them works, whether correct or not. It needs to be tackled head-on. I also tackle this in dealing with current, where we calculate the drift velocity of conduction electrons. The students discover that the drift velocity is excruciatingly slow. So then I ask them: if the conduction electrons move like molasses, why does it appear that when I turn the switch on, the light comes on almost instantaneously?
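The drift-velocity surprise is easy to reproduce as a back-of-envelope calculation. The sketch below is mine, with assumed textbook-style numbers (a 1 A current in a copper wire of 1 mm² cross-section, carrier density about 8.5×10²⁸ m⁻³), not values from the post:

```python
# Back-of-envelope drift velocity for conduction electrons in a copper wire.
# Assumed illustrative values (not from the post): I = 1 A through a wire of
# 1 mm^2 cross-section; copper carrier density n ~ 8.5e28 electrons/m^3.

e = 1.602e-19  # elementary charge, C
n = 8.5e28     # conduction electron density of copper, m^-3
A = 1e-6       # cross-sectional area, m^2 (1 mm^2)
I = 1.0        # current, A

v_drift = I / (n * e * A)  # m/s, from I = n e A v
print(f"drift velocity ~ {v_drift * 1000:.3f} mm/s")
```

The result is on the order of 0.1 mm/s – molasses indeed. The light comes on almost instantly because the electric field propagates through the circuit at nearly the speed of light; the electrons themselves barely crawl.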

Still, if we are nitpicking here, I have a small issue with the first item on Allain's list:

What happens when you have a constant force on an object? A very common student answer is that a constant force on an object will make it move at a constant speed—which is wrong, but it sort of makes sense.

Because he's using "speed" and not "velocity", it opens up the possibility of a special case: a centripetal force in circular motion, where the object has a net force acting on it but its speed remains the same. Because the centripetal force is always perpendicular to the motion of the particle, it imparts no increase in speed, just a change in direction. So yes, the velocity changes, but the magnitude of the velocity (the speed) does not. So the misconception here isn't always wrong.
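That special case is easy to demonstrate numerically. The following sketch (my own illustration, with made-up parameters) integrates the motion of a particle under a force of fixed magnitude that is always perpendicular to the velocity, and confirms that the speed stays constant while the direction changes:

```python
# Numerical sketch: a force always perpendicular to the velocity changes the
# direction of motion but not the speed. Illustrative parameters only;
# RK4 integration of a = (F/m) * unit vector perpendicular to v.
import math

F_over_m = 2.0  # magnitude of acceleration, m/s^2 (assumed)
dt, steps = 1e-3, 10000

def accel(vx, vy):
    """Acceleration of fixed magnitude, rotated 90 degrees from velocity."""
    v = math.hypot(vx, vy)
    return (-F_over_m * vy / v, F_over_m * vx / v)

def rk4_step(vx, vy):
    k1 = accel(vx, vy)
    k2 = accel(vx + 0.5 * dt * k1[0], vy + 0.5 * dt * k1[1])
    k3 = accel(vx + 0.5 * dt * k2[0], vy + 0.5 * dt * k2[1])
    k4 = accel(vx + dt * k3[0], vy + dt * k3[1])
    return (vx + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            vy + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

vx, vy = 3.0, 0.0
speed0 = math.hypot(vx, vy)
for _ in range(steps):
    vx, vy = rk4_step(vx, vy)

# The direction has rotated, but the speed is unchanged to integrator accuracy.
assert abs(math.hypot(vx, vy) - speed0) < 1e-9
print("speed preserved under a purely centripetal force")
```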


by ZapperZ at September 22, 2017 03:10 PM

Tommaso Dorigo - Scientificblogging

Top Quarks Observed In Proton-Nucleus Collisions For The First Time
The top quark is the heaviest known matter corpuscle we consider elementary. 
Elementary is an overloaded word in English, so I need to explain what it means in the context of subatomic particles. If we grab a dictionary we get several possibilities, e.g.:

- elementary: pertaining to or dealing with elements, rudiments, or first principles
- elementary: of the nature of an ultimate constituent; uncompounded
- elementary: not decomposable into elements or other primary constituents
- elementary: simple

read more

by Tommaso Dorigo at September 22, 2017 12:28 PM

The n-Category Cafe

Schröder Paths and Reverse Bessel Polynomials

I want to show you a combinatorial interpretation of the reverse Bessel polynomials which I learnt from Alan Sokal. The sequence of reverse Bessel polynomials begins as follows.

\[
\begin{aligned}
\theta_0(R)&=1\\
\theta_1(R)&=R+1\\
\theta_2(R)&=R^2+3R+3\\
\theta_3(R)&=R^3+6R^2+15R+15
\end{aligned}
\]

To give you a flavour of the combinatorial interpretation we will prove, you can see that the second reverse Bessel polynomial can be read off the following set of ‘weighted Schröder paths’: multiply the weights together on each path and add up the resulting monomials.

Schroeder paths

In this post I’ll explain how to prove the general result, using a certain result about weighted Dyck paths that I’ll also prove. At the end I’ll leave some further questions for the budding enumerative combinatorialists amongst you.

These reverse Bessel polynomials have their origins in the theory of Bessel functions, but I encountered them in the theory of magnitude: they are key to a formula I give for the magnitude of an odd dimensional ball, which I have just posted on the arXiv.

In that paper I use the combinatorial expression for these Bessel polynomials to prove facts about the magnitude.

Here, to simplify things slightly, I have used the standard reverse Bessel polynomials whereas in my paper I use a minor variant (see below).

I should add that a very similar expression can be given for the ordinary, unreversed Bessel polynomials; you just need a minor modification to the way the weights on the Schröder paths are defined. I will leave that as an exercise.

The reverse Bessel polynomials

The reverse Bessel polynomials have many properties. In particular they satisfy the recursion relation
\[
\theta_{i+1}(R)=R^2\theta_{i-1}(R) + (2i+1)\theta_{i}(R)
\]
and \(\theta_i(R)\) satisfies the differential equation
\[
R\theta_i^{\prime\prime}(R)-2(R+i)\theta_i^\prime(R)+2i\theta_i(R)=0.
\]
There's an explicit formula:
\[
\theta_i(R) = \sum_{t=0}^i \frac{(i+t)!}{(i-t)!\, t!\, 2^t}R^{i-t}.
\]
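Both the recursion and the explicit formula are easy to check with a short computation. Here is a sketch (mine, just a sanity check) representing a polynomial in \(R\) by its list of coefficients, constant term first:

```python
# Sanity check of the explicit formula and the recursion for the reverse
# Bessel polynomials, with polynomials as coefficient lists (constant first).
from math import factorial

def theta(i):
    """Coefficients of the reverse Bessel polynomial theta_i(R)."""
    coeffs = [0] * (i + 1)
    for t in range(i + 1):
        # the summand (i+t)! / ((i-t)! t! 2^t) is always an integer
        coeffs[i - t] = factorial(i + t) // (factorial(i - t) * factorial(t) * 2**t)
    return coeffs

# The polynomials listed at the top of the post:
assert theta(0) == [1]
assert theta(1) == [1, 1]          # R + 1
assert theta(2) == [3, 3, 1]       # R^2 + 3R + 3
assert theta(3) == [15, 15, 6, 1]  # R^3 + 6R^2 + 15R + 15

def add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
            for k in range(n)]

# theta_{i+1}(R) = R^2 theta_{i-1}(R) + (2i+1) theta_i(R)
for i in range(1, 8):
    lhs = theta(i + 1)
    rhs = add([0, 0] + theta(i - 1), [(2 * i + 1) * c for c in theta(i)])
    assert lhs == rhs
```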

I’m interested in them because they appear in my formula for the magnitude of odd dimensional balls. To be more precise, in my formula I use the associated Sheffer polynomials, \((\chi_i(R))_{i=0}^\infty\); they are related by \(\chi_i(R)=R\theta_{i-1}(R)\), so the coefficients are the same, but just moved around a bit. These polynomials have a similar but slightly more complicated combinatorial interpretation.

In my paper I prove that the magnitude of the <semantics>(2p+1)<annotation encoding="application/x-tex">(2p+1)</annotation></semantics>-dimensional ball of radius <semantics>R<annotation encoding="application/x-tex">R</annotation></semantics> has the following expression:

\[
\left|B^{2p+1}_R \right| = \frac{\det[\chi_{i+j+2}(R)]_{i,j=0}^p}{(2p+1)!\, R\, \det[\chi_{i+j}(R)]_{i,j=0}^p}
\]

As each polynomial \(\chi_i(R)\) has a path counting interpretation, one can use the rather beautiful Lindström-Gessel-Viennot Lemma to give a path counting interpretation to the determinants in the above formula and find some explicit expression. I will probably blog about this another time. (Fellow host Qiaochu has also blogged about the LGV Lemma.)
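As a consistency check of the formula for small \(p\) (not a replacement for the proof in the paper), one can evaluate the determinants with exact rational arithmetic. In the sketch below the convention \(\chi_0(R)=1\) is my own assumption, chosen so that the \(p=0\) case gives \(\chi_2(R)/(1!\,R\,\chi_0(R))=R+1\), the magnitude of an interval:

```python
# Exact-arithmetic check of the determinant formula for small p.
# Assumption for this sketch: chi_0(R) = 1, with chi_i(R) = R*theta_{i-1}(R)
# for i >= 1; polynomials are coefficient lists, constant term first.
from fractions import Fraction
from math import factorial

def theta(i):
    c = [0] * (i + 1)
    for t in range(i + 1):
        c[i - t] = factorial(i + t) // (factorial(i - t) * factorial(t) * 2**t)
    return c

def chi(i):
    return [1] if i == 0 else [0] + theta(i - 1)  # multiply theta_{i-1} by R

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

def polysub(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) - (q[k] if k < len(q) else 0)
            for k in range(n)]

# p = 1, the 3-ball: numerator = det [[chi_2, chi_3], [chi_3, chi_4]],
# denominator = 3! * R * det [[chi_0, chi_1], [chi_1, chi_2]].
num = polysub(polymul(chi(2), chi(4)), polymul(chi(3), chi(3)))
den = polymul([0, 6], polysub(polymul(chi(0), chi(2)), polymul(chi(1), chi(1))))

# The quotient is the polynomial 1 + 2R + R^2 + R^3/6:
candidate = [Fraction(1), Fraction(2), Fraction(1), Fraction(1, 6)]
assert polymul(den, candidate) == num
```

So with these assumptions the \(p=1\) quotient comes out to \(1+2R+R^2+R^3/6\); I have only checked that this is consistent with the formula as quoted above.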

Weighted Dyck paths

Before getting on to Bessel polynomials and weighted Schröder paths, we need to look at counting weighted Dyck paths, which are simpler and more classical.

A Dyck path is a path in the lattice \(\mathbb{Z}^2\) which starts at \((0,0)\), stays in the upper half plane, ends back on the \(x\)-axis at \((2i,0)\) and has steps going either diagonally right and up or right and down. The integer \(2i\) is called the length of the path. Let \(D_i\) be the set of length \(2i\) Dyck paths.

For each Dyck path \(\sigma\), we will weight each edge going right and down, from \((x,y)\) to \((x+1,y-1)\), by \(y\); then we will take \(w(\sigma)\), the weight of \(\sigma\), to be the product of all the weights on its steps. Here are all five weighted Dyck paths of length six.

Dyck paths

Famously, the number of Dyck paths of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> is given by the <semantics>i<annotation encoding="application/x-tex">{i}</annotation></semantics>th Catalan number; here, however, we are interested in the number of paths weighted by the weighting(!). If we sum over the weights of each of the above diagrams we get <semantics>6+4+2+2+1=15<annotation encoding="application/x-tex">6+4+2+2+1=15</annotation></semantics>. Note that this is <semantics>5×3×1<annotation encoding="application/x-tex">5\times 3 \times 1</annotation></semantics>. This is a pattern that holds in general.
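This pattern is easy to check by machine before proving it. Here is a small brute-force sketch in plain Python (my own code, not from the post; the function names are mine) that enumerates weighted Dyck paths and compares the weighted count with the odd double factorial.

```python
def dyck_paths(n, h=0):
    """Yield all Dyck paths with n steps remaining, starting at height h,
    as tuples of 'U' (up) and 'D' (down) steps."""
    if h > n:  # too high to get back down to the axis
        return
    if n == 0:
        yield ()
        return
    for rest in dyck_paths(n - 1, h + 1):
        yield ('U',) + rest
    if h > 0:
        for rest in dyck_paths(n - 1, h - 1):
            yield ('D',) + rest

def weight(path):
    """Product, over the down steps, of the height each one starts from."""
    w, h = 1, 0
    for step in path:
        if step == 'U':
            h += 1
        else:
            w *= h
            h -= 1
    return w

def double_factorial(m):
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

# length 6: the five paths above, with weights summing to 6+4+2+2+1 = 15
assert len(list(dyck_paths(6))) == 5
assert sum(weight(p) for p in dyck_paths(6)) == double_factorial(5)
# the pattern persists at length 8: weighted count 7!! = 105
assert sum(weight(p) for p in dyck_paths(8)) == double_factorial(7)
```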

Theorem A. (Françon and Viennot) The weighted count of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> Dyck paths is equal to the double factorial of <semantics>2i1<annotation encoding="application/x-tex">2{i} -1</annotation></semantics>: <semantics> σD iw(σ) =(2i1)(2i3)(2i5)31 (2i1)!!.<annotation encoding="application/x-tex"> \begin{aligned} \sum_{\sigma\in D_{i}} w(\sigma)&= (2{i} -1)\cdot (2{i} -3)\cdot (2{i}-5)\cdot \cdots\cdot 3\cdot 1 \\ &\eqqcolon (2{i} -1)!!. \end{aligned} </annotation></semantics>

The following is a nice combinatorial proof of this theorem that I found in a survey paper by Callan. (I was only previously aware of a high-tech proof involving continued fractions and a theorem of Gauss.)

The first thing to note is that the weight of a Dyck path is actually counting something. It is counting the ways of labelling each of the down steps in the diagram by a positive integer less than or equal to the height (i.e. the weight) of that step. We call such a labelling a height labelling. Note that we have no choice of weighting, but we often have a choice of height labelling. Here’s a height labelled Dyck path.

height labelled Dyck path

So the weighted count of Dyck paths of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> is precisely the number of height labelled Dyck paths of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics>. <semantics> σD iw(σ)=#{height labelled paths of length 2i}<annotation encoding="application/x-tex"> \sum_{\sigma\in D_{i}} w(\sigma) = \#\{\text{height labelled paths of length }\,\,2{i}\} </annotation></semantics>

We are going to consider marked Dyck paths, which just means we single out a specific vertex. A path of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> has <semantics>2i+1<annotation encoding="application/x-tex">2{i} + 1</annotation></semantics> vertices. Thus

<semantics>#{height labelled, MARKED paths of length 2i} =(2i+1)×#{height labelled paths of length 2i}.<annotation encoding="application/x-tex"> \begin{aligned} \#\{\text{height labelled,}\,\, &\text{ MARKED paths of length }\,\,2{i}\}\\ &=(2{i}+1)\times\#\{\text{height labelled paths of length }\,\,2{i}\}. \end{aligned} </annotation></semantics>

Hence the theorem will follow by induction if we find a bijection

<semantics>{height labelled, paths of length 2i} {height labelled, MARKED paths of length 2i2}.<annotation encoding="application/x-tex"> \begin{aligned} \{\text{height labelled,}\,\,&\text{ paths of length }\,\,2{i} \}\\ &\cong \{\text{height labelled, MARKED paths of length }\,\,2{i}-2 \}. \end{aligned} </annotation></semantics>

Such a bijection can be constructed in the following way. Given a height labelled Dyck path, remove the left-hand step and the first step that has a label of one on it. On each down step between these two deleted steps decrease the label by one. Now join the two separated parts of the path together and mark the vertex at which they are joined. Here is an example of the process.

dyck bijection

Working backwards it is easy to describe the inverse map. And so the theorem is proved.
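The bijection in this proof is concrete enough to implement and test exhaustively for small lengths. In the following sketch (again my own code; the tuple encoding of labelled paths is an assumption, not from the post), phi removes the first step and the first down step labelled 1, decrements the labels in between, and records the join vertex as the mark.

```python
def labelled_dyck(n, h=0):
    """Yield height-labelled Dyck paths with n steps left from height h.
    Steps are ('U',) or ('D', label) with 1 <= label <= starting height."""
    if h > n:
        return
    if n == 0:
        yield ()
        return
    for rest in labelled_dyck(n - 1, h + 1):
        yield (('U',),) + rest
    if h > 0:
        for lab in range(1, h + 1):
            for rest in labelled_dyck(n - 1, h - 1):
                yield (('D', lab),) + rest

def phi(path):
    """Remove the first step and the first down step labelled 1,
    decrement the labels in between, and mark the join vertex."""
    j = next(k for k, s in enumerate(path) if s[0] == 'D' and s[1] == 1)
    middle = tuple(('D', s[1] - 1) if s[0] == 'D' else s for s in path[1:j])
    return (middle + path[j + 1:], len(middle))  # (shorter path, marked vertex)

i = 3
image = {phi(p) for p in labelled_dyck(2 * i)}
marked = {(p, m) for p in labelled_dyck(2 * i - 2) for m in range(2 * i - 1)}
assert image == marked  # phi is a bijection onto marked labelled paths
```

Note that a down step labelled 1 always exists: the final down step starts at height 1, so its label must be 1.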

Schröder paths and reverse Bessel polynomials

In order to give a path theoretic interpretation of reverse Bessel polynomials we will need to use Schröder paths. These are like Dyck paths except we allow a certain kind of flat step.

A Schröder path is a path in the lattice <semantics> 2<annotation encoding="application/x-tex">\mathbb{Z}^2</annotation></semantics> which starts at <semantics>(0,0)<annotation encoding="application/x-tex">(0,0)</annotation></semantics>, stays in the upper half plane, ends back on the <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics>-axis at <semantics>(2i,0)<annotation encoding="application/x-tex">(2{i},0)</annotation></semantics> and has steps going either diagonally right and up, diagonally right and down, or horizontally two units to the right. The integer <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> is called the length of the path. Let <semantics>S i<annotation encoding="application/x-tex">S_{i}</annotation></semantics> be the set of all length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> Schröder paths.

For each Schröder path <semantics>σ<annotation encoding="application/x-tex">\sigma</annotation></semantics>, we will weight each edge going right and down, from <semantics>(x,y)<annotation encoding="application/x-tex">(x,y)</annotation></semantics> to <semantics>(x+1,y1)<annotation encoding="application/x-tex">(x+1,y-1)</annotation></semantics> by <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics> and we will weight each flat edge by the indeterminate <semantics>R<annotation encoding="application/x-tex">R</annotation></semantics>. Then we will take <semantics>w(σ)<annotation encoding="application/x-tex">w(\sigma)</annotation></semantics>, the weight of <semantics>σ<annotation encoding="application/x-tex">\sigma</annotation></semantics>, to be the product of all the weights on its steps.

Here is the picture of all six length four weighted Schröder paths again.

Schroeder paths

You were asked at the top of this post to check that the sum of the weights equals the second reverse Bessel polynomial. Of course that result generalizes!

The following theorem was shown to me by Alan Sokal; he proved it using continued fraction methods, but these essentially amount to the combinatorial proof I’m about to give.

Theorem B. The weighted count of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> Schröder paths is equal to the <semantics>i<annotation encoding="application/x-tex">{i}</annotation></semantics>th reverse Bessel polynomial: <semantics> σS iw(σ)=θ i(R).<annotation encoding="application/x-tex"> \sum_{\sigma\in S_{i}} w(\sigma)= \theta_{i}(R). </annotation></semantics>

The idea is to observe that you can remove the flat steps from a weighted Schröder path to obtain a weighted Dyck path. If a Schröder path has length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> and <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> upward steps then it has <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> downward steps and <semantics>it<annotation encoding="application/x-tex">{i}-t</annotation></semantics> flat steps, so it has a total of <semantics>i+t<annotation encoding="application/x-tex">{i}+t</annotation></semantics> steps. This means that there are <semantics>(i+tit)<annotation encoding="application/x-tex">\binom{{i}+t}{{i}-t}</annotation></semantics> length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> Schröder paths with the same underlying length <semantics>2t<annotation encoding="application/x-tex">2t</annotation></semantics> Dyck path (we just choose where to insert the flat steps). Let’s write <semantics>S i t<annotation encoding="application/x-tex">S^t_{i}</annotation></semantics> for the set of Schröder paths of length <semantics>2i<annotation encoding="application/x-tex">2{i}</annotation></semantics> with <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> upward steps.
<semantics> σS iw(σ) = t=0 i σS i tw(σ)= t=0 i(i+tit) σD tw(σ)R it = t=0 i(i+tit)(2t1)!!R it = t=0 i(i+t)!(it)!(2t)!(2t)!2 tt!R it =θ i(R),<annotation encoding="application/x-tex"> \begin{aligned} \sum_{\sigma\in S_{i}} w(\sigma) &= \sum_{t=0}^{i} \sum_{\sigma\in S^t_{i}} w(\sigma) = \sum_{t=0}^{i} \binom{{i}+t}{{i}-t}\sum_{\sigma'\in D_t} w(\sigma')R^{{i}-t}\\ &= \sum_{t=0}^{i} \binom{{i}+t}{{i}-t}(2t-1)!!\,R^{{i}-t}\\ &= \sum_{t=0}^{i} \frac{({i}+t)!}{({i}-t)!\,(2t)!}\frac{(2t)!}{2^t t!}R^{{i}-t}\\ &= \theta_{i}(R), \end{aligned} </annotation></semantics> where the last equality comes from the formula for <semantics>θ i(R)<annotation encoding="application/x-tex">\theta_{i}(R)</annotation></semantics> given at the beginning of the post.

Thus we have the required combinatorial interpretation of the reverse Bessel polynomials.
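Theorem B can likewise be checked by brute force, tracking the weight of each path as a polynomial in R. The sketch below is plain Python of my own devising (function names and representations are mine, not from the post); reverse_bessel implements the explicit formula for the reverse Bessel polynomials used in the derivation above.

```python
from math import factorial
from collections import Counter

def schroder_paths(n, h=0):
    """Yield Schröder paths of horizontal length n starting at height h.
    'U' and 'D' steps have length 1, flat 'F' steps have length 2."""
    if h > n:
        return
    if n == 0:
        yield ()
        return
    for rest in schroder_paths(n - 1, h + 1):
        yield ('U',) + rest
    if h > 0:
        for rest in schroder_paths(n - 1, h - 1):
            yield ('D',) + rest
    if n >= 2:
        for rest in schroder_paths(n - 2, h):
            yield ('F',) + rest

def weighted_count(n):
    """Sum of path weights as a polynomial in R: {power of R: coefficient}."""
    poly = Counter()
    for p in schroder_paths(n):
        w, h, flats = 1, 0, 0
        for s in p:
            if s == 'U':
                h += 1
            elif s == 'D':
                w *= h  # down step from height h is weighted h
                h -= 1
            else:
                flats += 1  # each flat step contributes a factor of R
        poly[flats] += w
    return dict(poly)

def reverse_bessel(n):
    """Coefficients of the nth reverse Bessel polynomial from the
    explicit sum over t used in the derivation above."""
    return {n - t: factorial(n + t) // (factorial(n - t) * 2**t * factorial(t))
            for t in range(n + 1)}

# theta_2(R) = R^2 + 3R + 3, matching the six length-four paths
assert weighted_count(4) == reverse_bessel(2)
assert weighted_count(6) == reverse_bessel(3)
```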

Further questions

The first question that springs to mind for me is whether it is possible to give a bijective proof of Theorem B, similar in style, perhaps (or perhaps not), to the proof given of Theorem A, basically using the recursion relation <semantics>θ i+1(R)=R 2θ i1(R)+(2i+1)θ i(R)<annotation encoding="application/x-tex"> \theta_{i+1}(R)=R^2\theta_{i-1}(R) + (2i+1)\theta_{i}(R) </annotation></semantics> rather than the explicit formula for them.

The second question would be whether the differential equation <semantics>Rθ i (R)2(R+i)θ i (R)+2iθ i(R)=0<annotation encoding="application/x-tex"> R\theta_i^{\prime\prime}(R)-2(R+i)\theta_i^\prime(R)+2i\theta_i(R)=0 </annotation></semantics> has some sort of combinatorial interpretation in terms of paths.

I’m interested to hear if anyone has any thoughts.

by willerton at September 22, 2017 09:43 AM

The n-Category Cafe

Lattice Paths and Continued Fractions II

Last time we proved Flajolet’s Fundamental Lemma about enumerating Dyck paths. This time I want to give some examples, in particular to relate this to what I wrote previously about Dyck paths, Schröder paths and what they have to do with reverse Bessel polynomials.

We’ll see that the generating function of the sequence of reverse Bessel polynomials <semantics>(θ i(R)) i=0 <annotation encoding="application/x-tex">\left(\theta_i(R)\right)_{i=0}^\infty</annotation></semantics> has the following continued fraction expansion.

<semantics> i=0 θ i(R)t i=11Rtt1Rt2t1Rt3t1<annotation encoding="application/x-tex"> \sum_{i=0}^\infty \theta_i(R) \,t^i = \frac{1}{1-Rt- \frac{t}{1-Rt - \frac{2t}{1-Rt- \frac{3t}{1-\dots}}}} </annotation></semantics>

I’ll even give you a snippet of SageMath code so you can have a play around with this if you like.

Flajolet’s Fundamental Lemma

Let’s just recall from last time that if we take Motzkin paths weighted by <semantics>a i<annotation encoding="application/x-tex">a_i</annotation></semantics>s, <semantics>b i<annotation encoding="application/x-tex">b_i</annotation></semantics>s and <semantics>c i<annotation encoding="application/x-tex">c_i</annotation></semantics>s as in this example,

weighted Motzkin path

then when we sum the weightings of all Motzkin paths together we have the following continued fraction expression. <semantics> σMotzkinw a,b,c(σ)=11c 0a 1b 11c 1a 2b 21c 2a 3b 31[[a i,b i,c i]]<annotation encoding="application/x-tex"> \sum_{\sigma\,\,\mathrm{Motzkin}} w_{a,b,c}(\sigma) = \frac{1} {1- c_{0} - \frac{a_{1} b_{1}} {1-c_{1} - \frac{a_{2} b_{2}} {1- c_2 - \frac{a_3 b_3} {1-\dots }}}} \in\mathbb{Z}[[a_i, b_i, c_i]] </annotation></semantics>

Jacobi continued fractions and Motzkin paths

Flajolet’s Fundamental Lemma is very beautiful, but we want a power series going up in terms of path length. So let’s use another variable <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> to keep track of path length. All three types of step in a Motzkin path have length one. We can set <semantics>a i=α it<annotation encoding="application/x-tex">a_i=\alpha_i t</annotation></semantics>, <semantics>b i=β it<annotation encoding="application/x-tex">b_i=\beta_i t</annotation></semantics> and <semantics>c i=γ it<annotation encoding="application/x-tex">c_i=\gamma_i t</annotation></semantics>. Then <semantics> σw a,b,c(σ)[α i,β i,γ i][[t]]<annotation encoding="application/x-tex">\sum_{\sigma} w_{a, b, c}(\sigma)\in \mathbb{Z}[\alpha_i, \beta_i, \gamma_i][[t]]</annotation></semantics>, and the coefficient of <semantics>t <annotation encoding="application/x-tex">t^\ell</annotation></semantics> will be the sum of the weights of Motzkin paths of length <semantics><annotation encoding="application/x-tex">\ell</annotation></semantics>. This coefficient will be a polynomial (rather than a power series) as there are only finitely many paths of a given length.

<semantics> =0 ( σMotzkin lengthw α,β,γ(σ))t =11γ 0tα 1β 1t 21γ 1tα 2β 2t 21γ 2tα 3β 3t 21<annotation encoding="application/x-tex"> \sum_{\ell=0}^\infty\left(\sum_{\sigma\,\,\text{Motzkin length}\,\,\ell} w_{\alpha,\beta,\gamma}(\sigma)\right)t^\ell = \frac{1} {1- \gamma_{0}t - \frac{\alpha_{1}\beta_1 t^2} {1-\gamma_{1}t - \frac{\alpha_{2}\beta_2 t^2} {1- \gamma_2t - \frac{\alpha_3 \beta_3 t^2} {1-\dots }}}} </annotation></semantics>

Such a continued fraction is called a Jacobi (or J-type) continued fraction. They crop up in the study of moments of orthogonal polynomials and also in birth-death processes.

For example, I believe that Euler proved the following Jacobi continued fraction expansion of the generating function of the factorials. <semantics> =0 !t =11tt 213t4t 215t9t 21<annotation encoding="application/x-tex"> \sum_{\ell=0}^\infty \ell!\, t^\ell = \frac{1} {1- t - \frac{t^2} {1-3 t - \frac{4 t^2} {1- 5t - \frac{9 t^2} {1-\dots }}}} </annotation></semantics> We can get the right hand side by taking <semantics>α i=β i=i<annotation encoding="application/x-tex">\alpha_i=\beta_i=i</annotation></semantics> and <semantics>γ i=2i+1<annotation encoding="application/x-tex">\gamma_i=2i+1</annotation></semantics>. Here is a Motzkin path weighted in that way.

weighted Motzkin path

The equation above is telling us that if we weight Motzkin paths in that way, then the weighted count of Motzkin paths of length <semantics><annotation encoding="application/x-tex">\ell</annotation></semantics> is <semantics>!<annotation encoding="application/x-tex">\ell!</annotation></semantics>, and that deserves an exclamation mark! (You’re invited to verify this for Motzkin paths of length 4.)
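You can also accept that invitation by machine. The following plain-Python sketch (my code, not from the post) enumerates Motzkin paths with the weighting α_i = β_i = i and γ_i = 2i + 1 and checks that the weighted counts are factorials.

```python
from math import factorial

def motzkin_paths(n, h=0):
    """Yield Motzkin paths with n steps left from height h,
    as tuples over 'U' (up), 'D' (down) and 'F' (flat)."""
    if h > n:
        return
    if n == 0:
        yield ()
        return
    for rest in motzkin_paths(n - 1, h + 1):
        yield ('U',) + rest
    if h > 0:
        for rest in motzkin_paths(n - 1, h - 1):
            yield ('D',) + rest
    for rest in motzkin_paths(n - 1, h):
        yield ('F',) + rest

def weight(path):
    """Weighting with alpha_i = beta_i = i and gamma_i = 2i + 1."""
    w, h = 1, 0
    for s in path:
        if s == 'U':
            h += 1
            w *= h          # up step into level h gets alpha_h = h
        elif s == 'D':
            w *= h          # down step from level h gets beta_h = h
            h -= 1
        else:
            w *= 2 * h + 1  # flat step at level h gets gamma_h = 2h + 1
    return w

for ell in range(7):
    assert sum(weight(p) for p in motzkin_paths(ell)) == factorial(ell)
```

For length 4, say, the nine Motzkin paths have weights summing to 24.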

I’ve put some SageMath code at the bottom of this post if you want to check the continued fraction equality numerically.

Stieltjes continued fractions and Dyck paths

A Dyck path is a Motzkin path with no flat steps. So if we weight the flat steps in Motzkin paths with <semantics>0<annotation encoding="application/x-tex">0</annotation></semantics> then the weighted count just counts the weighted Dyck paths. This means setting <semantics>γ i=0<annotation encoding="application/x-tex">\gamma_i=0</annotation></semantics>.

Also the weight <semantics>α i<annotation encoding="application/x-tex">\alpha_i</annotation></semantics> on an up step always appears with the weight <semantics>β i<annotation encoding="application/x-tex">\beta_i</annotation></semantics> on a corresponding down step (what goes up must come down!) so we can simplify things by just putting a weighting <semantics>α iβ i<annotation encoding="application/x-tex">\alpha_i\beta_i</annotation></semantics> — which we’ll rename as <semantics>α i<annotation encoding="application/x-tex">\alpha_i</annotation></semantics> — on the down step from level <semantics>i<annotation encoding="application/x-tex">i</annotation></semantics> and put a weighting of <semantics>1<annotation encoding="application/x-tex">1</annotation></semantics> on each up step. We can call this weighting <semantics>w α<annotation encoding="application/x-tex">w_\alpha</annotation></semantics>.

Putting this together we get the following, where we’ve noted that there are no Dyck paths of odd length.

<semantics> n=0 ( σDyck length2nw α(σ))t 2n=11α 1t 21α 2t 21α 3t 21<annotation encoding="application/x-tex"> \sum_{n=0}^\infty\left(\sum_{\sigma\,\,\text{Dyck length}\,\,2n} w_\alpha(\sigma)\right)t^{2n} = \frac{1} {1- \frac{\alpha_{1} t^2} {1- \frac{\alpha_{2} t^2} {1- \frac{\alpha_3 t^2} {1-\dots }}}} </annotation></semantics>

This kind of continued fraction is called a Stieltjes (or S-type) continued fraction. Of course, we could replace <semantics>t 2<annotation encoding="application/x-tex">t^2</annotation></semantics> by <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> in the above, without any ill effect.

Previously we proved combinatorially that with the weighting where <semantics>α i=i<annotation encoding="application/x-tex">\alpha_i=i</annotation></semantics> the weighted count of Dyck paths of length <semantics>2n<annotation encoding="application/x-tex">2n</annotation></semantics> was precisely <semantics>(2n1)!!<annotation encoding="application/x-tex">(2n-1)!!</annotation></semantics>. This means that we have proved the following continued fraction expansion of the generating function of the odd double factorials.

<semantics> n=0 (2n1)!!t 2n=11t 212t 213t 21<annotation encoding="application/x-tex"> \sum_{n=0}^\infty (2n -1)!!\, t^{2n} = \frac{1} {1- \frac{ t^2} {1- \frac{2 t^2} {1- \frac{3 t^2} {1-\dots }}}} </annotation></semantics>

I believe this was originally proved by Gauss, but I have no idea how.
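If you want to check the expansion without Sage, truncated continued fractions can also be expanded with exact integer power-series arithmetic in plain Python. This is a sketch of my own (all the names are mine); truncating the fraction at depth d fixes the coefficients up to <semantics>t 2d<annotation encoding="application/x-tex">t^{2d}</annotation></semantics>.

```python
N = 12  # work with power series modulo t^N

def series_inv(a):
    """Inverse of a power series with a[0] == 1, modulo t^N."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def s_fraction(alphas):
    """Truncated Stieltjes fraction 1/(1 - alphas[0] t^2/(1 - ...))."""
    f = [0] * N
    f[0] = 1
    for a in reversed(alphas):
        denom = [0] * N
        denom[0] = 1
        for k in range(N - 2):
            denom[k + 2] -= a * f[k]  # subtract a * t^2 * f
        f = series_inv(denom)
    return f

def double_factorial(m):
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

coeffs = s_fraction(list(range(1, 10)))  # alpha_i = i, deep enough for t^10
assert all(coeffs[2 * n + 1] == 0 for n in range(N // 2))
assert all(coeffs[2 * n] == double_factorial(2 * n - 1) for n in range(6))
```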

Again there’s some SageMath code at the end for you to see this in action.

Thron continued fractions and Schröder paths

What I’m really interested in, you’ll remember, is reverse Bessel polynomials, and these give weighted counts of Schröder paths. Using continued fractions in this context is less standard than for Dyck paths and Motzkin paths as above, but it only requires a minor modification. I learnt about this from Alan Sokal.

The difference between Motzkin paths and Schröder paths is that the flat steps have length <semantics>2<annotation encoding="application/x-tex">2</annotation></semantics> in Schröder paths. Remember that the power of <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> was encoding the length, so we just have to assign <semantics>t 2<annotation encoding="application/x-tex">t^2</annotation></semantics> to each flat step rather than <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics>. So if we put <semantics>a i=t<annotation encoding="application/x-tex">a_i= t</annotation></semantics>, <semantics>b i=α it<annotation encoding="application/x-tex">b_i=\alpha_i t</annotation></semantics> and <semantics>c i=γ it 2<annotation encoding="application/x-tex">c_i = \gamma_i t^2</annotation></semantics> in Flajolet’s Fundamental Lemma then we get the following.

<semantics> n=0 ( σSchroder length2nw α,γ(σ))t 2n=11γ 0t 2α 1t 21γ 1t 2α 2t 21γ 2t 2α 3t 21<annotation encoding="application/x-tex"> \sum_{n=0}^\infty\left(\sum_{\sigma\,\,\text{Schroder length}\,\,2n} w_{\alpha,\gamma}(\sigma)\right)t^{2n} = \frac{1} {1- \gamma_{0}t^2 - \frac{\alpha_{1} t^2} {1-\gamma_{1}t^2 - \frac{\alpha_{2} t^2} {1- \gamma_2t^2 - \frac{\alpha_3 t^2} {1-\dots }}}} </annotation></semantics>

Here <semantics>w α,γ<annotation encoding="application/x-tex">w_{\alpha, \gamma}</annotation></semantics> is the weighting where we put <semantics>α i<annotation encoding="application/x-tex">\alpha_i</annotation></semantics>s on the down steps and <semantics>γ i<annotation encoding="application/x-tex">\gamma_i</annotation></semantics>s on the flat steps.

This kind of continued fraction is called a Thron (or T-type) continued fraction. Again, we could replace <semantics>t 2<annotation encoding="application/x-tex">t^2</annotation></semantics> by <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> in the above, without any ill effect.

We saw before that if we take the weighting, <semantics>w rBp<annotation encoding="application/x-tex">w_{rBp}</annotation></semantics>, with <semantics>α i:=i<annotation encoding="application/x-tex">\alpha_i:=i</annotation></semantics> and <semantics>γ i:=R<annotation encoding="application/x-tex">\gamma_i:=R</annotation></semantics>, such as in the following picture,

weighted Schröder path

then the weighted sum of Schröder paths of length <semantics>2n<annotation encoding="application/x-tex">2n</annotation></semantics> is precisely the <semantics>n<annotation encoding="application/x-tex">n</annotation></semantics>th reverse Bessel polynomial: <semantics>θ n(R)= σSchroder length2nw rBp(σ).<annotation encoding="application/x-tex"> \theta_n(R)= \sum_{\sigma\,\,\text{Schroder length}\,\,2n} w_{rBp}(\sigma). </annotation></semantics>

Putting that together with the Thron continued fraction above we get the following Thron continued fraction expansion for the generating function of the reverse Bessel polynomials.

<semantics> n=0 θ n(R)t n=11Rtt1Rt2t1Rt3t1<annotation encoding="application/x-tex"> \sum_{n=0}^\infty \theta_n(R) t^n = \frac{1}{1-Rt- \frac{t}{1-Rt - \frac{2t}{1-Rt- \frac{3t}{1-\dots}}}} </annotation></semantics>

This expression is given by Paul Barry, without any reference, in the formulas section of the entry in the Online Encyclopedia of Integer Sequences.

See the end of the post for some SageMath code to check this numerically.

In my recent magnitude paper I actually work backwards. I start with the continued fraction expansion as a given, and use Flajolet’s Fundamental Lemma to give the Schröder path interpretation of the reverse Bessel polynomials. Of course, I now know that I can bypass the use of continued fractions completely, and have a purely combinatorial proof of this interpretation. Regardless of that, however, the theory of lattice paths and continued fractions remains beautiful.

Appendix: Some SageMath code

It’s quite easy to play around with these continued fractions in SageMath, at least to some finite order. I thought I’d let you have some code to get you started…

Here’s some SageMath code for you to check the Jacobi continued fraction expansion of the generating function of the factorials.

# T = Z[t]
T.<t> = PolynomialRing(ZZ)
# We'll take the truncated continued fraction to be in the 
# ring of rational functions, P = Z(t)
P = Frac(T)

def j_ctd_frac(alphas, gammas):
    if alphas == [] or gammas == []:
        return 1
    return P(1/(1 - gammas[0]*t - alphas[0]*t^2*
                j_ctd_frac(alphas[1:], gammas[1:])))

cf(t) = j_ctd_frac([1, 4, 9, 16, 25, 36], [1, 3, 5, 7, 9, 11]) 
print cf(t).series(t, 10)

The above code can be used to define a Stieltjes continued fraction and check out the expansion of Gauss on the odd double factorials.

def s_ctd_frac(alphas):
    gammas = [0]*len(alphas)
    return j_ctd_frac(alphas, gammas)

cf(t) = s_ctd_frac([1, 2, 3, 4, 5, 6])
print cf(t).series(t, 13)

Here’s the code for getting the reverse Bessel polynomials from a Thron continued fraction.

S.<R> = PolynomialRing(ZZ)
T.<t> = PowerSeriesRing(S)

def t_ctd_frac(alphas, gammas):
    if alphas == [] or gammas == []:
        return 1
    return (1/(1 - gammas[0]*t^2 - alphas[0]*t^2*
               t_ctd_frac(alphas[1:], gammas[1:])))

print T(t_ctd_frac([1, 2, 3, 4, 5, 6], [R, R, R, R, R, R]))

by willerton at September 22, 2017 09:42 AM

September 21, 2017

The n-Category Cafe

Applied Category Theory 2018

We’re having a conference on applied category theory!

The plenary speakers will be:

  • Samson Abramsky (Oxford)
  • John Baez (UC Riverside)
  • Kathryn Hess (EPFL)
  • Mehrnoosh Sadrzadeh (Queen Mary)
  • David Spivak (MIT)

There will be a lot more to say as this progresses, but for now let me just quote from the conference website.

Applied Category Theory (ACT 2018) is a five-day workshop on applied category theory running from April 30 to May 4 at the Lorentz Center in Leiden, the Netherlands.

Towards an integrative science: in this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one scientific discipline can be reused in another. The aim of the workshop is to (1) explore the use of category theory within and across different disciplines, (2) create a more cohesive and collaborative ACT community, especially among early-stage researchers, and (3) accelerate research by outlining common goals and open problems for the field.

While the workshop will host talks on a wide range of applications of category theory, there will be four special tracks on exciting new developments in the field:

  1. Dynamical systems and networks
  2. Systems biology
  3. Cognition and AI
  4. Causality

Accompanying the workshop will be an Adjoint Research School for early-career researchers. This will comprise a 16-week online seminar, followed by a 4-day research meeting at the Lorentz Center in the week prior to ACT 2018. Applications to the school will open prior to October 1, and are due November 1. Admissions will be notified by November 15.

The organizers

Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford)

We welcome any feedback! Please send comments to this link.

About Applied Category Theory

Category theory is a branch of mathematics originally developed to transport ideas from one branch of mathematics to another, e.g. from topology to algebra. Applied category theory refers to efforts to transport the ideas of category theory from mathematics to other disciplines in science, engineering, and industry.

This site originated from discussions at the Computational Category Theory Workshop at NIST on Sept. 28-29, 2015. It serves to collect and disseminate research, resources, and tools for the development of applied category theory, and hosts a blog for those involved in its study.

The Proposal: Towards an Integrative Science

Category theory was developed in the 1940s to translate ideas from one field of mathematics, e.g. topology, to another field of mathematics, e.g. algebra. More recently, category theory has become an unexpectedly useful and economical tool for modeling a range of different disciplines, including programming language theory [10], quantum mechanics [2], systems biology [12], complex networks [5], database theory [7], and dynamical systems [14].

A category consists of a collection of objects together with a collection of maps between those objects, satisfying certain rules. Topologists and geometers use category theory to describe the passage from one mathematical structure to another, while category theorists are also interested in categories for their own sake. In computer science and physics, many types of categories (e.g. topoi or monoidal categories) are used to give a formal semantics of domain-specific phenomena (e.g. automata [3], or regular languages [11], or quantum protocols [2]). In the applied category theory community, a long-articulated vision understands categories as mathematical workspaces for the experimental sciences, similar to how they are used in topology and geometry [13]. This has proved true in certain fields, including computer science and mathematical physics, and we believe that these results can be extended in an exciting direction: we believe that category theory has the potential to bridge specific different fields, and moreover that developments in such fields (e.g. automata) can be transferred successfully into other fields (e.g. systems biology) through category theory. Already, for example, the categorical modeling of quantum processes has helped solve an important open problem in natural language processing [9].

In this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one discipline can be reused in another. Tangibly and in the short-term, we will bring together people from different disciplines in order to write an expository survey paper that grounds the varied research in applied category theory and lays out the parameters of the research program.

In formulating this research program, we are motivated by recent successes where category theory was used to model a wide range of phenomena across many disciplines, e.g. open dynamical systems (including open Markov processes and open chemical reaction networks), entropy and relative entropy [6], and descriptions of computer hardware [8]. Several talks will address some of these new developments. But we are also motivated by an open problem in applied category theory, one which was observed at the most recent workshop in applied category theory (Dagstuhl, Germany, in 2015): “a weakness of semantics/CT is that the definitions play a key role. Having the right definitions makes the theorems trivial, which is the opposite of hard subjects where they have combinatorial proofs of theorems (and simple definitions). […] In general, the audience agrees that people see category theorists only as reconstructing the things they knew already, and that is a disadvantage, because we do not give them a good reason to care enough” [1, pg. 61].

In this workshop, we wish to articulate a natural response to the above: instead of treating the reconstruction as a weakness, we should treat the use of categorical concepts as a natural part of transferring and integrating knowledge across disciplines. The restructuring employed in applied category theory cuts through jargon, helping to elucidate common themes across disciplines. Indeed, the drive for a common language and comparison of similar structures in algebra and topology is what led to the development of category theory in the first place, and recent hints show that this approach is not only useful between mathematical disciplines, but between scientific ones as well. For example, the ‘Rosetta Stone’ of Baez and Stay demonstrates how symmetric monoidal closed categories capture the common structure between logic, computation, and physics [4].

[1] Samson Abramsky, John C. Baez, Fabio Gadducci, and Viktor Winschel. Categorical methods at the crossroads. Report from Dagstuhl Perspectives Workshop 14182, 2014.

[2] Samson Abramsky and Bob Coecke. A categorical semantics of quantum protocols. In Handbook of Quantum Logic and Quantum Structures. Elsevier, Amsterdam, 2009.

[3] Michael A. Arbib and Ernest G. Manes. A categorist’s view of automata and systems. In Ernest G. Manes, editor, Category Theory Applied to Computation and Control. Springer, Berlin, 2005.

[4] John C. Baez and Mike Stay. Physics, topology, logic and computation: a Rosetta stone. In Bob Coecke, editor, New Structures for Physics. Springer, Berlin, 2011.

[5] John C. Baez and Brendan Fong. A compositional framework for passive linear networks. arXiv e-prints, 2015.

[6] John C. Baez, Tobias Fritz, and Tom Leinster. A characterization of entropy in terms of information loss. Entropy, 13(11):1945-1957, 2011.

[7] Michael Fleming, Ryan Gunther, and Robert Rosebrugh. A database of categories. Journal of Symbolic Computing, 35(2):127-135, 2003.

[8] Dan R. Ghica and Achim Jung. Categorical semantics of digital circuits. In Ruzica Piskac and Muralidhar Talupur, editors, Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer, Berlin, 2016.

[9] Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, Stephen Pulman, and Bob Coecke. Reasoning about meaning in natural language with compact closed categories and Frobenius algebras. In Logic and Algebraic Structures in Quantum Computing and Information. Cambridge University Press, Cambridge, 2013.

[10] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55-92, 1991.

[11] Nicholas Pippenger. Regular languages and Stone duality. Theory of Computing Systems 30(2):121-134, 1997.

[12] Robert Rosen. The representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20(4):317-341, 1958.

[13] David I. Spivak. Category Theory for Scientists. MIT Press, Cambridge MA, 2014.

[14] David I. Spivak, Christina Vasilakopoulou, and Patrick Schultz. Dynamical systems and sheaves. arXiv e-prints, 2016.

by john ( at September 21, 2017 11:06 PM

John Baez - Azimuth

Applied Category Theory at UCR (Part 2)

I’m running a special session on applied category theory, and now the program is available:

Applied category theory, Fall Western Sectional Meeting of the AMS, 4-5 November 2017, U.C. Riverside.

This is going to be fun.

My former student Brendan Fong is now working with David Spivak at MIT, and they’re both coming. My collaborator John Foley at Metron is also coming: we’re working on the CASCADE project for designing networked systems.

Dmitry Vagner is coming from Duke: he wrote a paper with David and Eugene Lerman on operads and open dynamical systems. Christina Vasilakopoulou, who has worked with David and Patrick Schultz on dynamical systems, has just joined our group at UCR, so she will also be here. And the three of them have worked with Ryan Wisnesky on algebraic databases. Ryan will not be here, but his colleague Peter Gates will: together with David they have a startup called Categorical Informatics, which uses category theory to build sophisticated databases.

That’s not everyone—for example, most of my students will be speaking at this special session, and other people too—but that gives you a rough sense of some people involved. The conference is on a weekend, but John Foley and David Spivak and Brendan Fong and Dmitry Vagner are staying on for longer, so we’ll have some long conversations… and Brendan will explain decorated corelations in my Tuesday afternoon network theory seminar.

Here’s the program. Click on talk titles to see abstracts. For a multi-author talk, the person with the asterisk after their name is doing the talking. All the talks will be in Room 268 of the Highlander Union Building or ‘HUB’.

Saturday November 4, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
A higher-order temporal logic for dynamical systems.
David I. Spivak, MIT

10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams.
Dmitry Vagner*, Duke University
David I. Spivak, MIT
Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.
Abstract dynamical systems.
Christina Vasilakopoulou*, University of California, Riverside
David Spivak, MIT
Patrick Schultz, MIT

Saturday November 4, 2017, 3:00 p.m.-5:50 p.m.

3:00 p.m.
Black boxes and decorated corelations.
Brendan Fong, MIT

4:00 p.m.
Compositional modelling of open reaction networks.
Blake S. Pollard*, University of California, Riverside
John C. Baez, University of California, Riverside

4:30 p.m.
A bicategory of coarse-grained Markov processes.
Kenny Courser, University of California, Riverside

5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics.
Daniel M. Cicala, University of California, Riverside

5:30 p.m.
Open systems in classical mechanics.
Adam Yassine, University of California, Riverside

Sunday November 5, 2017, 9:00 a.m.-10:50 a.m.

9:00 a.m.
Controllability and observability: diagrams and duality.
Jason Erbele, Victor Valley College

9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations.
Brandon Coya, University of California, Riverside

10:00 a.m.
Compositional design and tasking of networks.
John D. Foley*, Metron, Inc.
John C. Baez, University of California, Riverside
Joseph Moeller, University of California, Riverside
Blake S. Pollard, University of California, Riverside

10:30 a.m.
Operads for modeling networks.
Joseph Moeller*, University of California, Riverside
John Foley, Metron Inc.
John C. Baez, University of California, Riverside
Blake S. Pollard, University of California, Riverside

Sunday November 5, 2017, 2:00 p.m.-4:50 p.m.

2:00 p.m.
Reeb graph smoothing via cosheaves.
Vin de Silva, Department of Mathematics, Pomona College

3:00 p.m.
Knowledge representation in bicategories of relations.
Evan Patterson*, Stanford University, Statistics Department

3:30 p.m.
The multiresolution analysis of flow graphs.
Steve Huntsman*, BAE Systems

4:00 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL).
Peter Y. Gates*, Categorical Informatics
Ryan Wisnesky, Categorical Informatics

by John Baez at September 21, 2017 09:19 PM

Symmetrybreaking - Fermilab/SLAC

Concrete applications for accelerator science

A project called A2D2 will explore new applications for compact linear accelerators.

Tom Kroc, Matteo Quagliotto and Mike Geelhoed set up a sample beneath the A2D2 accelerator to test the electron beam.

Particle accelerators are the engines of particle physics research at Fermi National Accelerator Laboratory. They generate nearly light-speed, subatomic particles that scientists study to get to the bottom of what makes our universe tick. Fermilab experiments rely on a number of different accelerators, including a powerful, 500-foot-long linear accelerator that kick-starts the process of sending particle beams to various destinations.

But if you’re not doing physics research, what’s an accelerator good for?

It turns out, quite a lot: Electron beams generated by linear accelerators have all kinds of practical uses, such as making the wires used in cars melt-resistant or purifying water.

A project called Accelerator Application Development and Demonstration (A2D2) at Fermilab’s Illinois Accelerator Research Center will help Fermilab and its partners to explore new applications for compact linear accelerators, which are only a few feet long rather than a few hundred. These compact accelerators are of special interest because of their small size—they’re cheaper and more practical to build in an industrial setting than particle physics research accelerators—and they can be more powerful than ever.

“A2D2 has two aspects: One is to investigate new applications of how electron beams might be used to change, modify or process different materials,” says Fermilab’s Tom Kroc, an A2D2 physicist. “The second is to contribute a little more to the understanding of how these processes happen.”

To develop these aspects of accelerator applications, A2D2 will employ a compact linear accelerator that was once used in a hospital to treat tumors with electron beams. With a few upgrades to increase its power, the A2D2 accelerator will be ready to embark on a new venture: exploring and benchmarking other possible uses of electron beams, which will help specify the design of a new, industrial-grade, high-power machine under development by IARC and its partners.

It won’t be just Fermilab scientists using the A2D2 accelerator: As part of IARC, the accelerator will be available for use (typically through a formal CRADA or SPP agreement) by anyone who has a novel idea for electron beam applications. IARC’s purpose is to partner with industry to explore ways to translate basic research and tools, including accelerator research, into commercial applications.

“I already have a lot of people from industry asking me, ‘When can I use A2D2?’” says Charlie Cooper, general manager of IARC. “A2D2 will allow us to directly contribute to industrial applications—it’s something concrete that IARC now offers.”

Speaking of concrete, one of the first applications in mind for compact linear accelerators is creating durable pavement for roads that won’t crack in the cold or spread out in the heat. This could be achieved by replacing traditional asphalt with a material that could be strengthened using an accelerator. The extra strength would come from crosslinking, a process that creates bonds between layers of material, almost like applying glue between sheets of paper. A single sheet of paper tears easily, but when two or more layers are linked by glue, the paper becomes stronger.

“Using accelerators, you could have pavement that lasts longer, is tougher and has a bigger temperature range,” says Bob Kephart, director of IARC. Kephart holds two patents for the process of curing cement through crosslinking. “Basically, you’d put the road down like you do right now, and you’d pass an accelerator over it, and suddenly you’d turn it into really tough stuff—like the bed liner in the back of your pickup truck.”

This process has already caught the eye of the U.S. Army Corps of Engineers, which will be one of A2D2’s first partners. Another partner will be the Chicago Metropolitan Water Reclamation District, which will test the utility of compact accelerators for water purification. Many other potential customers are lining up to use the A2D2 technology platform.

“You can basically drive chemical reactions with electron beams—and in many cases those can be more efficient than conventional technology, so there are a variety of applications,” Kephart says. “Usually what you have to do is make a batch of something and heat it up in order for a reaction to occur. An electron beam can make a reaction happen by breaking a bond with a single electron.”

In other words, instead of having to cook a material for a long time to reach a specific heat that would induce a chemical reaction, you could zap it with an electron beam to get the same effect in a fraction of the time.

In addition to exploring the new electron-beam applications with the A2D2 accelerator, scientists and engineers at IARC are using cutting-edge accelerator technology to design and build a new kind of portable, compact accelerator, one that will take applications uncovered with A2D2 out of the lab and into the field. The A2D2 accelerator is already small compared to most accelerators, but the latest R&D allows IARC experts to shrink the size while increasing the power of their proposed accelerator even further.

“The new, compact accelerator that we’re developing will be high-power and high-energy for industry,” Cooper says. “This will enable some things that weren’t possible in the past. For something such as environmental cleanup, you could take the accelerator directly to the site.”

While the IARC team develops this portable accelerator, which should be able to fit on a standard trailer, the A2D2 accelerator will continue to be a place to experiment with how to use electron beams—and study what happens when you do.

“The point of this facility is more development than research, however there will be some research on irradiated samples,” says Fermilab’s Mike Geelhoed, one of the A2D2 project leads. “We’re all excited—at least I am. We and our partners have been anticipating this machine for some time now. We all want to see how well it can perform.”

Editor's note: This article was originally published by Fermilab.

by Leah Poffenberger at September 21, 2017 05:18 PM

September 19, 2017

Symmetrybreaking - Fermilab/SLAC

50 years of stories

To celebrate a half-century of discovery, Fermilab has been gathering tales of life at the lab.

People discussing Fermilab history

Science stories usually catch the eye when there’s big news: the discovery of gravitational waves, the appearance of a new particle. But behind the blockbusters are the thousands of smaller stories of science behind the scenes and daily life at a research institution. 

As the Department of Energy’s Fermi National Accelerator Laboratory celebrates its 50th anniversary year, employees past and present have shared memories of building a lab dedicated to particle physics.

Some shared personal memories: keeping an accelerator running during a massive snowstorm; being too impatient for the arrival of an important piece of detector equipment to stay put and wait for it to arrive; accidentally complaining about the lab to the lab’s director.

Others focused on milestones and accomplishments: the first daycare at a national lab, the Saturday Morning Physics Program built by Nobel laureate Leon Lederman, the birth of the web at Fermilab.

People shared memories of big names that built the lab: charismatic founding director Robert R. Wilson, fiery head of accelerator development Helen Edwards, talented lab artist Angela Gonzales.

And of course, employees told stories about Fermilab’s resident herd of bison.

There are many more stories to peruse. You can watch a playlist of the video anecdotes or find all of the stories (both written and video) collected on Fermilab’s 50th anniversary website.

by Lauren Biron at September 19, 2017 01:00 PM

September 15, 2017

Symmetrybreaking - Fermilab/SLAC

SENSEI searches for light dark matter

Technology proposed 30 years ago to search for dark matter is finally seeing the light.

Two scientists in hard hats stand next to a cart holding detector components.

In a project called SENSEI, scientists are using innovative sensors developed over three decades to look for the lightest dark matter particles anyone has ever tried to detect.

Dark matter—so named because it doesn’t absorb, reflect or emit light—constitutes 27 percent of the universe, but the jury is still out on what it’s made of. The primary theoretical suspect for the main component of dark matter is a particle scientists have descriptively named the weakly interacting massive particle, or WIMP.

But since none of these heavy particles, which are expected to have a mass 100 times that of a proton, have shown up in experiments, it might be time for researchers to think small.

“There is a growing interest in looking for different kinds of dark matter that are additives to the standard WIMP model,” says Fermi National Accelerator Laboratory scientist Javier Tiffenberg, a leader of the SENSEI collaboration. “Lightweight, or low-mass, dark matter is a very compelling possibility, and for the first time, the technology is there to explore these candidates.”

Sensing the unseen

In traditional dark matter experiments, scientists look for a transfer of energy that would occur if dark matter particles collided with an ordinary nucleus. But SENSEI is different; it looks for direct interactions of dark matter particles colliding with electrons.

“That is a big difference—you get a lot more energy transferred in this case because an electron is so light compared to a nucleus,” Tiffenberg says.

If dark matter had low mass—much smaller than the WIMP model suggests—then it would be many times lighter than an atomic nucleus. So if it were to collide with a nucleus, the resulting energy transfer would be far too small to tell us anything. It would be like throwing a ping-pong ball at a boulder: The heavy object wouldn’t go anywhere, and there would be no sign the two had come into contact.

An electron is nowhere near as heavy as an atomic nucleus. In fact, a single proton has about 1836 times more mass than an electron. So the collision of a low-mass dark matter particle with an electron has a much better chance of leaving a mark—it’s more bowling ball than boulder.
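The boulder-versus-bowling-ball picture can be made quantitative with the standard elastic-collision result: the maximum fraction of kinetic energy a projectile of mass m can hand to a target of mass M is 4mM/(m+M)². A rough sketch in Python, assuming a hypothetical 1 MeV/c² dark matter particle (the masses here are illustrative, not SENSEI parameters):

```python
def max_energy_transfer_fraction(m_dm, m_target):
    """Maximum fraction of kinetic energy transferred in a
    non-relativistic elastic collision: 4*m*M / (m + M)**2."""
    return 4 * m_dm * m_target / (m_dm + m_target) ** 2

M_E = 0.511          # electron mass, MeV/c^2
M_SI = 28 * 931.494  # silicon-28 nucleus, MeV/c^2 (approximate)
m_dm = 1.0           # hypothetical 1 MeV/c^2 dark matter particle

f_e = max_energy_transfer_fraction(m_dm, M_E)
f_si = max_energy_transfer_fraction(m_dm, M_SI)
print(f"to an electron:   {f_e:.2f}")
print(f"to a Si nucleus:  {f_si:.1e}")
```

For the electron, the particle can deposit most of its energy; for the silicon nucleus, the transfer is suppressed by roughly four orders of magnitude—the ping-pong ball bouncing off the boulder.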

Bowling balls aren't exactly light, though. An energy transfer between a low-mass dark matter particle and an electron would leave only a blip of energy, one either too small for most detectors to pick up or easily overshadowed by noise in the data.

“The bowling ball will move a very tiny amount,” says Fermilab scientist Juan Estrada, a SENSEI collaborator. “You need a very precise detector to see this interaction of lightweight particles with something that is much heavier.”

That’s where SENSEI’s sensitive sensors come in.

SENSEI will use skipper charge-coupled devices, also called skipper CCDs. CCDs have been used for other dark matter detection experiments, such as the Dark Matter in CCDs (or DAMIC) experiment operating at SNOLAB in Canada. These CCDs were a spinoff from sensors developed for use in the Dark Energy Camera in Chile and other dark energy search projects.

CCDs are typically made of silicon divided into pixels. When a dark matter particle passes through the CCD, it collides with the silicon’s electrons, knocking them free, leaving a net electric charge in each pixel the particle passes through. The electrons then flow through adjacent pixels and are ultimately read as a current in a device that measures the number of electrons freed from each CCD pixel. That measurement tells scientists about the mass and energy of the particle that got the chain reaction going. A massive particle, like a WIMP, would free a gusher of electrons, but a low-mass particle might free only one or two.

Typical CCDs can measure the charge left behind only once, which makes it difficult to decide if a tiny energy signal from one or two electrons is real or an error.

Skipper CCDs are a new generation of the technology that helps eliminate the “iffiness” of a measurement that has a one- or two-electron margin of error. “The big step forward for the skipper CCD is that we are able to measure this charge as many times as we want,” Tiffenberg says.

The charge left behind in the skipper CCD can be sampled multiple times and then averaged, a method that yields a more precise measurement of the charge deposited in each pixel than the measure-one-and-done technique. That’s the rule of statistics: With more data, you get closer to a property’s true value.

SENSEI scientists take advantage of the skipper CCD architecture, measuring the number of electrons in a single pixel a whopping 4000 times.
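The statistical payoff of repeated sampling can be sketched directly: averaging N independent reads shrinks the readout noise by a factor of √N, so 4000 skipper samples cut it by roughly 63. A toy simulation in Python (the single-read noise of 2 electrons is an illustrative assumption, not a SENSEI specification):

```python
import random
import statistics

def read_pixel(true_charge, sigma=2.0, n_samples=1):
    """Simulate reading one pixel's charge n_samples times with
    Gaussian readout noise, then averaging (skipper-style)."""
    reads = [random.gauss(true_charge, sigma) for _ in range(n_samples)]
    return statistics.mean(reads)

random.seed(0)
single = read_pixel(1.0, n_samples=1)       # one read: 1 e- is buried in noise
averaged = read_pixel(1.0, n_samples=4000)  # noise shrinks by sqrt(4000) ~ 63x
print(f"one read:  {single:+.2f} e-")
print(f"averaged:  {averaged:+.3f} e-")
```

With 2 e⁻ of noise, a single read of a one-electron signal is meaningless; after averaging 4000 reads the residual noise is about 0.03 e⁻, enough to count individual electrons.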

“This is a simple idea, but it took us 30 years to get it to work,” Estrada says.

From idea to reality to beyond

A small SENSEI prototype is currently running at Fermilab in a detector hall 385 feet below ground, and it has demonstrated that this detector design will work in the hunt for dark matter.

Skipper CCD technology and SENSEI were brought to life by Laboratory Directed Research and Development (LDRD) funds at Fermilab and Lawrence Berkeley National Laboratory (Berkeley Lab). LDRD programs are intended to provide funding for development of novel, cutting-edge ideas for scientific discovery.

The Fermilab LDRDs were awarded only recently—less than two years ago—but close collaboration between the two laboratories has already yielded SENSEI’s promising design, thanks in part to Berkeley Lab’s previous work in skipper CCD design.

Fermilab LDRD funds allow researchers to test the sensors and develop detectors based on the science, and the Berkeley Lab LDRD funds support the sensor design, which was originally proposed by Berkeley Lab scientist Steve Holland.

“It is the combination of the two LDRDs that really make SENSEI possible,” Estrada says.

Future SENSEI research will also receive a boost thanks to a recent grant from the Heising-Simons Foundation.

“SENSEI is very cool, but what’s really impressive is that the skipper CCD will allow the SENSEI science and a lot of other applications,” Estrada says. “Astronomical studies are limited by the sensitivity of their experimental measurements, and having sensors without noise is the equivalent of making your telescope bigger—more sensitive.”

SENSEI technology may also be critical in the hunt for a fourth type of neutrino, called the sterile neutrino, which seems to be even more shy than its three notoriously elusive neutrino family members.

A larger SENSEI detector equipped with more skipper CCDs will be deployed within the year. It might not detect anything, sending researchers back to the drawing board in the hunt for dark matter. Or SENSEI might finally make contact with dark matter—and that would be SENSEI-tional.

Editor's note: This article is based on an article published by Fermilab.

by Leah Poffenberger at September 15, 2017 07:00 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Thinking about space and time in Switzerland

This week, I spent a very enjoyable few days in Bern, Switzerland, attending the conference ‘Thinking about Space and Time: 100 Years of Applying and Interpreting General Relativity’. Organised by Claus Beisbart, Tilman Sauer and Christian Wüthrich, the workshop took place at the Faculty of Philosophy at the University of Bern, and focused on the early reception of Einstein’s general theory of relativity and the difficult philosophical questions raised by the theory. The conference website can be found here and the conference programme is here .


The University of Bern, Switzerland

Of course, such studies also have a historical aspect, and I particularly enjoyed talks by noted scholars in the history and philosophy of 20th century science such as Chris Smeenk (‘Status of the Expanding Universe Models’), John Norton (‘The Error that Showed the Way; Einstein’s Path to the Field Equations’), Dennis Lehmkuhl (‘The Interpretation of Vacuum Solutions in Einstein’s Field Equations’), Daniel Kennefick (‘A History of Gravitational Wave Emission’) and Galina Weinstein (‘The Two-Body Problem in General Relativity as a Heuristic Guide to the Einstein-Rosen Bridge and the EPR Argument’). Other highlights were a review of the problem of dark energy (something I’m working on myself at the moment) by astrophysicist Ruth Durrer and back-to-back talks on the so-called black-hole information paradox from physicist Sabine Hossenfelder and philosopher Carina Prunkl. There were also plenty of talks on general relativity such as Claus Kiefer’s recall of the problems raised at the famous 1955 Bern conference (GR0), and a really interesting talk on Noether’s theorems by Valeriya Chasova.


Walking to the conference through the old city early yesterday morning


Dr Valeriya Chasova giving a talk on Noether’s theorems

My own talk, ‘Historical and Philosophical Aspects of Einstein’s 1917 Model of the Universe’, took place on the first day; the slides are here. (It’s based on our recent review of the Einstein World, which has just appeared in EPJH.) As for the philosophy talks, I don’t share the disdain some physicists have for philosophers. It seems to me that philosophy has a big role to play in understanding what we think we have discovered about space and time, not least in articulating the big questions clearly. After all, Einstein himself had great interest in the works of philosophers, from Ernst Mach to Hans Reichenbach, and there is little question that modern philosophers such as Harvey Brown have made important contributions to relativity studies. Of course, some philosophers are harder to follow than others, but this is also true of mathematical talks on relativity!

The conference finished with a tour of the famous Einstein Haus in Bern. It’s strange walking around the apartment Einstein lived in with Mileva all those years ago; it has been preserved extremely well. The tour included a very nice talk by Professor Hans Ott, President of the Albert Einstein Society, on AE’s work at the patent office, his 3 great breakthroughs of 1905, and his rise from obscurity to stardom in the years 1905-1909.

Einstein’s old apartment in Bern, a historic site maintained by the Albert Einstein Society

All in all, my favourite sort of conference. A small number of speakers and participants, with plenty of time for Q&A after each talk. I also liked the way the talks took place in a lecture room in the University of Bern, a pleasant walk from the centre of town through the old part of the city (not some bland hotel miles from anywhere). This afternoon, I’m off to visit the University of Zurich and the ETH, and then it’s homeward bound.


I had a very nice day being shown around ETH Zurich, where Einstein studied as a student


Imagine taking a mountain lift from the centre of town to lectures!

by cormac at September 15, 2017 09:41 AM

September 12, 2017

Symmetrybreaking - Fermilab/SLAC

Clearing a path to the stars

Astronomers are at the forefront of the fight against light pollution, which can obscure our view of the cosmos.


More than a mile up in the San Gabriel Mountains in Los Angeles County sits the Mount Wilson Observatory, once one of the cornerstones of groundbreaking astronomy. 

Founded in 1904, it was twice home to the largest telescope on the planet, first with its 60-inch telescope in 1908, followed by its 100-inch telescope in 1917. In 1929, Edwin Hubble revolutionized our understanding of the universe when, observing from Mt. Wilson, he discovered that it was expanding. 

But a problem was radiating from below. As the city of Los Angeles grew, so did the reach and brightness of its skyglow, otherwise known as light pollution. The city light overpowered the photons coming from faint, distant objects, making deep-sky cosmology all but impossible. In 1983, the Carnegies, who had owned the observatory since its inception, abandoned Mt. Wilson to build telescopes in Chile instead.

“They decided that if they were going to do greater, more detailed and groundbreaking science in astronomy, they would have to move to a dark place in the world,” says Tom Meneghini, the observatory’s executive director. “They took their money and ran.” 

(Meneghini harbors no hard feelings: “I would have made the same decision,” he says.)

Beyond being a problem for astronomers, light pollution is also known to harm and kill wildlife, waste energy and cause disease in humans around the globe. For their part, astronomers have worked to convince local governments to adopt better lighting ordinances, including requiring the installation of fixtures that prevent light from seeping into the sky. 

Artwork by Corinne Mucha

Many towns and cities are already reexamining their lighting systems as the industry standard shifts from sodium lights to light-emitting diodes, or LEDs, which last longer and use far less energy, providing both cost-saving and environmental benefits. But not all LEDs are created equal. Different bulbs emit different colors, which correspond to different temperatures. The higher the temperature, the bluer the color. 

The creation of energy-efficient blue LEDs was so profound that its inventors were awarded the 2014 Nobel Prize in Physics. But that blue light turns out to be particularly detrimental to astronomers, for the same reason that the daytime sky is blue: Blue light scatters more than any other color. (Blue lights have also been found to be more harmful to human health than more warmly colored, amber LEDs. In 2016, the American Medical Association issued guidance to minimize blue-rich light, stating that it disrupts circadian rhythms and leads to sleep problems, impaired functioning and other issues.)
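The "blue scatters more" point follows from Rayleigh scattering, whose intensity goes roughly as 1/λ⁴: halve the wavelength and the scattering grows sixteenfold. A quick back-of-envelope calculation in Python (the LED peak wavelengths are illustrative assumptions):

```python
def rayleigh_ratio(lambda_short_nm, lambda_long_nm):
    """Relative Rayleigh scattering intensity of two wavelengths.
    Scattering goes as 1/lambda**4, so shorter wavelengths scatter more."""
    return (lambda_long_nm / lambda_short_nm) ** 4

BLUE = 450.0   # nm, typical blue-rich LED peak (illustrative)
AMBER = 590.0  # nm, typical amber LED peak (illustrative)

ratio = rayleigh_ratio(BLUE, AMBER)
print(f"blue light scatters ~{ratio:.1f}x more than amber")
```

On these numbers, blue light scatters about three times as strongly as amber, which is why blue-rich streetlights brighten the sky over an observatory far more than warmly colored ones of the same output.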

The effort to darken the skies has expanded to include a focus on LEDs, as well as an attempt to get ahead of the next industry trend. 

At a January workshop at the annual American Astronomical Society (AAS) meeting, astronomer John Barentine sought to share stories of towns and cities that had successfully battled light pollution. Barentine is a program manager for the International Dark-Sky Association (IDA), a nonprofit founded in 1988 to combat light pollution. He pointed to the city of Phoenix, Arizona. 

Arizona is a leader in reducing light pollution. The state is home to four of the 10 IDA-recognized “Dark Sky Communities” in the United States. “You can stand in the middle of downtown Flagstaff and see the Milky Way,” says James Lowenthal, an astronomy professor at Smith College.

But it’s not immune to light pollution. Arizona’s Grand Canyon National Park is designated by the IDA as an International Dark Sky Park, and yet, on a clear night, Barentine says, the horizon is stained by the glow of Las Vegas 170 miles away.

Artwork by Corinne Mucha

In 2015, Phoenix began testing the replacement of some of its 100,000 or so old streetlights with LEDs, which the city estimated would save $2.8 million a year in energy bills. But they were using high-temperature blue LEDs, which would have bathed the city in a harsh white light. 

Through grassroots work, the local IDA chapter delayed the installation for six months, giving the council time to brush up on light pollution and hear astronomers’ concerns. In the end, the city went beyond IDA’s “best expectations,” Barentine says, opting for lights that burn at a temperature well under IDA’s maximum recommendations. 

“All the way around, it was a success to have an outcome arguably influenced by this really small group of people, maybe 10 people in a city of 2 million,” he says. “People at the workshop found that inspiring.”

Just getting ordinances on the books does not necessarily solve the problem, though. Despite enacting similar ordinances to Phoenix, the city of Northampton, Massachusetts, does not have enough building inspectors to enforce them. “We have this great law, but developers just put their lights in the wrong way and nobody does anything about it,” Lowenthal says. 

For many cities, a major part of the challenge of combating light pollution is simply convincing people that it is a problem. This is particularly tricky for kids who have never seen a clear night sky bursting with bright stars and streaked by the glow of the Milky Way, says Connie Walker, a scientist at the National Optical Astronomy Observatory who is also on the board of the IDA. “It’s hard to teach somebody who doesn’t know what they’ve lost,” Walker says.

Walker is focused on making light pollution an innate concern of the next generation, the way campaigns in the 1950s made littering unacceptable to a previous generation of kids. 

In addition to creating interactive light-pollution kits for children, the NOAO operates a citizen-science initiative called Globe at Night, which allows anyone to take measurements of brightness in their area and upload them to a database. To date, Globe at Night has collected more than 160,000 observations from 180 countries. 

It’s already produced success stories. In Norman, Oklahoma, for example, a group of high school students, with the assistance of amateur astronomers, used Globe at Night to map light pollution in their town. They took the data to the city council. Within two years, the town had passed stricter lighting ordinances. 

“Light pollution is foremost on our minds because our observatories are at risk,” Walker says. “We should really be concentrating on the next generation.”

by Laura Dattaro at September 12, 2017 01:00 PM

John Baez - Azimuth

Applied Category Theory 2018

There will be a conference on applied category theory!

Applied Category Theory (ACT 2018). School 23–27 April 2018 and workshop 30 April–4 May 2018 at the Lorentz Center in Leiden, the Netherlands. Organized by Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford).

The plenary speakers will be:

• Samson Abramsky (Oxford)
• John Baez (UC Riverside)
• Kathryn Hess (EPFL)
• Mehrnoosh Sadrzadeh (Queen Mary)
• David Spivak (MIT)

There will be a lot more to say as this progresses, but for now let me just quote from the conference website:

Applied Category Theory (ACT 2018) is a five-day workshop on applied category theory running from April 30 to May 4 at the Lorentz Center in Leiden, the Netherlands.

Towards an Integrative Science: in this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one scientific discipline can be reused in another. The aim of the workshop is to (1) explore the use of category theory within and across different disciplines, (2) create a more cohesive and collaborative ACT community, especially among early-stage researchers, and (3) accelerate research by outlining common goals and open problems for the field.

While the workshop will host discussions on a wide range of applications of category theory, there will be four special tracks on exciting new developments in the field:

1. Dynamical systems and networks
2. Systems biology
3. Cognition and AI
4. Causality

Accompanying the workshop will be an Adjoint Research School for early-career researchers. This will comprise a 16-week online seminar, followed by a 4-day research meeting at the Lorentz Center in the week prior to ACT 2018. Applications to the school will open prior to October 1, and are due November 1. Applicants will be notified by November 15.

The organizers

Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford)

We welcome any feedback! Please send comments to this link.

About Applied Category Theory

Category theory is a branch of mathematics originally developed to transport ideas from one branch of mathematics to another, e.g. from topology to algebra. Applied category theory refers to efforts to transport the ideas of category theory from mathematics to other disciplines in science, engineering, and industry.

This site originated from discussions at the Computational Category Theory Workshop at NIST on Sept. 28-29, 2015. It serves to collect and disseminate research, resources, and tools for the development of applied category theory, and hosts a blog for those involved in its study.

The proposal: Towards an Integrative Science

Category theory was developed in the 1940s to translate ideas from one field of mathematics, e.g. topology, to another field of mathematics, e.g. algebra. More recently, category theory has become an unexpectedly useful and economical tool for modeling a range of different disciplines, including programming language theory [10], quantum mechanics [2], systems biology [12], complex networks [5], database theory [7], and dynamical systems [14].

A category consists of a collection of objects together with a collection of maps between those objects, satisfying certain rules. Topologists and geometers use category theory to describe the passage from one mathematical structure to another, while category theorists are also interested in categories for their own sake. In computer science and physics, many types of categories (e.g. topoi or monoidal categories) are used to give a formal semantics of domain-specific phenomena (e.g. automata [3], or regular languages [11], or quantum protocols [2]). In the applied category theory community, a long-articulated vision understands categories as mathematical workspaces for the experimental sciences, similar to how they are used in topology and geometry [13]. This has proved true in certain fields, including computer science and mathematical physics, and we believe that these results can be extended in an exciting direction: we believe that category theory has the potential to bridge specific different fields, and moreover that developments in such fields (e.g. automata) can be transferred successfully into other fields (e.g. systems biology) through category theory. Already, for example, the categorical modeling of quantum processes has helped solve an important open problem in natural language processing [9].
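As a concrete (and purely illustrative, not from the proposal) rendering of the definition above, here is a minimal Python sketch in which morphisms are functions, composition is function composition, and the two category laws, identity and associativity, are checked pointwise:

```python
# Hypothetical sketch: Python functions as morphisms, function
# composition as the categorical composite. The "rules" a category
# must satisfy are the identity and associativity laws below.

def compose(g, f):
    """Composite g after f: apply f first, then g."""
    return lambda x: g(f(x))

def identity(x):
    return x

# Two example morphisms between "objects" (here, plain Python types).
double = lambda n: 2 * n          # int -> int
to_str = lambda n: str(n)         # int -> str

# Identity law: composing with identity changes nothing (checked on a sample).
assert compose(identity, double)(21) == double(21) == compose(double, identity)(21)

# Associativity law: h.(g.f) == (h.g).f (checked on a sample).
h, g, f = to_str, double, double
assert compose(h, compose(g, f))(5) == compose(compose(h, g), f)(5) == "20"
```

Topologists' functors between categories obey analogous laws; this sketch only shows the one-category case.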

In this workshop, we want to instigate a multi-disciplinary research program in which concepts, structures, and methods from one discipline can be reused in another. Tangibly and in the short-term, we will bring together people from different disciplines in order to write an expository survey paper that grounds the varied research in applied category theory and lays out the parameters of the research program.

In formulating this research program, we are motivated by recent successes where category theory was used to model a wide range of phenomena across many disciplines, e.g. open dynamical systems (including open Markov processes and open chemical reaction networks), entropy and relative entropy [6], and descriptions of computer hardware [8]. Several talks will address some of these new developments. But we are also motivated by an open problem in applied category theory, one which was observed at the most recent workshop in applied category theory (Dagstuhl, Germany, in 2015): “a weakness of semantics/CT is that the definitions play a key role. Having the right definitions makes the theorems trivial, which is the opposite of hard subjects where they have combinatorial proofs of theorems (and simple definitions). […] In general, the audience agrees that people see category theorists only as reconstructing the things they knew already, and that is a disadvantage, because we do not give them a good reason to care enough” [1, pg. 61].

In this workshop, we wish to articulate a natural response to the above: instead of treating the reconstruction as a weakness, we should treat the use of categorical concepts as a natural part of transferring and integrating knowledge across disciplines. The restructuring employed in applied category theory cuts through jargon, helping to elucidate common themes across disciplines. Indeed, the drive for a common language and comparison of similar structures in algebra and topology is what led to the development of category theory in the first place, and recent hints show that this approach is not only useful between mathematical disciplines, but between scientific ones as well. For example, the ‘Rosetta Stone’ of Baez and Stay demonstrates how symmetric monoidal closed categories capture the common structure between logic, computation, and physics [4].

[1] Samson Abramsky, John C. Baez, Fabio Gadducci, and Viktor Winschel. Categorical methods at the crossroads. Report from Dagstuhl Perspectives Workshop 14182, 2014.

[2] Samson Abramsky and Bob Coecke. A categorical semantics of quantum protocols. In Handbook of Quantum Logic and Quantum Structures. Elsevier, Amsterdam, 2009.

[3] Michael A. Arbib and Ernest G. Manes. A categorist’s view of automata and systems. In Ernest G. Manes, editor, Category Theory Applied to Computation and Control. Springer, Berlin, 2005.

[4] John C. Baez and Mike Stay. Physics, topology, logic and computation: a Rosetta stone. In Bob Coecke, editor, New Structures for Physics. Springer, Berlin, 2011.

[5] John C. Baez and Brendan Fong. A compositional framework for passive linear networks. arXiv e-prints, 2015.

[6] John C. Baez, Tobias Fritz, and Tom Leinster. A characterization of entropy in terms of information loss. Entropy, 13(11):1945–1957, 2011.

[7] Michael Fleming, Ryan Gunther, and Robert Rosebrugh. A database of categories. Journal of Symbolic Computation, 35(2):127–135, 2003.

[8] Dan R. Ghica and Achim Jung. Categorical semantics of digital circuits. In Ruzica Piskac and Muralidhar Talupur, editors, Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer, Berlin, 2016.

[9] Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, Stephen Pulman, and Bob Coecke. Reasoning about meaning in natural language with compact closed categories and Frobenius algebras. In Logic and Algebraic Structures in Quantum Computing and Information. Cambridge University Press, Cambridge, 2013.

[10] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93(1):55–92, 1991.

[11] Nicholas Pippenger. Regular languages and Stone duality. Theory of Computing Systems, 30(2):121–134, 1997.

[12] Robert Rosen. The representation of biological systems from the standpoint of the theory of categories. Bulletin of Mathematical Biophysics, 20(4):317–341, 1958.

[13] David I. Spivak. Category Theory for Scientists. MIT Press, Cambridge MA, 2014.

[14] David I. Spivak, Christina Vasilakopoulou, and Patrick Schultz. Dynamical systems and sheaves. arXiv e-prints, 2016.

by John Baez at September 12, 2017 05:32 AM

September 08, 2017

Sean Carroll - Preposterous Universe

Joe Polchinski’s Memories, and a Mark Wise Movie

Joe Polchinski, a universally-admired theoretical physicist at the Kavli Institute for Theoretical Physics in Santa Barbara, recently posted a 150-page writeup of his memories of doing research over the years.

Memories of a Theoretical Physicist
Joseph Polchinski

While I was dealing with a brain injury and finding it difficult to work, two friends (Derek Westen, a friend of the KITP, and Steve Shenker, with whom I was recently collaborating), suggested that a new direction might be good. Steve in particular regarded me as a good writer and suggested that I try that. I quickly took to Steve’s suggestion. Having only two bodies of knowledge, myself and physics, I decided to write an autobiography about my development as a theoretical physicist. This is not written for any particular audience, but just to give myself a goal. It will probably have too much physics for a nontechnical reader, and too little for a physicist, but perhaps there will be different things for each. Parts may be tedious. But it is somewhat unique, I think, a blow-by-blow history of where I started and where I got to. Probably the target audience is theoretical physicists, especially young ones, who may enjoy comparing my struggles with their own. Some disclaimers: This is based on my own memories, jogged by the arXiv and Inspire. There will surely be errors and omissions. And note the title: this is about my memories, which will be different for other people. Also, it would not be possible for me to mention all the authors whose work might intersect mine, so this should not be treated as a reference work.

As the piece explains, it’s a bittersweet project, as it was brought about by Joe struggling with a serious illness and finding it difficult to do physics. We all hope he fully recovers and gets back to leading the field in creative directions.

I had the pleasure of spending three years down the hall from Joe when I was a postdoc at the ITP (it didn’t have the “K” at that time). You’ll see my name pop up briefly in his article, sadly in the context of an amusing anecdote rather than an exciting piece of research, since I stupidly spent three years in Santa Barbara without collaborating with any of the brilliant minds on the faculty there. Not sure exactly what I was thinking.

Joe is of course a world-leading theoretical physicist, and his memories give you an idea why, while at the same time being very honest about setbacks and frustrations. His style has never been to jump on a topic while it was hot, but to think deeply about fundamental issues and look for connections others have missed. This approach led him to such breakthroughs as a new understanding of the renormalization group, the discovery of D-branes in string theory, and the possibility of firewalls in black holes. It’s not necessarily a method that would work for everyone, especially because it doesn’t necessarily lead to a lot of papers being written at a young age. (Others who somehow made this style work for them, and somehow survived, include Ken Wilson and Alan Guth.) But the purity and integrity of Joe’s approach to doing science is an example for all of us.

Somehow over the course of 150 pages Joe neglected to mention perhaps his greatest triumph, as a three-time guest blogger (one, two, three). Too modest, I imagine.

His memories make for truly compelling reading, at least for physicists — he’s an excellent stylist and pedagogue, but the intended audience is people who have already heard about the renormalization group. This kind of thoughtful but informal recollection is an invaluable resource, as you get to see not only the polished final product of a physics paper, but the twists and turns of how it came to be, especially the motivations underlying why the scientist chose to think about things one way rather than some other way.

(Idea: there is a wonderful online magazine called The Players’ Tribune, which gives athletes an opportunity to write articles expressing their views and experiences, e.g. the raw feelings after you are traded. It would be great to have something like that for scientists, or for academics more broadly, to write about the experiences [good and bad] of doing research. Young people in the field would find it invaluable, and non-scientists could learn a lot about how science really works.)

You also get to read about many of the interesting friends and colleagues of Joe’s over the years. A prominent one is my current Caltech colleague Mark Wise, a leading physicist in his own right (and someone I was smart enough to collaborate with — with age comes wisdom, or at least more wisdom than you used to have). Joe and Mark got to know each other as postdocs, and have remained friends ever since. When it came time for a scientific gathering to celebrate Joe’s 60th birthday, Mark contributed a home-made movie showing (in inimitable style) how much progress he had made over the years in the activities they had enjoyed together in their relative youth. And now, for the first time, that movie is available to the general public. It’s seven minutes long, but don’t make the mistake of skipping the blooper reel that accompanies the end credits. Many thanks to Kim Boddy, the former Caltech student who directed and produced this lost masterpiece.

When it came time for his own 60th, Mark being Mark he didn’t want the usual conference, and decided instead to gather physicist friends from over the years and take them to a local ice rink for a bout of curling. (Canadian heritage showing through.) Joe being Joe, this was an invitation he couldn’t resist, and we had a grand old time, free of any truly serious injuries.

We don’t often say it out loud, but one of the special privileges of being in this field is getting to know brilliant and wonderful people, and interacting with them over periods of many years. I owe Joe a lot — even if I wasn’t smart enough to collaborate with him when he was down the hall, I learned an enormous amount from his example, and often wonder how he would think about this or that issue in physics.


by Sean Carroll at September 08, 2017 06:18 PM

Symmetrybreaking - Fermilab/SLAC

Detectors in the dirt

A humidity and temperature monitor developed for CMS finds a new home in Lebanon.

A technician from the Optosmart company examines the field in the Bekaa valley in Lebanon.

People who tend crops in Lebanon and people who tend particle detectors on the border of France and Switzerland have a need in common: large-scale humidity and temperature monitoring. A scientist who noticed this connection is working with farmers to try to use a particle physics solution to solve an agricultural problem.

Farmers, especially those in dry areas found in the Middle East, need to produce as much food as possible without using too much water. Scientists on experiments at the Large Hadron Collider want to track the health of their detectors—a sudden change in humidity or temperature can indicate a problem.

To monitor humidity and temperature in their detector, members of the CMS experiment at the LHC developed a fiber-optic system. Fiber optics are wires made from glass that can carry light. Etching small mirrors into the core of a fiber creates a “Bragg grating,” a system that either lets light through or reflects it back, based on its wavelength and the distance between the mirrors.

“Temperature will naturally have an impact on the distance between the mirrors because of the contraction and dilation of the material,” says Martin Gastal, a member of the CMS collaboration at the LHC. “By default, a Bragg grating sensor is a temperature sensor.”
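To make the quoted mechanism concrete, here is a back-of-envelope sketch (with assumed textbook values for a silica fiber, not CMS specifications) of the standard Bragg condition and its temperature sensitivity:

```python
# Illustrative only: a fiber Bragg grating reflects light at the Bragg
# wavelength  lambda_B = 2 * n_eff * Lambda,  where n_eff is the fiber's
# effective refractive index and Lambda is the grating period (the mirror
# spacing). Heating the fiber changes Lambda (thermal expansion) and
# n_eff (thermo-optic effect), shifting the reflected wavelength.

n_eff = 1.45          # effective refractive index of a silica fiber (assumed)
period_nm = 535.0     # grating period in nm (assumed)

lambda_B = 2 * n_eff * period_nm
print(f"Bragg wavelength: {lambda_B:.1f} nm")   # 1551.5 nm, in the telecom band

# A typical temperature sensitivity near 1550 nm is about 10 pm/K (assumed),
# so a 25 K rise shifts the reflection peak by roughly a quarter nanometer.
shift_nm = 10e-3 * 25
print(f"Shift for a 25 K rise: {shift_nm:.2f} nm")
```

Reading off that shift is what turns the grating into a thermometer; the humidity-sensitive coating described next works by mechanically stretching the same period.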

Scientists at the University of Sannio and INFN Naples developed a material for the CMS experiment that could turn the temperature sensors into humidity monitors as well. The material expands when it comes into contact with water, and the expansion pulls the mirrors apart. The sensors were tested by a team from the Experimental Physics Department at CERN.

In December 2015, Lebanon signed an International Cooperation Agreement with CERN, and the Lebanese University joined CMS. As Professor Haitham Zaraket, a theoretical physicist at the Lebanese University and member of the CMS experiment, recalls, they picked fiber optic monitoring from a list of CMS projects for one of their engineers to work on. Martin then approached them about the possibility of applying the technology elsewhere.

With Lebanon’s water resources under increasing pressure from a growing population and agricultural needs, irrigation control seemed like a natural application. “Agriculture consumes quite a high amount of water, of fresh water, and this is the target of this project,” says Ihab Jomaa, the Department Head of Irrigation and Agrometeorology at the Lebanese Agricultural Research Institute. “We are trying to raise what we call in agriculture lately ‘water productivity.’”

The first step after formally establishing the Fiber Optic Sensor Systems for Irrigation (FOSS4I) collaboration was to make sure that the sensors could work at all in Lebanon’s clay-heavy soil. The Lebanese University shipped 10 kilograms of soil from Lebanon to Naples, where collaborators at University of Sannio adjusted the sensor design to increase the measurement range.

During phase one, which lasted from March to June, 40 of the sensors were used to monitor a small field in Lebanon. It was found that, contrary to the laboratory findings, they could not in practice sense the full range of soil moisture content that they needed to. Based on this feedback, “we are working on a new concept which is not just a simple modification of the initial architecture,” Haitham says. The new design concept is to use fiber optics to monitor an absorbing material planted in the soil rather than having a material wrapped around the fiber.

“We are reinventing the concept,” he says. “This should take some time and hopefully at the end of it we will be able to go for field tests again.” At the same time, they are incorporating parts of phase three, looking for soil parameters such as pesticide or chemicals inside the soil or other bacterial effects.

If the new concept is successfully validated, the collaboration will move on to testing more fields and more crops. Research and development always involves setbacks, but the FOSS4I collaboration has taken this one as an opportunity to pivot to a potentially even more powerful technology.

by Jameson O'Reilly at September 08, 2017 04:40 PM

John Baez - Azimuth

Postdoc in Applied Category Theory

guest post by Spencer Breiner

One Year Postdoc Position at Carnegie Mellon/NIST

We are seeking an early-career researcher with a background in category theory, functional programming and/or electrical engineering for a one-year post-doctoral position supported by an Early-concept Grant (EAGER) from the NSF’s Systems Science program. The position will be managed through Carnegie Mellon University (PI: Eswaran Subrahmanian), but the position itself will be located at the US National Institute for Standards and Technology (NIST), located in Gaithersburg, Maryland outside of Washington, DC.

The project aims to develop a compositional semantics for electrical networks which is suitable for system prediction, analysis and control. This work will extend existing methods for linear circuits (featured on this blog!) to include (i) probabilistic estimates of future consumption and (ii) top-down incentives for load management. We will model a multi-layered system of such “distributed energy resources” including loads and generators (e.g., solar array vs. power plant), different types of resource aggregation (e.g., apartment to apartment building), and across several time scales. We hope to demonstrate that such a system can balance local load and generation in order to minimize expected instability at higher levels of the electrical grid.

This post is available full-time (40 hours/5 days per week) for 12 months, and can begin as early as October 1st.

For more information on this position, please contact Dr. Eswaran Subrahmanian or Dr. Spencer Breiner.

by John Baez at September 08, 2017 05:46 AM

September 07, 2017

Matt Strassler - Of Particular Significance

Watch for Auroras

Those of you who remember my post on how to keep track of opportunities to see northern (and southern) lights will be impressed by the image below.

The latest space weather overview plot

The top plot shows the number of X-rays (high-energy photons [particles of light]) coming from the sun, and that huge spike in the middle of the plot indicates a very powerful solar flare occurred about 24 hours ago.  It should take about 2 days from the time of the flare for its other effects — the cloud of electrically-charged particles expelled from the Sun’s atmosphere — to arrive at Earth.  The electrically-charged particles are what generate the auroras, when they are directed by Earth’s magnetic field to enter the Earth’s atmosphere near the Earth’s magnetic poles, where they crash into atoms in the upper atmosphere, exciting them and causing them to radiate visible light.
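As a sanity check on that two-day figure (my own rough numbers, not taken from the plot), the implied average speed of the particle cloud is easy to estimate:

```python
# Back-of-envelope check: if the ejected cloud of charged particles
# covers the Sun-Earth distance in about 2 days, its average speed is
# roughly 900 km/s, which is fast but typical for a strong coronal
# mass ejection. All values here are assumed round numbers.

AU_m = 1.496e11                 # mean Sun-Earth distance in meters
travel_s = 2 * 24 * 3600        # two days in seconds

v_kms = AU_m / travel_s / 1000
print(f"Implied average speed: {v_kms:.0f} km/s")
```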

The flare was very powerful, but its cloud of particles didn’t head straight for Earth.  We might get only a glancing blow.  So we don’t know how big an effect to expect here on our planet.  All we can do for now is be hopeful, and wait.

In any case, auroras borealis and australis are possible in the next day or so.  Watch for the middle plot to go haywire, and for the bars in the lower plot to jump higher; then you know the time has arrived.

Filed under: Astronomy Tagged: astronomy, auroras

by Matt Strassler at September 07, 2017 04:51 PM

September 05, 2017

Symmetrybreaking - Fermilab/SLAC

What can particles tell us about the cosmos?

The minuscule and the immense can reveal quite a bit about each other.

Header: Particle astro

In particle physics, scientists study the properties of the smallest bits of matter and how they interact. Another branch of physics—astrophysics—creates and tests theories about what’s happening across our vast universe.

While particle physics and astrophysics appear to focus on opposite ends of a spectrum, scientists in the two fields actually depend on one another. Several current lines of inquiry link the very large to the very small.

The seeds of cosmic structure

For one, particle physicists and astrophysicists both ask questions about the growth of the early universe. 

In her office at Stanford University, Eva Silverstein explains her work parsing the mathematical details of the fastest period of that growth, called cosmic inflation. 

“To me, the subject is particularly interesting because you can understand the origin of structure in the universe,” says Silverstein, a professor of physics at Stanford and the Kavli Institute for Particle Astrophysics and Cosmology. “This paradigm known as inflation accounts for the origin of structure in the most simple and beautiful way a physicist can imagine.” 

Scientists think that after the Big Bang, the universe cooled, and particles began to combine into hydrogen atoms. This process released previously trapped photons—elementary particles of light. 

The glow from that light, called the cosmic microwave background, lingers in the sky today. Scientists measure different characteristics of the cosmic microwave background to learn more about what happened in those first moments after the Big Bang.

According to scientists’ models, a pattern that first formed on the subatomic level eventually became the underpinning of the structure of the entire universe. Places that were dense with subatomic particles—or even just virtual fluctuations of subatomic particles—attracted more and more matter. As the universe grew, these areas of density became the locations where galaxies and galaxy clusters formed. The very small grew up to be the very large.

Scientists studying the cosmic microwave background hope to learn about more than just how the universe grew—it could also offer insight into dark matter, dark energy and the mass of the neutrino.

“It’s amazing that we can probe what was going on almost 14 billion years ago,” Silverstein says. “We can’t learn everything that was going on, but we can still learn an incredible amount about the contents and interactions.”

For many scientists, “the urge to trace the history of the universe back to its beginnings is irresistible,” wrote theoretical physicist Stephen Weinberg in his 1977 book The First Three Minutes. The Nobel laureate added, “From the start of modern science in the sixteenth and seventeenth centuries, physicists and astronomers have returned again and again to the problem of the origin of the universe.”

Searching in the dark

Particle physicists and astrophysicists both think about dark matter and dark energy. Astrophysicists want to know what made up the early universe and what makes up our universe today. Particle physicists want to know whether there are undiscovered particles and forces out there for the finding.

“Dark matter makes up most of the matter in the universe, yet no known particles in the Standard Model [of particle physics] have the properties that it should possess,” says Michael Peskin, a professor of theoretical physics at SLAC. “Dark matter should be very weakly interacting, heavy or slow-moving, and stable over the lifetime of the universe.”

There is strong evidence for dark matter through its gravitational effects on ordinary matter in galaxies and clusters. These observations indicate that the universe is made up of roughly 5 percent normal matter, 25 percent dark matter and 70 percent dark energy. But to date, scientists have not directly observed dark energy or dark matter.

“This is really the biggest embarrassment for particle physics,” Peskin says. “However much atomic matter we see in the universe, there’s five times more dark matter, and we have no idea what it is.” 

But scientists have powerful tools to try to understand some of these unknowns. Over the past several years, the number of models of dark matter has been expanding, along with the number of ways to detect it, says Tom Rizzo, a senior scientist at SLAC and head of the theory group.

Some experiments search for direct evidence of a dark matter particle colliding with a matter particle in a detector. Others look for indirect evidence of dark matter particles interfering in other processes or hiding in the cosmic microwave background. If dark matter has the right properties, scientists could potentially create it in a particle accelerator such as the Large Hadron Collider.

Physicists are also actively hunting for signs of dark energy. It is possible to measure the properties of dark energy by observing the motion of clusters of galaxies at the largest distances that we can see in the universe.

“Every time that we learn a new technique to observe the universe, we typically get lots of surprises,” says Marcelle Soares-Santos, a Brandeis University professor and a researcher on the Dark Energy Survey. “And we can capitalize on these new ways of observing the universe to learn more about cosmology and other sides of physics.”

Inline: Particle astro
Artwork by Ana Kova

Forces at play

Particle physicists and astrophysicists find their interests also align in the study of gravity. For particle physicists, gravity is the one basic force of nature that the Standard Model does not quite explain. Astrophysicists want to understand the important role gravity played and continues to play in the formation of the universe.

In the Standard Model, each force has what’s called a force-carrier particle or a boson. Electromagnetism has photons. The strong force has gluons. The weak force has W and Z bosons. When particles interact through a force, they exchange these force-carriers, transferring small amounts of information called quanta, which scientists describe through quantum mechanics. 

General relativity explains how the gravitational force works on large scales: Earth pulls on our own bodies, and planetary objects pull on each other. But it is not understood how gravity is transmitted by quantum particles. 

Discovering a subatomic force-carrier particle for gravity would help explain how gravity works on small scales and inform a quantum theory of gravity that would connect general relativity and quantum mechanics. 

Compared to the other fundamental forces, gravity interacts with matter very weakly, but the strength of the interaction quickly becomes larger with higher energies. Theorists predict that at high enough energies, such as those seen in the early universe, quantum gravity effects are as strong as the other forces. Gravity played an essential role in transferring the small-scale pattern of the cosmic microwave background into the large-scale pattern of our universe today.

“Another way that these effects can become important for gravity is if there’s some process that lasts a long time,” Silverstein says. “Even if the energies aren’t as high as they would need to be to be sensitive to effects like quantum gravity instantaneously.” 

Physicists are modeling gravity over lengthy time scales in an effort to reveal these effects.

Our understanding of gravity is also key in the search for dark matter. Some scientists think that dark matter does not actually exist; they say the evidence we’ve found so far is actually just a sign that we don’t fully understand the force of gravity.  

Big ideas, tiny details

Learning more about gravity could tell us about the dark universe, which could also reveal new insight into how structure in the universe first formed. 

Scientists are trying to “close the loop” between particle physics and the early universe, Peskin says. As scientists probe space and go back further in time, they can learn more about the rules that govern physics at high energies, which also tells us something about the smallest components of our world.

Artwork for this article is available as a printable poster.

by Amanda Solliday at September 05, 2017 02:18 PM

September 03, 2017

Jon Butterworth - Life and Physics

August 30, 2017

Symmetrybreaking - Fermilab/SLAC

Neural networks meet space

Artificial intelligence analyzes gravitational lenses 10 million times faster.

Neurons and Einstein ring

Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks—a form of artificial intelligence—can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” says postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published today in Nature.

Lightning-fast complex analysis

The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.
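For a sense of the angular scales involved, here is an illustrative calculation (with assumed round numbers for the lens mass and distances, not values from the study) of the Einstein radius, the angular size of the ring into which a perfectly aligned background galaxy is smeared:

```python
import math

# Illustrative only: the angular scale of a strong lens is set by the
# Einstein radius,
#     theta_E = sqrt( (4*G*M/c^2) * D_ls / (D_l * D_s) )
# for lens mass M, observer-lens distance D_l, observer-source distance
# D_s, and lens-source distance D_ls. All inputs below are assumptions.

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                      # speed of light, m/s
M_sun = 1.989e30                 # solar mass, kg
Gpc = 3.086e25                   # gigaparsec, m

M = 1e14 * M_sun                             # a galaxy-cluster-scale lens (assumed)
D_l, D_s, D_ls = 1 * Gpc, 2 * Gpc, 1 * Gpc   # assumed distances

theta_E = math.sqrt((4 * G * M / c**2) * D_ls / (D_l * D_s))
arcsec = math.degrees(theta_E) * 3600
print(f"Einstein radius: {arcsec:.1f} arcsec")   # about 20 arcsec
```

Arcs and rings of this size are what the neural networks are trained to measure.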

Until now this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. This can take weeks to months for a single lens.

But with the neural networks, the researchers were able to do the same analysis in a few seconds, which they demonstrated using real images from NASA’s Hubble Space Telescope and simulated ones.

To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously with a precision that was comparable to traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.

KIPAC researchers used images of strongly lensed galaxies taken with the Hubble Space Telescope to test the performance of neural networks, which promise to speed up complex astrophysical analyses tremendously.

Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA

Prepared for the data floods of the future

“The neural networks we tested—three publicly available neural nets and one that we developed ourselves—were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” says the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.

The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe—and produce more data—than ever before.

The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.

“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” Perreault Levasseur says. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”

Scheme of an artificial neural network, with individual computational units organized into hundreds of layers. Each layer searches for certain features in the input image (at left). The last layer provides the result of the analysis. The researchers used particular kinds of neural networks, called convolutional neural networks, in which individual computational units (neurons, gray spheres) of each layer are also organized into 2-D slabs that bundle information about the original image into larger computational units.

Greg Stewart, SLAC National Accelerator Laboratory

A revolutionary approach

Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.

In the artificial version, the “neurons” are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.

“The amazing thing is that neural networks learn by themselves what features to look for,” says KIPAC staff scientist Phil Marshall, a co-author of the paper. “This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”

But in this case, Hezaveh says, “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”

Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones.

“Neural nets have been applied to astrophysical problems in the past with mixed outcomes,” says KIPAC faculty member Roger Blandford, who was not a co-author on the paper. “But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.”    

Editor's note: This article originally appeared as a SLAC press release.

by Manuel Gnida at August 30, 2017 05:08 PM




Last updated:
October 21, 2017 11:50 AM
All times are UTC.