Particle Physics Planet

March 23, 2019

Peter Coles - In the Dark

Put it to the People!

Well, that was a very enjoyable and informative couple of days in London celebrating the 60th Birthday of Alan Heavens, but my trip to London is not yet over. Before going to Heathrow Airport for the flight back to Dublin this evening, I am taking part in a demonstration in Central London demanding a referendum as a last chance to avert the calamity of Brexit, halt Britain’s descent into nationalistic xenophobia, and prevent the social and economic harm being done by the ongoing madness. I have a feeling that Theresa May’s toxic speech on Wednesday evening in which she blamed everyone but herself for the mess that she has created will have galvanized many more than me into action.

I’m not sure whether this march – even if it is huge – will make much difference or even that it will be properly reported in the media, but one has to do something. Despite the short delay to the Brexit date agreed by the EU, I still think the most likely outcome of this shambles is that the UK leaves without a proper withdrawal agreement and thus begins a new life as a pariah state run by incompetent deadheads who know nothing other than the empty slogans that they regurgitate instead of answering real questions.

The only sensible response to the present impasse is to `Put it to the People’, but there is no time to organize a new referendum – a proper one, informed by facts as we now know them and without the wholesale unlawful behavior of the Leave campaign in the last one. I dismiss entirely any argument that a new referendum would be undemocratic in any way. Only those terminally gripped by Brexit insanity would argue that voting can be anti-democratic, especially since there is strong evidence from opinion polls that having seen the mess the Government has created a clear majority wishes to remain. If there isn’t time for a new referendum before the deadline – and further extensions by the EU are unlikely – then the best plan is to revoke the Article 50 notification to stop the clock.

I know I’m not alone in thinking this. An official petition demanding the Government revoke Article 50 has passed 4,000,000 signatures in just a few days. I’ve signed it and encourage you to do likewise, which you can do here.

And if you’re tempted to agree with the Prime Minister’s claim that people are just tired of Brexit and just want it to be over, then please bear in mind that the Withdrawal Agreement – which has taken two years to get nowhere – is only the start of the process. The UK is set for years of further negotiations on the terms of its future relationship not only with the European Union but also all the other agreements that will be terminated by the UK’s self-imposed isolation.

If Brexit does go ahead, which I’m afraid I think will be the case, then my participation in today’s march will not have been a waste – it seems a fitting way to say goodbye to the land of my birth, a country to which I no longer belong.

Anyway, I may be able to add a few pictures of the march in due course but, until then, here is an excerpt from Private Eye that made me laugh.

by telescoper at March 23, 2019 10:01 AM

March 22, 2019

Christian P. Robert - xi'an's og

Lies sleeping [book review]

This is the seventh book in the Rivers of London series by Ben Aaronovitch, which I had been expecting for a long time, having avoided teasers like The Furthest Station, which appears primarily to be a way to capitalise on readers’ impatience. And maybe due to this long wait, or simply fatigue of the writer (or reader?!), I found this volume quite weak: from the plot, whose major danger remains hidden, to the duh? title, to the cavalcade of past characters (most of whom I could not place), to the somewhat repetitive interactions of Peter Grant with his colleagues and the boring descriptions of car rides from one part of London to another, to an absence of hidden treasures from the true London, to the lack of new magical features in this universe, to a completely blah ending… Without getting into spoilers, this chase of the Faceless Man should have been the apex of the series, which mostly revolved around this top Evil; it should have seen a smooth merging of the rivers as they join and die in the Channel, but the ending of this book is terribly disappointing. It sounds like the rivers are really drying out and should wait for the next monsoon to swell again to an engaging pace and fascinating undercurrents! Although it seems the next book is on its way (and should land in Germany).

by xi'an at March 22, 2019 11:19 PM

Christian P. Robert - xi'an's og

statlearn 2019, Grenoble

In case you are near the French Alps, STATLEARN 2019 will take place in Grenoble the week after next, on 04 and 05 April. The program is quite exciting, registration is free of charge (!) and still open, plus the mountains are next door!

by xi'an at March 22, 2019 03:42 PM

Peter Coles - In the Dark

A Trip on the Thames

Last night as part of the social programme of this conference the participants went on a boat trip along the Thames from Westminster Pier to Canary Wharf and back. It was a very enjoyable trip during which I got to talk to a lot of old friends as well as mingling with the many early career researchers at this meeting. I can’t help thinking, though, that graduate students seem to be getting younger…

Meanwhile, back in Burlington House, astronomers have found observational evidence of modifications to Newton’s gravity…

by telescoper at March 22, 2019 02:53 PM

Emily Lakdawalla - The Planetary Society Blog

Pretty Pictures of the Cosmos: Lesser-Known Luminaries
Award-winning astrophotographer Adam Block shares some of his latest images highlighting some hidden gems.

March 22, 2019 11:00 AM

Lubos Motl - string vacua and pheno

A scalar weak gravity conjecture seems powerful
Stringy quantum gravity may be predicting an \(r=0.07\) BICEP triumph

Many topics in theoretical physics seem frustratingly understudied to me, but one of those that are doing great is the Weak Gravity Conjecture (WGC), which is approaching 500 followups at a rate of almost a dozen per month. The WGC has never been among the most exciting ideas in theoretical physics for me – which is why the activity hasn't been enough to compensate for my frustration about the other, silenced topics – but maybe the newest paper has changed this situation, at least a little bit.

Nightingales of Madrid by Waldemar Matuška. Lidl CZ goes through the Spanish week now.

Eduardo Gonzalo and Luis E. Ibáñez (Zeman should negotiate with the Spanish king and conclude that our ň and their ñ may be considered the same letter! Well, the name should also be spelled Ibáněz then but I don't want to fix too many small mistakes made by our Spanish friends) just released:
A Strong Scalar Weak Gravity Conjecture and Some Implications
and it seems like a strong cup of tea to me, indeed. The normal WGC notices that the electron-electron electric force is some \(10^{44}\) times stronger than their attractive gravity and figures out that this is a general feature of all consistent quantum gravity (string/M/F-theory) vacua. This fact may be justified by tons of stringy examples, by the consistency arguments dealing with the stability of near-extremal black holes, by the ban on "almost global symmetries" in gravity which you get by adjusting the gauge coupling to too small values, and other arguments.

Other authors have linked the inequality to the Cosmic Censorship Conjecture by Penrose (they're almost the same thing in some contexts), to other swampland-type inequalities by Vafa, and other interesting ideas. However, for a single chosen Universe, the statement seems very weak: a couple of inequalities. The gravitational constant is smaller than the constant for this electric-like force, another electric-like force, and that's it.

Yes, this Spanish variation seems to be stronger. First, we want to talk about interactions mediated by scalars instead of gauge fields. At some level, this generalization must work. A scalar may be obtained by taking a gauge field component \(A_5\) and compactifying the fifth dimension. If the force mediated by the gauge field was strong, so should be the one mediated by the scalar.

To make the story short, they decide that the scalar self-interactions must be stronger than gravity as well and that an inequality for the scalar potential should hold everywhere, at every damn point of the configuration space\[

2(V''')^2 - V'''' \cdot V'' - \frac{(V'')^2}{M_P^2} \geq 0.

\] It's some inequality for the 2nd, 3rd, 4th derivatives of the potential. The self-interaction's being strong says that the third derivative should mostly dominate, in some quantitative sense. That's a bit puzzling for the purely quartic interactions. For \(A\phi^2+B\phi^4\), the inequality seems violated for \(\phi=0\) because there's a minus sign in front of the fourth derivative term and the "purely second" derivative term, too (the third derivative term vanishes in the middle). Do we really believe that this first textbook example of a QFT is prohibited? Does quantum gravity predict that the Higgs mechanism is unavoidable? And if it does, couldn't this line of reasoning solve even the hierarchy problem in a new way?
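To make the sign discussion concrete, here is a minimal numerical check of the inequality for the textbook quartic potential (the parameter values and the \(M_P=1\) units are my own illustrative assumptions, not from the paper):

```python
# Sketch: evaluate the left-hand side of the scalar-WGC-type inequality
#   2 (V''')^2 - V'''' V'' - (V'')^2 / M_P^2 >= 0
# for the textbook potential V = A*phi**2 + B*phi**4, in M_P = 1 units.

def lhs(phi, A, B, M_P=1.0):
    """LHS of the inequality for V = A*phi**2 + B*phi**4."""
    V2 = 2 * A + 12 * B * phi**2   # V''
    V3 = 24 * B * phi              # V'''
    V4 = 24 * B                    # V''''
    return 2 * V3**2 - V4 * V2 - V2**2 / M_P**2

# At phi = 0 the cubic term vanishes while the remaining two terms enter
# with minus signs, so the inequality is violated at the origin, exactly
# as discussed in the text.
print(lhs(0.0, A=1.0, B=0.1))   # negative -> violated at phi = 0
```

Running the check for a range of \(\phi\) values shows how quickly a "man-made" potential falls afoul of such a pointwise condition.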

OK, they decide this is their favorite inequality in two steps: the fourth-derivative term is added a bit later, for some consistency with axions.

The very fact that they have this local inequality is quite stunning. In old-fashioned effective field theories, you could think that you may invent almost any potential \(V(\phi)\) and there were no conditions. But now, calculate the left hand side of the inequality above. You get some function and of course it's plausible that it's positive in some intervals and negative in others. It's unlikely that you avoid negative values of the left hand side everywhere. But if it's negative anywhere, this whole potential is banned by the new Spanish Inquisition, I mean the new Spanish condition! Clearly, a large majority of the "truly man-made" potentials are just eliminated.

Now, the authors try to find a potential that saturates their inequality. It has two parameters and is the imaginary part of the dilogarithm. It's pretty funny how complicated functions can be obtained just by trying to saturate such a seemingly elementary condition – gravitation is weaker than self-interactions of the scalars – that is turned into equations in the most consistent imaginable way.

The potentials they're led to interpolate between asymptotically linear and, perhaps, asymptotically exponentially dropping forms. They also derive some swampland conjectures and find a link to the distance swampland conjecture, another somewhat well-known example of Vafa's swampland program.

The WGC-like thinking has been used to argue that string/M-theory prohibits "inflation with large excursions of the scalar field". The "large excursion" is basically prohibited in analogy with the "tiny gauge coupling", it's still morally the same inequality. And it's a "weak" inequality in the sense that there's one inequality per Universe.

But these Spaniards have a finer resolution and stronger claims – they study the inequalities locally on the configuration space. And in the case of inflation, they actually weaken some statements and say that large excursions of the inflaton are actually allowed if the potential is approximately linear. As you know, I do believe that inflation is probably necessary and almost established in our Universe. But the swampland reasoning has led Vafa and others to almost abandon inflation (and try to replace it with quintessence or something) because the swampland reasoning seemed to prohibit larger-than-Planck-distance excursions of the inflaton. Others were proposing monodromy inflation etc.

But these authors have a new loophole: asymptotically linear potentials are OK and allow the inflaton to go far and produce 50-60 \(e\)-foldings. If they were really relevant as potentials of the inflaton, you would have a very predictive theory. In particular, the tensor-to-scalar ratio should be \(r=0.07\), which is still barely allowed but could be discovered soon (or not). Do you remember the fights between BICEP2 and Planck? Planck has pushed BICEP2 into publishing papers saying "we don't see anything" but I still see the primordial gravitational waves in their picture and \(r=0.07\) could explain why I do. According to some interpretations, Planck+BICEP2 still hint at \(r=0.06\pm 0.04\), totally consistent with the linear potential. BICEP3 and the BICEP Array have been taking data for the past year or two. Do they still see something? Perhaps I should ask: do they see the tensor modes again? Hasn't the Brian guy who did it for the Nobel Prize given up? Are there others working on it?

These new authors also claim that a near-saturation of their inequality naturally produces the spectrum of strings on a circle, with momenta and windings related by T-duality. In the process, they deal with the function \(m^2\sim V''\) and substitute integers into some exponentially reparameterized formulae... Well, I don't really understand this argument, it looks like black magic. Why do they suddenly assume that some of the parameters are integers and that these integers label independent states? But maybe even this makes some sense to those who analyze the meaning of the mathematical operations carefully.

We often hear about predictivity. The swampland program and the WGC undoubtedly produce some predictions (like "gravity is weak") – it's a reason I was naturally attracted to these things because, by my nature, I usually slightly prefer disproving and debunking possibilities over inventing new ones – but these predictions have looked rather isolated and weak, a few inequalities or qualitative statements per Universe. But when studied more carefully, there may be tons of new consequences, like inequalities that hold locally in the configuration space. Functions that nearly or completely saturate these conditions are obviously attractive choices of potentials (I finally avoided the adjective "natural" so as not to confuse it with more technical versions of "naturalness").

And these functions may have the ability to turn stringy inflation into a truly predictive theory because they would imply the \(r=0.07\) tensor modes. Maybe WGC is pretty exciting, after all. (Just to be sure, it's been known for a long time that the linear potentials produce this tensor-to-scalar ratio.)
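That long-known relation is easy to reproduce with the standard slow-roll formulas (this is textbook inflation arithmetic, not anything specific to the paper): for \(V\propto\phi\), the slow-roll parameter is \(\epsilon\approx 1/(4N)\) after \(N\) \(e\)-foldings, so \(r=16\epsilon\approx 4/N\).

```python
# Tensor-to-scalar ratio for a linear inflaton potential V ~ phi:
# epsilon ~ 1/(4N), so r = 16*epsilon ~ 4/N (standard slow-roll result).

def r_linear(N):
    """r for V ~ phi after N e-foldings."""
    return 4.0 / N

for N in (50, 57, 60):
    print(N, round(r_linear(N), 3))   # 50-60 e-foldings give r ~ 0.07-0.08
```

With \(N\approx 57\) one lands almost exactly on the quoted \(r=0.07\).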

If it is truly exciting, I would still compare it to the uncertainty principle. Imagine that you have some inequalities that look like the uncertainty principle for various pairs of variables. Some of these inequalities might be a bit wrong, a bit too weak etc. But you also want to consolidate them (into the general inequality for any two observables) and derive something really sharp and deep, e.g. that the observables have nonzero commutators. (This is not how it happened historically: Heisenberg had the commutators first, in 1925, and the inequality was derived in 1927.)

Maybe we're in a similar situation. They're asking the reader whether the WGC is a property of black holes only or of quantum gravity in general. I surely think it's both, and the latter is more general. Black holes are just important in quantum gravity – as some extreme and/or generic localized objects (which produce the whole seemingly empty interior and the paradoxes associated with it). But in the end, I do think that the WGC or its descendants should be equivalent even to holography and other things that are not "just" about black holes.

Quantum gravity is not quite the same as an effective field theory. And the difference between the two may be very analogous to the difference between classical and quantum physics. The WGC and its gradually thickening variations could be the first glimpses of a new understanding of quantum gravity – first glimpses that might hypothetically make the full discovery and understanding unavoidable.

by Luboš Motl ( at March 22, 2019 08:24 AM

March 21, 2019

Christian P. Robert - xi'an's og

abandon ship [value]!!!

The Abandon Statistical Significance paper we wrote with Blakeley B. McShane, David Gal, Andrew Gelman, and Jennifer L. Tackett has now appeared in a special issue of The American Statistician, “Statistical Inference in the 21st Century: A World Beyond p < 0.05”. A 400-page special issue with 43 papers available on-line and open access! Food for thought likely to be discussed further here (and elsewhere). The paper and the ideas within have been discussed quite a lot on Andrew’s blog and I will not repeat them here, simply quoting from the conclusion of the paper:

In this article, we have proposed to abandon statistical significance and offered recommendations for how this can be implemented in the scientific publication process as well as in statistical decision making more broadly. We reiterate that we have no desire to “ban” p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors.

These ideas were also put forward in a comment by Valentin Amrhein, Sander Greenland, and Blake McShane published in Nature today (and supported by 800+ signatures), again discussed on Andrew’s blog.

by xi'an at March 21, 2019 11:19 PM

Alexey Petrov - Symmetry factor

CP-violation in charm observed at CERN


There is big news that came from CERN today. It was announced at a conference called Rencontres de Moriond, one of the major yearly conferences in the field of particle physics. One of CERN’s experiments, LHCb, reported an observation — yes, an observation, not just evidence for, but an observation — of CP-violation in the charm system. Why is it big news and why should you care?

You should care about this announcement because it has something to do with what our Universe looks like. As you look around, you might notice an interesting fact: everything is made of matter. So what about it? Well, one thing is missing from our everyday life: antimatter.

As it turns out, physicists believe that the amount of matter and antimatter was the same after the Universe was created. So, the $1,110,000 question is: what happened to antimatter? According to Sakharov’s criteria for baryogenesis (a process of creating more baryons, like protons and neutrons, than anti-baryons), one of the conditions for our Universe to be the way it is would be to have matter particles interact slightly differently from the corresponding antimatter particles. In particle physics this condition is called CP-violation. It has been observed for beauty and strange quarks, but never for charm quarks. As charm quarks are fundamentally different from both beauty and strange ones (electrical charge, mass, ways they interact, etc.), physicists hoped that New Physics, something that we have not yet seen or predicted, might be lurking nearby and could be revealed in charm decays. That is why so much attention has been paid to searches for CP-violation in charm.

Now there are indications that the search is finally over: LHCb announced that they observed CP-violation in charm. Here is their announcement (look for the news item from 21 March 2019). A technical paper can be found here, discussing how LHCb extracted CP-violating observables from a time-dependent analysis of D -> KK and D -> pipi decays.

The result is generally consistent with the Standard Model expectations. However, there are theory papers (like this one) that predict the Standard Model result to be about seven times smaller with rather small uncertainty.  There are three possible interesting outcomes:

  1. Experimental result is correct but the theoretical prediction mentioned above is not. Well, theoretical calculations in charm physics are hard and often unreliable, so that theory paper underestimated the result and its uncertainties.
  2. Experimental result is incorrect but the theoretical prediction mentioned above is correct. Maybe LHCb underestimated their uncertainties?
  3. Experimental result is correct AND the theoretical prediction mentioned above is correct. This is the most interesting outcome: it implies that we see effects of New Physics.

What will it be? Time will show.

More technical note on why it is hard to see CP-violation in charm.

One reason that CP-violating observables are hard to see in charm is that they are quite small, at least in the Standard Model. All initial/final-state quarks in the D -> KK or D -> pipi transitions belong to the first two generations. The CP-violating asymmetry that arises when we compare time-dependent decay rates of D0 to a pair of kaons or pions with the corresponding decays of the anti-D0 particle can only appear if one picks up the weak phase that is associated with the third generation of quarks (b and t), which is possible via the penguin amplitude. The problem is that the penguin amplitude is small, as the Glashow–Iliopoulos–Maiani (GIM) mechanism makes it proportional to m_b^2 times tiny CKM factors. The strong phases needed for this asymmetry come from the tree-level decays and are (supposedly) largely non-perturbative.

Notice that in B-physics the situation is exactly the opposite. You get the weak phase from the tree-level amplitude and the penguin one is proportional to m_top^2, so CP-violating interference is large.
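To get a feel for the size of those "tiny CKM factors", here is a rough, illustrative estimate of the suppression of the penguin combination |V_cb V_ub| relative to the tree-level |V_cs V_us|, using approximate Wolfenstein parameter values (the numbers below are standard ballpark values I am assuming, not taken from any specific fit):

```python
# Rough CKM suppression of CP violation in charm: the weak phase enters
# through the third-generation combination V_cb* V_ub, relative to the
# tree-level V_cs* V_us.  Wolfenstein parameters are approximate.

import math

lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35   # approximate Wolfenstein values

V_us = lam
V_cs = 1 - lam**2 / 2
V_cb = A * lam**2
V_ub = A * lam**3 * math.hypot(rho, eta)     # |V_ub|

suppression = (V_cb * V_ub) / (V_us * V_cs)
print(f"CKM suppression ~ {suppression:.1e}")   # of order 1e-3 to 1e-4
```

This order-10^-3 suppression, on top of the m_b^2 GIM factor, is why the observed asymmetry is at the per-mille level.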

Ask me if you want to know more!

by apetrov at March 21, 2019 06:45 PM

Christian P. Robert - xi'an's og

Dutch summer workshops on Bayesian modeling

Just received an email about two Bayesian workshops in Amsterdam this summer:

both taking place at the University of Amsterdam. And focussed on Bayesian software.

by xi'an at March 21, 2019 01:29 PM

Matt Strassler - Of Particular Significance

LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the `b’ in the name).  But it also can be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violations of the transformations known as CP (C: exchange of particle with anti-particle; P: reflection of the world in a mirror), of the sort that have been previously seen in mesons containing strange quarks and mesons containing bottom quarks.  Here’s the press release.

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

by Matt Strassler at March 21, 2019 11:52 AM

Peter Coles - In the Dark

The Most Ancient Heavens

So here I am, in that London, getting ready for the start of a two-day conference at the Royal Astronomical Society on cosmology, large-scale structure, and weak gravitational lensing, to celebrate the work of Professor Alan Heavens, on (or near) the occasion of his 60th birthday. Yes, it is a great name for an astronomer.

I was honoured to be invited to give a talk at this meeting, though my immediate reaction when I was told about it was `But he can’t be sixty! He’s only a few years older than me…oh.’ I gather I’m supposed to say something funny after the conference dinner tomorrow night too.

Courtesy of alphabetical order it looks like I’m top of the bill!

Anyway, I’ve known Alan since I was a research student, i.e. over thirty years, and we’re co-authors on 13 papers (all of them since 2011).

Anyway, I’m looking forward to the HeavensFest not only for the scientific programme (which looks excellent) but also for the purpose of celebrating an old friend and colleague.

Just to clear up a couple of artistic points.

First, the title of the meeting, The Most Ancient Heavens, is taken from Ode to Duty by William Wordsworth.

Second, the image on the conference programme shown above is a pastiche of The Creation of Alan Adam, which is part of the ceiling of the Sistine Chapel and was painted by Michelangelo di Lodovico Buonarroti Simoni, known to his friends as Michelangelo. Apparently he worked flat out painting this enormous fresco. It was agony but the ecstasy kept him going. I’ve often wondered (a) who did the floor of the Sistine Chapel and (b) how Michelangelo could create such great art when it was so clearly extremely cold. Anyway, I think that is a picture of Alan at high redshift on the far right, next to the man with the beard who at least had the good sense to wear a nightie to spare his embarrassment.

Anyway, that’s all for now. I must be going. Time for a stroll down to Piccadilly.

by telescoper at March 21, 2019 08:16 AM

Lubos Motl - string vacua and pheno

CMS: 2.4-sigma excess in the last gluino bin, photons+MET

Gluino, a vampire alchemist with human eyes

I just want to have a separate blog post on this seemingly small anomaly. We already saw the preprint for one day in advance but the CMS preprint finally appeared on the hep-ex arXiv:
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
OK, they look at events in which two photons are created and seen in the calorimeters, plus the momenta don't seem to add up: the sum of the initial protons' momenta \(\sum\vec p_i\) seems to differ from the final particles' \(\sum \vec p_f\). The difference is the "missing transverse momentum" but because such a momentum is carried by particles which must have at least the same energy, it's also referred to as MET, the missing \(E_T\) or missing transverse energy.

OK, CMS has picked the collisions with the qualitatively right final state, photons plus MET, and divided them into bins according to the magnitude of MET. The last bin has MET between \(250\GeV\) and \(350\GeV\). It's very hard to produce this high a missing transverse momentum at the LHC – the Standard Model assumes that MET is carried by the invisible neutrinos only. And although the protons carry \(13\TeV\) of energy in total, it's divided between many partons on average, and it's unlikely that a neutrino created in the process can steal more than \(0.35\TeV\) of energy for itself.
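As a minimal illustration of the definition (not of the experiment's actual reconstruction), MET is the magnitude of the negative vector sum of the visible transverse momenta; the (px, py) values below are made-up numbers:

```python
# Sketch of the MET definition: the magnitude of the negative vector sum
# of the transverse momenta of all visible final-state particles.
# The (px, py) pairs below are invented values in GeV, for illustration.

import math

visible = [(120.0, 30.0), (-40.0, 80.0), (-10.0, -50.0)]  # (px, py) in GeV

mex = -sum(px for px, _ in visible)   # missing px
mey = -sum(py for _, py in visible)   # missing py
met = math.hypot(mex, mey)
print(f"MET = {met:.1f} GeV")
```

In the Standard Model that imbalance is attributed to neutrinos; in the GMSB scenario below, gravitinos would carry it away instead.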

In this last bin of photons+MET, 5.4 events were expected, plus or minus a systematic error of about 1.55 events. However, a whopping 12 events were observed. If you combine the 1.55 systematic error with the \(\sqrt{5.4}\) statistical error in the Pythagorean way, you get some 2.8 events for the total one-sigma error, and 12 is some 2.4 sigma above the predicted 5.4. Sorry if my calculation is wrong but that's a mistake I have probably repeated a few times. They seem to say that the error of 1.5-1.6 already includes the statistical error and I can't see how that can be true because 1.6 is smaller than the square root of five.

A 2.4 sigma excess corresponds to a 99% confidence level: if the increase from 5.4 to 12 is due to chance, the probability of a fluctuation at least this large is just some 1% or so. It means that the odds that it is due to a real signal are about 100 times higher than your prior odds. That's a significant increase. I think it's basically fair to interpret this increase as a reason to increase the visibility of this particular CMS search by a factor of 100 (for those who look for new physics).
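The arithmetic in the two paragraphs above can be checked in a few lines (assuming, as the text does, that the quoted 1.55 is purely systematic and adding the Poisson error in quadrature):

```python
# Back-of-envelope significance for the last bin: 5.4 expected events
# with a ~1.55-event systematic error, 12 observed.

import math

expected, observed, syst = 5.4, 12, 1.55

stat = math.sqrt(expected)              # Poisson error on the expectation
total_err = math.hypot(syst, stat)      # quadrature ("Pythagorean") sum
z = (observed - expected) / total_err   # naive Gaussian significance
p = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided p-value

print(f"total error ~ {total_err:.2f} events")   # ~2.8
print(f"excess ~ {z:.1f} sigma, p ~ {p:.3f}")    # ~2.4 sigma, ~1%
```

This reproduces both the ~2.8-event total error and the ~1% chance probability quoted above.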

Some people have learned to react instinctively and easily: if the excess is just 2.4 sigma, it can't be real. I think that's sloppy reasoning. It's right to be affected by 2.4 sigma excesses. In softer sciences, this is more than enough to claim a sure discovery and demand a world revolution that is proven necessary by that discovery. Particle physicists want to be hard, nothing below 5 sigma counts, but the truth is fuzzy and in between. It's just right to marginally increase one's belief, hopes, or attention that something might be true if there is a 2.4 sigma excess.

And this excess is in the last bin – the cutting edge of energy and luminosity. With the full dataset, if things just scale and it's a signal, there could be 50 events instead of the predicted 22, just on CMS. The same could be seen by ATLAS, in total, 100 events instead of 44. Maybe some events would show up above \(350\GeV\). If true, it would surely be enough for a combined or even separate discovery of new physics.
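The scaling argument can be sketched numerically; this is the naive, Poisson-only projection implied by the text, ignoring systematics and any ATLAS combination:

```python
# Naive projection: if the excess scales linearly with luminosity, a 4x
# larger dataset turns 12 observed / 5.4 expected into ~48 / ~22 events.

import math

scale = 4                     # full dataset vs. the 1/4 analyzed so far
expected = 5.4 * scale        # ~22 background events
observed = 12 * scale         # ~48 events if the excess is a real signal

z = (observed - expected) / math.sqrt(expected)   # Poisson-only estimate
print(f"{observed:.0f} vs {expected:.0f} expected -> ~{z:.1f} sigma")
```

Under these optimistic assumptions, CMS alone would cross the 5 sigma discovery threshold, which is the point of the paragraph above.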

And yes, this new physics looks just damn natural to me. Gluinos, the superpartners of gluons, may appear in this photons+MET channel assuming a popular version of supersymmetry breaking, the gauge mediation supersymmetry breaking (GMSB). In that setup, the lightest supersymmetric particle and therefore dark matter particle is the gravitino \(\tilde G\), the superpartner of the graviton whose mass could be just \(1\eV\) or so which is nice in cosmology. Its interactions are very weak and the particle is predicted to be invisible to doable direct detection experiments, because gravitino's interactions are superpartners of gravity which is the weakest force, you know, thus easily explaining the null results (and suggesting that these direct search experiments were "wasted money" – such things are unavoidable somewhere in science; science is exploration, not a guaranteed insured baking of breads).

The gravitino could still be unstable, i.e. R-parity could be broken (but the lifetime could be billions of years so the gravitinos are still around as dark matter) – in which case the gravitino decays could be seen in cosmic rays. On top of that, the NLSP's decays to the gravitino could be seen in colliders. If the NLSP is too long-lived, there's a problem with Big Bang Nucleosynthesis.

If gravitinos are dark matter, they're probably overpaying for the commodity. See how these gravitino balls are farmed.

We also need to assume a neutralino NLSP, the next-to-lightest superpartner, and it's not just any neutralino. It must be close to the bino, the superpartner of the B-boson, the gauge boson for the electroweak \(U(1)\) hypercharge gauge group (the bino is closer to a photino than to a zino under the Weinberg angle rotations of bases; correct my mistakes, please). There are many other possibilities but for the largest part of my tenure as an amateur supersymmetry phenomenologist, I have considered this assignment of the LSP and NLSP to be the more natural one, despite lacking a clear grasp of which stringy vacua predict it. (Later, I looked at an American-Romanian-Slovak 2005 paper to get an idea that it's possible at all.)

Just to be sure, it's different from Kane's \(G_2\) M-theory compactifications, from Nima-et-al. split supersymmetry, and from many other things you've heard. The number of choices for the qualitative character of supersymmetry breaking and for the hierarchy of the superpartner masses is somewhat large.

The mass of the gluino indicated by the small excess would be about \(1.9\TeV\). Some squarks could be around \(1.8\TeV\) but detailed numbers are too model-specific because there are many squarks that may be different – and different in many ways.

As we discussed with Edwin, Feynman warned against the "last bin", in an experimental paper claiming that the FG theory wasn't FG. But I think that he talked about a situation in which the systematic error in the last bin was high – the experimenters lost the ability to measure something precisely due to complications in their gadget. Here, the error in the last bin is already mostly statistical. And one can't draw a new bin simply because no events were detected with MET above \(350\GeV\), and zero is too little.

In this sense, I think it's right to say the exact opposite of Feynman's sentence here. Feynman said that if the last bin were any good, they would draw another one. Well, I would say that if the last bin were really bad, it would have zero events and they wouldn't include it at all. ;-)

There's a significant probability it's an upward fluke and some probability it's a sign of new physics. Neither of them is zero. To eliminate one possibility is to be sloppy or closed-minded. This last bin of this channel is arguably one of the most natural bins-of-channels where the superpartners could appear first, and a gradual appearance that grows from 2 sigma to 5 sigma and then higher is how a discovery normally proceeds. It's no science-fiction speculation. The LHC is struggling to see some events with very high energies of the final particles – events the LHC is barely capable of producing because it produces them too scarcely. In such a process, it's rather natural for new physics to gradually appear in the last bin (and what is the last bin in one paper may no longer be the last one in higher-luminosity papers).

The Higgs boson also had some small fluctuations first. At some time in 2011, we discussed a possible bump indicating a \(115\GeV\) Higgs, if you remember. It went away, and by December 2011, a new bump emerged near \(125\GeV\); I was certain it was the correct one and I was right (I wasn't the only one, but the certainty wasn't universal). This last bin may be analogous to the wrong \(115\GeV\) Higgs. But it may also be analogous to the correct \(125\GeV\) Higgs and it may grow.

You see that this paper was only published now and only uses 1/4 of the available CMS dataset. It sees a minor fluke. The LHC still has enough not-yet-analyzed data that could reveal new physics – even though the LHC hasn't been running for several months. It's just plain wrong for anyone to say that "the discovery of new physics in the LHC data up to 2018 has already been ruled out".

Update, March 21st:

There is a new CMS search for gauge mediation which also uses 35.9/fb, includes the diphoton channel above, plus three more channels. One of them is one photon+jets+MET, which was already reported in July 2017; I naturally ignored it back then because that paper seemed like a "clear no signal" in the absence of the diphoton analysis. But there's an excess in the (next-to-last) bin with the transverse energy \(450\)-\(600\GeV\), one photon, jets (or "transverse activity"), and missing energy. Instead of 3 expected events, one gets 10.
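To get a feel for how big a fluke "10 observed vs 3 expected" is, here is a minimal Poisson estimate (my own back-of-the-envelope sketch: it ignores the uncertainty on the background expectation and any look-elsewhere effect, so it is a local significance at best):

```python
import math

def poisson_tail(n_obs, mu):
    """P(X >= n_obs) for a Poisson-distributed event count with mean mu."""
    cdf = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n_obs))
    return 1.0 - cdf

# 10 observed events where the background-only expectation is 3
p = poisson_tail(10, 3.0)
print(f"p-value ~ {p:.1e}")  # ~ 1.1e-03, i.e. roughly a 3 sigma local excess
```

A p-value of about \(10^{-3}\) corresponds to roughly 3 standard deviations – comfortably in the "2-sigma-ish" territory described in the text once systematic uncertainties are folded in.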

In combination, there are some 2-sigma-ish excesses everywhere, although the diphoton channel is probably still the most important source of them. The charginos are only excluded below \(890\GeV\) while the expected exclusion was \(1080\GeV\). The wino mass \(M_2\) seems to be excluded below \(1100\GeV\) although the expected exclusion was everything below \(1300\GeV\), and so on. I finally decided that the degree to which this increases the odds that the CMS is seeing hints of gauge-mediated SUSY breaking (if it is an increase at all, not a decrease) is too small and doesn't justify a full new blog post.

by Luboš Motl at March 21, 2019 06:25 AM

March 20, 2019

Peter Coles - In the Dark

Spring Equinox in the Ancient Irish Calendar | 20 March 2019

I’m sharing this interesting post with a quick reminder that the Vernal Equinox in the Northern Hemisphere occurs today, 20th March 2019, at 21:58 GMT.

Stair na hÉireann/History of Ireland

Equinox is the date (or moment) some astronomical alignments in Ireland mark as being auspicious. Not many, mind you, but some, like the cairn on Loughcrew or the two passages of Knowth, a sort of super-alignment with quadruple significance. Though the actual alignment of Knowth is disputed, it might be a lunar alignment or not an alignment at all.
The equinox is a far less obvious astronomical event than the two solstices, which are celebrated in Ireland and are also the subject of astronomical alignments. Like the solstices, the equinox occurs twice a year, in between the winter solstice and the summer solstice and vice versa. And it is not just one event: the spring and autumn equinoxes happen on different dates, but they are for all intents and purposes identical events.
Taking place around 20th March and 22nd September, the equinox is the moment when the plane of the Earth’s equator passes…

View original post 384 more words

by telescoper at March 20, 2019 01:39 PM

Peter Coles - In the Dark

New Publication at the Open Journal of Astrophysics!

It’s nice to be able to announce that the Open Journal of Astrophysics has just published another paper. Here it is!

It’s by Darsh Kodwani, David Alonso and Pedro Ferreira from a combination of Oxford University and Cardiff University.

You can find the accepted version on the arXiv here. This version was accepted after modifications requested by the referee and editor.

This is another one for the `Cosmology and Nongalactic Astrophysics’ folder. We would be happy to get more submissions from other areas of astrophysics. Hint! Hint!

P.S. A few people have asked why the Open Journal of Astrophysics is not listed in the Directory of Open Access Journals. The answer to that is simple: to qualify for listing, a journal must publish a minimum of five papers in a year. Since OJA underwent a fairly long hiatus after publishing its first batch of papers, we don’t yet qualify. However, so far in 2019 we have published four papers and have several others in the pipeline. We will reach the qualifying level soon and when we do I will put in the application!

by telescoper at March 20, 2019 10:26 AM

Lubos Motl - string vacua and pheno

Hossenfelder's pathetic attack against CERN's future collider
Sabine Hossenfelder became notorious for her obnoxiously demagogic and scientifically ludicrous diatribes against theoretical physics – she effectively became a New Age Castrated Peter Woit – but that doesn't mean that she doesn't hate the rest of particle physics.

Her latest target is CERN's new project for a collider after the LHC, the Future Circular Collider (FCC), an alternative to the Japanese linear ILC collider and the Chinese circular CEPC collider (the Nimatron).

This is just a 75-second-long FCC promotional video. It shows just some LHC-like pictures with several of the usual questions in fundamental physics that experiments such as this one are trying to help to answer. The video isn't excessively original but you can see some updated state-of-the-art fashions in computer graphics as well as the visual comparison of the FCC and its smaller but more real sister, the LHC.

But when you see an innocent standard video promoting particle accelerators, Ms Hossenfelder may be looking at the very same video and see something entirely different: a reason to write an angry rant, CERN produces marketing video for new collider and it’s full with [sic] lies.

What are the lies that this video is claimed to be full of? The first "lie" is that the video asks what 96% of the Universe is made of. Now, this question is listed with the implicit assertion that it is a question that the people behind this project and similar projects are trying to help answer. It's what drives them. No one is really promising that the FCC will answer it.

The figure 96% refers to the dark energy (70%) plus dark matter (26%) combined. Hossenfelder complains:
Particle colliders don’t probe dark energy.
Maybe they don't but maybe they do. This is really a difficult question. They don't test dark energy directly, but whenever we learn new things about particles that may be seen through particle accelerators, we are constraining the theories of fundamental physics. And because a theory's explanations for particular particles and for dark-energy-like effects are correlated in general, the discoveries at particle accelerators generally favor or disfavor our hypotheses about dark energy (or whatever replaces it), too.

My point is that at the level of fundamental physics, particle species and forces are interconnected and cannot quite be separated. So her idea that these things are strictly separated, so that the FCC only tests one part and not the other, reflects some misunderstanding of the "unity of physics" that has already been established to a certain extent. Also, she writes:
Dark energy is a low-energy, long-distance phenomenon, the entire opposite from high-energy physics.
This is surely not how particle physicists view dark energy. Dark energy is related to the energy density of the vacuum which is calculable in particle physics. In the effective field theory approximation, contributions include vacuum diagrams – Feynman diagrams with no external legs. According to the rules of effective field theories as we know them, loops including any known or unknown particle species influence the magnitude of the dark energy.

For this reason, the claim that dark energy belongs to the "entirely opposite" corner of physics than high-energy physics seems rather implausible from any competent theoretical high-energy physicist's viewpoint. The total vacuum energy ends up being extremely tiny relative to (any of) the individual contributions we seem to know – and this is known as the cosmological constant problem. But we don't know any totally convincing solution of that problem. The anthropic explanation assuming the landscape and the multiverse, where googols of possible values of dark energy are allowed, is the most complete known possibility – but it is so disappointing and/or unpredictive that many people refuse to agree that this is the established final word about the question.
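The mismatch can be quantified in one line; here is a minimal sketch with textbook numbers (dark-energy scale \(\sim 2.3\) meV, Planck scale \(\sim 1.2\times 10^{19}\GeV\); the naive cutoff estimate is schematic, not a real calculation):

```python
import math

ev_per_gev = 1e9
rho_obs = (2.3e-3) ** 4                 # observed vacuum energy density ~ (2.3 meV)^4, in eV^4
rho_naive = (1.2e19 * ev_per_gev) ** 4  # naive Planck-cutoff contribution ~ M_Planck^4, in eV^4

mismatch = math.log10(rho_naive / rho_obs)
print(f"{mismatch:.0f} orders of magnitude")  # ~ 123: the cosmological constant problem
```

This is the famous "123 orders of magnitude" discrepancy between the naive loop estimate and the observed dark energy density.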

The right solution could include some complete separation of the dark energy from high-energy physics, as suggested by Hossenfelder. But this is just one possible category of proposals. It's surely not an established scientific fact. And there's no known convincing theory of the type Hossenfelder suggests.

The discovery or non-discovery of superpartners at a higher energy scale would certainly influence the search for promising explanations of the cosmological constant problem, and so would other discoveries. For example, the dark energy may be due to quintessence and quintessence models may predict additional phenomena that are testable by the particle colliders. None of the findings are guaranteed to emerge from the FCC but that's how experiments usually work. We don't really know the results in advance, otherwise the experiment would be a waste of time.
What the FCC will reliably probe are the other 4%, the same 4% that we have probed for the past 50 years.
Indeed, a collider may only be promised to test the visible matter "reliably". But science would get nowhere if it only tried to probe things that can be probed "reliably". That statement is very important, not just for science. When Christopher Columbus sailed to the West, he planned to reach India from the other side, or something like that. But he couldn't promise to reach India reliably. After all, he had indeed found another continent, one that almost covers the whole span between the North Pole and the South Pole and prevents you from sailing from Europe to India in that direction.

But that didn't mean that Columbus' voyage was a waste of resources, did it? It is absolutely essential for the scientific spirit to try lots of things, both in theoretical and experimental science, whose successful outcome is not guaranteed, probes of things that are "unreliable". Scientists simply have to take the risk, to "experiment" in the colloquial sense.

It's truly ironic that Sabine Hossenfelder has been "created" as an appendix of Lee Smolin, her older fellow critic of science, who always claimed that science needed to fund much more risky directions and stuff like that (needless to say, the "most courageous directions" were meant to represent a euphemism for cowardly crackpots such as himself). Where does it end when Lee Smolin pulls a female clone from his unclean cloning machine and she drags all the anti-scientific gibberish he used to emit through several more years of evolution? What does she do with all the "courage" that Smolin's mouth – not behavior – was full of? And with the value of risk-taking? She says that only experiments with a "reliable" outcome should be funded! Isn't it ironic?

The next collider, and even the LHC in the already collected data or in the new run starting in 2021, may learn something about dark matter. If the dark matter is a light enough neutralino, the LHC or the next collider is likely enough to see it. If the dark matter is an axion, the outcome may be different. But if we won't try anything, we won't learn anything, either. Indeed, her criticism of the tests of dark matter theories is identical:
What is dark matter? We have done dozens of experiments that search for dark matter particles, and none has seen anything. It is not impossible that we get lucky and the FCC will produce a particle that fits the bill, but there is no knowing it will be the case.
A malicious enough person could have made the exact same hostile and stupid comment before every important experiment in the history of science. There was no knowing that Galileo would see anything new with the telescopes. Faraday and Ampere and Hertz and others weren't guaranteed to see any electromagnetic inductions, forces, electromagnetic waves. The CERN colliders weren't certain to discover the heavy gauge bosons in the 1980s and the Tevatron didn't have to discover the top quark. The LHC didn't have to discover the Higgs boson, at least not by 2012, because its mass could have been less accessible. And so on.

Does it mean that experiments shouldn't ever be tried? Does it mean that there is a "lie" in the FCC video? No. With Hossenfelder's mindset, we would still be jumping from one palm to another and eating bananas only. Another "lie" is about the matter-antimatter asymmetry:
Why is there no more antimatter? Because if there was, you wouldn’t be here to ask the question. Presumably this item refers to the baryon asymmetry. This is a fine-tuning problem which simply may not have an answer. And even if it has, the FCC may not answer it.
The answer to the question "because you wouldn't be here" may be said to be an "anthropic" answer. And it's a possible answer and a true one. But it doesn't mean that it is the answer in the sense of the only answer or the most physically satisfying answer. In fact, it's almost certain that Hossenfelder's anthropic answer cannot be the deepest one.

Why? Simply because every deep enough theory of particle physics does predict some baryon asymmetry. So the very simple observed fact that the Solar System hasn't annihilated with some antimatter is capable of disproving a huge fraction of possible theories that people could propose and that people have actually proposed.

Her claim that it is a "fine-tuning problem" is implausible. What she has in mind is clearly a theory that predicts the same amount of baryons and antibaryons on average – while the excess of baryons in our Universe would be a statistical upward fluctuation (she uses the word "fine-tuning", which isn't what physicists would use, but the context makes her point rather obvious). But one can calculate the probability of a large enough fluctuation (seen all over the visible Universe!) for specific models, and the probability is usually insanely low. For that reason, the very theory that predicts no asymmetry on average becomes very unlikely, too. By simple Bayesian inference, a theory that actually explains the asymmetry – that has a reason why the mean value of this asymmetry is nonzero and large enough – is almost guaranteed to win! Fundamental physicists still agree that you had better obey the Sakharov conditions (needed for an explanation of the asymmetry to exist).
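The Bayesian logic of the previous paragraph can be sketched in a few lines of arithmetic. All the numbers below are invented purely for illustration (the real likelihood of a universe-wide statistical fluctuation would be far tinier than \(10^{-40}\)):

```python
# Log-10 likelihoods of the observed baryon excess under two hypotheses
# (illustrative, made-up numbers -- not a real cosmological computation):
log10_like_symmetric = -40.0    # zero mean asymmetry: needs a huge upward fluctuation
log10_like_baryogenesis = -1.0  # theory obeying the Sakharov conditions: excess is generic

# Even a strong 100:1 prior preference for the symmetric theory...
log10_prior_odds = 2.0

# ...is obliterated by the likelihood ratio (the Bayes factor):
log10_posterior_odds = log10_prior_odds + log10_like_symmetric - log10_like_baryogenesis
print(log10_posterior_odds)  # -37.0: posterior odds of 10^-37 against the symmetric theory
```

The point is structural: whenever one hypothesis assigns the data an astronomically small likelihood and a rival assigns it an order-one likelihood, no reasonable prior can save the former.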

It is rather transparent that she doesn't understand any of these questions. She doesn't understand how scientists think. She misunderstands the baryon asymmetry and tons of other technical topics, but she also misunderstands something much more universal and fundamental – how scientists think and infer in the presence of uncertainty, which is almost omnipresent. Whenever there's some anomaly or anything that doesn't add up but is plausible with a tiny probability, she just calls it "fine-tuning", throws "fine-tuning" at the problem, and concludes that there's nothing to explain. Sorry, this is not how a scientist thinks. If this attitude had been adopted by everyone for centuries, science wouldn't have made any progress at all. Visible enough anomalies simply do require genuine explanations, not just "it's fine-tuning and fine-tuning is always OK because naturalness is beauty and beauty is rubbish", which is Hossenfelder's totally flawed "methodology" in all of physics.

On top of that, she repeats her favorite "reliability" theme: "And even if it has, the FCC may not answer it." Right, the FCC may fail to answer one question or another, and it will almost certainly fail to answer most questions that people label as questions with a chance to be answered. But the other part of the story is that the FCC also may answer one of these questions or several of these questions.

Note that Hossenfelder only presents one possible scenario: science will fail to answer the questions. She never discusses the opposite possibility. Why? Because she is a hater of science who would love science to fail. Every time science learns something new, vitriolic science haters such as Ms Sabine Hossenfelder or Mr Peter Woit shrink. After every discovery, they realize that they're even more worthless than previously thought. While science makes progress, they can only produce hateful tirades addressed to brainwashed morons. While the gap is getting larger and deeper, and more obvious to everybody who watches the progress in science, the likes of Ms Hossenfelder escalate their hostility towards science because they believe that this escalation will be better to mask their own worthlessness.

The fact that the FCC has a chance to answer at least one of these questions is much more important than the possibility that it won't answer one of them or any of them.

Hossenfelder also claims that the FCC won't probe how the Universe began because the energy density at the FCC is "70 orders of magnitude lower". This is a randomly picked number – she probably compared some FCC-like energy with the Planck energy. But the statement about the beginning of the Universe doesn't necessarily talk about the "Planck time" after the Big Bang. It may talk about somewhat later epochs. But if the FCC has a higher energy than the LHC, it will be capable of emulating some processes that are closer to the true beginning than the processes repeated by the LHC.

She has also attacked the claims about the research of neutrinos:
On the accompanying website, I further learned that the FCC “is a bold leap into completely uncharted territory that would probe… the puzzling masses of neutrinos.”

The neutrino-masses are a problem in the Standard Model because either you need right-handed neutrinos which have never been seen, or because the neutrinos are different from the other fermions, by being “Majorana-particles” (I explained this here).
The FCC is relevant because new observations of neutrino physics are possible – whether of right-handed neutrinos, or the rest masses of neutrinos (whether they are Dirac or Majorana), or new species of neutrinos, etc. – and, on top of that, the very fact that the neutrino masses are nonzero may be viewed as physics beyond the Standard Model.

Why is it so? Because the neutrino masses, at least the Majorana ones, can't come from renormalizable Yukawa interactions. The term \(y h \bar \nu \nu\) isn't an \(SU(2)\) singlet because it contains the product of three doublets, an odd number. You need dimension-five operators. Those are non-renormalizable. A theory with them breaks down at some energy. At that energy scale, some new phenomena must kick in to restore the consistency of the theory.
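To make the counting explicit: the dimension-five operator in question is the Weinberg operator. In standard notation (\(L\) the lepton doublet, \(H\) the Higgs doublet, \(\Lambda\) the scale where the effective theory breaks down – textbook estimates, not specific to the FCC documents),

\[ \mathcal{L}_5 = \frac{c}{\Lambda}\,(LH)(LH) + \text{h.c.} \quad\Rightarrow\quad m_\nu \sim \frac{c\, v^2}{\Lambda}, \]

so plugging in \(v\approx 174\GeV\) and \(m_\nu\sim 0.05\,\text{eV}\) gives \(\Lambda \sim c\times 6\times 10^{14}\GeV\): some new phenomena (e.g. the right-handed neutrinos of the seesaw mechanism) must appear at or below that scale.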

Alternatively, Dirac masses could come from renormalizable dimension-4 Yukawa operators, but the new right-handed neutrino components may be said to be beyond the Standard Model. Some new interactions could be measured, etc. Whatever is true in Nature, the FCC may clearly produce neutrinos and detect them in the form of missing energy, like the LHC. It's unreasonable to attack the statement that the new collider would allow physicists to test neutrinos in a new regime.
We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.
But the FCC video is simply not saying that we are guaranteed to get such answers. The big desert between the Standard Model and (nearly?) the Planck scale has always been a possibility. If we had the "duty" to have a reliable prediction of some new physical phenomenon at an intermediate energy scale, it would have to be found by theoretical particle physicists or similar folks.

But curiously enough, she's hysterically fighting against that (theoretical) part of the research, too. To summarize, she is fighting against particle, fundamental, or high-energy physics in any form. She hates it, she hates people who are asking questions, she hates people who are proposing possible answers, and she hates the people who do any work – theoretical or experimental work – that may pick the right answers or at least favor or disfavor some of them.

Nevertheless, due to the extreme political correctness, this absolute hater of science who doesn't do anything except for lame efforts to hurt the image of science is sometimes presented as a physicist by the popular media. She is nothing of the sort.

by Luboš Motl at March 20, 2019 06:19 AM

Lubos Motl - string vacua and pheno

Hossenfelder's plan to abolish particle physics is the most prominent achievement of diversity efforts in HEP yet
I guess that you don't doubt that the Academia in the Western countries is leaning to the left. Well, that's a terrible understatement. It's heavily left-wing. A 2009 Pew Research poll found that among scientists in the U.S. Academia, 55% were registered Democrats, 32% were Independents, and 6% were Republicans. The numbers have probably gotten much worse in the subsequent decade.

As we could conclude e.g. by seeing the 4,000 signatures under a petition penned by D.H. against Alessandro Strumia, out of 40,000 HEP authors that could have signed, the percentage of the hardcore extremist leftists who are willing to support even the most insidious left-wing campaigns is about 10% in particle physics. Assuming that the number 6% above was approximately correct, you can see that the Antifa-type leftists outnumber all Republicans, including the extremely moderate ones and the RINOs (Republicans In Name Only), and a vast majority of those 6% are RINOs or extremely moderate Republicans.

Because the extreme leftists are the only loud subgroup – you know, the silent majority is silent as the name indicates – they shape the atmosphere in the environment to a very unhealthy degree. It has become unhealthy especially because they have managed to basically expel everybody who would be visibly opposing them.

"Diversity" is one of the buzzwords that have become even more influential in the Academia than in the whole U.S. society – and even their influence over the latter is clearly excessive.

In practice, "diversity" is a word used to justify racist and sexist policies against whites (and Asians – who are often even more suppressed), against men, and especially against white men. Those are still allowed in the Academia, but only if they "admit" that their previous lives and origin are non-existent; that they abhor masculinity and the white race and deny that white men have built most of the civilization; that whites, men, and white men have only brought misery to the world; and if they promise to dedicate their lives to the war on the real, i.e. evil, men, whites, and white men.

The radically left-wing 10% of the university people are really excited about this hostility against the white men – they are as excited as the Nazis were during the Night of Broken Glass (even pointing out this analogy could cause trouble for you). The silent majority doesn't care or reacts with some incoherent babbling that seems safe enough to the radical loons, which is why a kind of tolerance has evolved between the radical left and the incoherent silent majority.

These moderate people say "why not", "it can't hurt", etc. when some whites/men are forced to spit on their race and sex, or when 50% of the females or people of color are hired purely through affirmative action. Sorry, Ladies and Gentlemen, but like Nazism, communism, and all totalitarian political movements based on lies, this system of lies and intimidation is very harmful and existentially threatening for whole sectors of the society, and for scientific disciplines, too.

We're still waiting for the first female physics Nobel prize winner who would say that she has found some institutionalized diversity efforts helpful – Marie Curie and Maria Goeppert Mayer weren't helped at all, and Donna Strickland considers herself a scientist, not a woman in science, and is annoyed when her name is being abused by the feminist ideologues.

However, we already have a great example of prominent negative contributions to particle physics. Sabine Hossenfelder released her book, Lost In Math (whose PDF was posted by her or someone else on the Internet and you may find it on Google), and is writing numerous essays to argue that no new collider should ever be built again and particle physics should be suspended and 90% of the physicists should be fired.

For example, two days ago, Nude Socialist published her musings titled Why CERN’s plans for a €20 billion supersized collider are a bad idea, whose title says everything you need to know (and I believe you have no reason to contribute funds to the socialist porn). Ms Hossenfelder complains about the "eye-watering" €21 billion price of the most ambitious version of the FCC collider. Because she feels lost in math, you may suspect that she chose the eye-watering adjective because she confused billions and trillions. But even if she did, it wouldn't matter, and she wouldn't change the conclusion, because mathematics never plays a decisive role in her arguments.

On Wednesday, I discussed her text Particle physicists want money for bigger collider where she outlined some bold plans for the future of particle physics (more precisely for its non-existence), especially in the next 20 years, such as:
Therefore, investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

No šit. Look, we are currently paying for a lot of particle physicists. If we got rid of 90% of those we'd still have more than enough to pass on knowledge to the next generation.

I am perfectly aware that there are theorists doing other things and that experimentalists have their own interests and so on. But who cares? You all sit in the same boat, and you know it. You have profited from those theorists' wild predictions that capture the public attention, you have not uttered a word of disagreement, and now you will go down with them.
And she wrote many theses along the same lines. From the beginning, when this blog was started in late 2004, I was explaining that the jihad against string theory wasn't specifically targeting string theory. It was just a manifestation of some people's great hostility towards quantitative science, creativity, curiosity, rigor, mental patience, and intellectual excellence – and string theory was just the first target because it's probably the best example of all these qualities that mankind has.

It seems to me that at least in the case of Sabine Hossenfelder, people see that I was right all along. This movement is just a generic anti-science movement and string theory or supersymmetry were the first targets because they are the scienciest sciences. But the rest of particle physics isn't really substantially different from the most prominent theories in theoretical physics, and neither is dark energy, dark matter, inflationary cosmology, and other things, so they should "go down" with the theories in theoretical physics, Hossenfelder proposes. It makes sense. If you succeeded in abolishing or banning string theory, of course you could abolish or ban things that "captured less public attention", too. It is just like in the story "First they came for the Jews...". And it's not just an analogy, of course. There's quite some overlap because some one-half of string theorists are Jewish while the ideology of the string theory critics was mostly copied from the pamphlets that used to define the Aryan Physics.

Well, as far as I know, Peter Woit and Lee Smolin – the prominent crackpots who hated string theory and played Sabine Hossenfelder's role around 2006 – never went far enough to demand the suspension of particle physics for 20 years, the dismissal of 90% of particle physicists, and other things. So even people from this general "culture" were surprised by Hossenfelder's pronouncements. For example, Lorenzo wrote:
Sabine, I [have been your uncritical fan for years] but I fear that recently your campaign against particle physics is starting to go a bit too far. [...]
Well, regardless of Lorenzo's sycophancy as well as paragraphs full of arguments, he was the guy who was later told that he had to "go down" as well, in another quote above. Many others have been ordered to be doomed, too. OK, why was it Ms Sabine Hossenfelder and not e.g. her predecessors Mr Peter Woit and Mr Lee Smolin who finally figured out the "big, simple, ingenious idea" – the plan to demand the death for particle physics as a whole?

The correct answer has two parts that contribute roughly equally, I think. The first part is analogous to Leonard's answer to Penny's implicit question about his night activities:
Penny: Oh, Leonard?
Leonard: Hey.
Penny: I found these [Dr Elizabeth Plimpton's underwear] in the dryer. I’m assuming they belong to Sheldon.
Leonard: Thanks. It’s really hard to find these in his size. So, listen. I’ve been meaning to talk to you about the other morning.
Penny: You mean you and Dr. Slutbunny?
Leonard: Yeah, I wanted to explain.
Penny: Well, you don’t owe me an explanation.
Leonard: I don’t?
Penny: No, you don’t.
Leonard: So you’re not judging me?
Penny: Oh, I’m judging you nine ways to Sunday, but you don’t owe me an explanation.
Leonard: Nevertheless, I’d like to get one on the record so you can understand why I did what I did.
Penny: I’m listening.
Leonard: She let me.
OK, why did Leonard have sex with Dr Plimpton? Because she let him. Why does Dr Hossenfelder go "too far" and demand the euthanasia of particle physics? Because they let her – or we let her. Everyone let her. So why not? Mr Woit and Mr Smolin didn't go "this far" because, first, they are really less courageous and less masculine than Ms Hossenfelder; and second, because – as members of the politically incorrect sex – they would genuinely face more intense backlash than a woman.

The second part of the answer why the "big plan" was first articulated by Ms Hossenfelder, a woman, and not by a man, like Mr Smolin or Mr Woit, is that it is more likely for a woman to grow hostile towards all of particle physics or any activity within physics that really depends on mathematics.

Mathematics is a man's game. The previous sentence is a slogan that oversimplifies things and a proper interpretation is desirable. The proper interpretation involves statistical distributions. Women are much less likely to feel really comfortable with mathematics and to become really successful in it (they are predicted – and seen – to earn about 1% of the Fields Medals, for example), especially advanced mathematics and mathematics that plays a crucial role, primarily because of the following two reasons:
  1. women's intellectual focus is more social, on nurturing, and less on "things" and mathematics
  2. women's IQ distribution is some 10% narrower, which means that the fraction of women with extremely high mathematical abilities – and it is this extreme tail that is needed – decreases much more quickly than for men (the narrower distribution means that women are less likely to be really extremely stupid than men, too)
Smolin, Woit, and Hossenfelder are just three individuals so it would be a fallacy to generalize any observations about the three of them to theories about whole sexes. On the other hand, lots of the differences (except for her being more masculine than Woit and Smolin when it comes to courage!) are pretty much totally consistent with the rational expectations based on lots of experience – extreme leftists would say "prejudices" – about the two sexes. Ms Hossenfelder, a woman, just doesn't believe that the arguments and derivations rooted in complex enough mathematics should play a decisive role. She has very unreasonably claimed that even totally technical if not mathematical questions such as the uniqueness of string theory are sociological issues. It's because women want to make things "social". Well, Lee Smolin has also preposterously argued that "science is democracy" but he just didn't go this far in the removal of mathematics from physics.

Just to be sure, I am not saying that Ms Hossenfelder is a typical feminist. She is not. She hasn't been active in any of the far-left campaigns. But her relationship towards mathematical methods places her among the typical women. She has been trained as a quantitative thinker which made her knowledge surpass that of the average adult woman, of course, but training cannot really change certain hardwired characteristics. On the other hand, a feminist is someone who believes that it is reasonable for women – and typical women like her – to have close to 50% of influence over physics (or other previously "masculine" activities). So what I am saying is the following: it is not her who is the real feminist in this story – it is the buyers and readers of her book and her apologists. Non-feminists, whether they're men or women, probably find it a matter of common sense that her book might be poetry or journalism but her opinions about physics itself just cannot be considered on par with those of the top men.

In the text above, we could see one mechanism by which the diversity efforts hurt particle physics. They made it harder to criticize a woman, in this case Ms Sabine Hossenfelder, for her extremely unfriendly pronouncements about science and for the bogus arguments on which she builds. Because simply pointing out that Hossenfelder is just full of šit would amount to "mansplaining", and the diversity efforts classify these things as politically incorrect, political correctness has helped to turn Ms Sabine Hossenfelder into an antibiotics-resistant germ.

The second mechanism by which the diversity efforts have caused this Hossenfelder mess is hiding in the answer to the question: Why did Ms Hossenfelder and not another woman start this movement to terminate particle physics?

Well, it's because she is really angry. And she got really angry about it because she has been more or less forced – for more than 15 years – to do things she naturally mistrusts, according to her own opinions. Indeed, my anger is the fair counterpart of that, as explained e.g. in the LHC presentation by Sheldon Cooper. ;-) She had to write papers about the search for black holes at the LHC, theories with a minimal length, and lots of other things – usually with some male collaborators in Frankfurt, Santa Barbara, and elsewhere. But based on her actual experience and abilities, she just doesn't "believe" that anything in particle physics has any sense or value.

This admission has been written repeatedly. For example, a conversation with Nima Arkani-Hamed in her book ends up with:
On the plane back to Frankfurt, bereft of Nima’s enthusiasm, I understand why he has become so influential. In contrast to me, he believes in what he does.
Right, Nima's energy trumps that of the FCC collider and he surely looks like he believes what he does. But why would Ms Hossenfelder not trust what she is doing? And if she didn't trust what she was doing, why wasn't she doing something else? Why hasn't she left the damn theoretical or particle physics? Isn't it really a matter of scientific integrity that a scientist should only do things that she believes in?

And that's how we arrive at the second mechanism by which the diversity ideology has "caused" the calls to terminate particle physics. The diversity politics is pushing (some of the) people into places where they don't belong. We usually talk about the problems that it causes to the places. But this "misallocation of the people" causes problems to the people, too. People are being thrown into prisons. Sabine Hossenfelder was largely thrown into particle physics and related fields and for years, it was assumed that she had to be grateful for that, the system needed her to improve the diversity ratios, and no one needed to ask her about anything.

But she has suffered. She has suffered for more than 15 years. To get an idea, imagine that you need to deal with lipsticks on your face every fudging day for 15 years, otherwise you're in trouble. She really hates all of you and she wants to get you. So Dr Stöcker, Giddings, and others, be extraordinarily careful about vampires at night because the first vampire could be your last.

The politically correct movement first forced Sabine Hossenfelder to superficially work in theoretical high energy physics (a field whose purpose is to use mathematical tools and arguments to suggest and analyze possible answers to open questions about the Universe). She never wanted to trust mathematics this much; she never wanted to derive the consequences of many theories, although at most "one theory" is relevant in "real life"; and she certainly didn't want to continue with this activity, which depends on the trust in mathematics, after the first failed predictions. (Genuine science is all about the falsification of models, theories, and paradigms, and failed predictions are everyday occurrences; scientists must continue, otherwise science would have stopped within minutes the first time it was tried in a cave. In the absence of paradigm-shifting experiments, it's obvious that theorists are ahead and are comparing a greater number of possible paths forward than 50 years ago – not everyone likes it, but it's common sense why the theoretical work has to be more extensive in this sense.)

And the PC ideology has kept her in this prison for more than 15 years.

And then, when she already emerged as an overt hater of particle physics, the same diversity ideology is turning her into an antibiotics-resistant germ because it's harder to point out that she isn't really good enough to be taken seriously when it comes to such fundamental questions. And it seems that some people don't care and in their efforts to make women stronger, they want to help her to sell her book – it seems that the risk that they are helping to kill particle physics seems to be a non-problem for them.

These are the two mechanisms by which the politically correct ideology threatens the very survival of scientific disciplines such as particle physics. So all the cowards in the HEP silent majority, you're just wrong. PC isn't an innocent cherry on a pie. PC is a life-threatening tumor and the cure had better start before it's too late.

And that's the memo.

by Luboš Motl ( at March 20, 2019 06:17 AM

March 19, 2019

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx sees Bennu spewing stuff into space
The asteroid's rotation rate is also increasing, and scientists continue refining the plan to collect a regolith sample next year.

March 19, 2019 09:16 PM

Matt Strassler - Of Particular Significance

The Importance and Challenges of “Open Data” at the Large Hadron Collider

A little while back I wrote a short post about some research that some colleagues and I did using “open data” from the Large Hadron Collider [LHC]. We used data made public by the CMS experimental collaboration — about 1% of their current data — to search for a new particle, using a couple of twists (as proposed over 10 years ago) on a standard technique. (CMS is one of the two general-purpose particle detectors at the LHC; the other is called ATLAS.) We had two motivations: (1) Even if we didn’t find a new particle, we wanted to prove that our search method was effective; and (2) we wanted to stress-test the CMS Open Data framework, to ensure that it really does provide all the information needed for a search for something unknown.

Recently I discussed (1), and today I want to address (2): to convey why open data from the LHC is useful but controversial, and why we felt it was important, as theoretical physicists (i.e. people who perform particle physics calculations, but do not build and run the actual experiments), to do something with it that is usually the purview of experimenters.

The Importance of Archiving Data

In many subfields of physics and astronomy, data from experiments is made public as a matter of routine. Usually this occurs after a substantial delay, to allow the experimenters who collected the data to analyze it first for major discoveries. That’s as it should be: the experimenters spent years of their lives proposing, building and testing the experiment, and they deserve an uninterrupted opportunity to investigate its data. To force them to release data immediately would create a terrible disincentive for anyone to do all the hard work!

Data from particle physics colliders, however, has not historically been made public. More worrying, it has rarely been archived in a form that is easy for others to use at a later date. I’m not the right person to tell you the history of this situation, but I can give you a sense for why this still happens today.

The fundamental issue is the complexity of data sets from colliders, especially from hadron colliders such as the Tevatron and the LHC. (Archiving was partly done for LEP, a simpler collider, and was used in later studies including this search for unusual Higgs decays and this controversial observation, also discussed here.) What “complexity” are we talking about? Collisions of protons and/or anti-protons are intrinsically complicated; particles of all sorts go flying in all directions. The general purpose particle detectors ATLAS and CMS have a complex shape and aren’t uniform. (Here’s a cutaway image showing CMS as a set of nested almost-cylinders. Note also that there are inherent weak points: places where cooling tubes have to run, where bundles of wires have to bring signals in and out of the machine, and where segments of the detector join together.) Meanwhile the interactions of the particles with the detector’s material are messy and often subtle (here’s a significantly oversimplified view). Not every particle is detected, and the probability of missing one depends on where it passes through the detector and what type of particle it is.

Even more important, 99.999% of ATLAS and CMS data is discarded as it comes in; only data which passes a set of filters, collectively called the “trigger,” will even be stored. The trigger is adjusted regularly as experimental conditions change. If you don’t understand these filters in detail, you can’t understand the data. Meanwhile the strategies for processing the raw data change over time, becoming more sophisticated, and introducing their own issues that must be known and managed.
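To get a feel for how drastic such filtering is, here is a toy sketch (the single-muon-threshold "menu" and the steeply falling momentum spectrum are invented for illustration; real trigger menus are vastly more complex and reject far more):

```python
import random

def toy_trigger(event, pt_threshold=25.0):
    """Keep the event if any muon candidate passes the pT threshold (toy model)."""
    return any(pt > pt_threshold for pt in event["muon_pts"])

random.seed(42)
# Invented, steeply falling pT spectrum (exponential with a 5 GeV scale);
# real spectra fall even faster, so real trigger rejection is far more severe.
events = [{"muon_pts": [random.expovariate(1 / 5.0)]} for _ in range(100_000)]
kept = [e for e in events if toy_trigger(e)]
frac = len(kept) / len(events)
print(f"kept {len(kept)} of {len(events)} events ({100 * frac:.3f}%)")
```

Even this crude single-cut filter throws away more than 99% of the toy events, and any later analysis that forgets the threshold will badly misread the surviving sample.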

I could easily go on (did I mention that at the LHC dozens of collisions occur simultaneously?). If, when you explore the data, you fail to account for all these issues, you can mistake a glitch for a new physical effect, or fail to observe a new physical effect because a glitch obscured it. Any experimentalist inside the collaborations is aware of most of these subtleties, and is surrounded by other experts who will be quick to complain if he or she forgets to account for one of them. That’s why it’s rare for the experimenters to report a result that has this type of error embedded in it.

Now, imagine writing a handbook that would encapsulate all of that combined knowledge, for use by people who will someday analyze the data without having access to that collective human library. This handbook would accompany an enormous data set taken in changing conditions, and would need to contain everything a person could possibly need to know in order to properly analyze data from an LHC experiment without making a technical error.

Not easy! But this is what the Open Data project at CERN, in which CMS is one of the participating experiments, aims to do. Because it’s extremely difficult, and therefore expensive in personnel and time, its value has to be clear.

I personally do think the value is clear, especially at the LHC. Until the last couple of decades, one could argue that data from an old particle physics experiment would go out of date so quickly, superseded by better experiments, that it really wasn’t needed. But this argument has broken down as experiments have become more expensive, with new ones less frequent. There is no guarantee, for instance, that any machine superseding the LHC will be built during my lifetime; it is a minimum of 20 and perhaps 40 years away. In all that time, the LHC’s data will be the state of the art in proton-proton collider physics, so it ought to be stored so that experts can use it 25 years from now. The price for making that possible has to be paid.

[This price was not paid for the Tevatron, whose data, which will remain the gold standard for proton-antiproton collisions for perhaps a century or more, is not well-archived.]

Was Using Open Data Necessary For Our Project?

Even if we all agree that it’s important to archive LHC data so that it can be used by future experimental physicists, it’s not obvious that today’s theorists should use it. There’s an alternative: a theorist with a particular idea can temporarily join one of the experimental collaborations, and carry out the research with like-minded experimental colleagues. In principle, this is a much better way to do things; it permits access to the full data set, it allows the expert experimentalists to handle and manage the data instead of amateurs like us, and it should in principle lead to state-of-the-art results.

I haven’t found this approach to work. I’ve been recommending the use of our technique [selecting events where the transverse momentum of the muon and antimuon pair is large, and often dropping isolation requirements] for over ten years, along with several related techniques. These remarks appear in papers; I’ve mentioned these issues in many talks, discussed them in detail with at least two dozen experimentalists at ATLAS and CMS (including many colleagues at Harvard), and even started a preliminary project with an experimenter to study them. But everyone had a reason not to proceed. I was told, over and over again, “Don’t worry, we’ll get to this next year.” After a decade of this, I came to feel that perhaps it would be best if we carried out the analysis ourselves.

Even then, there was an alternative: we could have just done a study of our method using simulated data, and this would have proved the value of our technique. Why spend the huge amount of time and effort to do a detailed analysis, on a fraction of the real data?

First, I worried that a study on simulated data would be no more effective than all of the talks I gave and all the personal encouragement I offered over the previous ten years. I think seeing the study done for real has a lot more impact, because it shows explicitly how effective the technique is and how easily it can be implemented. [Gosh, if even theorists can do it…]

Second, one of the things we did in our study is include “non-isolated muons” — muons that have other particles close by — which are normally not included in theorists’ studies. Dropping the usual isolation criteria may be essential for discoveries of hidden particles, as Kathryn Zurek and I have emphasized since 2006 (and studied in a paper with Han and Si, in 2007). I felt it was important to show this explicitly in our study. But we needed the real data to do this; simulation of the background sources for non-isolated muons would not have been accurate. [The experimenters rarely use non-isolated muons in the type of analysis we carried out, but notably have been doing so here; my impression is that they were unaware of our work from 2007 and came to this approach independently.]
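For readers unfamiliar with isolation, the sketch below shows how a relative-isolation variable of this general kind is computed: sum the transverse momenta of tracks in a cone around the muon and divide by the muon's own pT. The cone size of 0.3 and the 0.15 cut are common illustrative values, not the specific working points used in the analysis:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane, with phi wrap-around."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def rel_isolation(muon, tracks, cone=0.3):
    """Scalar-sum pT of nearby tracks divided by the muon pT (simplified)."""
    nearby = sum(t["pt"] for t in tracks
                 if 0 < delta_r(muon["eta"], muon["phi"], t["eta"], t["phi"]) < cone)
    return nearby / muon["pt"]

# Hypothetical event: a muon accompanied by a soft track inside the cone.
muon = {"pt": 50.0, "eta": 0.0, "phi": 0.0}
tracks = [{"pt": 10.0, "eta": 0.1, "phi": 0.1},   # inside the cone
          {"pt": 40.0, "eta": 2.0, "phi": 1.5}]   # far away, ignored

iso = rel_isolation(muon, tracks)
print(iso)   # 0.2: fails an illustrative iso < 0.15 cut,
             # but the muon survives if the isolation requirement is dropped
```

A muon from a hidden-sector decay chain often sits close to other decay products, so exactly this kind of cut is what removes the signal when isolation is required.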

Stress Testing the Archive

A further benefit to using the real data was that we stress-tested the archiving procedure in the Open Data project, and to do this fully, we had to carry out our analysis to the very end. The success or failure of our analysis was a test of whether the CMS Open Data framework truly provides all the information needed to do a complete search for something unknown.

The test received a passing grade, with qualifications. Not only did we complete the project, we were able to repeat a rather precise measurement of the (well-known) cross-section for Z boson production, which would have failed if the archive and the accompanying information had been unusable. That said, there is room for improvement: small things were missing, including some calibration information and some simulated data. The biggest issue is perhaps the format for the data storage (difficult to use and unpack for a typical user).
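Schematically, a cross-section measurement of this kind reduces to a counting formula, sigma = (N_observed − N_background) / (efficiency × luminosity). A toy version, with every number below entirely made up (they are not the CMS values):

```python
# Toy cross-section extraction; all inputs are hypothetical placeholders.
n_observed   = 10_000    # events passing the Z -> mu mu selection
n_background = 400       # estimated background events in the sample
efficiency   = 0.80      # selection efficiency x acceptance, from simulation
luminosity   = 6.0       # integrated luminosity in inverse picobarns

sigma = (n_observed - n_background) / (efficiency * luminosity)
print(f"sigma = {sigma:.0f} pb")   # sigma = 2000 pb
```

The hard part of the real measurement is not this arithmetic but determining the efficiency, the background estimate, and the luminosity correctly, which is exactly where missing calibration information in an archive would hurt.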

It’s important to recognize that the persons in charge of Open Data at CMS have a huge and difficult job; they have to figure out how to write the nearly impossible handbook I referred to above. It’s therefore crucial that people like our group of theorists actually use the open data sets now, not just after the LHC is over. Now, when the open data sets are still small, is the time to figure out what information is missing, to determine how to improve the data storage, to fill out the documentation and make sure it has no gaps. We hope we’ve contributed something to that process.

The Future

Should others follow in our footsteps? Yes, I think, though not lightly. In our case, five experts required two years to do the simplest possible study; we could have done it in one year if we’d been more efficient, but probably not much less. Do not underestimate what this takes, both in terms of understanding the data and in terms of learning how to do a kind of statistical analysis that most theorists rarely undertake.

But there are studies that simply cannot be done without real data, and unless you can convince an experimentalist to work with you, your only choice may be to dive in and do your best. And if you are already somewhat informed, but want to learn more about how real experimental analysis is done, so you can appreciate more fully what is typically hidden from view, you will not find a better self-training ground. If you want to take it on, I suggest, as an initial test, that you try to replicate our measurement of the Z boson cross-section. If you can’t, you’re not ready for anything else.

I should emphasize that Open Data is a resource that can be used in other ways, and several groups have already done this. In addition to detailed studies of jets carried out on the real data by my collaborators, led by Professor Jesse Thaler, there have been studies that have relied solely on the archive of simulated data also provided by the CMS Open Data project. These have value too, in that they offer proving grounds for techniques to be applied later to real data. Since exploratory studies of simulated data don’t require the extreme care that analysis of real data demands, there may be a lot of potential in this aspect of the Open Data project.

In the end, our research study, like most analyses, is just a drop in the huge bucket of information learned from the LHC. Its details should not obscure the larger question: how shall we, as a community, maintain the LHC data set so that it can continue to provide information across the decades? Maybe the Open Data project is the best approach.  If so, how can we best support it?  And if not, what is the alternative?

by Matt Strassler at March 19, 2019 01:35 PM

Lubos Motl - string vacua and pheno

Naturalness, watermelons, populism, intuition, and intelligence
A technical comment at the beginning: The updated Disqus widget allows the commenters to press new buttons and apply basic and not so basic HTML formatting on their comments – bold face, italics, underline, strikethrough, links, spoilers (!), codes, and quotes. Feel free to play, children. ;-)

Like her famous compatriot, Sabine Hossenfelder believes that a lie repeated many times becomes the truth, which is why she has brought the 347th rant against naturalness to her brainwashed, moronic readers.

In one way or another, naturalness is a principle saying that the dimensionless parameters in our physical theories should be of order one – and we should seriously ask "Why" and look for an explanation otherwise. Philosopher Porter Williams just wrote a 32-page-long paper Two Notions of Naturalness whose main point is to distinguish two different ideas that are labeled "naturalness". It's great but the number of flavors of naturalness is greater than two – yet all of them are flavors of the same thing.
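A standard illustration of a dimensionless parameter that is spectacularly far from order one – the kind of number naturalness demands an explanation for – is the squared ratio of the measured Higgs boson mass to the Planck mass:

```python
m_higgs  = 125.0      # GeV, measured Higgs boson mass
m_planck = 1.22e19    # GeV, Planck mass

ratio = (m_higgs / m_planck) ** 2
print(f"{ratio:.2e}")   # ~1.05e-34, spectacularly far from order one
```

This tiny number is the hierarchy problem in one line: either the ratio is fine-tuned, or some mechanism makes it natural.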

At the beginning, she posts the image above [source] with the caption: "Square watermelons. Natural?"

Too bad she hasn't even tried to answer her own question because it's an excellent metaphorical example of naturalness – which can teach you a lot about the actual physicists' motivations when they assume naturalness in one way or another.

OK, excellent, if or when you see the square watermelons for the first time, don't you think "it's strange"? I surely did. And I think that so does every damn curious human being on Earth. Why? Because watermelons that we know in Nature tend to be rather round. In fact, most fruits are quasi-round.

The fruits are quasi-round because there's a lot of water or mass in them, some shell or skin has to confine that water or mass, and the plant minimizes the amount of material for the skin or shell. In effect, the optimum shape is determined by mechanisms similar to those that fix the shape of a bubble, and it ends up close to round. If you have really never heard of the square watermelons, don't click on the hyperlink and don't read the next paragraph. And try to answer the question: Are these square watermelons made of plastic in factories, do they grow in the wild, or something in between?

The answer is "something in between". They are really naturally grown fruits – but in an environment that was engineered by humans. These watermelons are grown in boxes! The Japanese like to do it (clearly, it is a clever bastardizing of Mother Nature, similar in spirit to their bonsai trees) and charge $100 for one watermelon.

As you can see, the surprising shape isn't "quite" natural. It's been helped by the humans and their seemingly artificial business tricks. We could discuss whether square fruits like that exist somewhere in Nature or whether they could. It could be an interesting discussion and someone could bring some surprising data. I don't want to claim that it's completely impossible for the cubic fruits to emerge naturally. But I surely want to claim that a curious, intelligent person is expected to be surprised and ask what is the origin of these unusual shapes. Because the shapes are unusual for fruits, the answer has to be a bit unusual, too!

This is really the general point of naturalness. When the mechanism – for the growth of fruits or the generation of a low-energy effective field theory out of a deeper theory – is dull and straightforward, the dimensionless constants one generates are usually of order one. And if they're not of order one, there has to be something special – something that is very interesting even if we understand it just approximately. For example the tool you may buy for $31.12, a somewhat unnatural sequence of digits 1-2-3. ;-)

Note that I could map the square watermelon example to the case of parameters in quantum field theory a bit more closely. On the watermelon, we may calculate the ratio \(P=R_{\rm min}/R_{\rm average}\) of the minimum curvature radius to the average curvature radius evaluated over the smooth green surface. For round watermelons, you may get a ratio of order \(P\approx 0.5\). For square watermelons, you could get \(P\approx 0.01\) or so if the square watermelon's edges end up being sharp enough. The square watermelon looks intuitively unnatural to us because \(P\ll 1\). And it should.
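This ratio is easy to check numerically. The sketch below stands in for the watermelon cross-section with a superellipse \(|x|^n+|y|^n=1\) (n = 2 is a circle, large n is square-ish) and approximates the local curvature radius at each point by the circumradius of three consecutive sample points; the details are illustrative, not a claim about real fruit:

```python
import numpy as np

def radius_ratio(n, samples=4000):
    """P = R_min / R_mean for the superellipse |x|^n + |y|^n = 1 (a 2D
    stand-in for the watermelon surface). The osculating-circle radius is
    approximated by the circumradius of three consecutive sample points."""
    t = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    x = np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2 / n)
    y = np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2 / n)
    p = np.stack([x, y], axis=1)
    p1, p2, p3 = p, np.roll(p, -1, axis=0), np.roll(p, -2, axis=0)
    a = np.linalg.norm(p2 - p3, axis=1)
    b = np.linalg.norm(p1 - p3, axis=1)
    c = np.linalg.norm(p1 - p2, axis=1)
    cross = (p2 - p1)[:, 0] * (p3 - p1)[:, 1] - (p2 - p1)[:, 1] * (p3 - p1)[:, 0]
    r = a * b * c / (2 * np.abs(cross) + 1e-300)   # circumradius of each triple
    return r.min() / r.mean()

print(radius_ratio(2))   # circle: exactly 1 (every osculating circle is the circle)
print(radius_ratio(8))   # square-ish "watermelon": far below 1
```

The round shape gives \(P=1\) while the square-ish one gives \(P\ll 1\), matching the intuition that sharp edges signal something unnatural.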

People who don't ask "Why?" when they see a square watermelon simply lack the scientific curiosity. There is nothing "great" about that intellectual deficiency! And indeed, if you concluded that the "playful human hand" had to be involved in the creation of the square watermelons, your intuition was totally right. This is an example of the generalized naturalness reasoning.

Can we rigorously prove naturalness – that the parameters are never much smaller than one? We can't prove it rigorously. We can't even make the statement rigorous. For example, we would have to decide "which numbers are small enough to be called much smaller than one". There is clearly no sharp bound. And if we picked a bound arbitrarily, there's no reason to expect that a sharp theorem starts to hold from that bound, e.g. for all \(P\lt 0.01\).

But we can justify naturalness by some statistical reasoning. A deeper level of the laws of Nature allows the values of \(P\) to be calculated in some way – think about some detailed theory about the growth of watermelons. And if these theories don't depend on a pre-existing dimensionless parameter that would play the role of \(P\) or its functions, it's sane to expect the calculable \(P\) to be of order one.

In other words, there may be a probabilistic distribution \(\rho(P)\) for various values of \(P\). We don't know what that distribution should be, either. But again, it should be natural which means that its main support is probably located at values of \(P\) that are of order one (let's assume that \(P\) is a positive number here). The probability distribution has to be normalizable so it can't just allow arbitrarily small or large values of \(P\) to be equally likely. The most likely values of \(P\) could be comparable to trillions but then the distribution \(\rho(P)\) itself would be unnatural.

If the distribution \(\rho(P)\) is uniform and supported by the interval \((0,1)\), then the probability that \(P\lt 0.01\) will end up being \(0.01\) itself. Extremely small values of parameters translate to extremely small probabilities. And if they're extreme enough, the precise way of the "translation" doesn't matter much. If there's some fine-tuning, reasonable enough translations can't change it. Judge Potter Stewart once defined "hard porn" by saying "I can't define it but I know it when I see it." If that organ's hardness in some location is some very large number \(X\), then it really no longer matters much whether it's \(X\) or \(2X\) and it is simply "hard porn". It's similar with naturalness: In a huge fraction of situations, we can rather safely say "it's natural" or "it's not" because the borderline cases (where the details of the definition would matter) are relatively rare.
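The claim that extremely small parameter values translate to extremely small probabilities is easy to verify with a Monte Carlo draw from the uniform distribution just described:

```python
import random

random.seed(1)
n = 1_000_000
# Draw P from the uniform distribution on (0, 1) and count how often it
# lands below 0.01, i.e. how often the parameter looks fine-tuned.
frac = sum(1 for _ in range(n) if random.random() < 0.01) / n
print(frac)   # close to 0.01: a parameter below 0.01 is itself a ~1% accident
```

Any reasonable reweighting of the distribution changes this number somewhat, but cannot turn a one-in-a-hundred accident into a generic outcome, which is the point of the "translation doesn't matter much" remark above.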

As you can see, my argumentation is strictly speaking circular. But it doesn't mean that it's wrong, unimportant, or that it can be ignored. If you reject naturalness of any flavor and completely, well, I can't change your mind. That's what Ms Hossenfelder is doing, of course. But even in the absence of a proof of any flavor of the naturalness principle, there's still an empirical fact and it's the following:
Smart theoretical physicists simply do care about naturalness.
Sabine Hossenfelder encourages her readers to elaborate upon their conspiracy theories about group think etc. It must surely be a matter of group think that physicists care about naturalness. Well, if you can buy this sort of populist stuff, you may praise your peers – communities of commenters such as those at Backreaction are clearly among the best examples of group think in the Universe, so the complaint is really cute – but you can't change the fact that your brain is a rotten chunk of junk. There is absolutely nothing sociological about the physicists' tendency to be sensitive about fine-tuning vs naturalness and there's nothing mob-like about the tendency to care about naturalness.

Bragging isn't the main purpose of this paragraph but I have considered naturalness extremely important since I was 5 years old or so. There are various reasons why it's so. First of all, by that time, I had seen lots of round enough watermelons and similar fruits to instinctively understand what shapes of fruits and other things are natural. (Clearly, the fruits are neither the main application of naturalness nor my expertise. We are really talking about the form of the equations describing the physical laws.) Of course near-round shapes are more natural. More importantly for a theorist, I had done a sufficient number of straightforward calculations of dimensionless numbers that ended up with values that may be considered "values of order one". And when some results were "very far from order one", I could give you a quick explanation of why it was so.

I don't know how widespread the realization is. As far as I can say, even when I was 5, I could have been the only person in the city of Pilsen who actively realized – and could articulate, in a somewhat childish physics speak – that numbers of order one are natural and those that are much higher are not. The idea – coming from the overwhelming, dull majority that doesn't care or doesn't know what I am talking about – that my lonely realization is an example of group think is an amazing case of chutzpah. At any rate, the basic realization was never shaped by anyone else, let alone "group think" that requires an actual "group" of people.

People who keep growing as theoretical physicists become increasingly appreciative of naturalness. So the importance of naturalness had its manifestations in elementary school, high school, college... and let me jump to graduate school. We had to prepare for the qualifiers – a Russian friend kept scaring me that I could really fail, which wasn't realistic, as I saw afterwards, but it was still fun to achieve some historically great scores. But the point is that the oral exams – and perhaps some problems in the written exam – contain a huge number of problems of the type "quickly make an order-of-magnitude estimate of some quantity".

The number of such problems – sometimes extraordinarily practical problems, at least from a fundamentally theoretical physicist's viewpoint, you know – that a soon-to-be physics PhD can solve is huge and they really cover all important things in the Universe, if I can borrow a clarifying phrase from Sheldon Cooper. Almost none of them were related to particle physics. Just a totally trivial example: Estimate the drift velocity of electrons in a wire of some thickness and some normal current. Great. The student has to divide the current density (current over the cross section) by the charge carrier density (I mean the number density times the electron charge) and gets a result below 1 millimeter per second or so. So much smaller than the Fermi speed above 1 kilometer per second! The individual electrons are flying very quickly but their "center of mass" is moving rather slowly when the currents are realistic.
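The drift-velocity estimate sketched above goes through in a few lines (standard textbook numbers; the 1 A current and 1 mm² cross section are the usual illustrative choices):

```python
# Drift velocity v_d = I / (n * q * A) for a copper wire.
I = 1.0          # current in amperes
A = 1.0e-6       # cross section in m^2 (a 1 mm^2 wire)
n = 8.5e28       # free-electron density of copper, per m^3
q = 1.602e-19    # elementary charge in coulombs

v_drift = I / (n * q * A)
print(f"{v_drift * 1000:.3f} mm/s")   # ~0.073 mm/s, well below 1 mm/s
```

Indeed, the answer is a small fraction of a millimeter per second, orders of magnitude below the Fermi speed of the individual electrons.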

I am talking about the order-of-magnitude estimates – and parametric estimates in general – because they undoubtedly belong to the toolkit that every good theoretical and experimental physicist must have mastered. Not only as a philosophical thesis that she can mindlessly parrot, but as something that he can use in thousands of examples (I've used both sexes so that the feminists can't whine about discrimination).

A big part of physicists' general, instinctive understanding of physics is about these parametric estimates in which the numerical prefactors and other "details" (such as subleading corrections in some expansion) are simply ignored. Perhaps one-half of the physicists' efficient understanding of the phenomena in the Universe would be impossible without this methodology – dimensional analysis etc.

Now, why does it make sense to ignore the (dimensionless) numerical prefactors as details? Why don't they totally change the result? Well, because they're assumed to be of order one. They're assumed to differ from \(1\) by at most one or two or three orders of magnitude – while the other known (mostly dimensionful) parameters a priori belong to huge intervals that may span dozens (a larger number than the previous one) of orders of magnitude! This assumption may be more justified in some cases and less justified in others. But it's something that is so extremely useful and informative in such a huge fraction of the examples in physics (and other natural sciences, not to mention some everyday problems and even social sciences) that a physicist surely cannot throw away this whole industry of order-of-magnitude estimates.
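A classic illustration of why this works – a sketch under the assumption that dimensional analysis alone fixes the answer up to an O(1) prefactor: the only length one can build from ħ, the electron mass, and the Coulomb coupling turns out to be the Bohr radius, and there the unknown prefactor happens to be exactly 1.

```python
import math

hbar = 1.055e-34   # reduced Planck constant [J s]
m_e  = 9.109e-31   # electron mass [kg]
e    = 1.602e-19   # elementary charge [C]
eps0 = 8.854e-12   # vacuum permittivity [F/m]

# The unique length scale built from these constants by dimensional analysis:
a = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"a ≈ {a:.2e} m")  # ~5.3e-11 m: the Bohr radius, prefactor exactly 1
```

Here the "details" that dimensional analysis cannot see contribute a factor of exactly one – the lucky extreme of the order-one assumption.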

Every competent physics graduate student surely understands that the principles "we may consider the numerical factors to be details in our estimates" and "the dimensionless parameters are assumed to be of order one" are closely related if not almost equivalent. So when Sabine Hossenfelder writes
Williams offers one argument that I had not heard before, which is that you need naturalness to get reliable order-of-magnitude estimates. But this argument assumes [...],
you can immediately see that she is simply not a real theoretical physicist and she has never been one. Otherwise she would have heard about the relationship – well, she would really be capable of figuring it out herself, and she would have actually done it decades ago. She couldn't have meritocratically passed tests like the PhD qualifying exams. Her PhD is demonstrably fraudulent. She has never understood these basic things – even though before one gets a physics PhD, she should not only understand them but be capable of applying them in basically any physical context.

New physics beyond the Higgs that could have been seen at the LHC may be said to be a "failed prediction of naturalness". As I said, it was a very soft prediction and I have never thought that the odds were too different from 50-50. Of course even a "big desert" is plausible – and, from a certain point of view, beautiful (and no, I will not erase this word from my vocabulary) – given some mechanisms that enable it. "New physics at the LHC" was always partly a matter of phenomenologists' (and other excitement-loving physicists') wishful thinking, not a hard prediction of anything. But naturalness isn't a high-precision formula. Naturalness is a grand principle, a manifestation of basic Bayesian inference applied to parameter spaces, and a strategy for getting closer to realistic answers in physics and other natural sciences. You can't really refute it or falsify it – just like you cannot "falsify" mathematics as a discipline.

Hossenfelder, her readers, and this whole organized anti-physics movement are obsessed with the idea that they will falsify all of theoretical physics or all of physics or all of logical thinking. You just can't. You live in a fantasy land, buddies. These fantasy dwellers love to define some "rules of the game" and they decide that an outcome classified as a "failure" may be interpreted as a "falsification of naturalness or theoretical physics" or whatever they like. But all these conclusions are absolutely irrational and, more generally, fantasy dwellers are simply not the ones defining the rules of the game. Nature and the laws of mathematics do. The scientists' job is to increasingly understand them, not to beat them with their opinions.

At the end, Hossenfelder writes:
The LHC data has shown that the naturalness arguments that particle physicists relied on did not work. But instead of changing their methods of theory-development, they adjust their criteria of naturalness to accommodate the data. This will not lead to better predictions.
Even if that outcome were a "failure", it was one failure of a probabilistic strategy, not a falsification of anything, and "another failure" doesn't follow from the first one at all.

Of course the adjustment of the people's understanding of naturalness is the first sensible thing – and quite possibly, the only sensible thing – that the theoretical physicists should do after the strategy was seen as unsuccessful due to the null results at the LHC – and the direct dark matter searches, for example.

No one knows – and, despite her self-confident suggestions to the contrary, even Ms Hossenfelder cannot know – whether the adjusted notions of naturalness will lead to better predictions. But we will never know if we don't try. Of course it's the professional – and moral – duty of the physicists to play with naturalness, adjust it, combine it, recombine it, and try to shape a more accurate cutting-edge picture of the laws of physics. Physicists are obliged to do exactly what she claims they're not allowed to do.

Maybe someone invents a new principle that will de facto eliminate naturalness from physics – because it will be replaced by something completely different. But this scenario is a pure speculation at this moment. We can't pretend that it has taken place because it hasn't taken place. So naturalness and its variations will remain a tool of the physicists whether a crackpot likes it or not. Physicists will keep on playing with this methodology.

That process may lead to an additional bunch of soft predictions that will be confirmed or refuted by an experiment. But it is a part of science. The alternative, to throw away basic physicists' tools and principles such as the principle of naturalness in its most generous form and without any replacement, is a road to hell – a method to abolish science as we have known it for centuries. Physicists will surely try to find and apply as many novel ways as possible to guess what may happen in future experiments – but in the relative absence of alternatives, of course they will still need to exploit some sort of naturalness.

As long as theoretical physicists are being picked according to their ability to predict or retrodict phenomena, or to explain and quantify the phenomena in Nature (and I sincerely hope that graduate students won't be given physics PhDs for their vigor while licking the aßes of the likes of Ms Hossenfelder), these physicists will be vastly more appreciative of naturalness than the average layman – because thinking in terms of naturalness and fine-tuning is almost inseparably connected with the scientific curiosity and quantitative instincts of the physicists' type. You can't do anything about it, Ms Hossenfelder – and neither can your equally worthless sycophants.

Incidentally, Pavel Nadolsky tried to explain to her that there are extra problems in the absence of naturalness – without naturalness, the predictive or explanatory power of different theories can't even be properly compared. The previous half-sentence is already too complex for her poultry brain, so she said Nadolsky was only talking about individual theories' explanatory power – that's the maximum complexity her brain can contain. But he was talking about comparisons of the explanatory power of several theories – and that's way too difficult for her populist, stupid knee-jerk reactions. So of course Mr Nadolsky couldn't teach her anything. She is unable and unwilling to learn anything and, frankly speaking, it's way too late for her to learn these basic ideas about physics.

She's already living in a different epoch – it is Ihr Kampf to persuade as many morons as possible that her fundamental problems with some basic insights and methods of physics are just fine. They are not.

And that's the memo.

Oops, it isn't quite the end yet. She also added a bunch of fast appraisals in a footnote at the very bottom – and these two short sentences remind me why I simply couldn't stand that fudged-up pretentious pseudointellectual for a millisecond:
The strong CP-problem (that’s the thing with the theta-parameter) is usually assumed to be solved by the Pecci-Quinn mechanism, never mind that we still haven’t seen axions. The cosmological constant has something to do with gravity, and therefore particle physicists think it’s none of their business.
It's so arrogant yet so stupid. We haven't seen an axion yet but we're allowed to talk about the Peccei-Quinn [note the spelling] mechanism because this is a damn theoretical prediction made by a theory. The theory is ahead of the experimental confirmation, which fully explains why the experimental discovery hasn't taken place yet – and there's nothing wrong with that. This is how the discovery process works in about 50% of the cases in the history of physics.

In the remaining 50%, the experimenters are the first ones to discover something new and theorists create a theory explaining that new phenomenon afterwards. Needless to say, Ms Hossenfelder doesn't like this ordering, either – she doesn't want to build colliders because theorists don't have "firm predictions" for the new collider in advance. As you can see, she hates both scenarios, the one in which the theorists start and the one in which the experimenters start – she hates 100% of the progress in physics.

As of February 2019, it is unknown whether there are axions in Nature and whether one of them is responsible for solving the strong CP-problem. Her claim that the answer is known to her is just a pile of šajze.

Similarly, she writes that it's "none of particle physicists' business" to discuss the cosmological constant because the cosmological constant has "something to do with gravity". Holy cow. Some particle physicists sensibly neglect gravity simply because it's very weak in most of the particle physics experiments they're focusing upon, but other particle physicists, especially those closer to formal theory, do discuss the fundamental laws of physics, and they do include – have to include – (quantum) gravity. Moreover, everything that has ever been observed has "something to do with gravity" – because all mass/energy/momentum is gravitationally coupled. Particle physics and the fundamental questions of cosmology are really inseparable.

What she actually wants to implicitly push down your throat is that the cosmological constant phenomena (the accelerated expansion of the Universe) are consequences of some modifications of general relativity which is why relativists, and not particle physicists, should discuss it. But that's just an unsupported hypothesis of hers. An equally plausible – if not much more plausible – hypothesis is that the tiny cosmologically observed vacuum energy is simply what we see and it simply has to correspond to the energy density terms in a theory of particle physics.

Both possibilities are viable and researchers must preserve their freedom to do research on both.

Some particle physicists ignore the problem of the tiny cosmological constant because they're focusing on quantum field theory, which isn't compatible with quantum gravity (at least due to the non-renormalizability of gravity), so their QFT assumptions and principles may be assumed to fail there. But others, like string theorists who study the cosmological constant, simply cannot ignore this effect and the tiny magnitude of the constant. They're shocked by the tiny constant much like physicists are shocked by any unnatural constant in physics – and this one is tiny, indeed. They spend their energy looking for anthropic/naturalness/other explanations of the puzzle because that defines their work, at least the part of their work linked to the cosmological constant.

The fact that she can successfully pump some easy answers and solutions or non-solutions to all these difficult questions down the scumbags' throats doesn't mean that this indoctrination has something to do with proper scientific research.

by Luboš Motl at March 19, 2019 01:38 AM

March 18, 2019

Jon Butterworth - Life and Physics

Running over the same old ground?
Last week I was in CERN for various meetings. Rather unexpectedly, these included one with Roger Waters in which I totally failed to say “Welcome to the Machine” at the right moment. The main business was CERN’s Scientific Policy Committee, … Continue reading

by Jon Butterworth at March 18, 2019 08:14 AM

March 17, 2019

Clifford V. Johnson - Asymptotia

Painted
You may have heard that there are an estimated 1 billion Painted Lady butterflies passing through the Los Angeles area right now. Just after sunrise this morning I watched them begin their daily swarming. Fascinating! Later, just after lunch, I captured some quick shots of this one for you… -cvj

The post Painted appeared first on Asymptotia.

by Clifford at March 17, 2019 05:41 AM

March 16, 2019

Robert Helling - atdotde

Smokescreen ("Nebelkerze"): the CDU proposal for "no upload filters"
Sorry, this is one of the occasional posts about German politics, thus originally in German. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (it must be stopped in its current form!!! March 23rd is the international protest day), now that the CDU party has proposed how to implement it in German law – although so unspecifically that all the problematic details are left out. Here is the post.

Maybe I'm too dumb, but I don't see what exactly the progress is compared to what is being discussed at the EU level – except that the CDU proposal is so unspecific that all the internal contradictions vanish into the fog. At the EU level, too, the proponents say that acquiring licenses is much preferable to filtering. That in itself is not new.

What is new – at least in this Handelsblatt article; I haven't found it anywhere else – is the mention of hash sums ("digital fingerprints"). Or is that supposed to be something more like a digital watermark? That would be a real innovation, but it would nip the whole procedure in the bud, since only the original file would be protected (and that would be trivial to detect), while every form of derivative work would fall completely through the cracks: one could "free" a work with a trivial modification. Otherwise we are back to the dubious filters based on AI technology that doesn't exist today.

The other element is the blanket license. I would then no longer have to sign contracts with every rights holder, but only with a "VG Internet" collecting society. But the big open question is again who it is supposed to apply to. The intended targets are, of course, YouTube, Google and FB again. But how do you write that down? That is precisely the central stumbling block of the EU directive: everyone needs a blanket license, unless they are non-commercial (and who is?), or (younger than three years, with few users and little revenue), or they are Wikipedia, or they are GitHub? That is again the "the internet is like television – with a few big broadcasters and such – only somehow different" worldview, so happily propagated by people who watch the internet from a distance, because it practically flattens everything else. What about forums or photo hosters? Would they all have to acquire a blanket license (which would have to be priced high enough to cover all the film and music rights of the entire world)? What prevents this from ending up as a "whoever runs a service on the internet must first buy a paid internet license before going online" law, which at any nontrivial fee would be the end of all grassroots innovation?

It would also be interesting to see how the revenues of the VG Internet would be distributed. You'd be a rogue to suspect that large parts would end up with, say, the press publishers. That would finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn so much" law. In that case the license fee had best be a percentage of revenue – in effect, an internet tax.

And I won't even start on what happens when every European country cooks up its own drastic implementation soup like this.

All in all, quite a successful coup by the CDU, which may manage to take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything in a vague cloud of fog – while all the problematic rules presumably lurk in the details.

by Robert Helling at March 16, 2019 09:43 AM

March 15, 2019

Jon Butterworth - Life and Physics

Catching up
I have been too distracted to write much lately. This is partly due to the demoralising backdrop of UK politics, and partly because I have been having fun with physics and related matters. This is a quick catch up on … Continue reading

by Jon Butterworth at March 15, 2019 10:22 AM

John Baez - Azimuth

Algebraic Geometry

A more polished version of this article appeared in Nautilus on February 28, 2019. This version has some additional material.

How I Learned to Stop Worrying and Love Algebraic Geometry

In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. As the name suggests, this is the study of geometry using algebra. Around 1637, Pierre Fermat and René Descartes laid the groundwork for this subject by taking a plane, mentally drawing a grid on it as we now do with graph paper, and calling the coordinates x and y. We can then write down an equation like x^2 + y^2 = 1, and there will be a curve consisting of points whose coordinates obey this equation. In this example, we get a circle!
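This translation of geometry into algebra can be made very concrete. A classical fact (not spelled out in the article) is that the circle has a rational parametrization: every rational value of t gives a point with rational coordinates on x² + y² = 1, which can be verified with exact arithmetic:

```python
from fractions import Fraction

def circle_point(t):
    """Map a rational t to a rational point on the circle x^2 + y^2 = 1."""
    t = Fraction(t)
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

for t in [0, 1, Fraction(1, 2), Fraction(3, 7)]:
    x, y = circle_point(t)
    assert x**2 + y**2 == 1  # exact arithmetic: the point really is on the circle
print("all sample points lie on the circle")
```

For t = 1/2 this produces (3/5, 4/5) – the familiar 3-4-5 Pythagorean triple in disguise.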

It was a revolutionary idea at the time, because it lets us systematically convert questions about geometry into questions about equations, which we can solve if we’re good enough at algebra. Some mathematicians spend their whole lives on this majestic subject. But I never really liked it much—until recently. Now I’ve connected it to my interest in quantum physics.

We can describe many interesting curves with just polynomials. For example, roll a circle inside a circle three times as big. You get a curve with three sharp corners called a “deltoid”, shown in red above. It’s not obvious that you can describe this using a polynomial equation, but you can. The great mathematician Leonhard Euler dreamt this up in 1745.

As a kid I liked physics better than math. My uncle Albert Baez, father of the famous folk singer Joan Baez, worked for UNESCO, helping developing countries with physics education. My parents lived in Washington D.C.. Whenever my uncle came to town, he’d open his suitcase, pull out things like magnets or holograms, and use them to explain physics to me. This was fascinating. When I was eight, he gave me a copy of the college physics textbook he wrote. While I couldn’t understand it, I knew right away that I wanted to. I decided to become a physicist.

My parents were a bit worried, because they knew physicists needed mathematics, and I didn’t seem very good at that. I found long division insufferably boring, and refused to do my math homework, with its endless repetitive drills. But later, when I realized that by fiddling around with equations I could learn about the universe, I was hooked. The mysterious symbols seemed like magic spells. And in a way, they are. Science is the magic that actually works.

And so I learned to love math, but in a certain special way: as the key to physics. In college I wound up majoring in math, in part because I was no good at experiments. I learned quantum mechanics and general relativity, studying the necessary branches of math as I went. I was fascinated by Eugene Wigner’s question about the “unreasonable effectiveness” of mathematics in describing the universe. As he put it, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite Wigner’s quasi-religious language, I didn’t think that God was an explanation. As far as I can tell, that hypothesis raises more questions than it answers. I studied mathematical logic and tried to prove that any universe containing a being like us, able to understand the laws of that universe, must have some special properties. I failed utterly, though I managed to get my first publishable paper out of the wreckage. I decided that there must be some deep mystery here, that we might someday understand, but only after we understood what the laws of physics actually are: not the pretty good approximate laws we know now, but the actual correct laws.

As a youthful optimist I felt sure such laws must exist, and that we could know them. And then, surely, these laws would give a clue to the deeper puzzle: why the universe is governed by mathematical laws in the first place.

So I went to graduate school—to a math department, but motivated by physics. I already knew that there was too much mathematics to ever learn it all, so I tried to focus on what mattered to me. And one thing that did not matter to me, I believed, was algebraic geometry.

How could any mathematician not fall in love with algebraic geometry? Here’s why. In its classic form, this subject considers only polynomial equations—equations that describe not just curves, but also higher-dimensional shapes called “varieties.” So x^2 + y^2 = 1 is fine, and so is x^{47} - 2xyz = y^7, but an equation with sines or cosines, or other functions, is out of bounds—unless we can figure out how to convert it into an equation with just polynomials. To me as a graduate student, this seemed like a terrible limitation. After all, physics problems involve plenty of functions that aren’t polynomials.

This is Cayley’s nodal cubic surface. It’s famous because it is the variety with the most nodes (those pointy things) that is described by a cubic equation. The equation is (xy + yz + zx)(1 - x - y - z) + xyz = 0, and it’s called “cubic” because we’re multiplying at most three variables at once.
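One can check directly that the four points (0,0,0), (1,0,0), (0,1,0), (0,0,1) are the nodes: the cubic and all three of its partial derivatives vanish there. This is a sketch with the derivatives computed by hand from the equation in the caption; the choice of test points is a standard fact about the Cayley cubic in these coordinates:

```python
def f(x, y, z):
    """Cayley's nodal cubic: (xy + yz + zx)(1 - x - y - z) + xyz."""
    return (x * y + y * z + z * x) * (1 - x - y - z) + x * y * z

def grad_f(x, y, z):
    """Hand-computed partial derivatives of f via the product rule."""
    s = 1 - x - y - z
    q = x * y + y * z + z * x
    return ((y + z) * s - q + y * z,
            (x + z) * s - q + x * z,
            (x + y) * s - q + x * y)

nodes = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
for p in nodes:
    assert f(*p) == 0                  # the point lies on the surface
    assert grad_f(*p) == (0, 0, 0)     # the gradient vanishes: a singular point
print("all four points are singular points of the cubic")
```

A vanishing gradient is exactly what makes these points “pointy”: the surface has no well-defined tangent plane there.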

Why does algebraic geometry restrict itself to polynomials? Mathematicians study curves described by all sorts of equations – but sines, cosines and other fancy functions are only a distraction from the fundamental mysteries of the relation between geometry and algebra. Thus, by restricting the breadth of their investigations, algebraic geometers can dig deeper. They’ve been working away for centuries, and by now their mastery of polynomial equations is truly staggering. Algebraic geometry has become a powerful tool in number theory, cryptography and other subjects.

I once met a grad student at Harvard, and I asked him what he was studying. He said one word, in a portentous tone: “Hartshorne.” He meant Robin Hartshorne’s textbook Algebraic Geometry, published in 1977. Supposedly an introduction to the subject, it’s actually a very hard-hitting tome. Consider Wikipedia’s description:

The first chapter, titled “Varieties,” deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert’s Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references.

If you can’t make heads or tails of this… well, that’s exactly my point. To penetrate even the first chapter of Hartshorne, you need quite a bit of background. To read Hartshorne is to try to catch up with centuries of geniuses running as fast as they could.

One of these geniuses was Hartshorne’s thesis advisor, Alexander Grothendieck. From about 1960 to 1970, Grothendieck revolutionized algebraic geometry as part of an epic quest to prove some conjectures about number theory, the Weil Conjectures. He had the idea that these could be translated into questions about geometry and settled that way. But making this idea precise required a huge amount of work. To carry it out, he started a seminar. He gave talks almost every day, and enlisted the help of some of the best mathematicians in Paris.

Alexander Grothendieck at his seminar in Paris

Working nonstop for a decade, they produced tens of thousands of pages of new mathematics, packed with mind-blowing concepts. In the end, using these ideas, Grothendieck succeeded in proving all the Weil Conjectures except the final, most challenging one—a close relative of the famous Riemann Hypothesis, for which a million dollar prize still waits.

Towards the end of this period, Grothendieck also became increasingly involved in radical politics and environmentalism. In 1970, when he learned that his research institute was partly funded by the military, he resigned. He left Paris and moved to teach in the south of France. Two years later a student of his proved the last of the Weil Conjectures—but in a way that Grothendieck disliked, because it used a “trick” rather than following the grand plan he had in mind. He was probably also jealous that someone else reached the summit before him. As time went by, Grothendieck became increasingly embittered with academia. And in 1991, he disappeared!

We now know that he moved to the Pyrenees, where he lived until his death in 2014. He seems to have largely lost interest in mathematics and turned his attention to spiritual matters. Some reports make him seem quite unhinged. It is hard to say. At least 20,000 pages of his writings remain unpublished.

During his most productive years, even though he dominated the French school of algebraic geometry, many mathematicians considered Grothendieck’s ideas “too abstract.” This sounds a bit strange, given how abstract all mathematics is. What’s inarguably true is that it takes time and work to absorb his ideas. As a grad student I steered clear of them, since I was busy struggling to learn physics. There, too, centuries of geniuses have been working full-speed, and anyone wanting to reach the cutting edge has a lot of catching up to do. But, later in my career, my research led me to Grothendieck’s work.

If I had taken a different path, I might have come to grips with his work through string theory. String theorists postulate that besides the visible dimensions of space and time—three of space and one of time—there are extra dimensions of space curled up too small to see. In some of their theories these extra dimensions form a variety. So, string theorists easily get pulled into sophisticated questions about algebraic geometry. And this, in turn, pulls them toward Grothendieck.

A slice of one particular variety, called a “quintic threefold,” that can be used to describe the extra curled-up dimensions of space in string theory.

Indeed, some of the best advertisements for string theory are not successful predictions of experimental results—it has made absolutely none of those—but rather its ability to solve problems within pure mathematics, including algebraic geometry. For example, suppose you have a typical quintic threefold: a 3-dimensional variety described by a polynomial equation of degree 5. How many curves can you draw on it that are described by polynomials of degree 3? I’m sure this question has occurred to you. So, you’ll be pleased to know that the answer is exactly 317,206,375.

This sort of puzzle is quite hard, but string theorists have figured out a systematic way to solve many puzzles of this sort, including much harder ones. Thus, we now see string theorists talking with algebraic geometers, each able to surprise the other with their insights.

My own interest in Grothendieck’s work had a different source. I’ve always had serious doubts about string theory, and counting curves on varieties is the last thing I’d ever try. Like rock climbing, it’s exciting to watch but too scary to actually attempt myself. But it turns out that Grothendieck’s ideas are so general and powerful that they spill out beyond algebraic geometry into many other subjects. In particular, his 600-page unpublished manuscript Pursuing Stacks, written in 1983, made a big impression on me. In it, he argues that topology—very loosely, the theory of what space can be shaped like, if we don’t care about bending or stretching it, just what kind of holes it has—can be completely reduced to algebra!

At first this idea may sound just like algebraic geometry, where we use algebra to describe geometrical shapes, like curves or higher-dimensional varieties. But “algebraic topology” winds up having a very different flavor, because in topology we don’t restrict our shapes to be described by polynomial equations. Instead of dealing with beautiful gems, we’re dealing with floppy, flexible blobs—so the kind of algebra we need is different.

Mathematicians sometimes joke that a topologist cannot tell the difference between a doughnut and a coffee cup.

Algebraic topology is a beautiful subject that had been around long before Grothendieck—but he was one of the first to seriously propose a method to reduce all of topology to algebra.

Thanks to my work on physics, I found his proposal tremendously exciting when I came across it. At the time I had taken up the challenge of trying to unify our two best theories of physics: quantum physics, which describes all the forces except gravity, and general relativity, which describes gravity. It seems that until we do this, our understanding of the fundamental laws of physics is doomed to be incomplete. But it’s devilishly difficult. One reason is that quantum physics is based on algebra, while general relativity involves a lot of topology. But that suggests an avenue of attack: if we can figure out how to express topology in terms of algebra, we might find a better language to formulate a theory of quantum gravity.

My physics colleagues will let out a howl here, and complain that I am oversimplifying. Yes, I’m oversimplifying. There is more to quantum physics than mere algebra, and more to general relativity than mere topology. Nonetheless, the possible benefits to physics of reducing topology to algebra are what got me so excited about Grothendieck’s work.

So, starting in the 1990s, I tried to understand the powerful abstract concepts that Grothendieck had invented—and by now I have partially succeeded. Some mathematicians find these concepts to be the hard part of algebraic geometry. They now seem like the easy part to me. The hard part, for me, is the nitty-gritty details. First, there is all the material in those texts that Hartshorne takes as prerequisites: “the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel.” But there is also a lot more.

So, while I now have some of what it takes to read Hartshorne, until recently I was too intimidated to learn it. A student of physics once asked a famous expert how much mathematics a physicist needs to know. The expert replied: “More.” Indeed, the job of learning mathematics is never done, so I focus on the things that seem most important and/or fun. Until last year, algebraic geometry never rose to the top of the list.

What changed? I realized that algebraic geometry is connected to the relation between classical and quantum physics. Classical physics is the physics of Newton, where we imagine that we can measure everything with complete precision, at least in principle. Quantum physics is the physics of Schrödinger and Heisenberg, governed by the uncertainty principle: if we measure some aspects of a physical system with complete precision, others must remain undetermined.

For example, any spinning object has an “angular momentum”. In classical mechanics we visualize this as an arrow pointing along the axis of rotation, whose length is proportional to how fast the object is spinning. And in classical mechanics, we assume we can measure this arrow precisely. In quantum mechanics—a more accurate description of reality—this turns out not to be true. For example, if we know how far this arrow points in the x direction, we cannot know how far it points in the y direction. This uncertainty is too small to be noticeable for a spinning basketball, but for an electron it is important: physicists had only a rough understanding of electrons until they took this into account.
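This incompatibility of the x- and y-components is encoded in the standard textbook commutation relation [Jx, Jy] = iJz. For a spin-½ system the angular momentum operators are 2×2 matrices (in units with ħ = 1), so the relation can be checked in a few lines; the little matrix helpers below are written out by hand to keep the sketch self-contained:

```python
def matmul(a, b):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """Return the matrix commutator [a, b] = ab - ba."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# Spin-1/2 angular momentum matrices (units where hbar = 1).
Jx = [[0, 0.5], [0.5, 0]]
Jy = [[0, -0.5j], [0.5j, 0]]
Jz = [[0.5, 0], [0, -0.5]]

lhs = commutator(Jx, Jy)
rhs = [[1j * Jz[i][j] for j in range(2)] for i in range(2)]
assert lhs == rhs  # [Jx, Jy] = i Jz: the two components cannot both be sharp
print("[Jx, Jy] = i Jz verified")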

Physicists often want to “quantize” classical physics problems. That is, they start with the classical description of some physical system, and they want to figure out the quantum description. There is no fully general and completely systematic procedure for doing this. This should not be surprising: the two worldviews are so different. However, there are useful recipes for quantization. The most systematic ones apply to a very limited selection of physics problems.

For example, sometimes in classical physics we can describe a system by a point in a variety. This is not something one generally expects, but it happens in plenty of important cases. For example, consider a spinning object: if we fix how long its angular momentum arrow is, the arrow can still point in any direction, so its tip must lie on a sphere. Thus, we can describe a spinning object by a point on a sphere. And this sphere is actually a variety, the “Riemann sphere”, named after Bernhard Riemann, one of the greatest algebraic geometers of the 1800s.
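As an illustration of the sphere-as-variety point (my sketch, not from the post): stereographic projection is the coordinate chart that exhibits the sphere as the Riemann sphere, identifying every point except the north pole with a complex number.

```python
import numpy as np

def stereographic(x, y, z):
    """Map a point on the unit sphere (other than the north pole) to a
    complex number -- a coordinate chart for the Riemann sphere."""
    return (x + 1j * y) / (1 - z)

def inverse_stereographic(w):
    """Map a complex number back to the corresponding point on the sphere."""
    d = 1 + abs(w) ** 2
    return np.array([2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d])

# Round trip: sphere -> complex plane -> sphere recovers the same point.
p = np.array([0.6, 0.0, 0.8])
w = stereographic(*p)
print(np.allclose(inverse_stereographic(w), p))  # prints True
```

The north pole itself corresponds to the "point at infinity", which is what turns the plane of complex numbers into a sphere.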

When a classical physics problem is described by a variety, some magic happens. The process of quantization becomes completely systematic—and surprisingly simple. There is even a kind of reverse process, which one might call “classicization,” that lets you turn the quantum description back into a classical description. The classical and quantum approaches to physics become tightly linked, and one can take ideas from either approach and see what they say about the other one. For example, each point on the variety describes not only a state of the classical system (in our example, a definite direction for the angular momentum), but also a state of the corresponding quantum system—even though the latter is governed by the uncertainty principle. The quantum state is the “best quantum approximation” to the classical state.
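One concrete instance of this "best quantum approximation" is the spin coherent state. Here is a small sketch for spin-1/2 (my example, under the usual conventions, not code from the post): the quantum state attached to a point (θ, φ) on the sphere has angular momentum expectation values pointing exactly along the classical arrow.

```python
import numpy as np

# Spin-1/2 operators in units of hbar/2 (the Pauli matrices).
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

def coherent_state(theta, phi):
    """Spin coherent state: the 'best quantum approximation' to a classical
    angular momentum pointing in the direction (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def expectation(op, psi):
    return np.real(np.conj(psi) @ op @ psi)

theta, phi = 0.7, 1.9   # an arbitrary point on the sphere
psi = coherent_state(theta, phi)
mean_spin = np.array([expectation(S, psi) for S in (Sx, Sy, Sz)])
classical = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
print(np.allclose(mean_spin, classical))  # prints True
```

The individual components still obey the uncertainty principle, but their averages reproduce the classical arrow.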

Even better, in this situation many of the basic theorems about algebraic geometry can be seen as facts about quantization! Since quantization is something I’ve been thinking about for a long time, this makes me very happy. Richard Feynman once said that for him to make progress on a tough physics problem, he needed to have some sort of special angle on it:

I have to think that I have some kind of inside track on this problem. That is, I have some sort of talent that the other guys aren’t using, or some way of looking, and they are being foolish not to notice this wonderful way to look at it. I have to think I have a little better chance than the other guys, for some reason. I know in my heart that it is likely that the reason is false, and likely the particular attitude I’m taking with it was thought of by others. I don’t care; I fool myself into thinking I have an extra chance.

This may be what I’d been missing on algebraic geometry until now. Algebraic geometry is not just a problem to be solved, it’s a body of knowledge—but it’s such a large, intimidating body of knowledge that I didn’t dare tackle it until I got an inside track. Now I can read Hartshorne, translate some of the results into facts about physics, and feel I have a chance at understanding this stuff. And it’s a great feeling.

For the details of how algebraic geometry connects classical and quantum mechanics, see my talk slides and series of blog articles.

by John Baez at March 15, 2019 12:01 AM

March 13, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

RTE’s Brainstorm: a unique forum for public intellectuals

I have an article today on RTE’s ‘Brainstorm’ webpage, my tribute to Stephen Hawking one year after his death.

"Hawking devoted a great deal of time to science outreach, unusual for a scientist at this level"

I wasn’t aware of the RTE Brainstorm initiative until recently, but I must say it is a very interesting and useful resource. According to the mission statement on the website: “RTÉ Brainstorm is where the academic and research community will contribute to public debate, reflect on what’s happening in the world around us and communicate fresh thinking on a broad range of issues”. A partnership between RTE, University College Cork, NUI Galway, University of Limerick, Dublin City University, Ulster University, Maynooth University and the Technological University of Dublin, the idea is to provide an online platform for academics and other specialists to engage in public discussions of interesting ideas and perspectives in user-friendly language. You can find a very nice description of the initiative in The Irish Times here.

I thoroughly approve of this initiative. Many academics love to complain about the portrayal of their subject (and a lot of other subjects) in the media; this provides a simple and painless way for such people to reach a wide audience. Indeed, I’ve always liked the idea of the public intellectual. Anyone can become a specialist in a given topic; it’s a lot harder to make a meaningful contribution to public debate. Some would say this is precisely the difference between the academic and the public intellectual. Certainly, I enjoy engaging in public discussions of matters close to my area of expertise and I usually learn something new. That said, a certain humility is an absolute must – it’s easy to forget that detailed knowledge of a subject does not automatically bestow the wisdom of Solomon. Indeed, there is nothing worse than listening to a specialist use their expertise to bully others into submission – it’s all about getting the balance right and listening as well as informing.

by cormac at March 13, 2019 07:28 PM

March 12, 2019

CERN Bulletin


The birth of the Web as seen by a Staff Association delegate

As you certainly know by now, the web (WWW) was invented at CERN in 1989 by Tim Berners-Lee.

Tony Cass has been a member of the Staff Association since he came to CERN in 1987, appreciating its role in defending staff rights and its contributions to the running and development of CERN. Tony was elected as a staff delegate in 2017, and we are happy to count him among our ranks in the Staff Council.

After working for some time in the program library and supporting the CERNVM system, Tony contributed to the introduction of desktop-based interactive computer systems, for which he later became responsible. His responsibilities gradually extended to all interactive, batch and storage systems, and he was also responsible for the modernization and upgrading of the data center.

After leading the Database Services Group, Tony is now responsible for CERN's network, telephony and digital radio services as Head of the IT Communications Systems Group.

Today, he gives us a very personal testimony of this great era of the birth of the World-Wide Web.

As a youngish CERN staff member in 1989, the dawn of the Web passed me by. I was too busy making sure Federico Carminati’s never-ending stream of GEANT3 updates and fixes compiled on all platforms under the sun, or worrying that I’d be woken up by a failure of the IBM backup system in the middle of the night. There was no easy remote access then. If the problem wasn’t trivial enough to fix via the Minitel’s terminal emulator, there was no alternative but the drive to CERN. Frequently, the drive to CERN was quicker.

Even when WWW first came to my attention, it was with the line mode browser. I doubt even Nicola Pellow would call that attractive, and it was even more unprepossessing on a fixed-size IBM 3270 display. Here, I have to give a quick mention to Rainer Többicke, who produced the wizardry that delivered virtual 3270 displays in the mid-80s—why did it take so long for Windows to support virtual desktops?

One person who did spot the potential was Bernd Pollermann. Bernd was responsible for the help system on the IBM and was forever trying to figure out why people couldn’t find the information they wanted and then adding the necessary keywords to the help files. A bit like Google including results for “cheap Adelaide” when you search for “CHEP Adelaide”. Bernd quickly spotted the web’s potential for making information findable and enabling people to explore topics in more detail. Bernd also, like Google, valued speed. I was responsible for an assembler I/O package and, at his request, wrote a stripped down version with an unfriendly but efficient interface and no error checking. A task-specific API it’d be called now but we didn’t have such terms back then.

My next recollection is of a presentation by Tim himself in 1992 or 1993. I was sitting at the back of the IT auditorium as usual. This was before Frédéric Hemmer and I started vying for the precious aisle seat by the left-hand entrance, though. In those days that seat was reserved for the ever-active Tony Osborne—he somehow managed to juggle roles as Deputy Division Leader, account & accounting overlord and CPLear software expert. Anyway, back to the story. I don’t much remember the presentation but I do remember Harry Renshall suggesting at the end that Tim should develop a browser for the X Window system. Tim wasn’t interested, explaining, I think, that there would be no possibility to author content. Personally, I think the complexity of X Window was reason enough. Even in an era when I had a wall full of documentation, the metre-long set of X Window manuals was scary. I kept my programming studies to, and I’m sure it will surprise members of my current group to hear this, the works of Richard Stevens.

Over at NCSA, though, there was a bunch of people who didn’t have Tim’s purity of vision (or <blink>taste</blink>) and weren’t scared by complexity. The rest, as they say, is history.

Or almost. Tim was in the same group as me in his last years at CERN. I was at a conference with his then Section Leader, Judy Richards, when she was astounded to find she had a secret person in her team. It was, of course, Arthur Secret, but it seems, with hindsight, to have been a harbinger of the separation between the Web and the place it was born.

March 12, 2019 05:03 PM

CERN Bulletin

CERN Bulletin


72nd Council of FICSA

A delegation from the CERN Staff Association participated in the Council of the Federation of International Civil Servants' Associations (FICSA), which was held from 4 to 8 February 2019 in Vienna, Austria, in the premises of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) Preparatory Commission. Participation in the Council always provides an opportunity for very enriching exchanges of experience and ideas, but it also helps to realize the progress made by some organisations on certain issues. 

What is FICSA?

The Federation of International Civil Servants’ Associations (FICSA) was established in 1952 and currently brings together more than 80 staff associations or unions of international or intergovernmental organisations.

A distinction is made between members (29), who come from the United Nations Common System, and associate members (18), who are outside the Common System. FICSA also has 15 staff associations from other organisations as advisory members, and more than 20 local federations, which gather United Nations local staff associations with observer status. Some of these staff associations are in small organisations of a few dozen members. Others come from large agencies, such as the International Atomic Energy Agency (IAEA), which employs roughly as many civil servants as CERN.

The CERN Staff Association is an associate member of FICSA as CERN is not part of the United Nations Common System. We participate in the FICSA Council and its deliberations, with limited voting rights. Overall, CERN is one of the most active members of FICSA; and our voice is heard!

What are the objectives and actions of FICSA?

The objectives of FICSA are:

1. To defend staff rights;

2. To ensure that the conditions of service of staff in the Common System are maintained at a level which ensures the recruitment and retention of highly qualified personnel;

3. To contribute to a positive image of the international public service.

FICSA's annual work programme and ongoing programmes include the following activities:

1. Inform all staff members of problems affecting their conditions of employment;

2. Organize training, seminars and working groups on specific issues related to conditions of employment;

3. Advise members of FICSA-affiliated associations or unions on staff-administration relations;

4. Produce documents setting out the Federation's position on the technical aspects of employment conditions;

5. Coordinate strike movements;

6. Support and assist in appeal procedures (internal appeal procedures, administrative tribunals), in the case of non-compliance with employment conditions;

7. Develop strategies to prevent violation of rights;

8. Advocate for the positions of staff members with representatives of Member States.

The Council process

Traditionally, the weekend preceding the FICSA Council is dedicated to a few preparatory meetings, but it is also the occasion for one or two training sessions for delegates attending the Council.

This year, an international lawyer defending staff members before the Administrative Tribunal of the International Labour Organization (ILOAT) came to explain to delegates how to effectively prepare internal appeals before the Tribunal, the points not to be missed (including before an appeal) and the possible pitfalls. The training was comprehensive and interesting, followed by engaging discussion with remarkable interventions from one of your CERN staff delegates.

On Monday morning, the Council was opened in plenary session with administrative items, followed by addresses by the Secretary General of the CTBTO, the President of the CTBTO Staff Association and finally the new President of the International Civil Service Commission (ICSC), who is responsible for establishing and reviewing the employment conditions for staff throughout the United Nations Common System.

The Council then suspended its meeting and from Monday afternoon to Thursday evening, the work was split across each of the specialized Standing Committees and ad-hoc Committees set up during the Council.

The Council meeting resumed on Friday with a presentation of the work and conclusions of each of the Committees and the themes proposed for inclusion in the 2019 FICSA work programme.

Topics covered

In addition to the subject of ICSC's review of the methodology and operational rules for salary adjustments, which was again one of the themes closely followed by FICSA this year, it is worth mentioning:

The increased use of non-staff contracts and its impact on the United Nations system

The FICSA Council is concerned about the increasing proportion of "non-staff" personnel and its impact on the employment and working conditions of the staff. It was noted that there was no clear and unified definition of the concept of "non-staff" personnel and their rights and benefits, if any. As at CERN, the use of short-term staff is becoming more significant, and our colleagues fear a loss of knowledge and increased difficulty in ensuring the sustainability of the key tasks entrusted to their organisations.

Proposed amendments to the ILOAT Statute

Staff associations have expressed great concern about amendments proposed by some organisations to the Statute of the ILO Administrative Tribunal (ILOAT). In the words of one of the most well-known staff lawyers, these amendments would be a "real disaster" for the staff of all organizations, which have accepted the jurisdiction of the ILOAT.

Three organizations have recently withdrawn from the jurisdiction of the ILOAT (WMO, UPU and CTA), having expressed different reasons but apparently linked to recent judgments against them. Some organizations wish to amend the Statute of the Tribunal to allow for prompt withdrawals from its jurisdiction, with a minimum of requirements.

The CERN Staff Association has been particularly active and proactive on this issue throughout the Council. The Council unanimously adopted a recommendation strongly condemning the pressure exerted by the organisations.

Protection of whistle-blowers

For more than a year now, FICSA has been working on how to protect whistle-blowers. A new workshop, led by United Nations experts on the subject, was hosted by ITU in Geneva on 15 November 2018.

The participants reviewed the current situation throughout the United Nations Common System, identified key gaps in whistle-blower protection and discussed options for addressing them. Emphasis was also placed on the practical advice and support that staff representatives could give to whistle-blowers within their organisation.

FICSA: why does the CERN Staff Association participate?

Although CERN is not part of the United Nations Common System, the CERN Staff Association has a strong interest in being part of FICSA and participating in the work of the Federation.

Indeed, the various participating staff associations can thus share their experiences and ideas, exchange with experienced colleagues on specific topics, jointly prepare answers on common subjects (for example, strongly oppose the proposal to reform the ILOAT Statute).

This privileged moment also allows us to realize that while our conditions are occasionally more favourable or more advanced than those of other organisations, we are generally less well off. We can therefore draw inspiration from the best conditions offered elsewhere, but also realize that the work to be done remains immense and that we must never take our financial and working conditions for granted.

Beyond these aspects, being together gives tremendous energy and renewed motivation, but also a comfort to share with other staff delegates the same desire to serve our organisations, being convinced that this also requires respect and consideration of the interests of staff members.

It is essential to show support and solidarity in the particularly difficult situations where some of our colleagues face openly conflictual relations with their Administration or Management.





March 12, 2019 05:03 PM

CERN Bulletin


“Jardin des Particules”, the crèche and school of the CERN Staff Association, is looking for:

A Deputy Director, Educational Manager of the School, Elementary Cycle (Possible part-time job from 60 to 80%)

Candidates must meet the following requirements:

  •   Holding a Swiss teaching qualification and having practiced for at least five years in a public school in French-speaking Switzerland
  •   Having a good knowledge of PER + MER teaching methods

If you are interested in this position you can send your complete application including your cover letter, CV with photo, copies of diplomas, work certificates and letters of recommendation by email to

Spread the word!

March 12, 2019 05:03 PM

Emily Lakdawalla - The Planetary Society Blog

The March Equinox Issue of The Planetary Report Is Out!
I’m very pleased to announce the publication of the March Equinox issue of The Planetary Report: “Inside the Ice Giants.” The print issue shipped to members yesterday!

March 12, 2019 04:45 PM

Emily Lakdawalla - The Planetary Society Blog

Where We Are
Emily Lakdawalla introduces an at-a-glance spacecraft locator to The Planetary Report.

March 12, 2019 04:03 PM

Emily Lakdawalla - The Planetary Society Blog

The Skies of Mini-Neptunes
A GREAT QUEST is underway to discover Earth-size worlds in their stars’ habitable zones. Along the way, astronomers have been surprised to learn that the most typical size of planet in our galaxy is one with no counterpart in our own solar system.

March 12, 2019 04:02 PM

ZapperZ - Physics and Physicists

PIP-II Upgrade At Fermilab
Don Lincoln explains why the PIP-II upgrade at Fermilab will take the accelerator facility to the next level.

The video also explains a bit about how particle accelerators work, and the types of improvements that are being planned.


by ZapperZ at March 12, 2019 01:04 PM

March 11, 2019

John Baez - Azimuth

Metal-Organic Frameworks

I’ve been talking about new technologies for fighting climate change, with an emphasis on negative carbon emissions. Now let’s begin looking at one technology in more detail. This will take a few articles. I want to start with the basics.

A metal-organic framework or MOF is a molecular structure built from metal atoms and organic compounds. There are many kinds. They can be 3-dimensional, like this one made by scientists at CSIRO in Australia:

And they can be full of microscopic holes, giving them an enormous surface area! For example, here’s a diagram of a MOF with yellow and orange balls showing the holes:

In fact, one gram of the stuff can have a surface area of more than 12,000 square meters!

Gas molecules like to sit inside these holes. So, perhaps surprisingly at first, you can pack a lot more gas in a cylinder containing a MOF than you can in an empty cylinder at the same pressure!

This lets us store gases using MOFs—like carbon dioxide, but also hydrogen, methane and others. And importantly, you can also get the gas molecules out of the MOF without enormous amounts of energy. Also, you can craft MOFs with different hole sizes and different chemical properties, so they attract some gases much more than others.

So, we can imagine various applications suited to fighting climate change! One is carbon capture and storage, where you want a substance that eagerly latches onto CO2 molecules, but can also easily be persuaded to let them go. But another is hydrogen or methane storage for the purpose of fuel. Methane releases less CO2 than gasoline does when it burns, per unit amount of energy—and hydrogen releases none at all. That’s why some advocate a hydrogen economy.

Could hydrogen-powered cars be better than battery-powered cars, someday? I don’t know. But never mind—such issues, though important, aren’t what I want to talk about now. I just want to quote something about methane storage in MOFs, to give you a sense of the state of the art.

• Mark Peplow, Metal-organic framework compound sets methane storage record, C&EN, 11 December 2017.

Cars powered by methane emit less CO2 than gasoline guzzlers, but they need expensive tanks and compressors to carry the gas at about 250 atm. Certain metal-organic framework (MOF) compounds—made from a lattice of metal-based nodes linked by organic struts—can store methane at lower pressures because the gas molecules pack tightly inside their pores.

So MOFs, in principle, could enable methane-powered cars to use cheaper, lighter, and safer tanks. But in practical tests, no material has met a U.S. Department of Energy (DOE) gas storage target of 263 cm3 of methane per cm3 of adsorbent at room temperature and 64 atm, enough to match the capacity of high-pressure tanks.

A team led by David Fairen-Jimenez at the University of Cambridge has now developed a synthesis method that endows a well-known MOF with a capacity of 259 cm3 of methane per cm3 under those conditions, at least 50% higher than its nearest rival. “It’s definitely a significant result,” says Jarad A. Mason at Harvard University, who works with MOFs and other materials for energy applications and was not involved in the research. “Capacity has been one of the biggest stumbling blocks.”

Only about two-thirds of the MOF’s methane was released when the pressure dropped to 6 atm, a minimum pressure needed to sustain a decent flow of gas from a tank. But this still provides the highest methane delivery capacity of any bulk adsorbent.

A couple of things are worth noting here. First, the process of a molecule sticking to a surface is called adsorption, not to be confused with absorption. Second, notice that using MOFs they managed to compress methane by a factor of 259 at a pressure of just 64 atmospheres. If we tried the same trick without MOFs we would need a pressure of 259 atmospheres!
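That factor-of-259 claim can be sanity-checked with a back-of-the-envelope calculation (mine, not from the article), treating methane as an ideal gas so that the amount stored in a fixed empty volume is proportional to pressure; real methane deviates from this at high pressure, which is part of why the comparison is only rough.

```python
# Ideal-gas sanity check: an empty tank at pressure p holds p times as much
# gas as the same tank at 1 atm (n proportional to p at fixed V and T).
storage_factor = 259   # cm^3 of methane (1 atm equivalent) per cm^3 of MOF
mof_pressure = 64      # atm needed to reach that capacity with the MOF

# Without the MOF, matching a 259x capacity would need ~259 atm.
empty_pressure = storage_factor
advantage = empty_pressure / mof_pressure
print(f"Empty tank: ~{empty_pressure} atm; with the MOF: {mof_pressure} atm "
      f"(about {advantage:.1f}x lower pressure).")
```

The practical payoff of that roughly fourfold pressure reduction is cheaper, lighter tanks and compressors.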

But MOFs are not only good at holding gases, they’re good at sucking them up, which is really the flip side of the same coin: gas molecules avidly seek to sit inside the little holes of your MOF. So people are also using MOFs to build highly sensitive detectors for specific kinds of gases:

• Tunable porous MOF materials interface with electrodes to sound the alarm at the first sniff of hydrogen sulfide, Phys.org, 7 March 2017.

And some MOFs work in water, too—so people are trying to use them as water filters, sort of a high-tech version of zeolites, the minerals that inspired people to invent MOFs in the first place. Zeolites have an impressive variety of crystal structures:

and so on… but MOFs seem to be more adjustable in their structure and chemical properties.

Looking more broadly at future applications, we can imagine MOFs will be important in a host of technologies where we want a substance with lots of microscopic holes that are eager to hold specific molecules. I have a feeling that the most powerful applications of MOFs will come when other technologies mature. For example: projecting forward to a time when we get really good nanotechnology, we can imagine MOFs as useful “storage lockers” for molecular robots.

But next time I’ll talk about what we can do now, or soon, to capture carbon dioxide with MOFs.

In the meantime: can you imagine some cool things we could do with MOFs? This may feed your imagination:

• Wikipedia, Metal-organic frameworks.

by John Baez at March 11, 2019 10:34 PM

March 10, 2019

John Baez - Azimuth

Breakthrough Institute on Climate Change

I found this article, apparently by Ted Nordhaus and Alex Trembath, to be quite thought-provoking. At times it sinks too deep into the moment’s politics for my taste, given that the issues it raises will probably be confronting us for the whole 21st century. But still, it raises big issues:

• Breakthrough Institute, Is climate change like diabetes or an asteroid?

The Breakthrough Institute seeks “technological solutions to environmental challenges”, so that informs their opinions. Let me quote some bits and urge you to read the whole thing! Even if it annoys you, it should make you think a bit.

Is climate change more like an asteroid or diabetes? Last month, one of us argued at Slate that climate advocates should resist calls to declare a national climate emergency because climate change was more like “diabetes for the planet” than an asteroid. The diabetes metaphor was surprisingly controversial. Climate change can’t be managed or lived with, many argued in response; it is an existential threat to human societies that demands an immediate cure.

The objection is telling, both in the ways in which it misunderstands the nature of the problem and in the contradictions it reveals. Diabetes is not benign. It is not a “natural” phenomenon and it can’t be cured. It is a condition that, if unmanaged, can kill you. And even for those who manage it well, life is different than before diabetes.

This seems to us to be a reasonably apt description of the climate problem. There is no going back to the world before climate change. Whatever success we have mitigating climate change, we almost certainly won’t return to pre-industrial atmospheric concentrations of greenhouse gases, at least not for many centuries. Even at one or 1.5 degrees Celsius of warming, the climate and the planet will look very different, and that will bring unavoidable consequences for human societies. We will live on a hotter planet and in a climate that will be more variable and less predictable.

How bad our planetary diabetes gets will depend on how much we continue to emit and how well adapted to a changing climate human societies become. With the present one degree of warming, it appears that human societies have adapted relatively well. Various claims attributing present day natural disasters to climate change are controversial. But the overall statistics suggest that deaths due to climate-related natural disasters globally are falling, not rising, and that economic losses associated with those disasters, adjusting for growing population and affluence, have been flat for many decades.

But at three or four degrees of warming, all bets are off. And it appears that unmanaged, that’s where present trends in emissions are likely to take us. Moreover, even with radical action, stabilizing temperatures at 1.5 degrees C, as many advocates now demand, is not possible without either solar geoengineering or sucking carbon emissions out of the atmosphere at massive scale. Practically, given legacy emissions and committed infrastructure, the long-standing international target of limiting temperature increase to two degrees C is also extremely unlikely.

Unavoidably, then, treating our climate change condition will require not simply emissions reductions but also significant adaptation to known and unknown climate risks that are already baked in to our future due to two centuries of fossil fuel consumption. It is in this sense that we have long argued that climate change must be understood as a chronic condition of global modernity, a problem that will be managed but not solved.

A discussion of the worst-case versus the best-case IPCC scenarios, and what leads to these scenarios:

The worst case climate scenarios, which are based on worst case emissions scenarios, are the source of most of the terrifying studies of potential future climate impacts. These are frequently described as “business as usual” — what happens if the economy keeps growing and the global population becomes wealthier and hence more consumptive. But that’s not how the IPCC, which generates those scenarios, actually gets to very high emissions futures. Rather, the worst case scenarios are those in which the world remains poor, populous, unequal, and low-tech. It is a future with lots of poor people who don’t have access to clean technology. By contrast, a future in which the world is resilient to a hotter climate is likely also one in which the world has been more successful at mitigating climate change as well. A wealthier world will be a higher-tech world, one with many more low carbon technological options and more resources to invest in both mitigation and adaptation. It will be less populous (fertility rates reliably fall as incomes rise), less unequal (because many fewer people will live in extreme poverty), and more urbanized (meaning many more people living in cities with hard infrastructure, air conditioning, and emergency services to protect them).

That will almost certainly be a world in which global average temperatures have exceeded two degrees above pre-industrial levels. The latest round of climate deadline-ism (12 years to prevent climate catastrophe according to The Guardian) won’t change that. But as even David Wallace Wells, whose book The Uninhabitable Earth has helped revitalize climate catastrophism, acknowledges, “Two degrees would be terrible but it’s better than three… And three degrees is much better than four.”

Given the current emissions trajectory, a future world that stabilized emissions below 2.5 or three degrees, an accomplishment that in itself will likely require very substantial and sustained efforts to reduce emissions, would also likely be one reasonably well adapted to live in that climate, as it would, of necessity, be one that was much wealthier, less unequal, and more advanced technologically than the world we live in today.

The most controversial part of the article concerns the “apocalyptic” or “millenarian” tendency among environmentalists: the feeling that only a complete reorganization of society will save us—for example, going “back to nature”.

[…] while the nature of the climate problem is chronic and the political and policy responses are incremental, the culture and ideology of contemporary environmentalism is millenarian. In the millenarian mind, there are only two choices, catastrophe or completely reorganizing society. Americans will either see the writing on the wall and remake the world, or perish in fiery apocalypse.

This, ultimately, is why adaptation, nuclear energy, carbon capture, and solar geoengineering have no role in the environmental narrative of apocalypse and salvation, even as all but the last are almost certainly necessary for any successful response to climate change and will also end up in any major federal policy effort to address climate change. Because they are basically plug-and-play with the existing socio-technical paradigm. They don’t require that we end capitalism or consumerism or energy intensive lifestyles. Modern, industrial, techno-society goes on, just without the emissions. This is also why efforts by nuclear, carbon capture, and geoengineering advocates to marshal catastrophic framing to build support for those approaches have had limited effect.

The problem for the climate movement is that the technocratic requirements necessary to massively decarbonize the global economy conflict with the egalitarian catastrophism that the movement’s mobilization strategies demand. McKibben has privately acknowledged as much to several people, explaining that he hasn’t publicly recognized the need for nuclear energy because he believes doing so would “split this movement in half.”

Implicit in these sorts of political calculations is the assumption that once advocates have amassed sufficient political power, the necessary concessions to the practical exigencies of deeply reducing carbon emissions will then become possible. But the army you raise ultimately shapes the sorts of battles you are able to wage, and it is not clear that the army of egalitarian millenarians that the climate movement is mobilizing will be willing to sign on to the necessary compromises — politically, economically, and technologically — that would be necessary to actually address the problem.

Again: read the whole thing!

by John Baez at March 10, 2019 10:12 PM

March 06, 2019

Robert Helling - atdotde

Challenge: How to talk to a flat earther?
Further down the rabbit hole, over lunch I finished watching "Behind the Curve", a Netflix documentary on people who believe the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary. The sun and moon are much closer and fly above this disk, while the stars are on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and airlines must be part of the conspiracy, as they know that you cannot fly directly between continents in the southern hemisphere (really?).

These people happily use GPS for navigation but have a general mistrust of the science (and the teachers) of at least the last two centuries.

Besides the obvious "I don't see any curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: Which experiment would you conduct with them to convince them? Warning: everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render such observations inconclusive. The sun standing at a different elevation (height above the horizon) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the elevation of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is directly overhead, and it sinks lower the further south you go. I cannot see how this could be reconciled with a dome projection.
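To make the Polaris argument quantitative, here is a toy comparison of the two models. The numbers are my illustrative assumptions (a dome 5000 km above the pole, 111 km of surface per degree of latitude); amusingly, a dome height can always be chosen so the flat model agrees at one latitude, but it then fails at all the others, while the spherical prediction works everywhere.

```python
import math

# Toy comparison of the predicted elevation angle of Polaris under the
# two models.  Illustrative assumptions: the flat disk has the north
# pole at its centre, one degree of latitude spans 111 km of surface,
# and Polaris sits on a dome 5000 km above the pole.

DOME_HEIGHT_KM = 5000.0   # hypothetical dome height
KM_PER_DEGREE = 111.0     # surface distance per degree of latitude

def elevation_sphere(latitude_deg):
    # On a spherical Earth the elevation of the celestial pole
    # equals the geographical latitude (ignoring refraction).
    return latitude_deg

def elevation_flat(latitude_deg):
    # On the flat disk, an observer at latitude L is (90 - L) degrees'
    # worth of surface distance from the pole; Polaris is a point
    # light at height DOME_HEIGHT_KM above the centre.
    d = (90.0 - latitude_deg) * KM_PER_DEGREE
    return math.degrees(math.atan2(DOME_HEIGHT_KM, d))

for lat in (60.0, 45.0, 30.0):
    print(lat, elevation_sphere(lat), round(elevation_flat(lat), 1))
```

With this dome height the two models agree near 45° latitude but disagree by several degrees at 30° and 60°, and no single dome height can remove the mismatch at all latitudes at once.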

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra high-altitude planes. You are allowed to make use of phones and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It must not involve lots of money or higher math.

by Robert Helling ( at March 06, 2019 02:24 PM

March 02, 2019

John Baez - Azimuth

Negative Carbon Emissions

A carbon dioxide scrubber is any sort of gadget that removes carbon dioxide from the air. There are various ways such gadgets can work, and various things we can do with them. For example, they’re already being used to clean the air in submarines and human-occupied spacecraft. I want to talk about carbon dioxide scrubbers as a way to reduce carbon emissions from burning fossil fuels, and a specific technology for doing this. But I don’t want to talk about those things today.

Why not? It turns out that if you start talking about the specifics of one particular approach to fighting global warming, people instantly want to start talking about other approaches they consider better. This makes some sense: it’s a big problem and we need to compare different approaches. But it’s also a bit frustrating: we need to study different approaches individually so we can know enough to compare them, or make progress on any one approach.

I mainly want to study the nitty-gritty details of various individual approaches, starting with one approach to carbon scrubbing. But if I don’t say anything about the bigger picture, people will be unsatisfied.

So, right now I want to say a bit about carbon dioxide scrubbers.

The first thing to realize—and this applies to all approaches to battling global warming—is the huge scale of the task. In 2018 we put 37.1 gigatonnes of CO2 into the atmosphere by burning fossil fuels and making cement.

That’s a lot! Let’s compare some of the other biggest human industries, in terms of the sheer mass being processed.

Cement production is big. Global cement production in 2017 was about 4.1 gigatonnes, with China making more than the rest of the world combined, and a large uncertainty in how much they made. But digging up and burning carbon is even bigger. For example, over 7 gigatonnes of coal is being mined per year. I can’t find figures on total agricultural production, but in 2004 we created about 5 gigatonnes of agricultural waste. Total grain production was just 2.53 gigatonnes in 2017. Total plastic production in 2017 was a mere 348 megatonnes.

So, to use technology to remove as much CO2 from the air as we’re currently putting in would require an industry that processes more mass than any other today.
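The scale comparison can be checked directly from the figures quoted above (all in gigatonnes per year; the years differ as noted in the text):

```python
# Back-of-the-envelope check of the scale comparison, using the
# figures quoted in the text (gigatonnes of mass per year).
mass_gt = {
    "CO2 from fossil fuels and cement (2018)": 37.1,
    "coal mined": 7.0,
    "agricultural waste (2004)": 5.0,
    "cement produced (2017)": 4.1,
    "grain produced (2017)": 2.53,
    "plastic produced (2017)": 0.348,
}
co2 = mass_gt["CO2 from fossil fuels and cement (2018)"]
for name, gt in mass_gt.items():
    print(f"{name}: {gt:7.2f} Gt  ({gt / co2:5.1%} of the CO2 mass)")
```

Even the biggest of these industries, coal mining, handles less than a fifth of the mass of CO2 we emit each year.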

I conclude that this won’t happen anytime soon. Indeed David MacKay calls all methods of removing CO2 from air “the last thing we should talk about”. For now, he argues, we should focus on cutting carbon emissions. And I believe that to do that on a large enough scale requires economic incentives, for example a carbon tax.

But to keep global warming below 2°C over pre-industrial levels, it’s becoming increasingly likely that we’ll need negative carbon emissions:

Indeed, a lot of scenarios contemplated by policymakers involve net negative carbon emissions. Often they don’t realize just how hard these are to achieve! In his talk Mitigation on methadone: how negative emissions lock in our high-carbon addiction, Kevin Anderson has persuasively argued that policymakers are fooling themselves into thinking we can keep burning carbon as we like now and achieve the necessary negative emissions later. He’s not against negative carbon emissions. He’s against using vague fantasies of negative carbon emissions to put off confronting reality!

It is not well understood by policy makers, or indeed many academics, that IAMs [integrated assessment models] assume such a massive deployment of negative emission technologies. Yet when it comes to the more stringent Paris obligations, studies suggest that it is not possible to reach 1.5°C with a 50% chance without significant negative emissions. Even for 2°C, very few scenarios have explored mitigation without negative emissions, and contrary to common perception, negative emissions are also prevalent in higher stabilisation targets (Figure 2). Given such a pervasive and pivotal role of negative emissions in mitigation scenarios, their almost complete absence from climate policy discussions is disturbing and needs to be addressed urgently.

Read his whole article!

Pondering the difficulty of large-scale negative carbon emissions, but also their potential importance, I’m led to imagine scenarios like this:

In the 21st century we slowly wean ourselves off our addiction to burning carbon. By the end, we’re suffering a lot from global warming. It’s a real mess. But suppose our technological civilization survives, and we manage to develop a cheap source of clean energy. And once we switch to this, we don’t simply revert to our old bad habit of growing until we exhaust the available resources! We’ve learned our lesson—the hard way. We start trying to clean up the mess we made. Among other things, we start removing carbon dioxide from the atmosphere. We then spend a century—or two, or three—doing this. Thanks to various tipping points in the Earth’s climate system, we never get things back to the way they were. But we do, finally, make the Earth a beautiful place again.

If we’re aiming for some happy ending like this, it may pay to explore various ways to achieve negative carbon emissions even if we can’t scale them up fast enough to stop a big mess in the 21st century.

(Of course, I’m not suggesting this strategy as an alternative to cutting carbon emissions, or doing all sorts of other good things. We need a multi-pronged strategy, including some prongs that will only pay off in the long run, and only if we’re lucky.)

If we’re exploring various methods to achieve negative carbon emissions, a key aspect is figuring out economically viable pathways to scale up those methods. They’ll start small and they’ll inevitably be expensive at first. The ones that get big will get cheaper—per tonne of CO2 removed—as they grow.

This has various implications. For example, suppose someone builds a machine that sucks CO2 from the air and uses it to make carbonated soft drinks and to make plants grow better in greenhouses. As I mentioned, Climeworks is actually doing this!

In one sense, this is utterly pointless for fighting climate change, because these markets only use 6 megatonnes of CO2 annually—less than 0.02% of how much CO2 we’re dumping into the atmosphere!
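That “less than 0.02%” figure is easy to verify from the numbers already given:

```python
# Checking the quoted figure: the soft-drink and greenhouse markets
# absorb about 6 megatonnes of CO2 per year, against 37.1 gigatonnes
# emitted (2018 figure quoted earlier in the post).
market_mt = 6.0            # megatonnes of CO2 used by these markets
emissions_mt = 37.1e3      # 37.1 gigatonnes, expressed in megatonnes
fraction = market_mt / emissions_mt
print(f"{fraction:.4%}")   # about 0.016%, indeed below 0.02%
```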

But on the other hand, if this method of CO2 scrubbing can be scaled up and become cheaper and cheaper, it’s useful to start exploring the technology now. It could be the first step along some economically viable pathway.

I especially like the idea of CO2 scrubbing for coal-fired power plants. Of course to cut carbon emissions it would be better to ban coal-fired power plants. But this will take a while:

So, we can imagine an intermediate regime where regulations or a carbon tax make people sequester the CO2 from coal-fired power plants. And if this happens, there could be a big market for carbon dioxide scrubbers—for a while, at least.

I hope we can agree on at least one thing: the big picture is complicated. Next time I’ll zoom in and start talking about a specific technology for CO2 scrubbing.

by John Baez at March 02, 2019 02:08 AM

February 26, 2019

CERN Bulletin


The GAC organizes monthly sessions with individual interviews, held on the last Tuesday of each month. The next session will take place on:

Tuesday 2 April, from 1.30 pm to 4.00 pm

Staff Association meeting room

The sessions of the Pensioners’ Group (GAC) are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information:

Contact form:

February 26, 2019 04:02 PM

February 25, 2019

John Baez - Azimuth

Problems with the Standard Model Higgs

Here is a conversation I had with Scott Aaronson. It started on his blog, in a discussion about ‘fine-tuning’. Some say the Standard Model of particle physics can’t be the whole story, because in this theory you need to fine-tune the fundamental constants to keep the Higgs mass from becoming huge. Others say this argument is invalid.

I tried to push the conversation toward the calculations that actually underlie this argument. Then our conversation drifted into email and got more technical… and perhaps also more interesting, because it led us to contemplate the stability of the vacuum!

You see, if we screwed up royally on our fine-tuning and came up with a theory where the square of the Higgs mass was negative, the vacuum would be unstable. It would instantly decay into a vast explosion of Higgs bosons.

Another possibility, also weird, turns out to be slightly more plausible. This is that the Higgs mass is positive—as it clearly is—and yet the vacuum is ‘metastable’. In this scenario, the vacuum we see around us might last a long time, and yet eventually it could decay through quantum tunnelling to the ‘true’ vacuum, with a lower energy density:

Little bubbles of true vacuum would form, randomly, and then grow very rapidly. This would be the end of life as we know it.

Scott agreed that other people might like to see our conversation. So here it is. I’ll fix a few mistakes, to make me seem smarter than I actually am.

I’ll start with some stuff on his blog.

Scott wrote, in part:

If I said, “supersymmetry basically has to be there because it’s such a beautiful symmetry,” that would be an argument from beauty. But I didn’t say that, and I disagree with anyone who does say it. I made something weaker, what you might call an argument from the explanatory coherence of the world. It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in 10^{10} or whatever, there’s almost certainly some explanation. It doesn’t say the explanation will be beautiful, it doesn’t say it will be discoverable by an FCC or any other collider, and it doesn’t say it will have a form (like SUSY) that anyone has thought of yet.

John wrote:

Scott wrote:

It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in 10^{10} or whatever, there’s almost certainly some explanation.

Do you know examples of this sort of situation in particle physics, or is this just a hypothetical situation?

Scott wrote:

To answer a question with a question, do you disagree that that’s the current situation with (for example) the Higgs mass, not to mention the vacuum energy, if one considers everything that could naïvely contribute? A lot of people told me it was, but maybe they lied or I misunderstood them.

John wrote:

The basic rough story is this. We measure the Higgs mass. We can assume that the Standard Model is good up to some energy near the Planck energy, after which it fizzles out for some unspecified reason.

According to the Standard Model, each of the 25 fundamental constants appearing in the Standard Model is a “running coupling constant”. That is, it’s not really a constant, but a function of energy: roughly the energy of the process we use to measure that constant. Let’s call these “coupling constants measured at energy E”. Each of these 25 functions is determined by the value of all 25 functions at any fixed energy E – e.g. energy zero, or the Planck energy. This is called the “renormalization group flow”.

So, the Higgs mass we measure is actually the Higgs mass at some energy E quite low compared to the Planck energy.

And, it turns out that to get this measured value of the Higgs mass, the values of some fundamental constants measured at energies near the Planck mass need to almost cancel out. More precisely, some complicated function of them needs to almost but not quite obey some equation.

People summarize the story this way: to get the observed Higgs mass we need to “fine-tune” the fundamental constants’ values as measured near the Planck energy, if we assume the Standard Model is valid up to energies near the Planck energy.
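The size of the tuning involved can be illustrated with a toy estimate. The additive form below and the order-one coefficient are my illustrative assumptions, not the actual renormalization group flow:

```python
# Toy estimate of the fine-tuning, NOT the real Standard Model running:
# assume the low-energy Higgs mass squared is the difference of a bare
# high-energy value and a quadratically large correction,
#     m2_low = m2_high - c * LAMBDA**2,
# with c an assumed order-one coefficient.
LAMBDA = 1.2e19   # rough Planck energy in GeV
c = 1.0           # hypothetical order-one coefficient
m_h = 125.0       # observed Higgs mass in GeV

m2_low = m_h**2
m2_high = m2_low + c * LAMBDA**2   # value the high-energy parameter must take

# Relative precision to which m2_high must be chosen so that the
# near-cancellation leaves the observed low-energy value:
epsilon = m2_low / m2_high
print(f"one part in {1/epsilon:.1e}")   # roughly one part in 10^34
```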

A lot of particle physicists accept this reasoning and additionally assume that fine-tuning the values of fundamental constants as measured near the Planck energy is “bad”. They conclude that it would be “bad” for the Standard Model to be valid up to the Planck energy.

(In the previous paragraph you can replace “bad” with some other word—for example, “implausible”.)

Indeed you can use a refined version of the argument I’m sketching here to say “either the fundamental constants measured at energy E need to obey an identity up to precision ε or the Standard Model must break down before we reach energy E”, where ε gets smaller as E gets bigger.

Then, in theory, you can pick an ε and say “an ε smaller than that would make me very nervous.” Then you can conclude that “if the Standard Model is valid up to energy E, that will make me very nervous”.

(But I honestly don’t know anyone who has approximately computed ε as a function of E. Often people seem content to hand-wave.)

People like to argue about how small an ε should make us nervous, or even whether any value of ε should make us nervous.

But another assumption behind this whole line of reasoning is that the values of fundamental constants as measured at some energy near the Planck energy are “more important” than their values as measured near energy zero, so we should take near-cancellations of these high-energy values seriously—more seriously, I suppose, than near-cancellations at low energies.

Most particle physicists will defend this idea quite passionately. The philosophy seems to be that God designed high-energy physics and left his grad students to work out its consequences at low energies—so if you want to understand physics, you need to focus on high energies.

Scott wrote in email:

Do I remember correctly that it’s actually the square of the Higgs mass (or its value when probed at high energy?) that’s the sum of all these positive and negative high-energy contributions?

John wrote:

Sorry to take a while. I was trying to figure out if that’s a reasonable way to think of things. It’s true that the Higgs mass squared, not the Higgs mass, is what shows up in the Standard Model Lagrangian. This is how scalar fields work.

But I wouldn’t talk about a “sum of positive and negative high-energy contributions”. I’d rather think of all the coupling constants in the Standard Model—all 25 of them—obeying a coupled differential equation that says how they change as we change the energy scale. So, we’ve got a vector field on \mathbb{R}^{25} that says how these coupling constants “flow” as we change the energy scale.

Here’s an equation from a paper that looks at a simplified model:

Here m_h is the Higgs mass, m_t is the mass of the top quark, and both are being treated as functions of a momentum k (essentially the energy scale we’ve been talking about). v is just a number. You’ll note this equation simplifies if we work with the Higgs mass squared, since

m_h dm_h = \frac{1}{2} d(m_h^2)

This is one of a bunch of equations—in principle 25—that say how all the coupling constants change. So, they all affect each other in a complicated way as we change k.

By the way, there’s a lot of discussion of whether the Higgs mass squared goes negative at high energies in the Standard Model. Some calculations suggest it does; other people argue otherwise. If it does, this would generally be considered an inconsistency in the whole setup: particles with negative mass squared are tachyons!

I think one could make a lot of progress on these theoretical issues involving the Standard Model if people took them nearly as seriously as string theory or new colliders.

Scott wrote:

So OK, I was misled by the other things I read, and it’s more complicated than m_h^2 being a sum of mostly-canceling contributions (I was pretty sure m_h couldn’t be such a sum, since then a slight change to parameters could make it negative).

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations? If we fix a solution to such equations at a time t_0, our solution will almost always appear “finely tuned” at a faraway time t_1—tuned to reproduce precisely the behavior at t_0 that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

I confess I’d never heard the speculation that m_h^2 could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

John wrote:

Scott wrote:

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.


Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations?

Yes it is, generically.

Physicists are especially interested in theories that have “ultraviolet fixed points”—by which they usually mean values of the parameters that are fixed under the renormalization group flow and attractive as we keep increasing the energy scale. The idea is that these theories seem likely to make sense at arbitrarily high energy scales. For example, pure Yang-Mills fields are believed to be “asymptotically free”—the coupling constant measuring the strength of the force goes to zero as the energy scale gets higher.

But attractive ultraviolet fixed points are going to be repulsive as we reverse the direction of the flow and see what happens as we lower the energy scale.
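That reversal can be seen in a toy one-loop-style running with a made-up coefficient b; this is a sketch, not any actual Standard Model beta function:

```python
# Toy one-loop-style running: alpha(E) = alpha0 / (1 + b*alpha0*ln(E/E0)),
# with a made-up coefficient b.  An asymptotically free coupling has an
# attractive fixed point (alpha -> 0) in the ultraviolet, which becomes
# repulsive when the flow is run the other way.

b = 1.0  # hypothetical beta-function coefficient

def run(alpha0, log_ratio):
    # Evolve alpha from scale E0 to E, with log_ratio = ln(E/E0).
    return alpha0 / (1.0 + b * alpha0 * log_ratio)

# Two couplings that differ by 1% at low energy...
a1, a2 = 0.50, 0.505
up = 20.0  # ln(E/E0): run far into the ultraviolet
u1, u2 = run(a1, up), run(a2, up)
print(u1, u2)                       # nearly identical in the UV

# ...so running downward, nearby UV values separate strongly,
# exactly recovering the original 1% split:
print(run(u1, -up), run(u2, -up))
```

The relative difference shrinks by an order of magnitude on the way up, which is just another way of saying it grows by the same factor on the way down.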

So what gives? Are all ultraviolet fixed points giving theories that require “fine-tuning” to get the parameters we observe at low energies? Is this bad?

Well, they’re not all the same. For theories considered nice, the parameters change logarithmically as we change the energy scale. This is considered to be a mild change. The Standard Model with Higgs may not have an ultraviolet fixed point, but people usually worry about something else: the Higgs mass squared changes quadratically with the energy scale. This is related to the square of the Higgs mass being the really important parameter; in terms of the mass itself, the change is linear.

I think there’s a lot of mythology and intuitive reasoning surrounding this whole subject—probably the real experts could say a lot about it, but they are few, and a lot of people just repeat what they’ve been told, rather uncritically.

If we fix a solution to such equations at a time t_0, our solution will almost always appear “finely tuned” at a faraway time t_1—tuned to reproduce precisely the behavior at t_0 that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

This is something I can imagine Sabine Hossenfelder saying.

I confess I’d never heard the speculation that m_h^2 could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

The experts are still arguing about this; I don’t really know. To show how weird all this stuff is, there’s a review article from 2013 called “The top quark and Higgs boson masses and the stability of the electroweak vacuum”, which doesn’t look crackpotty to me, that argues that the vacuum state of the universe is stable if the Higgs mass and the top quark mass are in the green region, but only metastable otherwise:

The big ellipse is where the parameters were expected to lie in 2012 when the paper was first written. The smaller ellipses only indicate the size of the uncertainty expected after later colliders made more progress. You shouldn’t take them too seriously: they could be centered in the stable region or the metastable region.

An appendix gives an update, which looks like this:

The paper says:

one sees that the central value of the top mass lies almost exactly on the boundary between vacuum stability and metastability. The uncertainty on the top quark mass is nevertheless presently too large to clearly discriminate between these two possibilities.

Then John wrote:

By the way, another paper analyzing problems with the Standard Model says:

It has been shown that higher dimension operators may change the lifetime of the metastable vacuum, \tau, from

\tau = 1.49 \times 10^{714}\ T_U

to

\tau = 5.45 \times 10^{-212}\ T_U

where T_U is the age of the Universe.

In other words, the calculations are not very reliable yet.

And then John wrote:

Sorry to keep spamming you, but since some of my last few comments didn’t make much sense, even to me, I did some more reading. It seems the best current conventional wisdom is this:

Assuming the Standard Model is valid up to the Planck energy, you can tune parameters near the Planck energy to get the observed parameters down here at low energies. So of course the Higgs mass down here is positive.

But, due to higher-order effects, the potential for the Higgs field no longer looks like the classic “Mexican hat” described by a polynomial of degree 4:

with the observed Higgs field sitting at one of the global minima.

Instead, it’s described by a more complicated function, like a polynomial of degree 6 or more. And this means that the minimum where the Higgs field is sitting may only be a local minimum:

In the left-hand scenario we’re at a global minimum and everything is fine. In the right-hand scenario we’re not and the vacuum we see is only metastable. The Higgs mass is still positive: that’s essentially the curvature of the potential near our local minimum. But the universe will eventually tunnel through the potential barrier and we’ll all die.
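In symbols (schematically, with V the effective potential and v our local minimum; normalization conventions aside), the positivity of the Higgs mass only constrains the local curvature, not the global shape:

```latex
m_h^2 \;=\; \left.\frac{d^2 V}{d\phi^2}\right|_{\phi = v} \;>\; 0,
\qquad\text{while possibly}\qquad
V(w) \;<\; V(v) \;\text{ for some } w \gg v .
```

A positive curvature at v is perfectly compatible with a deeper minimum somewhere far away, which is exactly the metastable scenario.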

Yes, that seems to be the conventional wisdom! Obviously they’re keeping it hush-hush to prevent panic.

This paper has tons of relevant references:

• Tommi Markkanen, Arttu Rajantie, Stephen Stopyra, Cosmological aspects of Higgs vacuum metastability.

Abstract. The current central experimental values of the parameters of the Standard Model give rise to a striking conclusion: metastability of the electroweak vacuum is favoured over absolute stability. A metastable vacuum for the Higgs boson implies that it is possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe. The metastability of the Higgs vacuum is especially significant for cosmology, because there are many mechanisms that could have triggered the decay of the electroweak vacuum in the early Universe. We present a comprehensive review of the implications from Higgs vacuum metastability for cosmology along with a pedagogical discussion of the related theoretical topics, including renormalization group improvement, quantum field theory in curved spacetime and vacuum decay in field theory.

Scott wrote:

Once again, thank you so much! This is enlightening.

If you’d like other people to benefit from it, I’m totally up for you making it into a post on Azimuth, quoting from my emails as much or as little as you want. Or you could post it on that comment thread on my blog (which is still open), or I’d be willing to make it into a guest post (though that might need to wait till next week).

I guess my one other question is: what happens to this RG flow when you go to the infrared extreme? Is it believed, or known, that the “low-energy” values of the 25 Standard Model parameters are simply fixed points in the IR? Or could any of them run to strange values there as well?

I don’t really know the answer to that, so I’ll stop here.

But in case you’re worrying now that it’s “possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe”, relax! These calculations are very hard to do correctly. All existing work uses a lot of approximations that I don’t completely trust. Furthermore, they are assuming that the Standard Model is valid up to very high energies without any corrections due to new, yet-unseen particles!

So, while I think it’s a great challenge to get these calculations right, and to measure the Standard Model parameters accurately enough to do them right, I am not very worried about the Universe being taken over by a rapidly expanding bubble of ‘true vacuum’.

by John Baez at February 25, 2019 10:28 PM

February 24, 2019

Michael Schmitt - Collider Blog

Miracles when you use the right metric

I recommend reading, carefully and thoughtfully, the preprint “The Metric Space of Collider Events” by Patrick Komiske, Eric Metodiev, and Jesse Thaler (arXiv:1902.02346). There is a lot here, perhaps somewhat cryptically presented, but much of it is exciting.

First, you have to understand what the Earth Mover’s Distance (EMD) is. This is easier to understand than the Wasserstein metric, of which it is a special case. The EMD is a measure of how different two pdfs (probability density functions) are, and it is rather different from the usual chi-squared or mean integrated squared error because it emphasizes separation rather than overlap. The idea is to look at how much work you have to do to reconstruct one pdf from another, where “reconstruct” means transporting a portion of the first pdf a given distance. You keep track of the “work” you do, which means the amount of area (i.e., “energy” or “mass”) you transport and how far you transport it. The Wikipedia article aptly makes an analogy with suppliers delivering piles of stones to customers. The EMD is the smallest effort required.

The EMD is a rich concept because it allows you to carefully define what “distance” means. In the context of delivering stones, transporting them across a plain and up a mountain are not the same. In this sense, rotating a collision event about the beam axis should “cost” nothing – i.e., be irrelevant – while increasing the energy or transverse momentum should cost something, because it is phenomenologically interesting.
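For intuition, here is a minimal sketch of the transport idea in one dimension: my toy implementation on shared, unit-spaced bins with equal total weight. (The paper's metric of course acts on full events in rapidity-azimuth and includes an energy-difference term.)

```python
# Minimal Earth Mover's Distance for two one-dimensional histograms
# with equal total weight, defined on the same unit-spaced bins.
# In 1D the optimal transport cost is the accumulated running-sum
# difference: surplus stones in early bins must be carried rightward
# bin by bin, and each bin of travel costs |surplus|.

def emd_1d(p, q):
    # p, q: histograms (lists of non-negative weights, equal totals)
    assert abs(sum(p) - sum(q)) < 1e-9
    carried, work = 0.0, 0.0
    for pi, qi in zip(p, q):
        carried += pi - qi       # stones still in transit past this bin
        work += abs(carried)     # moving them one bin costs |carried|
    return work

# Two unit piles one bin apart: moving one unit one step costs 1.
print(emd_1d([1, 0, 0], [0, 1, 0]))   # 1.0
# Unlike overlap-based measures, EMD distinguishes "near miss" from
# "far miss": the same piles two bins apart cost twice as much.
print(emd_1d([1, 0, 0], [0, 0, 1]))   # 2.0
```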

The authors want to define a metric for LHC collision events with the notion that events that come from different processes would be well separated. This requires a definition of “distance” – hence the word “metric” in the title. You have to imagine taking one collision event, consisting of individual particles or perhaps a set of hadronic jets, and transporting pieces of it in order to match some other event. If you have to transport the pieces a great distance, then the events are very different. The authors’ ansatz is a straightforward one, depending essentially on the angular distance θij/R plus a term that takes into account the difference in total energies of the two events. Note: the subscripts i and j refer to two elements from the two different events. The paper gives a very nice illustration for two top quark events (red and blue):

Transformation of one top quark event into another
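To make the idea concrete, here is a toy sketch in that spirit (my simplification, not the authors’ exact definition): if both events contain the same number of particles, all with the same energy, the optimal transport plan is a one-to-one matching, which a brute-force search over permutations can find. The jet radius R and the form of the energy-difference term are illustrative assumptions.

```python
import itertools
import math

# Toy event-to-event distance in the spirit of the ansatz: transport cost
# built from angular separations theta_ij / R, plus a term for the
# difference in total energies. Simplifying assumption (mine, for brevity):
# equal particle counts and uniform energies, so the optimal transport plan
# is a one-to-one matching, found by brute force.

def theta(p1, p2):
    """Angular separation in the (rapidity, phi) plane."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def event_distance(ev1, ev2, R=1.0):
    """ev1, ev2: lists of (rapidity, phi, energy) with equal, uniform energies."""
    n = len(ev1)
    e = ev1[0][2]  # common per-particle energy
    best_matching = min(
        sum(theta(ev1[i], ev2[perm[i]]) for i in range(n))
        for perm in itertools.permutations(range(n))
    )
    E1 = sum(p[2] for p in ev1)
    E2 = sum(p[2] for p in ev2)
    return e * best_matching / R + abs(E1 - E2)

# Identical events sit at zero distance; a rigid shift in (rapidity, phi)
# costs energy times distance moved:
evA = [(0.0, 0.0, 10.0), (1.0, 1.0, 10.0)]
evB = [(0.0, 0.5, 10.0), (1.0, 1.5, 10.0)]
print(event_distance(evA, evA))  # 0.0
```

Rotating evA about the beam axis is a rigid shift in phi, and under this naive Euclidean ground distance it does cost something — the real metric is built so that it does not.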

The first thing that came to mind when I had grasped, with some effort, the suggested metric was that this could be a great classification tool. And indeed it is. The authors show that a k-nearest neighbors algorithm (KNN), straight out of the box, equipped with their notion of distance, works nearly as well as very fancy machine learning techniques! It is crucial to note that there is no training here, no search for a global minimum of some very complicated objective function. You only have to evaluate the EMD, and in their case, this is not so hard. (Sometimes it is.) Here are the ROC curves:

ROC curves. The red curve is the KNN with this metric, and the other curves close by are fancy ML algorithms. The light blue curve is a simple cut on N-subjettiness observables, itself an important theoretical tool
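The “no training” point is worth dwelling on: a KNN classifier needs nothing but a distance function. Here is a toy sketch (the data and the absolute-difference metric are invented stand-ins; in the paper’s setting the distance would be the EMD between events):

```python
# A k-nearest-neighbors classifier: no training, no loss minimization.
# Classification is a majority vote among the k examples nearest to the
# query under whatever distance function you supply.

def knn_predict(query, examples, labels, dist, k=1):
    """Classify `query` by majority vote among its k nearest `examples`."""
    ranked = sorted(range(len(examples)), key=lambda i: dist(query, examples[i]))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy data: 1-D "events", with plain absolute difference as the metric.
xs = [0.1, 0.2, 0.9, 1.1]
ys = ["qcd", "qcd", "top", "top"]
metric = lambda a, b: abs(a - b)
print(knn_predict(0.15, xs, ys, metric))  # "qcd"
print(knn_predict(1.0, xs, ys, metric))   # "top"
```

Swap the metric for the EMD and the labeled examples for simulated collision events, and this is essentially the classifier whose ROC curve appears above in red.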

I imagine that some optimization could be done to close the small gap with respect to the best performing algorithms, for example in improving on the KNN.

The next intriguing idea presented in this paper is the fractal dimension, or correlation dimension, dim(Q), associated with their metric. The interesting bit is how dim(Q) depends on the mass/energy scale Q, which can plausibly vary from a few GeV (the regime of hadronization) up to the mass of the top quark (173 GeV). The authors compare three different sets of jets: from ordinary QCD production, from W bosons decaying hadronically, and from top quarks, because one expects the detailed structure to be distinctly different, at least if viewed with the right metric. And indeed, the variation of dim(Q) with Q is quite different:

dim(Q) as a function of Q for three sources of jets

(Note these jets all have essentially the same energy.) There are at least three take-away points. First, dim(Q) is much higher for top jets than for W and QCD jets, and W is higher than QCD. This hierarchy reflects the relative complexity of the events, and hints at new discriminating possibilities. Second, the curves are more similar at low scales, where the structure involves hadronization, and more different at high scales, which should be dominated by the decay structure. This is borne out by the decay-products-only curves. Finally, there is little difference between the curves based on particles and those based on partons, meaning that the result is somehow fundamental and not an artifact of hadronization itself. I find this very exciting.
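For readers unfamiliar with correlation dimensions: one standard estimator counts the fraction C(Q) of point pairs closer than Q and reads off the local log-log slope d ln C / d ln Q. A rough sketch of that generic estimator (not the paper’s specific implementation), using an ordinary Euclidean metric on a random cloud:

```python
import math
import random

# Correlation dimension from pairwise distances: C(Q) is the fraction of
# point pairs closer than Q, and dim(Q) is the local log-log slope
# d ln C / d ln Q, estimated here by a finite difference.

def corr_count(points, Q, dist):
    n = len(points)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(1 for i, j in pairs if dist(points[i], points[j]) < Q) / len(pairs)

def corr_dim(points, Q, dist, eps=0.05):
    """Finite-difference estimate of d ln C / d ln Q at scale Q."""
    c_lo = corr_count(points, Q * (1 - eps), dist)
    c_hi = corr_count(points, Q * (1 + eps), dist)
    return (math.log(c_hi) - math.log(c_lo)) / (math.log(1 + eps) - math.log(1 - eps))

random.seed(1)
euclid = lambda a, b: math.dist(a, b)
cloud2d = [(random.random(), random.random()) for _ in range(400)]
# Near 2 for a uniform 2-D cloud (edges of the square pull it a bit lower):
print(corr_dim(cloud2d, 0.3, euclid))
```

In the paper the points are events, the distance is the EMD, and the way dim(Q) drifts with Q is exactly the physics content of the figure above.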

The authors develop the correlation dimension dim(Q) further. It is a fact that a pair of jets from W decays boosted to the same degree can be described by a single variable: the ratio of their energies. This can be mapped onto an annulus in an abstract two-dimensional space (see the paper for slightly more detail). The interesting step is to look at how the complexity of individual events, reflected in dim(Q), varies around the annulus:

Embedding of W jets and how dim(Q) varies around the annulus and inside it

The blue events to the lower left are simple, with just a single round dot (jet) in the center, while the red events in the upper right have two dots of nearly equal size. The events in the center are very messy, with many dots of several sizes. So morphology maps onto location in this kinematic plane.

A second illustration is provided, this time based on QCD jets of essentially the same energy. The jet masses will span a range determined by gluon radiation and the hadronization process. Jets at lower mass should be clean and simple while jets at high mass should show signs of structure. This is indeed the case, as nicely illustrated in this picture:

How complex jet substructure correlates with jet mass

This picture is so clear it is almost like a textbook illustration.

That’s it. (There is one additional topic involving infrared divergence, but since I do not understand it I won’t try to describe it here.) The paper is short with some startling results. I look forward to the authors developing these studies further, and for other researchers to think about them and apply them to real examples.

by Michael Schmitt at February 24, 2019 05:16 PM

ZapperZ - Physics and Physicists

Brian Greene on Science, Religion, Hawking, and Trump
I've only found this video recently, even though it is almost a year old already, but it is still interesting, and funny. And strangely enough, he shares my view on religion, especially the fact that people seem to ignore that there are so many of them, each claiming to be the "truth". They can't all be, and thus the biggest threat and challenge to a religion is the existence of another religion.


by ZapperZ at February 24, 2019 03:03 PM

February 23, 2019

Jon Butterworth - Life and Physics

Review: A Map of the Invisible: Journeys Into Particle Physics by Jon Butterworth
Originally posted on Emily Jade Books:
Title: A Map of the Invisible: Journeys Into Particle Physics Author: Jon Butterworth Publisher: Random House UK, Cornerstone Publication Date: 5th October 2017 Pages: 289 Genres: Non-Fiction, Science, Physics Rating: ★★★★★ View on Goodreads…

by Jon Butterworth at February 23, 2019 03:50 PM

February 22, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The joys of mid term

Thank God for mid-term, or ‘reading week’ as it is known in some colleges. Time was I would have spent the week on the ski slopes, but these days I see the mid-term break as a precious opportunity to catch up – a nice relaxed week in which I can concentrate on correcting assessments, preparing teaching notes and setting end-of-semester exams. There is a lot of satisfaction in getting on top of things, if only temporarily!

Then there’s the research. To top the week off nicely, I heard this morning that my proposal to give a talk at the forthcoming Arthur Eddington conference in Paris has been accepted; this is great news as the conference will mark the centenary of Eddington’s measurement of the bending of starlight by the sun, an experiment that provided key evidence in support of Einstein’s general theory of relativity. To this day, some historians question the accuracy of Eddington’s result, while most physicists believe his findings were justified, so it should make for an interesting conference.



by cormac at February 22, 2019 04:45 PM

ZapperZ - Physics and Physicists

Why Does Light Slow Down In A Material?
Don Lincoln tackles one of those internet/online FAQs. This time, it is an explanation of why light slows down in water, or in matter in general.

Certainly, this is the explanation many of us learned in school. However, most of the questions that I get regarding this phenomenon come from people who want to know the explanation at the "quantum" level, i.e. if light is made up of photons, how does one explain this phenomenon in the photon picture? That is the origin of the two "wrong" explanations that he pointed out in the video, i.e. people wanting to use "photons" to explain what is going on here.

Actually, Don Lincoln could have gone a bit further with the explanation and included the fact that this explanation can account for why the speed of light (and index of refraction) inside a material is dependent on the frequency of the light entering the material.

Strangely enough, this actually reminded me of a puzzle that I had when I first encountered this explanation. If the electrons (or the electric dipoles) inside the material oscillate and create an additional EM wave, and the superposition of these two waves gives rise to the final wave that appears to move slower in the material, then what stops this second EM wave from leaving the material? Is it only confined within the material? Do we detect "leakage" of this second or any additional wave due to things oscillating in the material? Because the second wave has a different wavelength, it will be refracted differently at the boundary, so it will no longer be aligned with the original wave after they leave the material, if they all leave the material.

Does anyone know?
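For what it's worth, the core of the superposition picture is easy to demonstrate numerically: adding a small, phase-lagged re-radiated wave to the incident wave yields another sinusoid whose phase is dragged back, and that accumulated lag, layer after layer, is what "slower light" means microscopically. The numbers below are idealized, purely for illustration, not a model of any real medium:

```python
import math

# Superposing the incident wave sin(wt) with a small re-radiated wave
# a*sin(wt - delta) gives a single sinusoid R*sin(wt - phi). The phase lag
# phi of the total is computed here by phasor (vector) addition.

def phase_lag(a, delta):
    """Phase of sin(wt) + a*sin(wt - delta), via phasor addition."""
    x = 1.0 + a * math.cos(delta)   # in-phase component
    y = a * math.sin(delta)         # lagging quadrature component
    return math.atan2(y, x)

# A weak (a = 0.2) re-radiated wave lagging by 90 degrees drags the
# total wave's phase back by about 11 degrees:
print(round(math.degrees(phase_lag(0.2, math.pi / 2)), 1))  # 11.3
```

Since the dipole response, and hence a and delta, depend on frequency, so does the accumulated lag — which is the frequency-dependent index of refraction mentioned above.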

Edit: Funny enough, and maybe because I watched this video, YouTube gave me an old MinutePhysics video that used the bouncing light particle explanation that Don Lincoln says isn't correct.


by ZapperZ at February 22, 2019 03:44 AM

February 21, 2019

Clifford V. Johnson - Asymptotia

Available Now!

Oh, that talk I did at Perimeter? It is available online now. It is all about the process of making the book "The Dialogues", why I did it and how I did it. Along the way, I show some examples and talk about the science they're bringing to life, but this is not primarily a science talk but a talk about talking about science, if you see what I mean.

The talk starts slowly, but bear with me and it warms up swiftly!

YouTube link here. Embedded below:
[...] Click to continue reading this post

The post Available Now! appeared first on Asymptotia.

by Clifford at February 21, 2019 04:10 PM

February 19, 2019

Matt Strassler - Of Particular Significance

A Broad Search for Fast Hidden Particles

A few days ago I wrote a quick summary of a project that we just completed (and you may find it helpful to read that post first). In this project, we looked for new particles at the Large Hadron Collider (LHC) in a novel way, in two senses. Today I’m going to explain what we did, why we did it, and what was unconventional about our search strategy.

The first half of this post will be appropriate for any reader who has been following particle physics as a spectator sport, or in some similar vein. In the second half, I’ll add some comments for my expert colleagues that may be useful in understanding and appreciating some of our results.  [If you just want to read the comments for experts, jump here.]

Why did we do this?

Motivation first. Why, as theorists, would we attempt to take on the role of our experimental colleagues — to try on our own to analyze the extremely complex and challenging data from the LHC? We’re by no means experts in data analysis, and we were very slow at it. And on top of that, we only had access to 1% of the data that CMS has collected. Isn’t it obvious that there is no chance whatsoever of finding something new with just 1% of the data, since the experimenters have had years to look through much larger data sets?

Not only isn’t it obvious, it’s false. In an immense data set, unexpected phenomena can hide, and if you ask the wrong questions, they may not reveal themselves. This has happened before, and it will happen again.

Conventional thinking, before the LHC and still today, is that new phenomena will likely appear in the form of particles with large rest masses. The logic? You can’t easily explain the patterns of particles in the Standard Model without adding some new, slightly heavier ones.  Crudely speaking, this would lead you to look for particles with the largest rest masses you can reach, which is limited by the energy of your machine, because of E=mc². That’s why the LHC was expected to find more than just the Higgs particle, and why the machine proposed for the post-LHC era at CERN is a larger copy of the LHC, with more energy in each collision.

This point of view is based on strong, reasonable arguments; please don’t think I’m criticizing it as foolish. But there have long been cracks in those arguments. Studies of so-called “gauge-mediated” supersymmetry breaking in the mid-1990s revealed a vast world of previously ignored phenomena. The discovery of the cosmological constant in 1998 clearly violated conventional reasoning. The possibility of relatively large unobserved dimensions of space (which we, like blind flies stuck between panes of glass, can’t detect) showed how little confidence we should have in what we think we know. And the model known as the Twin Higgs, in 2005, showed how the new heavy particles might exist, but could hide from the LHC.

In this context, it should not have been a total shock — just a painful one — that so far the LHC has not found anything other than a Higgs boson, using conventional searches.

Indeed, the whole story of my 30-year career has been watching the conventional reasoning gradually break down. I cannot explain why some of my colleagues were so confident in it, but I hope their depression medication is working well. I myself view the evisceration of conventional wisdom as potentially very exciting, as well as very challenging.

And if you’re a bit unconventional, you might wonder whether particles that are insensitive to all the known forces (except gravity) might have an unexpected role to play in explaining the mysteries of particle physics. Importantly, these (and only these) types of particles could have arbitrarily low rest mass and yet have escaped discovery up to now.

This unconventional thinking has made some headway into the LHC community, and there have been quite a number of searches for these “hidden” particles. BUT… not nearly enough of them.  The problem is that (a) such particles might be produced in any of a hundred different ways, and (b) they might disintegrate (“decay”) in any of a hundred different ways, meaning that there are many thousands of different ways they might show up in our experiments. Don’t quote me on the numbers here; they’re for effect. The point is that there’s a lot of work to do if we’re going to be sure that we did a thorough search for hidden particles. And one thing I can assure you: this work has not yet been done.

Fortunately, the number of searches required is a lot smaller than thousands… but still large: dozens, probably. The simplest one of them — one that even theorists could do, and barely qualifying as “unconventional” — is to look for a hidden particle that (1) decays sometimes to a muon/anti-muon pair, and (2) is initially produced with a hard kick… like shrapnel from an explosion. In colloquial terms, we might say that it’s faster than typical, perhaps very close to the speed of light. In Physicese, it has substantial momentum [and at the LHC we have to look for momentum perpendicular (“transverse”) to the proton beams.]

There’s an obvious way to look for fast hidden particles, and it requires only one simple step beyond the conventional. And if you take that step, you have a much better chance of success than if you take the conventional route… so much better that in some cases you might find something, even with 1% of the data, that nobody had yet noticed. And even if you find nothing, you might convince the experts that this is an excellent approach, worth applying on the big data sets.

So that’s why we did it. Now a few more details.

What we did

There’s nothing new about looking for new particles that sometimes decay to muon/anti-muon pairs; several particles have been discovered this way, including the Z boson and the Upsilon (a mini-atom made from a bottom quark/anti-quark pair.) It’s not even particularly unconventional to look for that pair to carry a lot of transverse momentum. But to use these as the only requirements in a search — this is a bit unconventional, and to my knowledge has not been done before.

Why add this unconventional element in the first place? Because, as Kathryn Zurek and I emphasized long ago, the best way to discover low-mass hidden particles is often to look for them in the decay of a heavier particle; it may be the most common way they are produced, and even if it isn’t, it’s often the most spectacular production mechanism, and thus the easiest to detect. Indeed, when a particle is produced in the decay of another, it often acts like shrapnel — it goes flying.

But it’s hard to detect such a particle (let’s call it “V”, because of the shape the muon and anti-muon make in the picture below) if it decays before it has moved appreciably from the location where it was created. In that case, your only clue of its existence is the shrapnel’s shrapnel: the muon and anti-muon that appear when the V itself decays. And by itself, that’s not enough information. Many of the proton-proton collisions at the LHC make a muon and anti-muon pair for a variety of random reasons. How, then, can one find evidence for the V, if its muon/anti-muon pair look just like other random processes that happen all the time?


 (Left) Within a proton-proton collision (yellow), a mini-collision creates a muon and anti-muon (red); as a pair, they are usually slow, in which case their motion is roughly back-to-back.  (Right) In this case, within the mini-collision, a heavy particle (X) is created; it immediately decays to a  V particle plus other particles.  The V is usually moving quickly; when it decays, the resulting muon and anti-muon then travel on paths that form a sort of “V” shape.  Macroscopically, the events appear similar at first glance, but the invariant mass and fast motion of the V’s muon/anti-muon pair can help distinguish V production from a large random background.

One key trick is that the new particle’s muon/anti-muon pair isn’t random: the energies and momenta of the muon and anti-muon always have a very special relationship, characteristic of the rest mass of the V particle. [Specifically: the “invariant mass” of the muon/anti-muon pair equals the V rest mass.] And so, we use the fact that this special relationship rarely holds for the random background, but it always holds for the signal.  We don’t know the V mass, but we do know that the signal itself always reflects it.
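The invariant-mass relationship is simple enough to spell out. For a (nearly massless) muon and anti-muon, m² = (E₁+E₂)² − |p₁+p₂|², and this m always equals the rest mass of the parent V, whatever the V's motion. A quick sketch, with invented momenta in GeV (c = 1) and the muon mass neglected:

```python
import math

# Invariant mass of a pair of (approximately massless) particles:
# m^2 = (E1 + E2)^2 - |p1 + p2|^2. It is the same in every reference
# frame, and equals the rest mass of whatever particle decayed to the pair.

def inv_mass(p1, p2):
    """p1, p2: (px, py, pz) of two massless particles; energies = |p|."""
    e1, e2 = math.hypot(*p1), math.hypot(*p2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    m2 = (e1 + e2) ** 2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

# Back-to-back muons of 45.6 GeV each reconstruct to about 91.2 GeV,
# the Z boson mass:
print(inv_mass((45.6, 0, 0), (-45.6, 0, 0)))
```

For the random background this quantity is spread over a broad range; for the signal it piles up at one value — which is what makes the bump hunt possible at all.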

The problem (both qualitatively and quantitatively) is much like trying to hear someone singing on your car radio while stuck in a deafening city traffic jam. As long as the note being sung is loud and clear enough, and the noise isn’t too insane, it’s possible.

But what if it’s not? Well, you might try some additional tricks. Noise reduction headphones, for example, would potentially cancel some of the traffic sound, making the song easier to hear. And that’s a vague analogy to what we are doing when we require the muon and anti-muon be from a fast particle. Most of the random processes that make muon/anti-muon pairs make a slow pair. [The individual muon and anti-muon move quickly, but together they look as though they came from something slow.] By removing those muon/anti-muon pairs that are slow, in this sense, we remove much of the random “background” while leaving much of the signal from any new particle of the sort we’re looking for.

Searches for particles like V have come mainly in two classes. Either (1) no noise-cancelling tricks are applied, and one just tries to cope with the full set of random muon/anti-muon pairs, or (2) a precise signal is sought in a highly targeted manner, and every trick one can think of is applied. Approach (1), extremely broad but shallow, has the advantage of being very simple, but the disadvantage that what you’re looking for might be lost in the noise. Approach (2), deep but narrow, has the advantage of being extremely powerful for the targeted signal, but the disadvantage that many other interesting signals are removed by one or another of the too-clever tricks. So if you use approach (2), you may need a different search strategy for each target.

Our approach tries to maintain tremendous breadth while adding some more depth. And the reason this is possible is that the trick of requiring substantial momentum can reduce the size of the background “noise” by factors of 5, 25, 100 — depending on how much momentum one demands. These are big factors! [Which is important, because when you reduce background by 100, you usually only gain sensitivity to your signal by 10.] So imposing this sort of requirement can make it somewhat easier to find signals from fast hidden particles.

Using this approach, we were able to search for fast hidden short-lived particles and exclude them without exception across a wide range of masses and down to smaller production rates than anyone has in the past. The precise bounds still need to be worked out in particular cases, but it’s clear that while, as expected, we do less well than targeted searches for particular signals, we often do 2 to 10 times better than a conventional broad search. And that benefit of our method over the conventional approach should apply also to ATLAS and CMS when they search in their full data sets.

Some Additional Technical Remarks for our Experimental Colleagues:

Now, for my expert colleagues, some comments about our results and where one can take them next. I’ll reserve most of my thoughts on our use of open data, and why open data is valuable, for my next post… with one exception.

Isolation Or Not

Open data plays a central role in our analysis at the point where we drop the muon isolation criterion and replace it with something else. We considered two ways of reducing QCD background (muons from heavy quark decays and fake muons):

  • the usual one, requiring the muons and anti-muons be isolated from other particles,
  • dropping the isolation criterion on muons altogether, and replacing it with a cut on the promptness of the muon (i.e. on the impact parameter).

With real data to look at, we were able to establish that by tightening the promptness requirement alone, one can reduce QCD until it is comparable to the Drell-Yan background, with minimal loss of signal. And that means a price of no more than √2 in sensitivity. So if chasing a potential signal whose muons are isolated < 85% of the time (i.e. a dimuon isolation efficiency < 70%) one may be better off dropping the isolation criterion and replacing it with a promptness criterion. [Of course there’s a great deal of in-between, such as demanding one fully isolated lepton and imposing promptness and loose isolation on the other.]
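The √2 bookkeeping behind that 85%/70% figure is easy to make explicit. In a toy model (my numbers, purely illustrative): significance scales as S/√B; dropping isolation is taken to double the background, costing √2, while keeping isolation costs the signal a factor ε² when each muon is isolated with probability ε. The break-even point is ε² = 1/√2, i.e. ε ≈ 0.84 per muon:

```python
import math

# Toy significance comparison for the isolation-vs-promptness trade-off.
# Assumptions (illustrative): significance = S / sqrt(B); dropping isolation
# doubles the background (B -> 2B); keeping isolation keeps a fraction
# eps**2 of the signal when each muon is isolated with probability eps.

def significance(S, B):
    return S / math.sqrt(B)

def keep_isolation_wins(eps, S=100.0, B=1000.0):
    """True if requiring isolation beats dropping it, under the model above."""
    with_iso = significance(S * eps**2, B)   # signal pays eps^2
    without_iso = significance(S, 2.0 * B)   # background doubles
    return with_iso > without_iso

# Break-even near eps^2 = 1/sqrt(2), i.e. eps ~ 0.84 per muon (~70% per pair):
print(keep_isolation_wins(0.90))  # True  -- a well-isolated signal
print(keep_isolation_wins(0.80))  # False -- an often-non-isolated signal
```

The actual comparison in the paper rests on measured QCD and Drell-Yan rates from the open data, not on this stylized factor of two, but the arithmetic of the trade-off is the same.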

It’s easy to think of reasonable signals where each muon is isolated less than 85% of the time; indeed, as Kathryn Zurek and I emphasized back in 2006, a feature of many relevant hidden sector models (“hidden valleys” or “dark sectors”, as they are often known) is that particles emerge in clusters, creating non-standard jets. So it is quite common, in such theories, that the isolation criterion should be dropped (see for example this study from 2007.)

Our Limits

Our main results are model-independent limits on the product of the cross-section for V production, the probability for V to decay to muons, and the acceptance and efficiency for the production of V.  To apply our limits, these quantities must be computed within a particular signal model. We’ve been working so hard to iron out our results that we really haven’t had time yet to look carefully at their detailed implications. But there are a few things already worth noting.

We briefly compared our model-independent results to targeted analyses by ATLAS and CMS for a simple model: a Higgs decaying to two pseudoscalars a, one of which decays to a bottom quark/anti-quark pair and one to a muon/anti-muon pair. (More generally we considered h → Va, where V, decaying to muons, may have a different mass from a.) If the V and a masses are both 40 GeV, our limits lag the targeted limits by a factor of 15-20. But the targeted limits used 10-20 times as much data as we had. One may then estimate that with identical data sets, the difference between our approach and theirs would likely be somewhere between 5 and 10.

Conversely, there are many models to which the targeted limits don’t apply; if a usually decays to gluons, or the a mass is so low that its decay products merge, we still set the same limit, while the published experimental analyses, which require two b-tagged jets, do not.

Thus broad analyses and deep analyses are complementary, and both are needed.

The wiggle at 29.5 GeV and what to do about it

In our limit plots, we have a few excesses, as one would expect with such a detailed mass scan. The only interesting excess (which is not the most significant) is found in a sample with no isolation criterion and a 25 GeV cut on the transverse momentum of the dimuon pair — and what makes it interesting is that it sits at 29.5 GeV (locally 2.7σ), quite near a couple of other curious excesses (from ALEPH data and by CMS) that we’ve heard about in the last couple of years. Probably it’s nothing, but just in case, let me say a few cautious words about it.


Left: the prompt and isolated dimuon samples from the CMS open data, showing the dimuon spectrum after a transverse momentum cut of 25 GeV.  Right: the limits on V production (times acceptance and efficiency) that we obtain for the prompt sample.  The most locally significant excesses appear at masses of 22 and 29.5 GeV, but they are not globally significant.

So let’s imagine there were a new particle (I’ll keep calling it V) with mass of about 29 GeV.  What would we already know about it?

The excess we see at 29.5 GeV does not appear in the isolated sample, which overlaps the prompt sample but is half its size (see the figure above left), and it’s not visible with a transverse momentum cut of 60 GeV.  That means it could only be consistent with a signal with transverse momentum in the 25-50 GeV range, and with often-non-isolated leptons.

If it is produced with bottom (b) quarks all of the time, then there’s no question that CMS’s studies, which require a b-tagged jet, are much more sensitive than we are, and there’s no chance, if they have only a ~3σ excess, that we would see any hint of it.

But if instead V is usually produced with no b quarks, then the CMS analysis could be missing most of the events. We, in our model-independent analysis, would miss far fewer. And in that context, it’s possible we would see a hint in our analysis at the same time CMS sees a hint in theirs, despite having much less data available.

Moreover, since the muons must often be non-isolated (and as mentioned above, there are many models where this happens naturally) other searches that require isolated leptons (e.g. multilepton searches) might have less sensitivity than one might initially assume.

Were this true, then applying our analysis [with little or no isolation requirement, and a promptness cut] to the full Run II data set would immediately reveal this particle. Both ATLAS and CMS would see it easily. So this is not a question we need to debate or discuss. There’s plenty of data; we just need to see the results of our methodology applied to it.

It would hardly be surprising if someone has looked already, in the past or present, by accident or by design.  Nevertheless, there will be considerable interest in seeing a public result.

Looking Beyond This Analysis

First, a few obvious points:

  • We only used muons, but one could add electrons; for a spin-one V, this would help.
  • Our methods would also work for photon pairs (for which there’s more experience because of the Higgs, though not at low mass) as appropriate for spin-0 and spin-2 parent particles.
  • Although we applied our approach below the Z peak and above the Upsilon, there’s no reason not to extend it above or below. We refrained only because we had too little data above the Z peak to do proper fits, and below the Upsilon the muons were so collimated that our results would have been affected by losses in the dimuon trigger efficiency.

More generally, the device of reducing backgrounds through a transverse momentum cut (or a transverse boost cut) is only one of several methods that are motivated by hidden sector models. As I have emphasized in many talks, for instance here, others include:

1) cutting on the number of jets [each additional jet reduces background by a factor of several, depending on the jet pt cut]. This works because Drell-Yan and even dileptonic top have few jets on average, and it works on QCD to a lesser extent. It is sensitive to models where the V is produced by a heavy particle (or particle pair) that typically produces many jets. Isolation is often reduced in this case, so a loose isolation criterion (or none) may be necessary. Of course one could also impose a cut on the number of b tags.

2) cutting on the total transverse energy in the event, often called S_T. This works because in Drell-Yan, S_T is usually of the same order as the dimuon mass. It (probably) doesn’t remove much QCD unless the isolation criterion is applied, and there will still be plenty of top background. It’s best for models where the V is produced by a heavy particle (or particle pair) but the V momentum is often not high, the number of jets is rather small, and the events are rare.

3) cutting on the missing transverse energy in the event, which works because it’s small in Drell-Yan and QCD. This of course is done in many supersymmetry studies and elsewhere, but no model-independent limits on V production have been shown publicly. This is a powerful probe of models where some hidden particles are very long-lived, or often decay invisibly.

All of these approaches can be applied to muon pairs, electron pairs, photon pairs — even tau pairs and bottom-jet pairs, for that matter.  Each of them can provide an advantage in reducing backgrounds and allowing more powerful model-independent searches for resonances. And in each case, there are signals that might have been missed in other analyses that could show up first in these searches.  More theoretical and experimental studies are needed, but in my opinion, many of these searches need to be done on Run II data before there can be any confidence that the floor has been swept clean.

[There is at least one more subtle example: cutting on the number of tracks. Talk to me about the relevant signals if you’re curious.]

Of course, multi-lepton searches are a powerful tool for finding V as well. All you need is one stray lepton — and then you plot the dimuon (or dielectron) mass. Unfortunately we rarely see limits on V + lepton as a function of the V mass, but it would be great if they were made available, because they would put much stronger constraints on relevant models than one can obtain by mere counting of 3-lepton events.

A subtlety: The question of how to place cuts

A last point for the diehards: one of the decisions that searches of this type require is where exactly to make the cuts — or, alternatively, how to bin the data. Should our first cut on the transverse momentum be at 25 or 35 GeV? And having made this choice, where should the next cut be? Without a principle, this choice introduces significant arbitrariness into the search strategy, with associated problems.

This question is itself crude, and surely there are more sophisticated statistical methods for approaching the issue. There is also the question of whether to bin rather than cut. But I did want to point out that we proposed an answer, in the form of a rough principle, which one might call a “2-5” rule, illustrated in the figure below. The idea is that a signal at the 2 sigma level at one cut should be seen at the 5 sigma level after the next, stricter cut is applied; this requires each cut successively reduce the background by roughly a factor of (5/2)².  Implicitly, in this calculation we assume the stricter cut removes no signal. That assumption is less silly than it sounds, because the signals to which we are sensitive often have a characteristic momentum scale, and cuts below that characteristic scale have slowly varying acceptance.  We showed some examples of this in section V of our paper.


Cuts are chosen so that each cut reduces the background by approximately 25/4 compared to the previous cut, based on the 2-5 rule.  This is shown above for our isolated dimuon sample, with our transverse momentum cuts of 0, 25, and 60 GeV.
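The arithmetic of the 2-5 rule is a one-liner. If a cut leaves the signal untouched and scales the background B → B/r, the significance S/√B grows by √r, so taking a 2σ fluctuation to 5σ requires r = (5/2)² = 6.25:

```python
import math

# The "2-5" rule for spacing cuts: a fluctuation worth 2 sigma before a cut
# should reach 5 sigma after the next, stricter cut. If the cut leaves the
# signal untouched and scales the background B -> B / r, the significance
# S / sqrt(B) grows by sqrt(r), so r must be (5/2)**2 = 6.25.

def significance(S, B):
    return S / math.sqrt(B)

S, B = 20.0, 100.0                  # toy yields, for illustration only
before = significance(S, B)         # 2.0 sigma
after = significance(S, B / 6.25)   # background cut by (5/2)^2
print(before, after)                # 2.0 5.0
```

The stated assumption — that the stricter cut removes no signal — is the same one discussed above: it holds approximately for cuts below the signal's characteristic momentum scale.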

by Matt Strassler at February 19, 2019 01:33 PM

February 17, 2019

ZapperZ - Physics and Physicists

Self-Propulsion of Inverse Leidenfrost Droplets Explained
I was not familiar at all with the Leidenfrost effect, even though I've heard the name. So when I read the article, I was fascinated by it. Unlike other people, who want to find the answers to the mysteries of the universe, etc., I went into physics because I was more curious about these small, little puzzles that, in the end, could have a big impact and big outcomes elsewhere. So this Leidenfrost levitation phenomenon is right up my alley, and I'm kicking myself for not reading up on it sooner than this (or maybe they mentioned it in my advanced classical mechanics graduate course, and I overlooked it).

Anyhow, it appears that there is an inverse Leidenfrost self-propulsion, and a group of physicists have managed to provide an explanation for it. The article describes both the Leidenfrost and inverse Leidenfrost propulsion, so you may read it for yourself. The research work [1], unfortunately, is currently available only via subscription. So you either need one yourself, or log in to an organization that has site-wide access to it.

And look at the possible application for this seemingly mundane effect that grew out of a basic curiosity:

Gauthier’s team believe the effect could be used to develop efficient techniques for freezing and transporting biological materials including cells and proteins. With the help of simulations, they hope that this transport could occur with no risk of contamination or heat degradation to the materials.


[1] A. Gauthier et al. PNAS v.116, p.1174

by ZapperZ ( at February 17, 2019 03:21 PM

February 13, 2019

Matt Strassler - Of Particular Significance

Breaking a Little New Ground at the Large Hadron Collider

Today, a small but intrepid band of theoretical particle physicists (professor Jesse Thaler of MIT, postdocs Yotam Soreq and Wei Xue of CERN, Harvard Ph.D. student Cari Cesarotti, and myself) put out a paper that is unconventional in two senses. First, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public. And second, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public.

And no, there’s no error in the previous paragraph.

1) We used a small amount of actual data from the CMS experiment, even though we’re not ourselves members of the CMS experiment, to do a search for a new particle. Both ATLAS and CMS, the two large multipurpose experimental detectors at the Large Hadron Collider [LHC], have made a small fraction of their proton-proton collision data public, through a website called the CERN Open Data Portal. Some experts, including my co-authors Thaler, Xue and their colleagues, have used this data (and the simulations that accompany it) to do a variety of important studies involving known particles and their properties. [Here’s a blog post by Thaler concerning Open Data and its importance from his perspective.] But our new study is the first to look for signs of a new particle in this public data. While our chances of finding anything were low, we had a larger goal: to see whether Open Data could be used for such searches. We hope our paper provides some evidence that Open Data offers a reasonable path for preserving priceless LHC data, allowing it to be used as an archive by physicists of the post-LHC era.

2) Since only a tiny fraction of CMS’s data was available to us, about 1% by some count, how could we have done anything useful compared to what the LHC experts have already done? Well, that’s why we examined the data in a slightly unconventional way (one of several methods that I’ve advocated for many years, but which has not been used in any public study). This allowed us to explore some ground that no one had yet swept clean, and even gave us a tiny chance of an actual discovery! But the larger scientific goal, absent a discovery, was to prove the value of this unconventional strategy, in hopes that the experts at CMS and ATLAS will use it (and others like it) in future. Their chance of discovering something new, using their full data set, is vastly greater than ours ever was.

Now don’t all go rushing off to download and analyze terabytes of CMS Open Data; you’d better know what you’re getting into first. It’s worthwhile, but it’s not easy going. LHC data is extremely complicated, and until this project I’ve always been skeptical that it could be released in a form that anyone outside the experimental collaborations could use. Downloading the data and turning it into a manageable form is itself a major task. Then, while studying it, there are an enormous number of mistakes that you can make (and we made quite a few of them) and you’d better know how to make lots of cross-checks to find your mistakes (which, fortunately, we did know; we hope we found all of them!) The CMS personnel in charge of the Open Data project were enormously helpful to us, and we’re very grateful to them; but since the project is new, there were inevitable wrinkles which had to be worked around. And you’d better have some friends among the experimentalists who can give you advice when you get stuck, or point out aspects of your results that don’t look quite right. [Our thanks to them!]

All in all, this project took us two years! Well, honestly, it should have taken half that time — but it couldn’t have taken much less than that, with all we had to learn. So trying to use Open Data from an LHC experiment is not something you do in your idle free time.

Nevertheless, I feel it was worth it. At a personal level, I learned a great deal more about how experimental analyses are carried out at CMS, and by extension, at the LHC more generally. And more importantly, we were able to show what we’d hoped to show: that there are still tremendous opportunities for discovery at the LHC, through the use of (even slightly) unconventional model-independent analyses. It’s a big world to explore, and we took only a small step in the easiest direction, but perhaps our efforts will encourage others to take bigger and more challenging ones.

For those readers with greater interest in our work, I’ll put out more details in two blog posts over the next few days: one about what we looked for and how, and one about our views regarding the value of open data from the LHC, not only for our project but for the field of particle physics as a whole.

by Matt Strassler at February 13, 2019 01:43 PM

February 12, 2019

Robert Helling - atdotde

Bohmian Rapsody

Visits to a Bohmian village

Over all of my physics life, I have been under the local influence of some Gaul villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series, which are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes, as well as when thinking about foundations, see below) and even some other physicists are starting to use their language.

Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll holding long-term positions and many others frequently visiting. As you probably know, a bit later I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and about which I am still a bit proud.

Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). Once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a notes paper that I put on the arXiv today.

What Bohmians don't like about the usual (termed Copenhagen, for lack of a better word) approach to quantum mechanics is that you are not allowed to talk about so many things, and that the observer plays such a prominent role by determining via a measurement what aspect is real and what is not. They think this is far too subjective. So rather, they want quantum mechanics to be about particles that are then allowed to follow trajectories.

"But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term has a conserved current

$$j = \frac{1}{i}\left(\bar\psi\nabla\psi - (\nabla\bar\psi)\psi\right).$$

So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field

$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$

What this buys you is that if you have a bunch of particles that are initially distributed like the probability density and follow the flow of the velocity field, they will also later be distributed like $|\psi |^2$.

What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double-slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never come to a situation where any experimental outcome differs from what the Copenhagen prescription predicts.

The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say, this is already a show stopper (given what we know about quantum field theory whose raison d'être is locality) but let's ignore this aesthetic concern.

What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ, and the like work out, which are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described as local probability densities). The problem with those is that they are often phrased in terms of spin degrees of freedom, which have Hamiltonians that are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.

But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, to get something non-commuting, which you need to make use of entanglement as a specific quantum resource). So here is my favourite example:

You start with two particles each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe for each particle if it is in the left or the right half of the interval.

From symmetry considerations (details in my paper) you can see that each particle is found with equal probability on the left and on the right. But the outcomes are anti-correlated when measured at the same time. When measured at different times, however, the correlation oscillates like the cosine of the time difference.
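For concreteness (this is my reconstruction of the construction; the precise details are in Helling's paper), an entangled stationary state built from the two lowest box levels $\varphi_1$, $\varphi_2$ would be the antisymmetric combination

$$\Psi(x,y) = \frac{1}{\sqrt{2}}\left(\varphi_1(x)\varphi_2(y) - \varphi_2(x)\varphi_1(y)\right),$$

which is an eigenstate of the Hamiltonian because both terms carry the same total energy $E_1+E_2$. Correlating position measurements separated by a time $\Delta t$ then involves the relative phase $e^{-i(E_2-E_1)\Delta t}$ between the two levels, which is where a correlation oscillating like $\cos\left((E_2-E_1)\Delta t\right)$ would come from.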

From the Bohmian perspective, for the static initial state, the velocity field vanishes everywhere; nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box (how the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom and, remember, everything depends on everything; but somehow it has to work, since you want to produce the correlations that are predicted by the Copenhagen approach).

The trajectory of the second particle depending on its initial position

This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way.

And here is where it becomes problematic: If you could see the Bohmian particle moving, you could decide whether the other particle has been measured (it would oscillate) or not (it would stand still). No matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude that you must not be able to look at the second particle and see if it oscillates or not.

Bohmians tell you you cannot, because all you are supposed to observe about the particles are their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you fail, because the first observation disturbs the particle so much that it invalidates the original state.

As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohmian literature, but I found it nowhere so clearly demonstrated as in this two-particle system).

My conclusion is that the Bohm approach adds something (the particle positions) to the wave function but then in the end tells you you are not allowed to observe this or have any knowledge of this beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody", yet, you should, even if there are good reasons to criticise the dramatisation of real events.

by Robert Helling ( at February 12, 2019 07:20 AM

February 11, 2019

Jon Butterworth - Life and Physics

What to focus on. Where to look for the science.
“Broken Symmetries” is an art exhibition at FACT in Liverpool. Spread over galleries on two levels, it provides an audio and visual immersion in a strange frontier of knowledge and its echoes and resonances in wider culture. The artwork is … Continue reading

by Jon Butterworth at February 11, 2019 10:08 AM

February 07, 2019

Axel Maas - Looking Inside the Standard Model

Why there won't be warp travel in times of global crises
One of the questions I get most often at outreach events is: "What about warp travel?", or some other wording for faster-than-light travel. Something which makes interstellar travel possible, or at least viable.

Well, the first thing I can say is that there is nothing which excludes it. Of course, within our well established theories of the world it is not possible. Neither the standard model of particle physics, nor general relativity, when constrained to the matter we know of, allows it. Thus, whatever describes warp travel, it needs to be a theory, which encompasses and enlarges what we know. Can a quantized combination of general relativity and particle physics do this? Perhaps, perhaps not. Many people think about it really hard. Mostly, we run afoul of causality when trying.

But these are theoretical ideas. And even if some clever team comes up with a theory which allows warp travel, this does not mean that this theory is actually realized in nature. Just because we can make it mathematically consistent does not guarantee that it is realized. In fact, we have many, many more mathematically consistent theories than are realized in nature. Thus, it is not enough to just construct a theory of warp travel. Which, as noted, we have so far failed to do.

No, what we need is to figure out whether it really happens in nature. So far, this has not happened. Neither did we observe it in any human-made experiment, nor do we have any observation in nature which unambiguously points to it. And this is what makes it really hard.

You see, the universe is a tremendous place, which is unbelievably large, and essentially three times as old as the whole planet Earth. Not to mention humanity. Extremely powerful events happen out there. These range from quasars, effectively like a whole galactic core on fire, to black-hole collisions and supernovas. These events put out an enormous amount of energy. Much, much more than even our sun generates. Hence, anything short of a big bang is happening all the time in the universe. And we see the results. The Earth is hit constantly by particles with much, much higher energies than we can produce in any experiment. And it has been since the Earth came into being. Incidentally, this also tells us that nothing we can do at a particle accelerator can really be dangerous. Whatever we do there has happened so often in our Earth's atmosphere that it would have killed this planet long before humanity entered the scene. The only bad thing about it: we never know when and where such an event happens. And the rate is also not that high; it is only that the Earth has already existed for so very long. And is big. Hence, we cannot use this to make controlled observations.

Thus, whatever could happen, happens out there. In the universe. We see some things out there which we cannot explain yet, e.g. dark matter. But by and large a lot works as expected. Especially, we do not see anything which requires warp travel to explain. Or anything else remotely suggesting something happening faster than the speed of light. Hence, if something like faster-than-light travel is possible, it is neither common nor easily achieved.

As noted, this does not mean it is impossible. Only that if it is possible, it is very, very hard. Especially, this means it will be very, very hard to make an experiment to demonstrate the phenomenon. Much less to actually make it a technology, rather than a curiosity. This means, a lot of effort will be necessary to get to see it, if it is really possible.

What is a lot? Well, CERN is a bit. But human, or even robotic, space exploration is an entirely different category, some one to two orders of magnitude more. Probably, we would need to combine such space exploration with particle physics to really get to it. Possibly the best example for such an endeavor is the future LISA project to measure gravitational waves in space. It is perhaps even our current best bet to observe any hints of faster-than-light phenomena, aside from bigger particle physics experiments on Earth.

Do we have the technology for such a project? Yes, we do. We have had it for roughly a decade. But it will likely take at least one more decade to have LISA flying. Why not now? Resources. Or, often put equivalently, costs.

And here comes the catch. I said it is our best chance. But this does not mean it is a good chance. In fact, even if faster-than-light is possible, I would be very surprised if we saw it with this mission. There are probably a few more generations of technology, and another order of magnitude of resources, needed before we could see something, given what I know about how well everything currently fits. Of course, there can always be surprises with every little step further. I am sure we will discover something interesting, possibly spectacular, with LISA. But I would not bet anything valuable that it will have to do with warp travel.

So, you see, we have to scale up if we want to go to the stars. This means investing resources. A lot of them. But resources are needed to fix things on Earth as well. And the more we damage, the more we need to fix, and the less we have left to get to the stars. Right now, humanity is moving into a state of perpetual crisis. The damage wrought by the climate crisis will require enormous efforts to mitigate, much more to stop the downhill trajectory. As a consequence of the climate crisis, as well as social inequality, more and more conflicts will create further damage. Isolationism, both national and social, driven by fear of the oncoming crises, will also soak up tremendous amounts of resources. And, finally, a hostile environment towards diversity, and putting individual gains above common gains, creates a climate which is hostile to anything new and different in general, and to science in particular. Hence, we will not be able to use our resources, or the ingenuity of the human species as a whole, to get to the stars.

Thus, I am not hopeful to see faster-than-light travel in my lifetime, or that of the next generation. Such a challenge, if it is possible at all, will require a common effort of our species. That would be truly one worthy endeavour to put our minds to. But right now, as a scientist, I am much more occupied with protecting a world in which science is possible, both metaphorically and literally.

But, there is always hope. If we rise up and decide to change fundamentally. If we put the well-being of us as a whole first. Then I would be optimistic that we can get out there. Well, at least as fast as nature permits. However fast that may be.

by Axel Maas ( at February 07, 2019 09:17 AM

February 06, 2019

Clifford V. Johnson - Asymptotia

At the Perimeter

In case you were putting the kettle on to make tea for watching the live cast.... Or putting on your boots to head out to see it in person, my public talk at the Perimeter Institute has been postponed to tomorrow! It'll be just as graphic! Here's a link to the event's details.

-cvj Click to continue reading this post

The post At the Perimeter appeared first on Asymptotia.

by Clifford at February 06, 2019 11:16 PM

February 04, 2019

ZapperZ - Physics and Physicists

When Condensed Matter Physics Became King
If you are one of those, or know one of those, who think Physics is only the LHC and high-energy physics, and String Theory, etc., you need to read this excellent article.

When I first read it in my hard-copy version of Physics Today, the first thing that came across my mind after I put it down was that this should be a must-read for the general public, but especially for high-school students and all of those bushy-tailed and bright-eyed incoming undergraduate students in physics. This is because they need to be introduced to a field of study in physics that has become the "king" of physics. Luckily, someone pointed out to me that this article is available online.

Reading the article, I found it hard, but understandable, to imagine the resistance that existed to incorporating the "applied" side of physics into a physics professional organization. But it was at a time when physics was still seen as something esoteric, with the grandiose idea of "understanding our world" in a very narrow sense.

Solid state’s odd constitution reflected changing attitudes about physics, especially with respect to applied and industrial research. A widespread notion in the physics community held that “physics” referred to natural phenomena and “physicist” to someone who deduced the rules governing them—making applied or industrial researchers nonphysicists almost by definition. But suspicion of that view grew around midcentury. Stanford University’s William Hansen, whose own applied work led to the development of the klystron (a microwave-amplifying vacuum tube), reacted to his colleague David Webster’s suggestion in 1943 that physics was defined by the pursuit of natural physical laws: “It would seem that your criterion sets the sights terribly high. How many physicists do you know who have discovered a law of nature? … It seems to me, this privilege is given only to a very few of us. Nevertheless the work of the rest is of value.”

Luckily, the APS did form the Division of Solid State Physics, and it quickly exploded from there.

By the early 1960s, the DSSP had become—and has remained since—the largest division of APS. By 1970, following a membership drive at APS meetings, the DSSP enrolled more than 10% of the society’s members. It would reach a maximum of just shy of 25% in 1989. Membership in the DSSP has regularly outstripped the division of particles and fields, the next largest every year since 1974, by factors of between 1.5 and 2.
This is a point that many people outside of physics do not realize. They, and the media, often make broad statements about physics and physicists based on what is happening in, say, elementary particle physics, or String theory, or many of those other fields, when in reality those areas of physics are not even a valid representation of the field of physics, because they are not the majority. Using, say, what is going on in high-energy physics to represent the whole field of physics is similar to using the city of Los Angeles as a valid representation of the United States. It is neither correct nor accurate!

This field, which has now morphed into Condensed Matter Physics, is vibrant and encompasses such a huge variety of studies that the amount of work coming out of it each week or each month is mindboggling. It is the only field of physics that has two separate sections in Physical Review Letters. Physical Review B comes out four (FOUR) times a month. Only Phys. Rev. D has more than one edition per month (twice a month). The APS March Meeting, in which the Division of Condensed Matter Physics participates, continues to be the biggest annual physics conference in the world.

Everything about this field of study is big, important, high-impact, wide-ranging, and fundamental. But of course, as I've said multiple times on here, it isn't sexy for most of the public and the media. So it never became the poster boy for physics, even though its practitioners make up the largest percentage of practicing physicists. Doug Natelson said as much in commenting on condensed matter physics's image problem:

Condensed matter also faces a perceived shortfall in inherent excitement. Black holes sound like science fiction. The pursuit of the ultimate reductionist building blocks, whether through string theory, loop quantum gravity, or enormous particle accelerators, carries obvious profundity. Those topics are also connected historically to the birth of quantum mechanics and the revelation of the power of the atom, when physicists released primal forces that altered both our intellectual place in the world and the global balance of power.

Compared with this heady stuff, condensed matter can sound like weak sauce: “Sure, they study the first instants after the Big Bang, but we can tell you why copper is shiny.” The inferiority complex that this can engender leads to that old standby: claims of technological relevance (for example, “this advance will eventually let us make better computers”). A trajectory toward applications is fine, but that tends not to move the needle for most of the public, especially when many breathless media claims of technological advances don’t seem to pan out.

It doesn’t have to be this way. It is possible to present condensed-matter physics as interesting, compelling, and even inspiring. Emergence, universality, and symmetry are powerful, amazing ideas. The same essential physics that holds up a white dwarf star is a key ingredient in what makes solids solid, whether we’re talking about a diamond or a block of plastic. Individual electrons seem simple, but put many of them together with a magnetic field in the right 2D environment and presto: excitations with fractional charges. Want electrons to act like ultrarelativistic particles, or act like their own antiparticles, or act like spinning tops pointing in the direction of their motion, or pair up and act together coherently? No problem, with the right crystal lattice. This isn’t dirt physics, and it isn’t squalid.

It is why I keep harping on the historical fact that Phil Anderson's work on a condensed matter system became the impetus for the Higgs mechanism in elementary particle physics, and that some of the most exotic consequences of QFT are found in complex materials (Majorana fermions, magnetic monopoles, etc.).

So if your view of physics has been just String theory, the LHC, etc., well, keep them, but include their BIG and more influential sibling, condensed matter physics, which not only contains quite a number of important, fundamental results but also has a direct impact on your everyday life. It truly is the "King" of physics.


by ZapperZ ( at February 04, 2019 03:13 PM

February 01, 2019

Clifford V. Johnson - Asymptotia

Black Holes and Time Travel in your Everyday Life

Oh, look what I found! It is my talk "Black Holes and Time Travel in your Everyday Life", which I gave as the Klopsteg Award lecture at AAPT back in July. Someone put it on YouTube. I hope you enjoy it!

Two warnings: (1) Skip to about 6 minutes to start, to avoid all the embarrassing handshaking and awarding and stuff. (2) There's a bit of early morning slowness + jet lag in my delivery here and there, so sorry about that. :)


Abstract: [...] Click to continue reading this post

The post Black Holes and Time Travel in your Everyday Life appeared first on Asymptotia.

by Clifford at February 01, 2019 07:38 PM

Clifford V. Johnson - Asymptotia

Black Market of Ideas

As a reminder, today I'll be at the natural history museum (LA) as part of the "Night of Ideas" event! I'll have a number of physics demos with me and will be at a booth/table (in the Black Market of Ideas section) talking about physics ideas underlying our energy future as a species. I'll sign some books too! Come along!

Here's a link to the event:

Click to continue reading this post

The post Black Market of Ideas appeared first on Asymptotia.

by Clifford at February 01, 2019 07:35 PM

January 18, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Back to school

It was back to college this week, a welcome change after some intense research over the hols. I like the start of the second semester, there’s always a great atmosphere around the college with the students back and the restaurants, shops and canteens back open. The students seem in good form too, no doubt enjoying a fresh start with a new set of modules (also, they haven’t yet received their exam results!).

This semester, I will teach my usual introductory module on the atomic hypothesis and early particle physics to second-years. As always, I’m fascinated by the way the concept of the atom emerged from different roots and different branches of science: from philosophical considerations in ancient Greece to considerations of chemistry in the 18th century, from the study of chemical reactions in the 19th century to considerations of statistical mechanics around the turn of the century. Not to mention a brilliant young patent clerk who became obsessed with the idea of showing that atoms really exist, culminating in his famous paper on Brownian motion. But did you know that Einstein suggested at least three different ways of measuring Avogadro’s constant? And each method contributed significantly to establishing the reality of atoms.


 In 1908, the French physicist Jean Perrin demonstrated that the motion of particles suspended in a liquid behaved as predicted by Einstein’s formula, derived from considerations of statistical mechanics, giving strong support for the atomic hypothesis.  

One change this semester is that I will also be involved in delivering a new module,  Introduction to Modern Physics, to first-years. The first quantum revolution, the second quantum revolution, some relativity, some cosmology and all that.  Yet more prep of course, but ideal for anyone with an interest in the history of 20th century science. How many academics get to teach interesting courses like this? At conferences, I often tell colleagues that my historical research comes from my teaching, but few believe me!


Then of course, there’s also the module Revolutions in Science, a course I teach on Mondays at University College Dublin; it’s all go this semester!

by cormac at January 18, 2019 04:15 PM

January 17, 2019

Robert Helling - atdotde

Has your password been leaked?
Today, there was news that a huge database containing 773 million email address / password pairs had become public. On Have I Been Pwned you can check whether any of your email addresses is in this database (or any similar one). I bet it is (mine are).

These lists are very probably the source of the spam emails that have been circulating for a number of months, in which the spammer claims to have broken into your account and tries to prove it by telling you your password. Hopefully, this is only a years-old LinkedIn password that you changed aeons ago.

To make sure, you actually want to search not for your email address but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small perl script that checks for your password without telling anybody, by doing the calculation locally on your computer. You can find it on GitHub.
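The idea behind such a local check is Have I Been Pwned's k-anonymity "range" API: you hash the password with SHA-1 and send only the first five hex characters of the digest; the server returns all breached hash suffixes with that prefix, and the comparison happens on your machine. A Python sketch of the same idea (the author's actual script is in perl):

```python
# Only the first five characters of the SHA-1 digest ever leave your machine.
import hashlib
import urllib.request

def hash_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how often the password appears in known breaches (0 if never)."""
    prefix, suffix = hash_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

Running `pwned_count` on a common choice like "password" returns a count in the millions; a strong, unique password should return 0.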

by Robert Helling at January 17, 2019 07:43 PM

January 15, 2019

Jon Butterworth - Life and Physics

Conceptual design for a post-LHC future circular collider at CERN
This conceptual design report came out today. It looks like an impressive amount of work and although I am familiar with some of its contents, it will take time to digest, and I will undoubtedly be writing more about it … Continue reading

by Jon Butterworth at January 15, 2019 10:08 PM

January 12, 2019

Sean Carroll - Preposterous Universe

True Facts About Cosmology (or, Misconceptions Skewered)

I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.

  1. The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
  2. The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
  3. People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
  4. The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
  5. Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
  6. The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
  7. The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
  8. If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
  9. The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
  10. The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
  11. Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
  12. The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
  13. Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
  14. We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
  15. Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
  16. Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
  17. We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
  18. We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
  19. Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.

Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.

by Sean Carroll at January 12, 2019 08:31 PM

January 09, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Physical Methods of Hazardous Wastewater Treatment

Hazardous waste comprises all types of waste with the potential to cause harmful effects on the environment and on human and animal health. It is generated from multiple sources, including industries, commercial properties and households, and comes in solid, liquid and gaseous forms.

There are different local and state laws regarding the management of hazardous waste. Irrespective of your jurisdiction, management starts with proper hazardous waste collection from your Utah property and runs through to its eventual disposal.

Once waste has been collected using the appropriate structures recommended by environmental protection authorities, there are many methods of treating it. One of the most common and least expensive is physical treatment. The following are the physical treatment options for hazardous wastewater.


Sedimentation

In this treatment technique, the waste is separated into liquid and solid components. The solid waste particles in the liquid are left to settle at the bottom of a container through gravity. Sedimentation is done in either a continuous or a batch process.

Continuous sedimentation is the standard option and is generally used for the treatment of large quantities of liquid waste. It is often used to separate heavy metals in the steel, copper and iron industries, and fluoride in the aluminum industry.


Electrodialysis

This treatment method separates wastewater into a depleted stream and a concentrated aqueous stream. The wastewater passes through alternating cation- and anion-permeable membranes in a compartment.

A direct current is then applied, driving cations and anions in opposite directions. This results in one stream with an elevated concentration of ions and another with a low ion concentration.

Electrodialysis is used to enrich or deplete chemical solutions in manufacturing, to desalt whey in the food sector and to generate potable water from saline water.

Reverse Osmosis


This uses a semi-permeable membrane for the separation of dissolved organic and inorganic elements in wastewater. The wastewater is forced through the semi-permeable membrane by pressure, and larger molecules are filtered out by the small membrane pores.

Polyamide membranes have largely replaced polysulphone ones for wastewater treatment nowadays owing to their ability to withstand liquids with high pH. Reverse osmosis is usually used in the desalinization of brackish water and treating electroplating rinse waters.

Solvent Extraction

This involves separating the components of a liquid through contact with an immiscible liquid. The most common solvents used in this treatment technique are supercritical fluids (SCFs), mainly CO2.

These fluids exist above their critical temperature and pressure, and have a low density and fast mass transfer when mixed with other liquids. Solvent extraction is used for extracting oil from the emulsions used in steel and aluminum processing, and organohalide pesticides from treated soil.

Supercritical ethane as a solvent is also useful for the purification of waste oils contaminated with water, metals and PCBs.

Some companies and households have tried handling their hazardous wastewater themselves to minimize costs. In most cases, this puts their employees at risk, since the "treated" water is still often dangerous to human health, the environment and their machines.

The physical processes above, sometimes combined with chemical treatment techniques, are reliable options for truly safe wastewater.

The post Physical Methods of Hazardous Wastewater Treatment appeared first on None Equilibrium.

by Nonequilibrium at January 09, 2019 11:35 PM

January 08, 2019

Axel Maas - Looking Inside the Standard Model

Taking your theory seriously
This blog entry is somewhat different than usual. Rather than writing about some particular research project, I will write about a general vibe, directing my research.

As usual, research starts with a 'why?'. Why does something happen, and why does it happen in this way? Being the theoretician that I am, this question often equates with wanting to have mathematical description of both the question and the answer.

Already very early in my studies I ran into peculiar problems with this desire. It usually left me staring at the words '...and then nature made a choice', asking myself: how could it? A simple example of the problem is a magnet. You all know that a magnet has a north pole and a south pole, and that these two are different. So, how is it decided which end of the magnet becomes the north pole and which the south pole? At the beginning you always get to hear that this is a random choice, and that one particular choice just happens to be made. But this is not really the answer. If you dig deeper, you find that the metal of any magnet was originally very hot, likely liquid. In this situation, a magnet is not really magnetic. It becomes magnetic when it is cooled down and becomes solid. At some temperature (the so-called Curie temperature), it becomes magnetic, and the poles emerge. And here this apparent miracle of a 'choice by nature' happens. Only that it does not. The magnet does not cool down all by itself; it has a surrounding. And the surrounding can have magnetic fields as well, e.g. the Earth's magnetic field. And the decision about what is south and what is north is made by how the magnet forms relative to this field. And thus, there is a reason. We do not see it directly, because magnets have usually been moved since then, and this correlation is no longer obvious. But if we were to heat the magnet again, and let it cool down again, we could observe it.
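This tie-breaking by the surroundings can be seen in a toy calculation (a standard mean-field caricature of a ferromagnet, not anything from the author's research): below the Curie temperature, the self-consistency equation m = tanh((zJ·m + h)/T) has two symmetric magnetized solutions at h = 0, and an arbitrarily small external field h picks one of them.

```python
import math

def magnetization(h, T=1.0, zJ=3.0, steps=2000):
    """Solve the mean-field self-consistency m = tanh((zJ*m + h)/T) by iteration."""
    m = 0.0
    for _ in range(steps):
        m = math.tanh((zJ * m + h) / T)
    return m

# Below the Curie temperature (here T < zJ), a tiny field from the
# surroundings decides the sign of the magnetization:
m_up = magnetization(h=+0.01)    # small external field pointing "up"
m_down = magnetization(h=-0.01)  # same magnitude, pointing "down"
```

Starting from zero magnetization, the iteration runs away to a nearly saturated value whose sign is set entirely by the tiny field h, even though h itself is far too weak to produce such a magnetization on its own.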

But this immediately leaves you with the question of where the Earth's magnetic field came from, and how it got its direction. Well, it comes from the liquid metallic core of the Earth, and aligns, more or less, along or opposite to the rotation axis of the Earth. Thus, the question is how the rotation axis of the Earth came about, and why the Earth has a liquid core. Both questions are well understood, and the answers arise from how the Earth formed billions of years ago, due to the mechanics of the rotating disk of dust and gas which formed around our fledgling sun. Which in turn comes from the dynamics on even larger scales. And so on.

As you see, whenever one had the feeling of a random choice, it was actually the outside of what we looked at so far, which made the decision. So, such questions always lead us to include more into what we try to understand.

'Hey', I can now literally hear people who are a bit more acquainted with physics say, 'doesn't quantum mechanics make really random choices?'. The answer to this is yes and no in equal measure. This is probably one of the more fundamental problems of modern physics. Yes, our description of quantum mechanics, as we teach it in courses, has intrinsic randomness. But when does it occur? Exactly: whenever we jump outside of the box we describe in our theory. Real, random choice is encountered in quantum physics only whenever we transcend the system we are considering, e.g. by an external measurement. This is one of the reasons why this is known as the 'measurement problem'. If we stay inside the system, this does not happen. But at the expense that we lose contact with the things, like an ordinary magnet, which we are used to. The objects we are describing become obscure, and we talk about wave functions and the like. Whenever we try to extend our description to also include the measurement apparatus, on the other hand, we again get something which is strange, but not as random as it originally looked. Although talking about it becomes almost impossible beyond any mathematical description. And it is not really clear what random even means anymore in this context. This problem is one of the big ones in the conception of physics. While there is a relation to what I am talking about here, this question can still be separated.

And in fact, it is not this divide that I want to talk about, at least not today. I just wanted to get this type of 'quantum choice' out of the way. Rather, I want to get at something else.

If we stay inside the system we describe, then everything becomes calculable. Our mathematical description is closed in the sense that after fixing a theory, we can calculate everything. Well, at least in principle, in practice our technical capabilities may limit this. But this is of no importance for the conceptual point. Once we have fixed the theory, there is no choice anymore. There is no outside. And thus, everything needs to come from inside the theory. Thus, a magnet in isolation will never magnetize, because there is nothing which can make a decision about how. The different possibilities are caught in an eternal balanced struggle, and none can win.

Which makes a lot of sense, if you take physical theories really seriously. After all, one of the basic tenets is that there is no privileged frame of reference: 'everything is relative'. If there is nothing else, nothing can happen which creates an absolute frame of reference, without violating the very principles on which we found physics. If we take our own theories seriously, and push them to the bitter end, this is what needs to come about.

And here I come back to my own research. One of the driving principles has been to really push this seriousness, and to ask what it implies if one really, really takes it seriously. Of course, this is based on the assumption that the theory is (sufficiently) adequate, but that is everyday uncertainty for a physicist anyhow. This requires me to separate very, very carefully what is really inside, and what is outside. And this leads to quite surprising results. Essentially most of my research on Brout-Englert-Higgs physics, as described in previous entries, comes about because of this approach. And it leads partly to results quite at odds with common lore, which often means a lot of work to convince people. Even when the mathematics is valid and correct, interpretation issues are much more open to debate when it comes to implications.

Is this point of view adequate? After all, we know for sure that we are not yet finished: our theories do not contain all there is, and there is an 'outside', however it may look. And I agree. But I think it is very important that we distinguish very clearly what is an outside influence and what is not. And as a first step to establishing what is outside, and thus, in a sense, 'new physics', we need to understand what our theories say when they are taken in isolation.

by Axel Maas at January 08, 2019 10:15 AM

January 06, 2019

Jacques Distler - Musings

TLS 1.0 Deprecation

You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.

by Jacques Distler at January 06, 2019 06:12 AM

The n-Category Cafe

TLS 1.0 Deprecation

You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.

by Jacques Distler at January 06, 2019 06:12 AM

January 05, 2019

The n-Category Cafe

Applied Category Theory 2019 School

Dear scientists, mathematicians, linguists, philosophers, and hackers:

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School. It will begin February 18, 2019 and culminate in a meeting in Oxford, July 22–26. Applications are due January 30th; see below for details.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools. These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members is as varied as the systems being studied.

The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

Who should apply

Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of monoidal category for example—is encouraged.

We will consider advanced undergraduates, PhD students, and post-docs. We ask that you commit to the full program as laid out below.

Instructions for how to apply can be found below the research topic descriptions.

Senior research mentors and their topics

Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

Miriam Backens

Title: Simplifying quantum circuits using the ZX-calculus

Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics. A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set. In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.

Background reading:

  1. Matthew Amy, Jianxin Chen, Neil J. Ross. A finite presentation of CNOT-Dihedral operators.
  2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics.

Tobias Fritz

Title: Partial evaluations, the bar construction, and second-order stochastic dominance

Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1. In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.
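As a toy illustration of the arithmetic idea (nothing like the categorical machinery of the project, just a brute-force check, treating a sum as a multiset of summands): "2+2+1+1 partially evaluates to 5+1" means the summands can be grouped so that the group sums give the new expression.

```python
from itertools import product

def partially_evaluates(source, target):
    """Can the summands in `source` be grouped so the group sums are `target`?"""
    # Try every assignment of each source summand to one of the target slots.
    for assignment in product(range(len(target)), repeat=len(source)):
        sums = [0] * len(target)
        for value, slot in zip(source, assignment):
            sums[slot] += value
        if sums == list(target):
            return True
    return False
```

So `partially_evaluates([2, 2, 1, 1], [5, 1])` holds, as does the full evaluation to `[6]`, while `[5, 2]` fails (the totals don't even match).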

Background reading:

  1. Tobias Fritz and Paolo Perrone. Monads, partial evaluations, and rewriting.
  2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian.
  3. Todd Trimble. On the bar construction.

Pieter Hofstra

Title: Complexity classes, computation, and Turing categories

Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful to study practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes. This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate the expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?

Background reading:

  1. J.R.B. Cockett and P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp. 183-209, 2008.
  2. J.R.B. Cockett, P.J.W. Hofstra and P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp. 129-146, 2014.
  3. A. Joyal, R. Street and D. Verity. Traced monoidal categories. Mat. Proc. Cam. Phil. Soc. 3, pp. 447-468, 1996.

Bartosz Milewski

Title: Traversal optics and profunctors

Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it. Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics. Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.

Background reading:

  1. Bartosz Milewski. Profunctor optics, categorical view.
  2. Craig Pastro, Ross Street. Doubles for monoidal categories.

Mehrnoosh Sadrzadeh

Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances where people interrupt each other, there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models.

Background reading:

  1. Gerhard Jager (1998): A multi-modal analysis of anaphora and ellipsis. University of Pennsylvania Working Papers in Linguistics 5(2), p. 2.
  2. Matthew Purver, Ronnie Cann, and Ruth Kempson. Grammars as parsers: meeting the dialogue challenge. Research on Language and Computation, 4(2-3):289–326, 2006.

David Spivak

Title: Toward a mathematical foundation for autopoiesis

Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoeisis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

Background reading:

  1. Brendan Fong, David Jaz Myers, David Spivak. Behavioral mereology.
  2. Brendan Fong, David Spivak. Graphical regular logic.
  3. Luhmann. Organization and Decision, CUP. (Preface)

School structure

All of the participants will be divided up into groups corresponding to the projects. A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to discussion in a private online chat forum. There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe.

After all of the papers have been presented, there will be a two-week visit to Oxford University, 15–26 July 2019. The second week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.

The first week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

To apply

To apply please send the following to by January 30th, 2019:

  • Your CV
  • A document with:
    • An explanation of any relevant background you have in category theory or any of the specific projects areas
    • The date you completed or expect to complete your Ph.D and a one-sentence summary of its subject matter.
  • Order of project preference
  • To what extent can you commit to coming to Oxford (availability of funding is uncertain at this time)
  • A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:
    • how can this school contribute to your research goals?
    • how can this school help in your career?

Also have sent on your behalf a brief letter of recommendation confirming any of the following:

  • your background
  • ACT2019 School’s relevance to your research/career
  • your research experience


For more information, contact either

  • Daniel Cicala. cicala (at) math (dot) ucr (dot) edu

  • Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk

by john at January 05, 2019 10:54 PM

January 04, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Christmas break in academia

There was a time when you wouldn’t catch sight of this academic in Ireland over Christmas – I used to head straight for the ski slopes as soon as term ended. But family commitments and research workloads have put paid to that, at least for a while, and I’m not sure it’s such a bad thing. Like many academics, I dislike being away from the books for too long and there is great satisfaction to be had in catching up on all the ‘deep roller’ stuff one never gets to during the teaching semester.


The professor in disguise in former times 

The first task was to get the exam corrections out of the way. This is a job I quite enjoy, unlike most of my peers. I’m always interested to see how the students got on and it’s the only task in academia that usually takes slightly less time than expected. Then it was on to some rather more difficult corrections – putting together revisions to my latest research paper, as suggested by the referee. This is never a quick job, especially as the points raised are all very good and some quite profound. It helps that the paper has been accepted to appear in Volume 8 of the prestigious Einstein Studies series, but this is a task that is taking some time.

Other grown-up stuff includes planning for upcoming research conferences – two abstracts now in the post, let’s see if they’re accepted. I also spent a great deal of the holidays helping to organize an international conference on the history of physics that will be hosted in Ireland in 2020. I have very little experience in such things, so it’s extremely interesting, if time consuming.

So there is a lot to be said for spending Christmas at home, with copious amounts of study time uninterrupted by students or colleagues. An interesting bonus is that a simple walk in the park or by the sea seems a million times more enjoyable after a good morning’s swot.  I’ve never really holidayed well and I think this might be why.


A walk on Dun Laoghaire pier yesterday afternoon

As for New Year’s resolutions, I’ve taken up Ciara Kelly’s challenge of a brisk 30-minute walk every day. I also took up tennis in a big way a few months ago – now there’s a sport that is a million times more practical in this part of the world than skiing.


by cormac at January 04, 2019 08:56 PM

January 03, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Getting the Most Out of Your Solar Hot Water System

Solar panels and hot water systems are great ways to save some serious cash on your energy bills. You make good use of the sun, which means you are maximizing the use of a natural resource rather than relying on artificial ones that can harm the Earth in the long run.

However, a solar panel system should be properly used and maintained to make sure that you are making the most out of it. Many users and owners of such systems do not know how to use them properly, which is a huge waste of energy and money.

Here, we will talk about the things you can do to make sure you get the most out of your solar panels after you are done with your solar PV installation.

Make Use of Boiler Timers and Solar Controllers

Ask your solar panel supplier whether they can provide you with boiler timers and solar controllers. These make sure that the water is only heated by the backup heating source after the sun has heated it as much as it can, which usually means late in the afternoon, once the solar panels are no longer directly exposed to the sun.

You should also see to it that the cylinder has enough cold water for the sun to heat up after you have used all of the hot water. This ensures that you will have hot water for the next day, which is especially important if you use hot water in the morning.

Check the Cylinder and Pipes Insulation

After having the solar panels and hot water system installed on your home, you should see to it that the cylinder and pipes are properly insulated. Failure to do so will result in inadequate hot water, making the system inefficient.

Solar panel systems that do not have insulated cylinders will not heat up your water enough, so make sure to ask the supplier and the people handling the installation about this to make the most out of your system.

Do Not Overfill the Storage


Avoid filling the hot water vessel to the brim, as doing so can make the system inefficient. Aside from not getting the water as hot as you want it, you risk having the system break down sooner than you expect.

Ask the supplier or the people installing the system to fit a twin coil cylinder. This lets the solar collector or thermal store heat its own dedicated coil, separate from the one served by the backup heat source.

In cases where no dedicated solar volume is used, the timing of the backup heating will have a huge impact on the solar hot water system’s performance. This usually applies to installations that did not require the existing cylinder to be replaced.

Knowing how to properly use and maintain your solar hot water system is a huge time and money saver. It definitely would not hurt to ask questions from your solar panel supplier and installer, so make sure to ask them the questions that you have in mind. Enjoy your hot water and make sure to have your system checked every once in a while!

The post Getting the Most Out of Your Solar Hot Water System appeared first on None Equilibrium.

by Bertram Mortensen at January 03, 2019 06:05 PM

December 31, 2018

Jaques Distler - Musings

Python urllib2 and TLS

I was thinking about dropping support for TLSv1.0 in this webserver. All the major browser vendors have announced that they are dropping it from their browsers. And you’d think that since TLSv1.2 has been around for a decade, even very old clients ought to be able to negotiate a TLSv1.2 connection.

But when I checked, imagine my surprise to find that this webserver receives a ton of TLSv1 connections… including from the application that powers Planet Musings. Yikes!

The latter is built around the Universal Feed Parser, which uses the standard Python urllib2 to negotiate the connection. And therein lay the problem …

At least in its default configuration, urllib2 won’t negotiate anything higher than a TLSv1.0 connection. And, sure enough, that’s a problem:

ERROR:planet.runner:Error processing
ERROR:planet.runner:URLError: <urlopen error [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)>
ERROR:planet.runner:Error processing
ERROR:planet.runner:URLError: <urlopen error [Errno 54] Connection reset by peer>
ERROR:planet.runner:Error processing
ERROR:planet.runner:URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:590)>

Even if I’m still supporting TLSv1.0, others have already dropped support for it.

Now, you might find it strange that urllib2 defaults to a TLSv1.0 connection, when it’s certainly capable of negotiating something more secure (whatever OpenSSL supports). But, prior to Python 2.7.9, urllib2 didn’t even check the server’s SSL certificate. Any encryption was bogus (wide open to a MiTM attack). So why bother negotiating a more secure connection?

Switching from the system Python to Python 2.7.15 (installed by Fink) yielded a slew of

ERROR:planet.runner:URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)>

errors. Apparently, no root certificate file was getting loaded.

The solution to both of these problems turned out to be:

--- a/feedparser/
+++ b/feedparser/
@@ -5,13 +5,15 @@ import gzip
 import re
 import struct
 import zlib
+import ssl
+import certifi

 try:
     import urllib.parse
     import urllib.request
 except ImportError:
     from urllib import splithost, splittype, splituser
-    from urllib2 import build_opener, HTTPDigestAuthHandler, HTTPRedirectHandler, HTTPDefaultErrorHandler, Request
+    from urllib2 import build_opener, HTTPSHandler, HTTPDigestAuthHandler, HTTPRedirectHandler, HTTPDefaultErrorHandler, Request
     from urlparse import urlparse

     class urllib(object):
@@ -170,7 +172,9 @@ def get(url, etag=None, modified=None, agent=None, referrer=None, handlers=None,

     # try to open with urllib2 (to use optional headers)
     request = _build_urllib2_request(url, agent, ACCEPT_HEADER, etag, modified, referrer, auth, request_headers)
-    opener = urllib.request.build_opener(*tuple(handlers + [_FeedURLHandler()]))
+    context = ssl.SSLContext(ssl.PROTOCOL_TLS)
+    context.load_verify_locations(cafile=certifi.where())
+    opener = urllib.request.build_opener(*tuple(handlers + [HTTPSHandler(context=context)] + [_FeedURLHandler()]))
     opener.addheaders = [] # RMK - must clear so we only send our custom User-Agent
     f = opener.open(request)
     data = f.read()

Actually, the certifi lines aren’t strictly necessary. As long as you set an ssl.SSLContext(), a suitable set of root certificates gets loaded. But, honestly, I don’t trust the internals of urllib2 to do the right thing anymore, so I want to make sure that a well-curated set of root certificates is used.
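For comparison, on Python 3 the standard library can do all of this on its own; a minimal sketch (stdlib only, no certifi) that mirrors the patch above — `create_default_context()` loads the platform’s root certificates and enables hostname checking by default:

```python
import ssl
import urllib.request

# Build a context that verifies server certificates against the system
# trust store and refuses to negotiate anything older than TLSv1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Hand the context to an HTTPSHandler, as in the feedparser patch.
opener = urllib.request.build_opener(
    urllib.request.HTTPSHandler(context=context)
)
```

Calling `opener.open(url)` then negotiates the best protocol both ends support.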

With these changes, Venus negotiates a TLSv1.3 connection. Yay!

Now, if only everyone else would update their Python scripts …


Update:

This article goes some of the way towards explaining the brokenness of Python’s TLS implementation on MacOSX. But only some of the way …

Update 2:

Another offender turned out to be the very application (MarsEdit 3) that I used to prepare this post. Upgrading to MarsEdit 4 was a bit of a bother. Apple’s App-sandboxing prevented my Markdown+itex2MML text filter from working. One is no longer allowed to use IPC::Open2 to pipe text through the commandline itex2MML. So I had to create a Perl Extension Module for itex2MML. Now there’s a MathML::itex2MML module on CPAN to go along with the Rubygem.

by distler ( at December 31, 2018 06:12 AM

December 29, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Nature Drive: Is an Electric Car Worth Buying?

Electric vehicles (EVs) are seen as the future of the automotive industry. With annual sales projected to reach 30 million by 2030, electric cars are slowly but surely taking over their market. The EV poster boy, the Tesla Model S, is a consistent frontrunner in luxury car sales. However, there are still doubts about the electric car’s environmental benefits.

Established names like General Motors, Audi, and Nissan are all hopping on the electric vehicle wave. Competition has made EVs more attractive to the public, in spite of government threats to cut federal tax credits on electric cars. Fluctuating prices for battery components like graphite may also be a concern. Some US states, like California and New York, plan to ban the sale of cars with internal combustion engines by 2050. Should you take the leap and go fully electric?


The Tesla Model S starts at $75,700 and the SUV Model X at $79,500. There are many affordable options for your budget. The 2018 Ford Focus Electric, Hyundai Ioniq Electric, and Nissan Leaf start well under $30,000. Tesla even has the $35,000 Model 3, for those who want to experience the brand’s offerings for a lower price.

The Chevrolet Bolt EV ($36,620) is also a favorite among those who want to make use of the $7,500 tax credit, which brings the Bolt EV’s effective price down to $29,120, comfortably in the sub-$30,000 range.

EVs still cost more than their gasoline-powered counterparts up front. The regular 2018 Ford Focus starts at $18,825, about $10,000 cheaper than its electric sibling. Even if this is the case, electric cars still cost less to fuel.

Charging Options


EV charging has three levels:

  • Level one uses your wall outlet to charge. Most electric cars come with level 1 chargers that you can plug into the nearest socket. This is the slowest way to charge your EV. You’ll have to leave it charging overnight to top it up.
  • Level two is what you would commonly find on public charging stations. It’s faster than a level 1 charger, taking about three to eight hours to recharge. You can also have a level 2 charger installed in your home with a permit and the help of an electrician.
  • Level three or DC Fast Charge (DCFC) stations are usually found in public as well. DCFCs can fully charge a vehicle in the span of 20 minutes to one hour.

There are currently 23,809 electric vehicle charging stations in the USA and Canada. Some argue that this amount is meager compared to the 168,000 gas stations in the same area. Loren McDonald from CleanTechnica says this isn’t really a problem since electric vehicles still take up less than 0.29% of the automobiles in the US.

McDonald also argued that most of the charging would be done at home. There are still continuous efforts to build charging stations to suit the needs of electric car users across the country.

The Bumpy Road Ahead

Despite its promise of a greener drive for everyone, electric cars have received their fair share of scrutiny from environmentalists, as well. The Fraunhofer Institute of Building Physics stated that the energy needed to make an electric vehicle is more than double what it takes to make a conventional one because of its battery.

The International Council on Clean Transportation, however, says that battery manufacturing emissions may be similar to the ones from internal combustion engine manufacturing. The only difference is that electric cars don’t produce as much greenhouse gases as conventional ones do in the long run. The ICCT also says that with efforts to reduce the use of carbon in power sources, emissions from battery manufacturing will decrease by around 17%.

Electric vehicles are becoming more accessible. Manufacturers are creating electric versions of existing models. They’re even sponsoring electric charging stations around the country. With moves to use cleaner energy in manufacturing, it only makes sense to switch. You can do your part now and get yourself an EV with more affordable options available.

It also makes sense to wait for more competition to drive prices down if you don’t have the cash now. Either way, it’s not a matter of “if” but “when” you’ll switch to an EV for the greater good.

The post Nature Drive: Is an Electric Car Worth Buying? appeared first on None Equilibrium.

by Nonequilibrium at December 29, 2018 01:00 AM

December 27, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Fluid Chromatography: How It Prevents Drug Contamination

Many Americans take medications on a daily basis. A survey by Consumer Reports reveals that more than half of the population take a prescription drug. Among this group, a good number take more than three. Others take prescribed medications along with over-the-counter medicines, vitamins, and other supplements.

As Americans get older and more drugs become available to manage or cure diseases, this percentage is likely to increase. This trend brings up an underrated medicinal concern: how likely is drug contamination?

How Common Is Drug Contamination?

Drug contamination incidents do not happen all the time; in fact, they tend to be rare, because companies implement strict process regulations and quality control. On the downside, when contamination does occur, the implications are severe.

This problem can happen in different ways. One of these is through tampering. People can recall the Chicago Tylenol murders that occurred in 1982. During this time, seven people died after ingesting acetaminophen laced with potassium cyanide.

It can also happen at the manufacturing level. A good example is the 2009 contamination of a Sanofi Genzyme plant in Massachusetts. The manufacturers detected a viral presence in one of the bioreactors used to produce Cerezyme, a drug used to treat type 1 Gaucher disease. Although the virus could not infect humans, it impaired the cells’ viability.

Due to this incident, the company had to write off close to $30 million worth of products and lose over $300 million in revenue. Since it’s also one of the few manufacturers of medication for this disease, the shutdown led to a supply shortage for more than a year.

Using Fluid Chromatography as a Solution

To ensure the viability and safety of medications, specialist companies provide supercritical and high-performance fluid chromatography.

Chromatography is the process of breaking down a product, such as a drug, into its components. In fluid or liquid chromatography, dissolved molecules or ions separate depending on how they interact with the mobile and the stationary phases.

In turn, the pharmaceutical analysts can determine the level of purity of the drug as well as the presence of minute traces of contaminants. Besides quality control, the technique can enable scientists to find substances that can be helpful in future research.

The level of accuracy of this test is already high, thanks to the developments of the equipment. Still, it’s possible they cannot detect all types of compounds due to factors such as thermal instability.

Newer or modern types of machines can have programmable settings. These allow the users to set the temperatures to extremely high levels or subzero conditions. They can also have features that will enable users to replicate the test many times at the same level of consistency.

It takes more than chromatography to prevent the contamination of medications, especially during manufacturing. It also requires high-quality control standards and strict compliance with industry protocols. Chromatography, though, is one useful way to safeguard the health of drug consumers in the country.

The post Fluid Chromatography: How It Prevents Drug Contamination appeared first on None Equilibrium.

by Nonequilibrium at December 27, 2018 07:27 PM

December 25, 2018

The n-Category Cafe

Category Theory 2019

As announced here previously, the major annual category theory meeting is taking place next year in Edinburgh, on 7–13 July. And after a week in the city of Arthur Conan Doyle, James Clerk Maxwell, Dolly the Sheep and the Higgs Boson, you can head off to Oxford for Applied Category Theory 2019.

We’re now pleased to advertise our preliminary list of invited speakers, together with key dates for others who’d like to give talks.

The preliminary Invited Speakers include three of your Café hosts, and are as follows:

  • John Baez (Riverside)
  • Neil Ghani (Strathclyde)
  • Marco Grandis (Genoa)
  • Simona Paoli (Leicester)
  • Emily Riehl (Johns Hopkins)
  • Mike Shulman (San Diego)
  • Manuela Sobral (Coimbra)

Further invited speakers are to be confirmed.

Contributed talks   We are offering an early round of submissions and decisions to allow for those who need an early decision (e.g. for funding purposes) or want preliminary feedback for a possible resubmission. The timetable is as follows:

  • Early submission opens: January 1
  • Early submission deadline: March 1
  • Early decision notifications: April 1

For those who don’t need an early decision:

  • Submission opens: March 1
  • Submission deadline: May 1
  • Notifications: June 1

Submission for CT2019 is handled by EasyChair through the link

In order to submit, you will need to make an EasyChair account, which is a simple process. Submissions should be in the form of a brief (one page) abstract.

Registration is independent of abstract submission and will be open at a later date.

by leinster ( at December 25, 2018 05:12 PM

The n-Category Cafe

HoTT 2019

The first International Conference on Homotopy Type Theory, HoTT 2019, will take place from August 12th to 17th, 2019 at Carnegie Mellon University in Pittsburgh, USA. Here is the organizers’ announcement:

The invited speakers will be:

  • Ulrik Buchholtz (TU Darmstadt, Germany)
  • Dan Licata (Wesleyan University, USA)
  • Andrew Pitts (University of Cambridge, UK)
  • Emily Riehl (Johns Hopkins University, USA)
  • Christian Sattler (University of Gothenburg, Sweden)
  • Karol Szumilo (University of Leeds, UK)

Submissions of contributed talks will open in January and conclude in March; registration will open sometime in the spring.

There will also be an associated Homotopy Type Theory Summer School in the preceding week, August 7th to 10th.

The topics and instructors are:

  • Cubical methods: Anders Mortberg
  • Formalization in Agda: Guillaume Brunerie
  • Formalization in Coq: Kristina Sojakova
  • Higher topos theory: Mathieu Anel
  • Semantics of type theory: Jonas Frey
  • Synthetic homotopy theory: Egbert Rijke

We expect some funding to be available for students to attend the summer school and conference.

Looking forward to seeing you in Pittsburgh!

The scientific committee consists of:

  • Steven Awodey
  • Andrej Bauer
  • Thierry Coquand
  • Nicola Gambino
  • Peter LeFanu Lumsdaine
  • Michael Shulman, chair

by john ( at December 25, 2018 06:33 AM

The n-Category Cafe

Monads and Lawvere Theories

guest post by Jade Master

I have a question about the relationship between Lawvere theories and monads.

Every morphism of Lawvere theories $f \colon T \to T'$ induces a morphism of monads $M_f \colon M_T \Rightarrow M_{T'}$ which can be calculated by using the universal property of the coend formula for $M_T$. (This can be found in Hyland and Power’s paper Lawvere theories and monads.)
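For reference, the coend formula in question expresses the monad associated to a Lawvere theory $T$ as (following the usual convention, up to notation):

```latex
% Monad induced by a Lawvere theory T on Set:
% a coend over the natural numbers n (the objects of T),
% weighting the hom-sets T(n, 1) of n-ary operations by n-th powers of X.
M_T(X) \;=\; \int^{n \in \mathbb{N}} T(n, 1) \times X^n
```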

On the other hand $f \colon T \to T'$ gives a functor $f^\ast \colon Mod(T') \to Mod(T)$ given by precomposition with $f$. Because everything is nice enough, $f^\ast$ always has a left adjoint $f_\ast \colon Mod(T) \to Mod(T')$. (Details of this can be found in Toposes, Triples and Theories.)

My question is the following:

What relationship is there between the left adjoint $f_\ast \colon Mod(T) \to Mod(T')$ and the morphism of monads computed using coends, $M_f \colon M_T \Rightarrow M_{T'}$?

In the examples I can think of, the components of $M_f$ are given by the unit of the adjunction between $f^\ast$ and $f_\ast$, but I cannot find a reference explaining this. It doesn’t seem to be in Toposes, Triples, and Theories.

by john ( at December 25, 2018 06:21 AM

December 22, 2018

Alexey Petrov - Symmetry factor

David vs. Goliath: What a tiny electron can tell us about the structure of the universe
An artist’s impression of electrons orbiting the nucleus.
Roman Sigaev/

Alexey Petrov, Wayne State University

What is the shape of an electron? If you recall pictures from your high school science books, the answer seems quite clear: an electron is a small ball of negative charge that is smaller than an atom. This, however, is quite far from the truth.

A simple model of an atom, with a nucleus made of protons, which have a positive charge, and neutrons, which are neutral. The electrons, which have a negative charge, orbit the nucleus.
Vector FX /

The electron is commonly known as one of the main components of atoms making up the world around us. It is the electrons surrounding the nucleus of every atom that determine how chemical reactions proceed. Their uses in industry are abundant: from electronics and welding to imaging and advanced particle accelerators. Recently, however, a physics experiment called Advanced Cold Molecule Electron EDM (ACME) put an electron on the center stage of scientific inquiry. The question that the ACME collaboration tried to address was deceptively simple: What is the shape of an electron?

Classical and quantum shapes?

As far as physicists currently know, electrons have no internal structure – and thus no shape in the classical meaning of this word. In the modern language of particle physics, which tackles the behavior of objects smaller than an atomic nucleus, the fundamental blocks of matter are continuous fluid-like substances known as “quantum fields” that permeate the whole space around us. In this language, an electron is perceived as a quantum, or a particle, of the “electron field.” Knowing this, does it even make sense to talk about an electron’s shape if we cannot see it directly in a microscope – or any other optical device for that matter?

To answer this question we must adapt our definition of shape so it can be used at incredibly small distances, or in other words, in the realm of quantum physics. Seeing different shapes in our macroscopic world really means detecting, with our eyes, the rays of light bouncing off different objects around us.

Simply put, we define shapes by seeing how objects react when we shine light onto them. While this might be a weird way to think about the shapes, it becomes very useful in the subatomic world of quantum particles. It gives us a way to define an electron’s properties such that they mimic how we describe shapes in the classical world.

What replaces the concept of shape in the micro world? Since light is nothing but a combination of oscillating electric and magnetic fields, it would be useful to define quantum properties of an electron that carry information about how it responds to applied electric and magnetic fields. Let’s do that.

This is the apparatus the physicists used to perform the ACME experiment.
Harvard Department of Physics, CC BY-NC-SA

Electrons in electric and magnetic fields

As an example, consider the simplest property of an electron: its electric charge. It describes the force – and ultimately, the acceleration the electron would experience – if placed in some external electric field. A similar reaction would be expected from a negatively charged marble – hence the “charged ball” analogy of an electron that is in elementary physics books. This property of an electron – its charge – survives in the quantum world.

Likewise, another “surviving” property of an electron is called the magnetic dipole moment. It tells us how an electron would react to a magnetic field. In this respect, an electron behaves just like a tiny bar magnet, trying to orient itself along the direction of the magnetic field. While it is important to remember not to take those analogies too far, they do help us see why physicists are interested in measuring those quantum properties as accurately as possible.

What quantum property describes the electron’s shape? There are, in fact, several of them. The simplest – and the most useful for physicists – is the one called the electric dipole moment, or EDM.

In classical physics, EDM arises when there is a spatial separation of charges. An electrically charged sphere, which has no separation of charges, has an EDM of zero. But imagine a dumbbell whose weights are oppositely charged, with one side positive and the other negative. In the macroscopic world, this dumbbell would have a non-zero electric dipole moment. If the shape of an object reflects the distribution of its electric charge, it would also imply that the object’s shape would have to be different from spherical. Thus, naively, the EDM would quantify the “dumbbellness” of a macroscopic object.
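As a reminder (standard classical electrostatics, not specific to the ACME analysis), the electric dipole moment of a collection of charges is:

```latex
% Electric dipole moment of point charges q_i at positions r_i:
\vec{d} \;=\; \sum_i q_i \, \vec{r}_i
% Dumbbell example: charges +q and -q separated by a distance L give
% |d| = qL, while a uniformly charged sphere gives d = 0.
```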

Electric dipole moment in the quantum world

The story of EDM, however, is very different in the quantum world. There the vacuum around an electron is not empty and still. Rather, it is populated by various subatomic particles zapping in and out of virtual existence for short periods of time.

The Standard Model of particle physics has correctly predicted all of these particles. If the ACME experiment discovered that the electron had an EDM, it would suggest there were other particles that had not yet been discovered.

These virtual particles form a “cloud” around an electron. If we shine light onto the electron, some of the light could bounce off the virtual particles in the cloud instead of the electron itself.

This would change the numerical values of the electron’s charge and magnetic and electric dipole moments. Performing very accurate measurements of those quantum properties would tell us how these elusive virtual particles behave when they interact with the electron and if they alter the electron’s EDM.

Most intriguing, among those virtual particles there could be new, unknown species of particles that we have not yet encountered. To see their effect on the electron’s electric dipole moment, we need to compare the result of the measurement to theoretical predictions of the size of the EDM calculated in the currently accepted theory of the Universe, the Standard Model.

So far, the Standard Model accurately described all laboratory measurements that have ever been performed. Yet, it is unable to address many of the most fundamental questions, such as why matter dominates over antimatter throughout the universe. The Standard Model makes a prediction for the electron’s EDM too: it requires it to be so small that ACME would have had no chance of measuring it. But what would have happened if ACME actually detected a non-zero value for the electric dipole moment of the electron?

View of the Large Hadron Collider in its tunnel near Geneva, Switzerland. In the LHC two counter-rotating beams of protons are accelerated and forced to collide, generating various particles.
AP Photo/KEYSTONE/Martial Trezzini

Patching the holes in the Standard Model

Theoretical models have been proposed that fix shortcomings of the Standard Model, predicting the existence of new heavy particles. These models may fill in the gaps in our understanding of the universe. To verify such models we need to prove the existence of those new heavy particles. This could be done through large experiments, such as those at the international Large Hadron Collider (LHC) by directly producing new particles in high-energy collisions.

Alternatively, we could see how those new particles alter the charge distribution in the “cloud” and thereby the electron’s EDM. Thus, an unambiguous observation of the electron’s dipole moment would prove that new particles are in fact present. That was the goal of the ACME experiment.

This is the reason why a recent article in Nature about the electron caught my attention. Theorists like myself use the results of measurements of the electron’s EDM – along with measurements of the properties of other elementary particles – to help identify the new particles and make predictions of how they can be better studied. This is done to clarify the role of such particles in our current understanding of the universe.

What should be done to measure the electric dipole moment? We need to find a source of very strong electric field to test an electron’s reaction. One possible source of such fields can be found inside molecules such as thorium monoxide. This is the molecule that ACME used in their experiment. Shining carefully tuned lasers at these molecules, a reading of an electron’s electric dipole moment could be obtained, provided it is not too small.

However, as it turned out, it is. Physicists of the ACME collaboration did not observe the electric dipole moment of an electron – which suggests that its value is too small for their experimental apparatus to detect. This fact has important implications for our understanding of what we could expect from the Large Hadron Collider experiments in the future.

Interestingly, the fact that the ACME collaboration did not observe an EDM actually rules out the existence of heavy new particles that could have been easiest to detect at the LHC. This is a remarkable result for a tabletop-sized experiment that affects both how we would plan direct searches for new particles at the giant Large Hadron Collider, and how we construct theories that describe nature. It is quite amazing that studying something as small as an electron could tell us a lot about the universe.

A short animation describing the physics behind EDM and ACME collaboration’s findings.

Alexey Petrov, Professor of Physics, Wayne State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

by apetrov at December 22, 2018 08:51 PM

December 21, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Chiropractic Marketing: 5 Ways to Earn More Online Leads

Every business requires a steady stream of clients to succeed. The world now does everything online, from taking classes and shopping to finding services. This shift has seen the rise of digital marketing to boost sales by targeting online customers.

While you can find customers by offering services, such as free spinal cord exams to local clients, online patients will remain unreached. As a result, digital marketing is crucial. If you have not started marketing digitally, here are five strategies to generate more leads online.

1. Website Design

For you to reach potential clients and create converting leads, you ought to have a platform on which to do it. Websites are, therefore, essential in digital marketing for chiropractors in Gilbert, Arizona.

Your website is your main marketing tool in reaching the online audience. It gives you an online presence. And, it will only make sense to invest in a good website — one that is visually appealing while at the same time functions well.

The website design should be suitable enough to attract your target audience and make you stand out from your competitors.

2. Search Engine Optimization (SEO)

SEO is an effective marketing strategy that helps clients to find you by searching on Google. One of the significant factors in conducting SEO is the use of relevant keywords that patients use when entering queries on search engines.

Creating appropriate content will also go a long way toward earning you a favorable ranking. You don’t have to limit your topics to your products and services. Talk about trends in your industry, too, or anything else relevant to your target audience.

3. Social Media

Social media platforms are a good place to start building your online presence. People are always seeking recommendations, and social media offers an opportunity to get referrals. Some people even get the information they need from social media now, not search engines.

Brand your profiles in a similar way to boost recognition and share content regularly. Add social media buttons to your website to encourage people to share. Another tip is to create a social media personality for your brand that is approachable and fun, so people can trust your business.

4. Online Directories


It is not enough to have a website and social media profiles. You need to be listed in online directories as well. Remember that it is in your best interest to cover as much digital ground as possible.

Popular online directories such as Google, Yahoo!, and Yelp list businesses. These listings enable people to search for businesses by location and to read reviews left by other clients. Ensure that all your information is filled in accurately, and work toward getting many positive reviews.

5. Mobile-Friendly Site

Most people use mobile devices to access the internet, so make your website mobile-friendly for easy viewing and navigation. Google also ranks mobile-friendly websites higher.

By optimizing your website using SEO and social media among other strategies, you can earn more online leads and boost the success of your business.

The post Chiropractic Marketing: 5 Ways to Earn More Online Leads appeared first on None Equilibrium.

by Bertram Mortensen at December 21, 2018 02:15 PM