# Particle Physics Planet

## November 21, 2018

### Christian P. Robert - xi'an's og

Le Monde puzzle [#1075]

A Le Monde mathematical puzzle from after the competition:

A sequence of five integers can only be modified by subtracting an integer N from two neighbours of an entry and adding 2N to the entry.  Given the configuration below, what is the minimal number of steps to reach non-negative entries everywhere? Is this feasible for any configuration?

As I quickly found a solution by hand in four steps (but missed the mathematical principle behind it!), I was not very enthusiastic about trying a simulated annealing version, selecting the entry to modify with probability inversely proportional to its value, but I eventually tried it and obtained the same solution:

      [,1] [,2] [,3] [,4] [,5]
        -3    1    1    1    1
         1   -1    1    1   -1
         0    1    0    1   -1
        -1    1    0    0    1
         1    0    0    0    0


But (update!) Jean-Louis Fouley came up with one step less!

      [,1] [,2] [,3] [,4] [,5]
        -3    1    1    1    1
         3   -2    1    1   -2
         2    0    0    1   -2
         1    0    0    0    0


The second part of the question is more interesting, but again without a clear mathematical lead, I could only attempt a large number of configurations and check whether all admitted “solutions”. So far none failed.
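Lacking the mathematical principle, the check can at least be automated. Below is a minimal breadth-first search sketch in Python; it assumes the five entries sit on a ring (positions 1 and 5 being neighbours, which is consistent with the published solutions) and bounds the integer N tried at each move, so it is an experiment rather than a proof of minimality over all integers:

```python
from collections import deque

def min_steps(config, max_n=3, max_depth=5):
    """Breadth-first search for the minimal number of moves making every
    entry non-negative. A move picks a position i and a nonzero integer N,
    adds 2N to entry i and subtracts N from its two neighbours (the
    entries are assumed to sit on a ring). max_n bounds the |N| tried."""
    start = tuple(config)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, depth = queue.popleft()
        if all(v >= 0 for v in state):
            return depth
        if depth == max_depth:
            continue
        k = len(state)
        for i in range(k):
            for n in range(-max_n, max_n + 1):
                if n == 0:
                    continue
                nxt = list(state)
                nxt[i] += 2 * n
                nxt[(i - 1) % k] -= n
                nxt[(i + 1) % k] -= n
                t = tuple(nxt)
                if t not in seen:
                    seen.add(t)
                    queue.append((t, depth + 1))
    return None

print(min_steps([-3, 1, 1, 1, 1]))  # prints 3: the three-step solution is minimal
```

The same function can be fed random starting configurations to probe the second question within the stated bounds.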

### Emily Lakdawalla - The Planetary Society Blog

This Thanksgiving, avoid the politics and talk space instead
If you're expecting to gather with extended family on Thanksgiving, avoid the politics. Here are some conversation starters to use at the dinner table that everyone can engage in.

### Peter Coles - In the Dark

50 Years of the Cosmic Web

I’ve just given a lecture on cosmology during which I showed a version of this amazing image:

The picture was created in 1977 by Seldner et al. based on the galaxy counts prepared by Charles Donald Shane and Carl Alvar Wirtanen and published in 1967 (Publ. Lick Observatory 22, Part 1). There are no stars in the picture: it shows the distribution of galaxies in the Northern Galactic sky. The very dense knot of galaxies seen in the centre of the image is the Coma Cluster, which lies very close to the Galactic North pole. The overall impression is of a frothy pattern, which we now know as the Cosmic Web. I don’t think it is an unreasonable claim that the Lick galaxy catalogue provided the first convincing evidence of the morphology of the large-scale structure of the Universe.

The original Shane-Wirtanen Lick galaxy catalogue lists counts of galaxies in 1 by 1 degree blocks, but the actual counts were made in 10 by 10 arcmin cells. The later visualization is based on a reduction of the raw counts to obtain a catalogue with the original 10 by 10 arcmin resolution. The map above, based on the corrected counts, shows the angular distribution of over 800,000 galaxies brighter than a B magnitude of approximately 19.
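The relation between the two resolutions can be illustrated with a toy aggregation (synthetic numbers, not the actual Lick data): summing 6 × 6 groups of 10-arcmin cells gives the 1-degree block counts, and the reduction mentioned above goes the other way.

```python
import random

random.seed(0)
CELLS = 60  # a 10 x 10 degree patch at 10-arcmin resolution
fine = [[random.randint(0, 9) for _ in range(CELLS)] for _ in range(CELLS)]

def to_blocks(grid, factor=6):
    """Sum factor x factor groups of fine cells into coarse blocks."""
    n = len(grid) // factor
    return [[sum(grid[factor * i + di][factor * j + dj]
                 for di in range(factor) for dj in range(factor))
             for j in range(n)] for i in range(n)]

blocks = to_blocks(fine)  # 10 x 10 one-degree blocks
assert sum(map(sum, blocks)) == sum(map(sum, fine))  # totals preserved
```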

The distribution of galaxies is shown only in projection on the sky, and we are now able to probe the distribution in the radial direction with large-scale galaxy redshift surveys in order to obtain three-dimensional maps, but counting so many galaxy images by eye on photographic plates was a Herculean task that took many years to complete. Without such heroic endeavours in the past, our field would not have progressed anything like as quickly as it has.

I’m sorry I missed the 50th anniversary of the publication of the Lick catalogue, and Messrs Shane and Wirtanen both passed away some years ago, but at last I can doff my cap in their direction and acknowledge their immense contribution to cosmological research!

### Lubos Motl - string vacua and pheno

Swampland refinement of higher-spin no-go theorems
Dieter Lüst and two co-authors from Monkberg (Munich) managed to post the first hep-th paper today at 19:00:02 (a two-second lag is longer than usual, the timing contest wasn't too competitive):
A Spin-2 Conjecture on the Swampland
They articulate an interesting conjecture about the spin-two fields in quantum gravity – a conjecture of the Swampland type that is rather close to the Weak Gravity Conjecture and, in fact, may be derived from the Weak Gravity Conjecture under a mild additional assumption.

In particular, they claim that whenever there are particles whose spin is two or higher, they have to be massive and there has to be a whole tower of massive states. More precisely, if there is a spin-two particle of mass $$m$$ in quantum gravity which is self-interacting, the strength of the interaction may be parameterized by a new mass scale $$M_W$$ and the effective field theory has to break down at the mass scale $$\Lambda$$ where $$\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}.$$ You see that the Planck scale enters. The breakdown scale $$\Lambda$$ of the effective theory is basically the lowest mass of the next-to-lightest state in the predicted massive tower.

So if the self-interaction scale of the massive field is $$M_W\approx M_{\rm Planck}$$, then we get $$\Lambda\approx m$$ and all the lighter states in the tower are parametrically "comparably light" to the lightest spin-two boson. However, you can try to make the self-interaction stronger, by making $$M_W$$ smaller than the Planck scale, and then the tower may become more massive than its lightest representative.
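As a trivial numerical sketch of the conjectured relation (the Planck-mass value and the sample scales below are mine, purely for orientation):

```python
M_PLANCK = 1.22e19  # approximate Planck mass in GeV

def breakdown_scale(m, m_w):
    """EFT breakdown scale per the conjecture: Lambda = m * M_Planck / M_W."""
    return m * M_PLANCK / m_w

# Planck-strength self-interaction (M_W ~ M_Planck): the tower starts
# essentially at the spin-two mass itself, Lambda ~ m.
assert abs(breakdown_scale(100.0, M_PLANCK) - 100.0) < 1e-6

# A stronger self-interaction (smaller M_W) pushes the breakdown scale up.
assert breakdown_scale(100.0, 1.0e16) > breakdown_scale(100.0, M_PLANCK)
```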

They may derive the conjecture from the Weak Gravity Conjecture if they rewrite the self-interaction of the spin-two field through an interaction with a "gauge field" which is treated analogously to the electromagnetic gauge field in the Weak Gravity Conjecture – although it is the Stückelberg gauge field. It's not quite obvious to me that the Weak Gravity Conjecture must apply to gauge fields that are "unnecessary" or "auxiliary" in this sense but maybe there's a general rule saying that general principles such as the Weak Gravity Conjecture have to apply even in such "optional" cases.

I think that these conjectures – and the evidence and partial proofs backing them – represent clear progress of our knowledge beyond effective field theory. You know, in quantum field theory, we have theorems such as the Weinberg-Witten theorem. That particular one says that higher-spin particles can't be composite, and similar things. But that's only true in full-blown quantum field theories. Quantum gravity isn't strictly a quantum field theory (in the bulk). When you add gravity, things get generalized in a certain way. And things that were possible or impossible without gravity may become impossible or possible with quantum gravity.

Some "impossible scenarios" from QFTs may suddenly be allowed – but one pays with the need to allow an infinite tower of states and similar things. Note that if you look at $$\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}$$ and send $$M_{\rm Planck}\to \infty$$, i.e. if you turn gravity off, the Bavarian conjecture says that $$\Lambda\to\infty$$, too. So it becomes vacuous, because it says that the effective theory "must break" at energy scales higher than infinity. Needless to say, the same positive power of the Planck mass appears in the original Weak Gravity Conjecture, too. That conjecture also becomes vacuous if you turn gravity off.

When quantum gravity is turned on, there are new interactions, new states (surely the black hole microstates), and new mandatory interactions of these states. These new states and duties guarantee that theories where you would only add some fields or particles "insensitively" would be inconsistent. People are increasingly understanding what the "new stuff" is that simply has to happen in quantum gravity. And this new mandatory stuff may be understood either by some general consistency-based considerations assuming quantum gravity; or by looking at much more specific situations in the stringy vacua. Like in most of the good Swampland papers, Lüst et al. try to do both.

So far these two lines of reasoning are consistent with one another. They are increasingly compatible and increasingly equivalent – after all, string theory seems to be the only consistent theory of quantum gravity although we don't have any "totally canonical and complete" proof of this uniqueness (yet). The Swampland conjectures may be interpreted as another major direction of research that makes this point – that string theory is the only game in town – increasingly certain.

### Peter Coles - In the Dark

Sonnet No. 87

Farewell! thou art too dear for my possessing,
And like enough thou knowst thy estimate.
The Charter of thy worth gives thee releasing;
My bonds in thee are all determinate.
For how do I hold thee but by thy granting,
And for that riches where is my deserving?
The cause of this fair gift in me is wanting,
And so my patent back again is swerving.
Thy self thou gav’st, thy own worth then not knowing,
Or me, to whom thou gav’st it, else mistaking,
So thy great gift, upon misprision growing,
Comes home again, on better judgement making.
Thus have I had thee as a dream doth flatter:
In sleep a king, but waking no such matter.

### Emily Lakdawalla - The Planetary Society Blog

Here's where China is looking to land its 2020 Mars rover
NASA's Mars 2020 rover isn't the only spacecraft heading to Mars in two years.

## November 20, 2018

### Peter Coles - In the Dark

Open Journal Promotion?

Back in Maynooth after my weekend in Cardiff, I was up early this morning to prepare today’s teaching and related matters and I’m now pretty exhausted so I thought I’d just do a quick update about my pet project The Open Journal of Astrophysics.

I’ve been regularly boring all my readers with a stream of stuff about the Open Journal of Astrophysics, but if it’s all new to you, try reading the short post about the background to the Open Journal project that you can find here.

Since the re-launch of the journal last month we’ve had a reasonable number of papers submitted. I’m glad there wasn’t a huge influx, actually, because the Editorial Board is as yet unfamiliar with the system and requires a manageable training set. The papers we have received are working their way through the peer-review system and we’ll see what transpires.

Obviously we’re hoping to increase the number of submissions with time (in a manageable way). As it happens, I have some (modest) funds available to promote the OJA as I think quite a large number of members of the astrophysics community haven’t heard of it. This also makes it a little difficult to enlist referees.

So here I have a small request. Do any of you have any ideas for promoting The Open Journal of Astrophysics? We could advertise directly in journals, of course, but I’m wondering if anyone out there on the interwebs has any more imaginative ideas? If you do, please let me know through the comments box below.

### Emily Lakdawalla - The Planetary Society Blog

We're going to Jezero!
NASA announced this morning the selection of Jezero crater for the landing site of the Mars 2020 mission. Jezero is a 45-kilometer-wide crater that once held a lake, and now holds a spectacular ancient river delta.

### Christian P. Robert - xi'an's og

irreversible Markov chains

Werner Krauth (ENS, Paris) was in Dauphine today to present his papers on irreversible Markov chains at the probability seminar. He went back to the 1953 Metropolis et al. paper. And he mentioned a 1962 paper I had never heard of, by Alder and Wainwright, demonstrating via simulation that a phase transition can occur. The whole talk was about simulating the stationary distribution of a large number of hard spheres on a one-dimensional ring, which made it hard for me to follow. (Maybe the triathlon before did not help.) And even to realise a part was about PDMPs… His slides included an interesting entry on factorised MCMC, which reminded me of delayed acceptance, thinning and prefetching. Plus a notion of lifted Metropolis that could have applications in a general setting, if it differs from delayed rejection.
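For readers unfamiliar with lifting, here is a generic toy version in Python – not Krauth's hard-sphere algorithm, just the standard lifted Metropolis construction on a line of states, where a persistent direction variable is reversed only upon rejection, producing an irreversible chain that still targets the right distribution:

```python
import random

def lifted_metropolis(pi, n_states, n_steps, seed=0):
    """Lifted (non-reversible) Metropolis on states 0..n_states-1.

    A direction variable eps in {+1, -1} replaces the symmetric proposal:
    the chain always proposes the neighbour in direction eps and flips
    eps only when the move is rejected (or hits a boundary). This is a
    textbook lifting; pi need not be normalised."""
    rng = random.Random(seed)
    x, eps = 0, 1
    counts = [0] * n_states
    for _ in range(n_steps):
        y = x + eps
        if 0 <= y < n_states and rng.random() < min(1.0, pi[y] / pi[x]):
            x = y          # accepted: keep moving in the same direction
        else:
            eps = -eps     # rejected: reverse direction, stay put
        counts[x] += 1
    return [c / n_steps for c in counts]

# Uniform target: the empirical frequencies should approach 1/n.
freqs = lifted_metropolis([1.0] * 5, 5, 200_000)
```

With a uniform target the lifted chain sweeps ballistically back and forth instead of diffusing, which is exactly the mixing advantage the lifting is meant to buy.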

## November 19, 2018

### Peter Coles - In the Dark

Autumn Nights

I stumbled across this abstract painting (acrylic on canvas) by the artist Victoria Kloch and thought I’d share it this autumn night. Do check out her website. There’s lots more interesting stuff on it!

‘Autumn Night’ 5″x 7″ acrylic abstract on canvas by Victoria Kloch

View original post

### Peter Coles - In the Dark

Hip Replacement

From this month’s Oldie..

### Emily Lakdawalla - The Planetary Society Blog

NASA's Orion spacecraft makes progress, but are the agency's lunar plans on track?
Orion's service module arrived in Florida, but some space industry experts question whether NASA's human spaceflight plans are realistic.

## November 18, 2018

### The n-Category Cafe

Modal Types Revisited

We’ve discussed the prospects for adding modalities to type theory for many a year, e.g., here at the Café back at Modal Types, and frequently at the nLab. So now I’ve written up some thoughts on what philosophy might make of modal types in this preprint. My debt to the people who helped work out these ideas will be acknowledged when I publish the book.

This is to be the fourth chapter of a book which provides reasons for philosophy to embrace modal homotopy type theory. The book takes in order the components: types, dependency, homotopy, and finally modality.

The chapter ends all too briefly with mention of Mike Shulman et al.’s project, which he described in his post – What Is an n-Theory?. I’m convinced this is the way to go.

PS. I already know of the typo on line 8 of page 4.

## November 16, 2018

### Clifford V. Johnson - Asymptotia

Stan Lee’s Contributions to Science!!

I'm late to the party. Yes, I use the word party, because the outpouring of commentary noting the passing of Stan Lee has been, rightly, marked with a sense of celebration of his contributions to our culture. Celebration of a life full of activity. In the spirit of a few of the "what were you doing when you heard..." stories I've heard, involving nice coincidences and ironies, I've got one of my own. I'm not exactly sure when I heard the announcement on Monday, but I noticed today that it was also on Monday that I got an email giving me some news* about the piece I wrote about the Black Panther earlier this year for the publication The Conversation. The piece is about the (then) pending big splash the movie about the character (co-created by Stan Lee in the 60s) was about to make in the larger culture, the reasons for that, and why it was also a tremendous opportunity for science. For science? Yes, because, as I said there:

Vast audiences will see black heroes of both genders using their scientific ability to solve problems and make their way in the world, at an unrivaled level.

and

Improving science education for all is a core endeavor in a nation’s competitiveness and overall health, but outcomes are limited if people aren’t inspired to take an interest in science in the first place. There simply are not enough images of black scientists – male or female – in our media and entertainment to help inspire. Many people from underrepresented groups end up genuinely believing that scientific investigation is not a career path open to them.

Moreover, many people still see the dedication and study needed to excel in science as “nerdy.” A cultural injection of Black Panther heroics could help continue to erode the crumbling tropes that science is only for white men or reserved for people with a special “science gene.”

And here we are many months later, and I was delighted to see that people did get a massive dose of science inspiration from T'Challa and his sister Shuri, and the whole of the Wakanda nation, not just in Black Panther, but also in the Avengers: Infinity War movie a short while after.

But my larger point here is that so much of this goes back to Stan Lee's work with collaborators in not just making "relatable" superheroes, as you've heard said so many times – showing their flawed human side so much more than the dominant superhero trope (represented by Superman, Wonder Woman, Batman, etc.) allowed for at the time – but making science and scientists be at the forefront of much of it. So many of the characters either were scientists (Banner (Hulk), Richards (Mr. Fantastic), T'Challa (Black Panther), Pym (Ant-Man), Stark (Iron Man), etc.) or used science actively to solve problems (e.g. Parker/Spider-Man).

This was hugely influential on young minds, I have no doubt. This is not a small number of [...] Click to continue reading this post

The post Stan Lee’s Contributions to Science!! appeared first on Asymptotia.

### Lubos Motl - string vacua and pheno

Last June, I discussed machine learning approaches to the search for realistic vacua.

Computers may do a lot of work, and many assumptions that some tasks are "impossibly hard" may be shown incorrect with the help of computers that think and look for patterns. Today, a new paper was published on that issue, Deep learning in the heterotic orbifold landscape. Mütter, Parr, and Vaudrevange use "autoencoder neural networks" as their brain supplements.

The basic idea of the bootstrap program in physics.

But I want to mention another preprint,
Putting the Boot into the Swampland
The authors, Conlon (Oxford) and Quevedo (Trieste), have arguably belonged to the Stanford camp in the Stanford-vs-Swampland polemics. But they decided to study Cumrun Vafa's conjectures seriously and extended them in an interesting way.

Cumrun's "swampland" reasoning feels like a search for new, simple enough, universal principles of Nature that are obeyed in every theory of quantum gravity – or in every realization of string theory. These two "in every" clauses are a priori inequivalent, and they correspond to slightly different papers, or parts of papers, as we know them today. But Cumrun Vafa and others, including me, believe that ultimately, "consistent theory of quantum gravity" and "string/M-theory" describe the same entity – they're two ways to look at the same beast. Why? Because, most likely, string theory really is the only game in town.

Some of the inequalities and claims that discriminate the consistent quantum gravity vacua against the "swampland" sound almost like the uncertainty principle, like some rather simple inequalities or existence claims. In one of them, Cumrun claims that a tower of states must exist whenever the quantum gravity moduli space has some extreme regions.

Conlon and Quevedo assume that this quantum gravitational theory lives in the anti de Sitter space and study the limit $$R_{AdS}\to\infty$$. The hypothesized tower on the bulk side gets translated to a tower of operators in the CFT, by the AdS/CFT correspondence. They argue that some higher-point interactions are fully determined on the AdS side and that the constraints they obey may be translated, via AdS/CFT, to known, older "bootstrap" constraints that have been known in CFT for a much longer time. Well, this is the more "conjectural" part of their paper – but it's the more interesting one and they have some evidence.

If that reasoning is correct, string theory is in some sense getting back to where it was 50 years ago. String theory partly arose from the "bootstrap program", the idea that mere general consistency conditions are enough to fully identify the S-matrix and similar things. That big assumption was basically ruled out – especially when "constructive quarks and gluons" were accepted as the correct description of the strong nuclear force. String theory basically violated the "bootstrap wishful thinking" as well, because it became just as "constructive" as QCD and many other quantum field theories.

However, there has always been a difference. String theory generates low-energy effective field theories from different solutions of the same underlying theory. The string vacua may be mostly connected with each other on the moduli space or through some physical processes (topology changing transitions etc.). That's different from quantum field theories which are diverse and truly disconnected from each other. So string theory has always preserved the uniqueness and the potential to be fully derived from some general consistency condition(s). We don't really know what these conditions precisely are yet.

The bootstrap program was developed decades ago and became somewhat successful for conformal field theories – especially, but not only, the two-dimensional conformal field theories similar to those that live on the stringy world sheets. Cumrun's swampland conditions seem far more tied to gravity and the dynamical spacetime. But via the AdS/CFT, some of the swampland conditions may be mapped to the older bootstrap constraints. Conlon and Quevedo call the map "bootland", not that it matters. ;-)

The ultimate consistency-based definition of quantum gravity or "all of string/M-theory" could be some clever generalization of the conditions we need in CFTs – and the derived bootstrap conditions they obey. We need some generalization in the CFT approach, I guess. Because CFTs are local, we may always distinguish "several particles" from "one particle". That's related to our ability to "count the number of strings" in perturbative string theory i.e. to distinguish single-string and multi-string states, and to count loops in the loop diagrams (by the topology of the world sheet).

It seems clear to me that this reduction to the one-string "simplified theory" must be abandoned in the gravitational generalization of the CFT calculus. The full universal definition of string theory must work with one-object and multi-object states on the same footing from the very beginning. Even though it looks much more complicated, there could be some analogies of the state-operator correspondence, operator product expansions, and other things in the "master definition of string/M-theory". In the perturbative stringy limits, one should be able to derive the world sheet CFT axioms as a special example.

## November 15, 2018

### Emily Lakdawalla - The Planetary Society Blog

When Space Science Becomes a Political Liability
John Culberson, an 8-term Texas Republican and staunch supporter of the search for life on Europa, lost his re-election bid last week. His support for Europa was attacked by opponents, and his loss could send a chilling political message about the consequences of supporting space science and exploration.

### Jon Butterworth - Life and Physics

The Standard Model – TEDEd Lesson
I may have mentioned before, the Standard Model is about 50 years old now. It embodies a huge amount of human endeavour and understanding, and I try to explain it in my book, A Map of the Invisible (or Atom … Continue reading

### The n-Category Cafe

Magnitude: A Bibliography

I’ve just done something I’ve been meaning to do for ages: compiled a bibliography of all the publications on magnitude that I know about. More people have written about it than I’d realized!

This isn’t an exercise in citation-gathering; I’ve only included a paper if magnitude is the central subject or a major theme.

I’ve included works on magnitude of ordinary, un-enriched, categories, in which context magnitude is usually called Euler characteristic. But I haven’t included works on the diversity measures that are closely related to magnitude.

Enjoy! And let me know in the comments if I’ve missed anything.

## November 13, 2018

### ZapperZ - Physics and Physicists

Muons And Special Relativity
For those of us who studied physics or have taken a course involving Special Relativity, this is nothing new. The abundance of muons detected at the Earth's surface has long been used as an example of the direct result of SR's time dilation and length contraction.

Still, it bears repeating and presenting to those who are not aware of it, and that is what this MinutePhysics video has done.

Zz.

### CERN Bulletin

Interfon

Cooperative of international civil servants. Discover all of our advantages and discounts with our suppliers on our website www.interfon.fr or at our office in building 504 (open every day from 12:30 p.m. to 3:30 p.m.).

### CERN Bulletin

Conference

The Staff Association is pleased to invite you to a conference:

“The Wall”

Monday 26th of November 2018

at 6 pm

Main Auditorium (500-1-001)

Presentation by Andrea Musso

Guest Speaker: Eric Irivuzumugabe

The conference will be followed by a photo exhibition and light refreshments.

### CERN Bulletin

GAC-EPA

The GAC organises sessions with individual interviews, held on the last Tuesday of each month.

The next session will take place on:

Tuesday 27 November, from 1:30 p.m. to 4:00 p.m.

Staff Association meeting room

The Pensioners' Group sessions are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/

Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

### CERN Bulletin

Micro Club

INFOS November 2018

Operation NEMO

From Monday 19 November to Friday 7 December 2018, the Club is running its traditional end-of-year Operation NEMO. It offers very attractive prices on selected products from our most important suppliers: Apple, Lenovo, Brother, HP, Western Digital, LaCie, LMP, etc.

During these three weeks, these firms are offering, on certain items, prices slightly lower than those usually applied at the CMC.

We cannot publish the lists on our website, but you can obtain them directly from the club secretariat.

In principle, apart from a few special cases, all deliveries are guaranteed before the end of the year.

IMPORTANT: as a member of the Club, and on presentation of your Staff Association membership card, you can obtain a small additional discount!

Orders and end-of-year closure:

The Club will be closed from Wednesday 12 December 2018 at 8:00 p.m. until Tuesday 8 January 2019 at 6:00 p.m.

A help desk will be staffed from Monday 17 to Wednesday 19 December to handle the last 2018 deliveries.

Apple, Dell & Lenovo orders placed up to Tuesday 4 December 2018 will be delivered before the closure.

Orders for iPads, iPhones, HP products, Brother printers, external disks and toners placed up to Monday 11 December 2018 will be delivered before the closure.

Repairs (Mac & PC) will be carried out until Thursday 6 December 2018.

The sections' help desks close on Wednesday 12 December 2018 at 8:00 p.m.

Membership card

From Thursday 29 November, you can renew your 2019 MEMBER card at the secretariat upon payment of the membership fee. A reminder of our opening hours: Tuesday to Thursday, 6:00 p.m. to 8:00 p.m.

Your Committee

### CERN Bulletin

"CERN: Science Bridging Cultures" by Marilena Streit-Bianchi

You may have noticed, in the CERN Bulletin of 19 April 2018, the announcement of the presentation of the book “CERN: Science Bridging Cultures” to the Ambassador of Mozambique to the United Nations. For those of you who were unable to attend this presentation or have not yet had time to read this book, I present here for ECHO some aspects of this publication and explain how it came about.

Having worked at CERN for 41 years, I have witnessed many changes in the Organization. One thing has always permeated the spirit of the people working at CERN, at all levels: their interest in knowledge overcomes any barrier of origin, gender, language or religion. Now retired, I found that the time had come to highlight that CERN is not just a physics laboratory in quest of the unknown and where elementary particles are discovered and studied. By giving a glimpse of the laboratory’s various activities, I wanted to pay tribute to the mixture of diversity, capacities and humanity that CERN represents.

To this end, I have asked several CERN members to contribute in areas such as fundamental physics research, accelerators, experiments and physicists, information technologies, knowledge transfer and technological spin-offs, as well as the relationship between CERN and peace, CERN and art, and finally the role of science in society. Each contribution has been kept short, maximum four pages, and is easy to read.

I would like to point out that this book would not have been possible without the volunteer work of the contributors [1], who wrote about their own work or field of activity, and also of the many people who contributed to the translations into the different languages. In addition, artists of different nationalities [2] were invited to illustrate the work done at CERN.

I am very grateful to the Staff Association for allowing us to hold the exhibition "A Master of Drawing in Black and White – Justino António Cardoso" in July 2018, thus giving great visibility to the works of this Mozambican artist and giving him the opportunity to show his original drawings, which illustrate CERN's research activities with an African touch and a fresh, uncontaminated look [3].

You can download this book for free from Zenodo, the open and free digital archive of CERN and OpenAIRE. The book is now available in English, French, Italian and Portuguese; from next month it will also be available in Spanish and German. This book is available in several languages so that it can be widely distributed to teachers and students in different countries, so that they can get to know CERN and appreciate why it is good to be able to work there.

Don't forget, if you liked this book, to tell your friends and your children's teachers so that they too can read it and share it with them.

[1] By chapter order: Marilena Streit-Bianchi, Emmanuel Tsesmelis, John Ellis, Lucio Rossi, Ana Maria Henriques Correia and João Martins Correia, Frédéric Hemmer, Giovanni Anelli, João Varela, Arthur I. Miller and Rolf Heuer.

[2] Davide Angheleddu (Italy), Justino António Cardoso (Mozambique), Margarita Cimadevila (Spain), Angelo Falciano (Italy), Michael Hoch (Austria), Karen Panman (Netherlands), Islam Mahmoud Sweity (Palestine), Wolfgang Trettnak (Austria).

[3] Justino António Cardoso was outside Africa for the first time when he visited CERN for 5 days. He had never had any previous contact with high-energy physics or with physicists.

## November 12, 2018

### Jon Butterworth - Life and Physics

James Stirling
Today I got the terrible news of the untimely death of Professor James Stirling. A distinguished particle physicist and until August the Provost of Imperial College London, he will be remembered with fondness and admiration by many. Even astronomers – … Continue reading

### The n-Category Cafe

A Well Ordering Is A Consistent Choice Function

Well orderings have slightly perplexed me for a long time, so every now and then I have a go at seeing if I can understand them better. The insight I’m about to explain doesn’t resolve my perplexity, it’s pretty trivial, and I’m sure it’s well known to lots of people. But it does provide a fresh perspective on well orderings, and no one ever taught me it, so I thought I’d jot it down here.

In short: the axiom of choice allows you to choose one element from each nonempty subset of any given set. A well ordering on a set is a way of making such a choice in a consistent way.

Write $P'(X)$ for the set of nonempty subsets of a set $X$. One formulation of the axiom of choice is that for any set $X$, there is a function $h\colon P'(X) \to X$ such that $h(A) \in A$ for all $A \in P'(X)$.

But if we think of $h$ as a piece of algebraic structure on the set $X$, it’s natural to ask that $h$ behaves in a consistent way. For example, given two nonempty subsets $A, B \subseteq X$, how can we choose an element of $A \cup B$?

• We could, quite simply, take $h\left(A\cup B\right)\in A\cup Bh\left(A \cup B\right) \in A \cup B$.

• Alternatively, we could take first take $h\left(A\right)\in Ah\left(A\right) \in A$ and $h\left(B\right)\in Bh\left(B\right) \in B$, then use $hh$ to choose an element of $\left\{h\left(A\right),h\left(B\right)\right\}\\left\{h\left(A\right), h\left(B\right)\\right\}$. The result of this two-step process is $h\left(\left\{h\left(A\right),h\left(B\right)\right\}\right)h\left(\\left\{ h\left(A\right), h\left(B\right) \\right\}\right)$.

A weak form of the “consistency” I’m talking about is that these two methods give the same outcome:

$h\left(A\cup B\right)=h\left(\left\{h\left(A\right),h\left(B\right)\right\}\right) h\left(A \cup B\right) = h\left(\\left\{h\left(A\right), h\left(B\right)\\right\}\right) $

for all $A,B\in P\prime \left(X\right)A, B \in P\text{'}\left(X\right)$. The strong form is similar, but with arbitrary unions instead of just binary ones:

$h\left(\bigcup \Omega \right)=h\left(\left\{h\left(A\right):A\in \Omega \right\}\right) h\Bigl\left( \bigcup \Omega \Bigr\right) = h\Bigl\left( \bigl\\left\{ h\left(A\right) : A \in \Omega \bigr\\right\} \Bigr\right) $

for all $\Omega \in P\prime P\prime \left(X\right)\Omega \in P\text{'}P\text{'}\left(X\right)$.

Let’s say that a function $h:P\prime \left(X\right)\to Xh: P\text{'}\left(X\right) \to X$ satisfying the weak or strong consistency law is a weakly or strongly consistent choice function on $XX$.

The central point is this:

A consistent choice function on a set $X$ is the same thing as a well ordering on $X$.

That’s true for consistent choice functions in both the weak and the strong sense — they turn out to be equivalent.

The proof is a pleasant little exercise. Given a well ordering $\leq$ on $X$, define $h: P'(X) \to X$ by taking $h(A)$ to be the least element of $A$. It’s easy to see that this is a consistent choice function. In the other direction, given a consistent choice function $h$ on $X$, define $\leq$ by

$x \leq y \Leftrightarrow h(\{x, y\}) = x.$

You can convince yourself that $\leq$ is a well ordering and that $h(A)$ is the least element of $A$, for any nonempty $A \subseteq X$. The final task, also easy, is to show that the two constructions (of a consistent choice function from a well ordering and vice versa) are mutually inverse. And that’s that.

(For anyone following in enough detail to wonder about the difference between weak and strong: you only need to assume that $h$ is a weakly consistent choice function in order to prove that the resulting relation $\leq$ is a well ordering, but if you start with a well ordering $\leq$, it’s clear that the resulting function $h$ is strongly consistent. So weak is equivalent to strong.)
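The correspondence can be made concrete on finite sets, where every total order is automatically a well ordering. A small Python sketch (the function names here are mine, purely illustrative):

```python
# Finite-set illustration of: well ordering <-> consistent choice function.
# On a finite set, "least element" is just min under the order.

def choice_from_order(key):
    """Given a well ordering (encoded as a sort key), return the choice
    function h sending each nonempty finite subset A to its least element."""
    def h(A):
        return min(A, key=key)
    return h

def order_from_choice(h):
    """Given a choice function h, recover the order: x <= y iff h({x,y}) == x."""
    def leq(x, y):
        return h(frozenset({x, y})) == x
    return leq

h = choice_from_order(key=lambda n: n)

# Weak consistency: h(A ∪ B) == h({h(A), h(B)})
A, B = frozenset({3, 5}), frozenset({1, 4})
assert h(A | B) == h(frozenset({h(A), h(B)}))  # both sides give 1

# Round trip: the recovered order agrees with the usual <= on integers.
leq = order_from_choice(h)
assert leq(1, 4) and not leq(4, 1)
```

Of course, the interesting content of the theorem is about infinite sets, where the existence of $h$ is exactly what the axiom of choice provides; the finite case is just a sanity check on the two constructions.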

For me, the moral of the story is as follows. As everyone who’s done some set theory knows, if we assume the axiom of choice then every set can be well ordered. Understanding well orderings as consistent choice functions, this says the following:

If we’re willing to assume that it’s possible to choose an element of each nonempty subset of a set, then in fact it’s possible to make the choice in a consistent way.

People like to joke that the axiom of choice is obviously true, and that the well orderability of every set is obviously false. (Or they used to, at least.) The theorem on well ordering is derived from the axiom of choice by an entirely uncontroversial chain of reasoning, so I’ve always taken that joke to be the equivalent of throwing one’s hands up in despair: isn’t math weird! Look how this highly plausible statement implies an implausible one!

So the joke expresses a breakdown in many people’s intuitions. And with well orderings understood in the way I’ve described, we can specify the point at which the breakdown occurs: it’s in the gap between making a choice and making a consistent choice.

### Jon Butterworth - Life and Physics

Brief Answers to the Big Questions by Stephen Hawking – review
Back in the Guardian (well, the Observer actually) with a review of Stephen Hawking’s final book. A couple of paragraphs didn’t make the edit; no complaints from me about that, but I put them here mainly for the sake of … Continue reading

## November 11, 2018

### Lubos Motl - string vacua and pheno

New veins of science can't be found by a decree
Edwin has pointed out that a terrifying anti-science article was published in The Japan Times yesterday:
Scientists spend too much time on the old.
The author, the Bloomberg opinion columnist Noah Smith (I later noticed that the rant was first published by Bloomberg), starts by attacking Ethan Siegel's text supporting a new particle collider. Smith argues that because too many scientists are employed in projects that extend previous knowledge, which leads to diminishing returns, all projects extending old science should be defunded and the money redistributed to completely new small projects with far-reaching practical consequences.

What a pile of toxic garbage!

Let's discuss the content of Smith's diatribe in some detail:
In a recent Forbes article, astronomer and writer Ethan Siegel called for a big new particle collider. His reasoning was unusual. Typically, particle colliders are created to test theories [...] But particle physics is running out of theories to test. [...] But fortunately governments seem unlikely to shell out the tens of billions of dollars required, based on nothing more than blind hope that interesting things will appear.
First of all, Smith says that it's "unusual" to say that the new collider should search for deviations from the Standard Model even if we don't know which ones we should expect. But there is nothing unusual about it at all and by his anxiety, Smith only shows that he doesn't have the slightest clue what science is.

The falsification of existing theories is how science makes progress – pretty much the only way experimenters contribute to progress in science. This statement boils down to the fact that science can never prove theories to be completely right – after all, with the exception of a truly final theory, theories of physics are never quite right.

Instead, what an experiment can do reliably enough is to show that a theory is wrong. When the deviations from the old theoretical predictions are large enough so that we can calculate that it is extremely unlikely for such large deviations to occur by chance, we may claim with certainty that something that goes beyond the old theory has been found.

This is how the Higgs boson was found, too. The deviation of the measured data from the truncated Standard Model prediction assuming that "no Higgs boson exists" grew to 5 sigma, at which point the Higgs boson discovery was officially announced.
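As a numerical aside, the "5 sigma" discovery convention corresponds to a one-sided tail probability of roughly 2.9 × 10⁻⁷ for a background-only fluctuation, computable directly from the normal distribution:

```python
# Convert an "n sigma" deviation into the one-sided probability that a
# background-only (standard normal) fluctuation reaches at least that far.
from math import erfc, sqrt

def one_sided_p_value(n_sigma):
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2.0))

for n in (3, 5):
    print(f"{n} sigma -> p = {one_sided_p_value(n):.2e}")
# 3 sigma ("evidence")  -> p ≈ 1.3e-03
# 5 sigma ("discovery") -> p ≈ 2.9e-07
```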

The only true dichotomy boils down to the question whether the new theories and phenomena are first given some particular shape by theorists or by experimenters. The history of physics is full of both examples. Sometimes theorists have reasons to become sufficiently certain that a new phenomenon should exist because of theoretical reasons, and that phenomenon is later found by an experiment. Sometimes an experiment sees a new and surprising phenomenon and theorists only develop a good theory that explains the phenomenon later.

Theorists are surely not running out of theories to test. There are thousands of models – often resulting from very deep and highly motivated theories such as string theory or at least grand unification – with tens of thousands of predictions, and all of them may be tested. The recent frequency of discoveries just makes it clear that we shouldn't expect a new phenomenon beyond the Standard Model to be discovered every other day. This is how Nature works.

In this lovely video promoting a location for the ILC project (another site has won), I think the English subtitles were only added recently. The girl is a bored positron waiting for an electron.

Smith says that the expectation that new interesting things may be seen by a new collider is a "blind hope". But it is not a hope, let alone a blind one. It is a genuine possibility. It is a fact of physics that we don't know whether the Standard Model works around a collision energy of 100 TeV. It either does or it does not. Indeed, because new physics is more interesting, physicists may "hope" that this is the answer the collider will give. But the collider will give us some nontrivial information in either case.

Because the "new physics" answer is more interesting, one may say that the construction of colliders is partially a bet, a lottery ticket, too. But most progress just couldn't have emerged without experimenting, betting, taking risks. If you want to avoid all risks, if you insist on certainty, you will have to rely on welfare (or, if you are deciding how to invest your money, on cash holdings or savings accounts with very low interest rates). You are a coward. You are not an important person for the world and you shouldn't get away with attempts to pretend that you are one.

Also, Smith says that governments are "unlikely to shell out the tens of billions". That's rubbish. Just like in the past, governments are very likely to reserve these funds because they are negligible amounts of money relative to the overall budgets – and at least the symbolic implications of these modest expenses are far-reaching. When America was building up its space program, a great fraction of the federal budget was being spent on it – well over 4% in a peak year. Compared to that, the price of a big collider is negligible. All governments have some people who know enough to be sure that rants by anti-science activists like Smith are worth nothing. Smith lives in a social bubble where his delusions are probably widespread, but all the people in that bubble are largely disconnected from the most important things in the world and in society.

Japan is currently deciding whether to host the ILC.
Particle physicists have referred to this seeming dead end as a nightmare scenario. But it illustrates a deep problem with modern science. Too often, scientists expect to do bigger, more expensive versions of the research that worked before. Instead, what society often needs is for researchers to strike out in entirely new directions.
The non-discovery of new physics at the LHC has been described in disappointed phrases because people prefer experiments that stimulate their own thinking and curiosity – and that of other physicists. Of course scientists prefer to do things where the chance of discovering something really new is higher. However, in fundamental physics, building a collider with a higher energy is the best known way to do it. You may be ignorant of this fact, Mr Smith, but that's because you are an idiot, not because of some hypothetical flaw of high energy physics, which is called high energy physics for a good reason: increasing the energy is largely equivalent to making progress, because higher energy is equivalent to shorter distance scales, where we increasingly understand what is going on with an improving resolution.

If it were possible and easy to "strike out in entirely new directions", scientists would do it for obvious reasons – it would surely be great for the career of the author who finds a new direction. But qualitatively new discoveries are rare and cannot be ordered by a decree. We don't know in what exact directions "something new and interesting is hiding" which is why people must sort of investigate all promising enough directions. And looking in all the similar directions of "various new phenomena that may be seen at even higher energies" is simply the most promising strategy in particle physics according to what we know.

Equally importantly, extending research strategies "that have worked before" isn't a sin. It's really how science always works. Scientific discoveries are never quite disconnected from the previous ones. Isaac Newton found quite a revolutionary new direction – the quantitative basis for physics as we know it. He's still known for the saying
If I have seen further it is by standing on the shoulders of giants.
Newton was partly joking – he wanted to mock some smaller and competing minds, namely Gottfried Leibniz and especially Robert Hooke, who was short – but he was (and everyone was) aware of the fact that new discoveries don't take place in a vacuum. Newton still had to build on the mathematics developed before him. When showing that his law of gravity worked, he found Kepler's laws of planetary motion to be a very helpful summary of what his theory should imply, and so on.

Every new scientific advance is a "twist" in some previous ideas. It just cannot be otherwise. All the people who are claiming to make groundbreaking discoveries that are totally disconnected from the science of the recent century or so are full-blown crackpots.
During the past few decades, a disturbing trend has emerged in many scientific fields: The number of researchers required to generate new discoveries has steadily risen.
Yup. In some cases the numbers may be reduced, but in others they cannot be. For example – and this example is still rather typical for modern theoretical physics – M-theory was still largely found by one person, Edward Witten. It's unquestionable that most theoretical physicists have contributed much less science than Witten, even much less "science output per dollar". On the other hand, it's obvious that Witten discovered only a small minority of the physics breakthroughs. If the number of theoretical physicists were one, or comparable to one, progress would be almost non-existent.

Experimental particle physics requires many more people per paper – like the 3,000 members of the ATLAS Collaboration (and another 3,000 in CMS). But there are rather good reasons for that. ATLAS and CMS don't really differ from a company that produces something. For example, the legendary soft drink maker Kofola Czechoslovakia also has close to 3,000 employees. In Kofola, ATLAS, and CMS alike, people do different kinds of work, and if there's an obvious way to fire some of them while keeping all the vital processes going, it's done.

You may compare Kofola, ATLAS, and CMS and decide which of them is doing a better job for the society. People in Czechoslovakia and Yugoslavia drink lots of Kofola products. People across the world are inspired to think about the collisions at the Large Hadron Collider. From a global perspective, Kofola, ATLAS, and CMS are negligible groups of employees. Each of them employs less than one-half of one millionth of the world population.

Think about the millions of people in the world who are employed in tax authorities, although most of them could be fired and tax collection could be done much more effectively with relatively modest improvements. Why does Mr Smith attack the teams working on the most important particle accelerator and not the tax officials? Because he is not actually motivated by any efficiency. He is driven by his hatred of science.
In the 1800s, a Catholic monk named Gregor Mendel was able to discover some of the most fundamental concepts of genetic inheritance by growing pea plants.
Mendel was partly lucky – like many others. But his work cannot be extracted from its context. Mendel was one employee of the abbey in Brno, the University of Olomouc, and perhaps other institutions in Czechia whose existence was at least partly justified by efforts to deepen human knowledge (or to breed better plants for economic reasons). At any rate, fundamental discoveries such as Newton's or Mendel's were waiting to be made – they were the low-hanging fruit.

Indeed, one faces diminishing returns after the greatest discoveries are made, and this is true in every line of research and other activities. But this is a neutral and obvious fact, not something that can rationally be used against whole fields. It's really a tautology – returns are diminishing after the greatest discoveries; otherwise they wouldn't be the greatest. ;-) Particle physics didn't become meaningless after some event – any event, let's say the theoretical discovery of quantum field theory or the experimental discovery of the W and Z bosons – just as genetics didn't become meaningless after Mendel discovered his fundamental laws. On the contrary, these important events were the beginnings, when things actually started to be fun.

Smith complains that biotech companies have grown into multi-billion enterprises while Mendel was just playing in his modest garden. Why are billions spent on particle physics or genetics? Because they can be. Mankind produces almost $100 trillion in GDP every year. Of course some fraction of it simply has to go to genetics and particle physics because they're important, relatively speaking. It is ludicrous to compare the spending on human genome projects or new colliders with Mendel's garden, because no one actually has the choice of funding either Mendel's research or the International Particle Collider. These are not true competitors because they're separated by 150 years! People across epochs can't compete for funds. On top of that, the world GDP was smaller by orders of magnitude 150 years ago. Instead, we must compare whether we pay more money for a collider and less money e.g. for soldiers in Afghanistan (that campaign has cost over $1 trillion; or anything else, I don't want this text to be focused on interventionism) or vice versa. These are actually competing options. Of course particle physics and genetics deserve tens of billions every decade, to say the least. Ten billion dollars is just 0.01% of one year's world GDP, an incredibly tiny fraction. Even if there were almost no results, studying science is a part of what makes us human. Nations that don't do such things are human to a lesser degree and animals to a higher degree, and they can be more legitimately treated as animals by others – e.g. eradicated. For this reason, paying something for science (even pure science) also follows from survival instincts.
The universe of scientific fields isn’t fixed. Today, artificial intelligence is an enormously promising and rapidly progressing area, but back in 1956...
Here we see one thing that might actually support his case. But I don't think that most people who work on artificial intelligence should be called scientists. They're really engineers – or even further from science. Their goal isn't to describe how Nature works. Their task is to invent and build things that can do certain new things, exploiting known pieces that work according to known laws.
To keep rapid progress going, it makes sense to look for new veins of scientific discovery. Of course, there’s a limit to how fast that process can be forced...
The main problem isn't "how fast that process can be forced". The main problem with Smith's diatribe is that discovery itself cannot be forced or pre-programmed; and that the search for particular things according to a particular strategy shouldn't be forced on scientists by laymen such as Mr Smith at all, because such enforced behavior reduces the freedom of the scientists, which slows down progress. And the rate of progress is whatever it is. There aren't any trivial ways to make it much faster, and claims to the contrary are pure wishful thinking. No one should be allowed to harass other people just because the world disagrees with his wishful thinking.
...it wasn’t until computers became sufficiently powerful, and data sets sufficiently big, that AI really took off.
The real point is that it just cannot be clear to everybody (or anybody!) from the beginning which research strategy or direction is likely to become interesting. But the scientists themselves are still more likely to make the right guess about the hot directions of future research than ignorant laymen like Mr Smith who are obsessed with "forcing things" on everyone else.
But the way that scientists now are trained and hired seems to discourage them from striking off in bold new directions.
Mr Smith could clearly crawl into Mr Sm*lin's rectum and vice versa, to make it more obvious that allowing scum like that is a vicious circle.

What is actually discouraging scientists from striking off in bold new directions are anti-science rants such as this one by Mr Smith, which clearly try to restrict what science can do (and maybe even think). If you think you can make some groundbreaking discovery in a new direction, why don't you just do it yourself? Or together with thousands of similar inkspillers who are writing similar cr*p? And if you can't, why don't you exploit your rare opportunity to shut up? You don't have the slightest clue about science or the right way to do it, and your influence over these matters is bound to be harmful.
This means that as projects like the Hadron Collider require ever-more particle physicists, ...
It is called the Large Hadron Collider, not just Hadron Collider, you Little Noam Smith aßhole.
With climate change a looming crisis, the need to discover sustainable energy technology...
Here we go. Only scientifically illiterate imbeciles like you believe that "climate change is a looming crisis". (I have already written several blog posts about dirty scumbags who would like to add physics funds to the climate hysteria.)

## October 26, 2018

### Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

New Frontiers

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque mattis hendrerit ipsum, ac vehicula mauris iaculis eget. Aliquam quam felis, euismod ac arcu quis, laoreet pretium orci. Ut fermentum luctus lacus, ut convallis eros aliquam ut. In elementum sem vel commodo tristique. Sed non iaculis tortor. Maecenas vehicula lorem risus, in efficitur risus auctor et. Morbi ac ornare lacus. Aenean eu molestie ipsum. Proin tristique a purus quis semper.

Cras iaculis non metus id luctus. Aenean pellentesque et lacus sed malesuada. Integer nec tempor est, non convallis lacus. Duis ultrices sapien sit amet libero bibendum, eu auctor justo tempor. Etiam vitae arcu nisl. Mauris varius, lorem ut varius cursus, tortor lectus pellentesque justo, a dapibus lorem purus in eros. In vestibulum ultrices massa euismod fringilla. Sed iaculis semper commodo. Ut scelerisque tristique vestibulum. Sed dapibus porta risus nec ultricies. Nulla facilisi. Donec bibendum eu justo vel egestas. Sed vitae sagittis massa. Maecenas suscipit orci quis consequat sodales.

Sed facilisis condimentum ante sed bibendum. Phasellus sagittis commodo hendrerit. Etiam finibus nunc rutrum, volutpat risus vitae, tincidunt est. Proin id felis nisi. Etiam suscipit nec nulla at ultricies. Nam iaculis lacinia nunc, id ullamcorper purus luctus at. Pellentesque in consectetur ante. Sed eget porta enim.

The post New Frontiers appeared first on None Equilibrium.

### Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

People & Society


The post People & Society appeared first on None Equilibrium.

### Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Environment & Energy


The post Environment & Energy appeared first on None Equilibrium.

### Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Particle Physics


The post Particle Physics appeared first on None Equilibrium.

### Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Space Exploration


The post Space Exploration appeared first on None Equilibrium.

## October 24, 2018

### Jon Butterworth - Life and Physics

The trouble-makers of particle physics
The chances are you have heard quite a bit about the Higgs boson. The goody-two-shoes of particle physics, it may have been hard to find, but when it was discovered it was just as the theory – the Standard Model … Continue reading

### Axel Maas - Looking Inside the Standard Model

Looking for something when no one knows how much is there
This time, I want to continue the discussion from some months ago. Back then, I was rather general about how we could test our most dramatic idea. This idea concerns what we regard as elementary particles. So far, our picture is that the particles you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiments, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only look at first sight as if they themselves were elementary. Sounds pretty weird, huh? And if it sounds weird, it needs to be tested. We did so with numerical simulations, and they all agreed perfectly with the idea. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We had some ideas already a while back. One of them will be ready soon, and I will talk about it again in due time. But it will be rather indirect, and somewhat qualitative. The other required a new experiment, which may take two more decades to build. Thus neither can be the answer alone, and we need something more.

And this something more is what we are currently closing in on. Because this kind of weird bound-state structure is needed to make the standard model consistent, not only are exotic particles more complicated than usually assumed; ordinary ones are too. And the most ordinary of all are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So we already have a machine which may be able to test the idea. But this is involved, as protons are very messy: even in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus we somehow have to disentangle old and new components, and we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particle we know is the top quark, followed by the W and Z bosons. And the CMS experiment (among others) at CERN has a measurement campaign looking at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons; there are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected, e.g. whether they fly away from the interaction point in the expected directions and with the expected speeds.

So what a master student and I do is the following. We use a program called HERWIG, which simulates such events. One of the people who created this program helped us modify it so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution in the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is distributed inside the proton.

And this we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution in the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able to either constrain the contribution further or even detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.
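The logic of such a bound can be caricatured in a few lines. Everything below is an illustrative assumption on my part (the bin contents, the linear mixing of a "standard" and a "modified" prediction, and the Δχ² = 3.84 criterion for a 95% upper bound), not the actual HERWIG analysis:

```python
import numpy as np

# Toy distributions: events per kinematic bin (all numbers invented)
standard = np.array([120.0, 80.0, 40.0, 20.0])  # conventional proton structure
modified = np.array([100.0, 90.0, 50.0, 25.0])  # with an extra Higgs component
data     = np.array([118.0, 82.0, 41.0, 19.0])  # pseudo-data
sigma    = np.sqrt(data)                        # Poisson-like uncertainties

def chi2(eps):
    """Goodness of fit for a proton with hypothetical Higgs fraction eps."""
    pred = (1.0 - eps) * standard + eps * modified
    return float(np.sum(((data - pred) / sigma) ** 2))

# Scan the fraction; the 95% upper bound is the largest eps whose chi2
# lies within 3.84 of the minimum
eps_grid  = np.linspace(0.0, 1.0, 1001)
chi2_vals = np.array([chi2(e) for e in eps_grid])
best  = eps_grid[np.argmin(chi2_vals)]
bound = eps_grid[chi2_vals <= chi2_vals.min() + 3.84].max()
print(best, bound)
```

With pseudo-data close to the standard prediction, the best-fit fraction comes out near zero but the upper bound stays sizeable, which is the qualitative situation described above: the data only bounds the contribution, it does not yet pin it down.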

## October 20, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The thrill of a good conference

One of the perks of academia is the thrill of presenting results, thoughts and ideas at international conferences. Although the best meetings often fall at the busiest moment in the teaching semester and the travel can be tiring, there is no doubt that interacting directly with one’s peers is a huge shot in the arm for any researcher – not to mention the opportunity to travel to interesting locations and experience different cultures.

The view from my hotel in San Sebastian this morning.

This week, I travelled to San Sebastian in Spain to attend the Third International Conference on the History of Physics, the latest in a series of conferences that aim to foster dialogue between physicists with an interest in the history of their subject and professional historians of science. I think it’s fair to say the conference was a great success, with lots of interesting talks on a diverse range of topics. It didn’t hurt that the meeting took place in the Palacio Mirimar, a beautiful building in a fantastic location.

The Palacio Mirimar in San Sebastian.

The conference programme can be found here. I didn’t get to all the talks due to parallel timetabling, but three major highlights for me were ‘Structure or Agent? Max Planck and the Birth of Quantum Theory’ by Massimiliano Badino of the University of Verona, ‘The Principle of Plenitude as a Guiding Theme in Modern Physics’ by Helge Kragh of the University of Copenhagen, and ‘Rutherford’s Favourite Radiochemist: Bertram Borden’ by Edward Davis of the University of Cambridge.

A slide from the paper ‘Max Planck and the Birth of Quantum Theory’

My own presentation was titled ‘The Dawning of Cosmology – Internal vs External Histories’ (the slides are here). In it, I considered the story of the emergence of the ‘big bang’ theory of the universe from two different viewpoints, that of the professional physicist vs. that of the science historian. (The former approach is sometimes termed ‘internal history’, as scientists tend to tell the story of scientific discovery as an interplay of theory and experiment within the confines of science. The latter approach is termed ‘external’ because the professional historian will consider external societal factors such as the prestige of researchers and their institutions and the relevance of national or international contexts.) Nowadays, it is generally accepted that both internal and external factors usually play a role in a given scientific advance, a process that has been termed the co-production of scientific knowledge.

Giving my paper in the conference room

As it was a short talk, I focused on three key stages in the development of the big bang model: the first (static) models of the cosmos that arose from relativity, the switch to expanding cosmologies in the 1930s, and finally the (much more gradual) transition to the idea of a universe that was once small, dense and hot. In preparing the paper, I found that the first stage was driven almost entirely by theoretical considerations (namely, Einstein’s wish to test his newly-minted general theory of relativity by applying it to the universe as a whole), with little evidence of co-production. Similarly, I found that the switch to expanding cosmologies was driven almost entirely by developments in astronomy (namely, Hubble’s observations of the recession of the galaxies). Finally, I found that the long rejection of Lemaître’s ‘fireworks’ universe was driven by obvious theoretical problems associated with the model (such as the problem of the singularity and the age paradox), while the eventual acceptance of the model was driven by major astronomical advances such as the discovery of the cosmic microwave background. Overall, my conclusion was that one could give a reasonably coherent account of the early development of modern cosmology in terms of the traditional narrative of an interplay of theory and experiment, with little evidence that social considerations played an important role in this particular story. As I once heard the noted historian Hasok Chang remark in a seminar, ‘Sometimes science is the context’.

Can one draw any general conclusions from this little study? I think it would be interesting to investigate the matter further. One possibility is that social considerations become more important ‘as a field becomes a field’, i.e., as a new area of physics coalesces into its own distinct field, with specialized journals, postgraduate positions and undergraduate courses etc. Could it be that the traditional narrative works surprisingly well when considering the dawning of a field because the co-production effect is less pronounced then? Certainly, I have also found it hard to discern any major societal influence in the dawning of other theories such as special relativity or general relativity.

Coda

As a coda, I discussed a pet theme of mine; that the co-productive nature of scientific discovery presents a special problem for the science historian. After all, in order to weigh the relative impact of internal vs external considerations on a given scientific advance, one must presumably have a good understanding of each. But it takes many years of specialist training to attempt to place a scientific advance in its true scientific context, an impossible ask for a historian trained in the humanities. Some science historians avoid this problem by ‘black-boxing’ the science and focusing on social context alone. However, this means the internal scientific aspects of the story are either ignored or repeated from secondary sources, rather than offering new insights from perusing primary materials. Besides, how can one decide whether a societal influence is significant or not without considering the science? For example, Paul Forman’s argument concerning the influence of contemporaneous German culture on the acceptance of the Uncertainty Principle in quantum theory is interesting, but pays little attention to physics; a physicist might point out that it quickly became clear to the quantum theorists (many of whom were not German) that the Uncertainty Principle arose inevitably from wave-particle duality in all three formulations of the theory (see Hendry on this for example).

Indeed, now that it is accepted one needs to consider both internal and external factors in studying a given scientific advance, it’s not obvious to me what the professionalization of science history should look like, i.e., how the next generation of science historians should be trained. In the meantime, I think there is a good argument for the use of multi-disciplinary teams of collaborators in the study of the history of science.

All in all, a very enjoyable conference. I wish there had been time to relax and have a swim in the bay, but I never got a moment. On the other hand, I managed to stock up on some free issues of my favourite publication in this area, the European Physical Journal (H).  On the plane home, I had a great read of a seriously good EPJH article by S.M. Bilenky on the history of neutrino physics. Consider me inspired….

## October 17, 2018

### Robert Helling - atdotde

Bavarian electoral system
Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations of the rules (assuming the actual distribution of votes) would have worked out. So, first, here is how the seats are actually distributed: Each voter gets two ballots. On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you can vote for a party list (it's even more complicated because there, too, you can select individual candidates to determine their position on the list, but let's ignore that for today).

Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) is elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is not of particular concern here). Let's call the rest the "reduced total". The seats are then distributed according to each party's fraction of this reduced total.

Of course, the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer method: you first distribute the integer parts. This clearly leaves fewer open seats than there are parties. Those you then give to the parties whose rounding error down to the integer below was greatest. Check out the wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
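For concreteness, here is a minimal Python sketch of this largest-remainder step (an illustrative reimplementation, not the perl script used for the analysis below), together with a toy example of the seat-loss paradox just mentioned:

```python
from math import floor

def hare_niemeyer(votes, seats):
    """Largest-remainder (Hare-Niemeyer) apportionment: hand out the
    integer parts of the quotas first, then give the leftover seats to
    the parties with the largest fractional remainders."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for p in sorted(quotas, key=lambda q: quotas[q] - floor(quotas[q]),
                    reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# The paradox: party C loses a seat when the house grows from 10 to 11.
print(hare_niemeyer({"A": 6, "B": 6, "C": 2}, 10))  # C gets 2 seats
print(hare_niemeyer({"A": 6, "B": 6, "C": 2}, 11))  # C drops to 1 seat
```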

Because this is what happens in the next step: Remember that in the first round we already allocated a number of seats to constituency winners. Those count towards the number of seats each party is supposed to get in step two according to its fraction of votes. Now it can happen that a party has won more direct seats than the seats allocated to it in step two. If that happens, more seats are added to the total number of seats and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the others, leading to that party winning almost all direct seats (in Bavaria this happened to the CSU, which won all direct seats except five in Munich and one in Würzburg, which were won by the Greens).

A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So there are seven separate rounding and seat-adding procedures.
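Putting the steps together, a toy version of one district's allocation might look as follows. The party names and numbers are invented, and the 5% hurdle is applied per call for simplicity, whereas the real rules apply it to the state-wide total:

```python
from math import floor

def largest_remainder(votes, seats):
    """Hare-Niemeyer apportionment of `seats` among `votes`."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for p in sorted(quotas, key=lambda q: quotas[q] - floor(quotas[q]),
                    reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

def district_seats(votes, direct, base_seats, threshold=0.05):
    """Drop parties under the hurdle, then grow the house one seat at a
    time until every party keeps at least its directly won seats."""
    total = sum(votes.values())
    votes = {p: v for p, v in votes.items() if v / total >= threshold}
    seats = base_seats
    while True:
        alloc = largest_remainder(votes, seats)
        if all(alloc.get(p, 0) >= n for p, n in direct.items() if p in votes):
            return alloc
        seats += 1

# A party winning 8 direct seats on 60% of the vote in a 10-seat district
# forces the house to grow until its proportional share covers those seats.
print(district_seats({"A": 600, "B": 400}, {"A": 8}, 10))
```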

Sunday's election resulted in the following distribution of seats:

After the whole procedure, there are 205 seats distributed as follows

• CSU 85 (41.5% of seats)
• SPD 22 (10.7% of seats)
• FW 27 (13.2% of seats)
• GREENS 38 (18.5% of seats)
• FDP 11 (5.4% of seats)
• AFD 22 (10.7% of seats)

Now, for example, one can calculate the distribution without districts, just throwing everything into a single super-district. Then there are 208 seats, distributed as

• CSU 85 (40.8%)
• SPD 22 (10.6%)
• FW 26 (12.5%)
• GREENS 40 (19.2%)
• FDP 12 (5.8%)
• AFD 23 (11.1%)
You can see that, in particular, the CSU, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, while the last three parties would benefit from giving up districts.

But then there is actually an issue of negative vote weight: The Greens are particularly strong in Munich, where they managed to win five direct seats. If those seats had instead gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU. This would increase the weight of Oberbayern compared to the other districts, which would then benefit the Greens, as they are particularly strong in Oberbayern. So if I give all the direct seats to the CSU (without modifying the total vote counts), I get the following distribution:
221 seats
• CSU 91 (41.2%)
• SPD 24 (10.9%)
• FW 28 (12.6%)
• GREENS 42 (19.0%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)
That is, the Greens would have gotten a higher fraction of seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!

The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition), but still, the constitutional court does not like (predictable) negative vote weights. Let's see if somebody challenges this election and what that would lead to.

The perl script I used to do this analysis is here.

Postscript:
The above analysis is not entirely fair in the last point, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they led in the constituencies they won leads to an almost zero effect:

Seats: 220
• CSU 91 (41.4%)
• SPD 24 (10.9%)
• FW 28 (12.7%)
• GREENS 41 (18.6%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)
Letting the Greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up all of central Munich's more left-leaning voters; do I hear somebody say "Gerrymandering"?) yields

Seats: 217
• CSU 90 (41.5%)
• SPD 23 (10.6%)
• FW 28 (12.9%)
• GREENS 41 (18.9%)
• FDP 12 (5.5%)
• AFD 23 (10.6%)
Or letting them win all but Moosach and Würzburg-Stadt, where their leads were smallest:

Seats: 210

• CSU 87 (41.4%)
• SPD 22 (10.5%)
• FW 27 (12.9%)
• GREENS 40 (19.0%)
• FDP 11 (5.2%)
• AFD 23 (11.0%)

## October 15, 2018

### Clifford V. Johnson - Asymptotia

Mindscape Interview!

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

I hope you enjoy it!!

The post Mindscape Interview! appeared first on Asymptotia.

## October 14, 2018

### Clifford V. Johnson - Asymptotia

Futuristic Podcast Interview

For your listening pleasure: I've been asked to do a number of longer interviews recently. One of these was for the "Futuristic Podcast of Mark Gerlach", who interviews all sorts of people from the arts (normally) over to the sciences (well, he hopes to do more of that starting with me). Go and check out his show on iTunes. The particular episode with me can be found as episode 31. We talk about a lot of things, from how people get into science (including my take on the nature vs nurture discussion), through the changes in how people get information about science to the development of string theory, to black holes and quantum entanglement - and a host of things in between. We even talked about The Dialogues, you'll be happy to hear. I hope you enjoy listening!

(The picture? Not immediately relevant, except for the fact that I did cycle to the place the recording took place. I mostly put it there because I was fixing my bike not long ago and it is good to have a photo in a post. That is all.)

The post Futuristic Podcast Interview appeared first on Asymptotia.

## October 13, 2018

### John Baez - Azimuth

Category Theory Course

I’m teaching a course on category theory at U.C. Riverside, and since my website is still suffering from reduced functionality I’ll put the course notes here for now. I taught an introductory course on category theory in 2016, but this one is a bit more advanced.

The hand-written notes here are by Christian Williams. They are probably best seen as a reminder to myself as to what I’d like to include in a short book someday.

Lecture 1: What is pure mathematics all about? The importance of free structures.

Lecture 2: The natural numbers as a free structure. Adjoint functors.

Lecture 3: Adjoint functors in terms of unit and counit.

Lecture 5: 2-Categories and string diagrams. Composing adjunctions.

Lecture 6: The ‘main spine’ of mathematics. Getting a monad from an adjunction.

Lecture 8: The walking monad, the augmented simplex category and the simplex category.

Lecture 9: Simplicial abelian groups from simplicial sets. Chain complexes from simplicial abelian groups.

Lecture 10: The Dold-Thom theorem: the category of simplicial abelian groups is equivalent to the category of chain complexes of abelian groups. The homology of a chain complex.

Lecture 12: The bar construction: getting a simplicial object from an adjunction. The bar construction for G-sets, previewed.

Lecture 13: The adjunction between G-sets and sets.

Lecture 14: The bar construction for groups.

Lecture 15: The simplicial set $\mathbb{E}G$ obtained by applying the bar construction to the one-point $G$-set, its geometric realization $EG = |\mathbb{E}G|,$ and the free simplicial abelian group $\mathbb{Z}[\mathbb{E}G].$

Lecture 16: The chain complex $C(G)$ coming from the simplicial abelian group $\mathbb{Z}[\mathbb{E}G],$ its homology, and the definition of group cohomology $H^n(G,A)$ with coefficients in a $G$-module.

Lecture 17: Extensions of groups. The Jordan-Hölder theorem. How an extension of a group $G$ by an abelian group $A$ gives an action of $G$ on $A$ and a 2-cocycle $c \colon G^2 \to A.$

Lecture 18: Classifying abelian extensions of groups. Direct products, semidirect products, central extensions and general abelian extensions. The groups of order 8 as abelian extensions.

Lecture 19: Group cohomology. The chain complex for the cohomology of $G$ with coefficients in $A$, starting from the bar construction, and leading to the 2-cocycles used in classifying abelian extensions. The classification of extensions of $G$ by $A$ in terms of $H^2(G,A).$

Lecture 20: Examples of group cohomology: nilpotent groups and the fracture theorem. Higher-dimensional algebra and homotopification: the nerve of a category and the nerve of a topological space. $\mathbb{E}G$ as the nerve of the translation groupoid $G/\!/G.$ $BG = EG/G$ as the walking space with fundamental group $G.$

## October 07, 2018

### John Baez - Azimuth

Lebesgue Universal Covering Problem (Part 3)

Back in 2015, I reported some progress on this difficult problem in plane geometry. I’m happy to report some more.

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…
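That number is just √3/2, the area of a regular hexagon whose inscribed circle has diameter 1; here is a quick check:

```python
from math import pi, sqrt, tan

# Regular hexagon circumscribing a circle of diameter 1:
# apothem a = 1/2, side s = 2·a·tan(pi/6), area = (1/2)·perimeter·apothem
a = 0.5
s = 2 * a * tan(pi / 6)
area = 0.5 * (6 * s) * a
print(area)  # 0.8660254...
```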

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

The resulting universal covering has area

0.84529946…

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.8441377708435…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 · 10⁻¹¹ and 6 · 10⁻¹⁸. This turned out to be wrong. The actual areas are 3.7507 · 10⁻¹¹ and 8.4460 · 10⁻²¹. The resulting universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, in 2015 Philip Gibbs found a way to remove about a million times more area than Hansen’s larger region: a whopping 2.233 · 10⁻⁵. This gave a universal covering with area

0.844115376859…

Karine Bagdasaryan and I helped Gibbs write up a rigorous proof of this result, and we published it here:

• John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Journal of Computational Geometry 6 (2015), 288–299.

Greg Egan played an instrumental role as well, catching various computational errors.

At the time Philip was sure he could remove even more area, at the expense of a more complicated proof. Since the proof was already quite complicated, we decided to stick with what we had.

But this week I met Philip at The philosophy and physics of Noether’s theorems, a wonderful workshop in London which deserves a full blog article of its own. It turns out that he has gone further: he claims to have found a vastly better universal covering, with area

0.8440935944…

This is an improvement of 2.178245 × 10⁻⁵ over our earlier work—roughly equal to our improvement over Hansen.

You can read his argument here:

• Philip Gibbs, An upper bound for Lebesgue’s universal covering problem, 22 January 2018.

I say ‘claims’ not because I doubt his result—he’s clearly a master at this kind of mathematics!—but because I haven’t checked it and it’s easy to make mistakes, for example mistakes in computing the areas of the shapes removed.

It seems we are closing in on the final result; however, Philip Gibbs believes there is still room for improvement, so I expect it will take at least a decade or two to solve this problem… unless, of course, some mathematicians start working on it full-time, which could speed things up considerably.

## October 06, 2018

### John Baez - Azimuth

Riverside Math Workshop

We’re having a workshop with a bunch of cool math talks at U. C. Riverside, and you can register for it here:

Riverside Mathematics Workshop for Excellence and Diversity, Friday 19 October – Saturday 20 October, 2018. Organized by John Baez, Carl Mautner, José González and Chen Weitao.

This is the first of an annual series of workshops to showcase and celebrate excellence in research by women and other under-represented groups for the purpose of fostering and encouraging growth in the U.C. Riverside mathematical community.

After tea at 3:30 p.m. on Friday there will be two plenary talks, lasting until 5:00. Catherine Searle will talk on “Symmetries of spaces with lower curvature bounds”, and Edray Goins will give a talk called “Clocks, parking garages, and the solvability of the quintic: a friendly introduction to monodromy”. There will then be a banquet in the Alumni Center 6:30 – 8:30 p.m.

On Saturday there will be coffee and a poster session at 8:30 a.m., and then two parallel sessions on pure and applied mathematics, with talks at 9:30, 10:30, 11:30, 1:00 and 2:00. Check out the abstracts here!

(I’m especially interested in Christina Vasilakopoulou’s talk on Frobenius and Hopf monoids in enriched categories, but she’s my postdoc so I’m biased.)

## October 02, 2018

### John Baez - Azimuth

Applied Category Theory 2019

animation by Marius Buliga

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019.

More details will come later, but here’s the basic idea. If you’re a grad student interested in this subject, you should apply for the ‘school’. Not yet—we’ll let you know when.

Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference. This means that we will accept both new papers which then will be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance is something that means something, just like in CS conferences. This is particularly important for young people as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year. As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,

John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

## October 01, 2018

### Clifford V. Johnson - Asymptotia

Diverse Futures

I was asked by editors of the magazine Physics World's 30th anniversary edition to do a drawing that somehow captures changes in physics over the last 30 years, and looks forward to 30 years from now. This was an interesting challenge. There was not anything like the freedom to use space that I had in other works I've done, like my graphic book about science "The Dialogues", or my glimpse of the near future in my SF story "Resolution" in the Twelve Tomorrows anthology. I had over 230 pages for the former, and 20 pages for the latter. Here, I had one page. Well, actually a little over 2/3 of a page (once you take into account the introductory text, etc).

So I thought about it a lot. The editors wanted to show an active working environment, and so I thought about the interiors of labs for some time, looked up lots of physics breakthroughs over the years, and reflected on what might come. I eventually realized that the most important single change in the science that can be visually depicted (and arguably the single most important change of any kind) is the change that's happened to the scientists. Most importantly, we've become more diverse in various ways (not uniformly across all fields though), much more collaborative, and the means by which we communicate in order to do science have expanded greatly. All of this has benefited the science greatly, and I think that if you were to get a time machine and visit a lab 30 years ago, or 30 years from now, it will be the changes in the people that will most strike you, if you're paying attention. So I decided to focus on the break/discussion area of the lab, and imagined that someone stood in the same spot each year and took a snapshot. What we're seeing is those photos tacked to a noticeboard somewhere, and that's our time machine. Have a look, and keep an eye out for various details I put in to reflect the different periods. Enjoy! (Direct link here, and below I've embedded the image itself that's from the magazine. I recommend reading the whole issue, as it is a great survey of the last 30 years.)

The post Diverse Futures appeared first on Asymptotia.

## September 29, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

History of Physics at the IoP

This week saw a most enjoyable conference on the history of physics at the Institute of Physics in London. The IoP has had an active subgroup in the history of physics for many years, complete with its own newsletter, but this was the group’s first official workshop for a long while. It proved to be a most enjoyable and informative occasion; I hope it is the first of many to come.

The Institute of Physics at Portland Place in London (made famous by writer Ian McEwan in the novel ‘Solar’, as the scene of a dramatic clash between a brilliant physicist of questionable integrity and a Professor of Science Studies)

There were plenty of talks on what might be called ‘classical history’, such as Maxwell, Kelvin and the Inverse Square law of Electrostatics (by Isobel Falconer of the University of St. Andrews) and Newton’s First Law – a History (by Paul Ranford of University College London), while the more socially-minded historian might have enjoyed talks such as Psychical and Optical Research; Between Lord Rayleigh’s Naturalism and Dualism (by Gregory Bridgman of the University of Cambridge) and The Paradigm Shift of Physics -Religion-Unbelief Relationship from the Renaissance to the 21st Century (by Elisabetta Canetta of St Mary’s University). Of particular interest to me were a number of excellent talks drawn from the history of 20th century physics, such as A Partial History of Cosmic Ray Research in the UK (by the leading cosmic ray physicist Alan Watson), The Origins and Development of Free-Electron Lasers in the UK (by Elaine Seddon of Daresbury Laboratory),  When Condensed Matter became King (by Joseph Martin of the University of Cambridge), and Symmetries: On Physical and Aesthetic Argument in the Development of Relativity (by Richard Staley of the University of Cambridge). The official conference programme can be viewed here.

My own talk, Interrogating the Legend of Einstein’s “Biggest Blunder”, was a brief synopsis of our recent paper on this topic, soon to appear in the journal Physics in Perspective. Essentially our finding is that, despite recent doubts about the story, the evidence suggests that Einstein certainly did come to view his introduction of the cosmological constant term to the field equations as a serious blunder and almost certainly did declare the term his “biggest blunder” on at least one occasion. Given his awareness of contemporaneous problems such as the age of the universe predicted by cosmologies without the term, this finding has some relevance to those of today’s cosmologists who seek to describe the recently-discovered acceleration in cosmic expansion without a cosmological constant. The slides for the talk can be found here.

I must admit I missed a trick at question time. Asked about other examples of ‘fudge factors’ that were introduced and later regretted, I forgot the obvious one. In 1900, Max Planck suggested that energy transfer between oscillators somehow occurs in small packets or ‘quanta’ of energy in order to successfully predict the spectrum of radiation from a hot body. However, he saw this as a mathematical device and was not at all supportive of the more general postulate of the ‘light quantum’ when it was proposed by a young Einstein in 1905. Indeed, Planck rejected the light quantum for many years.

All in all, a superb conference. It was also a pleasure to visit London once again. As always, I booked a cheap ’n’ cheerful hotel in the city centre, within walking distance of the conference. On my way to the meeting, I walked past Madame Tussauds and the Royal Academy of Music, and had breakfast at the tennis courts in Regent’s Park. What a city!

Walking past the Royal Academy on my way to the conference

Views of London over a quick dinner after the conference

## September 27, 2018

### Axel Maas - Looking Inside the Standard Model

Unexpected connections
The history of physics is full of things developed for one purpose that ended up being useful for an entirely different one. Quite often they failed their original purpose miserably, but became paramount for the new one. More recent examples include the first attempts to describe the weak interactions, which ended up describing the strong one. Likewise, string theory was originally invented for the strong interactions, and failed for that purpose. Now, well, it is a popular-science star, and a serious candidate for quantum gravity.

But failing at the first purpose is not required for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter of which we see a lot of indirect evidence in the universe. But we are still mystified as to what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz has started to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation: that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if so, how? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed that there is this connection. I will actually present the idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed, with an improved treatment and a shift in paradigm, that this is not always true: at least some of these theories have no massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. That is the reason why such theories had been excluded. But with our new results, they become feasible. Even more so, we have a lot of indirect evidence that dark matter is not just a single massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works. The scenario I will describe (you can download my talk already now, if you want to look for yourself, though it is somewhat technical) features two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this, to make sure that everything is consistent with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test the model not only by astronomical observations, but also at CERN. That is the basic idea. Now we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense. Or whether the model will have to wait for another opportunity.

## September 25, 2018

### Sean Carroll - Preposterous Universe

Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He’s also proposed a proof of the Riemann conjecture; I have zero insight to give there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc.; and inversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these.

As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should make us skeptical that he’s on the right track, but it’s conceivable.

More importantly, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but for any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most importantly, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e2/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θW relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sinθW. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
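The chain from (g, θW) through e to α can be made concrete with a tiny numeric sketch. This is my own illustration, not from the post; the input values are rough, PDG-style couplings at the Z mass, used here purely as assumptions:

```python
import math

# Rough electroweak inputs at the Z mass (assumed values, not exact):
sin2_thetaW = 0.2312   # weak mixing angle, sin^2(theta_W)
g = 0.652              # weak isospin coupling

# e = g sin(theta_W), then alpha = e^2 / (4 pi) in natural units.
e = g * math.sqrt(sin2_thetaW)
alpha = e**2 / (4 * math.pi)
print(f"e ~ {e:.3f}, alpha ~ 1/{1/alpha:.0f}")
```

With these high-energy inputs the result comes out near 1/128 rather than the familiar low-energy 1/137, which is itself a small illustration of the running discussed above.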

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula; it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

## September 20, 2018

### John Baez - Azimuth

Patterns That Eventually Fail

Sometimes patterns can lead you astray. For example, it’s known that

$\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }$

is a good approximation to $\pi(x),$ the number of primes less than or equal to $x.$ Numerical evidence suggests that $\mathrm{li}(x)$ is always greater than $\pi(x).$ For example,

$\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263$

and

$\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278$

But in 1914, Littlewood heroically showed that in fact, $\mathrm{li}(x) - \pi(x)$ changes sign infinitely many times!

This raised the question: when does $\pi(x)$ first exceed $\mathrm{li}(x)$? In 1933, Littlewood’s student Skewes showed, assuming the Riemann hypothesis, that it must do so for some $x$ less than or equal to

$\displaystyle{ 10^{10^{10^{34}}} }$

Later, in 1955, Skewes showed without the Riemann hypothesis that $\pi(x)$ must exceed $\mathrm{li}(x)$ for some $x$ smaller than

$\displaystyle{ 10^{10^{10^{964}}} }$

By now this bound has been improved enormously. We now know the two functions cross somewhere near $1.397 \times 10^{316},$ but we don’t know if this is the first crossing!
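At scales we can actually reach, the inequality $\mathrm{li}(x) > \pi(x)$ is easy to verify directly. Here is a minimal sketch of my own, using a plain sieve for $\pi(x)$ and the series $\mathrm{li}(x) = \mathrm{Ei}(\ln x) = \gamma + \ln \ln x + \sum_{n \ge 1} (\ln x)^n/(n \cdot n!)$:

```python
import math

def primepi(x):
    """pi(x): count primes <= x with a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def li(x, terms=200):
    """li(x) = Ei(ln x) = gamma + ln(ln x) + sum_{n>=1} (ln x)^n / (n * n!)."""
    z = math.log(x)
    s = 0.5772156649015329 + math.log(z)   # Euler-Mascheroni constant + ln z
    term = 1.0
    for n in range(1, terms):
        term *= z / n                      # term is now z^n / n!
        s += term / n
    return s

x = 10**6
print(primepi(x), round(li(x), 1))
```

At $x = 10^6$ this gives $\pi(x) = 78498$ against $\mathrm{li}(x) \approx 78627.5$, so $\mathrm{li}$ is still comfortably ahead, as the numerical evidence above suggests.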

All this math is quite deep. Here is something less deep, but still fun.

You can show that

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }$

and so on.

It’s a nice pattern. But this pattern doesn’t go on forever! It lasts a very, very long time… but not forever.

More precisely, the identity

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

holds when

$n < 9.8 \cdot 10^{42}$

but not for all $n.$ At some point it stops working and never works again. In fact, it definitely fails for all

$n > 7.4 \cdot 10^{43}$

### The explanation

The integrals here are a variant of the Borwein integrals:

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }$

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }$

where the pattern continues until

$\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }$

but then fails:

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }$

I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms:

Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1.

Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on.

There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point.

The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched.
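The bookkeeping in that argument is exact, so it can be checked with rational arithmetic. A small sketch of my own:

```python
from fractions import Fraction

# Track how the plateau of the pulse (initial half-width 1) is nibbled
# away by moving averages with half-widths 1/3, 1/5, 1/7, ...
plateau = Fraction(1)
for denom in range(3, 17, 2):
    plateau -= Fraction(1, denom)
    verdict = "integral still pi/2" if plateau >= 0 else "pattern broken"
    print(f"after window 1/{denom:2d}: remaining half-width "
          f"{float(plateau):+.4f}  ({verdict})")
```

The verdict flips exactly when the 1/15 window is applied, matching the Borwein computation above.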

In the animation below, the plateau where F(x)=1/2 is marked in red.

The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value.

In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged.

The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2.

In the animation, the yellow arrows mark the extent of the symmetrical slopes.

OK, none of this is difficult to understand, but why should we care?

Because this is how Hanspeter Schmid explained the infamous Borwein integrals:

∫sin(t)/t dt = π/2
∫sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But then the pattern is broken:

∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

Here these integrals are from t=0 to t=∞. And Schmid came up with an even more persistent pattern of his own:

∫2 cos(t) sin(t)/t dt = π/2
∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But:

∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

The first set of integrals, due to Borwein, corresponds to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function:

sinc(w t) = sin(w t)/(w t)

is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average.

Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2.
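The same exact bookkeeping shows why Schmid’s pattern survives so much longer: the symmetric-slope feature has half-width 2, so it persists until the half-widths of the averaging windows sum past 2. A quick check of my own:

```python
from fractions import Fraction

# Sum the window half-widths 1/3 + 1/5 + ... + 1/111: the feature
# F(-1) = F(1) = 1/4 survives while this sum stays below 2.
s = sum(Fraction(1, d) for d in range(3, 112, 2))
print(float(s))                      # just under 2 after the 1/111 window
assert s < 2 < s + Fraction(1, 113)  # the 1/113 window breaks the pattern
```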

• Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17.

I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers

$1, a_1, a_2, a_3 \dots$

The integral

$\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(a_1 x)}{a_1 x} \, \frac{\sin(a_2 x)}{a_2 x} \cdots \frac{\sin(a_n x)}{a_n x} \, dx }$

will then equal $\pi/2$ as long as $a_1 + \cdots + a_n \le 1,$ but not when it exceeds 1. You can see a full explanation on Wikipedia:

• Wikipedia, Borwein integral: general formula.

As an example, I chose the integral

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt }$

which equals $\pi/2$ if and only if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1 }$

Thus, the identity holds if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1 }$

However,

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }$

so the identity holds if

$\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }$

or

$\ln n \le 99$

or

$n \le e^{99} \approx 9.8 \cdot 10^{42}$

On the other hand, the identity fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1 }$

so it fails if

$\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1 }$

However,

$\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }$

so the identity fails if

$\displaystyle{ \frac{1}{101} \ln n > 1 }$

or

$\displaystyle{ \ln n > 101}$

or

$\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }$
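As a quick numerical cross-check of my own (not part of the original argument), one can evaluate the two exponential bounds and, for moderate $n$, confirm the bracketing of the partial sums used in the derivation:

```python
import math

# The two closed-form bounds derived above.
print(f"e^99  ~ {math.exp(99):.2e}")
print(f"e^101 ~ {math.exp(101):.2e}")

# For moderate n, check the bracketing used in the derivation:
# sum 1/(101 k) <= sum 1/(100 k + 1) <= sum 1/(100 k).
n = 10_000
s_mid = sum(1 / (100 * k + 1) for k in range(1, n + 1))
s_lo = sum(1 / (101 * k) for k in range(1, n + 1))
s_hi = sum(1 / (100 * k) for k in range(1, n + 1))
assert s_lo <= s_mid <= s_hi
print(f"n = {n}: {s_lo:.5f} <= {s_mid:.5f} <= {s_hi:.5f}")
```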

With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of $n$ at which

$\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }$

first fails.

## August 13, 2018

### Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work; those of us who receive a financial portion of the award will also be encouraged to contribute to it (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

### Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation, in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This sounds actually easier than it is. There are three issues to be taken care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always an outcome: we set something up initially, and then look at it some time later. The theory should give a description of how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. On top of this comes the additional modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we do not only need to find a suitable theory. We also need to find a suitable observation to show the disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis. And sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we have chosen were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration, because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory), which I have not yet specified. Just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers. And it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this should require us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced out well in our research. E.g. in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed as well as possible with the amount of computing power we had. This we reported in another paper. It gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we already came up with some new first ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

## July 26, 2018

### Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

And there are more exciting episodes on the way. Enjoy, and spread the word!

## July 20, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process; it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or of increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of merging different institutes of technology. Those who raise questions about this now tend to get dismissed as resistors of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

## July 19, 2018

### Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

### Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

All together, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
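As a toy illustration of why the spectra per se aren’t the whole story (this is the textbook full-sky, noise-free likelihood for a single multipole, not Planck’s actual likelihood code):

```python
import math

def neg2_log_like(C_th, C_hat, ell):
    """-2 ln L (up to a constant) of a theory spectrum C_th, given an
    observed spectrum C_hat at multipole ell, for a full-sky, noise-free
    Gaussian CMB map -- the standard non-Gaussian (inverse-gamma-shaped)
    likelihood."""
    return (2 * ell + 1) * (C_hat / C_th + math.log(C_th))

# The likelihood peaks at C_th = C_hat and narrows as ell grows, which is
# why a Gaussian, spectrum-based approximation is adequate at small scales
# (ell > 30) but a fuller treatment is needed on the largest scales.
C_hat, ell = 1000.0, 30
grid = [C_hat * (0.5 + 0.01 * i) for i in range(101)]
best = min(grid, key=lambda C: neg2_log_like(C, C_hat, ell))
```

The grid search confirms the maximum-likelihood point sits at the observed spectrum itself, but the skewed shape around that peak is what the large-scale likelihood has to capture.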

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, for each megaparsec (Mpc) of distance); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
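As a back-of-the-envelope check (my own arithmetic, treating the two quoted errors as independent Gaussians, which is a simplification), the two values differ by several standard deviations:

```python
import math

# Values quoted above (km/s/Mpc)
h0_local, err_local = 73.52, 1.62    # local distance-ladder measurement
h0_planck, err_planck = 67.27, 0.60  # Planck, within the LambdaCDM model

# Naive number-of-sigma of the discrepancy, errors added in quadrature
tension = abs(h0_local - h0_planck) / math.hypot(err_local, err_planck)
print(f"{tension:.1f} sigma")  # about 3.6 sigma
```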

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you’re looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

### Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

### Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

## July 16, 2018

### Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist.

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions delivered by the CERN Large Hadron Collider (LHC).

## July 12, 2018

### Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very light-weight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival to Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
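To get a feel for why this counts as a confirmation, here is a naive Poisson estimate (my own rough calculation, ignoring the trials factors over sky directions and time windows that the real IceCube analysis must account for):

```python
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for a Poisson background with expected count mu,
    computed as 1 minus the CDF up to n_obs - 1."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

# Chance probability of seeing ~20 events where only 6 or 7 were expected
p_chance = poisson_tail(20, 6.5)
```

The resulting probability is of order 10^-5, i.e. a several-sigma fluctuation even before any statistical subtleties are folded in.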

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

## July 08, 2018

### Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words on it. The big CERN collaborations presented their latest results. I think the most relevant of these is the evidence ($3\sigma$) that the Standard Model is at odds with the measurement of spin correlations in top-antitop quark pairs. More is given in the ATLAS press release. As expected, increasing precision proves rewarding.

About the Higgs particle, after the important announcement of the existence of the ttH process, both ATLAS and CMS are pursuing further improvements in precision. For the signal strength they give the following results. For ATLAS (see here)

$\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})$

and CMS (see here)

$\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).$

The news is that the errors have shrunk and the two results agree. Both show a small excess above the Standard Model prediction, 13% and 17% respectively, but the overall result is consistent with the Standard Model.
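A rough way to see those numbers (my own quadrature combination of the symmetrized uncertainties, which only approximates the experiments’ full likelihoods):

```python
import math

def combine(*errs):
    """Uncertainties added in quadrature (asymmetric errors symmetrized)."""
    return math.sqrt(sum(e * e for e in errs))

# ATLAS: mu = 1.13 +- 0.05 (stat) +- 0.05 (exp) ~0.05 (sig th) +- 0.03 (bkg th)
atlas_pull = (1.13 - 1.0) / combine(0.05, 0.05, 0.05, 0.03)
# CMS: mu = 1.17 +- 0.06 (stat) ~0.06 (sig th) +- 0.06 (other syst)
cms_pull = (1.17 - 1.0) / combine(0.06, 0.06, 0.06)
print(f"ATLAS: {atlas_pull:.1f} sigma, CMS: {cms_pull:.1f} sigma")
# roughly 1.4 and 1.6 sigma
```

Both excesses sit well below $2\sigma$, hence the consistency with the Standard Model.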

When the signal strength is unpacked into the contributions from different processes, CMS claims some tension in the WW decay channel that should be kept under scrutiny in the future (see here). They presented results from $35.9{\rm fb}^{-1}$ of data and so, for the moment, there is no significant improvement with respect to this year’s Moriond conference. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS: they observe some tensions, but these are all below $2\sigma$ (see here). For the WW decay, ATLAS does not see anything above $1\sigma$ (see here).

So, although there is something to keep an eye on as the dataset grows, reaching $100 {\rm fb}^{-1}$ this year, the Standard Model is in good health in the Higgs sector, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the tt pair is absolutely promising and we should hope it will be confirmed as a discovery.

## July 04, 2018

### Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

## June 25, 2018

### Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

## June 24, 2018

### Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as:

- Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University
- The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers, such as the well-known Philosophers in 90 Minutes series
- Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School
- The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer

For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

Images from the garden party in the grounds of Lismore Castle