Particle Physics Planet


November 21, 2018

Christian P. Robert - xi'an's og

Le Monde puzzle [#1075]

A Le Monde mathematical puzzle from after the competition:

A sequence of five integers can only be modified by subtracting an integer N from two neighbours of an entry and adding 2N to the entry.  Given the configuration below, what is the minimal number of steps to reach non-negative entries everywhere? Is this feasible for any configuration?

As I quickly found a solution by hand in four steps (but missed the mathematical principle behind it!), I was not very enthusiastic about trying a simulated annealing version, selecting the place to change with probability inversely proportional to its value, but I eventually tried it and obtained the same solution:

    [,1] [,2] [,3] [,4] [,5]
      -3    1    1    1    1
       1   -1    1    1   -1
       0    1    0    1   -1
      -1    1    0    0    1
       1    0    0    0    0

But (update!) Jean-Louis Fouley came up with one step less!

    [,1] [,2] [,3] [,4] [,5]
      -3    1    1    1    1
       3   -2    1    1   -2
       2    0    0    1   -2
       1    0    0    0    0

The second part of the question is more interesting but, again lacking a clear mathematical lead, I could only try a large number of configurations and check whether each admitted a “solution”. So far none has failed.
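For what it's worth, here is a minimal R sketch of this kind of stochastic search (my own rough reconstruction, not the code actually used; the function name solve_config, the exp(-x) weighting and the range of test configurations are arbitrary choices of mine). Each move adds 2N to an entry and subtracts N from its two neighbours on the ring, so the total sum is invariant; a non-negative sum is therefore necessary for a solution, and the random test configurations below are drawn accordingly.

    solve_config <- function(x, max_iter = 1e4) {
      # x: five integers arranged on a ring
      k <- length(x)
      for (t in 1:max_iter) {
        if (all(x >= 0)) return(list(state = x, moves = t - 1))  # moves made, not necessarily minimal
        i <- sample(k, 1, prob = exp(-x))        # favour the most negative entries
        N <- max(1, ceiling(-x[i] / 2))          # enough to make entry i non-negative
        nb <- c(if (i == 1) k else i - 1, if (i == k) 1 else i + 1)
        x[i] <- x[i] + 2 * N
        x[nb] <- x[nb] - N
      }
      NULL                                       # no non-negative state reached (heuristic, not a proof)
    }
    solve_config(c(-3, 1, 1, 1, 1))
    # crude check of the second question on random starts with non-negative sums
    all(replicate(100, {
      y <- sample(-5:5, 5, replace = TRUE)
      while (sum(y) < 0) y <- sample(-5:5, 5, replace = TRUE)
      !is.null(solve_config(y))
    }))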

by xi'an at November 21, 2018 11:18 PM

Emily Lakdawalla - The Planetary Society Blog

This Thanksgiving, avoid the politics and talk space instead
If you're expecting to gather with extended family on Thanksgiving, avoid the politics. Here are some conversation starters to use at the dinner table that everyone can engage in.

November 21, 2018 10:31 PM

Peter Coles - In the Dark

50 Years of the Cosmic Web

I’ve just given a lecture on cosmology during which I showed a version of this amazing image:

The picture was created in 1977 by Seldner et al. based on the galaxy counts prepared by Charles Donald Shane and Carl Alvar Wirtanen and published in 1967 (Publ. Lick Observatory 22, Part 1). There are no stars in the picture: it shows the distribution of galaxies in the Northern Galactic sky. The very dense knot of galaxies seen in the centre of the image is the Coma Cluster, which lies very close to the Galactic North pole. The overall impression is of a frothy pattern, which we now know as the Cosmic Web. I don’t think it is an unreasonable claim that the Lick galaxy catalogue provided the first convincing evidence of the morphology of the large-scale structure of the Universe.

The original Shane–Wirtanen Lick galaxy catalogue lists counts of galaxies in 1 by 1 degree blocks, but the actual counts were made in 10 by 10 arcmin cells. The later visualization is based on a reduction of the raw counts to a catalogue with the original 10 by 10 arcmin resolution. The map above, based on the corrected counts, shows the angular distribution of over 800,000 galaxies brighter than a B magnitude of approximately 19.

The distribution of galaxies is shown only in projection on the sky, and we are now able to probe the distribution in the radial direction with large-scale galaxy redshift surveys in order to obtain three-dimensional maps, but counting so many galaxy images by eye on photographic plates was a Herculean task that took many years to complete. Without such heroic endeavours in the past, our field would not have progressed anything like as quickly as it has.

I’m sorry I missed the 50th anniversary of the publication of the Lick catalogue, and Messrs Shane and Wirtanen both passed away some years ago, but at last I can doff my cap in their direction and acknowledge their immense contribution to cosmological research!

 

by telescoper at November 21, 2018 06:49 PM

Lubos Motl - string vacua and pheno

Swampland refinement of higher-spin no-go theorems
Dieter Lüst and two co-authors from Monkberg (Munich) managed to post the first hep-th paper today at 19:00:02 (a two-second lag is longer than usual, the timing contest wasn't too competitive):
A Spin-2 Conjecture on the Swampland
They articulate an interesting conjecture about the spin-two fields in quantum gravity – a conjecture of the Swampland type that is rather close to the Weak Gravity Conjecture and, in fact, may be derived from the Weak Gravity Conjecture under a mild additional assumption.



In particular, they claim that whenever there are particles whose spin is two or higher, they have to be massive and there has to be a whole tower of massive states. More precisely, if there is a self-interacting spin-two particle of mass \(m\) in quantum gravity, the strength of the interaction may be parameterized by a new mass scale \(M_W\) and the effective field theory has to break down at the mass scale \(\Lambda\) where\[

\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}

\] You see that the Planck scale enters. The breakdown scale \(\Lambda\) of the effective theory is basically the lowest mass of the next-to-lightest state in the predicted massive tower.



So if the self-interaction scale of the massive field is \(M_W\approx M_{\rm Planck}\), then we get \(\Lambda\approx m\) and all the lighter states in the tower are parametrically "comparably light" to the lightest spin-two boson. However, you can try to make the self-interaction stronger, by making \(M_W\) smaller than the Planck scale, and then the tower may become more massive than the lightest representative.

They may derive the conjecture from the Weak Gravity Conjecture if they rewrite the self-interaction of the spin-two field through an interaction with a "gauge field" which is treated analogously to the electromagnetic gauge field in the Weak Gravity Conjecture – although it is the Stückelberg gauge field. It's not quite obvious to me that the Weak Gravity Conjecture must apply to gauge fields that are "unnecessary" or "auxiliary" in this sense but maybe there's a general rule saying that general principles such as the Weak Gravity Conjecture have to apply even in such "optional" cases.
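Schematically (and this is only my reading of the argument, so take the details with a grain of salt): the Stückelberg vector of the massive spin-two field comes with an effective gauge coupling \(g\sim m/M_W\), and the magnetic form of the Weak Gravity Conjecture then caps the cutoff of the effective theory,\[

\Lambda \lesssim g\, M_{\rm Planck} \sim \frac{m}{M_W}\, M_{\rm Planck},

\] which is precisely the relation quoted above.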



I think that these conjectures – and the evidence and partial proofs backing them – represent clear progress of our knowledge beyond effective field theory. You know, in quantum field theory, we have theorems such as the Weinberg-Witten theorem. This particular one says that massless higher-spin particles can't be composite, and similar things. That's only true in full-blown quantum field theories. But quantum gravity isn't strictly a quantum field theory (in the bulk). When you add gravity, things get generalized in a certain way. And things that were possible or impossible without gravity may become impossible or possible with quantum gravity.

Some "impossible scenarios" from QFTs may be suddenly allowed – but one pays with the need to allow an infinite tower of states and similar things. Note that if you look at\[

\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}

\] and send \(M_{\rm Planck}\to \infty\) i.e. if you turn the gravity off, the Bavarian conjecture says that \(\Lambda\to\infty\), too. So it becomes vacuous because it says that the effective theory "must break" at energy scales higher than infinity. Needless to say, the same positive power of the Planck mass appears in the original Weak Gravity Conjecture, too. That conjecture also becomes vacuous if you turn the gravity off.

When quantum gravity is turned on, there are new interactions, new states (surely the black hole microstates), and new mandatory interactions of these states. These new states and duties guarantee that theories where you would only add some fields or particles "insensitively" would be inconsistent. People are increasingly understanding what the "new stuff" is that simply has to happen in quantum gravity. And this new mandatory stuff may be understood either by some general consistency-based considerations assuming quantum gravity; or by looking at much more specific situations in the stringy vacua. Like in most of the good Swampland papers, Lüst et al. try to do both.

So far these two lines of reasoning are consistent with one another. They are increasingly compatible and increasingly equivalent – after all, string theory seems to be the only consistent theory of quantum gravity although we don't have any "totally canonical and complete" proof of this uniqueness (yet). The Swampland conjectures may be interpreted as another major direction of research that makes this point – that string theory is the only game in town – increasingly certain.

by Luboš Motl (noreply@blogger.com) at November 21, 2018 01:20 PM

Peter Coles - In the Dark

Sonnet No. 87

Farewell! thou art too dear for my possessing,
And like enough thou knowst thy estimate.
The Charter of thy worth gives thee releasing;
My bonds in thee are all determinate.
For how do I hold thee but by thy granting,
And for that riches where is my deserving?
The cause of this fair gift in me is wanting,
And so my patent back again is swerving.
Thy self thou gav’st, thy own worth then not knowing,
Or me, to whom thou gav’st it, else mistaking,
So thy great gift, upon misprision growing,
Comes home again, on better judgement making.
Thus have I had thee as a dream doth flatter:
In sleep a king, but waking no such matter.

 

by telescoper at November 21, 2018 12:17 PM

November 20, 2018

Peter Coles - In the Dark

Open Journal Promotion?

Back in Maynooth after my weekend in Cardiff, I was up early this morning to prepare today’s teaching and related matters and I’m now pretty exhausted so I thought I’d just do a quick update about my pet project The Open Journal of Astrophysics.

I’ve been regularly boring all my readers with a stream of stuff about the Open Journal of Astrophysics, but if it’s all new to you, try reading the short post about the background to the Open Journal project that you can find here.

Since the re-launch of the journal last month we’ve had a reasonable number of papers submitted. I’m glad there wasn’t a huge influx, actually, because the Editorial Board is as yet unfamiliar with the system and requires a manageable training set. The papers we have received are working their way through the peer-review system and we’ll see what transpires.

Obviously we’re hoping to increase the number of submissions with time (in a manageable way). As it happens, I have some (modest) funds available to promote the OJA as I think quite a large number of members of the astrophysics community haven’t heard of it. This also makes it a little difficult to enlist referees.

So here I have a small request. Do any of you have any ideas for promoting The Open Journal of Astrophysics? We could advertise directly in journals of course, but I’m wondering if anyone out there in the interwebs has any more imaginative ideas? If you do, please let me know through the comments box below.

by telescoper at November 20, 2018 06:14 PM

Emily Lakdawalla - The Planetary Society Blog

We're going to Jezero!
NASA announced this morning the selection of Jezero crater for the landing site of the Mars 2020 mission. Jezero is a 45-kilometer-wide crater that once held a lake, and now holds a spectacular ancient river delta.

November 20, 2018 04:33 PM

Christian P. Robert - xi'an's og

irreversible Markov chains

Werner Krauth (ENS, Paris) was in Dauphine today to present his papers on irreversible Markov chains at the probability seminar. He went back to the 1953 Metropolis et al. paper and mentioned a 1962 paper by Alder and Wainwright, which I had never heard of, demonstrating via simulation that a phase transition can occur. The whole talk was about simulating the stationary distribution of a large number of hard spheres on a one-dimensional ring, which made it hard for me to understand. (Maybe the triathlon before did not help.) And even to realise a part was about PDMPs… His slides included an interesting entry on factorised MCMC, which reminded me of delayed acceptance, thinning, and prefetching. Plus a notion of lifted Metropolis that could have applications in a general setting, if it differs from delayed rejection.
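To see what a lifted Metropolis chain can look like in the simplest setting, here is a toy R illustration of my own (the function lifted_mh, the discrete ring and the Gaussian-shaped target are placeholders, nothing to do with Krauth's hard-sphere event-chain algorithm): the state is augmented with a direction that is kept after acceptances and flipped after rejections, which preserves the target while removing the diffusive back-and-forth of reversible Metropolis.

    # lifted Metropolis on a ring of K states with unnormalised weights piw
    lifted_mh <- function(n_iter, piw) {
      K <- length(piw)
      i <- 1; s <- 1                     # position and lifting direction (+1 or -1)
      out <- integer(n_iter)
      for (t in 1:n_iter) {
        j <- ((i - 1 + s) %% K) + 1      # deterministic proposal in direction s
        if (runif(1) < piw[j] / piw[i]) i <- j else s <- -s   # accept, or flip the direction
        out[t] <- i
      }
      out
    }
    piw <- dnorm(1:20, mean = 10, sd = 3)
    chain <- lifted_mh(1e5, piw)
    # the empirical frequencies of chain should match piw / sum(piw)

Whether this is exactly the notion Krauth had in mind, I cannot say, but it is the simplest instance of lifting I know of and, for what it's worth, it does differ from delayed rejection.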

by xi'an at November 20, 2018 02:14 PM

November 19, 2018

Peter Coles - In the Dark

Autumn Nights

I stumbled across this abstract painting (acrylic on canvas) by the artist Victoria Kloch and thought I’d share it this autumn night. Do check out her website. There’s lots more interesting stuff on it!

Victoria Kloch | fine art

‘Autumn Night’, 5″ x 7″ acrylic abstract on canvas by Victoria Kloch


by telescoper at November 19, 2018 07:57 PM

Emily Lakdawalla - The Planetary Society Blog

NASA's Orion spacecraft makes progress, but are the agency's lunar plans on track?
Orion's service module arrived in Florida, but some space industry experts question whether NASA's human spaceflight plans are realistic.

November 19, 2018 12:00 PM

November 18, 2018

The n-Category Cafe

Modal Types Revisited

We’ve discussed the prospects for adding modalities to type theory for many a year, e.g., here at the Café back at Modal Types, and frequently at the nLab. So now I’ve written up some thoughts on what philosophy might make of modal types in this preprint. My debt to the people who helped work out these ideas will be acknowledged when I publish the book.

This is to be the fourth chapter of a book which provides reasons for philosophy to embrace modal homotopy type theory. The book takes in order the components: types, dependency, homotopy, and finally modality.

The chapter ends all too briefly with mention of Mike Shulman et al.’s project, which he described in his post – What Is an n-Theory?. I’m convinced this is the way to go.

PS. I already know of the typo on line 8 of page 4.

by david (d.corfield@kent.ac.uk) at November 18, 2018 09:34 AM

November 16, 2018

Clifford V. Johnson - Asymptotia

Stan Lee’s Contributions to Science!!

I'm late to the party. Yes, I use the word party, because the outpouring of commentary noting the passing of Stan Lee has been, rightly, marked with a sense of celebration of his contributions to our culture. Celebration of a life full of activity. In the spirit of a few of the "what were you doing when you heard..." stories I've heard, involving nice coincidences and ironies, I've got one of my own. I'm not exactly sure when I heard the announcement on Monday, but I noticed today that it was also on Monday that I got an email giving me some news* about the piece I wrote about the Black Panther earlier this year for the publication The Conversation. The piece is about the (then) pending big splash the movie about the character (co-created by Stan Lee in the 60s) was about to make in the larger culture, the reasons for that, and why it was also a tremendous opportunity for science. For science? Yes, because, as I said there:

Vast audiences will see black heroes of both genders using their scientific ability to solve problems and make their way in the world, at an unrivaled level.

and

Improving science education for all is a core endeavor in a nation’s competitiveness and overall health, but outcomes are limited if people aren’t inspired to take an interest in science in the first place. There simply are not enough images of black scientists – male or female – in our media and entertainment to help inspire. Many people from underrepresented groups end up genuinely believing that scientific investigation is not a career path open to them.

Moreover, many people still see the dedication and study needed to excel in science as “nerdy.” A cultural injection of Black Panther heroics could help continue to erode the crumbling tropes that science is only for white men or reserved for people with a special “science gene.”

And here we are many months later, and I was delighted to see that people did get a massive dose of science inspiration from T'Challa and his sister Shuri, and the whole of the Wakanda nation, not just in Black Panther, but also in the Avengers: Infinity War movie a short while after.

But my larger point here is that so much of this goes back to Stan Lee's work with collaborators in not just making "relatable" superheroes, as you've heard said so many times – showing their flawed human side so much more than the dominant superhero trope (represented by Superman, Wonder Woman, Batman, etc.) allowed for at the time – but making science and scientists be at the forefront of much of it. So many of the characters either were scientists (Banner (Hulk), Richards (Mr. Fantastic), T'Challa (Black Panther), Pym (Ant-Man), Stark (Iron Man), etc.) or used science actively to solve problems (e.g. Parker/Spider-Man).

This was hugely influential on young minds, I have no doubt. This is not a small number of [...] Click to continue reading this post

The post Stan Lee’s Contributions to Science!! appeared first on Asymptotia.

by Clifford at November 16, 2018 07:05 PM

Lubos Motl - string vacua and pheno

AdS/CFT as the swampland/bootstrap duality
Last June, I discussed machine learning approaches to the search for realistic vacua.

Computers may do a lot of work, and many assumptions that some tasks are "impossibly hard" may be shown to be incorrect with some help from computers that think and look for patterns. Today, a new paper was published on that issue, Deep learning in the heterotic orbifold landscape. Mütter, Parr, and Vaudrevange use "autoencoder neural networks" as their brain supplements.



The basic idea of the bootstrap program in physics.

But I want to mention another preprint,
Putting the Boot into the Swampland
The authors, Conlon (Oxford) and Quevedo (Trieste), have arguably belonged to the Stanford camp in the Stanford-vs-Swampland polemics. But they decided to study Cumrun Vafa's conjectures seriously and extended them in an interesting way.



Cumrun's "swampland" reasoning feels like a search for new, simple enough, universal principles of Nature that are obeyed in every theory of quantum gravity – or in every realization of string theory. These two "in"s are a priori inequivalent and they represent slightly different papers or parts of papers as we know them today. But Cumrun Vafa and others, including me, believe that ultimately, "consistent theory of quantum gravity" and "string/M-theory" describe the same entity – they're two ways to look at the same beast. Why? Because, most likely, string theory really is the only game in town.



Some of the inequalities and claims that discriminate the consistent quantum gravity vacua against the "swampland" sound almost like the uncertainty principle, like some rather simple inequalities or existence claims. In one of them, Cumrun claims that a tower of states must exist whenever the quantum gravity moduli space has some extreme regions.

Conlon and Quevedo assume that this quantum gravitational theory lives in the anti de Sitter space and study the limit \(R_{AdS}\to\infty\). The hypothesized tower on the bulk side gets translated to a tower of operators in the CFT, by the AdS/CFT correspondence. They argue that some higher-point interactions are fully determined on the AdS side and that the constraints they obey may be translated, via AdS/CFT, to known, older "bootstrap" constraints that have been known in CFT for a much longer time. Well, this is the more "conjectural" part of their paper – but it's the more interesting one and they have some evidence.

If that reasoning is correct, string theory is in some sense getting back to where it was 50 years ago. String theory partly arose from the "bootstrap program", the idea that mere general consistency conditions are enough to fully identify the S-matrix and similar things. That big assumption was basically ruled out – especially when "constructive quarks and gluons" were accepted as the correct description of the strong nuclear force. String theory has basically violated the "bootstrap wishful thinking" as well because it became analogously "constructive" as QCD and many other quantum field theories.

However, there has always been a difference. String theory generates low-energy effective field theories from different solutions of the same underlying theory. The string vacua may be mostly connected with each other on the moduli space or through some physical processes (topology changing transitions etc.). That's different from quantum field theories which are diverse and truly disconnected from each other. So string theory has always preserved the uniqueness and the potential to be fully derived from some general consistency condition(s). We don't really know what these conditions precisely are yet.

The bootstrap program was developed decades ago and became somewhat successful for conformal field theories – especially but not only the two-dimensional conformal field theories similar to those that live on the stringy world sheets. Cumrun's swampland conditions seem far more tied to gravity and the dynamical spacetime. But via the AdS/CFT, some of the swampland conditions may be mapped to the older bootstrap constraints. Conlon and Quevedo call the map "bootland", not that it matters. ;-)

The ultimate consistency-based definition of quantum gravity or "all of string/M-theory" could be some clever generalization of the conditions we need in CFTs – and the derived bootstrap conditions they obey. We need some generalization in the CFT approach, I guess. Because CFTs are local, we may always distinguish "several particles" from "one particle". That's related to our ability to "count the number of strings" in perturbative string theory i.e. to distinguish single-string and multi-string states, and to count loops in the loop diagrams (by the topology of the world sheet).

It seems clear to me that this reduction to the one-string "simplified theory" must be abandoned in the gravitational generalization of the CFT calculus. The full universal definition of string theory must work with one-object and multi-object states on the same footing from the very beginning. Even though it looks much more complicated, there could be some analogies of the state-operator correspondence, operator product expansions, and other things in the "master definition of string/M-theory". In the perturbative stringy limits, one should be able to derive the world sheet CFT axioms as a special example.

by Luboš Motl (noreply@blogger.com) at November 16, 2018 12:01 PM

November 15, 2018

Emily Lakdawalla - The Planetary Society Blog

When Space Science Becomes a Political Liability
John Culberson, an 8-term Texas Republican and staunch supporter of the search for life on Europa, lost his re-election bid last week. His support for Europa was attacked by opponents and could send a chilling political message about the consequences of supporting space science and exploration.

November 15, 2018 05:27 PM

Jon Butterworth - Life and Physics

The Standard Model – TEDEd Lesson
I may have mentioned before, the Standard Model is about 50 years old now. It embodies a huge amount of human endeavour and understanding, and I try to explain it in my book, A Map of the Invisible (or Atom … Continue reading

by Jon Butterworth at November 15, 2018 04:50 PM

The n-Category Cafe

Magnitude: A Bibliography

I’ve just done something I’ve been meaning to do for ages: compiled a bibliography of all the publications on magnitude that I know about. More people have written about it than I’d realized!

This isn’t an exercise in citation-gathering; I’ve only included a paper if magnitude is the central subject or a major theme.

I’ve included works on magnitude of ordinary, un-enriched, categories, in which context magnitude is usually called Euler characteristic. But I haven’t included works on the diversity measures that are closely related to magnitude.

Enjoy! And let me know in the comments if I’ve missed anything.

by leinster (Tom.Leinster@gmx.com) at November 15, 2018 12:51 AM

November 13, 2018

ZapperZ - Physics and Physicists

Muons And Special Relativity
For those of us who have studied physics or taken a course involving Special Relativity, this is nothing new. The fact that so many muons are detected at the Earth's surface has long been used as an example of the direct result of SR's time dilation and length contraction.

Still, it bears repeating and presenting to those who are not aware of it, and that is what this MinutePhysics video has done.
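For the record, the standard back-of-the-envelope numbers (mine, not necessarily those used in the video): a muon at rest lives about 2.2 microseconds, so without relativity a muon created roughly 15 km up and moving near the speed of light would typically travel only about c × 2.2 μs ≈ 660 m before decaying, and essentially none would survive to the ground. At v ≈ 0.995c the Lorentz factor is γ ≈ 10, so in the Earth frame the lifetime is dilated to about 22 μs and the typical decay length becomes roughly 6.6 km; equivalently, in the muon's frame the atmosphere is length-contracted by the same factor of 10. Either way, a far larger fraction of the muons reaches the surface, which is what we observe.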



Zz.

by ZapperZ (noreply@blogger.com) at November 13, 2018 09:51 PM

CERN Bulletin

Interfon

The cooperative of international civil servants. Discover all of our benefits and discounts from our suppliers on our website www.interfon.fr or at our office in building 504 (open every day from 12:30 p.m. to 3:30 p.m.).

November 13, 2018 03:11 PM

CERN Bulletin

Conference

The Staff Association is pleased to invite you to a conference:

“The Wall”

Monday 26th of November 2018

at 6 pm

Main Auditorium (500-1-001)

Presentation by Andrea Musso

Guest Speaker: Eric Irivuzumugabe

 

The conference will be followed by a photo exhibition and light refreshments.

For more information and registration: https://indico.cern.ch/event/759816/

November 13, 2018 02:11 PM

CERN Bulletin

GAC-EPA

The GAC organises sessions with individual interviews, held on the last Tuesday of each month.

The next session will take place on:

Tuesday 27 November, from 1:30 p.m. to 4:00 p.m.

Staff Association meeting room

The sessions of the Pensioners' Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/

Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

November 13, 2018 02:11 PM

CERN Bulletin

Micro Club

              News – November 2018

 

Opération NEMO

From Monday 19 November to Friday 7 December 2018, the Club is running its traditional end-of-year Opération NEMO. It offers very attractive prices on selected products from our most important suppliers: Apple, Lenovo, Brother, HP, Western Digital, LaCie, LMP, etc.

During these three weeks, these firms are offering us, for certain items, prices slightly lower than those usually applied at the CMC.

We cannot publish the price lists on our website, but you can obtain them directly from the club secretariat.

In principle, apart from a few special cases, all deliveries are guaranteed before the end of this year.

IMPORTANT: as a Club member, and on presentation of your Staff Association membership card, you can obtain a small additional discount!

 

Orders and end-of-year closure:

The Club will be closed from Wednesday 12 December 2018 at 8:00 p.m. until Tuesday 8 January 2019 at 6:00 p.m.

A duty service will be provided from Monday 17 to Wednesday 19 December to handle the last 2018 deliveries.

Apple, Dell & Lenovo orders placed by Tuesday 4 December 2018 will be delivered before the closure.

Orders for iPads, iPhones, HP products, Brother printers, external drives and toner placed by Tuesday 11 December 2018 will be delivered before the closure.

Repairs (Mac & PC) will be carried out until Thursday 6 December 2018.

The sections' duty hours end on Wednesday 12 December 2018 at 8:00 p.m.

 

Membership card

From Thursday 29 November, you can renew your 2019 MEMBER cards at the secretariat upon payment of the membership fee. As a reminder, our opening hours are Tuesday to Thursday, from 6:00 p.m. to 8:00 p.m.

Your Committee

November 13, 2018 02:11 PM

CERN Bulletin

"CERN: Science Bridging Cultures" by Marilena Streit-Bianchi

You may have noticed, in the CERN Bulletin of 19 April 2018, the announcement of the presentation of the book “CERN: Science Bridging Cultures” to the Ambassador of Mozambique to the United Nations. For those of you who were unable to attend this CERN Alumni presentation or have not yet had time to read this book, I present here for ECHO some aspects of this publication and explain how it came about.

Having worked at CERN for 41 years, I have witnessed many changes in the Organization. One thing has always permeated the spirit of the people working at CERN, at all levels: their interest in knowledge overcomes any barrier of origin, gender, language or religion. Now retired, I found that the time had come to highlight that CERN is not just a physics laboratory in quest of the unknown and where elementary particles are discovered and studied. By giving a glimpse of the laboratory’s various activities, I wanted to pay tribute to the mixture of diversity, capacities and humanity that CERN represents.

To this end, I have asked several CERN members to contribute in areas such as fundamental physics research, accelerators, experiments and physicists, information technologies, knowledge transfer and technological spin-offs, as well as the relationship between CERN and peace, CERN and art, and finally the role of science in society. Each contribution has been kept short, maximum four pages, and is easy to read.

I would like to point out that this book would not have been possible without the volunteer work of the contributors [1] who wrote about their own work or field of activity, but also of the many people who contributed to the translation into the different languages. In addition, artists of different nationalities [2] were invited to illustrate the work done at CERN.

I am very grateful to the Staff Association for allowing us to hold the exhibition "A Master of Drawing in Black and White - Justino António Cardoso" in July 2018, thus giving great visibility to the works of this Mozambican artist, giving him the opportunity to show his original drawings which illustrate, with an African touch, CERN's research activities and take a fresh and uncontaminated look [3] at CERN's activities.

You can download this book for free from Zenodo, the open and free digital archive of CERN and OpenAIRE. The book is now available in English, French, Italian and Portuguese; from next month it will also be available in Spanish and German. This book is available in several languages so that it can be widely distributed to teachers and students in different countries, so that they can get to know CERN and appreciate why it is good to be able to work there.

Don't forget, if you liked this book, to tell your friends and your children's teachers so that they too can read it and share it with them.

[1] By chapter order: Marilena Streit-Bianchi, Emmanuel Tsesmelis, John Ellis, Lucio Rossi, Ana Maria Henriques Correia and João Martins Correia, Frédéric Hemmer, Giovanni Anelli, João Varela, Arthur I. Miller and Rolf Heuer.

[2] Davide Angheleddu (Italy), Justino António Cardoso (Mozambique), Margarita Cimadevila (Spain), Angelo Falciano (Italy), Michael Hoch (Austria), Karen Panman (Netherlands), Islam Mahmoud Sweity (Palestine), Wolfgang Trettnak (Austria).

[3] Justino António Cardoso was outside Africa for the first time when he visited CERN for 5 days. He had never previously had any contact with high-energy physics or with physicists.

 

November 13, 2018 02:11 PM

November 12, 2018

Jon Butterworth - Life and Physics

James Stirling
Today I got the terrible news of the untimely death of Professor James Stirling. A distinguished particle physicist and until August the Provost of Imperial College London, he will be remembered with fondness and admiration by many. Even astronomers – … Continue reading

by Jon Butterworth at November 12, 2018 05:13 PM

The n-Category Cafe

A Well Ordering Is A Consistent Choice Function

Well orderings have slightly perplexed me for a long time, so every now and then I have a go at seeing if I can understand them better. The insight I’m about to explain doesn’t resolve my perplexity, it’s pretty trivial, and I’m sure it’s well known to lots of people. But it does provide a fresh perspective on well orderings, and no one ever taught me it, so I thought I’d jot it down here.

In short: the axiom of choice allows you to choose one element from each nonempty subset of any given set. A well ordering on a set is a way of making such a choice in a consistent way.

Write \(P'(X)\) for the set of nonempty subsets of a set \(X\). One formulation of the axiom of choice is that for any set \(X\), there is a function \(h: P'(X) \to X\) such that \(h(A) \in A\) for all \(A \in P'(X)\).

But if we think of \(h\) as a piece of algebraic structure on the set \(X\), it's natural to ask that \(h\) behaves in a consistent way. For example, given two nonempty subsets \(A, B \subseteq X\), how can we choose an element of \(A \cup B\)?

  • We could, quite simply, take \(h(A \cup B) \in A \cup B\).

  • Alternatively, we could first take \(h(A) \in A\) and \(h(B) \in B\), then use \(h\) to choose an element of \(\{h(A), h(B)\}\). The result of this two-step process is \(h(\{ h(A), h(B) \})\).

A weak form of the “consistency” I’m talking about is that these two methods give the same outcome:

\[ h(A \cup B) = h(\{h(A), h(B)\}) \]

for all \(A, B \in P'(X)\). The strong form is similar, but with arbitrary unions instead of just binary ones:

\[ h\Bigl( \bigcup \Omega \Bigr) = h\Bigl( \bigl\{ h(A) : A \in \Omega \bigr\} \Bigr) \]

for all \(\Omega \in P'P'(X)\).

Let’s say that a function \(h: P'(X) \to X\) satisfying the weak or strong consistency law is a weakly or strongly consistent choice function on \(X\).

The central point is this:

A consistent choice function on a set \(X\) is the same thing as a well ordering on \(X\).

That’s true for consistent choice functions in both the weak and the strong sense — they turn out to be equivalent.

The proof is a pleasant little exercise. Given a well ordering \(\leq\) on \(X\), define \(h: P'(X) \to X\) by taking \(h(A)\) to be the least element of \(A\). It’s easy to see that this is a consistent choice function. In the other direction, given a consistent choice function \(h\) on \(X\), define \(\leq\) by

\[ x \leq y \Leftrightarrow h(\{x, y\}) = x. \]

You can convince yourself that \(\leq\) is a well ordering and that \(h(A)\) is the least element of \(A\), for any nonempty \(A \subseteq X\). The final task, also easy, is to show that the two constructions (of a consistent choice function from a well ordering and vice versa) are mutually inverse. And that’s that.

(For anyone following in enough detail to wonder about the difference between weak and strong: you only need to assume that \(h\) is a weakly consistent choice function in order to prove that the resulting relation \(\leq\) is a well ordering, but if you start with a well ordering \(\leq\), it’s clear that the resulting function \(h\) is strongly consistent. So weak is equivalent to strong.)
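As a toy sanity check of the correspondence (my own, not from the post): in R, take h = min, the choice function induced by the usual well ordering of {1, …, 5}, and verify the weak consistency law over all pairs of nonempty subsets.

    X <- 1:5
    subsets <- unlist(lapply(seq_along(X), function(k)
      combn(X, k, simplify = FALSE)), recursive = FALSE)   # all nonempty subsets
    h <- min                                               # choice function induced by the usual order
    all(sapply(subsets, function(A) sapply(subsets, function(B)
      h(union(A, B)) == h(c(h(A), h(B))))))                # TRUE, as the correspondence predicts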

For me, the moral of the story is as follows. As everyone who’s done some set theory knows, if we assume the axiom of choice then every set can be well ordered. Understanding well orderings as consistent choice functions, this says the following:

If we’re willing to assume that it’s possible to choose an element of each nonempty subset of a set, then in fact it’s possible to make the choice in a consistent way.

People like to joke that the axiom of choice is obviously true, and that the well orderability of every set is obviously false. (Or they used to, at least.) The theorem on well ordering is derived from the axiom of choice by an entirely uncontroversial chain of reasoning, so I’ve always taken that joke to be the equivalent of throwing one’s hands up in despair: isn’t math weird! Look how this highly plausible statement implies an implausible one!

So the joke expresses a breakdown in many people’s intuitions. And with well orderings understood in the way I’ve described, we can specify the point at which the breakdown occurs: it’s in the gap between making a choice and making a consistent choice.

by leinster (Tom.Leinster@gmx.com) at November 12, 2018 02:08 PM

Jon Butterworth - Life and Physics

Brief Answers to the Big Questions by Stephen Hawking – review
Back in the Guardian (well, the Observer actually) with a review of Stephen Hawking’s final book. A couple of paragraphs didn’t make the edit; no complaints from me about that, but I put them here mainly for the sake of … Continue reading

by Jon Butterworth at November 12, 2018 10:35 AM

November 11, 2018

Lubos Motl - string vacua and pheno

New veins of science can't be found by a decree
Edwin has pointed out that a terrifying anti-science article was published in The Japan Times yesterday:
Scientists spend too much time on the old.
The author, Bloomberg opinion columnist Noah Smith (I later noticed that the rant was first published by Bloomberg), starts by attacking Ethan Siegel's text that had supported a new particle collider. Smith argues that because too many scientists are employed in science projects that extend the previous knowledge, which leads to diminishing returns, all the projects extending the old science should be defunded and the money should be distributed to completely new small projects that have far-reaching practical consequences.

What a pile of toxic garbage!



Let's discuss the content of Smith's diatribe in some detail:
In a recent Forbes article, astronomer and writer Ethan Siegel called for a big new particle collider. His reasoning was unusual. Typically, particle colliders are created to test theories [...] But particle physics is running out of theories to test. [...] But fortunately governments seem unlikely to shell out the tens of billions of dollars required, based on nothing more than blind hope that interesting things will appear.
First of all, Smith says that it's "unusual" to say that the new collider should search for deviations from the Standard Model even if we don't know which ones we should expect. But there is nothing unusual about it at all and by his anxiety, Smith only shows that he doesn't have the slightest clue what science is.

The falsification of existing theories is how science makes progress – pretty much the only way how experimenters contribute to progress in science. This statement boils down to the fact that science can never prove theories to be completely right – after all, with the exception of the truly final theory, theories of physics are never quite right.



Instead, what an experiment can do reliably enough is to show that a theory is wrong. When the deviations from the old theoretical predictions are large enough so that we can calculate that it is extremely unlikely for such large deviations to occur by chance, we may claim with certainty that something that goes beyond the old theory has been found.

This is how the Higgs boson was found, too. The deviation of the measured data from the truncated Standard model prediction that assumed that "no Higgs boson exists" grew to 5 sigma at which point the Higgs boson discovery was officially announced.
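(For context: a 5-sigma excess corresponds to a one-sided p-value of roughly \(3\times 10^{-7}\), i.e. about one chance in 3.5 million that the old, Higgs-less theory would produce a fluctuation at least that large.)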

The only true dichotomy boils down to the question whether the new theories and phenomena are first given some particular shape by theorists or by experimenters. The history of physics is full of both examples. Sometimes theorists have reasons to become sufficiently certain that a new phenomenon should exist because of theoretical reasons, and that phenomenon is later found by an experiment. Sometimes an experiment sees a new and surprising phenomenon and theorists only develop a good theory that explains the phenomenon later.

Theorists are surely not running out of theories to test. There are thousands of models – often resulting from very deep and highly motivated theories such as string theory or at least grand unification – with tens of thousands of predictions, and all of them may be tested. The recent frequency of discoveries just means that we shouldn't expect a new phenomenon that goes beyond the Standard Model to be discovered every other day. This is how Nature works.



In this lovely video promoting a location for the ILC project (another one has won), I think that the English subtitles were only added recently. The girl is a bored positron waiting for an electron.

Smith says that the expectation that new interesting things may be seen by a new collider is a "blind hope". But it is not a hope, let alone a blind one. It is a genuine possibility. It is a fact of physics that we don't know whether the Standard Model works around the collision energy of \(100\TeV\). It either does or it does not. Indeed, because new physics is more interesting, physicists may "hope" that this is the answer that the collider will give. But the collider will give us some nontrivial information in either case.

Because the "new physics" answer is more interesting, one may say that the construction of the colliders is partially a bet, a lottery ticket, too. But most of progress just couldn't have emerged without experimenting, betting, taking a risk. If you want to avoid all risks, if you insist on certainty, you will have to rely on the welfare (or, if you are deciding how to invest your money, you need to rely on cash holdings or saving accounts with very low interest rates). You are a coward. You are not an important person for the world and you shouldn't get away with attempts to pretend that you are one.

Also, Smith says that governments are "unlikely to shell out the tens of billions". That's rubbish. Just like in the past, governments are very likely to reserve these funds because those are negligible amounts of money relative to the overall budgets – and at least the symbolic implications of these modest expenses are far-reaching. When America was building up its space program, a great fraction of the federal budget was being spent on it – the fraction rose to roughly 4.5% in a peak year. Compared to that, the price of a big collider is negligible. All governments have some people who know enough to be sure that rants by anti-science activists similar to Smith are worth nothing. Smith lives in a social bubble where his delusions are probably widespread but all the people in that bubble are largely disconnected from the most important things in the world and the society.

Japan is just deciding about the ILC in Japan.
Particle physicists have referred to this seeming dead end as a nightmare scenario. But it illustrates a deep problem with modern science. Too often, scientists expect to do bigger, more expensive versions of the research that worked before. Instead, what society often needs is for researchers to strike out in entirely new directions.
The non-discovery of new physics at the LHC has been described by disappointing phrases because people prefer when the experiments stimulate their own thinking and curiosity – and that of other physicists. Of course scientists prefer to do things where the chance for a discovery of something really new is higher. However, in fundamental physics, building a collider with a higher energy is the best known way to do it. You may be ignorant about this fact, Mr Smith, but it's just because you are an idiot, not because of some hypothetical flaw of high energy physics which is called high energy physics for a good reason. It's called in this way because increasing the energy is largely equivalent to making progress: higher energy is equivalent to shorter distance scales where we increasingly understand what is going on with an improving resolution.

If it were possible and easy to "strike out in entirely new directions", scientists would do it for obvious reasons – it would surely be great for the career of the author who finds a new direction. But qualitatively new discoveries are rare and cannot be ordered by a decree. We don't know in what exact directions "something new and interesting is hiding" which is why people must sort of investigate all promising enough directions. And looking in all the similar directions of "various new phenomena that may be seen at even higher energies" is simply the most promising strategy in particle physics according to what we know.

Equally importantly, extending the research strategies "that have worked before" isn't a sin. It's really how science always works. Scientific discoveries are never quite disconnected from the previous ones. Isaac Newton has found quite a revolutionary new direction – the quantitative basis for physics as we know it. He's still known for the proposition
If I have seen further it is by standing on the shoulders of giants.
Newton was partly joking – he wanted to mock some smaller and competing minds, namely Gottfried Leibniz and especially Robert Hooke who was short – but he was (and everyone was) aware of the fact that the new discoveries don't take place in the vacuum. Newton still had to build on the mathematics that was developed before him. When showing that the laws of gravity worked, he found Kepler's laws of planetary motion to be a very helpful summary of what his theory should imply, and so on.

Every new scientific advance is a "twist" in some previous ideas. It just cannot be otherwise. All the people who are claiming to make groundbreaking discoveries that are totally disconnected from the science of the recent century or so are full-blown crackpots.
During the past few decades, a disturbing trend has emerged in many scientific fields: The number of researchers required to generate new discoveries has steadily risen.
Yup. In some cases, the numbers may be reduced but in others, they cannot. For example, and this example is still rather typical for modern theoretical physics, M-theory was still largely found by one person, Edward Witten. It's unquestionable that most of the theoretical physicists have contributed much less science than Witten, even much less "science output per dollar". On the other hand, it's obvious that Witten has only discovered a small minority of the physics breakthroughs. If the number of theoretical physicists were one or comparable to one, the progress would be almost non-existent.

Experimental particle physics requires many more people for a single paper – like the 3,000 members of the ATLAS Collaboration (and extra 3,000 in CMS). But there are rather good reasons for that. ATLAS and CMS don't really differ from a company that produces something. For example, the legendary soft drink maker Kofola Czechoslovakia also has close to 3,000 employees. In Kofola, ATLAS, as well as CMS, the people do different kinds of work and if there's an obvious way to fire some of them while keeping all the vital processes going, it's being done.

You may compare Kofola, ATLAS, and CMS and decide which of them is doing a better job for the society. People in Czechoslovakia and Yugoslavia drink lots of Kofola products. People across the world are inspired to think about the collisions at the Large Hadron Collider. From a global perspective, Kofola, ATLAS, and CMS are negligible groups of employees. Each of them employs less than one-half of one millionth of the world population.

Think about the millions of people in the world who are employed in tax authorities although most of them could be fired and the tax collection could be done much more effectively with relatively modest improvements. Why does Mr Smith attack the teams working for the most important particle accelerator and not the tax officials? Because he is actually not motivated by any efficiency. He is driven by his hatred towards science.
In the 1800s, a Catholic monk named Gregor Mendel was able to discover some of the most fundamental concepts of genetic inheritance by growing pea plants.
Mendel was partly lucky – like many others. But his work cannot be extracted from the context. Mendel was one employee in the abbey in Brno, University of Olomouc, and perhaps other institutions in Czechia whose existence was at least partly justified by the efforts to deepen the human knowledge (or by efforts to breed better plants for economic reasons). At any rate, fundamental discoveries such as Newton's or Mendel's were waiting – they were the low-hanging fruits.

Indeed, one faces diminishing returns after the greatest discoveries are made, and this is true in every line of research and other activities. But this is a neutral and obvious fact, not something that can be rationally used against the whole fields. It's really a tautology – returns are diminishing after the greatest discoveries, otherwise they wouldn't be greatest. ;-) Particle physics didn't become meaningless after some event – any event, let's say the theoretical discovery of quantum field theory or the experimental discovery of W and Z bosons – just like genetics didn't become meaningless after Mendel discovered his fundamental laws. On the contrary, these important events were the beginnings when things actually started to be fun.

Smith complains that biotech companies have grown into multi-billion enterprises while Mendel was just playing in his modest garden. Why are billions spent for particle physics or genetics? Because they can be. Mankind produces almost $100 trillion in GDP every year. Of course some fraction of it simply has to be genetics and particle physics because they're important, relatively speaking. It is ludicrous to compare the spending for human genome projects or the new colliders with Mendel's garden because no one actually has the choice of funding either Mendel's research or the International Particle Collider. These are not true competitors of one another because they're separated by 150 years! People across the epochs can't compete for funds. On top of that, the world GDP was smaller than today by orders of magnitude 150 years ago.

Instead, we must compare whether we pay more money for a collider and less money e.g. for soldiers in Afghanistan (the campaign has cost over $1 trillion; or anything else, I don't want this text to be focused on interventionism) or vice versa. These are actually competing options. Of course particle physics and genetics deserve tens of billions every decade, to say the least. Ten billion dollars is just 0.01% of the world GDP, an incredibly tiny fraction. Even if there were almost no results, studying science is a part of what makes us human. Nations that don't do such things are human to a lesser degree and animals to a higher degree and they can be more legitimately treated as animals by others – e.g. eradicated. For this reason, paying something for science (even pure science) also follows from the survival instincts.
The universe of scientific fields isn’t fixed. Today, artificial intelligence is an enormously promising and rapidly progressing area, but back in 1956...
Here we see one thing that might actually support Smith's case. But I don't think that most people who work on artificial intelligence should be called scientists. They're really engineers – or even further from science. Their goal isn't to describe how Nature works. Their task is to invent and build new things that can do certain new things but that exploit the known pieces that work according to known laws.
To keep rapid progress going, it makes sense to look for new veins of scientific discovery. Of course, there’s a limit to how fast that process can be forced...
The main problem isn't "how fast that process can be forced". The main problem with Smith's diatribe is that the discovery itself cannot be forced or pre-programmed; and that the search for some things and according to some strategy shouldn't be forced by the laymen such as Mr Smith at all because such an enforced behavior reduces the freedom of the scientists which slows down progress. And the rate of progress is whatever it is. There aren't any trivial ways to make it much faster and claims to the contrary are a pure wishful thinking. No one should be allowed to harass other people just because the world disagrees with his wishful thinking.
...it wasn’t until computers became sufficiently powerful, and data sets sufficiently big, that AI really took off.
The real point is that it just cannot be clear to everybody (or anybody!) from the beginning which research strategy or direction is likely to become interesting. But the scientists themselves are still more likely to make the right guess about the hot directions of future research than some ignorant laymen similar to Mr Smith who are obsessed with "forcing things" on everyone else.
But the way that scientists now are trained and hired seems to discourage them from striking off in bold new directions.
Mr Smith could clearly crawl into Mr Sm*lin's rectum and vice versa, to make it more obvious that allowing scum like that is a vicious circle.

What is actually discouraging scientists from striking off in bold new directions are anti-science rants such as this one by Mr Smith that clearly try to restrict what science can do (and maybe even think). If you think that you can make some groundbreaking discovery in a new direction, why don't you just do it yourself? Or together with thousands of similar inkspillers who are writing similar cr*p? And if you can't, why don't you exploit your rare opportunity to shut up? You don't have the slightest clue about science and the right way to do it and your influence over these matters is bound to be harmful.
This means that as projects like the Hadron Collider require ever-more particle physicists, ...
It is called the Large Hadron Collider, not just Hadron Collider, you Little Noam Smith aßhole.
With climate change a looming crisis, the need to discover sustainable energy technology...
Here we go. Only scientifically illiterate imbeciles like you believe that "climate change is a looming crisis". (I have already written several blog posts about dirty scumbags who would like to add physics funds to the climate hysteria.)

Just the would-be "research" into climate change has devoured over $100 billion – like ten Large Hadron Colliders – and the scientific fruits of this spending are non-existent. The only actual consequence of this "research" is that millions of stupid laymen such as Mr Smith have been fooled into believing that we face a problem with the climate. It wasn't really research, it has been a propaganda industry.

The money that has been wasted on the insane climate change hysteria is an excellent example of the crazy activities and funding that societies degenerate into once they start to be influenced by arrogant yet absolutely ignorant people similar to Mr Smith. That waste (and the funds wasted on the actual policies are much higher, surely trillions) is an excellent example showing how harmful the politicization of science is.

The $10 billion Large Hadron Collider has, at least, measured the mass of the Higgs boson – the only elementary spinless particle we know – as \(125\GeV\). The theoretically allowed interval was between \(50\GeV\) and \(800\GeV\) or so. What is a similar number that we have learned from the $100 billion climate change research in the recent 30 years?
So what science needs isn’t an even bigger particle collider; it needs something that scientists haven’t thought of yet.
The best way is to pick the most brilliant, motivated, and hard-working people as the scientists, allow them to do research as they see fit, and add extra funds to those that have made some significant achievements and who display an increased apparent likelihood of new breakthroughs or at least valuable advances to come – while making sure that aggressive yet stupid filth such as Mr Noah Smith doesn't intimidate them in any way.



Off-topic: It looks good for the Czech girls in the finals of the Fed Cup against the U.S. – 2-to-0 after Saturday matches. Both teams are without their top stars. On the Czech side, Plíšková is injured while Kvitová is ill. Incidentally, I noticed that the U.S. players and coach are super-excited whenever some Czechs play the most popular Czech fans' melody in any sports – which just happens to be When the Saints Go Marching In. ;-)

Sunday starts with the match of the "Russian" players from both teams – Kenin and Siniaková. Update: Siniaková looked much more playful and confident all the time but it ended up being an incredibly tight and dramatic 4-hour match. But Siniaková became a new Czech heroine and my homeland has increased its number of Fed Cups from 10 to 11 – from superstring theory to M-theory.

As the video above suggests, Kvitová would be no good because she has lost to a Swiss retired painter (her fan) Mr Hubert Schmidt. You may see that she understands her occupation even as a theorist – she could rate him properly (if someone plays like that, he has to be famous, she correctly reasoned) and after the handshake, she was also able to identify the real name. ;-) The height and voice helped, too, she admitted. A touching prank.

by Luboš Motl (noreply@blogger.com) at November 11, 2018 02:58 PM

November 09, 2018

ZapperZ - Physics and Physicists

Comparing Understanding of Graphs Between Physics and Psychology Students
I ran across this paper a while back, but didn't get to reading it carefully till now.

If you have followed this blog for any considerable period of time, you would have seen several posts where I emphasized the importance of physics education, NOT just for the physics knowledge, but also for the intangible skills that come along with it. Skills such as analytical ability and deciding on the validity of what causes what are skills that transcend the subject of physics. These are skills that are important no matter what the students end up doing in life.

While I have mentioned such things to my students during our first day of class each semester, it is always nice when there is EVIDENCE (remember that?) to back such a claim. In this particular study, the researchers compared how students handle and understand the information that they can acquire from graphs on topics outside of their area of study.

The students involved are physics and psychology students in Zagreb, Croatia. They were tested on their understanding of the concepts of slope and of the area under a graph, on their qualitative and quantitative understanding of graphs, and on their understanding of graphs in the context of physics and finance. For the latter area (finance), neither group of students had received any lessons in that subject, so both groups are presumably equally unfamiliar with it.

Before we proceed, I found that in Croatia, physics is a compulsory subject in pre-college education, which is quite heartening.

Physics is taught as a compulsory subject in the last two grades of all elementary schools and throughout four years of most of high schools in Croatia. Pupils are taught kinematics graphs at the age 15 and 16 (last grade of elementary school and first year of high school). Psychology students were not exposed to the teaching on kinematics graphs after high school, while physics students learned about kinematics graphs also in several university courses. Physics and psychology students had not encountered graphs related to prices, money, etc., in their formal education.
So the psychology students in college are already familiar with basic kinematics and graphs, but did not go further into them once they were in college, unlike the physics students. I'd say that this is more than what most high school students in the US have gone through, since Physics is typically not required in high schools here.

In any case, the first part of the study wasn't too surprising: physics students did better overall at physics questions related to the slope and the area under the graph. But it was interesting that the understanding of what "area under the graph" means tends to be problematic for both groups. And when we get to the graphs related to finance, it seems clear that physics students were able to extract the necessary information better than psychology students. This is especially true when it comes to the quantitative aspect of it.
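As a concrete illustration of the two concepts being tested here (my own toy example, not taken from the paper): for a velocity-time graph, the slope of the line is the acceleration and the area under the curve is the displacement, and both can be read off numerically.

import numpy as np

# toy velocity-time data: v(t) = 2 + 3t m/s, sampled every 0.5 s from 0 to 5 s
t = np.arange(0.0, 5.5, 0.5)
v = 2.0 + 3.0 * t

slope = np.polyfit(t, v, 1)[0]   # "slope of the graph" -> acceleration, ~3 m/s^2
area = np.trapz(v, t)            # "area under the graph" -> displacement, 47.5 m
print(slope, area)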

You should read the in-depth analysis and discussion of the result. I'll quote part of their conclusion here:

All students solved the questions about graph slope better than the questions about the area under a graph. Psychology students had rather low scores on the questions about area under a graph, and physics students spent more time than psychology students on questions about area under a graph. These results indicate that area under a graph is quite a difficult concept that is unlikely to be developed without formal teaching and learning, and that more attention should be given to this topic in physics courses.

Physics and psychology students had comparable scores on the qualitative questions on slope which indicates that the idea of slope is rather intuitive. However, many psychology students were not able to calculate the slope, thus indicating that their idea of slope was rather vague. This suggests that the intuitive idea of slope, probably held by most students, should be further developed in physics courses and strongly linked to the mathematical concept of slope that enables students to quantify slope.

Generally, physics students solved the qualitative and the quantitative questions equally well, whereas psychology students solved qualitative questions much better than the quantitative questions. This is further evidence that learning physics helps students to develop deeper understanding of concepts and the ability to quantitatively express relationships between quantities.

The key point here is the "transfer" of knowledge that they have into an area that they are not familiar with. It is clear that physics students were able to extract the information in the area of finance better than psychology students. This is an important point that should be highlighted, because it shows how skills learned from a physics course can transfer to other areas, and that a student need not be a physics major to gain something important and relevant from a physics class.

Zz.

by ZapperZ (noreply@blogger.com) at November 09, 2018 02:23 PM

November 08, 2018

Lubos Motl - string vacua and pheno

A pro-string PBS video
I wrote mostly negative things about the PBS "Spacetime" physics videos. But Peter F. sent me a link to a new one,
Why String Theory Is Right (17 minutes).
Before you become excited that string theory finally gets some support from the mainstream media (the video has almost 200,000 views in less than a day), I must warn you: they plan to release a symmetric video "Why String Theory Is Wrong" (and maybe they will say "trouble" or "not even wrong" instead). Judging by the announcements at the beginning, their overall view will be at most neutral.



Why do they think that string theory is right? Because it's beautiful, much like the Dirac equation, because it contains gravity, and you can't really remove it from the string theory. String theory tames problems with quantum gravity. And gravity is derived from the 2D world sheet scale invariance – really the original "Weyl symmetry" – much like electromagnetism is derived from gauging the \(U(1)\) phase-change invariance of the wave function.

Well, none of these statements is quite right but they're at least correlated with the truth. What do I mean by wrong statements? Well, in QFTs, the \(U(1)\) gauge symmetry is really a gauged version of the phase-changing symmetry applying to charged fields. A one-particle wave function may also be locally redefined by a phase, like the phase of the corresponding quantum field, but in this one-particle setup, there's really no good reason why a gauged \(U(1)\) should be there.

In the full QFT, a gauged \(U(1)\) is needed for a covariant description of any spin-one fields. In one-particle quantum mechanics, the gauge invariance is just an unnecessary addition. Moreover, the phase has really nothing to do with the quantum mechanical character of the wave function. The phase of the field or wave function is changing under the \(U(1)\) transformations if the particle is electrically charged. So the nontrivial transformation of an object's phase has everything to do with its electric charge, not with its being quantum mechanical.
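To fix conventions for what "gauging the phase" means here (a standard textbook convention, not a formula taken from the video): a field of charge \(q\) picks up a local phase while the gauge potential shifts, so that the covariant derivative transforms with the same phase as the field itself,

$$\psi(x)\to e^{iq\alpha(x)}\,\psi(x),\qquad A_\mu(x)\to A_\mu(x)-\frac{1}{q}\,\partial_\mu\alpha(x),\qquad D_\mu\psi=\left(\partial_\mu+iqA_\mu\right)\psi .$$

That covariance of \(D_\mu\psi\) is the whole point of promoting the global phase symmetry to a local one.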



There are some usual illustrations of world sheets and other things. But don't expect this video to remain focused on the explanations why string theory is right. Instead, the narrator needs to be politically correct, so even in this very video, we also hear why he thinks – or why he parrots other people who have said – that there is something wrong with string theory.

At some point, he says that string theory needs to curl up dimensions, and that's ugly – while he is showing the beautiful projections of some Calabi-Yau spaces, what an irony – and it's bad that a whole new structure had to be added. But all these claims, so popular among crackpots, are completely wrong. There is absolutely nothing disturbing, ugly, or contrived about the very existence of the class of string theory's vacua with compactified dimensions.

String theory vacua with curled-up dimensions are exactly as consistent – obeying the conditions or equations of string theory – as the vacua with 10 or 11 uncompactified spacetime dimensions. So the amount of ugliness or controversy that we add by considering curled-up dimensions is strictly zero. Instead, it would be absolutely wrong to manually remove the vacua with compactified dimensions. It would be exactly as wrong as saying that 7 is the only prime because the others aren't lucky enough. Mathematics doesn't care whether you call 7 a lucky number, and in the same way, equations of string theory don't care about your selective emotional reactions to vacua with curled-up dimensions.

As soon as one understands the relationship between string theory's vacua with compactified dimensions and particle physics phenomenology, he or she sees that the compactified dimensions, while being a "neutral predicted option" within string theory itself, are extremely promising phenomenologically. Properties of the effective field theory – such as the Standard Model's number of generations of quarks and leptons – are derivable from the geometric properties of the compactified dimensions.
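To mention the most famous entry of such a dictionary (a classic result for heterotic compactifications with the standard embedding, added here for concreteness): the net number of generations is fixed by the Euler characteristic of the Calabi-Yau threefold \(X\),

$$ n_{\rm gen}=\frac{1}{2}\,\bigl|\chi(X)\bigr|=\bigl|h^{1,1}(X)-h^{2,1}(X)\bigr|, $$

so a manifold with \(|\chi|=6\) yields the three observed generations.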

If you realize that the spectrum of the Standard Model isn't the "simplest quantum field theory you could think of", this naturally agrees with the observation that the "geometry of the extra compact 6-dimensional or similar manifold" must also have some subtlety which is responsible for the multiplicity of fields and their charges. If you work a bit harder, you may reveal the dictionary translating aspects of the particle physics spectrum on one side into aspects of the compactified stringy manifold on the other side. The structure is non-trivial in the former, so it's natural to expect it to be non-trivial in the latter. It is rather clear that you don't want the geometry of the hidden dimensions to be trivial.

Much of the beauty of string theory – and really most of it – is only seen when one considers vacua with compactified dimensions and not just the 10D or 11D spacetimes. So if someone – like the PBS narrator – talks about the beauty of string theory; and at the same moment, he says that the very notion of compactification is ugly, then you may be certain that he doesn't have a clue what he is talking about. He's just parroting mutually contradicting statements that he has heard at two very different places.

Well, people like that simply don't understand physics at the technical level. For that reason, it's probably unavoidable that they mix up serious physics with random popular opinions of crackpots. There are numerous standard anti-string examples of that phenomenon in the bulk of the video – narrators like that feel "smartest" when they mix physics with crackpots' opinions in completely incoherent ways.

However, the most comical example occurs in the last three minutes, which no longer discuss string theory but rather virtual particles and the Casimir effect. Can the effect be explained by virtual particles? Are the particles real? And so on. Much of the stuff that he says is widespread – although all his statements about "uncertainties" concerning the origin of the Casimir effect etc. are just rubbish. There is no basic "unknown" about simple things such as the Casimir effect. The most straightforward calculation of the Casimir force computes the zero-point energy of the electromagnetic field confined between the plates. There aren't virtual particles in this calculation.
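For reference, that standard zero-point-energy calculation gives, for two ideal parallel plates of area \(A\) at separation \(d\) (a textbook result quoted here for orientation, not something computed in the video),

$$\frac{E(d)}{A}=-\frac{\pi^2\hbar c}{720\,d^3},\qquad \frac{F(d)}{A}=-\frac{\partial}{\partial d}\frac{E(d)}{A}=-\frac{\pi^2\hbar c}{240\,d^4},$$

i.e. an attractive pressure of roughly \(1.3\times 10^{-3}\,{\rm Pa}\) at \(d=1\,\mu{\rm m}\).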

However, that doesn't mean that one can't compute the effect using virtual particles. One may do it, too: Just consider some scattering process involving two metallic plates. Of course the probability amplitudes will involve some intermediate states that may always be expressed in terms of virtual particles. The interference between differently positioned diagrams with the virtual photons will be essential to yield the result. You know, the Casimir force may be computed in various ways – and the expressions may be supplemented by various words. Even though the laymen generally assume that there is always just one correct answer to how to compute something or what is the cause, it is simply not the case. The calculations or perspectives are numerous, may seem totally inequivalent, but they still don't contradict each other! Only if you get different values for the final, observable result is there a contradiction.

Tree and wakalixes

A question and answer that are much more disconnected from actual physics appear at 16:00. David Ratliff asks: "If a quantum tree falls in a vacuum and no one is there to measure it, does it still have energy?" Great, let me first discuss the question before I mention how PBS addressed it.

The energy of a tree – or the total energy of any physical system – is an observable (meaning a quantity that may be observed, and the word "observable" is meant to be a very specific term that should be used as a well-defined technical concept within quantum mechanics, and you should first learn what it is and how it is used). As an abstract concept or a mathematical object, namely a Hermitian linear operator on the Hilbert space, the energy (the Hamiltonian) always exists – like other mathematical objects exist (at least in the sense of Platonism).

But is there a specific privileged value of the energy in the absence of an observer? The answer is clearly No. Einstein had trouble with this negative answer – but these psychological troubles were equivalent to Einstein's inability to embrace quantum mechanics and Einstein was totally wrong. There is a whole industry of people who have a problem with this No – they differ from Einstein by being less achieved in physics, and by being more retarded than Einstein by more than 80 years.

The basic rules of quantum mechanics say that specific values of observables may only be obtained by an observation – which needs an observer. The point of these words isn't to demand some anthropomorphic objects. The point of these words is to guarantee that some agent detects the information (about an observable – and it is the same kind of information as in classical physics) – it is the information that quantum mechanics has to work with. If there's no observer, there can be no observation which means that there can be no preferred value of an observable (such as energy).

Now, this statement – an obvious rudiment of the quantum mechanical reasoning about Nature – is often presented as a mysterious or controversial one by the pop science press. But do you know what it means when we say that "there is no preferred value of the energy"? It simply means that the "right" value of the energy is unknown. Is it unknown? Of course it is unknown if there is no observer because an observer is needed to know something.

The statement that in the absence of an observer, there is no preferred value of an observable, is really a totally trivial and obvious tautology. Everyone who doubts it is incapable of thinking rationally. You know, classical physics could postulate the existence of God or a meta-observer who always knows the values of everything that can be found out by a measurement.

But in quantum mechanics, values may only be obtained by actual measurements and those are always intrusive and change the state of the physical system. They can't be done silently. If you accurately measure the position, you change the momentum and make it very uncertain, and vice versa. If such observations aren't taking place, it really means that no observer is making an observation – not even God. It proves that God in the sense of the persistent omniscient observer doesn't exist. So even this – already abstract within classical physics – observer who could have been used to argue that "someone knows the value" just isn't there. So literally no one knows any right value, and if no one knows it, the value is unknown, which is exactly equivalent to saying that no privileged eigenvalue exists.

In reality, if you rationally apply the laws of quantum mechanics, you ultimately only care about whether you know some information or values of observables – whether these values are known to you. You had better know whether you know the values of something, what you know, and what those values are, otherwise you can't make accurate predictions. But you don't really care about the other (hypothetical?) observers' knowing of anything. If you don't know something, their knowing it won't help you! Their knowing or not knowing is their subjective state that you cannot directly access (you can't even know for sure whether NPCs, women, cats, earthworms, viruses, or molecules are "conscious" and how) – but it doesn't affect your analysis of the processes, anyway.

All the fog about this question is absolutely irrational. The only framework in which you could defend the existence of a preferred value in the absence of any observer is classical physics. If you find it necessary to fight for the existence of preferred values even in the absence of observers, you are really denying 100% of quantum mechanics.

I sort of predicted the general spirit of PBS' answer to the innocent question – I did expect some jargon of the interpretational crackpots to be used – but what was said exceeded my expectations and I exploded in laughter. PBS' Matt said:
Believe it or not but it is a seriously discussed issue and the answer reduces to the question whether the Universe has counterfactual definiteness.
LOL. That was hilarious.

What's going on here? First of all, Matt has explained or answered absolutely nothing. Why? Because the term "counterfactual definiteness" is just a phrase, a pair of words, and they contain no idea whatsoever. Instead, these two words are deliberately constructed to sound "intellectual" but the content of the phrase is exactly equivalent to the question about the falling tree.

Counterfactual definiteness "exists" if and only if falling trees have an energy in the absence of an observation. Matt has only translated David's question into a useless, pompous language.

Pedagogically speaking, the situation is exactly the same as Feynman's story about the lousy textbooks, one of which was forcing kids to repeat "energy makes it go" after seeing any picture with any moving objects. The kids didn't learn anything at all. They could have said "wakalixes makes it go" and the value would be exactly the same, namely zero. On top of that, Feynman pointed out, "energy makes it go" is a half-incorrect statement because it makes you think that "energy doesn't make it stop". But energy makes things stop, too, when the mechanical form of energy changes into its chaotic thermal form.

The textbooks were really presenting a totally unscientific understanding of "energy", as some metaphysical force indicating life or something that should be worshiped. In physics, it's an emotionally neutral quantity that is normally conserved in any process and the conservation is useful to discuss and constrain the evolution of objects.

By spitting out the "counterfactual wakalixes definiteness", Matt was clearly switching to the role of a journalist promoting the crackpots marketed as "philosophers" instead of actual physicists. They spend most of their highly limited mental capacity on learning and/or fabricating useless and complicated phrases such as "counterfactual definiteness" and the rest of their job is just to combine all these phrases in random ways (I had to ban another such commenter yesterday – unsurprisingly, he has had links to a philosophy department).

But by learning these fancy-sounding phrases – which may only impress morons – they're making no progress in actually answering the questions. The questions may be answered with David Ratliff's original language involving the tree, and the usefulness of any "counterfactual definiteness" jargon is absolutely non-existent.

So the answer is simply No: quantum mechanics says – and it really follows from the most rudimentary postulates – that it makes no sense to talk about the preferred value of any observable in the absence of any observation. The tree doesn't have any specific energy in the absence of measurements, i.e. "counterfactual definiteness" (which is ultimately equivalent to "realism" or to physics' being classical) is not obeyed in Nature. The same comment applies completely universally. Of course it applies to the shape of galaxies in the early Universe (before any mammals were born), too. As an abstract concept, the shape "existed", but particular, preferred values of the parameters describing the shape only exist when they're obtained from observations. Only when an observer observed the sky – recently – did the history of the galactic shapes right after the Big Bang acquire some clear contours. This statement isn't mysterious – it only says that before the birth of anyone who could know, quantities were unknown. It's just like saying that before the birth of anyone who could nuke the Japanese cities, the Japanese cities were unnuked. Before the birth of anyone who could f*ck, everyone and everything was unf*cked, and so on. Is it really surprising?

People enjoying terms such as the "counterfactual definiteness" have two main motivations. One of them is simply their desire to look smart even though almost all of them are intellectually mediocre folks, with the IQ close to 100. This category of people greatly overlaps with those who like to boast about their scores from IQ tests – or who struggle for 10 years to make a journal accept their crackpot paper, so that they can brag to be finally the best physicists in the world (I've never had a problem with my/our papers' getting published). The other is related but more specific: "counterfactual definiteness" was chosen to represent their prejudices that Nature obeys classical physics – which they believe and they're mentally unable to transcend this belief.

If something is called "counterfactual definiteness", it must be right, mustn't it? The person who invented such a complicated phrase must have been smart, listeners are led to believe, so the property must be obeyed in Nature. Wouldn't it otherwise be a giant waste of time that someone invented the long phrase and wrote papers and books about it? Sorry, it's not obeyed, the awkward terminology cannot change anything about it, the people who enjoy using similar phrases have the IQ about 100 and they are simply not too smart, and indeed, all the time was wasted. But given the fact that the people who discuss such things probably couldn't do anything that was more valuable intellectually, the damages are small.

I have increasingly wondered whether it makes any sense to explain anything important about physics to the broader public. The answer is probably a resounding No. Almost all laymen simply behave as parrots. Note that many parrots may also learn the sentence "I am not a parrot", just like the laymen. But when a parrot says such a thing, he or she still is a parrot.

OK, so because the laymen are surrounded by a vastly higher number of charlatans, philosophers, and other laymen, than by the physicists who actually know what they are talking about, it is simply more likely that they will parrot the laymen's delusions and things like "counterfactual definiteness". The real problem is that most of them apparently believe that it's the smart thing to do. When you're the most average parrot in the world who just repeats everything without any quality filters, you may present yourself as the most intelligent person in front of the greatest number of people in the world. Isn't the approval from an adjacent parrot – who may sometimes repeat what you said – the greatest confirmation that you're the most ingenious person in the world? ;-)

This seems to be how the laymen operate – they don't seem to get the very point that science and mathematics are all about a brutal selection of people's propositions rather than about the mindless repetition of everything you hear in your environment – which is why it is probably a waste of time to spread nontrivial physics or mathematics etc. among the laymen. Anyone who tries to do such a thing faces some hostile, formidable numbers. On the other hand, the survival of science at some decent level (especially pure science that is still studied at some professional and institutionalized level) requires a buffer zone of the laymen who have a clue and who know that the popular delusions simply mustn't be allowed to overshadow the actual science done by a small percentage of the people.

Whether science may survive in the generic society in the long term is uncertain.

by Luboš Motl (noreply@blogger.com) at November 08, 2018 05:14 PM

ZapperZ - Physics and Physicists

The Origin Of Matter's Mass
I can't believe it. I'm reporting on Ethan Siegel's article two days in a row! The last one yesterday was a doozy, wasn't it? :)

This one is a bit different and interesting. The first part of the article describes our understanding of where mass comes from for matter. I want to highlight this because it clarifies one very important misconception that many people have, especially the general public. After all the brouhaha surrounding the Higgs and its discovery, a lot of people seem to think that all the masses of every particle and entity can be explained using the Higgs. This is clearly false, as stated in the article.

Yet if we take a look at the proton (made of two up and one down quark) and the neutron (made of one up and two down quarks), a puzzle emerges. The three quarks within a proton or neutron, even when you add them all up, comprise less than 0.2% of the known masses of these composite particles. The gluons themselves are massless, while the electrons are less than 0.06% of a proton's mass. The whole of matter, somehow, weighs much, much more than the sum of its parts.

The Higgs may be responsible for the rest mass of these fundamental constituents of matter, but the whole of a single atom is nearly 100 times heavier than the sum of everything known to make it up. The reason has to do with a force that's very counterintuitive to us: the strong nuclear force. Instead of one type of charge (like gravity, which is always attractive) or two types (the "+" and "-" charges of electromagnetism), the strong force has three color charges (red, green and blue), where the sum of all three charges is colorless.

So while we may use the Higgs to point to the origin of mass in, say, leptons, this is not sufficient for hadrons/partons. The strong force itself contributes a significant amount to the origin of mass for these particles. The so-called "God Particle" is not that godly, because it can't do and explain everything.
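A rough back-of-the-envelope check of that point (my own numbers, using approximate current-quark masses rather than anything quoted from Siegel's article): the valence-quark rest masses add up to only about a percent of the proton's mass, so nearly all of it has to come from the strong interaction.

# approximate current-quark masses in MeV (scheme- and scale-dependent, rough values)
m_u, m_d = 2.2, 4.7
m_proton = 938.3                 # MeV

valence = 2 * m_u + m_d          # proton = uud
print(valence)                   # ~9.1 MeV
print(valence / m_proton)        # ~0.01, i.e. roughly 1% of the proton mass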

The other interesting part of the article is that he included a "live blog" of the talk by Phiala Shanahan that occurred yesterday at the Perimeter Institute, related to this topic. So you may want to read through the transcript and see if you get anything new.

Zz.

by ZapperZ (noreply@blogger.com) at November 08, 2018 03:34 PM

November 07, 2018

ZapperZ - Physics and Physicists

US No Longer Attracts The Best Physics Minds
So much for making America great again.

Ethan Siegel summarizes the recent data on the severe drop in the number of international students seeking advanced physics degrees in the US, and the drop in the number of applicants to US schools.

You need to read the article and the history of US advancement in physics, and in science in general, to realize why this is a troubling trend. Whether you realize it or not, what you are enjoying now is the result of many such immigrants who came to the US and made extraordinary discoveries and contributions to science. This may no longer be true soon enough.

Yet, according to the American Physical Society, the past year has seen an alarming, unprecedented drop in the number of international applications to physics PhD programs in the United States. In an extremely large survey of 49 of the largest physics departments in the country, representing 41% of all enrolled physics graduate students in the United States, an overall decrease of almost 12% in the number of international applicants was observed from 2017 to 2018.

Graduate students in physics, if you are not aware of it, are the workhorses of advanced physics research. While senior researchers often think of the project, find the funding, and form the group, it is the graduate students and postdocs that are often the ones doing the actual work and executing the plan. And many of us not only rely on their skills and knowledge, but also on their creativity in solving the myriad of problems that we often did not anticipate during the research work.

Without graduate students, many research programs would either come to a halt, or would be severely impacted. Period!

And the reality here is that the overwhelming majority of US institutions, both universities and US National Labs, have come to depend on a lot of international graduate students for these research projects. The ability to attract not just the best talent in the US, but also the best talent from all over the world, was a luxury that was the envy of many other countries. But that is no longer the case, and the gloomy prediction that this is the beginning of the decline isn't that outrageous.

We find ourselves, today, at the very beginning of what could be the end of America's greatness in the realm of scientific research and education. Science has always been touted as the great equalizer: the scientific truths underlying our Universe know no borders and do not discriminate based on race, gender, or religion. We still have time to reverse this trend, and to welcome the brightest minds the world has to offer into our country.

But if we fail to do so, that intellectual capital will thrive elsewhere, leaving America behind. If we do not change course, "America First" will be the downfall of scientific greatness in our country.

I said as much way back in 2012, when I first started noticing many established Chinese researchers and college professors migrating back to China and to Chinese institutions, something that was unheard of several years before. So now, on top of the budget constraints, we have clear data on the US no longer attracting as many international students as before.

There is no "greatness" in any of this.

Zz.

by ZapperZ (noreply@blogger.com) at November 07, 2018 03:28 PM

November 04, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A welcome mid-term break

Today marks the end of the mid-term break for many of us in the third level sector in Ireland. While a non-teaching week in the middle of term has been a stalwart of secondary schools for many years, the mid-term break only really came to the fore in the Irish third level sector when our universities, Institutes of Technology (IoTs) and other colleges adopted the modern model of 12-week teaching semesters.

Also known as ‘reading week’ in some colleges, the break marks a precious respite in the autumn/winter term. A chance to catch one’s breath, a chance to prepare teaching notes for the rest of term and a chance to catch up on research. Indeed, it is the easiest thing in the world to let the latter slide during the teaching term – only to find that deadlines for funding, book chapters and conference abstracts quietly slipped past while one was trying to keep up with teaching and administration duties.


A quiet walk in Foxrock on the last day of the mid-term break

Which brings me to a pet peeve. All those years later, teaching loads in the IoT sector remain far too high. Lecturers are typically assigned four teaching modules per semester, a load that may have been reasonable in the early days of teaching to Certificate and Diploma level, but makes little sense in the context of today’s IoT lecturer who may teach several modules at 3rd and 4th year degree level, with typically at least one brand new module each year – all of this whilst simultaneously attempting to keep up the research. It’s a false economy if ever there was one, as many a new staff member, freshly graduated from a top research group, will simply abandon research after a few busy years.

Of course, one might have expected to hear a great deal about this issue in the government's plan to 'upgrade' IoTs to technological university status. Actually, I have yet to see any public discussion of a prospective change in the teaching contracts of IoT lecturers – a question of money, no doubt. But this is surely another indication that we are talking about a change in name, rather than substance…

by cormac at November 04, 2018 05:15 PM

November 02, 2018

The n-Category Cafe

More Papers on Magnitude

I’ve been distracted by other things for the last few months, but in that time several interesting-looking papers on magnitude (co)homology have appeared on the arXiv. I will just list them here with some vague comments. If anyone (including the author!) would like to write a guest post on any of them then do email me.

For years a standing question was whether magnitude was connected with persistent homology, as both had a similar feel to them. Here Nina relates magnitude homology with persistent homology.

In both mine and Richard’s paper on graphs and Tom Leinster and Mike Shulman’s paper on general enriched categories, it was magnitude homology that was considered. Here Richard introduces the dual theory which he shows has the structure of a non-commutative ring.

I haven’t looked at this yet as I only discovered it last night. However, when I used to think a lot about gerbes and Deligne cohomology I was a fan of Kiyonori Gomi’s work with Yuji Terashima on higher dimensional parallel transport.

This is the write-up of some results he announced in a discussion here at the Café. These results answered questions asked by me and Richard in our original magnitude homology for graphs paper, for instance proving the expression for magnitude homology of cyclic graphs that we’d conjectured and giving pairs of graphs with the same magnitude but different magnitude homology.

by willerton (S.Willerton@sheffield.ac.uk) at November 02, 2018 10:12 AM

November 01, 2018

ZapperZ - Physics and Physicists

Cerenkov Radiation
Don Lincoln tackles the origin of Cerenkov radiation this time. This is the case where a body travels faster than light in a medium.

This is not purely academic. This is how we detect certain particles, such as neutrinos. Those photodetectors in, say, SuperKamiokande, are detecting this Cerenkov radiation. In fact, if you look into a pool of water holding nuclear fuel rods, the blue light is the result of Cerenkov radiation.
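To make the "faster than light in a medium" condition concrete (my own quick estimate, not from the video): a charged particle radiates when \(\beta > 1/n\), which for an electron in water (\(n\approx 1.33\)) corresponds to a kinetic energy of only a few hundred keV.

import math

n = 1.33                          # refractive index of water
m_e = 0.511                       # electron rest mass in MeV

beta_threshold = 1.0 / n                          # Cerenkov condition: v > c/n
gamma = 1.0 / math.sqrt(1.0 - beta_threshold**2)
print((gamma - 1.0) * m_e)        # threshold kinetic energy, ~0.26 MeV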

So here's a chance for you to learn about Cerenkov radiation.



Zz.

by ZapperZ (noreply@blogger.com) at November 01, 2018 01:45 PM

Clifford V. Johnson - Asymptotia

Trick or Treat

Maybe a decade or so ago* I made a Halloween costume which featured this simple mask decorated with symbols. “The scary face of science” I called it, mostly referring to people’s irrational fear of mathematics. I think I was being ironic. In retrospect, I don’t think it was funny at all.

(Originally posted on Instagram here.)

-cvj

(*I've since found the link. Seems it was actually 7 years ago.)

The post Trick or Treat appeared first on Asymptotia.

by Clifford at November 01, 2018 05:38 AM

The n-Category Cafe

2-Groups in Condensed Matter Physics

This blog was born in 2006 when a philosopher, a physicist and a mathematician found they shared an interest in categorification — and in particular, categorical groups, also known as 2-groups. So it’s great to see 2-groups showing up in theoretical condensed matter physics. From today’s arXiv papers:

Abstract. Sigma models effectively describe ordered phases of systems with spontaneously broken symmetries. At low energies, field configurations fall into solitonic sectors, which are homotopically distinct classes of maps. Depending on context, these solitons are known as textures or defect sectors. In this paper, we address the problem of enumerating and describing the solitonic sectors of sigma models. We approach this problem via an algebraic topological method – combinatorial homotopy, in which one models both spacetime and the target space with algebraic objects which are higher categorical generalizations of fundamental groups, and then counts the homomorphisms between them. We give a self-contained discussion with plenty of examples and a discussion on how our work fits in with the existing literature on higher groups in physics.

The fun will really start when people actually synthesize materials described by these models! Condensed matter physicists are doing pretty well at realizing theoretically possible phenomena in the lab, so I’m optimistic. But I don’t think it’s happened yet.

My friend Chenchang Zhu, a mathematician, has also been working on these things with two physicists. The abstract only briefly mentions 2-groups, but they play a fundamental role in the paper:

Abstract. A discrete non-linear \(\sigma\)-model is obtained by triangulate both the space-time \(M^{d+1}\) and the target space \(K\). If the path integral is given by the sum of all the complex homomorphisms \(\phi \colon M^{d+1} \to K\), with an partition function that is independent of space-time triangulation, then the corresponding non-linear \(\sigma\)-model will be called a topological non-linear \(\sigma\)-model which is exactly soluble. Those exactly soluble models suggest that phase transitions induced by fluctuations with no topological defects (i.e. fluctuations described by homomorphisms \(\phi\)) usually produce a topologically ordered state and are topological phase transitions, while phase transitions induced by fluctuations with all the topological defects give rise to trivial product states and are not topological phase transitions. If \(K\) is a space with only non-trivial first homotopy group \(G\) which is finite, those topological non-linear \(\sigma\)-models can realize all \((3+1)d\) bosonic topological orders without emergent fermions, which are described by Dijkgraaf-Witten theory with gauge group \(\pi_1(K)=G\). Here, we show that the \((3+1)d\) bosonic topological orders with emergent fermions can be realized by topological non-linear \(\sigma\)-models with \(\pi_1(K)=\) finite groups, \(\pi_2(K)=\mathbb{Z}_2\), and \(\pi_{n>2}(K)=0\). A subset of those topological non-linear \(\sigma\)-models corresponds to 2-gauge theories, which realize and classify bosonic topological orders with emergent fermions that have no emergent Majorana zero modes at triple string intersections. The classification of \((3+1)d\) bosonic topological orders may correspond to a classification of unitary fully dualizable fully extended topological quantum field theories in 4-dimensions.

The cobordism hypothesis, too, is getting into the act in the last sentence!

by john (baez@math.ucr.edu) at November 01, 2018 05:15 AM

October 31, 2018

Lubos Motl - string vacua and pheno

CMS excess: a dimuon resonance of mass \(28\GeV\)
Well... the dimuon resonance depends on an extra bottom quark that has to be produced
Aleph at LEP seems to agree with the excess!



The Proton Smash (Halloween)

The Guardian has just published an article by Ian Sample that was useful for me,
Has new ghost particle manifested at Large Hadron Collider?
because I have missed the August 2018 preprint
Search for resonances in the mass spectrum of muon pairs produced in association with \(b\) quark jets in proton-proton collisions at \(\sqrt s= 8\) and \(13\TeV\)
You may see the excess in Figure 1, Page 5 (7/35) of the preprint above. For the invariant mass of \(\mu^+\mu^-\) slightly below \(30\GeV\), you simply see a clear excess. All the events are required to produce a \(b\) quark jet along with the muon pair. They divide the excess into two signal regions, SR1 and SR2, according to \(|\eta|\). When it's below or above \(2.4\), the local significance is 4.2 and 2.9 sigma, respectively.



If you round those to 4 and 3 sigma and if you know the most famous example of the Pythagorean theorem, you may see that the combined local significance of this bump is almost exactly 5 sigma. Strangely enough, however, the large excesses I just mentioned only occurred in the \(8\TeV\) collisions. In the \(13\TeV\) collisions, there were a +2.0 sigma excess and a –1.4 sigma deficit in the two signal regions.
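If you want the arithmetic behind the "Pythagorean" remark spelled out (a naive combination in quadrature of the two local significances, nothing more sophisticated than that):

import math

sr1, sr2 = 4.2, 2.9                 # local significances of the two signal regions
print(math.hypot(sr1, sr2))         # ~5.1 sigma, the naive combined local significance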



But it gets better than that. In the 1990s, the Aleph detector at LEP – the electron-positron collider that used to live in the same tunnel that the LHC occupies today – saw an excess in ("almost") the same channel and at the same mass:
Observation of an excess at \(30\GeV\) in the opposite sign di-muon spectra of \(Z\to b\bar b+X\) events recorded by the ALEPH experiment at LEP
I have no idea how we could have overlooked that 2016 preprint – or how I could have forgotten about it – because Aleph has claimed the local significance to be 5 sigma (wow). The mass of the dimuon resonance is said to be \(30.4\GeV\) at LEP, vaguely compatible with the CMS observation.

It is not clear to me why the excesses from Aleph and \(8\TeV\) CMS, if real, should weaken, disappear, or get inverted at \(13\TeV\). But there could be a subtle story. CERN's Alexandre Nikitenko says that "the backgrounds are perhaps stronger at higher energy" which makes the signal look weaker there. The uncertain tone of that sentence puzzles me: Didn't they have to calculate what the backgrounds actually were before they wrote the paper? ;-)



Climate Mash, this cute video with the original voice, was released (as Flash, mostly dead now) in 2005, just months after some very big hurricanes (that failed to repeat so far) and exactly 13 years ago. Thankfully, the climate alarmists haven't done much to harm the civilization in these 13 years and their climate-powered attacks on Donald Trump aren't significantly different from those against George W. Bush (and Dick Cheney). When it comes to policies, the alarmism has mostly kept the status of an inconsequential religious ritual.

For a musically weaker fresh HEP remake, listen to The Proton Smash which is embedded at the top.

Most such anomalies go away, and I think that this one is also more likely than not to go away (partly because its behavior at the highest energy is strange), but if we were discarding these things automatically, the experiments would be useless. So I, for one, will surely observe the fate of this anomaly. Will ATLAS confirm it? Note that the CMS has used 20/fb at \(8\TeV\) plus 36/fb at \(13\TeV\). For the latter dataset, CMS already has a quadrupled amount of data, so it could double the significance if the effect were real. So even the 2.0 sigma excess at \(13\TeV\) could grow to 4.0 sigma (plus or minus something).
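The "quadrupled data could double the significance" statement is just the usual \(\sqrt{L}\) scaling of an expected significance with integrated luminosity; a one-line sanity check (my own, using the luminosities quoted above):

import math

sigma_now, lumi_now, lumi_new = 2.0, 36.0, 4 * 36.0     # significance and luminosities in /fb
print(sigma_now * math.sqrt(lumi_new / lumi_now))       # ~4.0 sigma, if the effect is real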

The Aleph-CMS combination, if we make it informally, is already enough for a claim of a discovery!

The Guardian makes the claim that theorists are generally excited while experimenters are grumpy and skeptical when such anomalies appear. It is so true. I am often closer to the experimenters but I think it's really a natural duty for a scientist to follow some promising anomalies even if he thinks that the probability that they will be real is well below 50%. So I will follow it.

The dimuon anomaly at \(28\GeV\) is analogous in significance to the diphoton anomaly at \(750\GeV\) a few years ago. I think it would be very healthy if we saw an avalanche of papers trying to explain the possible \(28\GeV\) dimuon anomaly (a Russian paper already links the anomaly to the anomaly of the muon magnetic moment). Sadly, people have been discouraged from doing this kind of serious work. On top of that, CERN and others have been partly occupied by cultural Marxists who "suspend" the skillful people (I mean Alessandro Strumia) who are actually most capable of doing research of that kind.

While the anomaly is as cool and exciting as the diphoton anomaly was, we may be forced by some evil forces to be grumpy this time. What a shame. Does it really make sense for the community to run a collider if analyses of the most interesting anomalies are discouraged and labeled politically incorrect?

At any rate, Alexandre Nikitenko and Yoram Soreq give a CERN talk on Thursday at 1 pm dedicated to these anomalies.



Electron's electric dipole moment, bonus

See Harvard Gazette for a story about the most precise measurement of the electron's electric dipole moment (its "roundness") so far, by Harvard's John Doyle et al. (ACME). Via Willie Soon

by Luboš Motl (noreply@blogger.com) at October 31, 2018 04:53 PM

October 30, 2018

Jon Butterworth - Life and Physics

Dark Matters
This is great. I had nothing to do with it, it happened in the 95% of the Physics Department (and of the lives of my PhD students) about which I know nothing. I recommend you watch it and form your …

by Jon Butterworth at October 30, 2018 08:57 PM

October 27, 2018

Robert Helling - atdotde

Interfere and it didn't happen
I am a bit late for the party, but I also wanted to share my two cents on the paper "Quantum theory cannot consistently describe the use of itself" by Frauchiger and Renner. After sitting down and working out the math for myself, I found that the analysis in this paper and the blogpost by Scott (including many of the 160+ comments, some by Renner) share a lot with what I am about to say, but maybe I can still contribute a slight twist.

Coleman on GHZS

My background is the talk "Quantum Mechanics In Your Face" by Sidney Coleman, which I consider the best argument why quantum mechanics cannot be described by a local and realistic theory (from which I would conclude it is not realistic). In a nutshell, the argument goes like this: Consider the three-qubit state

$$\Psi=\frac 1{\sqrt 2}(\uparrow\uparrow\uparrow-\downarrow\downarrow\downarrow)$$

which is both an eigenstate of eigenvalue -1 for $\sigma_x\otimes\sigma_x\otimes\sigma_x$ and an eigenstate of eigenvalue +1 for $\sigma_x\otimes\sigma_y\otimes\sigma_y$ or any permutation. This means that, given that the individual outcome of measuring a $\sigma$-matrix on a qubit is $\pm 1$, when measuring all three spins in the x-direction there will be an odd number of -1 results, but if two spins are measured in the y-direction and one in the x-direction there is an even number of -1's.

The latter tells us that the outcome of the x-measurement on one spin is the product of the two y-measurements on the other two spins. But multiplying this relation for all three spins we get, in shorthand, $XXX=(YYY)^2=+1$, in contradiction to the -1 eigenvalue for measuring all three spins in the x-direction.

The conclusion is (unless you assume some non-local conspiracy between the spins) that one has to take seriously the fact that on a given spin I cannot measure both $\sigma_x$ and $\sigma_y$, and thus when actually measuring one of them I must not even assume that the other has some (although unknown) value $\pm 1$, as that leads to the contradiction. Stuff that I cannot measure does not have a value (that is also my understanding of what "not realistic" means).
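Since the sign bookkeeping here is easy to get wrong, here is a tiny numerical check of the eigenvalue claims above (my own sketch; the state and operators are exactly the ones written above):

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# |Psi> = (|up,up,up> - |down,down,down>)/sqrt(2)
psi = (kron3(up, up, up) - kron3(down, down, down)) / np.sqrt(2)

for ops, label in [((X, X, X), "XXX"), ((X, Y, Y), "XYY"),
                   ((Y, X, Y), "YXY"), ((Y, Y, X), "YYX")]:
    O = kron3(*ops)
    # psi is an eigenvector of each O, so <psi|O|psi> equals the eigenvalue
    print(label, np.round((psi.conj() @ O @ psi).real, 6))
# prints: XXX -1.0, XYY 1.0, YXY 1.0, YYX 1.0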

Frauchiger and Renner

Now to the recent Nature paper. In short, they are dealing with two qubits (by which I only mean two-state systems). The first is in a box L' (I will try to use the somewhat unfortunate nomenclature from the paper) and the second is in a box L (L stands for lab). For L, we use the usual z-basis of $\uparrow$ and $\downarrow$ as well as the x-basis $\leftarrow = \frac 1{\sqrt 2}(\downarrow - \uparrow)$  and $\rightarrow  = \frac 1{\sqrt 2}(\downarrow + \uparrow)$ . Similarly, for L' we use the basis $h$ and $t$ (heads and tails as it refers to a coin) as well as $o = \frac 1{\sqrt 2}(h - t)$ and $f  = \frac 1{\sqrt 2}(h+t)$.  The two qubits are prepared in the state

$$\Phi = \frac{h\otimes\downarrow + \sqrt 2 t\otimes \rightarrow}{\sqrt 3}$$.

Clearly, a measurement of $t$ in box L' implies that box L has to contain the state $\rightarrow$. Call this observation A.

Let's re-express $\rightarrow$ in the z-basis:

$$\Phi =\frac {h\otimes \downarrow + t\otimes \downarrow + t\otimes\uparrow}{\sqrt 3}$$

From this, an observer inside box L who measures $\uparrow$ concludes that the qubit in box L' is in state $t$. Call this observation B.

Similarly, we can express the same state in the x-basis for L':

$$\Phi = \frac{2\, f\otimes \downarrow+ f\otimes \uparrow - o\otimes \uparrow}{\sqrt 6}$$

From this one can conclude that measuring $o$ for the state of L' implies that L is in the state $\uparrow$. Call this observation C.

Using now C, B and A one is tempted to conclude that observing L' to be in state $o$ implies that L is in state $\rightarrow$. When we express the state in the $of\leftarrow\rightarrow$-basis, however, we get

$$\Phi = \frac{f\otimes\leftarrow+ 3f\otimes \rightarrow + o\otimes\leftarrow - o\otimes \rightarrow}{\sqrt{12}}.$$

so with probability 1/12 we find both $o$  and $\leftarrow$. Again, we hit a contradiction.
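A quick numerical cross-check of that 1/12 (my own sketch, using exactly the basis conventions defined above):

import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])

right = (down + up) / np.sqrt(2)
left = (down - up) / np.sqrt(2)
f = (h + t) / np.sqrt(2)
o = (h - t) / np.sqrt(2)

# Phi = (h (x) down + sqrt(2) t (x) right) / sqrt(3); first factor is L', second is L
Phi = (np.kron(h, down) + np.sqrt(2) * np.kron(t, right)) / np.sqrt(3)

amp = np.kron(o, left) @ Phi      # amplitude <o, left | Phi>
print(amp**2)                     # 0.0833... = 1/12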

One is tempted to use the same way out as above in the three-qubit case and say one should not argue about contrafactual measurements that are incompatible with measurements that were actually performed. But Frauchiger and Renner found a set-up which seems to avoid that.

They have observers F' and F ("friends") inside the boxes that do the measurements in the $ht$ and $\uparrow\downarrow$ bases, whereas later observers W' and W measure the states of the boxes, including the observers F' and F, in the $of$ and $\leftarrow\rightarrow$ bases. So, at each stage of A, B, C the corresponding measurement has actually taken place and is not contrafactual!

Interference and it did not happen

I believe the way out is to realise that, at least from a retrospective perspective, this analysis stretches the language, and in particular the word "measurement", to the extreme. In order for W' to measure the state of L' in the $of$-basis, he has to interfere the contents of the box, including F', coherently, such that no information from F''s measurement of $ht$ remains. Thus, when W''s measurement is performed, one should not really say that F''s measurement has in any real sense happened, as no possible information is left over. So it is in any practical sense contrafactual.

To see the alternative, consider a variant of the experiment where a tiny bit of information (maybe the position of one air molecule or the excitation of one of F''s neurons) escapes the interference. Let's call the two possible states of that qubit of information $H$ and $T$ (not necessarily orthogonal) and consider instead the state where that neuron is also entangled with the first qubit:

$$\tilde \Phi =  \frac{h\otimes\downarrow\otimes H + \sqrt 2 t\otimes \rightarrow\otimes T}{\sqrt 3}$$.

Then, the result of step C becomes

$$\tilde\Phi = \frac{f\otimes \downarrow\otimes H+ o\otimes \downarrow\otimes H+f\otimes \downarrow\otimes T-o\otimes\downarrow\otimes T + f\otimes \uparrow\otimes T-o \otimes\uparrow\otimes T}{\sqrt 6}.$$

We see that now there is a term containing $o\otimes\downarrow\otimes(H-T)$. Thus, as long as the two possible states of the air molecule/neuron are actually different, observation C is no longer valid and the whole contradiction goes away.

This makes it clear that the whole argument relies on the fact that when W' is doing his measurement any remnant of the measurement by his friend F' is eliminated, and thus one should view the measurement of F' as if it never happened. Measuring L' in the $of$-basis really erases the measurement of F' in the complementary $ht$-basis.

by Robert Helling (noreply@blogger.com) at October 27, 2018 08:39 AM

October 26, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

New Frontiers


The post New Frontiers appeared first on None Equilibrium.

by nonequilibrium_admin at October 26, 2018 06:21 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

People & Society

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque mattis hendrerit ipsum, ac vehicula mauris iaculis eget. Aliquam quam felis, euismod ac arcu quis, laoreet pretium orci. Ut fermentum luctus lacus, ut convallis eros aliquam ut. In elementum sem vel commodo tristique. Sed non iaculis tortor. Maecenas vehicula lorem risus, in efficitur risus auctor et. Morbi ac ornare lacus. Aenean eu molestie ipsum. Proin tristique a purus quis semper.

Cras iaculis non metus id luctus. Aenean pellentesque et lacus sed malesuada. Integer nec tempor est, non convallis lacus. Duis ultrices sapien sit amet libero bibendum, eu auctor justo tempor. Etiam vitae arcu nisl. Mauris varius, lorem ut varius cursus, tortor lectus pellentesque justo, a dapibus lorem purus in eros. In vestibulum ultrices massa euismod fringilla. Sed iaculis semper commodo. Ut scelerisque tristique vestibulum. Sed dapibus porta risus nec ultricies. Nulla facilisi. Donec bibendum eu justo vel egestas. Sed vitae sagittis massa. Maecenas suscipit orci quis consequat sodales.

Sed facilisis condimentum ante sed bibendum. Phasellus sagittis commodo hendrerit. Etiam finibus nunc rutrum, volutpat risus vitae, tincidunt est. Proin id felis nisi. Etiam suscipit nec nulla at ultricies. Nam iaculis lacinia nunc, id ullamcorper purus luctus at. Pellentesque in consectetur ante. Sed eget porta enim.

The post People & Society appeared first on None Equilibrium.

by nonequilibrium_admin at October 26, 2018 06:19 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Environment & Energy

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque mattis hendrerit ipsum, ac vehicula mauris iaculis eget. Aliquam quam felis, euismod ac arcu quis, laoreet pretium orci. Ut fermentum luctus lacus, ut convallis eros aliquam ut. In elementum sem vel commodo tristique. Sed non iaculis tortor. Maecenas vehicula lorem risus, in efficitur risus auctor et. Morbi ac ornare lacus. Aenean eu molestie ipsum. Proin tristique a purus quis semper.

Cras iaculis non metus id luctus. Aenean pellentesque et lacus sed malesuada. Integer nec tempor est, non convallis lacus. Duis ultrices sapien sit amet libero bibendum, eu auctor justo tempor. Etiam vitae arcu nisl. Mauris varius, lorem ut varius cursus, tortor lectus pellentesque justo, a dapibus lorem purus in eros. In vestibulum ultrices massa euismod fringilla. Sed iaculis semper commodo. Ut scelerisque tristique vestibulum. Sed dapibus porta risus nec ultricies. Nulla facilisi. Donec bibendum eu justo vel egestas. Sed vitae sagittis massa. Maecenas suscipit orci quis consequat sodales.

Sed facilisis condimentum ante sed bibendum. Phasellus sagittis commodo hendrerit. Etiam finibus nunc rutrum, volutpat risus vitae, tincidunt est. Proin id felis nisi. Etiam suscipit nec nulla at ultricies. Nam iaculis lacinia nunc, id ullamcorper purus luctus at. Pellentesque in consectetur ante. Sed eget porta enim.

The post Environment & Energy appeared first on None Equilibrium.

by nonequilibrium_admin at October 26, 2018 06:18 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Particle Physics

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque mattis hendrerit ipsum, ac vehicula mauris iaculis eget. Aliquam quam felis, euismod ac arcu quis, laoreet pretium orci. Ut fermentum luctus lacus, ut convallis eros aliquam ut. In elementum sem vel commodo tristique. Sed non iaculis tortor. Maecenas vehicula lorem risus, in efficitur risus auctor et. Morbi ac ornare lacus. Aenean eu molestie ipsum. Proin tristique a purus quis semper.

Cras iaculis non metus id luctus. Aenean pellentesque et lacus sed malesuada. Integer nec tempor est, non convallis lacus. Duis ultrices sapien sit amet libero bibendum, eu auctor justo tempor. Etiam vitae arcu nisl. Mauris varius, lorem ut varius cursus, tortor lectus pellentesque justo, a dapibus lorem purus in eros. In vestibulum ultrices massa euismod fringilla. Sed iaculis semper commodo. Ut scelerisque tristique vestibulum. Sed dapibus porta risus nec ultricies. Nulla facilisi. Donec bibendum eu justo vel egestas. Sed vitae sagittis massa. Maecenas suscipit orci quis consequat sodales.

Sed facilisis condimentum ante sed bibendum. Phasellus sagittis commodo hendrerit. Etiam finibus nunc rutrum, volutpat risus vitae, tincidunt est. Proin id felis nisi. Etiam suscipit nec nulla at ultricies. Nam iaculis lacinia nunc, id ullamcorper purus luctus at. Pellentesque in consectetur ante. Sed eget porta enim.

The post Particle Physics appeared first on None Equilibrium.

by nonequilibrium_admin at October 26, 2018 06:16 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Space Exploration

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque mattis hendrerit ipsum, ac vehicula mauris iaculis eget. Aliquam quam felis, euismod ac arcu quis, laoreet pretium orci. Ut fermentum luctus lacus, ut convallis eros aliquam ut. In elementum sem vel commodo tristique. Sed non iaculis tortor. Maecenas vehicula lorem risus, in efficitur risus auctor et. Morbi ac ornare lacus. Aenean eu molestie ipsum. Proin tristique a purus quis semper.

Cras iaculis non metus id luctus. Aenean pellentesque et lacus sed malesuada. Integer nec tempor est, non convallis lacus. Duis ultrices sapien sit amet libero bibendum, eu auctor justo tempor. Etiam vitae arcu nisl. Mauris varius, lorem ut varius cursus, tortor lectus pellentesque justo, a dapibus lorem purus in eros. In vestibulum ultrices massa euismod fringilla. Sed iaculis semper commodo. Ut scelerisque tristique vestibulum. Sed dapibus porta risus nec ultricies. Nulla facilisi. Donec bibendum eu justo vel egestas. Sed vitae sagittis massa. Maecenas suscipit orci quis consequat sodales.

Sed facilisis condimentum ante sed bibendum. Phasellus sagittis commodo hendrerit. Etiam finibus nunc rutrum, volutpat risus vitae, tincidunt est. Proin id felis nisi. Etiam suscipit nec nulla at ultricies. Nam iaculis lacinia nunc, id ullamcorper purus luctus at. Pellentesque in consectetur ante. Sed eget porta enim.

The post Space Exploration appeared first on None Equilibrium.

by nonequilibrium_admin at October 26, 2018 06:13 PM

October 24, 2018

Jon Butterworth - Life and Physics

The trouble-makers of particle physics
The chances are you have heard quite a bit about the Higgs boson. The goody-two-shoes of particle physics, it may have been hard to find, but when it was discovered it was just as the theory – the Standard Model … Continue reading

by Jon Butterworth at October 24, 2018 11:47 AM

Axel Maas - Looking Inside the Standard Model

Looking for something when no one knows how much is there
This time, I want to continue the discussion from some months ago. Back then, I was rather general on how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, our idea is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiment, and call by these names, is actually not the same as the elementary particles themselves. Rather, these objects are a kind of bound state of the elementary ones, which only look at first sight as if they themselves were the elementary ones. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We had some ideas already a while back. One of them will be ready soon, and I will talk again about it in due time. But this will be rather indirect, and somewhat qualitative. The other, however, required a new experiment, which may need two more decades to build. Thus, both cannot be the answer alone, and we need something more.

And this "more" is what we are currently closing in on. Because one needs this kind of weird bound-state structure to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary ones are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So, we already have a machine which may be able to test it. But this is involved, as protons are very messy. Already in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle old and new components. So, we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (and other experiments) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected, e.g. whether they fly away from the interaction in the expected directions and with the expected speeds.

So what a master student and I do is the following. We use a program, called HERWIG, which simulates such events. One of the people who created this program helped us to modify it, so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contribution to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution to the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is located inside the proton.

And this is what we now study. We iterate modifications of the proton structure, comparing them to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution to the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. On the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able either to constrain the contribution further, or even to detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.
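
To make the logic of such an upper bound concrete, here is a deliberately oversimplified sketch (entirely my own toy numbers and binning, nothing to do with the actual HERWIG-based analysis): a single parameter c scales a hypothetical extra contribution on top of a baseline prediction for some binned distribution, and a chi-square scan against pseudo-data yields a 95% confidence upper limit on c.

import numpy as np

# Toy binned distribution: baseline prediction, a hypothetical extra shape
# scaled by a parameter c, pseudo-data and crude statistical errors.
# All numbers below are placeholders.
baseline = np.array([120.0, 80.0, 45.0, 20.0])
extra    = np.array([ 10.0, 12.0,  8.0,  3.0])
data     = np.array([118.0, 83.0, 44.0, 21.0])
sigma    = np.sqrt(data)

def chi2(c):
    pred = baseline + c * extra
    return np.sum(((data - pred) / sigma) ** 2)

# Scan c; the 95% CL upper limit is where chi2 rises 3.84 above its minimum
cs = np.linspace(0.0, 2.0, 2001)
chi2s = np.array([chi2(c) for c in cs])
c_best = cs[np.argmin(chi2s)]
limit = cs[(chi2s <= chi2s.min() + 3.84) & (cs >= c_best)].max()
print(f"best fit c = {c_best:.2f}, 95% CL upper limit c < {limit:.2f}")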

by Axel Maas (noreply@blogger.com) at October 24, 2018 07:26 AM

October 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The thrill of a good conference

One of the perks of academia is the thrill of presenting results, thoughts and ideas at international conferences. Although the best meetings often fall at the busiest moment in the teaching semester and the travel can be tiring, there is no doubt that interacting directly with one’s peers is a huge shot in the arm for any researcher – not to mention the opportunity to travel to interesting locations and experience different cultures.


The view from my hotel in San Sebastian this morning.

This week, I travelled to San Sebastian in Spain to attend the Third International Conference on the History of Physics, the latest in a series of conferences that aim to foster dialogue between physicists with an interest in the history of their subject and professional historians of science. I think it’s fair to say the conference was a great success, with lots of interesting talks on a diverse range of topics. It didn’t hurt that the meeting took place in the Palacio Miramar, a beautiful building in a fantastic location.


The Palacio Miramar in San Sebastian.

The conference programme can be found here. I didn’t get to all the talks due to parallel timetabling, but three major highlights for me were ‘Structure or Agent? Max Planck and the Birth of Quantum Theory’ by Massimiliano Badino of the University of Verona, ‘The Principle of Plenitude as a Guiding Theme in Modern Physics’ by Helge Kragh of the University of Copenhagen, and ‘Rutherford’s Favourite Radiochemist: Bertram Borden’ by Edward Davis of the University of Cambridge.


A slide from the paper ‘Max Planck and the Birth of Quantum Theory’

My own presentation was titled ‘The Dawning of Cosmology – Internal vs External Histories’ (the slides are here). In it, I considered the story of the emergence of the ‘big bang’ theory of the universe from two different viewpoints, the professional physicist vs. the science historian. (The former approach is sometimes termed ‘internal history’ as scientists tend to tell the story of scientific discovery as an interplay of theory and experiment within the confines of science. The latter approach is termed ‘external’ because the professional historian will consider external societal factors such as the prestige of researchers and their institutions and the relevance of national or international contexts.) Nowadays, it is generally accepted that both internal and external factors usually play a role in a given scientific advance, a process that has been termed the co-production of scientific knowledge.


Giving my paper in the conference room

As it was a short talk, I focused on three key stages in the development of the big bang model: the first (static) models of the cosmos that arose from relativity, the switch to expanding cosmologies in the 1930s, and finally the transition (much more gradual) to the idea of a universe that was once small, dense and hot. In preparing the paper, I found that the first stage was driven almost entirely by theoretical considerations (namely, Einstein’s wish to test his newly-minted general theory of relativity by applying it to the universe as a whole), with little evidence of co-production. Similarly, I found that the switch to expanding cosmologies was driven almost entirely by developments in astronomy (namely, Hubble’s observations of the recession of the galaxies). Finally, I found the long rejection of Lemaître’s ‘fireworks’ universe was driven by obvious theoretical problems associated with the model (such as the problem of the singularity and the age paradox), while the eventual acceptance of the model was driven by major astronomical advances such as the discovery of the cosmic microwave background. Overall, my conclusion was that one could give a reasonably coherent account of the early development of modern cosmology in terms of the traditional narrative of an interplay of theory and experiment, with little evidence that social considerations played an important role in this particular story. As I once heard the noted historian Hasok Chang remark in a seminar, ‘Sometimes science is the context’.

Can one draw any general conclusions from this little study? I think it would be interesting to investigate the matter further. One possibility is that social considerations become more important ‘as a field becomes a field’, i.e., as a new area of physics coalesces into its own distinct field, with specialized journals, postgraduate positions and undergraduate courses etc. Could it be that the traditional narrative works surprisingly well when considering the dawning of a field because the co-production effect is less pronounced then? Certainly, I have also found it hard to discern any major societal influence in the dawning of other theories such as special relativity or general relativity.

Coda

As a coda, I discussed a pet theme of mine; that the co-productive nature of scientific discovery presents a special problem for the science historian. After all, in order to weigh the relative impact of internal vs external considerations on a given scientific advance, one must presumably have a good understanding of each. But it takes many years of specialist training to attempt to place a scientific advance in its true scientific context, an impossible ask for a historian trained in the humanities. Some science historians avoid this problem by ‘black-boxing’ the science and focusing on social context alone. However, this means the internal scientific aspects of the story are either ignored or repeated from secondary sources, rather than offering new insights from perusing primary materials. Besides, how can one decide whether a societal influence is significant or not without considering the science? For example, Paul Forman’s argument concerning the influence of contemporaneous German culture on the acceptance of the Uncertainty Principle in quantum theory is interesting, but pays little attention to physics; a physicist might point out that it quickly became clear to the quantum theorists (many of whom were not German) that the Uncertainty Principle arose inevitably from wave-particle duality in all three formulations of the theory (see Hendry on this for example).

Indeed, now that it is accepted one needs to consider both internal and external factors in studying a given scientific advance, it’s not obvious to me what the professionalization of science history should look like, i.e., how the next generation of science historians should be trained. In the meantime, I think there is a good argument for the use of multi-disciplinary teams of collaborators in the study of the history of science.

All in all, a very enjoyable conference. I wish there had been time to relax and have a swim in the bay, but I never got a moment. On the other hand, I managed to stock up on some free issues of my favourite publication in this area, the European Physical Journal (H).  On the plane home, I had a great read of a seriously good EPJH article by S.M. Bilenky on the history of neutrino physics. Consider me inspired….

by cormac at October 20, 2018 09:51 PM

October 17, 2018

Robert Helling - atdotde

Bavarian electoral system
Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations in the rules (assuming the actual distribution of votes) would have worked out. So, first, here is how the seats are actually distributed: Each voter gets two ballots: On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you can vote for a party list (it's even more complicated because also there, you can select individual candidates to determine the position on the list, but let's ignore that for today).

Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (like in first past the post) gets elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is not of practical concern here). Let's call the rest the "reduced total". The seats are then distributed according to the fraction of each party in this reduced total.

Of course, the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer method: you first distribute the integer parts. This clearly leaves fewer seats open than the number of parties. Those you then give to the parties where the rounding error to the integer below was greatest. Check out the wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.

Because this is what happens in the next step: Remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats that each party is supposed to get in step two according to the fraction of votes. Now it can happen that a party has won more direct candidates than seats allocated in step two. If that happens, more seats are added to the total number of seats and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the other ones, leading to that party winning almost all direct candidates (as happened in Bavaria to the CSU, which won all direct candidates except five in Munich and one in Würzburg, which went to the Greens).
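
For illustration, here is a minimal sketch of the two-step allocation just described (my own simplified reconstruction in Python, not the perl script mentioned below, and ignoring the 5% threshold and the seven districts): Hare-Niemeyer rounding, followed by enlarging the house until every party keeps its direct mandates.

from math import floor

def hare_niemeyer(votes, seats):
    """Largest-remainder (Hare-Niemeyer) apportionment of `seats` by vote share."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    # hand out the remaining seats by largest fractional remainder
    for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

def allocate_with_direct(votes, base_seats, direct):
    """Grow the house until every party gets at least its direct mandates."""
    seats = base_seats
    while True:
        alloc = hare_niemeyer(votes, seats)
        if all(alloc[p] >= direct.get(p, 0) for p in votes):
            return alloc
        seats += 1

# Toy example (made-up numbers, not the Bavarian results):
votes  = {"A": 4200, "B": 3100, "C": 1600, "D": 1100}
direct = {"A": 6}                 # party A won 6 constituencies directly
print(allocate_with_direct(votes, base_seats=10, direct=direct))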

A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So the rounding and seat-adding procedures happen seven times.

Sunday's election resulted in the following distribution of seats:

After the whole procedure, there are 205 seats distributed as follows


  • CSU 85 (41.5% of seats)
  • SPD 22 (10.7% of seats)
  • FW 27 (13.2% of seats)
  • GREENS 38 (18.5% of seats)
  • FDP 11 (5.4% of seats)
  • AFD 22 (10.7% of seats)
You can find all the vote totals on this page.

Now, for example, one can calculate the distribution without districts, just throwing everything into a single super-district. Then there are 208 seats, distributed as

  • CSU 85 (40.8%)
  • SPD 22 (10.6%)
  • FW 26 (12.5%)
  • GREENS 40 (19.2%)
  • FDP 12 (5.8%)
  • AFD 23 (11.1%)
You can see that in particular the CSU, the party with the biggest number of votes, profits from doing the rounding 7 times rather than just once, and that the last three parties would benefit from giving up districts.

But then there is actually an issue of negative weight of votes: The greens are particularly strong in Munich, where they managed to win 5 direct seats. If instead those seats had gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU, increasing the weight of Oberbayern compared to the other districts, which would then be beneficial for the greens, as they are particularly strong in Oberbayern. So if I give all the direct candidates to the CSU (without modifying the numbers of total votes), I get the following distribution:
Seats: 221
  • CSU 91 (41.2%)
  • SPD 24 (10.9%)
  • FW 28 (12.6%)
  • GREENS 42 (19.0%)
  • FDP 12 (5.4%)
  • AFD 24 (10.9%)
That is, the greens would have gotten a higher fraction of seats if they had won fewer constituencies. Voting for green candidates in Munich actually hurt the party as a whole!

The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition) but still, the constitutional court does not like (predictable) negative weight of votes. Let's see if somebody challenges this election and what that would lead to.

The perl script I used to do this analysis is here.

Postscript:
The above analysis in the last point is not entirely fair, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the greens the votes by which they were leading in the constituencies they won leads to an almost zero effect:

Seats: 220
  • CSU  91 41.4%
  • SPD  24 10.9%
  • FW  28 12.7%
  • GREENS  41 18.6%
  • FDP  12 5.4%
  • AFD  24 10.9%
Letting the greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up all of central Munich's more left-leaning voters; do I hear somebody say "Gerrymandering"?) yields

Seats: 217
  • CSU  90 41.5%
  • SPD  23 10.6%
  • FW  28 12.9%
  • GREENS  41 18.9%
  • FDP  12 5.5%
  • AFD  23 10.6%
Or letting them win all but Moosach and Würzburg-Stadt, where the lead was the smallest:

Seats: 210

  • CSU  87 41.4%
  • SPD  22 10.5%
  • FW  27 12.9%
  • GREENS  40 19.0%
  • FDP  11 5.2%
  • AFD  23 11.0%


by Robert Helling (noreply@blogger.com) at October 17, 2018 06:55 PM

October 15, 2018

Clifford V. Johnson - Asymptotia

Mindscape Interview!

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

I hope you enjoy it!!

-cvj Click to continue reading this post

The post Mindscape Interview! appeared first on Asymptotia.

by Clifford at October 15, 2018 06:47 PM

October 14, 2018

Clifford V. Johnson - Asymptotia

Futuristic Podcast Interview

For your listening pleasure: I've been asked to do a number of longer interviews recently. One of these was for the "Futuristic Podcast of Mark Gerlach", who interviews all sorts of people from the arts (normally) over to the sciences (well, he hopes to do more of that starting with me). Go and check out his show on iTunes. The particular episode with me can be found as episode 31. We talk about a lot of things, from how people get into science (including my take on the nature vs nurture discussion), through the changes in how people get information about science to the development of string theory, to black holes and quantum entanglement - and a host of things in between. We even talked about The Dialogues, you'll be happy to hear. I hope you enjoy listening!

(The picture? Not immediately relevant, except for the fact that I did cycle to the place the recording took place. I mostly put it there because I was fixing my bike not long ago and it is good to have a photo in a post. That is all.)

-cvj Click to continue reading this post

The post Futuristic Podcast Interview appeared first on Asymptotia.

by Clifford at October 14, 2018 07:22 PM

October 13, 2018

John Baez - Azimuth

Category Theory Course

I’m teaching a course on category theory at U.C. Riverside, and since my website is still suffering from reduced functionality I’ll put the course notes here for now. I taught an introductory course on category theory in 2016, but this one is a bit more advanced.

The hand-written notes here are by Christian Williams. They are probably best seen as a reminder to myself as to what I’d like to include in a short book someday.

Lecture 1: What is pure mathematics all about? The importance of free structures.

Lecture 2: The natural numbers as a free structure. Adjoint functors.

Lecture 3: Adjoint functors in terms of unit and counit.

Lecture 4: 2-Categories. Adjunctions.

Lecture 5: 2-Categories and string diagrams. Composing adjunctions.

Lecture 6: The ‘main spine’ of mathematics. Getting a monad from an adjunction.

Lecture 7: Definition of a monad. Getting a monad from an adjunction. The augmented simplex category.

Lecture 8: The walking monad, the augmented simplex category and the simplex category.

Lecture 9: Simplicial abelian groups from simplicial sets. Chain complexes from simplicial abelian groups.

Lecture 10: The Dold-Thom theorem: the category of simplicial abelian groups is equivalent to the category of chain complexes of abelian groups. The homology of a chain complex.

Lecture 12: The bar construction: getting a simplicial object from an adjunction. The bar construction for G-sets, previewed.

Lecture 13: The adjunction between G-sets and sets.

Lecture 14: The bar construction for groups.

Lecture 15: The simplicial set \mathbb{E}G obtained by applying the bar construction to the one-point G-set, its geometric realization EG = |\mathbb{E}G|, and the free simplicial abelian group \mathbb{Z}[\mathbb{E}G].

Lecture 16: The chain complex C(G) coming from the simplicial abelian group \mathbb{Z}[\mathbb{E}G], its homology, and the definition of group cohomology H^n(G,A) with coefficients in a G-module.

Lecture 17: Extensions of groups. The Jordan-Hölder theorem. How an extension of a group G by an abelian group A gives an action of G on A and a 2-cocycle c \colon G^2 \to A.

Lecture 18: Classifying abelian extensions of groups. Direct products, semidirect products, central extensions and general abelian extensions. The groups of order 8 as abelian extensions.

Lecture 19: Group cohomology. The chain complex for the cohomology of G with coefficients in A, starting from the bar construction, and leading to the 2-cocycles used in classifying abelian extensions. The classification of extensions of G by A in terms of H^2(G,A).

Lecture 20: Examples of group cohomology: nilpotent groups and the fracture theorem. Higher-dimensional algebra and homotopification: the nerve of a category and the nerve of a topological space. \mathbb{E}G as the nerve of the translation groupoid G/\!/G. BG = EG/G as the walking space with fundamental group G.

by John Baez at October 13, 2018 11:35 PM

October 07, 2018

John Baez - Azimuth

Lebesgue Universal Covering Problem (Part 3)

Back in 2015, I reported some progress on this difficult problem in plane geometry. I’m happy to report some more.

First, remember the story. A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. A universal covering is a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. In 1914, the famous mathematician Henri Lebesgue sent a letter to a fellow named Pál, challenging him to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

0.86602540…
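
As a quick sanity check on that number (an aside of mine, not part of the original post): a regular hexagon whose inscribed circle has diameter 1 has inradius 1/2 and hence area 2√3·(1/2)² = √3/2.

import math

# Area of a regular hexagon with inradius r is 2*sqrt(3)*r^2; here r = 1/2.
r = 0.5
print(2 * math.sqrt(3) * r**2)   # 0.8660254..., matching the value above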

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

The resulting universal covering has area

0.84529946…

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708436…

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 · 10^-11 and 6 · 10^-18. This turned out to be wrong. The actual areas are 3.7507 · 10^-11 and 8.4460 · 10^-21. The resulting universal covering had an area of

0.844137708416…

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, in 2015 Philip Gibbs found a way to remove about a million times more area than Hansen’s larger region: a whopping 2.233 · 10^-5. This gave a universal covering with area

0.844115376859…

Karine Bagdasaryan and I helped Gibbs write up a rigorous proof of this result, and we published it here:

• John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Journal of Computational Geometry 6 (2015), 288–299.

Greg Egan played an instrumental role as well, catching various computational errors.

At the time Philip was sure he could remove even more area, at the expense of a more complicated proof. Since the proof was already quite complicated, we decided to stick with what we had.

But this week I met Philip at The philosophy and physics of Noether’s theorems, a wonderful workshop in London which deserves a full blog article of its own. It turns out that he has gone further: he claims to have found a vastly better universal covering, with area

0.8440935944…

This is an improvement of 2.178245 × 10^-5 over our earlier work—roughly equal to our improvement over Hansen.

You can read his argument here:

• Philip Gibbs, An upper bound for Lebesgue’s universal covering problem, 22 January 2018.

I say ‘claims’ not because I doubt his result—he’s clearly a master at this kind of mathematics!—but because I haven’t checked it and it’s easy to make mistakes, for example mistakes in computing the areas of the shapes removed.

It seems we are closing in on the final result; however, Philip Gibbs believes there is still room for improvement, so I expect it will take at least a decade or two to solve this problem… unless, of course, some mathematicians start working on it full-time, which could speed things up considerably.

by John Baez at October 07, 2018 02:08 PM

October 06, 2018

John Baez - Azimuth

Riverside Math Workshop

We’re having a workshop with a bunch of cool math talks at U. C. Riverside, and you can register for it here:

Riverside Mathematics Workshop for Excellence and Diversity, Friday 19 October – Saturday 20 October, 2018. Organized by John Baez, Carl Mautner, José González and Chen Weitao.

This is the first of an annual series of workshops to showcase and celebrate excellence in research by women and other under-represented groups for the purpose of fostering and encouraging growth in the U.C. Riverside mathematical community.

After tea at 3:30 p.m. on Friday there will be two plenary talks, lasting until 5:00. Catherine Searle will talk on “Symmetries of spaces with lower curvature bounds”, and Edray Goins will give a talk called “Clocks, parking garages, and the solvability of the quintic: a friendly introduction to monodromy”. There will then be a banquet in the Alumni Center 6:30 – 8:30 p.m.

On Saturday there will be coffee and a poster session at 8:30 a.m., and then two parallel sessions on pure and applied mathematics, with talks at 9:30, 10:30, 11:30, 1:00 and 2:00. Check out the abstracts here!

(I’m especially interested in Christina Vasilakopoulou’s talk on Frobenius and Hopf monoids in enriched categories, but she’s my postdoc so I’m biased.)

by John Baez at October 06, 2018 07:22 AM

October 02, 2018

John Baez - Azimuth

Applied Category Theory 2019

 

animation by Marius Buliga

I’m helping organize ACT 2019, an applied category theory conference and school at Oxford, July 15-26, 2019.

More details will come later, but here’s the basic idea. If you’re a grad student interested in this subject, you should apply for the ‘school’. Not yet—we’ll let you know when.

Dear all,

As part of a new growing community in Applied Category Theory, now with a dedicated journal Compositionality, a traveling workshop series SYCO, a forthcoming Cambridge U. Press book series Reasoning with Categories, and several one-off events including at NIST, we launch an annual conference+school series named Applied Category Theory, the coming one being at Oxford, July 15-19 for the conference, and July 22-26 for the school. The dates are chosen such that CT 2019 (Edinburgh) and the ACT 2019 conference (Oxford) will be back-to-back, for those wishing to participate in both.

There already was a successful invitation-only pilot, ACT 2018, last year at the Lorentz Centre in Leiden, also in the format of school+workshop.

For the conference, for those who are familiar with the successful QPL conference series, we will follow a very similar format for the ACT conference. This means that we will accept both new papers which then will be published in a proceedings volume (most likely a Compositionality special Proceedings issue), as well as shorter abstracts of papers published elsewhere. There will be a thorough selection process, as typical in computer science conferences. The idea is that all the best work in applied category theory will be presented at the conference, and that acceptance is something that means something, just like in CS conferences. This is particularly important for young people as it will help them with their careers.

Expect a call for submissions soon, and start preparing your papers now!

The school in ACT 2018 was unique in that small groups of students worked closely with an experienced researcher (these were John Baez, Aleks Kissinger, Martha Lewis and Pawel Sobociński), and each group ended up producing a paper. We will continue with this format or a closely related one, with Jules Hedges and Daniel Cicala as organisers this year. As there were 80 applications last year for 16 slots, we may want to try to find a way to involve more students.

We are fortunate to have a number of private sector companies closely associated in some way or another, who will also participate, with Cambridge Quantum Computing Inc. and StateBox having already made major financial/logistic contributions.

On behalf of the ACT Steering Committee,

John Baez, Bob Coecke, David Spivak, Christina Vasilakopoulou

by John Baez at October 02, 2018 04:11 PM

October 01, 2018

Clifford V. Johnson - Asymptotia

Diverse Futures

I was asked by editors of the magazine Physics World's 30th anniversary edition to do a drawing that somehow captures changes in physics over the last 30 years, and looks forward to 30 years from now. This was an interesting challenge. There was not anything like the freedom to use space that I had in other works I've done, like my graphic book about science "The Dialogues", or my glimpse of the near future in my SF story "Resolution" in the Twelve Tomorrows anthology. I had over 230 pages for the former, and 20 pages for the latter. Here, I had one page. Well, actually a little over 2/3 of a page (once you take into account the introductory text, etc).

So I thought about it a lot. The editors wanted to show an active working environment, and so I thought about the interiors of labs for some time, looked up lots of physics breakthroughs over the years, and reflected on what might come. I eventually realized that the most important single change in the science that can be visually depicted (and arguably the single most important change of any kind) is the change that's happened to the scientists. Most importantly, we've become more diverse in various ways (not uniformly across all fields though), much more collaborative, and the means by which we communicate in order to do science have expanded greatly. All of this has benefited the science greatly, and I think that if you were to get a time machine and visit a lab 30 years ago, or 30 years from now, it will be the changes in the people that will most strike you, if you're paying attention. So I decided to focus on the break/discussion area of the lab, and imagined that someone stood in the same spot each year and took a snapshot. What we're seeing is those photos tacked to a noticeboard somewhere, and that's our time machine. Have a look, and keep an eye out for various details I put in to reflect the different periods. Enjoy! (Direct link here, and below I've embedded the image itself that's from the magazine. I recommend reading the whole issue, as it is a great survey of the last 30 years.)

Physics World Illustration showing snapshots in time by Clifford V. Johnson

-cvj Click to continue reading this post

The post Diverse Futures appeared first on Asymptotia.

by Clifford at October 01, 2018 06:00 PM

September 29, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

History of Physics at the IoP

This week saw a most enjoyable conference on the history of physics at the Institute of Physics in London. The IoP has had an active subgroup in the history of physics for many years, complete with its own newsletter, but this was the group’s first official workshop for a long while. It proved to be a most enjoyable and informative occasion; I hope it is the first of many to come.


The Institute of Physics at Portland Place in London (made famous by writer Ian McEwan in the novel ‘Solar’, as the scene of a dramatic clash between a brilliant physicist of questionable integrity and a Professor of Science Studies)

There were plenty of talks on what might be called ‘classical history’, such as Maxwell, Kelvin and the Inverse Square law of Electrostatics (by Isobel Falconer of the University of St. Andrews) and Newton’s First Law – a History (by Paul Ranford of University College London), while the more socially-minded historian might have enjoyed talks such as Psychical and Optical Research; Between Lord Rayleigh’s Naturalism and Dualism (by Gregory Bridgman of the University of Cambridge) and The Paradigm Shift of Physics -Religion-Unbelief Relationship from the Renaissance to the 21st Century (by Elisabetta Canetta of St Mary’s University). Of particular interest to me were a number of excellent talks drawn from the history of 20th century physics, such as A Partial History of Cosmic Ray Research in the UK (by the leading cosmic ray physicist Alan Watson), The Origins and Development of Free-Electron Lasers in the UK (by Elaine Seddon of Daresbury Laboratory),  When Condensed Matter became King (by Joseph Martin of the University of Cambridge), and Symmetries: On Physical and Aesthetic Argument in the Development of Relativity (by Richard Staley of the University of Cambridge). The official conference programme can be viewed here.

My own talk, Interrogating the Legend of Einstein’s “Biggest Blunder”, was a brief synopsis of our recent paper on this topic, soon to appear in the journal Physics in Perspective. Essentially our finding is that, despite recent doubts about the story, the evidence suggests that Einstein certainly did come to view his introduction of the cosmological constant term to the field equations as a serious blunder and almost certainly did declare the term his “biggest blunder” on at least one occasion. Given his awareness of contemporaneous problems such as the age of the universe predicted by cosmologies without the term, this finding has some relevance to those of today’s cosmologists who seek to describe the recently-discovered acceleration in cosmic expansion without a cosmological constant. The slides for the talk can be found here.

I must admit I missed a trick at question time. Asked about other  examples of ‘fudge factors’ that were introduced and later regretted, I forgot the obvious one. In 1900, Max Planck suggested that energy transfer between oscillators somehow occurs in small packets or ‘quanta’ of energy in order to successfully predict the spectrum of radiation from a hot body. However, he saw this as a mathematical device and was not at all supportive of the more general postulate of the ‘light quantum’ when it was proposed by a young Einstein in 1905.  Indeed, Planck rejected the light quantum for many years.

All in all, a superb conference. It was also a pleasure to visit London once again. As always, I booked a cheap ‘n’ cheerful hotel in the city centre, walkable to the conference. On my way to the meeting, I walked past Madame Tussauds and the Royal Academy of Music, and had breakfast at the tennis courts in Regent’s Park. What a city!


Walking past the Royal Academy on my way to the conference


Views of London over a quick dinner after the conference

by cormac at September 29, 2018 09:07 PM

September 27, 2018

Axel Maas - Looking Inside the Standard Model

Unexpected connections
The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often they also failed their original purpose miserably, but are paramount for the new one. Newer examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: Dark matter. This is a hypothetical type of matter for which we see a lot of indirect evidence in the universe. But we are still mystified as to what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws the moth. Hence, our group in Graz is starting to push in this direction as well, being curious about what is going on. For now, we follow the most probable explanation, that there are additional particles making up dark matter. Then there are two questions: What are they? And do they, and if yes how, interact with the rest of the world? Aside from gravity, of course.

Next week I will go to a workshop in which new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noted that there is this connection. I will actually present this idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time an equally plausible one as many others.

And here is how it works. Theories of the grand-unified type were for a long time expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed that, with an improved treatment and a shift in paradigm, this is not always true. At least some of these theories do not have massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones. And then the mass is gone, and this does not work. This is the reason why such theories had been excluded. But with our new results, they become feasible. Even more so, we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is how it works. The scenario I will describe (you can already download my talk now, if you want to look for yourself - though it is somewhat technical) finds two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this, to make sure that everything works with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test this model not only by astronomical observations, but at CERN. This gives the basic idea. Now, we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense, or whether the model will have to wait for another opportunity.

by Axel Maas (noreply@blogger.com) at September 27, 2018 11:53 AM

September 25, 2018

Sean Carroll - Preposterous Universe

Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He’s also proposed a proof of the Riemann conjecture; I have zero insight to give there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc; and inversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these.
As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should give us skepticism that he’s on the right track, but it’s conceivable.

More importantly, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but to any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most importantly, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e^2/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θ_W relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sin θ_W. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
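
As a rough numerical illustration of these relations (my own back-of-the-envelope sketch, ignoring the running of the couplings, so the numbers only match at the ten-percent level): from α ≈ 1/137 one gets e ≈ 0.30, and with sin²θ_W ≈ 0.23 this corresponds to g ≈ 0.63.

import math

alpha = 1 / 137.036          # low-energy fine-structure constant
sin2_thetaW = 0.2312         # approximate weak mixing angle (scheme/scale dependent)

e = math.sqrt(4 * math.pi * alpha)   # alpha = e^2 / 4 pi  =>  e ~ 0.303
g = e / math.sqrt(sin2_thetaW)       # e = g sin(theta_W)  =>  g ~ 0.63

print(f"e ~ {e:.3f}, g ~ {g:.3f}")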

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

by Sean Carroll at September 25, 2018 08:03 AM

September 20, 2018

John Baez - Azimuth

Patterns That Eventually Fail

Sometimes patterns can lead you astray. For example, it’s known that

\displaystyle{ \mathrm{li}(x) = \int_0^x \frac{dt}{\ln t} }

is a good approximation to \pi(x), the number of primes less than or equal to x. Numerical evidence suggests that \mathrm{li}(x) is always greater than \pi(x). For example,

\mathrm{li}(10^{12}) - \pi(10^{12}) = 38,263

and

\mathrm{li}(10^{24}) - \pi(10^{24}) = 17,146,907,278
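
For a quick hands-on check at a more modest value of x (my own aside; the figures above for 10^12 and 10^24 come from serious computations), one can compare li(x) and π(x) at x = 10^6 with standard Python libraries:

# A small check at x = 10^6 (the values quoted above for 10^12 and 10^24
# require serious prime-counting computations, so we stay modest here).
from mpmath import li          # logarithmic integral, principal value from 0
from sympy import primepi      # exact prime-counting function

x = 10**6
print(li(x))                     # roughly 78627.5
print(int(primepi(x)))           # 78498
print(li(x) - int(primepi(x)))   # roughly 129.5, so li(x) > pi(x) here too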

But in 1914, Littlewood heroically showed that in fact, \mathrm{li}(x) - \pi(x) changes sign infinitely many times!

This raised the question: when does \pi(x) first exceed \mathrm{li}(x)? In 1933, Littlewood’s student Skewes showed, assuming the Riemann hypothesis, that it must do so for some x less than or equal to

\displaystyle{ 10^{10^{10^{34}}} }

Later, in 1955, Skewes showed without the Riemann hypothesis that \pi(x) must exceed \mathrm{li}(x) for some x smaller than

\displaystyle{ 10^{10^{10^{964}}} }

By now this bound has been improved enormously. We now know the two functions cross somewhere near 1.397 \times 10^{316}, but we don’t know if this is the first crossing!

All this math is quite deep. Here is something less deep, but still fun.

You can show that

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, dt = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \, \frac{\sin \left(\frac{t}{301}\right)}{\frac{t}{301}} \, dt = \frac{\pi}{2} }

and so on.

It’s a nice pattern. But this pattern doesn’t go on forever! It lasts a very, very long time… but not forever.

More precisely, the identity

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }

holds when

n < 9.8 \cdot 10^{42}

but not for all n. At some point it stops working and never works again. In fact, it definitely fails for all

n > 7.4 \cdot 10^{43}
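
To get a feel for why the pattern survives for so long: as explained below, the identity holds as long as a certain sum of reciprocals stays at most 1, and that sum grows only logarithmically. A quick sketch in Python:

    # Even after a million factors, the controlling sum is still tiny.
    s = sum(1.0 / (100 * k + 1) for k in range(1, 10**6 + 1))
    print(s)   # roughly 0.14, far below the threshold of 1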

The explanation

The integrals here are a variant of the Borwein integrals:

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, dx= \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\frac{\sin(x/3)}{x/3} \, dx = \frac{\pi}{2} }

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x}\, \frac{\sin(x/3)}{x/3} \, \frac{\sin(x/5)}{x/5} \, dx = \frac{\pi}{2} }

where the pattern continues until

\displaystyle{ \int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots\frac{\sin(x/13)}{x/13} \, dx = \frac{\pi}{2} }

but then fails:

\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(x/3)}{x/3}\cdots \frac{\sin(x/15)}{x/15} \, dx \approx \frac \pi 2 - 2.31\times 10^{-11} }

I never understood this until I read Greg Egan’s explanation, based on the work of Hanspeter Schmid. It’s all about convolution, and Fourier transforms:

Suppose we have a rectangular pulse, centred on the origin, with a height of 1/2 and a half-width of 1.

Now, suppose we keep taking moving averages of this function, again and again, with the average computed in a window of half-width 1/3, then 1/5, then 1/7, 1/9, and so on.

There are a couple of features of the original pulse that will persist completely unchanged for the first few stages of this process, but then they will be abruptly lost at some point.

The first feature is that F(0) = 1/2. In the original pulse, the point (0,1/2) lies on a plateau, a perfectly constant segment with a half-width of 1. The process of repeatedly taking the moving average will nibble away at this plateau, shrinking its half-width by the half-width of the averaging window. So, once the sum of the windows’ half-widths exceeds 1, at 1/3+1/5+1/7+…+1/15, F(0) will suddenly fall below 1/2, but up until that step it will remain untouched.

In the animation below, the plateau where F(x)=1/2 is marked in red.

The second feature is that F(–1)=F(1)=1/4. In the original pulse, we have a step at –1 and 1, but if we define F here as the average of the left-hand and right-hand limits we get 1/4, and once we apply the first moving average we simply have 1/4 as the function’s value.

In this case, F(–1)=F(1)=1/4 will continue to hold so long as the points (–1,1/4) and (1,1/4) are surrounded by regions where the function has a suitable symmetry: it is equal to an odd function, offset and translated from the origin to these centres. So long as that’s true for a region wider than the averaging window being applied, the average at the centre will be unchanged.

The initial half-width of each of these symmetrical slopes is 2 (stretching from the opposite end of the plateau and an equal distance away along the x-axis), and as with the plateau, this is nibbled away each time we take another moving average. And in this case, the feature persists until 1/3+1/5+1/7+…+1/113, which is when the sum first exceeds 2.

In the animation, the yellow arrows mark the extent of the symmetrical slopes.

OK, none of this is difficult to understand, but why should we care?

Because this is how Hanspeter Schmid explained the infamous Borwein integrals:

∫sin(t)/t dt = π/2
∫sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫sin(t/13)/(t/13) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But then the pattern is broken:

∫sin(t/15)/(t/15) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

Here these integrals are from t=0 to t=∞. And Schmid came up with an even more persistent pattern of his own:

∫2 cos(t) sin(t)/t dt = π/2
∫2 cos(t) sin(t/3)/(t/3) × sin(t)/t dt = π/2
∫2 cos(t) sin(t/5)/(t/5) × sin(t/3)/(t/3) × sin(t)/t dt = π/2

∫2 cos(t) sin(t/111)/(t/111) × … × sin(t/3)/(t/3) × sin(t)/t dt = π/2

But:

∫2 cos(t) sin(t/113)/(t/113) × … × sin(t/3)/(t/3) × sin(t)/t dt < π/2

The first set of integrals, due to Borwein, correspond to taking the Fourier transforms of our sequence of ever-smoother pulses and then evaluating F(0). The Fourier transform of the sinc function:

sinc(w t) = sin(w t)/(w t)

is proportional to a rectangular pulse of half-width w, and the Fourier transform of a product of sinc functions is the convolution of their transforms, which in the case of a rectangular pulse just amounts to taking a moving average.

Schmid’s integrals come from adding a clever twist: the extra factor of 2 cos(t) shifts the integral from the zero-frequency Fourier component to the sum of its components at angular frequencies –1 and 1, and hence the result depends on F(–1)+F(1)=1/2, which as we have seen persists for much longer than F(0)=1/2.

• Hanspeter Schmid, Two curious integrals and a graphic proof, Elem. Math. 69 (2014) 11–17.
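
To see the two thresholds in this explanation concretely, here is a small sketch that finds where the cumulative window half-widths 1/3 + 1/5 + 1/7 + ... first exceed 1 (destroying the plateau at F(0) = 1/2) and first exceed 2 (destroying the symmetry around F(±1) = 1/4):

    from fractions import Fraction

    total, d, crossed_one = Fraction(0), 1, False
    while True:
        d += 2
        total += Fraction(1, d)
        if not crossed_one and total > 1:
            print("half-widths first exceed 1 at window 1/%d" % d)   # expect 1/15
            crossed_one = True
        if total > 2:
            print("half-widths first exceed 2 at window 1/%d" % d)   # expect 1/113
            break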

I asked Greg if we could generalize these results to give even longer sequences of identities that eventually fail, and he showed me how: you can just take the Borwein integrals and replace the numbers 1, 1/3, 1/5, 1/7, … by some sequence of positive numbers

1, a_1, a_2, a_3 \dots

The integral

\displaystyle{\int_0^\infty \frac{\sin(x)}{x} \, \frac{\sin(a_1 x)}{a_1 x} \, \frac{\sin(a_2 x)}{a_2 x} \cdots \frac{\sin(a_n x)}{a_n x} \, dx }

will then equal \pi/2 as long as a_1 + \cdots + a_n \le 1, but not when it exceeds 1. You can see a full explanation on Wikipedia:

• Wikipedia, Borwein integral: general formula.

As an example, I chose the integral

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt  }

which equals \pi/2 if and only if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} \le 1  }

Thus, the identity holds if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k} \le 1  }

However,

\displaystyle{ \sum_{k=1}^n \frac{1}{k} \le 1 + \ln n }

so the identity holds if

\displaystyle{ \frac{1}{100} (1 + \ln n) \le 1 }

or

\ln n \le 99

or

n \le e^{99} \approx 9.8 \cdot 10^{42}

On the other hand, the identity fails if

\displaystyle{ \sum_{k=1}^n \frac{1}{100 k + 1} > 1  }

so it fails if

\displaystyle{ \sum_{k=1}^n \frac{1}{101 k} > 1  }

However,

\displaystyle{ \sum_{k=1}^n \frac{1}{k} \ge \ln n }

so the identity fails if

\displaystyle{ \frac{1}{101} \ln n > 1 }

or

\displaystyle{ \ln n > 101}

or

\displaystyle{n > e^{101} \approx 7.4 \cdot 10^{43} }

With a little work one could sharpen these estimates considerably, though it would take more work to find the exact value of n at which

\displaystyle{ \int_0^\infty \frac{\sin t}{t} \, \frac{\sin \left(\frac{t}{101}\right)}{\frac{t}{101}} \, \frac{\sin \left(\frac{t}{201}\right)}{\frac{t}{201}} \cdots \, \frac{\sin \left(\frac{t}{100 n +1}\right)}{\frac{t}{100 n + 1}} \, dt = \frac{\pi}{2} }

first fails.
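
One way to do that sharpening numerically (a sketch, relying on the criterion quoted above that the identity fails exactly when the partial sum first exceeds 1): write the partial sum in closed form using the digamma function, \sum_{k=1}^n \frac{1}{100k+1} = (\psi(n + 1.01) - \psi(1.01))/100, and binary-search for the crossing with mpmath:

    from mpmath import mp, digamma, mpf

    mp.dps = 60   # plenty of precision for arguments of size ~1e43

    def partial_sum(n):
        # sum_{k=1}^n 1/(100k + 1) = (psi(n + 1.01) - psi(1.01)) / 100
        return (digamma(n + mpf('1.01')) - digamma(mpf('1.01'))) / 100

    lo, hi = 10**42, 10**44   # brackets guaranteed by the estimates above
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if partial_sum(mid) > 1:
            hi = mid
        else:
            lo = mid
    print(hi)   # the first n at which the identity fails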

by John Baez at September 20, 2018 08:32 PM

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund for all members of the collaboration to draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work, and those of us who will also receive a financial portion of the award will also be encouraged to do so (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

by Andrew at August 13, 2018 10:07 PM

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote about how hard it is to establish a new idea if the only way to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to take care of.

The first two have to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should describe how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. Added to this is the modern idea in physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we so far need just two: the Standard Model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And this is where the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we not only need to find a suitable theory. We also need to find a suitable observation that shows the disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master’s thesis, and sometimes even much more. Only after the calculation is complete do we know whether the observation and theory we chose were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance of frustration, because such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory), which I have not yet specified. Just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function; i.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers, and it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly failed to estimate correctly, that should require us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced out quite well in our research. E.g., in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were confirmed as well as possible with the amount of computing power we had, which we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. However, these will be even more challenging to test. But it is possible. And so we continue the cycle.

by Axel Maas (noreply@blogger.com) at August 13, 2018 02:46 PM

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests, check these out:

And there are more exciting episodes on the way. Enjoy, and spread the word!

by Sean Carroll at July 26, 2018 04:15 PM

July 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Summer days, academics and technological universities

The heatwave in the northern hemisphere may (or may not) be an ominous portent of things to come, but it’s certainly making for an enjoyable summer here in Ireland. I usually find it quite difficult to do any meaningful research when the sun is out, but things are a bit different when the good weather is regular. Most days, I have breakfast in the village, a swim in the sea before work, a swim after work and a game of tennis to round off the evening. Tough life, eh.

 

 

 

Counsellor’s Strand in Dunmore East

So far, I’ve got one conference proceeding written, one historical paper revamped and two articles refereed (I really enjoy the latter process; it’s so easy for academics to become isolated). Next week I hope to get back to that book I never seem to finish.

However, it would be misleading to portray a cosy image of a college full of academics beavering away over the summer. This simply isn’t the case around here – while a few researchers can be found in college this summer, the majority of lecturing staff decamped on June 20th and will not return until September 1st.

And why wouldn’t they? Isn’t that their right under the Institute of Technology contracts, especially given the heavy teaching loads during the semester? Sure – but I think it’s important to acknowledge that this is a very different set-up to the modern university sector, and doesn’t quite square with the move towards technological universities.

This week, the Irish newspapers are full of articles depicting the opening of Ireland’s first technological university, and apparently, the Prime Minister is anxious our own college should get a move on. Hmm. No mention of the prospect of a change in teaching duties, or increased facilities/time for research, as far as I can tell (I’d give a lot for an office that was fit for purpose). So will the new designation just amount to a name change? And this is not to mention the scary business of the merging of different institutes of technology. Those who raise questions about this now tend to get dismissed as resisters of progress. Yet the history of merging large organisations in Ireland hardly inspires confidence, not least because of a tendency for increased layers of bureaucracy to appear out of nowhere – HSE anyone?

by cormac at July 20, 2018 03:32 PM

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous or infamous theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again. (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us in km/s as it gets further away, measured in megaparsecs (Mpc)); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
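
For a rough sense of the size of the discrepancy, naively treating the two quoted error bars as independent Gaussians (a back-of-the-envelope sketch, not the analysis anyone actually performs) puts the difference at about 3.6 sigma:

    import math

    h0_local, err_local = 73.52, 1.62     # km/s/Mpc, local distance-ladder measurement
    h0_planck, err_planck = 67.27, 0.60   # km/s/Mpc, Planck (within the base model)

    n_sigma = (h0_local - h0_planck) / math.hypot(err_local, err_planck)
    print(round(n_sigma, 1))              # about 3.6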

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you were looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

by Andrew at July 19, 2018 06:51 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it”, but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at July 19, 2018 12:02 PM

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy?
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. 

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC). 

read more

by Tommaso Dorigo at July 16, 2018 09:13 AM

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very lightweight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed by a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed coming from the same direction at the same time.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.   IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of thousands of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction at their arrival at Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
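
To get a rough feel for why a dozen-plus neutrinos on a background of 6 or 7 is striking, here is a toy Poisson estimate (a sketch that ignores trial factors, the energy information, and the full likelihood machinery actually used):

    from scipy.stats import poisson

    observed = 20       # roughly the number seen from this direction
    background = 6.5    # typical expectation from random chance alone
    p_chance = poisson.sf(observed - 1, background)   # P(N >= 20 | mu = 6.5)
    print(p_chance)     # a very small probability of a pure fluctuation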

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!

 

by Matt Strassler at July 12, 2018 04:59 PM

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend a few words on it. The big CERN collaborations presented their latest results. I think the most relevant of these is the evidence (3\sigma) that the Standard Model is at odds with the measurement of spin correlations in top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are pursuing further improvements in precision. For the signal strength they give the following results. For ATLAS (see here)

\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})

and CMS (see here)

\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).

The news is that the errors have shrunk and the two measurements agree. They show a small excess, 13% and 17% above the Standard Model expectation respectively, but the overall result is consistent with the Standard Model.
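
A crude way to quantify that consistency (a sketch: symmetrizing the asymmetric errors and adding everything in quadrature, which is not how the experiments actually combine their uncertainties) puts both excesses well below 2\sigma:

    import math

    def n_sigma(mu, errors):
        total = math.sqrt(sum(e**2 for e in errors))   # quadrature sum of the quoted errors
        return (mu - 1.0) / total

    # ATLAS: stat, exp, signal theory (symmetrized), background theory
    print(round(n_sigma(1.13, [0.05, 0.05, 0.045, 0.03]), 1))   # about 1.5
    # CMS: stat, signal theory (symmetrized), other systematics
    print(round(n_sigma(1.17, [0.06, 0.055, 0.06]), 1))         # about 1.7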

When the signal strength is unpacked into the contributions from the different production and decay processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented results from 35.9{\rm fb}^{-1} of data, so for the moment there is no significant improvement with respect to the Moriond conference this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are somewhat different, though not too much, for ATLAS: in this case they observe some tensions, but these are all below 2\sigma (see here). For the WW decay, ATLAS does not see anything above 1\sigma (see here).

So, although there are things to keep an eye on as the data grow, reaching 100 {\rm fb}^{-1} this year, the Standard Model is in good health as far as the Higgs sector is concerned, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the tt pair is absolutely promising, and we should hope it will be confirmed as a discovery.

 

by mfrasca at July 08, 2018 10:58 AM

July 04, 2018

Tommaso Dorigo - Scientificblogging

Chasing The Higgs Self Coupling: New CMS Results
Happy Birthday Higgs boson! The discovery of the last fundamental particle of the Standard Model was announced exactly 6 years ago at CERN (well, plus one day, since I decided to postpone to July 5 the publication of this post...).

In the Standard Model, the theory of fundamental interactions among elementary particles which enshrines our current understanding of the subnuclear world, particles that constitute matter are fermionic: they have a half-integer value of a quantity we call spin; and particles that mediate interactions between those fermions, keeping them together and governing their behaviour, are bosonic: they have an integer value of spin.

read more

by Tommaso Dorigo at July 04, 2018 12:57 PM

June 25, 2018

Sean Carroll - Preposterous Universe

On Civility

Alex Wong/Getty Images

White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.

I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.

On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.

This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.

More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there).  There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.

The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be made between groups that we as a society have decided are worthy of protection and equal treatment regardless of an individual’s moral code, so it’s at least consistent to allow restaurant owners to refuse to serve specific people they think are moral monsters because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.

The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.

This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.

 

 

by Sean Carroll at June 25, 2018 06:00 PM

June 24, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

7th Robert Boyle Summer School

This weekend saw the 7th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a select number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

The Irish-born scientist and aristocrat Robert Boyle

Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers, such as the well-known Philosophers in 90 Minutes series; Scientific Enquiry and Brain State – Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’ Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.


Images from the garden party in the grounds of Lismore Castle

by cormac at June 24, 2018 08:19 PM