Particle Physics Planet


May 26, 2018

Christian P. Robert - xi'an's og

predatory but not that smart…

An email I received earlier this week, quite typical of predatory journals seeking names for their board, but unable to distinguish comments from papers, statistics from mathematical physics, or to spot spelling mistakes:

Dear Christian P. Rober,

Greetings and good day.

I represent Editorial Office of Whioce Publishing Pte. Ltd. from Singapore. We have come across your recent article, “Comments on: Natural induction: An objective Bayesian approach” published in RACSAM – Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas. We feel that the topic of this article is very interesting. Therefore, we are delighted to invite you to join the Editorial Board of our journal, entitled International Journal of Mathematical Physics We also hope that you can submit your future work in our journal. Please reply to this email if you are interested in joining the Editorial Board.

I look forward to hearing your positive response. Thank you for your kind consideration.

 

by xi'an at May 26, 2018 10:18 PM

Peter Coles - In the Dark

Yes!

It’s a yes in the referendum to repeal the 8th amendment – by a landslide!

Congratulations to everyone who campaigned so hard for a ‘yes’ vote!

by telescoper at May 26, 2018 05:58 PM

Clifford V. Johnson - Asymptotia

Resolution

Today sees the release of the short story anthology Twelve Tomorrows from MIT Press, with a wonderful roster of authors. (It is an annual project of the MIT Technology Review.) I’m in there too, with a graphic novella called “Resolution”. It's the first graphic novella in the anthology's five-year history, and the first time the anthology has been published by MIT Press. Physicists and Mathematicians will appreciate the title choice upon reading. Order! Share!

-cvj Click to continue reading this post

The post Resolution appeared first on Asymptotia.

by Clifford at May 26, 2018 12:36 AM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, i.e., a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement –  instead of a gong, scholars from all around the world gather to honour their colleague and mentor, from former postgraduate students to postdoctoral researchers, from former colleagues to current collaborators.

It often makes for a truly stimulating conference, as the diverging careers of former colleagues guarantee a diverse set of talks. At the same time, there is usually some kind of common theme, namely the specialism of the professor being honoured.

This week, many of the great and the good of the world of relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics who is noted internationally for his seminal contributions to the general theory of relativity. Niall was never the sort of scientist to blow his own trumpet on the Six One News, but he had a considerable impact on the field of relativity; some measure of this influence can be gauged from the status of the speakers at the conference, from legendary theorists such as Bob Wald and Bill Unruh to Kip Thorne, recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves. The conference website can be found here and the program is here.

[Photos: UCC on a beautiful sunny day]

As expected, we were treated to a series of talks on diverse topics, from analyses of black hole collapse to gravitational wave detection, from analysis of high energy jets from active galactic nuclei to the initial value problem in relativity.  To pick one highlight, Kip Thorne’s reminiscences of the long search for gravitational waves made for a fascinating talk, from the challenge of getting funding for early prototypes of LIGO to his prescient realisation that the best chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interaction with students and colleagues over the years. Like the great piano teachers of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

 

by cormac at May 26, 2018 12:16 AM

May 25, 2018

Christian P. Robert - xi'an's og

running by Kenilworth Castle [jatp]

Last week, while in Warwick, I had a nice warm afternoon run around Kenilworth in the fields with Nick Tawn, which brought us to the castle from the West and the former shallow lake called The Mere [for La Mare and not La Mer!]. It also exposed the fact that my first and only visit to the castle was in the summer of 1977. Which was also the summer when Star Wars was released in Britain, including Birmingham where I saw it…

by xi'an at May 25, 2018 10:18 PM

The n-Category Cafe

Laxification

Talking to my student Joe Moeller today, I bumped into a little question that seems fun. If I’ve got a monoidal category $A$, is there some bigger monoidal category $\hat{A}$ such that lax monoidal functors out of $A$ are the same as strict monoidal functors out of $\hat{A}$?

Someone should know the answer already, but I’ll expound on it a little…

Here’s an example where this works beautifully.

Let $\Delta_a$, the augmented simplex category, be the category of finite ordinals 0, 1, 2, … and order-preserving maps between these. A famous fact is that $\Delta_a$ is the ‘walking monoid’. In other words, for any strict monoidal category $X$, strict monoidal functors

$f : \Delta_a \to X$

are the same as monoids in $X$.

But another famous fact is that if $1$ is the terminal category made strict monoidal in the only way possible, lax monoidal functors

$g : 1 \to X$

are the same as monoids in $X$.
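
To spell out the second fact: a lax monoidal functor $g : 1 \to X$ consists of an object $M = g(\ast)$ of $X$ together with coherence maps

$\eta : I \to M, \qquad \mu : M \otimes M \to M,$

and the lax coherence axioms reduce to exactly the associativity and unit laws of a monoid:

$\mu \circ (\mu \otimes \mathrm{id}_M) = \mu \circ (\mathrm{id}_M \otimes \mu), \qquad \mu \circ (\eta \otimes \mathrm{id}_M) = \mathrm{id}_M = \mu \circ (\mathrm{id}_M \otimes \eta).$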

So, somehow $\Delta_a$ is the “laxification” of $1$: a puffed-up version of the terminal category such that lax monoidal functors out of $1$ can be reinterpreted as strict monoidal functors out of $\Delta_a$.

Indeed, combining these two facts we get a lax monoidal functor

$p : 1 \to \Delta_a$

sending the monoid in $1$ to the monoid $1 \in \Delta_a$. We then have

$g = f \circ p .$

So, I’m thinking this should be an example of a general pattern.

The idea is roughly that for any strict monoidal category $A$, there should be a strict monoidal category $\hat{A}$ and a lax monoidal functor $p : A \to \hat{A}$ such that every lax monoidal functor $g : A \to X$ is of the form $f \circ p : A \to X$ for some strict monoidal functor $f : \hat{A} \to X$. Or more precisely, precomposition with $p$ gives an equivalence of categories

$p^* : \mathrm{StrictMon}(\hat{A}, X) \to \mathrm{LaxMon}(A, X)$

I imagine that the augmented simplex category will play a big role in the construction of $\hat{A}$. I’m also imagining that some words like “monoidal nerve” and maybe “fibrant replacement” will show up.

by john (baez@math.ucr.edu) at May 25, 2018 09:24 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: Getting Elliptical
For the first time in almost a year, the Dawn mission control room at JPL is aglow with blue.

May 25, 2018 05:22 PM

Emily Lakdawalla - The Planetary Society Blog

How to keep up with Hayabusa2
Hayabusa2 is approaching asteroid Ryugu! Here's how to stay on top of mission news and the mission's planned schedule for 2018.

May 25, 2018 04:33 PM

The n-Category Cafe

Tropical Algebra and Railway Optimization

Simon Willerton pointed out a wonderful workshop, which unfortunately neither he nor I can attend… nor Jamie Vicary, who is usually at Birmingham these days:

Tropical Mathematics & Optimisation for Railways, University of Birmingham, School of Engineering, Monday 18 June 2018.

If you can go, please do — and report back!

Let me explain why it’s so cool…

Tropical algebra involves the numbers $(-\infty, \infty]$ made into a rig with minimization as the addition and addition as the multiplication.

Tropical algebra is important in algebraic geometry, because if you take some polynomial equations and rewrite them replacing + with min and × with +, you get equations that describe shapes with flat pieces replacing curved surfaces, like this:

These simplified shapes are easier to deal with, but they shed light on the original curved ones! Click the picture for more on the subject from Johannes Rau.

Tropical algebra is also important for quantization, since classical mechanics chooses the path with minimum action while quantum mechanics sums over paths. But it’s also important for creating efficient railway time-tables, where you’re trying to minimize the total time it takes to get from one place to another. Finally these worlds are meeting!
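
To make the min-plus arithmetic concrete, here is a small illustrative sketch in Python (my own toy example, not from the workshop); the three-station travel-time matrix and the helper name tropical_matmul are made up for the illustration.

import numpy as np

# Tropical (min-plus) matrix product: (A*B)[i, j] = min over k of A[i, k] + B[k, j],
# i.e. ordinary matrix multiplication with + replaced by min and x replaced by +.
def tropical_matmul(A, B):
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

# Toy travel-time matrix between three stations; np.inf means no direct link.
INF = np.inf
T = np.array([[0.0, 7.0, INF],
              [7.0, 0.0, 4.0],
              [INF, 4.0, 0.0]])

# Iterated tropical products give shortest travel times using at most 1, 2, 3, ... legs,
# the tropical analogue of taking matrix powers.
print(tropical_matmul(T, T))   # entry (0, 2) drops from inf to 7 + 4 = 11

The abstract below models timetable stability and delay propagation with tropically linear dynamic systems of this kind.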

Here’s the abstract, which shows that the reference to railway optimization is not just a joke:

Abstract. The main purpose of this workshop is to bring together specialists in tropical mathematics and mathematical optimisation applied in railway engineering and to foster further collaboration between them. It is inspired by some applications of tropical mathematics to the analysis of railway timetables. The most elementary of them is based on a controlled tropically linear dynamic system, which allows for a stability analysis of a regular timetable and can model the delay propagation. Tropical (max-plus) switching systems are one of the extensions of this elementary model. Tropical mathematics also provides appropriate mathematical language and tools for various other applications which will be presented at the workshop.

The talks on mathematical optimisation in railway engineering will be given by Professor Clive Roberts and other prominent specialists working at the Birmingham Centre for Railway Research and Education (BCRRE). They will inform the workshop participants about the problems that are of actual interest for railways, and suggest efficient and practical methods of their solution.

For a glimpse of some of the category theory lurking in this subject, see:

• Simon Willerton, Project scheduling and copresheaves, The n-Category Café.

by john (baez@math.ucr.edu) at May 25, 2018 04:27 PM

Peter Coles - In the Dark

Student access to marked examination scripts

I’m currently waiting for the last couple of scripts from my Physics of the Early Universe examination to arrive so I can begin the task of marking them. The examination was yesterday morning, and it’s now Friday afternoon, so I don’t know why it takes so long for the scripts to find their way to the examiner, especially when marking is on such a tight schedule. I’m away next week (in Ireland) so if I don’t get papers by this afternoon they won’t be marked until I return. The missing two are from students sitting in alternative venues, but I don’t see why that means they take over 24 hours  to get to the marker.

(By the way,  `script’ refers to what the student writes (usually in a special answer book), as opposed to the `paper’ which is the list of questions to be answered or problems to be solved in the script.)

Anyway, while I’m waiting for the missing scripts to arrive I thought I’d mention that here in the School of Physics & Astronomy at Cardiff University we have a system whereby students can get access to their marked examination scripts.  This access is limited, and for the purpose of getting feedback on where they went wrong, not for trying to argue for extra marks. The students can’t take the scripts away, nor can they make a copy, but they can take notes which will hopefully help them in future assessments. There’s a similar provision in place in the Department of Theoretical Physics at Maynooth University, where I will be relocating full-time in July, based around a so-called `Consultation Day’.

When I was Head of the School of Mathematical and Physical Sciences at Sussex University I tried to introduce such a system there, but it was met with some resistance from staff who thought this would not only cause a big increase in workload but also lead to difficulties with students demanding their marks be increased. That has never been the experience here at Cardiff: only a handful take up the opportunity and those that do are told quite clearly that the mark cannot be changed.  Last year I had only one student who asked to go through their script. I was happy to oblige and we had a friendly and (I think) productive meeting.

If I had my way we would actually give all students their marked examination scripts back as a matter of routine. The fact that we don’t is no doubt one reason for our relatively poor performance in student satisfaction surveys about assessment and feedback. Obviously examination scripts have to go through a pretty strict quality assurance process involving the whole paraphernalia of examination boards (including external examiners), so the scripts can’t be given back immediately, but once that process is complete there doesn’t seem to me to be any reason why we shouldn’t give their work, together with any feedback written on it, back to the students in its entirety.

I have heard some people argue that under the provisions of the Data Protection Act students have a legal right to see what’s written on the scripts – as that constitutes part of their student record – but that’s not my point here. My point is purely educational, based on the benefit to the student’s learning experience.

Anyway, I don’t know how widespread the practice is of giving examination scripts back to students so let me conduct a totally unscientific poll. Obviously most of my readers are in physics and astronomy, but I invite anyone in any academic discipline to vote:

Take Our Poll: http://polldaddy.com/poll/9783079

And, of course, if you have any further comments to make please feel free to make them through the box below!

 

by telescoper at May 25, 2018 01:44 PM

May 24, 2018

John Baez - Azimuth

Workshop on Compositional Approaches

This looks great too:

Workshop on Compositional Approaches in Physics, Natural Language Processing, and Social Sciences, 2 September 2018, Nice, France.

Compositional Approaches for Physics, NLP, and Social Sciences (CAPNS 2018) will be colocated with QI 2018. The workshop is a continuation and extension of the Workshop on Semantic Spaces at the Intersection of NLP, Physics and Cognitive Science held in June 2016.

AIMS AND SCOPE
The ability to compose parts to form a more complex whole, and to analyze a whole as a combination of elements, is desirable across disciplines. In this workshop we bring together researchers applying compositional approaches to NLP, Physics, Cognitive Science, and Game Theory. The interplay between these disciplines will foster theoretically motivated approaches to understanding how meanings of words interact in sentences and discourse, how concepts develop, and how complex games can be analyzed. Commonalities between the compositional mechanisms employed may be extracted, and applications and phenomena traditionally thought of as ‘non-compositional’ will be examined.

Topics of interest include (but are not restricted to):
Applications of quantum logic in natural language processing and cognitive science
Compositionality in vector space models of meaning
Compositionality in conceptual spaces
Compositional approaches to game theory
Reasoning in vector spaces and conceptual spaces
Conceptual spaces in linguistics
Game-theoretic models of language and conceptual change
Diagrammatic reasoning for natural language processing, cognitive science, and game theory
Compositional explanations of so-called ‘non-compositional’ phenomena such as metaphor

IMPORTANT DATES:
June 30th: Paper submission
July 15th: Notification to contributors
September 2nd: Workshop date

CONFIRMED SPEAKERS:
Gerhard Jäger, Professor of General Linguistics, University of Tübingen
Paul Smolensky, Principal Researcher, Microsoft Research, and Krieger-Eisenhower Professor of Cognitive Science, Johns Hopkins University

SUBMISSIONS:
We invite:
Original contributions (up to 12 pages) of previously unpublished work. Submission of substantial, albeit partial results of work in progress is welcomed.

Extended abstracts (3 pages) of previously published work that is recent and relevant to the workshop. These should include a link to a separately published paper or preprint.

Contributions should be submitted at:
https://easychair.org/conferences/?conf=capns2018

PROGRAMME COMMITTEE:
Peter Bruza, Queensland University of Technology
Trevor Cohen, University of Texas
Fredrik Nordvall Forsberg, University of Strathclyde
Liane Gabora, University of British Columbia
Peter Gärdenfors, Lund University
Helle Hvid Hansen, TU Delft
Chris Heunen, University of Edinburgh
Peter Hines, University of York
Alexander Kurz, University of Leicester
Antonio Lieto, University of Turin
Glyn Morrill, Universitat Politècnica de Catalunya
Dusko Pavlovic, University of Hawaii
Taher Pilehvar, University of Cambridge
Emmanuel Pothos, City, University of London
Matthew Purver, Queen Mary University of London
Mehrnoosh Sadrzadeh, Queen Mary University of London
Marta Sznajder, Munich Center for Mathematical Philosophy
Pawel Sobocinski, University of Southampton
Dominic Widdows, Grab Technologies
Geraint Wiggins, Vrije Universiteit Brussel
Victor Winschel, OICOS GmbH
Philipp Zahn, University of St. Gallen
Frank Zenker, University of Konstanz

ORGANIZATION:
Bob Coecke, University of Oxford
Jules Hedges, University of Oxford
Dimitri Kartsaklis, University of Cambridge
Martha Lewis, ILLC, University of Amsterdam
Dan Marsden, University of Oxford

by John Baez at May 24, 2018 10:20 PM

Christian P. Robert - xi'an's og

George Forsythe’s last paper

When looking for a link in a recent post, I came across Richard Brent’s arXival of historical comments on George Forsythe’s last paper (from 1972). It is about the Forsythe–von Neumann approach to simulating exponential variates, covered in a special section of Luc Devroye’s Non-Uniform Random Variate Generation (Section 2 of Chapter 4) on generating a random variable from a target density proportional to g(x)exp(-F(x)), where g is a density and F is a function on (0,1). Then, after generating a realisation x⁰ from g and computing F(x⁰), generate a sequence u¹,u²,… of uniforms as long as they keep decreasing, i.e., F(x⁰)>u¹>u²>… If the maximal length k of this sequence is odd, the algorithm exits with a value x⁰ generated from g(x)exp(-F(x)). Von Neumann (1949) treated the special case when g is constant and F(x)=x, which leads to an Exponential generator that never calls an exponential function. This does not make the proposal a particularly efficient one, as it rejects O(½) of the simulations. Refinements of the algorithm lead to using on average 1.38 uniforms per Normal generation, which does not sound much faster than a call to the Box–Muller method, despite what is written in the paper. (Brent also suggests using David Wallace’s 1999 Normal generator, which I had not encountered before, and which I am uncertain is relevant at the present time.)
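
As an illustration of the von Neumann special case (a minimal sketch of my own, not code from Brent’s paper or Devroye’s book), here is an exponential generator that uses only uniform draws and the odd/even parity rule described above:

import random

def exp_von_neumann(rng=random.random):
    """Von Neumann (1949) exponential generator: only uniform draws, no exp() call.

    Each round proposes x ~ U(0,1) and accepts it with probability exp(-x),
    using the parity of the number of uniforms drawn until the decreasing run
    breaks; the count of rejected rounds supplies the integer part."""
    rejected_rounds = 0
    while True:
        x = rng()                    # candidate from g = U(0,1), with F(x) = x
        last, n = x, 0
        while True:                  # draw uniforms while they keep decreasing
            u = rng()
            n += 1                   # n includes the uniform that breaks the run
            if u >= last:            # the chain x > u1 > u2 > ... just broke
                break
            last = u
        if n % 2 == 1:               # odd count: accept x (probability exp(-x))
            return rejected_rounds + x
        rejected_rounds += 1         # even: reject, move to the next unit interval

# Quick sanity check: the sample mean should be close to 1.
print(sum(exp_von_neumann() for _ in range(100_000)) / 100_000)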

by xi'an at May 24, 2018 10:18 PM

John Baez - Azimuth

Tropical Algebra and Railway Optimization

Simon Willerton pointed out a wonderful workshop, which unfortunately neither he nor I can attend… nor Jamie Vicary, who is at Birmingham:

Tropical Mathematics & Optimisation for Railways, University of Birmingham, School of Engineering, Monday 18 June 2018.

If you can go, please do—and report back!

Tropical algebra involves the numbers $(-\infty, \infty]$ made into a rig with minimization as the addition and addition as the multiplication. It’s called a rig because it’s a “ring without negatives”.

Tropical algebra is important in algebraic geometry, because if you take some polynomial equations and rewrite them replacing + with min and × with +, you get equations that describe shapes with flat pieces replacing curved surfaces, like this:

These simplified shapes are easier to deal with, but they shed light on the original curved ones! Click the picture for more on the subject from Johannes Rau.

Tropical algebra is also important for quantization, since classical mechanics chooses the path with minimum action while quantum mechanics sums over paths. But it’s also important for creating efficient railway time-tables, where you’re trying to minimize the total time it takes to get from one place to another. Finally these worlds are meeting!

Here’s the abstract, which shows that the reference to railway optimization is not just a joke:

Abstract. The main purpose of this workshop is to bring together specialists in tropical mathematics and mathematical optimisation applied in railway engineering and to foster further collaboration between them. It is inspired by some applications of tropical mathematics to the analysis of railway timetables. The most elementary of them is based on a controlled tropically linear dynamic system, which allows for a stability analysis of a regular timetable and can model the delay propagation. Tropical (max-plus) switching systems are one of the extensions of this elementary model. Tropical mathematics also provides appropriate mathematical language and tools for various other applications which will be presented at the workshop.

The talks on mathematical optimisation in railway engineering will be given by Professor Clive Roberts and other prominent specialists working at the Birmingham Centre for Railway Research and Education (BCRRE). They will inform the workshop participants about the problems that are of actual interest for railways, and suggest efficient and practical methods of their solution.

For a glimpse of some of the category theory lurking in this subject, see:

• Simon Willerton, Project scheduling and copresheaves, The n-Category Café.

by John Baez at May 24, 2018 09:49 PM

Emily Lakdawalla - The Planetary Society Blog

Approaching Mars on Spaceship Earth
One of the great things about space exploration is how it can shift your perspective. And you don't even need to leave home.

May 24, 2018 05:23 PM

Peter Coles - In the Dark

Thirty Years since Section 28..

I was reminded by twitter that today is the 30th anniversary of the enactment of the Local Government Act 1988, which included the now notorious Section 28, which contained the following:

I remember very well the numerous demonstrations and other protests I went on as part of the campaign against the clause that became Section 28. Indeed, these were the first large political demonstrations in which I ever took part. But that repugnant and obviously discriminatory piece of legislation passed into law anyway. Students and younger colleagues of mine born after 1988 probably don’t have any idea how much pain and anger the introduction of this piece of legislation caused at the time, but at least it also had the effect of galvanising  many groups and individuals into action. The fightback eventually succeeded; Section 28 was repealed in 2003. I know 30 years is a long time, but it’s still amazing to me that attitudes have changed so much that now we have same-sex marriage. I would never have predicted that if someone had asked me thirty years ago!

I think there’s an important lesson in the story of Section 28, which is that rights won can easily be lost again. There are plenty of people who would not hesitate to bring back similar laws if they thought they could get away with them.  That’s why it is important for LGBT+ people not only to stand up for their rights, but to campaign for a more open, inclusive and discrimination-free environment for everyone.

by telescoper at May 24, 2018 04:39 PM

Emily Lakdawalla - The Planetary Society Blog

Funpost! Audio diaries from simulated Mars
The Habitat is a new podcast about a year-long HI-SEAS mission in Hawaii.

May 24, 2018 11:00 AM

Peter Coles - In the Dark

Thoughts Suggested by a College Examination – Byron

High in the midst, surrounded by his peers,
Magnus his ample front sublime uprears:
Plac’d on his chair of state, he seems a God,
While Sophs and Freshmen tremble at his nod;
As all around sit wrapt in speechless gloom,
His voice, in thunder, shakes the sounding dome;
Denouncing dire reproach to luckless fools,
Unskill’d to plod in mathematic rules.

Happy the youth! in Euclid’s axioms tried,
Though little vers’d in any art beside;
Who, scarcely skill’d an English line to pen,
Scans Attic metres with a critic’s ken.

What! though he knows not how his fathers bled,
When civil discord pil’d the fields with dead,
When Edward bade his conquering bands advance,
Or Henry trampled on the crest of France:
Though marvelling at the name of Magna Charta,
Yet well he recollects the laws of Sparta;
Can tell, what edicts sage Lycurgus made,
While Blackstone’s on the shelf, neglected laid;
Of Grecian dramas vaunts the deathless fame,
Of Avon’s bard, rememb’ring scarce the name.

Such is the youth whose scientific pate
Class-honours, medals, fellowships, await;
Or even, perhaps, the declamation prize,
If to such glorious height, he lifts his eyes.
But lo! no common orator can hope
The envied silver cup within his scope:
Not that our heads much eloquence require,
Th’ ATHENIAN’S glowing style, or TULLY’S fire.
A manner clear or warm is useless, since
We do not try by speaking to convince;
Be other orators of pleasing proud,—
We speak to please ourselves, not move the crowd:
Our gravity prefers the muttering tone,
A proper mixture of the squeak and groan:
No borrow’d grace of action must be seen,
The slightest motion would displease the Dean;
Whilst every staring Graduate would prate,
Against what—he could never imitate.

The man, who hopes t’ obtain the promis’d cup,
Must in one posture stand, and ne’er look up;
Nor stop, but rattle over every word—
No matter what, so it can not be heard:
Thus let him hurry on, nor think to rest:
Who speaks the fastest’s sure to speak the best;
Who utters most within the shortest space,
May, safely, hope to win the wordy race.

The Sons of Science these, who, thus repaid,
Linger in ease in Granta’s sluggish shade;
Where on Cam’s sedgy banks, supine, they lie,
Unknown, unhonour’d live—unwept for die:
Dull as the pictures, which adorn their halls,
They think all learning fix’d within their walls:
In manners rude, in foolish forms precise,
All modern arts affecting to despise;
Yet prizing Bentley’s, Brunck’s, or Porson’s note,
More than the verse on which the critic wrote:
Vain as their honours, heavy as their Ale,
Sad as their wit, and tedious as their tale;
To friendship dead, though not untaught to feel,
When Self and Church demand a Bigot zeal.
With eager haste they court the lord of power,
(Whether ’tis PITT or PETTY rules the hour;)
To him, with suppliant smiles, they bend the head,
While distant mitres to their eyes are spread;
But should a storm o’erwhelm him with disgrace,
They’d fly to seek the next, who fill’d his place.
Such are the men who learning’s treasures guard!
Such is their practice, such is their reward!
This much, at least, we may presume to say—
The premium can’t exceed the price they pay.

by George Gordon Byron (1788-1824)

 

by telescoper at May 24, 2018 10:52 AM

May 23, 2018

Peter Coles - In the Dark

Glamorgan v. Middlesex

I took today off on annual leave (as I have to use all my allowance before I depart my job at Cardiff University). My intention was to make the best of the good weather to watch some cricket.

And so it came to pass that this morning I wandered down to Sophia Gardens for the start of the Royal London One-Day Cup (50-over) match between Glamorgan and Middlesex. It also came to pass that about fifteen minutes later I wandered back home again. I hadn’t checked the start time, which was actually 2pm…

The later start screwed up my plans as I had something to do in the evening but I thought I’d at least watch the first team bat (which turned out to be Middlesex).

(I’m not sure what caused the weird stripes on the picture…)

It was a lovely afternoon for cricket, and Middlesex got off to a good start in excellent batting conditions. Gradually, though, Glamorgan’s bowlers established some measure of control. After a mini-collapse of three wickets in three overs (to Ingram’s legspin), it looked like Middlesex might not make 300 (which seems to be the par score in this competition). Unfortunately for Glamorgan, however, de Lange and Wagg were expensive at the death and a flurry of boundaries took Middlesex to 304 for 6 off their 50 overs.

At that point I left Sophia Gardens to get ready to go out.

I’ve just got back to discover that Glamorgan lost by just 2 runs, ending on 302 for 9. It must have been a tense finish, and it was a good game overall, but Glamorgan have now lost all three games they have played in this competition.

by telescoper at May 23, 2018 09:53 PM

ZapperZ - Physics and Physicists

That Impossible EM Drive Might Be ..... Impossible After All!
Crackpots had a field day when NASA announced, several years ago, an EM propulsion scheme that somehow violates momentum conservation. Now comes a more careful experiment from a group that tried to reproduce this result, and the outcome is rather hysterical.

The team built their EM drive with the same dimensions as the one that NASA tested, and placed it in a vacuum chamber. Then, they piped microwaves into the cavity and measured its tiny movements using lasers. As in previous tests, they found it produced thrust, as measured by a spring. But when positioned so that the microwaves could not possibly produce thrust in the direction of the spring, the drive seemed to push just as hard.

And, when the team cut the power by half, it barely affected the thrust. So, it seems there’s something else at work. The researchers say the thrust may be produced by an interaction between Earth’s magnetic field and the cables that power the microwave amplifier.

So far, this has only been reported in a conference proceeding, which is linked in the New Scientist article (you will need ResearchGate access).

I'm sure there will be many more tests of this thing soon, but I can't help but chuckle at the apparent conclusion here.

Zz.

by ZapperZ (noreply@blogger.com) at May 23, 2018 02:32 PM

Clifford V. Johnson - Asymptotia

Bull

A pair of panels from my short story “Resolution” in the Science Fiction anthology Twelve Tomorrows, out on Friday from MIT Press! Preorder now, share, and tell everyone about it. See here for ordering, for example.

-cvj Click to continue reading this post

The post Bull appeared first on Asymptotia.

by Clifford at May 23, 2018 03:20 AM

May 22, 2018

Christian P. Robert - xi'an's og

best unbiased estimator of θ² for a Poisson model

A mostly traditional question on X validated about the “best” [minimum variance] unbiased estimator of θ² from a Poisson P(θ) sample leads to the Rao-Blackwell solution

$\mathbb{E}\Big[X_1X_2\,\Big|\,\underbrace{\textstyle\sum_{i=1}^n X_i}_{S}=s\Big] = -\frac{s}{n^2}+\frac{s^2}{n^2}=\frac{s(s-1)}{n^2}$

and a similar estimator could be constructed for θ³, θ⁴, … With the interesting limitation that this procedure stops at the power equal to the number of observations (minus one?). But, since the expectation of a power of the sufficient statistic S [with distribution P(nθ)] is a polynomial in θ, there is de facto no limitation. More interestingly, there is no unbiased estimator of negative powers of θ in this context, while this neat comparison on Wikipedia (borrowed from the great book of counter-examples by Romano and Siegel, 1986, selling for a mere $180 on Amazon!) shows why looking for an unbiased estimator of exp(-2θ) is particularly foolish: the only solution is (-1) to the power S [for a single observation]. (There is however a first way to circumvent the difficulty, if one has access to an arbitrary number of generations from the Poisson, since the Forsythe–von Neumann algorithm allows for an unbiased estimation of exp(-F(x)). And, as a second way, as remarked by Juho Kokkala below, a sample of at least two Poisson observations leads to a more coherent best unbiased estimator.)
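
As a quick illustration (my own simulation check, not from the post; the values of θ, n and the number of replications below are arbitrary), the estimator s(s-1)/n² can be checked to be unbiased for θ²:

import numpy as np

# Monte Carlo check that s(s-1)/n^2, with s the Poisson sample sum, is unbiased for theta^2.
rng = np.random.default_rng(0)
theta, n, reps = 2.5, 10, 200_000          # arbitrary illustration values
s = rng.poisson(theta, size=(reps, n)).sum(axis=1)
estimates = s * (s - 1) / n**2
print(estimates.mean(), theta**2)          # both should be close to 6.25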

by xi'an at May 22, 2018 10:18 PM

The n-Category Cafe

Linear Logic for Constructive Mathematics

Intuitionistic logic, i.e. logic without the principle of excluded middle ($P \vee \neg P$), is important for many reasons. One is that it arises naturally as the internal logic of toposes and more general categories. Another is that it is the logic traditionally used by constructive mathematicians — mathematicians following Brouwer, Heyting, and Bishop, who want all proofs to have “computational” or “algorithmic” content. Brouwer observed that excluded middle is the primary origin of nonconstructive proofs; thus using intuitionistic logic yields a mathematics in which all proofs are constructive.

However, there are other logics that constructivists might have chosen for this purpose instead of intuitionistic logic. In particular, Girard’s (classical) linear logic was explicitly introduced as a “constructive” logic that nevertheless retains a form of the law of excluded middle. But so far, essentially no constructive mathematicians have seriously considered replacing intuitionistic logic with any such alternative. I will refrain from speculating on why not. Instead, in a paper appearing on the arXiv today:

• Michael Shulman, Linear logic for constructive mathematics.

I argue that in fact, constructive mathematicians (going all the way back to Brouwer) have already been using linear logic without realizing it!

Let me explain what I mean by this and how it comes about — starting with an explanation, for a category theorist, of what linear logic is in the first place.

When we first learn about logic, we often learn various tautologies such as de Morgan’s laws $\neg(P \wedge Q) \equiv (\neg P \vee \neg Q)$ and the law of excluded middle $P \vee \neg P$. (As usual, $\wedge$ denotes “and”, while $\vee$ denotes “or”.) The field of algebraic semantics of logic starts with the observation that these same laws can be regarded as axioms on a poset, with $\wedge$ denoting the binary meet (greatest lower bound) and $\vee$ the join (least upper bound). The laws of classical logic correspond to requiring such a poset to be a Boolean algebra. Thus, a proof in propositional logic actually shows an equation that must hold in all Boolean algebras.

This suggests there should be other “logics” corresponding to other ordered/algebraic structures. For instance, a Heyting algebra is a lattice that, considered as a thin category, is cartesian closed. That is, in addition to the meet $\wedge$ and join $\vee$, it has an “implication” $P \to Q$ satisfying $(P \le Q \to R) \iff (P \wedge Q \le R)$. Any Boolean algebra is a Heyting algebra (with $(P \to Q) \equiv (\neg P \vee Q)$), but not conversely. For instance, the open-set lattice of a topological space is a Heyting algebra, but not generally a Boolean one. The logic corresponding to Heyting algebras is usually called intuitionistic logic, and as noted above it is also the traditional logic of constructive mathematics.

(Note: calling this “intuitionistic logic” is unfaithful to Brouwer’s original meaning of “intuitionism”, but unfortunately there seems to be no better name for it.)

Now we can further weaken the notion of Heyting algebra by asking for a closed symmetric monoidal lattice instead of a cartesian closed one. The corresponding logic is called intuitionistic linear logic: in addition to the meet $\wedge$ and join $\vee$, it has a tensor product $\otimes$ with internal-hom $\multimap$ satisfying an adjunction $(P \le Q \multimap R) \iff (P \otimes Q \le R)$. Logically, both $\wedge$ and $\otimes$ are versions of “and”; we call $\wedge$ the “additive conjunction” and $\otimes$ the “multiplicative conjunction”.

Note that to get from closed symmetric monoidal lattices to Boolean algebras by way of Heyting algebras, we first make the monoidal structure cartesian and then impose self-duality. (A Boolean algebra is precisely a Heyting algebra such that $P \equiv (P \to 0) \to 0$, where $0$ is the bottom element and $P \to 0$ is the intuitionistic “not $P$”.) But we can also impose self-duality first and then make the monoidal structure cartesian, obtaining an intermediate structure called a star-autonomous lattice: a closed symmetric monoidal lattice equipped with an object $\bot$ (not necessarily the bottom element) such that $P \equiv (P \multimap \bot) \multimap \bot$. Such a lattice has a contravariant involution defined by $P^\perp = (P \multimap \bot)$ and a “cotensor product” $(P \parr Q) \equiv (P^\perp \otimes Q^\perp)^\perp$, and its internal-hom is definable in terms of these: $(P \multimap Q) \equiv (P^\perp \parr Q)$. Its logic is called (classical) linear logic: in addition to the two conjunctions $\wedge$ and $\otimes$, it has two disjunctions $\vee$ and $\parr$, again called “additive” and “multiplicative”.

Star-autonomous lattices are not quite as easy to come by as Heyting algebras, but one general way to produce them is the Chu construction. (I blogged about this last year from a rather different perspective; in this post we’re restricting it to the case of posets.) Suppose $C$ is a closed symmetric monoidal lattice, and $\bot$ is any element of $C$ at all. Then there is a star-autonomous lattice $Chu(C,\bot)$ whose objects are pairs $(P^+,P^-)$ such that $P^+ \otimes P^- \le \bot$, and whose order is defined by

$((P^+,P^-) \le (Q^+,Q^-)) \iff (P^+ \le Q^+) \;\text{and}\; (Q^- \le P^-).$

In other words, $Chu(C,\bot)$ is a full sub-poset of $C \times C^{op}$. Its lattice operations are pointwise:

$(P^+,P^-) \wedge (Q^+,Q^-) \equiv (P^+ \wedge Q^+,\; P^- \vee Q^-)$

$(P^+,P^-) \vee (Q^+,Q^-) \equiv (P^+ \vee Q^+,\; P^- \wedge Q^-)$

while the tensor product is more interesting:

$(P^+,P^-) \otimes (Q^+,Q^-) \equiv (P^+ \otimes Q^+,\; (P^+ \multimap Q^-) \wedge (Q^+ \multimap P^-))$

The self-duality is

$(P^+,P^-)^\perp \equiv (P^-,P^+),$

from which we can deduce the definitions of $\parr$ and $\multimap$:

$(P^+,P^-) \parr (Q^+,Q^-) \equiv ((P^- \multimap Q^+) \wedge (Q^- \multimap P^+),\; P^- \otimes Q^-).$

$(P^+,P^-) \multimap (Q^+,Q^-) \equiv ((P^+ \multimap Q^+) \wedge (Q^- \multimap P^-),\; P^+ \otimes Q^-).$

Where do closed symmetric monoidal lattices come from? Well, they include all Heyting algebras! So if we have any Heyting algebra, all we need to do to get a star-autonomous lattice from the Chu construction is to pick an object $\bot$. One natural choice is the bottom element $0$, since that would be the self-dualizing object if our Heyting algebra were a Boolean algebra. The resulting monoidal structure on $Chu(H,0)$ is actually semicartesian: the monoidal unit coincides with the top element $(1,0)$.
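
As a concrete toy illustration (my own sketch in Python, not from the post or the paper; the class name Chu and the choice of Heyting algebra are made up for this example), here is $Chu(H,0)$ for the three-element chain $H = \{0, \tfrac{1}{2}, 1\}$, where the meet is min, the join is max, $\otimes$ coincides with the meet, and $\multimap$ is the Heyting implication:

from dataclasses import dataclass

# The Heyting algebra H: the three-element chain 0 < 0.5 < 1, with meet = min,
# join = max, and Heyting implication imp(a, b) = 1 if a <= b, else b.
def imp(a, b):
    return 1.0 if a <= b else b

@dataclass(frozen=True)
class Chu:
    """An element of Chu(H, 0): a pair (pos, neg) with min(pos, neg) == 0,
    read as 'what it takes to prove it' and 'what it takes to refute it'."""
    pos: float
    neg: float

    def __post_init__(self):
        assert min(self.pos, self.neg) == 0.0, "components must be incompatible"

    def dual(self):         # P^perp: swap proofs and refutations
        return Chu(self.neg, self.pos)

    def meet(self, q):      # additive conjunction: pointwise (min, max)
        return Chu(min(self.pos, q.pos), max(self.neg, q.neg))

    def join(self, q):      # additive disjunction: pointwise (max, min)
        return Chu(max(self.pos, q.pos), min(self.neg, q.neg))

    def tensor(self, q):    # multiplicative conjunction, as defined above
        return Chu(min(self.pos, q.pos),
                   min(imp(self.pos, q.neg), imp(q.pos, self.neg)))

    def par(self, q):       # multiplicative disjunction, via the self-duality
        return self.dual().tensor(q.dual()).dual()

    def leq(self, q):       # the order of Chu(H, 0)
        return self.pos <= q.pos and q.neg <= self.neg

TOP = Chu(1.0, 0.0)         # the monoidal unit, which is also the top element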

The point, now, is to look at this Chu construction $Chu(H,0)$ from the logical perspective, where elements of $H$ are regarded as propositions in intuitionistic logic. The elements of $Chu(H,0)$ are pairs $(P^+,P^-)$ such that $P^+ \wedge P^- = 0$, i.e. pairs of mutually incompatible propositions. We think of such a pair as a proposition $P$ together with information $P^+$ about what it means to affirm or prove it and also information $P^-$ about what it means to refute or disprove it. The condition $P^+ \wedge P^- = 0$ means that $P$ cannot be both proven and refuted; but we might still have propositions such as $(0,0)$ which can be neither proven nor refuted.

The above definitions of the operations in a Chu construction similarly translate into “explanations” of the additive connectives $\wedge,\vee$ and the multiplicative connectives $\otimes,\parr$ in terms of affirmations and refutations:

  • A proof of $P \wedge Q$ is a proof of $P$ together with a proof of $Q$. A refutation of $P \wedge Q$ is either a refutation of $P$ or a refutation of $Q$.
  • A proof of $P \vee Q$ is either a proof of $P$ or a proof of $Q$. A refutation of $P \vee Q$ is a refutation of $P$ together with a refutation of $Q$.
  • A proof of $P^\perp$ is a refutation of $P$. A refutation of $P^\perp$ is a proof of $P$.
  • A proof of $P \otimes Q$ is, like for $P \wedge Q$, a proof of $P$ together with a proof of $Q$. But a refutation of $P \otimes Q$ consists of both (1) a construction of a refutation of $Q$ from any proof of $P$, and (2) a construction of a refutation of $P$ from any proof of $Q$.
  • A proof of $P \parr Q$ consists of both (1) a construction of a proof of $Q$ from any refutation of $P$, and (2) a construction of a proof of $P$ from any refutation of $Q$. A refutation of $P \parr Q$ is, like for $P \vee Q$, a refutation of $P$ together with a refutation of $Q$.

These explanations constitute a “meaning explanation” for classical linear logic, parallel to the Brouwer-Heyting-Kolmogorov interpretation of intuitionistic logic. In particular, they justify the characteristic features of linear logic. For instance, the “additive law of excluded middle” $P \vee P^\perp$ fails for the same reason that it fails under the BHK-interpretation: we cannot decide for an arbitrary $P$ whether to prove it or refute it. However, the “multiplicative law of excluded middle” $P \parr P^\perp$ is a tautology: if we can refute $P$, then by definition we can prove $P^\perp$, while if we can refute $P^\perp$, then again by definition we can prove $P$. In general, the multiplicative disjunction $\parr$ often carries the “constructive content” of what the classical mathematician means by “or”, whereas the additive one $\vee$ carries the constructive mathematician’s meaning. (Note also that the “proof” clauses of $P \parr Q$ are essentially the disjunctive syllogism.)
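
In the toy $Chu(H,0)$ sketch above, this contrast between the two excluded middles can be checked directly (again just an illustration, with made-up values):

# Take P provable "to degree 1/2" and not refutable.
P = Chu(0.5, 0.0)

print(P.join(P.dual()))         # Chu(pos=0.5, neg=0.0): the additive P v P^perp is not the top element
print(P.par(P.dual()) == TOP)   # True: the multiplicative P par P^perp is a tautology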

I think this is already pretty neat. Linear logic, regarded as a logic, has always been rather mysterious to me, especially the multiplicative disjunction $\parr$ (and I know I’m not alone in that). But this construction explains both of them quite nicely.

However, there’s more. It’s not much good to have a logic if we can’t do mathematics with it, so let’s do some mathematics in linear logic, translate it into intuitionistic logic along this Chu construction, and see what we get. In this post I won’t be precise about the context in which this happens; the paper formalizes it more carefully as a “linear tripos”. Following the paper, I’ll call this the standard interpretation.

To start with, every set should have an equality relation. Thus, a set in linear logic (which I’ll call an “L-set”) will have a relation $(x =^L y)$ valued in linear propositions. Since each of these is an element of $Chu(H,0)$, it is a pair of mutually incompatible intuitionistic relations; we will call these $(x =^I y)$ and $(x \neq^I y)$ respectively.

Now we expect equality to be, among other things, a reflexive, symmetric, and transitive relation. Reflexivity means that $(x =^I x, x \neq^I x) \equiv (1,0)$, i.e. that $x =^I x$ is true (that is, $=^I$ is reflexive) and $x \neq^I x$ is false (that is, $\neq^I$ is irreflexive). Symmetry means that $(x =^I y, x \neq^I y) \equiv (y =^I x, y \neq^I x)$, i.e. that $(x =^I y) \equiv (y =^I x)$ and $(x \neq^I y) \equiv (y \neq^I x)$: that is, $=^I$ and $\neq^I$ are both symmetric.

Of course, transitivity says that if $x=y$ and $y=z$, then $x=z$. But in linear logic we have two different “and”s; which do we mean here? Suppose first we use the additive conjunction $\wedge$, so that transitivity says $(x =^L y) \wedge (y =^L z) \vdash (x =^L z)$ (here $\vdash$ is the logical equivalent of the inequality $\le$ in our lattices). Using the definition of $\wedge$ in $Chu(H,0)$, this says that $(x =^I y) \wedge (y =^I z) \vdash (x =^I z)$ (that is, $=^I$ is transitive) and $(x \neq^I z) \vdash (x \neq^I y) \vee (y \neq^I z)$ (this is sometimes called comparison).

Put together, the assertions that <semantics>= L<annotation encoding="application/x-tex">=^L</annotation></semantics> is reflexive, symmetric, and transitive say that (1) <semantics>= I<annotation encoding="application/x-tex">=^I</annotation></semantics> is reflexive, symmetric, and transitive, and (2) <semantics> I<annotation encoding="application/x-tex">\neq^I</annotation></semantics> is irreflexive, symmetric, and a comparison. Part (1) says essentially that an L-set has an underlying I-set, while (2) says that this I-set is equipped with an apartness relation.

Apartness relations are a well-known notion in constructive mathematics; if you’ve never encountered them before, here’s the idea. In classical mathematics, if we need to say that two things are “distinct” we just say that they are “not equal”. However, in intuitionistic logic, being “not equal” is not always a very useful thing. For instance, we cannot prove intuitionistically that if a real number is not equal to zero then it is invertible. However, we can prove that every real number that is apart from zero is invertible, where two real numbers are “apart” if there is a positive rational distance between them. This notion of “apart” is an irreflexive symmetric comparison which is stronger than “not equal”, and many other sets in intuitionistic mathematics are similarly equipped with such relations.
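For concreteness, here is the usual formulation in symbols (a standard definition, not quoted from the paper; the symbol $\#$ for apartness is my choice of notation):

```latex
% Apartness of real numbers: a positive rational distance witnesses the separation.
x \mathrel{\#} y \;:\equiv\; \exists\, q \in \mathbb{Q}.\; \bigl( q > 0 \,\wedge\, |x - y| > q \bigr)
% This relation is irreflexive and symmetric, satisfies the comparison property
%   x \# z \vdash x \# y \vee y \# z,
% and is stronger than \neg(x = y).  The invertibility statement mentioned above is then
\forall x \in \mathbb{R}.\; \bigl( x \mathrel{\#} 0 \;\rightarrow\; \exists\, y \in \mathbb{R}.\; x y = 1 \bigr)
```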

The point is that the standard interpretation automatically produces the notion of “apartness relation”, which constructive mathematicians using intuitionistic logic were led to from purely practical considerations. This same sort of thing happens over and over, and is what I mean by “constructive mathematicians have been using linear logic without realizing it”: they invented lots of concepts which are invisible to classical mathematics, and which may seem ad hoc at first, but actually arise automatically if we do mathematics “naturally” in linear logic and then pass across the standard interpretation.

To convince you that this happens all over the place, here are a bunch more examples.

  1. If $A$ and $B$ are L-sets, then in the cartesian product L-set $A\times B$ we have $(a_1,b_1) =^L (a_2,b_2)$ defined by $(a_1 =^L a_2) \wedge (b_1 =^L b_2)$. In the standard interpretation, this corresponds to the usual pairwise equality for ordered pairs and a disjunctive apartness: $(a_1,b_1) \neq^I (a_2,b_2)$ means $(a_1 \neq^I a_2) \vee (b_1 \neq^I b_2)$. That is, two ordered pairs differ if they differ in one component. (A small computational illustration of this example and example 3 appears after this list.)

  2. If $A$ and $B$ are L-sets, the elements of the function L-set $A\to B$ are I-functions that are strongly extensional: $(f(x) \neq^I f(y)) \vdash (x \neq^I y)$ (this is called “strong” because the apartness $\neq^I$ may be stronger than “not equal”). This is a common condition on functions between sets with apartness relations.

  3. Equality between functions $(f =^L g)$ is defined by $\forall x.\, (f(x) =^L g(x))$. I didn’t talk about quantifiers $\forall/\exists$ above, but they act like infinitary versions of $\wedge/\vee$. Thus we get $(f =^I g)$ meaning $\forall x.\, (f(x) =^I g(x))$, the usual pointwise equality of functions, and $(f \neq^I g)$ meaning $\exists x.\, (f(x) \neq^I g(x))$: two functions differ if they differ on at least one input.

  4. Because linear propositions are pairs of intuitionistic propositions, the elements of the L-powerset $P(A)$ of an L-set $A$ are pairs $(U^+,U^-)$ of I-subsets of the underlying I-set of $A$, which must additionally be strongly disjoint in that $(x\in U^+) \wedge (y\in U^-) \vdash (x \neq^I y)$. Bishop and Bridges introduced such pairs in their book Constructive Analysis under the name “complemented subset”. The substitution law $(x =^L y) \wedge (x\in U) \vdash (y\in U)$ translates to the requirement that $U^-$ be strongly extensional (also called “$\neq^I$-open”): $(y\in U^-) \vdash (x \neq^I y) \vee (x\in U^-)$.

  5. Equality between L-subsets $(U =^L V)$ means $\forall x.\, ((x\in U \multimap x\in V) \wedge (x\in V \multimap x\in U))$. In the standard interpretation, $(U^+,U^-) =^I (V^+,V^-)$ means $(U^+ = V^+) \wedge (U^- = V^-)$ as we would expect, while $(U^+,U^-) \neq^I (V^+,V^-)$ means $(\exists x \in U^+ \cap V^-) \vee (\exists x\in U^- \cap V^+)$. In particular, we have $U \neq^L \emptyset$ (where the empty L-subset is $(\emptyset,A)$) just when $\exists x\in U^+$. So the obvious linear notion of “nonempty” translates to the intuitionistic notion of inhabited that constructive mathematicians have found to be much more useful than the intuitionistic “not empty”.

  6. An L-group is an L-set with a multiplication $m\colon G\times G\to G$, unit $e\in G$, and inversion $i\colon G\to G$ satisfying the usual axioms. In the standard interpretation, this corresponds to an ordinary I-group equipped with an apartness $\neq^I$ such that multiplication and inversion are strongly extensional: $(x^{-1} \neq^I y^{-1}) \vdash (x \neq^I y)$ and $(x u \neq^I y v) \vdash (x \neq^I y) \vee (u \neq^I v)$. Groups with apartness have been studied in intuitionistic algebra going back to Heyting, and similarly for other algebraic structures like rings and modules.

  7. An L-subgroup of an L-group corresponds to a complemented subset $(H^+,H^-)$ as above such that $H^+$ is an ordinary I-subgroup and $H^-$ is an antisubgroup, meaning a set not containing $e$, closed under inversion ($(x\in H^-) \vdash (x^{-1}\in H^-)$), and satisfying $(x y \in H^-) \vdash (x\in H^-) \vee (y\in H^-)$. Antisubgroups have also been studied in constructive algebra; for instance, they enable us to define an apartness on the quotient $G/H^+$ by $[x] \neq^I [y]$ if $x y^{-1} \in H^-$.

  8. An L-poset is an L-set with a relation $\le^L$ that is reflexive, transitive ($(x\le^L y) \wedge (y\le^L z) \vdash (x\le^L z)$) and antisymmetric ($(x\le^L y) \wedge (y\le^L x) \vdash (x =^L y)$). Under the standard interpretation, this corresponds to two binary relations, which it is suggestive to write $\le^I$ and $<^I$: then $\le^I$ is an ordinary I-partial-order, $<^I$ is a “bimodule” over it ($(x\le^I y) \wedge (y <^I z) \vdash (x <^I z)$ and dually) which is cotransitive ($(x <^I z) \vdash (x <^I y) \vee (y <^I z)$) and “anti-antisymmetric” in the sense that $(x \neq^I y) \equiv ((x <^I y) \vee (y <^I x))$. Such pairs of strict and non-strict relations are quite common in constructive mathematics, for instance on the real numbers.
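To illustrate examples 1 and 3 above, here is a small brute-force check on a finite set (my own sketch, not from the paper). On a finite set the denial inequality is itself an apartness relation, so the product and function-space constructions can be verified exhaustively; the names `apart_prod`, `apart_fun`, and `is_apartness` are mine.

```python
# Examples 1 and 3, checked by brute force on a three-element set.
from itertools import product

A = [0, 1, 2]

def apart_A(x, y):            # on a finite set, "not equal" is already an apartness
    return x != y

# Example 1: the product apartness on A x A is "apart in some component".
def apart_prod(p, q):
    return apart_A(p[0], q[0]) or apart_A(p[1], q[1])

# Example 3: two functions A -> A are apart when they differ on some input.
FUNS = [dict(zip(A, values)) for values in product(A, repeat=len(A))]

def apart_fun(f, g):
    return any(apart_A(f[x], g[x]) for x in A)

def is_apartness(elements, apart):
    irreflexive = all(not apart(x, x) for x in elements)
    symmetric = all(apart(x, y) == apart(y, x) for x in elements for y in elements)
    # comparison: x # z entails x # y or y # z, for every y
    comparison = all((not apart(x, z)) or apart(x, y) or apart(y, z)
                     for x in elements for y in elements for z in elements)
    return irreflexive and symmetric and comparison

assert is_apartness([(a, b) for a in A for b in A], apart_prod)
assert is_apartness(FUNS, apart_fun)
```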

In the paper there are even more examples of this sort of thing. However, even this is not all! So far, we haven’t made any use of the multiplicative connectives in linear logic. It turns out that often, replacing some or all of the additive connectives in a definition by multiplicative ones yields, under the standard interpretation, a different intuitionistic version of the same classical definition that is also useful.

Here are a few examples. Note that by semicartesianness (or “affineness” of our logic), we have $P \otimes Q \vdash P \wedge Q$ and $P \vee Q \vdash P \parr Q$ (in a general star-autonomous lattice, there may be no implication either way). A brute-force check of these two inequalities in a toy model appears after the list below.

  1. A “$\vee$-field” is an L-ring such that $(x =^L 0) \vee \exists y.\, (x y =^L 1)$. Under the standard interpretation, this corresponds to an I-ring (with apartness) such that $(x =^I 0) \vee \exists y.\, (x y =^I 1)$, i.e. every element is either zero or invertible. The rational numbers are a field in this sense (sometimes called a geometric field or discrete field), but the real numbers are not. On the other hand, a “$\parr$-field” is an L-ring such that $(x =^L 0) \parr \exists y.\, (x y =^L 1)$. Under the standard interpretation, this corresponds to an I-ring (with apartness) such that $(x \neq^I 0) \to \exists y.\, (x y =^I 1)$ and $(\forall y.\, (x y \neq^I 1)) \to (x =^I 0)$, i.e. every element apart from zero is invertible and every “strongly noninvertible” element is zero. The real numbers are a field in this weaker sense (if the apartness is tight, this is called a Heyting field).

  2. In classical mathematics, $x \le y$ means $(x < y) \vee (x = y)$. Constructively, this is (again) true for integers and rationals but not the reals. However, it is true for the reals in linear logic that $(x \le^L y) \equiv (x <^L y) \parr (x =^L y)$.

  3. If in the definition of an L-set we weaken transitivity to $(x =^L y) \otimes (y =^L z) \vdash (x =^L z)$, then in the standard interpretation the comparison condition disappears, so that $\neq^I$ need only be irreflexive and symmetric, i.e. an inequality relation. (To be precise, the comparison condition is actually replaced by the weaker statement that $\neq^I$ satisfies substitution with respect to $=^I$.) This is useful because not every I-set has an apartness relation, but every I-set does have at least one inequality relation, namely “not equal” (the denial inequality). There are also other inequality relations that are not apartnesses. For instance, the inequality defined above on L-powersets is not an apartness but is still stronger than “not equal”, and in the paper there is an example of a very naturally-occurring normal L-subgroup of a very naturally occurring L-group for which the inequality on the quotient is not an apartness.
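Here is the brute-force check promised before the list (my own sketch). It works in the toy model where a linear proposition is an incompatible pair of booleans; the definitions of $\otimes$ and $\parr$ below are the usual Chu-construction ones, stated here as an assumption since they are not repeated in this post, and the small definitions from the earlier sketch are repeated so that this block runs on its own.

```python
# Brute-force check of  P (x) Q |- P /\ Q  and  P \/ Q |- P par Q
# in the toy model: a proposition is an incompatible pair of booleans.
from itertools import product

PROPS = [(p, m) for p, m in product([False, True], repeat=2) if not (p and m)]

def implies(p, q):
    return (not p) or q

def entails(a, b):
    return implies(a[0], b[0]) and implies(b[1], a[1])

def add_and(a, b):     # additive conjunction
    return (a[0] and b[0], a[1] or b[1])

def add_or(a, b):      # additive disjunction
    return (a[0] or b[0], a[1] and b[1])

def tensor(a, b):      # multiplicative conjunction (Chu tensor)
    return (a[0] and b[0], implies(a[0], b[1]) and implies(b[0], a[1]))

def par(a, b):         # multiplicative disjunction, the De Morgan dual of tensor
    return (implies(a[1], b[0]) and implies(b[1], a[0]), a[1] and b[1])

for a in PROPS:
    for b in PROPS:
        assert entails(tensor(a, b), add_and(a, b))   # P (x) Q |- P /\ Q
        assert entails(add_or(a, b), par(a, b))       # P \/ Q |- P par Q
```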

Again, there are more examples in the paper, including generalized metric spaces and topological/apartness spaces. (I should warn you that the terminology and notation in the paper is somewhat different; in this post I’ve been constrained in the math symbols available, and also omitted some adjectives for simplicity.)

What does this observation mean for constructive mathematics? Well, there are three levels at which it can be applied. Firstly, we can use it as a “machine” for producing constructive versions of classical definitions: write the classical definition in linear logic, making additive or multiplicative choices for the connectives, and then pass across the standard interpretation. The examples in the paper suggest that this is more “automatic” and less error-prone than the usual process of searching for constructive versions of classical notions.

Secondly, we can also use it as a machine for producing theorems about such constructive notions, by writing a proof in linear logic (perhaps by simply translating a classical proof — many classical proofs remain valid without significant changes) and translating across the standard interpretation. This can save effort and prevent mistakes (e.g. we don’t have to manually keep track of all the apartness relations, or worry about forgetting to prove that some function is strongly extensional).

Thirdly, if we start to do that a lot, we may notice that we might as well be doing constructive mathematics directly in linear logic. Linear logic has a “computational” interpretation just like intuitionistic logic does — its proofs satisfy “cut-elimination” and the “disjunction and existence properties” — so it should be just as good at ensuring that mathematics has computational content, with the benefit of not having to deal explicitly with apartness relations and so on. And the “meaning explanation” I described above, in terms of proofs and refutations, could theoretically have been given by Brouwer or Heyting instead of the now-standard BHK-interpretation. So one might argue that the prevalence of the latter is just a historical accident; maybe in the future a linear constructive mathematics will grow up alongside today’s “intuitionistic constructive mathematics”.

by shulman (viritrilbia@gmail.com) at May 22, 2018 02:36 AM

May 21, 2018

ZapperZ - Physics and Physicists

Graphene Could Kill Off Cancer Cells
Here's another example of how something that came out of physics is now finding an application in other fields, namely the medical field. Graphene, which was discovered quite a while back and won its two discoverers the Nobel Prize in Physics, has now found a possible application in fighting cancer.

It began with a theory -- scientists at the University of California knew graphene could convert light into electricity, and wondered whether that electricity had the capacity to stimulate human cells. Graphene is extremely sensitive to light (1,000 times more than traditional digital cameras and smartphones) and after experimenting with different light intensities, Alex Savchenko and his team discovered that cells could indeed be stimulated via optical graphene stimulation.

"I was looking at the microscope's computer screen and I'm turning the knob for light intensity and I see the cells start beating faster," he said. "I showed that to our grad students and they were yelling and jumping and asking if they could turn the knob. We had never seen this possibility of controlling cell contraction."

The source paper can be found here, and it is open-access.

Again, this is why it is vital that funding in basic physics continues at a healthy pace. Even if you do not see the immediate application or benefit of much of this seemingly esoteric research, you just never know when the discoveries and knowledge gained from such areas will turn into something that could save people's lives. We have seen such examples NUMEROUS times throughout history. Unfortunately, people are often ignorant of the origins of many of the benefits that they now take for granted.

Zz.

by ZapperZ (noreply@blogger.com) at May 21, 2018 01:08 PM

Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install correcting lenses in the early 1990s.
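For readers curious what that algorithm looks like in practice, here is a minimal one-dimensional sketch of Richardson–Lucy deconvolution (my own illustration, not code from Lucy's papers; the function and variable names are mine). The idea is simple: the current estimate is repeatedly multiplied by the re-blurred ratio of the observed data to the current prediction.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Minimal 1-D Richardson-Lucy deconvolution.

    observed : blurred, noisy data (non-negative array)
    psf      : point-spread function (odd length, normalized to sum to 1)
    """
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        predicted = np.convolve(estimate, psf, mode="same")
        ratio = observed / (predicted + eps)            # eps avoids division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Tiny demonstration: blur two point sources with a Gaussian, then sharpen them again.
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
x = np.arange(-6, 7)
psf = np.exp(-x**2 / 4.0); psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```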

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

by Andrew at May 21, 2018 09:27 AM

May 20, 2018

The n-Category Cafe

Circuits, Bond Graphs, and Signal-Flow Diagrams

My student Brandon Coya finished his thesis, and successfully defended it last Tuesday!

• Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018.

It’s about networks in engineering. He uses category theory to study the diagrams engineers like to draw, and functors to understand how these diagrams are interpreted.

His thesis raises some really interesting pure mathematical questions about the category of corelations and a ‘weak bimonoid’ that can be found in this category. Weak bimonoids were invented by Pastro and Street in their study of ‘quantum categories’, a generalization of quantum groups. So, it’s fascinating to see a weak bimonoid that plays an important role in electrical engineering!

However, in what follows I’ll stick to less fancy stuff: I’ll just explain the basic idea of Brandon’s thesis, say a bit about circuits and ‘bond graphs’, and outline his main results. What follows is heavily based on the introduction of his thesis, but I’ve baezified it a little.

The basic idea

People, and especially scientists and engineers, are naturally inclined to draw diagrams and pictures when they want to better understand a problem. One example is when Feynman introduced his famous diagrams in 1949; particle physicists have been using them ever since. But some other diagrams introduced by engineers are far more important to the functioning of the modern world and its technology. It’s outrageous, but sociologically understandable, that mathematicians have figured out more about Feynman diagrams than these other kinds: circuit diagrams, bond graphs and signal-flow diagrams. This is the problem Brandon aims to fix.

I’ve been unable to track down the early history of circuit diagrams, so if you know about that please tell me! But in the 1940s, Harry Olson pointed out analogies in electrical, mechanical, thermodynamic, hydraulic, and chemical systems, which allowed circuit diagrams to be applied to a wide variety of fields. And on April 24, 1959, Henry Paynter woke up and invented the diagrammatic language of bond graphs to study generalized versions of voltage and current, called ‘effort’ and ‘flow,’ which are implicit in the analogies found by Olson. Bond graphs are now widely used in engineering. On the other hand, control theorists use diagrams of a different kind, called ‘signal-flow diagrams’, to study linear open dynamical systems.

Although category theory predates some of these diagrams, it was not until the 1980s that Joyal and Street showed string diagrams can be used to reason about morphisms in any symmetric monoidal category. This motivates Brandon’s first goal: viewing electrical circuits, signal-flow diagrams, and bond graphs as string diagrams for morphisms in symmetric monoidal categories.

This lets us study networks from a compositional perspective. That is, we can study a big network by describing how it is composed of smaller pieces. Treating networks as morphisms in a symmetric monoidal category lets us build larger ones from smaller ones by composing and tensoring them: this makes the compositional perspective into precise mathematics. To study a network in this way we must first define a notion of ‘input’ and ‘output’ for the network diagram. Then gluing diagrams together, so long as the outputs of one match the inputs of the other, defines the composition for a category.

Network diagrams are typically assigned data, such as the potential and current associated to a wire in an electrical circuit. Since the relation between the data tells us how a network behaves, we call this relation the ‘behavior’ of a network. The way in which we assign behavior to a network comes from first treating a network as a ‘black box’, which is a system with inputs and outputs whose internal mechanisms are unknown or ignored. A simple example is the lock on a doorknob: one can insert a key and try to turn it; it either opens the door or not, and it fulfills this function without us needing to know its inner workings. We can treat a system as a black box through the process called ‘black-boxing’, which forgets its inner workings and records only the relation it imposes between its inputs and outputs.

Since systems with inputs and outputs can be seen as morphisms in a category we expect black-boxing to be a functor out of a category of this sort. Assigning each diagram its behavior in a functorial way is formalized by functorial semantics, first introduced in Lawvere’s thesis in 1963. This consists of using categories with specific extra structure as ‘theories’ whose ‘models’ are structure-preserving functors into other such categories. We then think of the diagrams as a syntax, while the behaviors are the semantics. Thus black-boxing is actually an example of functorial semantics. This leads us to another goal: to study the functorial semantics, i.e. black-boxing functors, for electrical circuits, signal-flow diagrams, and bond graphs.

Brendan Fong and I began this type of work by showing how to describe circuits made of wires, resistors, capacitors, and inductors as morphisms in a category using ‘decorated cospans’. Jason Erbele and I, and separately Bonchi, Sobociński and Zanasi, studied signal flow diagrams as morphisms in a category. In other work Brendan Fong, Blake Pollard and I looked at Markov processes, while Blake and I studied chemical reaction networks using decorated cospans. In all of these cases, we also studied the functorial semantics of these diagram languages.

Brandon’s main tool is the framework of ‘props’, also called ‘PROPs’, introduced by Mac Lane in 1965. The acronym stands for “products and permutations”, and these operations roughly describe what a prop can do. More precisely, a prop is a strict symmetric monoidal category equipped with a distinguished object $X$ such that every object is a tensor power $X^{\otimes n}$. Props arise because very often we think of a network as going between some set of input nodes and some set of output nodes, where the nodes are indistinguishable from each other. Thus we typically think of a network as simply having some natural number as an input and some natural number as an output, so that the network is actually a morphism in a prop.

Circuits and bond graphs

Now let’s take a quick tour of circuits and bond graphs. Much more detail can be found in Brandon’s thesis, but this may help you know what to picture when you hear terminology from electrical engineering.

Here is an electrical circuit made of only perfectly conductive wires:

This is just a graph, consisting of a set $N$ of nodes, a set $E$ of edges, and maps $s,t\colon E\to N$ sending each edge to its source and target node. We refer to the edges as perfectly conductive wires and say that wires go between nodes. Then associated to each perfectly conductive wire in an electrical circuit is a pair of real numbers called ‘potential’, $\phi$, and ‘current’, $I$.

Typically each node gets a potential, but in the above case the potential at either end of a wire would be the same, so we may as well associate the potential to the wire. Current and potential in circuits like these obey two laws due to Kirchhoff. First, at any node, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. The other law states that any connected wires must have the same potential.

We say that the above circuit is closed as opposed to being open because it does not have any inputs or outputs. In order to talk about open circuits and thereby bring the ‘compositional perspective’ into play, we need a notion of inputs and outputs of a circuit. We do this using two maps $i\colon X\to N$ and $o\colon Y \to N$ that specify the inputs and outputs of a circuit. Here is an example:

We call the sets $X$, $Y$, and the disjoint union $X + Y$ the inputs, outputs, and terminals of the circuit, respectively. To each terminal we associate a potential and current. In total this gives a space of allowed potentials and currents on the terminals, and we call this space the ‘behavior’ of the circuit. Since we do this association without knowing the potentials and currents inside the rest of the circuit, we call this process ‘black-boxing’ the circuit. This process hides the internal workings of the circuit and just tells us the relation between inputs and outputs. In fact this association is functorial, but to understand the functoriality first requires that we say how to compose these kinds of circuits. We save this for later.
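To make black-boxing concrete for circuits of perfectly conductive wires, here is a small sketch (my own, not from the thesis, using one common sign convention: every terminal current is counted as flowing into the circuit; the thesis may orient signs differently). The behavior it tests is the one determined by the underlying corelation: potentials agree on terminals joined through the circuit, and the terminal currents entering each connected component sum to zero.

```python
# Black-boxing a circuit of perfectly conductive wires, as a membership test:
# an assignment of (potential, current) to the terminals lies in the behavior
# iff potentials agree across each connected component and the terminal
# currents flowing into each component sum to zero.

def components(nodes, wires):
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for a, b in wires:
        parent[find(a)] = find(b)
    return {n: find(n) for n in nodes}

def in_behavior(nodes, wires, terminals, phi, current, tol=1e-9):
    """terminals: list of (terminal_name, node); phi, current: dicts keyed by terminal name."""
    comp = components(nodes, wires)
    by_comp = {}
    for t, n in terminals:
        by_comp.setdefault(comp[n], []).append(t)
    for ts in by_comp.values():
        potentials = [phi[t] for t in ts]
        if max(potentials) - min(potentials) > tol:
            return False
        if abs(sum(current[t] for t in ts)) > tol:
            return False
    return True

# A single wire from node 'a' to node 'b', with one input and one output terminal:
nodes, wires = ['a', 'b'], [('a', 'b')]
terminals = [('in', 'a'), ('out', 'b')]
assert in_behavior(nodes, wires, terminals, {'in': 5.0, 'out': 5.0}, {'in': 2.0, 'out': -2.0})
assert not in_behavior(nodes, wires, terminals, {'in': 5.0, 'out': 3.0}, {'in': 2.0, 'out': -2.0})
```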

There are also electrical circuits that have ‘components’ such as resistors, inductors, voltage sources, and current sources. These are graphs as above, but with edges now labelled by elements in some set L. Here is one for example:

We call this an L-circuit. We may also black-box an L-circuit to get a space of allowed potentials and currents, i.e. the behavior of the L-circuit, and this process is functorial as well. The components in a circuit determine the possible potential and current pairs because they impose additional relationships. For example, a resistor between two nodes has a resistance $R$ and is drawn as:

In an L-circuit this would be an edge labelled by some positive real number $R$. For a resistor like this Kirchhoff’s current law says $I_1 = I_2$ and Ohm’s law says $\phi_2 - \phi_1 = I_1 R$. This tells us how to construct the black-boxing functor that extracts the right behavior.
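As a quick worked example of how these relations combine when circuits are glued together (my own illustration, using the relations just stated): composing two resistors in series and eliminating the internal node recovers the familiar rule that resistances add.

```latex
% Two resistors R_1 and R_2 in series, with an internal node at potential \phi_2.
% Kirchhoff's current law at that node forces a single current I through both resistors:
\phi_2 - \phi_1 = I R_1 , \qquad \phi_3 - \phi_2 = I R_2 .
% Adding the two equations eliminates the internal potential \phi_2:
\phi_3 - \phi_1 = I \, (R_1 + R_2) ,
% which is exactly the behavior of a single resistor of resistance R_1 + R_2.
```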

Engineers often work with wires that come in pairs where the current on one wire is the negative of the current on the other wire. In such a case engineers care about the difference in potential more than each individual potential. For such pairs of perfectly conductive wires:

we call $V = \phi_2 - \phi_1$ the ‘voltage’ and $I = I_1 = -I_2$ the ‘current’. Note the word current is used for two different, yet related concepts. We call a pair of wires like this a ‘bond’ and a pair of nodes like this a ‘port’. To summarize we say that bonds go between ports, and in a ‘bond graph’ we draw a bond as follows:

Note that engineers do not explicitly draw ports at the ends of bonds; we follow this notation and simply draw a bond as a thickened edge. Engineers who work with bond graphs often use the terms ‘effort’ and ‘flow’ instead of voltage and current. Thus a bond between two ports in a bond graph is drawn equipped with an effort and flow, rather than a voltage and current, as follows:

A bond graph consists of bonds connected together using ‘1-junctions’ and ‘0-junctions’. These two types of junctions impose equations between the efforts and flows on the attached bonds. The flows on bonds connected together with a 1-junction are all equal, while the efforts sum to zero, after sprinkling in some signs depending on how we orient the bonds. For 0-junctions it works the other way: the efforts are all equal while the flows sum to zero! The duality here is well-known to engineers but perhaps less so to mathematicians. This is one topic Brandon’s thesis explores.
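In symbols, for a junction with $n$ attached bonds carrying efforts $e_i$ and flows $f_i$, and orientation signs $\varepsilon_i = \pm 1$ determined by how each bond points (a standard formulation of the junction laws, not taken verbatim from the thesis):

```latex
% 1-junction ("series"): a common flow, efforts balance.
f_1 = f_2 = \cdots = f_n , \qquad \sum_{i=1}^{n} \varepsilon_i \, e_i = 0 .
% 0-junction ("parallel"): a common effort, flows balance.
e_1 = e_2 = \cdots = e_n , \qquad \sum_{i=1}^{n} \varepsilon_i \, f_i = 0 .
```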

Brandon explains bond graphs in more detail in Chapter 5 of his thesis, but here is an example:

The arrow at the end of a bond indicates which direction of current flow counts as positive, while the bar is called the ‘causal stroke’. These are unnecessary for Brandon’s work, so he adopts a simplified notation without the arrow or bar. In engineering it’s also important to attach general circuit components, but Brandon doesn’t consider these.

Outline

In Chapter 2 of his thesis, Brandon provides the necessary background for studying four categories as props:

• the category of finite sets and spans: $\mathrm{FinSpan}$

• the category of finite sets and relations: $\mathrm{FinRel}$

• the category of finite sets and cospans: $\mathrm{FinCospan}$

• the category of finite sets and corelations: $\mathrm{FinCorel}$.

In particular, $\mathrm{FinCospan}$ and $\mathrm{FinCorel}$ are crucial to the study of networks.

In Corollary 2.3.4 he notes that any prop has a presentation in terms of generators and equations. Then he recalls the known presentations for $\mathrm{FinSpan}$, $\mathrm{FinCospan}$, and $\mathrm{FinRel}$. Proposition 2.3.7 lets us build props as quotients of other props.

He begins Chapter 3 by showing that $\mathrm{FinCorel}$ is ‘the prop for extraspecial commutative Frobenius monoids’, based on a paper he wrote with Brendan Fong. This result also gives a presentation for $\mathrm{FinCorel}$.

Then he defines an “L-circuit” as a graph with specified inputs and outputs where all the edges are labeled by elements of some set L. L-circuits are morphisms in the prop $\mathrm{Circ}_L$. In Proposition 3.2.8 he uses a result of Rosebrugh, Sabadini and Walters to show that $\mathrm{Circ}_L$ can be viewed as the coproduct of $\mathrm{FinCospan}$ and the free prop on the set L of labels.

Brandon then defines $\mathrm{Circ}$ to be the prop $\mathrm{Circ}_L$ where L consists of a single element. This example is important, because $\mathrm{Circ}$ can be seen as the category whose morphisms are circuits made of only perfectly conductive wires! From any morphism in $\mathrm{Circ}$ he extracts a cospan of finite sets and then turns the cospan into a corelation. These two processes are functorial, so he gets a method for sending a circuit made of only perfectly conductive wires to a corelation:

$$\mathrm{Circ} \stackrel{H'}{\longrightarrow} \mathrm{FinCospan} \stackrel{H}{\longrightarrow} \mathrm{FinCorel}$$
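For readers who want to see the combinatorics behind these functors, here is a small sketch of the two steps for finite sets (my own illustration of the standard constructions, not code from the thesis): cospans compose by pushout, which for finite sets amounts to a disjoint union followed by gluing, and a cospan is collapsed to a corelation by remembering only which inputs and outputs end up identified.

```python
# A cospan of finite sets m -> S <- n is stored as (m, n, S, left, right),
# where left and right are dicts sending 0..m-1 and 0..n-1 into the apex S.

def compose(c1, c2):
    """Compose cospans m -> S1 <- n and n -> S2 <- p by pushout over the shared n."""
    m, n1, S1, l1, r1 = c1
    n2, p, S2, l2, r2 = c2
    assert n1 == n2
    apex = [('1', s) for s in S1] + [('2', s) for s in S2]   # disjoint union of apexes
    parent = {a: a for a in apex}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for j in range(n1):                                       # glue r1(j) ~ l2(j)
        parent[find(('1', r1[j]))] = find(('2', l2[j]))
    left = {i: find(('1', l1[i])) for i in range(m)}
    right = {k: find(('2', r2[k])) for k in range(p)}
    return (m, p, [a for a in apex if find(a) == a], left, right)

def corelation(cospan):
    """Collapse a cospan m -> S <- n to the partition it induces on the terminals."""
    m, n, S, left, right = cospan
    blocks = {}
    for t in range(m):
        blocks.setdefault(left[t], set()).add(('in', t))
    for t in range(n):
        blocks.setdefault(right[t], set()).add(('out', t))
    return list(blocks.values())

# Two copies of "a single wire", the cospan 1 -> {*} <- 1, composed end to end:
wire = (1, 1, ['*'], {0: '*'}, {0: '*'})
print(corelation(compose(wire, wire)))   # one block containing the input and the output
```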

There is also a functor

$$K\colon \mathrm{FinCorel} \to \mathrm{FinRel}_k$$

where $\mathrm{FinRel}_k$ is the category whose objects are finite-dimensional vector spaces and whose morphisms $R\colon U\to V$ are linear relations, that is, linear subspaces $R\subseteq U \oplus V$. By composing with the above functors $H'$ and $H$ he associates a linear relation $R$ to any circuit made of perfectly conductive wires. On the other hand he gets a subspace for any such circuit by first assigning potential and current to each terminal, and then subjecting these variables to the appropriate physical laws.
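Composition in $\mathrm{FinRel}_k$ is ordinary composition of relations, which for linear subspaces comes down to basic linear algebra. Here is a small numpy sketch (my own, for $k = \mathbb{R}$, glossing over the sign conventions the thesis uses to make black-boxing functorial): a relation is stored as a matrix whose columns span the subspace, split into a $U$-block and a $V$-block, and composition solves for coefficient vectors whose middle parts agree.

```python
import numpy as np

def null_space(A, tol=1e-10):
    # Columns form an orthonormal basis of the null space of A (via SVD).
    A = np.atleast_2d(A)
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].T

def compose(R, S, dim_u, dim_v):
    """Compose linear relations R in U (+) V and S in V (+) W.

    R and S are matrices whose columns span the subspaces, with the first
    dim_u (resp. dim_v) rows giving the U-part (resp. V-part).
    Returns a matrix whose columns span the composite relation in U (+) W.
    """
    P, Q = R[:dim_u], R[dim_u:]         # U-part and V-part of R's generators
    M, N = S[:dim_v], S[dim_v:]         # V-part and W-part of S's generators
    K = null_space(np.hstack([Q, -M]))  # coefficient pairs (a, b) with Q a = M b
    a, b = K[:R.shape[1]], K[R.shape[1]:]
    return np.vstack([P @ a, N @ b])

# A resistor's behavior as a relation from (phi_1, I_1) to (phi_2, I_2),
# using the relations stated in the text: I_2 = I_1 and phi_2 - phi_1 = I_1 R.
def resistor(R_ohms):
    return np.array([[1.0, 0.0],        # phi_1
                     [0.0, 1.0],        # I_1
                     [1.0, R_ohms],     # phi_2 = phi_1 + I_1 R
                     [0.0, 1.0]])       # I_2 = I_1

series = compose(resistor(2.0), resistor(3.0), dim_u=2, dim_v=2)
# The composite is 2-dimensional and satisfies phi_3 - phi_1 = 5 I_1, I_3 = I_1,
# matching the series-resistor calculation earlier in this post.
```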

It turns out that these two ways of assigning a subspace to a morphism in $\mathrm{Circ}$ are the same. So, he calls the linear relation associated to a circuit using the composite $K H H'$ the “behavior” of the circuit and defines the “black-boxing” functor

$$\blacksquare \colon \mathrm{Circ}\to \mathrm{FinRel}_k$$

to be this composite.

Note that the underlying corelation of a circuit made of perfectly conductive wires completely determines the behavior of the circuit via the functor $K$.

In Chapter 4 he reinterprets the black-boxing functor $\blacksquare$ as a morphism of props. He does this by introducing the category $\mathrm{LagRel}_k$, whose objects are “symplectic” vector spaces and whose morphisms are “Lagrangian” relations. In Proposition 4.1.6 he proves that the functor $K\colon \mathrm{FinCorel} \to \mathrm{FinRel}_k$ actually picks out a Lagrangian relation for any corelation and thus determines a morphism of props. So, he redefines $K$ to be this morphism

$$K\colon \mathrm{FinCorel} \to \mathrm{LagRel}_k$$

and reinterprets black-boxing as the composite

$$\mathrm{Circ} \stackrel{H'}{\longrightarrow} \mathrm{FinCospan} \stackrel{H}{\longrightarrow} \mathrm{FinCorel} \stackrel{K}{\longrightarrow} \mathrm{LagRel}_k$$

After doing all this hard work for circuits made of perfectly conductive wires — a warmup exercise that engineers might scoff at — Brandon shows the power of his results by easily extending the black-boxing functor to circuits with arbitrary label sets in Theorem 4.2.1. He applies this result to a prop whose morphisms are circuits made of resistors, inductors, and capacitors. Then he considers a more general and mathematically more natural approach to linear circuits using the prop $\mathrm{Circ}_k$. The morphisms here are open circuits with wires labelled by elements of some chosen field $k$. In Theorem 4.2.4 he proves the existence of a morphism of props

$$\blacksquare \colon \mathrm{Circ}_k \to \mathrm{LagRel}_k$$

that describes the black-boxing of circuits built from arbitrary linear components.

Brandon then picks up where Jason Erbele’s thesis left off, and recalls how control theorists use “signal-flow diagrams” to draw linear relations. These diagrams make up the category $\mathrm{SigFlow}_k$, which is the free prop generated by the same generators as $\mathrm{FinRel}_k$. Similarly he defines the prop $\widetilde{\mathrm{Circ}}_k$ as the free prop generated by the same generators as $\mathrm{Circ}_k$. Then there is a strict symmetric monoidal functor $T \colon \widetilde{\mathrm{Circ}}_k \to \mathrm{SigFlow}_k$ giving a commutative square:

Of course, circuits made of perfectly conductive wires are a special case of linear circuits. We can express this fact using another commutative square:

Combining the diagrams so far, Brandon gets a commutative diagram summarizing the relationship between linear circuits, cospans, corelations, and signal-flow diagrams:

Brandon concludes Chapter 4 by extending his work to circuits with voltage and current sources. These types of circuits define affine relations instead of linear relations. The prop framework lets Brandon extend black-boxing to these types of circuits by showing that affine Lagrangian relations are morphisms in a prop $\mathrm{AffLagRel}_k$. This leads to Theorem 4.4.5, which says that for any field $k$ and label set L there is a unique morphism of props

$$\blacksquare \colon \mathrm{Circ}_L \to \mathrm{AffLagRel}_k$$

extending the other black-boxing functor and sending each element of L to an arbitrarily chosen affine Lagrangian relation between potentials and currents.

In Chapter 5, Brandon studies bond graphs as morphisms in a category. His goal is to define a category $\mathrm{BondGraph}$, whose morphisms are bond graphs, and then assign a space of efforts and flows as behavior to any bond graph using a functor. He also constructs a functor that assigns a space of potentials and currents to any bond graph, which agrees with the way that potential and current relate to effort and flow.

The subtle way he defines $\mathrm{BondGraph}$ comes from two different approaches to studying bond graphs, and the problems inherent in each approach. The first approach leads him to a subcategory $\mathrm{FinCorel}^\circ$ of $\mathrm{FinCorel}$, while the second leads him to a subcategory $\mathrm{LagRel}_k^\circ$ of $\mathrm{LagRel}_k$. There isn’t a commutative square relating these four categories, but Brandon obtains a pentagon that commutes up to a natural transformation by inventing a new category $\mathrm{BondGraph}$:

This category is a way of formalizing Paynter’s idea of bond graphs.

In his first approach, Brandon views a bond graph as an electrical circuit. He takes advantage of his earlier work on circuits and corelations by taking $\mathrm{FinCorel}$ to be the category whose morphisms are circuits made of perfectly conductive wires. In this approach a terminal is the object 1 and a wire is the identity corelation from 1 to 1, while a circuit from m terminals to n terminals is a corelation from m to n.

In this approach Brandon thinks of a port as the object 2, since a port is a pair of nodes. Then he thinks of a bond as a pair of wires and hence the identity corelation from 2 to 2. Lastly, the two junctions are two different ways of connecting ports together, and thus specific corelations from 2m to 2n. It turns out that by following these ideas he can equip the object 2 with two different Frobenius monoid structures, which behave very much like 1-junctions and 0-junctions in bond graphs!

It would be great if the morphisms built from these two Frobenius monoids corresponded perfectly to bond graphs. Unfortunately there are some equations which hold between morphisms made from these Frobenius monoids that do not hold for corresponding bond graphs. So, Brandon defines a category $\mathrm{FinCorel}^\circ$ using the morphisms that come from these two Frobenius monoids and moves on to a second attempt at defining $\mathrm{BondGraph}$.

Since bond graphs impose Lagrangian relations between effort and flow, this second approach starts by looking back at $\mathrm{LagRel}_k$. The relations associated to a 1-junction make $k\oplus k$ into yet another Frobenius monoid, while the relations associated to a 0-junction make $k\oplus k$ into a different Frobenius monoid. These two Frobenius monoid structures interact to form a bimonoid! Unfortunately, a bimonoid has some equations between morphisms that do not correspond to equations between bond graphs, so this approach also does not result in morphisms that are bond graphs. Nonetheless, Brandon defines a category $\mathrm{LagRel}_k^\circ$ using the two Frobenius monoid structures on $k\oplus k$.

Since it turns out that $\mathrm{FinCorel}^\circ$ and $\mathrm{LagRel}_k^\circ$ have corresponding generators, Brandon defines $\mathrm{BondGraph}$ as a prop that also has corresponding generators, but with only the equations found in both $\mathrm{FinCorel}^\circ$ and $\mathrm{LagRel}_k^\circ$. By defining $\mathrm{BondGraph}$ in this way he automatically gets two functors

$$F\colon \mathrm{BondGraph} \to \mathrm{LagRel}_k^\circ$$

and

$$G\colon \mathrm{BondGraph} \to \mathrm{FinCorel}^\circ$$

The functor $F$ associates effort and flow to a bond graph, while the functor $G$ lets us associate potential and current to a bond graph using the previous work done on $\mathrm{FinCorel}$. Then the Lagrangian subspace relating effort, flow, potential, and current:

$$\{(V,I,\phi_1,I_1,\phi_2,I_2) \mid V = \phi_2-\phi_1,\; I = I_1 = -I_2\}$$

defines a natural transformation in the following diagram:

Putting this together with the diagram we saw before, Brandon gets a giant diagram which encompasses the relationships between circuits, signal-flow diagrams, bond graphs, and their behaviors in category theoretic terms:

This diagram is a nice quick road map of his thesis. Of course, you need to understand all the categories in this diagram, all the functors, and also their applications to engineering, to fully appreciate what he has accomplished! But his thesis explains that.

To learn more

Coya’s thesis has lots of references, but if you want to see diagrams at work in actual engineering, here are some good textbooks on bond graphs:

• D. C. Karnopp, D. L. Margolis and R. C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

• F. T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, Taylor and Francis, New York, 2007.

and here’s a good one on signal-flow diagrams:

• B. Friedland, Control System Design: An Introduction to State-Space Methods, S. W. Director (ed.), McGraw–Hill Higher Education, 1985.

by john (baez@math.ucr.edu) at May 20, 2018 04:13 PM

The n-Category Cafe

Postdoc at the Centre of Australian Category Theory

The Centre of Australian Category Theory is advertising for a postdoc. The position is for 2 years and the ad is here.

Applications close on 15 June. Most questions about the position would be best directed to Richard Garner or Steve Lack. You can also find out more about CoACT here.

This is a great opportunity to join a fantastic research group. Please help spread the word to those who might be interested!

by riehl (eriehl@math.jhu.edu) at May 20, 2018 02:33 PM

May 19, 2018

John Baez - Azimuth

Circuits, Bond Graphs, and Signal-Flow Diagrams

 

My student Brandon Coya finished his thesis, and successfully defended it last Tuesday!

• Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018.

It’s about networks in engineering. He uses category theory to study the diagrams engineers like to draw, and functors to understand how these diagrams are interpreted.

His thesis raises some really interesting pure mathematical questions about the category of corelations and a ‘weak bimonoid’ that can be found in this category. Weak bimonoids were invented by Pastro and Street in their study of ‘quantum categories’, a generalization of quantum groups. So, it’s fascinating to see a weak bimonoid that plays an important role in electrical engineering!

However, in what follows I’ll stick to less fancy stuff: I’ll just explain the basic idea of Brandon’s thesis, say a bit about circuits and ‘bond graphs’, and outline his main results. What follows is heavily based on the introduction of his thesis, but I’ve baezified it a little.

The basic idea

People, and especially scientists and engineers, are naturally inclined to draw diagrams and pictures when they want to better understand a problem. One example is when Feynman introduced his famous diagrams in 1949; particle physicists have been using them ever since. But some other diagrams introduced by engineers are far more important to the functioning of the modern world and its technology. It’s outrageous, but sociologically understandable, that mathematicians have figured out more about Feynman diagrams than these other kinds: circuit diagrams, bond graphs and signal-flow diagrams. This is the problem Brandon aims to fix.

I’ve been unable to track down the early history of circuit diagrams, so if you know about that please tell me! But in the 1940s, Harry Olson pointed out analogies in electrical, mechanical, thermodynamic, hydraulic, and chemical systems, which allowed circuit diagrams to be applied to a wide variety of fields. On April 24, 1959, Henry Paynter woke up and invented the diagrammatic language of bond graphs to study generalized versions of voltage and current, called ‘effort’ and ‘flow,’ which are implicit in the analogies found by Olson. Bond graphs are now widely used in engineering. On the other hand, control theorists use diagrams of a different kind, called ‘signal-flow diagrams’, to study linear open dynamical systems.

Although category theory predates some of these diagrams, it was not until the 1980s that Joyal and Street showed string diagrams can be used to reason about morphisms in any symmetric monoidal category. This motivates Brandon’s first goal: viewing electrical circuits, signal-flow diagrams, and bond graphs as string diagrams for morphisms in symmetric monoidal categories.

This lets us study networks from a compositional perspective. That is, we can study a big network by describing how it is composed of smaller pieces. Treating networks as morphisms in a symmetric monoidal category lets us build larger ones from smaller ones by composing and tensoring them: this makes the compositional perspective into precise mathematics. To study a network in this way we must first define a notion of ‘input’ and ‘output’ for the network diagram. Then gluing diagrams together, so long as the outputs of one match the inputs of the other, defines the composition for a category.

Network diagrams are typically assigned data, such as the potential and current associated to a wire in an electrical circuit. Since the relation between the data tells us how a network behaves, we call this relation the ‘behavior’ of a network. The way in which we assign behavior to a network comes from first treating a network as a ‘black box’, which is a system with inputs and outputs whose internal mechanisms are unknown or ignored. A simple example is the lock on a doorknob: one can insert a key and try to turn it; it either opens the door or not, and it fulfills this function without us needing to know its inner workings. We can treat a system as a black box through the process called ‘black-boxing’, which forgets its inner workings and records only the relation it imposes between its inputs and outputs.

Since systems with inputs and outputs can be seen as morphisms in a category we expect black-boxing to be a functor out of a category of this sort. Assigning each diagram its behavior in a functorial way is formalized by functorial semantics, first introduced in Lawvere’s thesis in 1963. This consists of using categories with specific extra structure as ‘theories’ whose ‘models’ are structure-preserving functors into other such categories. We then think of the diagrams as a syntax, while the behaviors are the semantics. Thus black-boxing is actually an example of functorial semantics. This leads us to another goal: to study the functorial semantics, i.e. black-boxing functors, for electrical circuits, signal-flow diagrams, and bond graphs.

Brendan Fong and I began this type of work by showing how to describe circuits made of wires, resistors, capacitors, and inductors as morphisms in a category using ‘decorated cospans’. Jason Erbele and I, and separately Bonchi, Sobociński and Zanasi, studied signal flow diagrams as morphisms in a category. In other work Brendan Fong, Blake Pollard and I looked at Markov processes, while Blake and I studied chemical reaction networks using decorated cospans. In all of these cases, we also studied the functorial semantics of these diagram languages.

Brandon’s main tool is the framework of ‘props’, also called ‘PROPs’, introduced by Mac Lane in 1965. The acronym stands for “products and permutations”, and these operations roughly describe what a prop can do. More precisely, a prop is a strict symmetric monoidal category equipped with a distinguished object X such that every object is a tensor power X^{\otimes n}. Props arise because very often we think of a network as going between some set of input nodes and some set of output nodes, where the nodes are indistinguishable from each other. Thus we typically think of a network as simply having some natural number as an input and some natural number as an output, so that the network is actually a morphism in a prop.

Circuits and bond graphs

Now let’s take a quick tour of circuits and bond graphs. Much more detail can be found in Brandon’s thesis, but this may help you know what to picture when you hear terminology from electrical engineering.

Here is an electrical circuit made of only perfectly conductive wires:

This is just a graph, consisting of a set N of nodes, a set E of edges, and maps s,t\colon E\to N sending each edge to its source and target node. We refer to the edges as perfectly conductive wires and say that wires go between nodes. Then associated to each perfectly conductive wire in an electrical circuit is a pair of real numbers called ‘potential’, \phi, and ‘current’, I.

Typically each node gets a potential, but in the above case the potential at either end of a wire would be the same so we may as well associate the potential to the wire. Current and potential in circuits like these obey two laws due to Kirchhoff. First, at any node, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. The other law states that any connected wires must have the same potential.
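
To make the two laws concrete, here is a minimal Python sketch (my own toy encoding, not code from the thesis) of a closed circuit of perfectly conductive wires, with both checks written out:

from collections import defaultdict

# Each wire is (source node, target node, potential phi, current I);
# this little triangle of wires satisfies both of Kirchhoff's laws.
nodes = {0, 1, 2}
wires = [
    (0, 1, 5.0, 2.0),
    (1, 2, 5.0, 2.0),
    (2, 0, 5.0, 2.0),
]

def current_law_holds(nodes, wires):
    # At every node, the current flowing in equals the current flowing out.
    net = defaultdict(float)
    for s, t, phi, I in wires:
        net[s] -= I   # current leaves the source node
        net[t] += I   # and enters the target node
    return all(abs(net[n]) < 1e-12 for n in nodes)

def potential_law_holds(wires):
    # Connected wires carry the same potential (this circuit is one loop,
    # so all potentials must agree).
    return len({phi for _, _, phi, _ in wires}) == 1

print(current_law_holds(nodes, wires), potential_law_holds(wires))  # True True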

We say that the above circuit is closed as opposed to being open because it does not have any inputs or outputs. In order to talk about open circuits and thereby bring the ‘compositional perspective’ into play we need a notion for inputs and outputs of a circuit. We do this using two maps i\colon X\to N and o\colon Y \to N that specify the inputs and outputs of a circuit. Here is an example:

We call the sets X, Y, and the disjoint union X + Y the inputs, outputs, and terminals of the circuit, respectively. To each terminal we associate a potential and current. In total this gives a space of allowed potentials and currents on the terminals and we call this space the ‘behavior’ of the circuit. Since we do this association without knowing the potentials and currents inside the rest of the circuit we call this process ‘black-boxing’ the circuit. This process hides the internal workings of the circuit and just tells us the relation between inputs and outputs. In fact this association is functorial, but to understand the functoriality first requires that we say how to compose these kinds of circuits. We save this for later.

There are also electrical circuits that have ‘components’ such as resistors, inductors, voltage sources, and current sources. These are graphs as above, but with edges now labelled by elements in some set L. Here is one for example:

We call this an L-circuit. We may also black-box an L-circuit to get a space of allowed potentials and currents, i.e. the behavior of the L-circuit, and this process is functorial as well. The components in a circuit determine the possible potential and current pairs because they impose additional relationships. For example, a resistor between two nodes has a resistance R and is drawn as:

In an L-circuit this would be an edge labelled by some positive real number R. For a resistor like this Kirchhoff’s current law says I_1=I_2 and Ohm’s Law says \phi_2-\phi_1 =I_1R. This tells us how to construct the black-boxing functor that extracts the right behavior.
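
For instance, here is a small numpy sketch (my own, not from the thesis) of the resistor’s behavior as a linear subspace of the potentials and currents at its two terminals:

import numpy as np

R = 3.0
# Constraints from the text:  I1 - I2 = 0   and   phi2 - phi1 - R*I1 = 0,
# acting on the vector (phi1, I1, phi2, I2).
A = np.array([
    [ 0.0, 1.0, 0.0, -1.0],   # I1 = I2
    [-1.0,  -R, 1.0,  0.0],   # phi2 - phi1 = I1 * R
])

# The behavior is the null space of A: a 2-dimensional subspace of k^4,
# i.e. a linear relation between (phi1, I1) and (phi2, I2).
rank = np.linalg.matrix_rank(A)
_, _, Vt = np.linalg.svd(A)
behavior = Vt[rank:]            # rows spanning ker A
print(behavior.shape)           # (2, 4)

# Sanity check: phi1 = 0, I1 = 1, phi2 = R, I2 = 1 satisfies both laws.
print(np.allclose(A @ np.array([0.0, 1.0, R, 1.0]), 0))   # True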

Engineers often work with wires that come in pairs where the current on one wire is the negative of the current on the other wire. In such a case engineers care about the difference in potential more than each individual potential. For such pairs of perfectly conductive wires:

we call V=\phi_2-\phi_1 the ‘voltage’ and I=I_1=-I_2 the ‘current’. Note the word current is used for two different, yet related concepts. We call a pair of wires like this a ‘bond’ and a pair of nodes like this a ‘port’. To summarize we say that bonds go between ports, and in a ‘bond graph’ we draw a bond as follows:

Note that engineers do not explicitly draw ports at the ends of bonds; we follow this notation and simply draw a bond as a thickened edge. Engineers who work with bond graphs often use the terms ‘effort’ and ‘flow’ instead of voltage and current. Thus a bond between two ports in a bond graph is drawn equipped with an effort and flow, rather than a voltage and current, as follows:

A bond graph consists of bonds connected together using ‘1-junctions’ and ‘0-junctions’. These two types of junctions impose equations between the efforts and flows on the attached bonds. The flows on bonds connected together with a 1-junction are all equal, while the efforts sum to zero, after sprinkling in some signs depending on how we orient the bonds. For 0-junctions it works the other way: the efforts are all equal while the flows sum to zero! The duality here is well-known to engineers but perhaps less so to mathematicians. This is one topic Brandon’s thesis explores.
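
Here is a tiny numpy sketch of that duality (my own toy encoding, with all three bonds oriented inward so no extra signs appear):

import numpy as np

# Variables ordered as (e1, e2, e3, f1, f2, f3) for a junction with 3 bonds.
# 0-junction: all efforts equal, flows sum to zero.
zero_junction = np.array([
    [1, -1,  0, 0, 0, 0],   # e1 = e2
    [0,  1, -1, 0, 0, 0],   # e2 = e3
    [0,  0,  0, 1, 1, 1],   # f1 + f2 + f3 = 0
])

# 1-junction: all flows equal, efforts sum to zero.
one_junction = np.array([
    [0, 0, 0, 1, -1,  0],   # f1 = f2
    [0, 0, 0, 0,  1, -1],   # f2 = f3
    [1, 1, 1, 0,  0,  0],   # e1 + e2 + e3 = 0
])

# Swapping the effort block with the flow block turns one junction into the other.
swapped = np.hstack([one_junction[:, 3:], one_junction[:, :3]])
print(np.array_equal(swapped, zero_junction))   # True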

Brandon explains bond graphs in more detail in Chapter 5 of his thesis, but here is an example:

The arrow at the end of a bond indicates which direction of current flow counts as positive, while the bar is called the ‘causal stroke’. These are unnecessary for Brandon’s work, so he adopts a simplified notation without the arrow or bar. In engineering it’s also important to attach general circuit components, but Brandon doesn’t consider these.

Outline

In Chapter 2 of his thesis, Brandon provides the necessary background for studying four categories as props:

• the category of finite sets and spans: \textrm{FinSpan}

• the category of finite sets and relations: \textrm{FinRel}

• the category of finite sets and cospans: \textrm{FinCospan}

• the category of finite sets and corelations: \textrm{FinCorel}.

In particular, \textrm{FinCospan} and \textrm{FinCorel} are crucial to the study of networks.

In Corollary 2.3.4 he notes that any prop has a presentation in terms of generators and equations. Then he recalls the known presentations for \textrm{FinSpan}, \textrm{FinCospan}, and \textrm{FinRel}. Proposition 2.3.7 lets us build props as quotients of other props.

He begins Chapter 3 by showing that \mathrm{FinCorel} is ‘the prop for extraspecial commutative Frobenius monoids’, based on a paper he wrote with Brendan Fong. This result also gives a presentation for \mathrm{FinCorel}.

Then he defines an “L-circuit” as a graph with specified inputs and outputs, together with a labelling set for the edges of the graph. L-circuits are morphisms in the prop \textrm{Circ}_L. In Proposition 3.2.8 he uses a result of Rosebrugh, Sabadini and Walters to show that \textrm{Circ}_L can be viewed as the coproduct of \textrm{FinCospan} and the free prop on the set L of labels.

Brandon then defines \textrm{Circ} to be the prop \textrm{Circ}_L where L consists of a single element. This example is important, because \textrm{Circ} can be seen as the category whose morphisms are circuits made of only perfectly conductive wires! From any morphism in \textrm{Circ} he extracts a cospan of finite sets and then turns the cospan into a corelation. These two processes are functorial, so he gets a method for sending a circuit made of only perfectly conductive wires to a corelation:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel}

There is also a functor

K\colon \textrm{FinCorel} \to \textrm{FinRel}_k

where \textrm{FinRel}_k is the category whose objects are finite dimensional vector spaces and whose morphisms R\colon U\to V are linear relations, that is, linear subspaces R\subseteq U \oplus V. By composing with the above functors H' and H he associates a linear relation R to any circuit made of perfectly conductive wires. On the other hand he gets a subspace for any such circuit by first assigning potential and current to each terminal, and then subjecting these variables to the appropriate physical laws.

It turns out that these two ways of assigning a subspace to a morphism in \textrm{Circ} are the same. So, he calls the linear relation associated to a circuit using the composite KHH' the “behavior” of the circuit and defines the “black-boxing” functor

\blacksquare \colon \textrm{Circ}\to \textrm{FinRel}_k

to be the composite of these:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel} \stackrel{K}{\longrightarrow} \textrm{FinRel}_k

Note that the underlying corelation of a circuit made of perfectly conductive wires completely determines the behavior of the circuit via the functor K.

In Chapter 4 he reinterprets the black-boxing functor \blacksquare as a morphism of props. He does this by introducing the category \textrm{LagRel}_k, whose objects are “symplectic” vector spaces and whose morphisms are “Lagrangian” relations. In Proposition 4.1.6 he proves that the functor K\colon \textrm{FinCorel} \to \textrm{FinRel}_k actually picks out a Lagrangian relation for any corelation and thus determines a morphism of props. So, he redefines K to be this morphism

K\colon \mathrm{FinCorel} \to \mathrm{LagRel}_k

and reinterprets black-boxing as the composite

\mathrm{Circ} \stackrel{H'}{\longrightarrow} \mathrm{FinCospan} \stackrel{H}{\longrightarrow} \mathrm{FinCorel} \stackrel{K}{\longrightarrow} \mathrm{LagRel}_k

After doing all this hard work for circuits made of perfectly conductive wires—a warmup exercise that engineers might scoff at—Brandon shows the power of his results by easily extending the black-boxing functor to circuits with arbitrary label sets in Theorem 4.2.1. He applies this result to a prop whose morphisms are circuits made of resistors, inductors, and capacitors. Then he considers a more general and mathematically more natural approach to linear circuits using the prop \textrm{Circ}_k. The morphisms here are open circuits with wires labelled by elements of some chosen field k. In Theorem 4.2.4 he proves the existence of a morphism of props

\blacksquare \colon \textrm{Circ}_k \to \textrm{LagRel}_k

that describes the black-boxing of circuits built from arbitrary linear components.

Brandon then picks up where Jason Erbele’s thesis left off, and recalls how control theorists use “signal-flow diagrams” to draw linear relations. These diagrams make up the category \textrm{SigFlow}_k, which is the free prop generated by the same generators as \textrm{FinRel}_k. Similarly he defines the prop \widetilde{\mathrm{Circ}}_k as the free prop generated by the same generators as \textrm{Circ}_k. Then there is a strict symmetric monoidal functor T\colon \widetilde{\mathrm{Circ}}_k \to \textrm{SigFlow}_k giving a commutative square:

Of course, circuits made of perfectly conductive wires are a special case of linear circuits. We can express this fact using another commutative square:

Combining the diagrams so far, Brandon gets a commutative diagram summarizing the relationship between linear circuits, cospans, corelations, and signal-flow diagrams:

Brandon concludes Chapter 4 by extending his work to circuits with voltage and current sources. These types of circuits define affine relations instead of linear relations. The prop framework lets Brandon extend black-boxing to these types of circuits by showing that affine Lagrangian relations are morphisms in a prop \textrm{AffLagRel}_k. This leads to Theorem 4.4.5, which says that for any field k and label set L there is a unique morphism of props

\blacksquare \colon \textrm{Circ}_L \to \textrm{AffLagRel}_k

extending the other black-boxing functor and sending each element of L to an arbitrarily chosen affine Lagrangian relation between potentials and currents.

In Chapter 5, Brandon studies bond graphs as morphisms in a category. His goal is to define a category \textrm{BondGraph}, whose morphisms are bond graphs, and then assign a space of efforts and flows as behavior to any bond graph using a functor. He also constructs a functor that assigns a space of potentials and currents to any bond graph, which agrees with the way that potential and current relate to effort and flow.

The subtle way he defines \textrm{BondGraph} comes from two different approaches to studying bond graphs, and the problems inherent in each approach. The first approach leads him to a subcategory \textrm{FinCorel}^\circ of \textrm{FinCorel}, while the second leads him to a subcategory \textrm{LagRel}_k^\circ of \textrm{LagRel}_k. There isn’t a commutative square relating these four categories, but Brandon obtains a pentagon that commutes up to a natural transformation by inventing a new category \textrm{BondGraph}:

This category is a way of formalizing Paynter’s idea of bond graphs.

In his first approach, Brandon views a bond graph as an electrical circuit. He takes advantage of his earlier work on circuits and corelations by taking \textrm{FinCorel} to be the category whose morphisms are circuits made of perfectly conductive wires. In this approach a terminal is the object 1 and a wire is the identity corelation from 1 to 1, while a circuit from m terminals to n terminals is a corelation from m to n.

In this approach Brandon thinks of a port as the object 2, since a port is a pair of nodes. Then he thinks of a bond as a pair of wires and hence the identity corelation from 2 to 2. Lastly, the two junctions are two different ways of connecting ports together, and thus specific corelations from 2m to 2n. It turns out that by following these ideas he can equip the object 2 with two different Frobenius monoid structures, which behave very much like 1-junctions and 0-junctions in bond graphs!

It would be great if the morphisms built from these two Frobenius monoids corresponded perfectly to bond graphs. Unfortunately there are some equations which hold between morphisms made from these Frobenius monoids that do not hold for corresponding bond graphs. So, Brandon defines a category \textrm{FinCorel}^\circ using the morphisms that come from these two Frobenius monoids and moves on to a second attempt at defining \textrm{BondGraph}.

Since bond graphs impose Lagrangian relations between effort and flow, this second approach starts by looking back at \textrm{LagRel}_k. The relations associated to a 1-junction make k\oplus k into yet another Frobenius monoid, while the relations associated to a 0-junction make k\oplus k into a different Frobenius monoid. These two Frobenius monoid structures interact to form a bimonoid! Unfortunately, a bimonoid has some equations between morphisms that do not correspond to equations between bond graphs, so this approach also does not result in morphisms that are bond graphs. Nonetheless, Brandon defines a category \textrm{LagRel}_k^\circ using the two Frobenius monoid structures k\oplus k.

Since it turns out that \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ have corresponding generators, Brandon defines \textrm{BondGraph} as a prop that also has corresponding generators, but with only the equations found in both \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ. By defining \textrm{BondGraph} in this way he automatically gets two functors

F\colon \textrm{BondGraph} \to \textrm{LagRel}_k^\circ

and

G\colon \textrm{BondGraph} \to \textrm{FinCorel}^\circ

The functor F associates effort and flow to a bond graph, while the functor G lets us associate potential and current to a bond graph using the previous work done on \textrm{FinCorel}. Then the Lagrangian subspace relating effort, flow, potential, and current:

\{(V,I,\phi_1,I_1,\phi_2,I_2) | V = \phi_2-\phi_1, I = I_1 = -I_2\}

defines a natural transformation in the following diagram:

Putting this together with the diagram we saw before, Brandon gets a giant diagram which encompasses the relationships between circuits, signal-flow diagrams, bond graphs, and their behaviors in category theoretic terms:

This diagram is a nice quick road map of his thesis. Of course, you need to understand all the categories in this diagram, all the functors, and also their applications to engineering, to fully appreciate what he has accomplished! But his thesis explains that.

To learn more

Coya’s thesis has lots of references, but if you want to see diagrams at work in actual engineering, here are some good textbooks on bond graphs:

• D. C. Karnopp, D. L. Margolis and R. C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

• F. T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, Taylor and Francis, New York, 2007.

and here’s a good one on signal-flow diagrams:

• B. Friedland, Control System Design: An Introduction to State-Space Methods, S. W. Director (ed.), McGraw–Hill Higher Education, 1985.

by John Baez at May 19, 2018 09:32 PM

Tommaso Dorigo - Scientificblogging

Piero Martin At TedX: An Eulogy Of The Error
Living in Padova has its merits. I moved here on January 1st and am enjoying every bit of it. I used to live in Venice, my home town, and commute to Padova on weekdays, but a number of factors led me to decide on this move (not least the fact that I could afford to buy a spacious place close to my office in Padova, while in Venice I was confined to a rented apartment).

read more

by Tommaso Dorigo at May 19, 2018 03:28 PM

May 18, 2018

Clifford V. Johnson - Asymptotia

Make with Me!

Bay Area! You're up next! The Maker Faire is a wonderful event/movement that I've heard about for years and which always struck me as very much in line with my own way of being (making, tinkering, building, creating, as time permits...) On Sunday I'll have the honour of being on one of the centre stages (3:45pm) talking with Kishore Hari (of the podcast Inquiring Minds) about how I made The Dialogues, and why. I might go into some extra detail about my research into making graphic books, and the techniques I used, given the audience. Why yes, I'll sign books for you afterwards, of course. Thanks for asking.

I recommend getting a day pass and seeing a ton of interesting events that day! Here's a link to the Sunday schedule, and from there you can see links to the whole faire and tickets!

-cvj Click to continue reading this post

The post Make with Me! appeared first on Asymptotia.

by Clifford at May 18, 2018 02:48 PM

May 17, 2018

ZapperZ - Physics and Physicists

Noether Theorem And Symmetries
This is not something new that I'm highlighting on this blog. I've mentioned a link to Emmy Noether's theorem before in this post, and also highlighted a history of her work here. However, I think that there is no such thing as too much publicity on Emmy Noether, because she deserves to be remembered and admired through eternity for her accomplishments and insights.

This video tries to explain the significance of her work connecting conservation laws with symmetry principles.



However, I think that if I were a layperson, I'd miss the important point in this video. So here is the takeaway message if you want one:

Everything that we see and every behavior of our universe can be traced to some conservation laws. Each conservation law is a manifestation of some underlying symmetry of our universe.

This is the insight, and a very important insight, that Noether brought to the table, and it was revolutionary to physics. These symmetries are what we currently have as the most fundamental description of the universe that we live in.

Watch this video, and read the links that I gave above, several times if you must, because you owe it to yourself to know about this person and her immense effect on our understanding of our world.

Zz.

by ZapperZ (noreply@blogger.com) at May 17, 2018 03:47 PM

Jester - Resonaances

Proton's weak charge, and what's it for

In the particle world the LHC still attracts the most attention, but in parallel there is ongoing progress at the low-energy frontier. A new episode in that story is the Qweak experiment at Jefferson Lab in the US, which just published its final results.  Qweak was shooting a beam of 1 GeV electrons on a hydrogen (so basically proton) target to determine how the scattering rate depends on the electron's polarization. Electrons and protons interact with each other via the electromagnetic and weak forces. The former is much stronger, but it is parity-invariant, i.e. it does not care about the direction of polarization. On the other hand, since the classic Wu experiment in 1956, the weak force is known to violate parity. Indeed, the Standard Model postulates that the Z boson, which mediates the weak force, couples with different strength to left- and right-handed particles. The resulting asymmetry between the low-energy electron-proton scattering cross sections of left- and right-handed polarized electrons is predicted to be at the 10^-7 level. That has been experimentally observed many times before, but Qweak was able to measure it with the best precision to date (4% relative), and at a lower momentum transfer than the previous experiments.

What is the point of this exercise? Low-energy parity violation experiments are often sold as precision measurements of the so-called Weinberg angle, which is a function of the electroweak gauge couplings - the fundamental parameters of the Standard Model. I don't much like that perspective, because the electroweak couplings, and thus the Weinberg angle, can be more precisely determined from other observables, and Qweak is far from achieving a competing accuracy. The utility of Qweak is better visible in the effective theory picture. At low energies one can parameterize the relevant parity-violating interactions between protons and electrons by a four-fermion contact term whose coefficient is proportional to QW/v^2, where v ≈ 246 GeV and QW is the so-called weak charge of the proton. Such interactions arise thanks to the Z boson in the Standard Model being exchanged between electrons and quarks that make up the proton. At low energies, the exchange diagram is well approximated by the contact term above with QW = 0.0708 (somewhat smaller than the "natural" value QW ~ 1 due to numerical accidents making the Z boson effectively protophobic). The measured polarization asymmetry in electron-proton scattering can be re-interpreted as a determination of the proton weak charge: QW = 0.0719 ± 0.0045, in perfect agreement with the Standard Model prediction.

New physics may affect the magnitude of the proton weak charge in two distinct ways. One is by altering the strength with which the Z boson couples to matter. This happens for example when light quarks mix with their heavier exotic cousins with different quantum numbers, as is often the case in the models from the Randall-Sundrum family. More generally, modified couplings to the Z boson could be a sign of quark compositeness. Another way is by generating new parity-violating contact interactions between electrons and quarks. This can be a result of yet unknown short-range forces which distinguish left- and right-handed electrons. Note that the observation of lepton flavor violation in B-meson decays can be interpreted as a hint for the existence of such forces (although for that purpose the new force carriers do not need to couple to 1st generation quarks). Qweak's measurement puts novel limits on such broad scenarios. Whatever the origin, simple dimensional analysis allows one to estimate the possible change of the proton weak charge as δQW ~ g*^2 v^2 / M*^2, where M* is the mass scale of new particles beyond the Standard Model, and g* is their coupling strength to matter. Thus, Qweak can constrain new weakly coupled particles with masses up to a few TeV, or even 50 TeV particles if they are strongly coupled to matter (g*~4π).
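
As a quick back-of-the-envelope check of those numbers (my own arithmetic, not the experiment's analysis code):

import math

v = 246.0            # GeV, the electroweak scale appearing above
delta_QW = 0.0045    # Qweak's quoted uncertainty on the proton weak charge

def mass_reach_TeV(g_star):
    # Invert delta_QW ~ g*^2 v^2 / M*^2 to get the largest probed M*.
    return g_star * v / math.sqrt(delta_QW) / 1000.0

print(mass_reach_TeV(1.0))           # ~3.7 TeV for weakly coupled new physics
print(mass_reach_TeV(4 * math.pi))   # ~46 TeV for strong coupling, g* ~ 4*pi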

What is the place of Qweak in the larger landscape of precision experiments? One can illustrate it by considering a simple example where heavy new physics modifies only the vector couplings of the Z boson to up and down quarks. The best existing constraints on such a scenario are displayed in this plot:
From the size of the rotten egg region you see that the Z boson couplings to light quarks are currently known with a per-mille accuracy. Somewhat surprisingly, the LEP collider, which back in the 1990s produced tens of millions of Z bosons to precisely study their couplings, is not at all the leader in this field. In fact, better constraints come from precision measurements at very low energies: pion, kaon, and neutron decays, parity-violating transitions in cesium atoms, and the latest Qweak results which make a difference too. The importance of Qweak is even more pronounced in more complex scenarios where the parameter space is multi-dimensional.

Qweak is certainly not the last salvo on the low-energy frontier. Similar but more precise experiments are being prepared as we read (I wish the follow-up were called SuperQweak, or SQweak for short). Who knows, maybe quarks are made of more fundamental building blocks at the scale of ~100 TeV, and we'll first find it out thanks to parity violation at very low energies.

by Mad Hatter (noreply@blogger.com) at May 17, 2018 12:36 PM

ZapperZ - Physics and Physicists

Relativistic Velocity Addition
If I get $1 for every time someone asks me "If I'm moving in a spaceship and I turn on my flash light...."
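
The short answer, for the record (standard special relativity, nothing specific to the video below): collinear velocities combine as w = (u + v)/(1 + uv/c^2), so you can never add your way past c. A quick Python check:

def add_velocities(u, v, c=1.0):
    # Relativistic addition of two collinear velocities (in units of c here).
    return (u + v) / (1.0 + u * v / c**2)

print(add_velocities(0.5, 0.5))   # 0.8, not 1.0
print(add_velocities(0.9, 0.9))   # ~0.994, still below c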

Here's Don Lincoln's lesson on relativistic velocity addition:



Zz.

by ZapperZ (noreply@blogger.com) at May 17, 2018 12:25 PM

May 16, 2018

ZapperZ - Physics and Physicists

RIP David Pines
This is another one of those physicists who are giants in their field, but relatively unknown to the general public.

Renowned condensed matter theorist David Pines passed away on May 3, 2018 at the age of 93. I practically read his text (co-authored by Nozieres) on Fermi Liquid from cover to cover while I was a graduate student. In fact, he was on the cusp of a Nobel Prize when he was working with John Bardeen at UIUC. They published a paper on the electron-phonon interaction in superconductors in 1955, a paper that many thought was the precursor to the subsequent BCS Theory paper in 1957. Unfortunately, he left UIUC, and Bob Schrieffer took over his work on this, which ultimately led to the BCS theory and the Nobel prize.

This did not diminish his body of work throughout his life. He certainly was a main figure during the High-Tc superconductivity craze of the late 80's and 90's. His 1991 PRL paper with Monthoux and Balatsky and the 1992 PRL paper with Monthoux, both on the spin-fluctuation effect as the possible "glue" in the cuprate superconductors, were ground-breaking and highly cited.

His contribution to this body of knowledge will have a lasting impact.

Zz.

by ZapperZ (noreply@blogger.com) at May 16, 2018 10:11 PM

Lubos Motl - string vacua and pheno

Kaggle: reconstruct tracks from 75 GB of point data
Fartel Engelbert has told me that there is a new CERN-sponsored machine learning contest at Kaggle.com:
TrackML Particle Tracking Challenge
To make the story short, the data you will have to download include five 15 GB train files plus a 1 GB train sample and a 1 GB test file. A sample submission has 30 MB; detectors.zip has 175 kB.




Well, readers whose infrastructure is similar to mine have already given up. I don't know what to do with 75 GB. On Windows, there's no trouble to store this much data but I would have to manipulate it with Mathematica and that would clearly be too slow with 75 GB.




On the other hand, I could run a VirtualBox with some Linux, like during the Higgs Kaggle contest, but then I would have to study whether I have to allocate some extra hard disk for the simulated Linux hard disk and face similar problems that I am not experienced with. I just don't want to do that – this dataset is simply too big for me.
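
For what it's worth, one does not have to load the whole 75 GB at once. Here is a minimal Python sketch that streams a single hits file in chunks with pandas; the file name and column names are my assumptions based on the competition's data description, not something I have verified against the actual download:

import numpy as np
import pandas as pd

reader = pd.read_csv("train_1/event000001000-hits.csv", chunksize=100_000)

n_hits, radius_sum = 0, 0.0
for chunk in reader:
    # e.g. accumulate the transverse radius of each hit
    r = np.hypot(chunk["x"].to_numpy(), chunk["y"].to_numpy())
    n_hits += len(chunk)
    radius_sum += r.sum()

print(n_hits, radius_sum / n_hits)   # number of hits and their mean radius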

If such things aren't trouble for you, you should try. In the first phase of the contest – three months are left – you need to design the most accurate algorithm to reconstruct particles' tracks from the points that the huge datasets are composed of.

There will be another part of the contest that starts in the summer where the speed of the calculation will matter.

The leaderboard shows the first contestant among 222 with a score of just 0.46 – so I believe that there's a lot of room for improvement. The preliminary leaderboard is based on some 29% of the data, the final one will be based on the remaining 71% of the data so it may be different.

Most importantly, the prizes are $12k, $8k, $5k (in total, $25k) for the first, second, third place.

Good luck.

by Luboš Motl (noreply@blogger.com) at May 16, 2018 06:15 AM

May 15, 2018

Clifford V. Johnson - Asymptotia

Wild

Here's a little montage of some of the wildflowers beginning to emerge in the garden this season. Some months ago I sprinkled the seeds in a few patches, raked the beds and remembered to keep things moist over the days and weeks that followed. These are some of the results... (Click for a larger view.)

-cvj Click to continue reading this post

The post Wild appeared first on Asymptotia.

by Clifford at May 15, 2018 03:35 PM

CERN Bulletin

News from the Staff Association Executive Committee

On 17 April, the Staff Council proceeded to the election of the Executive Committee of the Staff Association and the members of the Bureau.

First of all, why elect a new Executive Committee in April 2018 after the election of December 2017 (Echo No. 281)? Quite simply because a Crisis Executive Committee with a provisional Bureau had been elected for the period from 1 January to 16 April 2018, with defined and restricted objectives (Echo No. 283).

Therefore, on 17 April, G. Roy presented for election a list of 12 persons, including five members for the Bureau, who agreed to continue their work within the Executive Committee, based on an intensive programme with the following main axes:

  • Crèche and School and in particular the establishment of a foundation;
  • Concertation: review and relaunch of the concertation process;
  • Finalisation of the 2015 five-yearly review;
  • Preparation and start of the 2020 five-yearly review;
  • Actuarial reviews of the Pension Fund and the CHIS;
  • Internal enquiries and justice;
  • Improving the Association’s internal procedures (secretariat, documentation, protection of personal data).

Following the presentation of the list and the programme, the delegates of the Staff Council showed their support for the proposed list and elected the Executive Committee by ballot, with 24 votes in favour and three votes against.

Members elected to the Executive Committee on 17 April 2018

Thank you to the 12 colleagues who agreed to take on responsibility within the Staff Association and stay committed to the CERN personnel.

May 15, 2018 08:05 AM

CERN Bulletin

Science and Sport bringing people together

ASCERI is the Association of the Sports Communities of the European Research Institutes and aims to contribute to a united Europe through regular sports meetings, bringing together members of public Research Institutes at European level. The Association's members come from over 42 Research Institutes spanning 15 countries.

The association was born from the German "Kernforschungszentrum Karlsruhe" (KfK) football team who had the idea to play against other teams from institutes also involved in nuclear research. Therefore, six teams from different German centres were invited to take part in a "Reaktoren Fußballturnier" in Karlsruhe on 2 July 1966.

Ever since, the Winter ATOMIADE has taken place every three years, alternating with the Summer ATOMIADE, with a Mini Atomiade in between, featuring numerous sports and leisure activities including football, skiing, golf, athletics, tennis and volleyball, to name a few. CERN has been a regular participant in these events and even hosted the Mini Atomiade in 2016 (Bulletin No. 28-29/2016).

Since 1989, regular meetings have been held yearly for ASCERI delegates, and this year’s annual ASCERI conference was hosted by JRC-GEEL in Antwerp. For the first time ever a female president, Anne-Françoise Maydew from ESRF Grenoble, was elected, along with a diverse set of board members, including Rachel Bray, the delegate representing CERN.

Along with the election of a new committee and set of board members, one of the bigger topics of the 33rd Annual Conference was the upcoming Summer Atomiade in June, organised by JRC-ISPRA. CERN will be participating with a team of 60, competing in football, tennis, golf, table tennis, athletics, volleyball and cycling.

ASCERI delegates also gave the outgoing President, Henry Koekenberg, and his team a fine send-off on the final evening of the conference.

May 15, 2018 08:05 AM

May 14, 2018

Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, someone has (I don’t know how) found videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists.” (2005, maybe?) These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

  1. Introduction to Cosmology
  2. Dark Matter
  3. Dark Energy
  4. Thermodynamics and the Early Universe
  5. Inflation and Beyond

by Sean Carroll at May 14, 2018 07:09 PM

CERN Bulletin

CERN Relay Race 2018

The CERN running club, in collaboration with the Staff Association, is happy to announce the 2018 relay race edition. It will take place on Thursday, May 24th and will consist, as every year, of a lap around the CERN Meyrin site run by teams of 6. It is a fun event, and you do not have to run fast to enjoy it.

Registrations will be open from May 1st to May 22nd on the running club web site. All information concerning the race and the registration is available there too: http://runningclub.web.cern.ch/content/cern-relay-race.

A video of the previous edition is also available here : http://cern.ch/go/Nk7C.

As every year, there will be entertainment starting at noon on the lawn in front of restaurant 1, and information stands for many CERN associations and clubs will be available. The running club’s partners, namely Berthie Sport, Interfon and Uniqa, will also take part in the event.

May 14, 2018 05:05 PM

CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

May 14, 2018 05:05 PM

CERN Bulletin

GAC-EPA

The GAC organises monthly sessions with individual interviews, held on the last Tuesday of each month, except in July and December.

The next session will be held on:

Tuesday 29 May, from 1.30 pm to 4.00 pm
Staff Association meeting room

The following sessions will take place on Tuesdays 26 June, 28 August, 25 September, 30 October and 27 November 2018.

The Groupement des Anciens’ sessions are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/
Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

May 14, 2018 05:05 PM

May 13, 2018

Tommaso Dorigo - Scientificblogging

Is Dark Matter Lurking In Anomalous Neutron Decays ?
A paper by B. Fornal and B. Grinstein published last week in Physical Review Letters is drawing a lot of interest to one of the most well-known pieces of subnuclear physics since the days of Enrico Fermi: beta decay.

read more

by Tommaso Dorigo at May 13, 2018 02:52 PM

May 12, 2018

John Baez - Azimuth

RChain

guest post by Christian Williams

Mike Stay has been doing some really cool stuff since earning his doctorate. He’s been collaborating with Greg Meredith, who studied the π-calculus with Abramsky, and then conducted impactful research and design in the software industry before some big ideas led him into the new frontier of decentralization. They and a great team are developing RChain, a distributed computing infrastructure based on the reflective higher-order π-calculus, the ρ-calculus.

They’ve made significant progress in the first year, and on April 17-18 they held the RChain Developer Conference in Boulder, Colorado. Just five months ago, the first conference was a handful of people; now this received well over a hundred. Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary, the language is powerful, the architecture is elegant; the ambition is immense and skilled developers are actually bringing it to reality. There’s no need for hype: you’re gonna be hearing about RChain.

Documentation , GitHub , Architecture

Here’s something from the documentation:

The open-source RChain project is building a decentralized, economic, censorship-resistant, public compute infrastructure and blockchain. It will host and execute programs popularly referred to as “smart contracts”. It will be trustworthy, scalable, concurrent, with proof-of-stake consensus and content delivery.

The decentralization movement is ambitious and will provide awesome opportunities for new social and economic interactions. Decentralization also provides a counterbalance to abuses and corruption that occasionally occur in large organizations where power is concentrated. Decentralization supports self-determination and the rights of individuals to self-organize. Of course, the realities of a more decentralized world will also have its challenges and issues, such as how the needs of international law, public good, and compassion will be honored.

We admire the awesome innovations of Bitcoin, Ethereum, and other platforms that have dramatically advanced the state of decentralized systems and ushered in this new age of cryptocurrency and smart contracts. However, we also see symptoms that those projects did not use the best engineering and formal models for scaling and correctness in order to support mission-critical solutions. The ongoing debates about scaling and reliability are symptomatic of foundational architectural issues. For example, is it a scalable design to insist on an explicit serialized processing order for all of a blockchain’s transactions conducted on planet earth?

To become a blockchain solution with industrial-scale utility, RChain must provide content delivery at the scale of Facebook and support transactions at the speed of Visa. After due diligence on the current state of many blockchain projects, after deep collaboration with other blockchain developers, and after understanding their respective roadmaps, we concluded that the current and near-term blockchain architectures cannot meet these requirements. In mid-2016, we resolved to build a better blockchain architecture.

Together with the blockchain industry, we are still at the dawn of this decentralized movement. Now is the time to lay down a solid architectural foundation. The journey ahead for those who share this ambitious vision is as challenging as it is worthwhile, and this document summarizes that vision and how we seek to accomplish it.

We began by admitting the following minimal requirements:

  • Dynamic, responsive, and provably correct smart contracts.
  • Concurrent execution of independent smart contracts.
  • Data separation to reduce unnecessary data replication of otherwise independent tokens and smart contracts.
  • Dynamic and responsive node-to-node communication.
  • Computationally non-intensive consensus/validation protocol.

Building quality software is challenging. It is easier to build “clever” software; however, the resulting software is often of poor quality, riddled with bugs, difficult to maintain, and difficult to evolve. Inheriting and working on such software can be hellish for development teams, not to mention their customers. When building an open-source system to support a mission-critical economy, we reject a minimal-success mindset in favor of end-to-end correctness.

To accomplish the requirements above, our design approach is committed to:

  • A computational model that assumes fine-grained concurrency and dynamic network topology.
  • A composable and dynamic resource addressing scheme.
  • The functional programming paradigm, as it more naturally accommodates distributed and parallel processing.
  • Formally verified, correct-by-construction protocols which leverage model checking and theorem proving.
  • The principles of intension and compositionality.

RChain is light years ahead of the industry. Why? It is upholding the principle of correct by construction with the depth and rigor of mathematics. For years, Mike and Greg have been developing original ideas for distributed computation: in particular, logic as a distributive law is an “algorithm for deriving a spatial-behavioral type system from a formal presentation of a computational calculus.” This is a powerful way to integrate operational semantics into a language, and prove soundness with a single natural transformation; it also provides an extremely expressive query language, with which you could search the entire world to find “code that does x”. Mike’s strong background in higher category theory has enabled the formalization of Greg’s ideas, which he has developed over decades of thinking deeply and comprehensively about the world of computing. Of all of these, there is one concept which is the heart and pulse of RChain, which unifies the system as a rational whole: the ρ-calculus.

So what’s the big idea? First, some history: back in the late 80s, Greg developed a classification of computational models called “the 4 C’s”:

completeness,
compositionality,
(a good notion of)
complexity, and
concurrency.

He found that there was none which had all four, and predicted the existence of one. Just a few years later, Milner invented the π-calculus, and since then it has reigned as the natural language of network computing. It presents a totally different way of thinking: instead of representing sequential instructions for a single machine, the π-calculus is fundamentally concurrent—processes or agents communicate over names or channels, and computation occurs through the interaction of processes. The language is simple yet remarkably powerful; it is deeply connected with game semantics and linear logic, and has become an essential tool in systems engineering and biocomputing: see mobile process calculi for programming the blockchain.


Here is the basic syntax. The variables x,y are names, and P,Q are processes:

P,Q := 0 | (νx)P | x?(y).P | x!(y).P | P|Q

(do nothing | create new x; run P | receive on x and bind to y; run P | send value y on x; run P | run P and Q in parallel)

The computational engine, the basic reduction analogous to beta-reduction of lambda calculus, is the communication rule:

COMM : x!(y).P|x?(z).Q → P|Q[y/z]

(given parallel output and input processes along the same channel, the value is transferred from the output to the input, and is substituted for all occurrences of the input variable in the continuation process)
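
As a concrete toy instance (my example, not from the original post): starting from

x!(v).0 | x?(z).z!(w).0

the COMM rule sends the name v along x and substitutes it for z, leaving

0 | v!(w).0

so the receiver now uses the name it was sent as a channel of its own.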

The definition of a process calculus must specify structural congruence: these express the equivalences between processes—for example, ({P},|,0) forms a commutative monoid.


The π-calculus reforms computation, on the most basic level, to be a cooperative activity. Why is this important? To have a permanently free internet, we have to be able to support it without reliance on centralized powers. This is one of the simplest points, but there are many deeper reasons which I am not yet knowledgeable enough to express. It’s all about the philosophy of openness which is characteristic of applied category theory: historically, we have developed theories and practices which are isolated from each other and the world, and had to fabricate their interrelation and cooperation ad hoc; this leaves us needlessly struggling with inadequate systems, and limits our thought and action.

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. — John Backus, 1977 ACM Turing Award

There have been various mitigations to these kind of problems, but the cognitive limitation remains, and a total renewal is necessary; the π-calculus completely reimagines the nature of computation in society, and opens vast possibility. We can begin to conceive of the many interwoven strata of computation as a coherent, fluid whole. While I was out of my depth in many talks during the conference, I began to appreciate that this was a truly comprehensive innovation: RChain reforms almost every aspect of computing, from the highest abstraction all the way down to the metal. Coalescing the architecture, as mentioned earlier, is the formal calculus as the core guiding principle. There was some concern that because the ρ-calculus is so different from traditional languages, there may be resistance to adoption; but our era is a paradigm shift, the call is for a new way of thinking, and we must adapt.

So why are we using the reflective higher-order π-calculus, the ρ-calculus? Because there’s just one problem with the conventional π-calculus: it presupposes a countably infinite collection of atomic names. These are not only problematic to generate and manage, but the absence of structure is a massive waste. In this regard, the π-calculus was incomplete, until Greg realized that you can “close the loop” with reflection, a powerful form of self-reference:

Code ←→ Data

The mantra is that names are quoted processes; this idea pervades and guides the design of RChain. There is no need to import infinitely many opaque, meaningless symbols—the code itself is nothing but clear, meaningful syntax. If there is an intrinsic method of reference and dereference, or “quoting and unquoting”, code can be turned into data, sent as a message, and then turned back into code; known as “code mobility”, one can communicate big programs as easily as emails. This allows for metaprogramming: on an industrial level, not only people write programs—programs write programs. This is essential to creating a robust virtual infrastructure.

So, how can the π-calculus be made reflective? By solving for the least fixed point of a recursive equation, which parametrizes processes by names:

P[x] = 0 | x?(x).P[x] | x!(P[x]) | P[x]|P[x] | @P[x] | *x

RP = P[RP]

This is reminiscent of how the Y combinator enables recursion by giving the fixed point of any function, Yf = f(Yf). The last two terms of the syntax are reference and dereference, which turn code into data and data into code. Notice that we did not include a continuation for output: the ρ-calculus is asynchronous, meaning that the sender does not get confirmation that the message has been received; this is important for efficient parallel computation and corresponds to polarised linear logic. We adopt the convention that names are output and processes are input. The last two modifications to complete the official ρ-calculus syntax are multi-input and pattern-matching:


P,Q := 0                            null process
     | for(p1←x1,…,pn←xn).P         input guarded process
     | x!(@Q)                       output a name
     | *x                           dereference, evaluate code
     | P|Q                          parallel composition

x,p := @P                           name or quoted process

(each ‘pi’ is a “pattern” or formula to collect terms on channel ‘xi’—this is extremely useful and general, and enables powerful functionality throughout the system)


Simple. Of course, this is not really a programming language yet, though it is more usable than the pure λ-calculus. Rholang, the actual language of RChain, adds some essential features:

ρ-calculus + variables + useful ground terms + new name construction + arithmetic + pattern matching = Rholang

Here’s the specification, syntax and semantics, and a visualization; explore code and examples in GitHub and learn the contract design in the documentation—you can even try coding on rchain.cloud! For those who don’t like clicking all these links, let’s see just one concrete example of a contract, the basic program in Rholang: a process with persistent state, associated code, and associated addresses. This is a Cell, which stores a value until it is accessed or updated:


contract Cell( get, set, state ) = {
 select {
   case rtn <- get; v <- state => {
     rtn!( *v ) | state!( *v ) | Cell( get, set, state ) }
   case newValue <- set; v <- state => {
     state!( *newValue ) | Cell( get, set, state ) }

}}


The parameters are the channels on which the contract communicates. Cell selects from two possibilities: either it is being accessed, i.e. there is data (the return channel) to receive on get, then it outputs on rtn and maintains its state and call; or it is being updated, i.e. there is data (the new value) to receive on set, then it updates state and calls itself again. This shows how the ontology of the language enables natural recursion, and thereby persistent storage: state is Cell’s way of “talking to itself”—since the sequential aspect of Rholang is functional, one “cycles” data to persist. The storage layer uses a similar idea; the semantics may be related to traced monoidal categories.
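
Here is a rough Python analogue of that cycling pattern (my own illustration, not part of RChain): state persists because the cell keeps reading a value off a channel and immediately sending it back.

import queue

def make_cell(initial):
    state = queue.Queue()      # the "state" channel
    state.put(initial)
    def get():
        v = state.get()
        state.put(v)           # re-send the value so the cell persists
        return v
    def update(new_value):
        state.get()            # consume the old value
        state.put(new_value)   # cycle the new one back onto the channel
    return get, update

get, update = make_cell(0)
update(42)
print(get())   # 42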

Curiously, the categorical semantics of the ρ-calculus has proven elusive. There is the general ideology that λ:sequential :: π:concurrent, that the latter is truly fundamental, but the Curry-Howard-Lambek isomorphism has not yet been generalized canonically—though there has been partial success, involving functor-category denotational semantics, linear logic, and session types. Despite its great power and universality, the ρ-calculus remains a bit of a mystery in mathematics: this fact should intrigue anyone who cares about logic, types, and categories as the foundations of abstract thought.

Now, the actual system—the architecture consists of five interwoven layers (all better explained in the documentation):


Storage: based on Special K – “a pattern language for the web.” This layer stores both key-value pairs and continuations through an essential duality of database and query—if you don’t find what you’re looking for, leave what you would have done in its place, and when it arrives, the process will continue. Greg characterizes this idea as the computational equivalent of the law of excluded middle.

[channel, pattern, data, continuation]

RChain has a refined, multidimensional view of resources – compute, memory, storage, and network—and accounts for their production and consumption linearly.
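
A toy Python sketch of that store-the-continuation idea (my own illustration, not RChain's storage code): a consume that finds no matching data parks its continuation at the channel, and a later produce wakes it up.

store = {}   # channel -> {"data": [...], "conts": [...]}

def produce(channel, datum):
    entry = store.setdefault(channel, {"data": [], "conts": []})
    if entry["conts"]:
        entry["conts"].pop(0)(datum)         # wake a waiting continuation
    else:
        entry["data"].append(datum)

def consume(channel, continuation):
    entry = store.setdefault(channel, {"data": [], "conts": []})
    if entry["data"]:
        continuation(entry["data"].pop(0))
    else:
        entry["conts"].append(continuation)  # leave the continuation in place

consume("x", lambda v: print("got", v))   # nothing there yet, so it waits
produce("x", 42)                          # prints "got 42"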

Execution: a Rho Virtual Machine instance is a context for ρ-calculus reduction of storage elements. The entire state of the blockchain is one big Rholang term, which is updated by a transaction: a receive invokes a process which changes key values, and the difference must be verified by consensus. Keys permanently maintain the entire history of state transitions. While currently based on the Java VM, it will be natively hosted.

Namespace: a set of channels, i.e. resources, organized by a logic for access and authority. The primary significance is scalability – a user does not need to deal with the whole chain, only pertinent namespaces. ‘A namespace definition may control the interactions that occur in the space, for example, by specifying: accepted addresses, namespaces, or behavioral types; maximum or minimum data size; or input-output structure.’ These handle nondeterminism of the two basic “race conditions”, contention for resources:

x!(@Q1) | for(ptrn <- x){P} | x!(@Q2)

for(ptrn <- x){P1} | x!(@Q) | for(ptrn <- x){P2}

Contrasted with flat public keys of other blockchains, domains work with DNS and extend them by a compositional tree structure. Each node as a named channel is itself a namespace, and hence definitions can be built up inductively, with precise control.

Consensus: verify partial orders of changes to the one-big-Rholang-term state; the block structure should persist as a directed acyclic graph. The algorithm is Proof of Stake – the capacity to validate in a namespace is tied to the “stake” one holds in it. Greg explains via tangles, and how the complex CASPER protocol works naturally with RChain.

Contracts: ‘An RChain contract is a well-specified, well-behaved, and formally verified program that interacts with other such programs.’ (K Framework) ; (DAO attack) ‘A behavioral type is a property of an object that binds it to a discrete range of action patterns. Behavioral types constrain not only the structure of input and output, but the permitted order of inputs and outputs among communicating and (possibly) concurrent processes under varying conditions… The Rholang behavioral type system will iteratively decorate terms with modal logical operators, which are propositions about the behavior of those terms. Ultimately properties [such as] data information flow, resource access, will be concretized in a type system that can be checked at compile-time. The behavioral type systems Rholang will support make it possible to evaluate collections of contracts against how their code is shaped and how it behaves. As such, Rholang contracts elevate semantics to a type-level vantage point, where we are able to scope how entire protocols can safely interface.’ (LADL)

So what can you build on RChain? Anything.

Decentralized applications: identity, reputation, tokens, timestamping, financial services, content delivery, exchanges, social networks, marketplaces, (decentralized autonomous) organizations, games, oracles, (Ethereum dApps), … new forms of code yet to be imagined. It’s much more than a better internet: RChain is a potential abstract foundation for a rational global society. The system is a minimalist framework of universally principled design; it is a canvas with which we can begin to explore how the world should really work. If we are open and thoughtful, if we care enough, we can learn to do things right.

The project is remarkably unknown for its magnitude, and building widespread adoption may be one of RChain’s greatest challenges. Granted, it is new; but education will not be easy. It’s too big a reformation for a public mindset which thinks of (technological) progress as incrementally better specs or added features; this is conceptual progression, an alien notion to many. That’s why the small but growing community is vital. This article is nothing; I’m clearly unqualified—click links, read papers, watch videos.  The scale of ambition, the depth of insight, the lucidity of design, the unity of theory and practice—it’s something to behold. And it’s real. Mercury will be complete in December. It’s happening, right now, and you can be a part of it. Learn, spread the word. Get involved; join the discussion or even the development—the RChain website has all the resources you need.

40% of the world population lives within 100km of the ocean. Greg pointed out that if we can’t even handle today’s refugee crises, what will possibly happen when the waters rise? At the very least, we desperately need better large-scale coordination systems. Will we make it to the next millennium? We can—just a matter of will.

Thank you for reading. You are great.

by John Baez at May 12, 2018 02:23 AM

John Baez - Azimuth

Applied Category Theory Course: Resource Theories

 

My course on applied category theory is continuing! After a two-week break where the students did exercises, I’m back to lecturing about Fong and Spivak’s book Seven Sketches. Now we’re talking about “resource theories”. Resource theories help us answer questions like these (there is a short formal sketch right after the list):

  1. Given what I have, is it possible to get what I want?
  2. Given what I have, how much will it cost to get what I want?
  3. Given what I have, how long will it take to get what I want?
  4. Given what I have, what is the set of ways to get what I want?
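
Here is the formal sketch promised above – my own paraphrase, so check Seven Sketches for the authoritative definitions. Question 1 is exactly the ordering relation of a symmetric monoidal preorder \((X, \le, I, \otimes)\), where I read \(x \le y\) as “\(y\) can be obtained from \(x\)” (conventions differ):

\[
\begin{aligned}
&\text{reflexivity:}   && x \le x,\\
&\text{transitivity:}  && x \le y \ \text{and}\ y \le z \ \Rightarrow\ x \le z,\\
&\text{monotonicity:}  && x_1 \le y_1 \ \text{and}\ x_2 \le y_2 \ \Rightarrow\ x_1 \otimes x_2 \le y_1 \otimes y_2,\\
&\text{monoid laws:}   && I \otimes x = x, \qquad (x \otimes y) \otimes z = x \otimes (y \otimes z), \qquad x \otimes y = y \otimes x.
\end{aligned}
\]

With \(X\) a set of collections of resources and \(\otimes\) meaning “having both”, “given what I have, can I get what I want?” becomes the question whether \(\text{have} \le \text{want}\); a chemical reaction such as \(2\mathrm{H}_2 \otimes \mathrm{O}_2 \le 2\,\mathrm{H}_2\mathrm{O}\) is a typical generating inequality.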

Resource theories in their modern form were arguably born in these papers:

• Bob Coecke, Tobias Fritz and Robert W. Spekkens, A mathematical theory of resources.

• Tobias Fritz, Resource convertibility and ordered commutative monoids.

We are lucky to have Tobias in our course, helping the discussions along! He’s already posted some articles on resource theory here on this blog:

• Tobias Fritz, Resource convertibility (part 1), Azimuth, 7 April 2015.

• Tobias Fritz, Resource convertibility (part 2), Azimuth, 10 April 2015.

• Tobias Fritz, Resource convertibility (part 3), Azimuth, 13 April 2015.

We’re having fun bouncing between the relatively abstract world of monoidal preorders and their very concrete real-world applications to chemistry, scheduling, manufacturing and other topics. Here are the lectures so far:

Lecture 18 – Chapter 2: Resource Theories
Lecture 19 – Chapter 2: Chemistry and Scheduling
Lecture 20 – Chapter 2: Manufacturing
Lecture 21 – Chapter 2: Monoidal Preorders
Lecture 22 – Chapter 2: Symmetric Monoidal Preorders
Lecture 23 – Chapter 2: Commutative Monoidal Posets
Lecture 24 – Chapter 2: Pricing Resources
Lecture 25 – Chapter 2: Reaction Networks
Lecture 26 – Chapter 2: Monoidal Monotones
Lecture 27 – Chapter 2: Adjoints of Monoidal Monotones
Lecture 28 – Chapter 2: Ignoring Externalities

 

by John Baez at May 12, 2018 01:16 AM

Clifford V. Johnson - Asymptotia

Feynman Centenary

Turns out that 100 years ago today, Richard Feynman was born. His contributions to physics - science in general - are huge, and if you dig a little you'll find lots of discussion about him. His beautiful "Lectures on Physics..." books are deservedly legendary, and I wish that my old Imperial College lecturers had spent more time impressing upon us young impressionable undergraduate minds (c1986) to read those instead of urging us at every opportunity to read the famous "Surely You're Joking..." book, which even back then in my naivety, I began to recognise as partly a physicist's user manual for how to be a jerk to those around you. (I know I'm in the minority on this point...)

But anyway, in honour of the occasion, I give you a full page from my book containing a chat about the Feynman diagram. It's an example of how something that's essentially a cartoon can play a central role in understanding our world (something that's of course, not unknown in cartoons...) Click the image above for an enlarged view.

-cvj Click to continue reading this post

The post Feynman Centenary appeared first on Asymptotia.

by Clifford at May 12, 2018 12:01 AM

May 11, 2018

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.
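
A quick back-of-the-envelope check of that last statement (my numbers, just standard elastic-scattering kinematics): the maximum recoil energy a dark matter particle of mass \(m_\chi\) and velocity \(v\) can deposit on a nucleus of mass \(m_N\) is
\[
E_R^{\rm max} = \frac{2\,\mu_{\chi N}^2\, v^2}{m_N}, \qquad \mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N},
\]
so for \(m_\chi = 100\) MeV scattering on xenon (\(m_N \approx 120\) GeV) with \(v \sim 10^{-3} c\) one finds \(\mu_{\chi N} \approx m_\chi\) and \(E_R^{\rm max} \approx 0.2\) eV, far below the keV-scale thresholds of standard nuclear-recoil searches.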

Sometimes progress consists in realizing that you know nothing Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problem of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPS and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles. 
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space).  Nevertheless, dedicated experiments will soon  be taking over. Recently, two collaborations published first results from their prototype detectors:  one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor.  Both are sensitive to eV energy depositions, thanks to which they can extend the search region to lower dark matter mass regions, and set novel limits in the virgin territory between 0.5 and 5 MeV.  A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chances of finding nothing are immensely larger than those of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

by Mad Hatter (noreply@blogger.com) at May 11, 2018 02:35 PM

May 10, 2018

Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!

by Sean Carroll at May 10, 2018 02:48 PM

May 09, 2018

Jester - Resonaances

Per kaons ad astra
NA62 is a precision experiment at CERN. From their name you wouldn't suspect that they're doing anything noteworthy: the collaboration was running in the contest for the most unimaginative name, only narrowly losing to CMS...  NA62 employs an intense beam of charged kaons to search for the very rare decay K+ → 𝝿+ 𝜈 𝜈. The Standard Model predicts the branching fraction BR(K+ → 𝝿+ 𝜈 𝜈) = 8.4x10^-11 with a small, 10% theoretical uncertainty (precious stuff in the flavor business). The previous measurement by the BNL-E949 experiment reported BR(K+ → 𝝿+ 𝜈 𝜈) = (1.7 ± 1.1)x10^-10, consistent with the Standard Model, but still leaving room for large deviations. NA62 is expected to pinpoint the decay and measure the branching fraction with a 10% accuracy, thus severely constraining new physics contributions. The wires, pipes, and gory details of the analysis were nicely summarized by Tommaso. Let me jump directly to explaining what it is good for from the theory point of view.

To this end it is useful to adopt the effective theory perspective. At a more fundamental level, the decay occurs due to the strange quark inside the kaon undergoing the transformation sbar → dbar 𝜈 𝜈bar. In the Standard Model, the amplitude for that process is dominated by one-loop diagrams with W/Z bosons and heavy quarks. But kaons live at low energies and do not really see the fine details of the loop amplitude. Instead, they effectively see the 4-fermion contact interaction:
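
Schematically (my sketch of the standard result, up to overall conventions and the precise loop function), it has the form
\[
\mathcal{O}_{\rm eff} \sim \frac{1}{\Lambda^2}\,(\bar s_L \gamma^\mu d_L)(\bar \nu_L \gamma_\mu \nu_L),
\qquad
\frac{1}{\Lambda^2} \sim \frac{g^2}{16\pi^2\, M_W^2}\, V_{ts}^* V_{td}.
\]
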
The mass scale suppressing this interaction is quite large, more than 1000 times larger than the W boson mass, which is due to the loop factor and small CKM matrix elements entering the amplitude. The strong suppression is the reason why the K+ → 𝝿+ 𝜈 𝜈  decay is so rare in the first place. The corollary is that even a small new physics effect inducing that effective interaction may dramatically change the branching fraction. Even a particle with a mass as large as 1 PeV coupled to the quarks and leptons with order one strength could produce an observable shift of the decay rate.  In this sense, NA62 is a microscope probing physics down to 10^-20 cm  distances, or up to PeV energies, well beyond the reach of the LHC or other colliders in this century. If the new particle is lighter, say order TeV mass, NA62 can be sensitive to a tiny milli-coupling of that particle to quarks and leptons.

So, from a model-independent perspective, the advantages of studying the K+ → 𝝿+ 𝜈 𝜈 decay are quite clear. A less trivial question is what the future NA62 measurements can teach us about our cherished models of new physics. One interesting application is in the industry of explaining the apparent violation of lepton flavor universality in B → K l+ l- and B → D l 𝜈 decays. Those anomalies involve the 3rd generation bottom quark, thus a priori they do not need to have anything to do with kaon decays. However, many of the existing models introduce flavor symmetries controlling the couplings of the new particles to matter (instead of just ad-hoc interactions to address the anomalies). The flavor symmetries may then relate the couplings of different quark generations, and thus predict correlations between new physics contributions to B meson and to kaon decays. One nice example is illustrated in this plot:

The observable RD(*) parametrizes the preference for B → D 𝜏 𝜈 over similar decays with electrons and muons, and its measurement by the BaBar collaboration deviates from the Standard Model prediction by roughly 3 sigma. The plot shows that, in a model based on U(2)xU(2) flavor symmetry, a significant contribution to RD(*) generically implies a large enhancement of BR(K+ → 𝝿+ 𝜈 𝜈), unless the model parameters are tuned to avoid that. The anomalies in the B → K(*) 𝜇 𝜇 decays can also be correlated with large effects in K+ → 𝝿+ 𝜈 𝜈, see here for an example. Finally, in the presence of new light invisible particles, such as axions, the NA62 observations can be polluted by exotic decay channels, such as K+ → axion 𝝿+.

The  K+ → 𝝿+ 𝜈 𝜈 decay is by no means the magic bullet that will inevitably break the Standard Model.  It should be seen as one piece of a larger puzzle that may or may not provide crucial hints about new physics. For the moment, NA62 has analyzed only a small batch of data collected in 2016, and their error bars are still larger than those of BNL-E949. That should change soon when the 2017  dataset is analyzed. More data will be acquired this year, with 20 signal events expected  before the long LHC shutdown. Simultaneously, another experiment called KOTO studies an even more rare process where neutral kaons undergo the CP-violating decay KL → 𝝿0 𝜈 𝜈,  which probes the imaginary part of the effective operator written above. As I wrote recently, my feeling is that low-energy precision experiments are currently our best hope for a better understanding of fundamental interactions, and I'm glad to see a good pace of progress on this front.

by Mad Hatter (noreply@blogger.com) at May 09, 2018 07:31 PM

Jester - Resonaances

Singularity is now
Artificial intelligence (AI) is entering into our lives.  It's been 20 years now since the watershed moment of Deep Blue versus Garry Kasparov.  Today, people study the games of AlphaGo against itself to get a glimpse of what a superior intelligence would be like. But at the same time AI is getting better in copying human behavior.  Many Apple users have got emotionally attached to Siri. Computers have not only learnt  to drive cars, but also not to slow down when a pedestrian is crossing the road. The progress is very well visible to the bloggers community. Bots commenting under my posts have evolved well past !!!buy!!!viagra!!!cialis!!!hot!!!naked!!!  sort of thing. Now they refer to the topic of the post, drop an informed comment, an interesting remark,  or a relevant question, before pasting a link to a revenge porn website. Sometimes it's really a pity to delete those comments, as they can be more to-the-point than those written by human readers.   

AI is also entering the field of science at an accelerated pace, and particle physics is as usual in the avant-garde. It's not a secret that physics analyses for the LHC papers (even if finally signed by 1000s of humans) are in reality performed by neural networks, which are just beefed up versions of Alexa developed at CERN. The hottest topic in high-energy physics experiment is now machine learning,  where computers teach  humans the optimal way of clustering jets, or telling quarks from gluons. The question is when, not if, AI will become sophisticated enough to perform a creative work of theoreticians. 

It seems that the answer is now.

Some of you might have noticed a certain Alan Irvine, affiliated with the Los Alamos National Laboratory, regularly posting on arXiv single-author theoretical papers on fashionable topics such as the ATLAS diphoton excess, LHCb B-meson anomalies, DAMPE spectral feature, etc. Many of us have received emails from this author requesting citations. Recently I got one myself; it seemed overly polite, but otherwise it didn't differ in relevance or substance from other similar requests. During the last two and a half years, A. Irvine has accumulated a decent h-factor of 18. His papers have been submitted to prestigious journals in the field, such as the PRL, JHEP, or PRD, and some of them were even accepted after revisions. The scandal broke out a week ago when a JHEP editor noticed that the extensive revision, together with a long cover letter, was submitted within 10 seconds of receiving the referee's comments. Upon investigation, it turned out that A. Irvine never worked in Los Alamos, nobody in the field has ever met him in person, and the IP from which the paper was submitted was that of the well-known Ragnarok Thor server. A closer analysis of his past papers showed that, although linguistically and logically correct, they were merely a compilation of equations and text from the previous literature without any original addition.

Incidentally, arXiv administrators have been aware that, for a few years now, all source files in daily hep-ph listings have been downloaded for an unknown purpose by automated bots. When you have excluded the impossible, whatever remains, however improbable, must be the truth. There is no doubt that A. Irvine is an AI bot that was trained on the real hep-ph input to produce genuine-looking particle theory papers.

The works of A. Irvine have been quietly removed from arXiv and journals, but difficult questions remain. What was the purpose of it? Was it a spoof? A parody? A social experiment? A Facebook research project? A Russian provocation?  And how could it pass unnoticed for so long within  the theoretical particle community?  What's most troubling is that, if there was one, there can easily be more. Which other papers on arXiv are written by AI? How can we recognize them?  Should we even try, or maybe the dam is already broken and we have to accept the inevitable?  Is Résonaances written by a real person? How can you be sure that you are real?

Update: obviously, this post is an April Fools' prank. It is absolutely unthinkable that the creative process of writing modern particle theory papers can ever be automatized. Also, the neural network referred to in the LHC papers is nothing like Alexa; it's simply a codename for PhD students.  Finally, I assure you that Résonaances is written by a hum 00105e0 e6b0 343b 9c74 0804 e7bc 0804 e7d5 0804 [core dump]

by Mad Hatter (noreply@blogger.com) at May 09, 2018 07:31 PM

Jester - Resonaances

Where were we?
Last time this blog was active, particle physics was entering a sharp curve. That the infamous 750 GeV resonance had petered out was not a big deal in itself - one expects these things to happen every now and then.  But the lack of any new physics at the LHC when it had already collected a significant chunk of data was a reason to worry. We know that we don't know everything yet about the fundamental interactions, and that there is a deeper layer of reality that needs to be uncovered (at least to explain dark matter, neutrino masses, baryogenesis, inflation, and physics at energies above the Planck scale). For a hundred years, increasing the energy of particle collisions has been the best way to increase our understanding of the basic constituents of nature. However, with nothing at the LHC and the next higher energy collider decades away, a feeling was growing that the progress might stall.

In this respect, nothing much has changed during the time when the blog was dormant, except that these sentiments are now firmly established. Crisis is no longer a whispered word, but it's openly discussed in corridors, on blogs, on arXiv, and in color magazines.  The clear message from the LHC is that the dominant paradigms about the physics at the weak scale were completely misguided. The Standard Model seems to be a perfect effective theory at least up to a few TeV, and there is no indication at what energy scale new particles have to show up. While everyone goes through the five stages of grief at their own pace, my impression is that most are already well past the denial. The open question is what should be the next steps to make sure that exploration of fundamental interactions will not halt. 

One possible reaction to a crisis is more of the same.  Historically, such an approach has often been efficient, for example it worked for a long time in the case of the Soviet economy. In our case one could easily go on with more models, more epicycles, more parameter space, more speculations.  But the driving force for this whole SusyWarpedCompositeStringBlackHairyHole enterprise has always been the (small but still) possibility of being vindicated by the LHC. Without serious prospects of experimental verification, model building is reduced to intellectual gymnastics that can hardly stir the imagination.  Thus business-as-usual is not an option in the long run: it couldn't elicit any enthusiasm among physicists or the public, it wouldn't attract new bright students, and thus it would be a straight path to irrelevance.

So, particle physics has to change. On the experimental side we will inevitably see, just for economical reasons, less focus on high-energy colliders and more on smaller experiments. Theoretical particle physics will also have to evolve to remain relevant.  Certainly, the emphasis needs to be shifted away from empty speculations in favor of more solid research. I don't pretend to know all the answers or have a clear vision of the optimal strategy, but I see three promising directions.

One is astrophysics, where there are much better prospects of experimental progress.  The cosmos is a natural collider that is constantly testing fundamental interactions independently of current fashions or funding agencies.  This gives us an opportunity to learn more about dark matter and neutrinos, and also about various hypothetical particles like axions or milli-charged matter. The most recent story of the 21cm absorption signal shows that there are still treasure troves of data waiting for us out there. Moreover, new observational windows keep opening up, as recently illustrated by the nascent gravitational wave astronomy. This avenue is of course a no-brainer, already explored for a long time by particle theorists, but I expect it will further gain in importance in the coming years.

Another direction is precision physics. This, also, has been an integral part of particle physics research for quite some time, but it should grow in relevance. The point is that one can probe very heavy particles, often beyond the reach of present colliders,  by precisely measuring low-energy observables. In the most spectacular example, studying proton decay may give insight into new particles with masses of order 10^16 GeV - unlikely to be ever attainable directly. There is a whole array of observables that can probe new physics well beyond the direct LHC reach: a myriad of rare flavor processes, electric dipole moments of the electron and neutron, atomic parity violation, neutrino scattering,  and so on. This road may be long and tedious but it is bound to succeed: at some point some experiment somewhere must observe a phenomenon that does not fit into the Standard Model. If we're very lucky, it  may be that the anomalies currently observed by the LHCb in certain rare B-meson decays are already the first harbingers of a breakdown of the Standard Model at higher energies.

Finally, I should mention formal theoretical developments. The naturalness problem of the cosmological constant and of the Higgs mass may suggest some fundamental misunderstanding of quantum field theory on our part. Perhaps this should not be too surprising.  In many ways we have reached an amazing proficiency in QFT when applied to certain precision observables or even to LHC processes. Yet at the same time QFT is often used and taught in the same way as magic in Hogwarts: mechanically,  blindly following prescriptions from old dusty books, without a deeper understanding of the sense and meaning.  Recent years have seen a brisk development of alternative approaches: a revival of the old S-matrix techniques, new amplitude calculation methods based on recursion relations, but also complete reformulations of the QFT basics demoting the sacred cows like fields, Lagrangians, and gauge symmetry. Theory alone rarely leads to progress, but it may help to make more sense of the data we already have. Could better understanding or complete reformulating of QFT bring new answers to the old questions? I think that is  not impossible. 

All in all, there are good reasons to worry, but also tons of new data in store and lots of fascinating questions to answer.  How will the B-meson anomalies pan out? What shall we do after we hit the neutrino floor? Will the 21cm observations allow us to understand what dark matter is? Will China build a 100 TeV collider? Or maybe a radio telescope on the Moon instead?  Are experimentalists still needed now that we have machine learning? How will physics change with the centre of gravity moving to Asia?  I will tell you my take on these and other questions and highlight old and new ideas that could help us understand nature better.  Let's see how far I'll get this time ;)

by Mad Hatter (noreply@blogger.com) at May 09, 2018 07:31 PM

April 30, 2018

Tommaso Dorigo - Scientificblogging

Jupiter Last Friday
Visual observation of the planets of our solar system has always been an appealing pastime for amateur astronomers, but the digital era has taken away a little bit of the glamour from this activity. Until 30 years ago you could spot with your eye more detail than was within reach of normal photography even for large telescopes, so amateur astronomers could contribute to planetary science by producing detailed drawings of the surface of Jupiter, Saturn, Venus, and Mars.

read more

by Tommaso Dorigo at April 30, 2018 06:06 PM

Tommaso Dorigo - Scientificblogging

Guest Post: Tony Smith: One Or Three Higgs Bosons ?
Frank D. Smith (Tony Smith for his friends) has been following this blog since the beginning. He is an independent researcher who is very interested in phenomena connected with the top quark and the Higgs boson. He has a theory of his own and he has been trying to check whether LHC data is compatible or not with it. His ideas are reported here as a guest post, as a tribute to his faithfulness to this site. Of course the views expressed below are his own, as I retain a healthy dose of scepticism to any bit of new physics apparent in today's data... Also, I will comment in the thread below to inform the reader of what my ideas are on his interpretation of public LHC results.

read more

by Tommaso Dorigo at April 30, 2018 02:50 PM

April 26, 2018

Tommaso Dorigo - Scientificblogging

Correct Blitz Chess - A Nice Miniature
Playing chess games flawlessly is a super-human endeavour, which even machines are still having a hard time achieving. However, the occasional flawless game does arise in human practice, albeit rarely. Usually it is a grandmaster who pulls it off. The absence of sub-optimal moves can be ascertained by extensive computer analysis these days, so the quality of the moves is not in question. 

read more

by Tommaso Dorigo at April 26, 2018 12:00 PM

April 24, 2018

Lubos Motl - string vacua and pheno

It's wrong to summarize the multiverse as "left-wing"
And Keating's proposed Nobel prize reforms are left-wing lunacy

Nick has asked whether Brian Keating, the designer of BICEP1 and the author of "Losing the Nobel Prize" (which will be released today), was conservative. At least according to some methodologies, the answer is Yes.



His 50-minute interview in Whiskey Politics, a right-wing podcast, has shown that he had the courage to hang the picture of George W. Bush in his University of California office – where most of his colleagues would prefer to hang Bush himself. Well, he didn't support Trump throughout most of his campaign, however.

He deplored the Che Café at UCSD where lots of taxpayer money is being spent to renovate the business and celebrate the mass killer by drinking coffee (which is a carcinogenic substance according to the Californian law but I guess that Che's café may get an exemption). And Keating has also followed me on Twitter so he can't be too left-wing. ;-)




The interview is sort of amusing – about the group think in the Academia, about Keating's idiosyncratic claims that the Nobel prize will be boycotted and killed (he hates the nomination process, I don't quite get how he wants to pick the candidates instead), against tenure (which he says greatly contributes to the amount of rubbish published by the soft, social scientists). He also gives an introduction to the Cosmic Microwave Background and its polarization and his feelings about his ex-boss and father-like figure Andrew Lange's suicide.




One of the comments he made was that just like the climatological community is pushed in a direction by the left-wing bias (Will Happer talked at the podcast in January), the left-wing group think also penetrates to cosmology – and it manifests itself as the support for the multiverse.

Well, it's not the first time I heard about this identification. I can see some justifications of this identification. But I think that the identification is oversimplified and exaggerated.

Eight years ago, I was invited to the French Riviera for a week. The scholars did things that were considered heretical according to the Academia's group think. So most of the folks were top defenders of the Intelligent Design. Richard Lindzen was there as a leading climate skeptic. And I was there because I was known to be politically incorrect. But it was assumed that I had to have such "right-wing" opinions about cosmology – which means to be against the multiverse.

I didn't really meet those expectations. While I think that the anthropic principle is partly tautological and partly wrong (and lots of papers written to promote it have a very poor quality) – so that it's not useful to say true things about the Universe, at least at this moment – the very existence of the multiverse is a different thing. It seems rather likely – and probably more likely than 50% – that the multiverse is needed to properly understand the initial conditions at the Big Bang in our visible Universe, the vacuum selection, and other things.

Why do I think so? Well, inflation works and explains lots of things. And there are good reasons why a good inflationary theory may be automatically assumed to be eternal, and therefore produce the multiverse. It's a likely additional consequence of a theory in cosmology that seems to pass some tests to be believed to be correct. How could a rational person think that it doesn't matter? On top of that, string theory also has very good reasons to be the correct quantum theory of gravity and all other forces. And string theory seems to imply the landscape as well as the processes needed to change the vacuum of one type into another. An honest, competent, rational person just can't overlook these powerful arguments.

One can discuss the quasi-technical issues of whether or not the evidence for inflationary cosmology itself (or the string theory landscape) is strong or sufficient, whether the theory is natural, whether the most natural types of inflation are eternal, whether one should trust the eternal inflation in other parts of the multiverse that they seem to envision, and other things.

But the experience with the French Riviera and Brian Keating suggests that something more powerful than the rational arguments is deciding inside many folks. Many people apparently decide what to think about the multiverse by identifying the multiverse with some politics – usually left-wing politics. And if they like the left-wing politics, they decide to become the multiverse supporters; if they're not left-wing, they become the critics of the multiverse.

Needless to say, this rule isn't universally valid. There are lots of very left-wing people who are critics of the multiverse; and I am a right-wing example that is "mostly" a supporter of the multiverse. (Well, maybe the correlation between one's being religious and one's being a critic of the multiverse is stronger but it is surely not perfect, either.) But some people on both sides think that it "should be" valid. Why?

I think that the reasoning is just silly.

Whether the multiverse "exists" is a question about the world at the longest possible distance scales and time scales. But at the end, it's really just a question about the "size of the whole world". The multiverse research needs "more advanced, modern insights" but it's not "that different" from the question whether the Earth is flat, whether the Sun is the only star, whether the Milky Way is the only galaxy, or whether the Earth is the only inhabited planet. Even if you care about God's existence or in His holy absence, it's just a technical detail of a sort.

If God could have created (the laws that produced) a round Earth, small planets and large planets, one galaxy and billions of other galaxies, He could have created laws that produce a single patch of the visible Universe, a trillion of patches, googol to the 5th power of patches, or infinitely many patches. What is the problem? I think that you must imagine a very weak, anthropomorphic God if such things are a problem for you.

Years ago, Leonard Susskind promoted the multiverse as a weapon to kill God. Susskind believes that there is no God which is why it's so important to kill Him. ;-) His argument is that God has a good taste and creates pretty, ordered things. To prove that God is dead, just show that the Universe is maximally messy and the multiverse seems šitty enough for that – so that all the šit really looks beautiful to a staunch atheist. OK, Susskind stood on the opposite side than Keating but the underlying logic is equally unscientific and both of them "politicize" a topic that shouldn't be political.

If you look at the structural character of the argumentation, you could reasonably argue that the right identification is the other one: the multiverse and especially the anthropic principle often build on the kind of arguments that are similar to those by the Christian apologists. The anthropic principle differs from Christianity but both of them look like "some forms of faith". The evidence is really lacking and the belief in the importance of "the size of God" or "the number of intelligent observers' souls" seem to trump any "finite" empirical argument. So maybe this could be a better simplification: the most ambitious versions of the multiverse are on par with religion.

But my primary point is that none of these simplifications is the right starting point to discuss the existence of the multiverse and/or the validity of an inflationary theory. When things are simplified or politicized according to any of these vague templates, the discussion simply invites too many superficial people whose arguments are shallow and who will support any claim whose apparent goal is to strengthen the "politically correct" side of the argument, independently of the quality of the claim. And that's just wrong.

The existence of the multiverse is a deep question but it's still a scientific, in some sense technical question, and no one should be assumed to defend one side of this debate or another just because it's claimed to be correlated with some (known) political or religious opinions of the person. It's the pressure arising from such expectations that is wrong for science; and it's the numerous people's inability to resist the pressure that also hurts proper objective science.

Back to lawsuits against the Nobel committee

At 33:40 of the interview, he discusses a website he founded that is meant to pressure the Nobel committee to reform the prize in some incomprehensible ways, in order to avoid the lawsuits and/or loss of allure, and also to help women and minorities. Holy cow. What does he exactly want, what is the justification, and how is this desire compatible with his being conservative?

Alfred Nobel wrote a will and some folks in his foundation tried to fulfill it. I think it would be very hard to fulfill it literally because Nobel didn't have a terribly good idea about the number of scientists who would exist in 2018, about the size of the relevant teams, and about their complex relationships with each other, with the organizers and sponsors of the scientific enterprises, and about the timescales it takes to complete an experiment or decide about the validity of a theory. If Nobel got familiar with all these things, he could very well agree that what is being done with his Nobel prize in physics is mostly reasonable. Or not.

Can Alfred Nobel sue the Nobel committee? He cannot because he's dead. Can someone else sue the committee on behalf of Alfred Nobel? I don't see how someone else could claim to better understand his intents than the committee that was specifically picked to do such decisions. But even if someone convinced the whole world that the committee deviates from the will in important aspects, what would it be good for? Does Keating really have a system for a better prize? It doesn't seem to be the case. That's an example of a situation that shows why it's so wise for the legal systems to demand the plaintiffs to have some standing. It seems clear that Keating has no standing in a hypothetical lawsuit about the "right way to interpret and fulfill Alfred Nobel's will".

After 42:00, he criticizes people's will to win the Olympic medals – some athletes would agree to die at age of 35 if they won one. Well, that's extreme but it's surely a reflection of a legitimate list of priorities that some people may have. A life that ends at this modest age but includes an Olympic victory may be considered a "better life" than a longer (just twice longer), more ordinary life, by some people. Some people simply are ambitious, some aren't. I think that the ambitions themselves are important for the progress of the mankind. So I don't share Keating's "horror" about it.

He says that the same extreme ambitions also exist in cosmology. Well, he has only provided us with some evidence from sports. But even if similar things exist in cosmology, and they may exist, I don't see anything unacceptable about it, either. Some people want to do great things (and even though the Nobel prize is just an honor, not the "real thing", as Feynman puts it, it's still a great enough thing for many people). This ambition exists independently of the Nobel prize. I think that Keating's logic is defective when he wants to sue the Nobel committee for the fact that some humans have ambitions. The ambitions are a universal constant of the humanity. In between the lines, I think that he is a great example of ambitious people himself.

Also, I understood some of his comments as urging the committee to give the Nobel prizes to everyone who wants it so that they're satisfied (Keating says that too many people fail to get the Nobel LOL). OK, that's a terrible idea (and the comment that "too many people are shut out" sounds like a joke; I literally cannot tell whether he's serious; of course that most people should be "shut out", it's a prestigious prize given at most to 3 physicists a year in a world that has over 7 billion people). I can't believe he's serious. They could bury meritocracy in this straightforward way. That would probably kill the people's interest in the Nobel prize, indeed. This move would actually kill the prize, unlike the real world events that Keating incorrectly predicts to lead to the death of the prize.

But the death of the Nobel prize wouldn't be enough to kill the people's ambitions. These people would naturally set other, more or less equivalent goals (when it comes to their will to shorten their lives), in front of themselves and these goals would arguably be less noble than a Nobel when it comes to the character of the activities that the people would do. And that would be bad for the mankind. One reason why Nobel's will is so useful for the mankind is that it is one of the motivations that makes people do great things such as top science. If you kill that prize, you will reduce the motivation of the average people to do this great stuff – and that's bad! Nobel knew about that effect of a prize and he wanted to encourage people to do great things – one reason was that he felt guilty that the dynamite was going to do some bad things that he needed to compensate.

At 43:30, Keating starts to sound like a generic extreme left-wing fruitcake again. Rosalind Franklin wasn't given the prize for DNA just because of some petty details – she died before they made the decision. How can such an unimportant thing that the candidate is dead affect whether she wins? Honestly, Keating must be joking. Implicitly, he thinks that he's just like Rosalind Franklin which is why he launched this jihad against the Nobel prize. Holy cow.

These are real sour grapes, a textbook example of what they mean. There are very good meritocratic reasons (not just the death) why Franklin hasn't won the prize; and why Keating hasn't won one, either. Even if someone is the deepest thinker in the world, and it could very well be Edward Witten (or late Stephen Hawking) or someone else, there isn't any law of Nature that saying the Nobel prize is a necessary condition for him to be the world's deepest thinker. Unlike Keating and despite his modesty, Edward Witten knows that he may be the world's smartest man even without the "confirmation" from Stockholm. The Nobel prize is just an important prize with its own rules; the rules can't be precisely equivalent to everyone's definition of greatness. Keating seems to blame his colleagues that they have distorted definitions of greatness but it seems to me that Keating is one of the best examples that deserve that criticism of his.

While he's right-wing in some respects, I found his calls to "give the Nobel prize to women, minorities, and everyone who wants it so badly" to be examples of the generic, currently omnipresent, "progressive" insanity. Nobel wanted the prize to go to one physicist a year and the cap was tripled soon. But the cap shouldn't be lifted or loosened (especially not substantially) because the prize would cease to play the positive role it plays.

by Luboš Motl (noreply@blogger.com) at April 24, 2018 07:34 AM

April 22, 2018

Lubos Motl - string vacua and pheno

Brian Keating's Nobel prize obsession surprised me
Brian Keating will release his first book, "Losing the Nobel Prize", on April 24th. I don't own it and I haven't read it. But I was still intrigued by some of the discussions about it.

Backreaction wrote a review and Keating responded.

I used to think that the title was just a trick to emphasize the importance of Keating's work: He has done work that could have led to a Nobel prize but Nature wasn't generous enough, it has seemed for some 3 years. But the two articles linked to in the previous paragraph suggest that Keating is much more obsessed with the Nobel prize. That's ironic because the book seems to say that Keating is not obsessed, and he doesn't even want such a lame prize, but it's his colleagues, the spherical bastards, who are obsessed. ;-)




OK, let me start to react to basic statements by Keating and Hossenfelder. First, Keating designed BICEP1 and lots of us were very excited about BICEP2, an upgraded version of that gadget. It could have seen the primordial gravitational waves. Even though I had theoretical prejudices leading me to believe that those waves should be weak enough so that they shouldn't have been seen, I was impressed by the actual graphs and claims by the BICEP2 collaboration and willing to believe that they really found the waves and proved us wrong (by "us", I mean people around the Weak Gravity Conjecture and related schools of thought).




Keating has clearly designed a nice gadget and he deserves to be considered a top professional in his field. Because that gadget hasn't made a breakthrough that we would still believe to be real and solid, Keating hasn't won any major prize that also requires some collaboration of Mother Nature. He's still a top professional who rightfully earns a regular salary for that work and skills but his big lottery ticket hasn't won so he wasn't given a Nobel prize, an extraordinary donation.

During the excitement about BICEP2, if you told me that the Keating was this obsessed with the Nobel prize, I would have probably been more skeptical about the claims than I was. From my perspective, this obsession looks like a warning. If you really want a Nobel prize, it's natural to think that you make the arguments in favor of your discovery look a little bit clearer than what follows from your cold hard data. I don't really claim that Keating has committed such an "improvement" but I do claim that the expectation value of the "improvement" that I would have believed if I had known about his Nobel prize obsession would be positive and significant.

Keating seems to combine comments about his particular work with some more general criticism of the Nobel prize. Only 1/4 of the Nobel prize winners in physics are theorists; the rest are experimenters and observational people. Keating says that the fraction of theorists should be higher. I agree. He also says that experimenters shouldn't be getting Nobel prizes for things that some theorists outlined before them. I have mixed feelings about that claim – on some days, I would subscribe to that, on others, I wouldn't.

Hossenfelder seems upset about that very statement:
You read that right. No Nobel for the Higgs, no Nobel for B-modes, and no Nobel for a direct discovery of dark matter (should it ever happen), because someone predicted that.
Ms Hossenfelder must have missed it but one of these experimental discoveries has been made, that of the Higgs boson, and the experimenters indeed didn't get any Nobel prize. The 2013 Nobel prize went to Higgs and Englert, two of the theorists who discovered the mechanism and (perhaps) the particle theoretically. There have been several reasons why the experimenters haven't received the award (yet?): the CERN teams are too large, too many people could be said to deserve it (Alfred Nobel's limit is 3 – well, his will actually said 1 but soon afterwards, the number was tripled and another change would seem too radical now). But I think that Keating's thinking has also played a role. CERN has really done something straightforward. They knew what they should see. In my opinion, this makes the contribution by the experimenters less groundbreaking.

In 2017, Weiss, Thorne, and Barish got their experimental Nobel prize for something that was predicted by the theorists – such as Albert Einstein – namely the gravitational waves. But if you look at the justification, they got the prize both for LIGO and the discovery of the gravitational waves. So they were the "first men" who created LIGO and/or made it very powerful. It seems to me that no one who has done something this groundbreaking in particle physics experimentation was a visible member of the teams that discovered the Higgs boson. That discovery was made by a gradual improvement of the collider technology – by a large collective of people.

I think that if the primordial gravitational waves were discovered by BICEP2 and the discovery were confirmed and withstood the tests, Keating would both deserve the Nobel prize and he would get the Nobel prize. Now, some theorists have predicted strong enough primordial gravitational waves. But these waves may also be weak or non-existent. The difference from the Higgs boson is that the Higgs boson was really agreed to be necessarily there by good particle physicists, it was the unique player that makes the \(W_L W_L\to W_L W_L\) scattering unitary. On the other hand, there's no such uniqueness in the case of the primordial gravitational waves and their strength (and similarly in the problem of the identity of the dark matter). When the answers aren't agreed to be unique by the theorists, the experimenters play a much bigger role and they arguably deserve the Nobel prize.

Some people are very upset when Keating (or I) point out that the confirmation of a theory by an experiment – when the experimenter already knows what to look for – is less spectacular. For example:
naivetheorist said: Keating writes: "I am advocating that more theorists should win it, and experimentalists should not win it if they/we merely confirm a theory". Merely? that's an incredibly condescending attitude. Keating's rather lame response' affirms my decision to cancel my order for his book.
The attitude may look condescending but there are very good reasons for this "condescension". The Nobel prize simply is meant to reward the original contribution and when someone is just confirming the work (theory) by someone else, this work is more derivative even if the first guy is a theorist and the second guy is an experimenter. It's great that experimenters are confirming or refuting hypotheses formulated by the theorists. But that's merely the scientific "business as usual". Prizes such as the Nobel prize are given for something extraordinary that isn't just "business as usual". One needs to be the really first person to do something – and luck or Nature's cooperation is often needed.

Keating seems to propose some boycotts of the Nobel prize or lawsuits against the Nobel prize. I don't get these comments. The inventor of some explosives got rich and created a system in which his money is invested and some fraction is paid to some people who are chosen as worthy of the award by a committee that Nobel envisioned in his will. It's a private activity. Well, one that has become globally famous, but the global fame is a consequence of the fame of Nobel himself and the winners (plus the money that attracts human eyes), not something that defines the award. Just because the award is followed by many people in the world doesn't mean that these people have the right to change the rules. After all, it's not their money.

As I said, the Nobel prize could be "better" according to many of us – and a higher percentage of the theorists could be a part of this "improvement". But this discussion is detached from reality. The Nobel prize is whatever it is. Alfred Nobel was a very practical person – explosives are rather practical compounds – and I believe that if he knew the whole list of the winners of his physics prize, he would be surprised by the high percentage of nerds and pure theorists. And maybe he would find it OK. And maybe he would want to increase the number of theorists, too. We don't know. But the prize has some traditional rules and expectations. Theorists only get their prizes for theories that have been experimentally verified – like Higgs and Englert.

The original BICEP2 claims about the very strong gravitational waves seem largely discredited now. This simple fact seems much more important for the question whether BICEP2 should be awarded a Nobel prize or not than some proposals to increase the number of theorists or reduce the number of experimental winners who just confirm predictions by theorists.

Concerning the obsession with the Nobel prizes, well, I think it's normal for the people who get close enough to be eligible to think about the prize. Some of the fathers of QCD knew that they deserved the prize and they were patiently waiting for some 30 years. The winners get some money directly, some extra money indirectly, and they may enjoy life more than they did previously.

I think that the people who work on hep-th and ambitious hep-ph – like string theory and particle physics beyond the Standard Model – must know that according to the current scheme of things, the Nobel prize for their work is unlikely. But that doesn't mean that their work isn't the most valuable thing done in science. The best things in hep-th almost certainly are the most valuable part of science. But things are just arranged in such a way that authors of such ground-breaking theoretical papers haven't gotten a Nobel prize and they're expected not to get it soon, either.

Is that such a problem? I don't think so. The Nobel prize is a distinguished award and – with the exception of the Nobel prize in peace and perhaps literature – it keeps on rewarding people who have done something important and who are usually very smart, too. But the precise criteria that decide who is rewarded are a bit more subtle – the physics prize isn't meant to reward people who are smart and/or made a deep contribution, without additional adjectives. The contributions must be confirmed experimentally because that's how "physics" is defined in the Nobel prize context. So there are rather good reasons why even Stephen Hawking hasn't ever received a Nobel prize although most quantum gravity theorists – and most formal theoretical particle physicists – would agree that his contributions to physics have been greater than those of the average Nobel prize winner. But the Hawking radiation hasn't really been seen. For me, the observation is a formality – I have no real doubts about the existence of the Hawking radiation and other things – but I have no trouble to respect the rules of the game in which these formalities decide about the prize. These are just the rules of the Nobel prize – and those ultimately reflect the rules of the scientific method.

By the way, I think that many people who have been doing similar things as your humble correspondent are often reminded that "they wanted a Nobel prize". It's possible that as a kid, I have independently talked about such things as well but at the end, I think that the obsession with the Nobel prize has primarily been widespread in my (or our) environment, not in my own thinking. The real excitement that underlined some of my important ideas – and even the hopes that one can get much further with these ideas – have had virtually nothing to do with the Nobel prize for over 20 years.



If you look at it rationally, the Nobel prize is just an honor. I actually think that my opinions about these matters – including the importance of the Nobel prize – were largely shaped by Feynman's view above from the moment I first read "Surely You're Joking, Mr. Feynman". And I was 17. Well, the Nobel prize is still a better honor than almost all others. After all, e.g. Richard Feynman, who didn't like honors, was one of those who got that particular honor. ;-) But it's unwise to be obsessed with the selection process and generic winners of that prize. In the end, the decision is made by a smart but imperfect committee, and the prize primarily affects the winners only.

by Luboš Motl (noreply@blogger.com) at April 22, 2018 04:31 PM

April 17, 2018

Lubos Motl - string vacua and pheno

Einstein's amateur popularizer in Florida sketched 10D (stringy) spacetime in 1928
Thanks to Willie Soon, Paul Halpern.

St Petersburg Times, Sunday, November 11th, 1928
Guest blog by John Nations, 3141 Twenty-sixth avenue South, City (St. Petersburg), Nov. 9, 1928



Mr Nations played with glimpses of string theory in 1928 and in that year, Lonnie Johnson recorded "Playing with the strings" about that achievement.

Open forum (on the right side from the picture)
UNDERSTANDING EINSTEIN

Editor The Times:

A lot of people believe that Einstein is as transparent as boiler iron, one able authority estimating roughly that at least eight people in the world understand him.




This should not be considered a disparagement. Those who understand Einstein can easily vindicate themselves by explaining him in "street" terms to those who avoid the subject for the sake of two things, honesty and delicacy. Those who admit that they understand Einstein might choose to tell just what would happen if an Antares should derail and go through the curve where space "curves around." It is beyond the small comprehensive powers of a large group, just what would happen to that great orb should it become entangled in a void of nothingness that isn't even space. When Mr. Einstein declares that space is not infinite but curves around, that settles it for those with broad vision, but not for the great masses who insist upon speculating upon what exists just outside the "curve" where space is claimed to stop.




And as to time being the fourth dimension, a lot of ignorant folks might say that it is as good name for it as anything, but they might also ask about the ninth dimension or the tenth—not yet being reconciled to the fact that there has to be a fourth dimension tucked away somewhere in time, space or music, and figuring that since there is bound to be a fourth dimension there is bound to be a sixteenth dimension, since one is quite as reasonable as the other to their small conception.

[Bold face added by LM for emphasis.]

A concise explanation of Einstein's theory of relativity would doubtless be appreciated by thousands of people, but anyone attempting an explanation should refrain from Einsteinian phraseology—the big crowd doesn't understand that. For instance, in attempting to explain the location and predicament of Antares should that orb break jail and plunge through Einstein's "curve 'around'" it would not be advisable to say: Function measured in speed, amplitude, frequency, infrequency etc.; nor that Antares bumped into fourth dimension and rebounded like a hailstone off a greenhouse.



Maurice Ravel's "Bolero" premiered in 1928. Bolero is from Spanish "bola" (ball, whirling motion).

That might all be to the point but so many could not understand. It would be more tenable, less abstruse, if explained in terms indigenous of the ignorant. Many of the ignorant persist in the belief that time and space are brothers and infinite, and when they are told that either space or time is limited they are sure to ask about what is outside of space or, after time ceases how long such a condition can prevail—it is very difficult to explain those simple little details so that the average man can grasp your meaning.

It is easy to state that Einstein is simple and clear and unerring, and not so difficult to explain him in terms that you do not understand yourself—that is the usual way it is done. It sometimes scares the crowd and makes them envious of your deep insight, but when a poor, dumb fellow who has been too weak to grasp the impossibility of attaching a meaning to your baffling claims, asks you some of these simple questions about what happens after time ends or outside the domains of space, it is comfortable to have a long list of scrambled, incoherent words already prepared to smother him or he will cause you trouble.

JOHN NATIONS,
3141 Twenty-sixth avenue South, City, Nov. 9, 1928


\[
\left(\beta mc^2 + c\sum_{n=1}^{3}\alpha_n p_n\right) \psi(x,t) = i \hbar \frac{\partial\psi(x,t)}{\partial t}
\] The (original) Dirac equation above was also published in 1928. Too bad, Mr Dirac didn't cooperate with Mr Nations. They could have obtained string theory (or the superstring) 40 or 45 years earlier.


LM: Thanks for the nice contribution, John, and sorry for the delay before I published this guest blog. I guess that you are already dead by now and your house seems to have been replaced by a highway. But if I understood you well, you recommend that popularizers of relativity start with plain English but switch to fancy technical language as soon as the audience starts asking questions, even if the speaker doesn't understand the meaning of those fancy words himself, just to cut down on the annoying questions. Clever. ;-)

Concerning the 9 spatial or 10 spacetime dimensions of string theory, it seems that you (or the people who annoyed you with obvious questions) found them as straightforward and as valid as the curved and possibly compact topology of the spacetime according to general relativity. It was a great guess. Indeed, when combined with the entanglement of quantum mechanics, when music of string theory and perhaps Antares are allowed, and when the Woit of nothingness is eliminated, Einstein's general relativity implies string theory with its 10-dimensional spacetime. Did you have a proof or did you guess? I am asking because even now, almost 90 years after your letter, I only have a partial proof of your statement.

Thanks, I will probably need a truly compact spacetime with closed time-like curves to get the answers from you.

by Luboš Motl (noreply@blogger.com) at April 17, 2018 04:24 PM

April 01, 2018

Lubos Motl - string vacua and pheno

Stephen Hawking writes a post-mortem paper
Stephen Hawking had a funeral in Cambridge yesterday. Some 500 people attended. I think that the family members were wise not to completely destroy the body because it could also include the soul. Hours later, the decision had already borne fruit.



Stephen Hawking just posted a new paper to the arXiv:
Imaginary time as a path to resurrection (screenshot)
It's just five pages long but it uses some very hard mathematics, so I haven't had the time to fully comprehend it yet.




The abstract looks simple and intriguing, however:
We exploit the machinery of imaginary time to circumvent any particular point on the temporal real axis. The methodology may also be considered a refined realization of the process called "the resurrection" by the laymen. We describe a successful experiment in which a lifetime started on the anniversary of Galileo Galilei's death was interrupted on Albert Einstein's birthday and, through the complex plane, continued on the anniversary of the resurrection of Jesus Christ.
Is there a reader who understands the contours in the complex plane well enough?




I have some doubts about the applicability of the method. He could have easily continued himself through the contour. After all, the same trick had already been performed by Hawking's Jewish colleague 1985 or 1988 years ago. And no one wants to live forever, anyway. But a more important question is: May it be applied to objects such as food? If you like a particular fried chicken from Kentucky, can you make sure that you eat it as many times as you want?

OK, Hawking managed to be resurrected and write a paper again. But can he walk again? Only if he came again could we admit that his derivation allows the second coming of Stephen Hawking.

by Luboš Motl (noreply@blogger.com) at April 01, 2018 05:52 AM

March 29, 2018

Robert Helling - atdotde

Machine Learning for Physics?!?
Today was the last day of a nice workshop here at the Arnold Sommerfeld Center organised by Thomas Grimm and Sven Krippendorf on the use of Big Data and Machine Learning in string theory. While the former (at this workshop mainly in the form of developments following Kreuzer/Skarke and taking it further for F-theory constructions, orbifolds and the like) appears to be quite advanced as of today, the latter is still in its very early days. At best.

I got the impression that for many physicists who have not yet spent much time with this, deep learning and in particular deep neural networks are expected to be some kind of silver bullet that can answer all kinds of questions that humans have not been able to answer despite some effort. I think this hope is at best premature, and looking at the (admittedly impressive) examples where it works (playing Go, classifying images, speech recognition, event filtering at the LHC), these seem to be more like problems where humans have at least a rough idea how to solve them (if it is not something humans do every day, like understanding text) and also roughly how one would code it, but that are too messy or vague to be treated by a traditional program.

So, during some of the less entertaining talks I sat down and thought about problems where I would expect neural networks to perform badly. And then, if this approach fails even in simpler cases that are fully under control one should maybe curb the expectations for the more complex cases that one would love to have the answer for. In the case of the workshop that would be guessing some topological (discrete) data (that depends very discontinuously on the model parameters). Here a simple problem would be a 2-torus wrapped by two 1-branes. And the computer is supposed to compute the number of matter generations arising from open strings at the intersections, i.e. given two branes (in terms of their slope w.r.t. the cycles of the torus) how often do they intersect? Of course these numbers depend sensitively on the slope (as a real number) as for rational slopes [latex]p/q[/latex] and [latex]m/n[/latex] the intersection number is the absolute value of [latex]pn-qm[/latex]. My guess would be that this is almost impossible to get right for a neural network, let alone the much more complicated variants of this simple problem.
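To make this concrete, here is a rough Python sketch (my own illustration, not code anybody at the workshop actually ran) of how one might generate such training data: pairs of rational slopes and the intersection number [latex]|pn-qm|[/latex] that a network would be asked to learn.

    from fractions import Fraction
    import random

    def intersection_number(s1: Fraction, s2: Fraction) -> int:
        # Two 1-branes on a 2-torus with rational slopes p/q and m/n
        # intersect |p*n - q*m| times (Fraction stores p, q in lowest terms).
        p, q = s1.numerator, s1.denominator
        m, n = s2.numerator, s2.denominator
        return abs(p * n - q * m)

    # Training data: pairs of slopes (as real numbers) and their intersection number.
    # The label jumps wildly under tiny changes of the slopes, which is what makes
    # this hard for a smooth function approximator.
    random.seed(0)
    data = []
    for _ in range(10_000):
        s1 = Fraction(random.randint(-10, 10), random.randint(1, 10))
        s2 = Fraction(random.randint(-10, 10), random.randint(1, 10))
        data.append(((float(s1), float(s2)), intersection_number(s1, s2)))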

Related, but with the possibility for nicer pictures, is the following: Can a neural network learn the shape of the Mandelbrot set? Let me remind those of you who cannot remember the 80ies anymore: for a complex number c you recursively apply the function
[latex]f_c(z)= z^2 +c[/latex]
starting from 0 and ask if this stays bounded (a quick check shows that once you are outside [latex]|z| < 2[/latex] you cannot avoid running to infinity). You color the point c in the complex plane according to the number of times you have to apply f_c to 0 to leave this circle. I decided to do this for complex numbers x+iy in the rectangle -0.74
I have written a small Mathematica program to compute this image. Built into Mathematica is also a neural network: you can feed training data to the function Predict[]; in my case these were 1,000,000 points in this rectangle together with the number of steps it takes to leave the 2-ball. Then Mathematica thinks for about 24 hours and spits out a predictor function. Then you can plot this as well:


There is some similarity, but clearly it has no idea about the fractal nature of the Mandelbrot set. If you really believe in the magic powers of neural networks, you might even hope that once it has learned the function for this rectangle, it could extrapolate to outside this rectangle. Well, at least in this case, this hope is not justified: the neural network thinks the correct continuation looks like this:
Ehm. No.

All this of course comes with the caveat that I am no expert on neural networks and I did not attempt to tune the result in any way. I only took the neural network function built into Mathematica. Maybe, with a bit of coding and TensorFlow, one can do much better. But on the other hand, this is a simple two-dimensional problem. At least for traditional approaches, this should be much simpler than the much higher-dimensional problems the physicists are really interested in.
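For anyone who wants to try to do better, here is a rough Python sketch of the same experiment (the rectangle bounds and network settings are only illustrative, since the original values are not reproduced here, and scikit-learn's MLPRegressor stands in for Mathematica's Predict[]):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def escape_steps(c: complex, max_iter: int = 100) -> int:
        # Iterate f_c(z) = z^2 + c from z = 0 and count the steps until |z| > 2.
        z = 0
        for k in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return k
        return max_iter  # points that never escape get the maximal label

    # Sample points c = x + i y in an illustrative rectangle and label them
    # with their escape count.
    rng = np.random.default_rng(0)
    pts = rng.uniform([-0.74, 0.05], [-0.70, 0.10], size=(50_000, 2))
    labels = np.array([escape_steps(complex(x, y)) for x, y in pts])

    # A small feed-forward network as a stand-in for Mathematica's Predict[].
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
    model.fit(pts, labels)

    # Compare the smooth prediction with the true, fractally varying escape count.
    print(model.predict([[-0.72, 0.07]]), escape_steps(complex(-0.72, 0.07)))

Whichever library one uses, the underlying issue is the same: the labels vary on arbitrarily small scales, which is exactly what a smooth regressor cannot capture.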

by Robert Helling (noreply@blogger.com) at March 29, 2018 07:35 PM

Axel Maas - Looking Inside the Standard Model

Asking questions leads to a change of mind
In this entry, I would like to digress a bit from my usual discussion of our physics research subject. Rather, I would like to talk a bit about how I do this kind of research. There is a twofold motivation for me to do this.

One is that I am currently teaching, together with somebody from the philosophy department, a course on the philosophy of science in physics. It came as a surprise to me that one thing the philosophy students are interested in is how I think: what are the objects, or subjects, and how do I connect them when doing research, or even when I just think about a physics theory. The other is the review I have recently written. Both topics may seem unrelated at first, but there is a deep connection. It is less about what I have written in the review, and more about what led me up to this point. This requires a small historical digression through my own research.

In the very beginning, I started out doing research on the strong interactions. One of the features of the strong interactions is that the supposed elementary particles, quarks and gluons, are never seen separately, but only in combinations, as hadrons. This phenomenon is called confinement. It is always somehow presented as a mystery, and as such, it is interesting. Thus, one question in my early research was how to understand this phenomenon.

Doing that, I came across an interesting result from the 1970ies. It appears that an effect which at first sight is completely unrelated is in fact intimately related to confinement, at least in some theories. This is the Brout-Englert-Higgs effect. However, we seem to observe the particles responsible for and affected by the Higgs effect. And indeed, at that time, I was still thinking that the particles affected by the Brout-Englert-Higgs effect, especially the Higgs and the W and Z bosons, are just ordinary, observable particles. When one reads my first paper of this time on the Higgs, this is quite obvious. But then there was the result from the 1970ies. It stated that, on a very formal level, there should be no difference between confinement and the Brout-Englert-Higgs effect, in a very definite way.

Now, the implications of that seriously sparked my interest. But I thought this would help me understand confinement, as it was still very ingrained in me that confinement is a particular feature of the strong interactions. The mathematical connection I just took as a curiosity. And so I started to do extensive numerical simulations of the situation.

But while trying to do so, things which did not add up started to accumulate. This is probably most evident in a conference proceeding where I tried to make sense of something which, with hindsight, could never be interpreted in the way I did there. I still tried to press the result into the scheme of thinking that the Higgs and the W/Z are physical particles which we observe in experiment, as this is the standard lore. But the data would not fit this picture, and the more and better data I gathered, the more conflicted the results became. At some point, it was clear that something was amiss.

At that point, I had two options: either keep the concepts of confinement and the Brout-Englert-Higgs effect as they have been since the 1960ies, or take the data seriously and assume that these conceptions were wrong. It probably signifies my difficulties that it took me more than a year to come to terms with the results. In the end, the decisive point was that, as a theoretician, I needed to take my theory seriously, no matter the results. There is no way around it. And if it gave a prediction which did not fit my view of the experiments, then necessarily either my view or the theory was incorrect. The latter seemed more improbable than the former, as the theory fits experiment very well. So, finally, I found an explanation which was consistent. And this explanation accepted the curious mathematical statement from the 1970ies that confinement and the Brout-Englert-Higgs effect are qualitatively the same, though not quantitatively. And thus the conclusion was that what we observe are not really the Higgs and the W/Z bosons, but rather some interesting composite objects, just like hadrons, which due to a quirk of the theory behave almost as if they were the elementary particles.

This was still a very challenging thought to me. After all, it was quite contradictory to the usual notions. Thus, it came as a very great relief to me when, during a trip a couple of months later, someone pointed me to a few papers from the early 1980ies, almost forgotten by most, which gave the same answer for a completely different reason. Together with my own observation, this made things click, and everything started to fit together - the 1970ies curiosity, the standard notions, my data. I published that in mid-2012, even though it still lacked some more systematic work. But it still required shifting my thinking from agreement to real understanding. That came in the years that followed.

The important click was to recognize that confinement and the Brout-Englert-Higgs effect are, just as pointed out mathematically in the 1970ies, really just two faces of the same underlying phenomenon. On a very abstract level, essentially all the particles which make up the standard model are really just a means to an end. What we observe are objects which are described by them, but which they are not themselves. They emerge, just like hadrons emerge in the strong interaction, but with very different technical details. This is actually very deeply connected with the concept of gauge symmetry, but this becomes technical quickly. Of course, since this is fundamentally different from the usual way of thinking, it required confirmation. So we went ahead, made predictions which could distinguish between the standard way of thinking and this way of thinking, and tested them. And it came out as we predicted. So, it seems we are on the right track. And all the details, all the if, how, and why, and all the technicalities and math, you can find in the review.

To now come full circle to the starting point: what happened in my mind during this decade is that the way I think about the physical theory I try to describe, the standard model, changed. In the beginning I was thinking in terms of particles and their interactions. Now, very much motivated by gauge symmetry and, not incidentally, by its deeper conceptual challenges, I think differently. I no longer think of the elementary particles as entities in themselves, but rather as auxiliary building blocks of the actually experimentally accessible quantities. The standard 'small-ball' analogy went fully away, and in its place formed, well, hard to say, a new class of entities which does not necessarily have any analogy. Perhaps the best analogy is that of, no, I really do not know how to phrase it. Perhaps at a later time I will come across something. Right now, it is more math than words.

This also transformed the way I think about the original problem, confinement. I am curious where this, and all the rest, will lead. For now, the next step will be to move on from simulations and see whether we can find some way to test this in actual experiments. We have some ideas, but in the end, it may be that present experiments will not be sensitive enough. Stay tuned.

by Axel Maas (noreply@blogger.com) at March 29, 2018 01:09 PM

March 28, 2018

Marco Frasca - The Gauge Connection

Paper with a proof of confinement has been accepted

Recently, I wrote a paper together with Masud Chaichian (see here) containing a mathematical proof of confinement of a non-Abelian gauge theory based on the Kugo-Ojima criterion. This paper underwent an extended review by several colleagues well before its submission. One of them was Taichiro Kugo, one of the discoverers of the confinement criterion, who helped a lot to improve the paper and clarify some points. Then, after a review round of about two months, the paper was accepted in Physics Letters B, one of the most important journals in particle physics.

This paper contains the exact beta function of a Yang-Mills theory. It confirms that confinement arises from the combination of the running coupling and the propagator. This idea had been circulating in some papers in recent years; it emerged as soon as people realized, after extended studies on the lattice, that the propagator by itself was not enough to guarantee confinement.

It is interesting to point out that confinement is rooted in BRST invariance and asymptotic freedom. The Kugo-Ojima confinement criterion permits one to close the argument in a rigorous way, yielding the exact beta function of the theory.

by mfrasca at March 28, 2018 09:34 AM

March 20, 2018

Marco Frasca - The Gauge Connection

Good news from Moriond

Some days ago, the Rencontres de Moriond 2018 ended, with CERN presenting a wealth of results, including some about the Higgs particle. The direction that the two great experiments, ATLAS and CMS, have taken is to improve the measurements of the Standard Model, as no evidence of possible new particles has been seen so far. Also, the studies of the properties of the Higgs particle have been refined as promised, and the news is really striking.

In a communiqué to the public (see here), CERN finally acknowledges, for the first time, a significant discrepancy between the CMS data and the Standard Model for the signal strengths in the Higgs decay channels. They claim a 17% difference. This is what I have advocated for some years and have published in reputable journals. I will discuss this below. Here I would only like to show you the CMS results in the figure below.

ATLAS, for its part, sees a significant discrepancy in the ZZ channel (2\sigma) and a 1\sigma compatibility for the WW channel. Here are their results.

On the left the WW channel is shown and on the right there are the combined \gamma\gamma and ZZ channels.

The reason for the discrepancy is, as I have shown in some papers (see here, here and here), the improper use of perturbation theory to evaluate the Higgs sector. The true propagator of the theory is a sum of Yukawa-like propagators with a harmonic-oscillator spectrum. I solved this sector of the Standard Model exactly. So, when the full propagator is taken into account, the discrepancy is toward an increase of the signal strength. Is it worth a try?
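Schematically (just to make the statement above concrete; the precise weights and masses are those derived in the papers linked), such a propagator has the form
\[
G(p^2) = \sum_{n=0}^{\infty} \frac{B_n}{p^2 - m_n^2 + i\epsilon},
\]
with each term a Yukawa-like pole and the masses m_n forming an evenly spaced, harmonic-oscillator-like tower.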

This means that this is not physics beyond the Standard Model but, rather, the Standard Model in its full glory teaching us something new about quantum field theory. Now we are eager to see the improvements in the data that will come with the new run of the LHC starting now. In the summer conferences we will have reasons to be excited.

by mfrasca at March 20, 2018 09:17 AM

March 17, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Remembering Stephen Hawking

Like many physicists, I woke to some sad news early last Wednesday morning, and to a phoneful of requests from journalists for a soundbite. In fact, although I bumped into Stephen at various conferences, I only had one significant meeting with him – he was intrigued by my research group’s discovery that Einstein once attempted a steady-state model of the universe. It was a slightly scary but very funny meeting during which his famous sense of humour was fully at play.


Yours truly talking steady-state cosmology with Stephen Hawking

I recalled the incident in a radio interview with RTE Radio 1 on Wednesday. As I say in the piece, the first words that appeared on Stephen’s screen were “I knew..” My heart sank as I assumed he was about to say “I knew about that manuscript“. But when I had recovered sufficiently to look again, what Stephen was actually saying was “I knew ..your father”. Phew! You can find the podcast here.


Hawking in conversation with my late father (LHS) and with Ernest Walton (RHS)

RTE TV had a very nice obituary on the Six One News, I have a cameo appearence a few minutes into the piece here.

In my view, few could question Hawking’s brilliant contributions to physics, or his outstanding contribution to the public awareness of science. His legacy also includes the presence of many brilliant young physicists at the University of Cambridge today. However, as I point out in a letter in today’s Irish Times, had Hawking lived in Ireland, he probably would have found it very difficult to acquire government funding for his work. Indeed, he would have found that research into the workings of the universe does not qualify as one of the “strategic research areas” identified by our national funding body, Science Foundation Ireland. I suspect the letter will provoke an angry response from certain quarters, but it is tragically true.

Update

The above notwithstanding, it’s important not to overstate the importance of one scientist. Indeed, today’s Sunday Times contains a good example of the dangers of science history being written by journalists. Discussing Stephen’s 1974 work on black holes, Bryan Appleyard states  “The paper in effect launched the next four decades of cutting edge physics. Odd flowers with odd names bloomed in the garden of cosmic speculation – branes, worldsheets , supersymmetry …. and, strangest of all, the colossal tree of string theory”.

What? String theory, supersymmetry and brane theory are all modern theories of particle physics (the study of the world of the very small). While these theories were used to some extent by Stephen in his research in cosmology (the study of the very large), it is ludicrous to suggest that they were launched by his work.

 

by cormac at March 17, 2018 08:27 PM

March 16, 2018

Sean Carroll - Preposterous Universe

Stephen Hawking’s Scientific Legacy

Stephen Hawking died Wednesday morning, age 76. Plenty of memories and tributes have been written, including these by me:

I can also point to my Story Collider story from a few years ago, about how I turned down a job offer from Hawking, and eventually took lessons from his way of dealing with the world.

Of course Hawking has been mentioned on this blog many times.

When I started writing the above pieces (mostly yesterday, in a bit of a rush), I stumbled across this article I had written several years ago about Hawking’s scientific legacy. It was solicited by a magazine at a time when Hawking was very ill and people thought he would die relatively quickly — it wasn’t the only time people thought that, only to be proven wrong. I’m pretty sure the article was never printed, and I never got paid for it; so here it is!

(If you’re interested in a much better description of Hawking’s scientific legacy by someone who should know, see this article in The Guardian by Roger Penrose.)

Stephen Hawking’s Scientific Legacy

Stephen Hawking is the rare scientist who is also a celebrity and cultural phenomenon. But he is also the rare cultural phenomenon whose celebrity is entirely deserved. His contributions can be characterized very simply: Hawking contributed more to our understanding of gravity than any physicist since Albert Einstein.

“Gravity” is an important word here. For much of Hawking’s career, theoretical physicists as a community were more interested in particle physics and the other forces of nature — electromagnetism and the strong and weak nuclear forces. “Classical” gravity (ignoring the complications of quantum mechanics) had been figured out by Einstein in his theory of general relativity, and “quantum” gravity (creating a quantum version of general relativity) seemed too hard. By applying his prodigious intellect to the most well-known force of nature, Hawking was able to come up with several results that took the wider community completely by surprise.

By acclamation, Hawking’s most important result is the realization that black holes are not completely black — they give off radiation, just like ordinary objects. Before that famous paper, he proved important theorems about black holes and singularities, and afterward studied the universe as a whole. In each phase of his career, his contributions were central.

The Classical Period

While working on his Ph.D. thesis in Cambridge in the mid-1960’s, Hawking became interested in the question of the origin and ultimate fate of the universe. The right tool for investigating this problem is general relativity, Einstein’s theory of space, time, and gravity. According to general relativity, what we perceive as “gravity” is a reflection of the curvature of spacetime. By understanding how that curvature is created by matter and energy, we can predict how the universe evolves. This may be thought of as Hawking’s “classical” period, to contrast classical general relativity with his later investigations in quantum field theory and quantum gravity.

Around the same time, Roger Penrose at Oxford had proven a remarkable result: that according to general relativity, under very broad circumstances, space and time would crash in on themselves to form a singularity. If gravity is the curvature of spacetime, a singularity is a moment in time when that curvature becomes infinitely big. This theorem showed that singularities weren’t just curiosities; they are an important feature of general relativity.

Penrose’s result applied to black holes — regions of spacetime where the gravitational field is so strong that even light cannot escape. Inside a black hole, the singularity lurks in the future. Hawking took Penrose’s idea and turned it around, aiming at the past of our universe. He showed that, under similarly general circumstances, space must have come into existence at a singularity: the Big Bang. Modern cosmologists talk (confusingly) about both the Big Bang “model,” which is the very successful theory that describes the evolution of an expanding universe over billions of years, and also the Big Bang “singularity,” which we still don’t claim to understand.

Hawking then turned his own attention to black holes. Another interesting result by Penrose had shown that it’s possible to extract energy from a rotating black hole, essentially by bleeding off its spin until it’s no longer rotating. Hawking was able to demonstrate that, although you can extract energy, the area of the event horizon surrounding the black hole will always increase in any physical process. This “area theorem” was both important in its own right, and also evocative of a completely separate area of physics: thermodynamics, the study of heat.

Thermodynamics obeys a set of famous laws. For example, the first law tells us that energy is conserved, while the second law tells us that entropy — a measure of the disorderliness of the universe — never decreases for an isolated system. Working with James Bardeen and Brandon Carter, Hawking proposed a set of laws for “black hole mechanics,” in close analogy with thermodynamics. Just as in thermodynamics, the first law of black hole mechanics ensures that energy is conserved. The second law is Hawking’s area theorem, that the area of the event horizon never decreases. In other words, the area of the event horizon of a black hole is very analogous to the entropy of a thermodynamic system — they both tend to increase over time.
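In equations (with c = 1 and suppressing charge terms), the analogy reads
\[
dE = T\,dS + (\text{work terms}) \qquad\longleftrightarrow\qquad dM = \frac{\kappa}{8\pi G}\,dA + \Omega_H\,dJ,
\]
with the horizon's surface gravity κ playing the role of a temperature and the horizon area A playing the role of an entropy.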

Black Hole Evaporation

Hawking and his collaborators were justly proud of the laws of black hole mechanics, but they viewed them as simply a formal analogy, not a literal connection between gravity and thermodynamics. In 1972, a graduate student at Princeton University named Jacob Bekenstein suggested that there was more to it than that. Bekenstein, on the basis of some ingenious thought experiments, suggested that the behavior of black holes isn’t simply like thermodynamics, it actually is thermodynamics. In particular, black holes have entropy.

Like many bold ideas, this one was met with resistance from experts — and at this point, Stephen Hawking was the world’s expert on black holes. Hawking was certainly skeptical, and for good reason. If black hole mechanics is really just a form of thermodynamics, that means black holes have a temperature. And objects that have a temperature emit radiation — the famous “black body radiation” that played a central role in the development of quantum mechanics. So if Bekenstein were right, it would seemingly imply that black holes weren’t really black (although Bekenstein himself didn’t quite go that far).

To address this problem seriously, you need to look beyond general relativity itself, since Einstein’s theory is purely “classical” — it doesn’t incorporate the insights of quantum mechanics. Hawking knew that Russian physicists Alexander Starobinsky and Yakov Zel’dovich had investigated quantum effects in the vicinity of black holes, and had predicted a phenomenon called “superradiance.” Just as Penrose had showed that you could extract energy from a spinning black hole, Starobinsky and Zel’dovich showed that rotating black holes could emit radiation spontaneously via quantum mechanics. Hawking himself was not an expert in the techniques of quantum field theory, which at the time were the province of particle physicists rather than general relativists. But he was a quick study, and threw himself into the difficult task of understanding the quantum aspects of black holes, so that he could find Bekenstein’s mistake.

Instead, he surprised himself, and in the process turned theoretical physics on its head. What Hawking eventually discovered was that Bekenstein was right — black holes do have entropy — and that the extraordinary implications of this idea were actually true — black holes are not completely black. These days we refer to the “Bekenstein-Hawking entropy” of black holes, which emit “Hawking radiation” at their “Hawking temperature.”
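For concreteness, the standard formulas carrying those names are
\[
S_{BH} = \frac{k_B\, c^3 A}{4 G \hbar}, \qquad T_H = \frac{\hbar\, c^3}{8\pi G M k_B},
\]
where A is the horizon area and M the mass of a (non-rotating) black hole.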

There is a nice hand-waving way of understanding Hawking radiation. Quantum mechanics says (among other things) that you can’t pin a system down to a definite classical state; there is always some intrinsic uncertainty in what you will see when you look at it. This is even true for empty space itself — when you look closely enough, what you thought was empty space is really alive with “virtual particles,” constantly popping in and out of existence. Hawking showed that, in the vicinity of a black hole, a pair of virtual particles can be split apart, one falling into the hole and the other escaping as radiation. Amazingly, the infalling particle has a negative energy as measured by an observer outside. The result is that the radiation gradually takes mass away from the black hole — it evaporates.

Hawking’s result had obvious and profound implications for how we think about black holes. Instead of being a cosmic dead end, where matter and energy disappear forever, they are dynamical objects that will eventually evaporate completely. But more importantly for theoretical physics, this discovery raised a question to which we still don’t know the answer: when matter falls into a black hole, and then the black hole radiates away, where does the information go?

If you take an encyclopedia and toss it into a fire, you might think the information contained inside is lost forever. But according to the laws of quantum mechanics, it isn’t really lost at all; if you were able to capture every bit of light and ash that emerged from the fire, in principle you could exactly reconstruct everything that went into it, even the print on the book pages. But black holes, if Hawking’s result is taken at face value, seem to destroy information, at least from the perspective of the outside world. This conundrum is the “black hole information loss puzzle,” and has been nagging at physicists for decades.

In recent years, progress in understanding quantum gravity (at a purely thought-experiment level) has convinced more people that the information really is preserved. In 1997 Hawking made a bet with American physicists Kip Thorne and John Preskill; Hawking and Thorne said that information was destroyed, Preskill said that somehow it was preserved. In 2007 Hawking conceded his end of the bet, admitting that black holes don’t destroy information. However, Thorne has not conceded for his part, and Preskill himself thinks the concession was premature. Black hole radiation and entropy continue to be central guiding principles in our search for a better understanding of quantum gravity.

Quantum Cosmology

Hawking’s work on black hole radiation relied on a mixture of quantum and classical ideas. In his model, the black hole itself was treated classically, according to the rules of general relativity; meanwhile, the virtual particles near the black hole were treated using the rules of quantum mechanics. The ultimate goal of many theoretical physicists is to construct a true theory of quantum gravity, in which spacetime itself would be part of the quantum system.

If there is one place where quantum mechanics and gravity both play a central role, it’s at the origin of the universe itself. And it’s to this question, unsurprisingly, that Hawking devoted the latter part of his career. In doing so, he established the agenda for physicists’ ambitious project of understanding where our universe came from.

In quantum mechanics, a system doesn’t have a position or velocity; its state is described by a “wave function,” which tells us the probability that we would measure a particular position or velocity if we were to observe the system. In 1983, Hawking and James Hartle published a paper entitled simply “Wave Function of the Universe.” They proposed a simple procedure from which — in principle! — the state of the entire universe could be calculated. We don’t know whether the Hartle-Hawking wave function is actually the correct description of the universe. Indeed, because we don’t actually have a full theory of quantum gravity, we don’t even know whether their procedure is sensible. But their paper showed that we could talk about the very beginning of the universe in a scientific way.
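Schematically (the details depend on a theory of quantum gravity we do not yet have), the Hartle-Hawking proposal computes the wave function as a Euclidean path integral over compact four-geometries whose only boundary is the three-geometry being assigned an amplitude:
\[
\Psi[h_{ij}] \;\sim\; \int_{\partial g\, =\, h} \mathcal{D}g\; e^{-I_E[g]/\hbar}.
\]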

Studying the origin of the universe offers the prospect of connecting quantum gravity to observable features of the universe. Cosmologists believe that tiny variations in the density of matter from very early times gradually grew into the distribution of stars and galaxies we observe today. A complete theory of the origin of the universe might be able to predict these variations, and carrying out this program is a major occupation of physicists today. Hawking made a number of contributions to this program, both from his wave function of the universe and in the context of the “inflationary universe” model proposed by Alan Guth.

Simply talking about the origin of the universe is a provocative step. It raises the prospect that science might be able to provide a complete and self-contained description of reality — a prospect that stretches beyond science, into the realms of philosophy and theology. Hawking, always provocative, never shied away from these implications. He was fond of recalling a cosmology conference hosted by the Vatican, at which Pope John Paul II allegedly told the assembled scientists not to inquire into the origin of the universe, “because that was the moment of creation and therefore the work of God.” Admonitions of this sort didn’t slow Hawking down; he lived his life in a tireless pursuit of the most fundamental questions science could tackle.

 

by Sean Carroll at March 16, 2018 11:23 PM

Ben Still - Neutrino Blog

Particle Physics Brick by Brick
It has been a very long time since I last posted and I apologise for that. I have been working the LEGO analogy, as described in the pentaquark series and elsewhere, into a book. The book is called Particle Physics Brick by Brick and the aim is to stretch the LEGO analogy to breaking point while covering as much of the standard model of particle physics as possible. I have had enormous fun writing it and I hope that you will enjoy it as much if you choose to buy it.

It has been available in the UK since September 2017 and you can buy it from Foyles / Waterstones / Blackwell's / AmazonUK where it is receiving ★★★★★ reviews

It is released in the US this Wednesday 21st March 2018 and you can buy it from all good book stores and Amazon.com 

I just wanted to share a few reviews of the book as well because it makes me happy!

Spend a few hours perusing these pages and you'll be in a much better frame of mind to understand your place in the cosmos... The astronomically large objects of the universe are no easier to grasp than the atomically small particles of matter. That's where Ben Still comes in, carrying a box of Legos. A British physicist with a knack for explaining abstract concepts... He starts by matching the weird properties and interactions described by the Standard Model of particle physics with the perfectly ordinary blocks of a collection of Legos. Quarks and leptons, gluons and charms are assigned to various colors and combinations of plastic bricks. Once you've got that system in mind, hang on: Still races off to illustrate the Big Bang, the birth of stars, electromagnetism and all matter of fantastical-sounding phenomenon, like mesons and beta decay. "Given enough plastic bricks, the rules in this book and enough time," Still concludes, "one might imagine that a plastic Universe could be built by us, brick by brick." Remember that the next time you accidentally step on one barefoot.--Ron Charles, The Washington Post

Complex topics explained simply An excellent book. I am Head of Physics at a school and have just ordered 60 copies of this for our L6th students for summer reading before studying the topic on particle physics early next year. Highly recommended. - Ben ★★★★★ AmazonUK

It's beautifully illustrated and very eloquently explains the fundamentals of particle ...
This is a gem of a pop science book. It's beautifully illustrated and very eloquently explains the fundamentals of particle physics without hitting you over the head with quantum field theory and Lagrangian dynamics. The author has done an exceptional job. This is a must have for all students and academics of both physics and applied maths! - Jamie ★★★★★ AmazonUK

by Ben (noreply@blogger.com) at March 16, 2018 09:32 PM

March 02, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Snowbound academics are better academics

Like most people in Ireland, I am working at home today. We got quite a dump of snow in the last two days, and there is no question of going anywhere until the roads clear. Worse, our college closed quite abruptly and I was caught on the hop – there are a lot of things (flash drives, books and papers) sitting smugly in my office that I need for my usual research.


The college on Monday evening

That said, I must admit I’m finding it all quite refreshing. For the first time in years, I have time to read interesting things in my daily email; all those postings from academic listings that I never seem to get time to read normally. I’m enjoying it so much, I wonder how much stuff I miss the rest of the time.


The view from my window as I write this

This morning, I thoroughly enjoyed a paper by Nicholas Campion on the representation of astronomy and cosmology in the works of William Shakespeare. I’ve often wondered about this as Shakespeare lived long enough to know of Galileo’s ground-breaking astronomical observations. However, anyone expecting coded references to new ideas about the universe in Shakespeare’s sonnets and plays will be disappointed; apparently he mainly sticks to classical ideas, with a few vague references to the changing order.

I’m also reading about early attempts to measure the parallax of light from a comet, especially by the great Danish astronomer Tycho Brahe. This paper comes courtesy of the History of Astronomy Discussion Group listings, a really useful resource for anyone interested in the history of astronomy.

While I’m reading all this, I’m also trying to keep abreast of a thoroughly modern debate taking place worldwide, concerning the veracity of an exciting new result in cosmology on the formation of the first stars. It seems a group studying the cosmic microwave background think they have found evidence of a signal representing the absorption of radiation from the first stars. This is exciting enough if correct, but the dramatic part is that the signal is much larger than expected, and one explanation is that this effect may be due to the presence of Dark Matter.

If true, the result would be a major step in our understanding of the formation of stars,  plus a major step in the demonstration of the existence of Dark Matter. However, it’s early days – there are many possible sources of a spurious signal and signals that are larger than expected have a poor history in modern physics! There is a nice article on this in The Guardian, and you can see some of the debate on Peter Coles’s blog In the Dark.  Right or wrong, it’s a good example of how scientific discovery works – if the team can show they have taken all possible spurious results into account, and if other groups find the same result, skepticism will soon be converted into excited acceptance.

All in all, a great day so far. My only concern is that this is the way academia should be – with our day-to-day commitments in teaching and research, it’s easy to forget there is a larger academic world out there.

Update

Of course, the best part is the walk into the village when it finally stops chucking it down. Can’t believe my local pub is open!


Dunmore East in the snow today

 

by cormac at March 02, 2018 01:44 PM

March 01, 2018

Sean Carroll - Preposterous Universe

Dark Matter and the Earliest Stars

So here’s something intriguing: an observational signature from the very first stars in the universe, which formed about 180 million years after the Big Bang (a little over one percent of the current age of the universe). This is exciting all by itself, and well worthy of our attention; getting data about the earliest generation of stars is notoriously difficult, and any morsel of information we can scrounge up is very helpful in putting together a picture of how the universe evolved from a relatively smooth plasma to the lumpy riot of stars and galaxies we see today. (Pop-level writeups at The Guardian and Science News, plus a helpful Twitter thread from Emma Chapman.)

But the intrigue gets kicked up a notch by an additional feature of the new results: the data imply that the cosmic gas surrounding these early stars is quite a bit cooler than we expected. What’s more, there’s a provocative explanation for why this might be the case: the gas might be cooled by interacting with dark matter. That’s quite a bit more speculative, of course, but sensible enough (and grounded in data) that it’s worth taking the possibility seriously.

[Update: skepticism has already been raised about the result. See this comment by Tim Brandt below.]

Illustration: NR Fuller, National Science Foundation

Let’s think about the stars first. We’re not seeing them directly; what we’re actually looking at is the cosmic microwave background (CMB) radiation, from about 380,000 years after the Big Bang. That radiation passes through the cosmic gas spread throughout the universe, occasionally getting absorbed. But when stars first start shining, they can very gently excite the gas around them (the 21cm hyperfine transition, for you experts), which in turn can affect the wavelength of radiation that gets absorbed. This shows up as a tiny distortion in the spectrum of the CMB itself. It’s that distortion which has now been observed, and the exact wavelength at which the distortion appears lets us work out the time at which those earliest stars began to shine.
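As a rough worked example (using the absorption frequency of about 78 MHz reported by the EDGES team, a number not quoted in this post): the 21cm line has a rest-frame frequency of roughly 1420 MHz, so
\[
1+z \;=\; \frac{1420\ \mathrm{MHz}}{78\ \mathrm{MHz}} \;\approx\; 18, \qquad z \approx 17,
\]
which in the standard cosmology corresponds to roughly 180 million years after the Big Bang, the number quoted above.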

Two cool things about this. First, it’s a tour de force bit of observational cosmology by Judd Bowman and collaborators. Not that collecting the data is hard by modern standards (observing the CMB is something we’re good at), but that the researchers were able to account for all of the different ways such a distortion could be produced other than by the first stars. (Contamination by such “foregrounds” is a notoriously tricky problem in CMB observations…) Second, the experiment itself is totally charming. EDGES (Experiment to Detect Global EoR [Epoch of Reionization] Signature) is a small-table-sized gizmo surrounded by a metal mesh, plopped down in a desert in Western Australia. Three cheers for small science!

But we all knew that the first stars had to be somewhen, it was just a matter of when. The surprise is that the spectral distortion is larger than expected (at 3.8 sigma), a sign that the cosmic gas surrounding the stars is colder than expected (and can therefore absorb more radiation). Why would that be the case? It’s not easy to come up with explanations — there are plenty of ways to heat up gas, but it’s not easy to cool it down.

One bold hypothesis is put forward by Rennan Barkana in a companion paper. One way to cool down gas is to have it interact with something even colder. So maybe — cold dark matter? Barkana runs the numbers, given what we know about the density of dark matter, and finds that we could get the requisite amount of cooling with a relatively light dark-matter particle — less than five times the mass of the proton, well less than expected in typical models of Weakly Interacting Massive Particles. But not completely crazy. And not really constrained by current detection limits from underground experiments, which are generally sensitive to higher masses.

The tricky part is figuring out how the dark matter could interact with the ordinary matter to cool it down. Barkana doesn’t propose any specific model, but looks at interactions that depend sharply on the relative velocity of the particles, as v^{-4}. You might get that, for example, if there was an extremely light (perhaps massless) boson mediating the interaction between dark and ordinary matter. There are already tight limits on such things, but not enough to completely squelch the idea.

This is all extraordinarily speculative, but worth keeping an eye on. It will be full employment for particle-physics model-builders, who will be tasked with coming up with full theories that predict the right relic abundance of dark matter, have the right velocity-dependent force between dark and ordinary matter, and are compatible with all other known experimental constraints. It’s worth doing, as currently all of our information about dark matter comes from its gravitational interactions, not its interactions directly with ordinary matter. Any tiny hint of that is worth taking very seriously.

But of course it might all go away. More work will be necessary to verify the observations, and to work out the possible theoretical implications. Such is life at the cutting edge of science!

by Sean Carroll at March 01, 2018 12:00 AM

February 25, 2018

February 08, 2018

Sean Carroll - Preposterous Universe

Why Is There Something, Rather Than Nothing?

A good question!

Or is it?

I’ve talked before about the issue of why the universe exists at all (1, 2), but now I’ve had the opportunity to do a relatively careful job with it, courtesy of Eleanor Knox and Alastair Wilson. They are editing an upcoming volume, the Routledge Companion to the Philosophy of Physics, and asked me to contribute a chapter on this topic. Final edits aren’t done yet, but I’ve decided to put the draft on the arxiv:

Why Is There Something, Rather Than Nothing?
Sean M. Carroll

It seems natural to ask why the universe exists at all. Modern physics suggests that the universe can exist all by itself as a self-contained system, without anything external to create or sustain it. But there might not be an absolute answer to why it exists. I argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts; the universe simply is, without ultimate cause or explanation.

As you can see, my basic tack hasn’t changed: this kind of question might be the kind of thing that doesn’t have a sensible answer. In our everyday lives, it makes sense to ask “why” this or that event occurs, but such questions have answers only because they are embedded in a larger explanatory context. In particular, because the world of our everyday experience is an emergent approximation with an extremely strong arrow of time, such that we can safely associate “causes” with subsequent “effects.” The universe, considered as all of reality (i.e. let’s include the multiverse, if any), isn’t like that. The right question to ask isn’t “Why did this happen?”, but “Could this have happened in accordance with the laws of physics?” As far as the universe and our current knowledge of the laws of physics is concerned, the answer is a resounding “Yes.” The demand for something more — a reason why the universe exists at all — is a relic piece of metaphysical baggage we would be better off to discard.

This perspective gets pushback from two different sides. On the one hand we have theists, who believe that they can answer why the universe exists, and the answer is God. As we all know, this raises the question of why God exists; but aha, say the theists, that’s different, because God necessarily exists, unlike the universe which could plausibly have not. The problem with that is that nothing exists necessarily, so the move is pretty obviously a cheat. I didn’t have a lot of room in the paper to discuss this in detail (in what after all was meant as a contribution to a volume on the philosophy of physics, not the philosophy of religion), but the basic idea is there. Whether or not you want to invoke God, you will be left with certain features of reality that have to be explained by “and that’s just the way it is.” (Theism could possibly offer a better account of the nature of reality than naturalism — that’s a different question — but it doesn’t let you wiggle out of positing some brute facts about what exists.)

The other side are those scientists who think that modern physics explains why the universe exists. It doesn’t! One purported answer — “because Nothing is unstable” — was never even supposed to explain why the universe exists; it was suggested by Frank Wilczek as a way of explaining why there is more matter than antimatter. But any such line of reasoning has to start by assuming a certain set of laws of physics in the first place. Why is there even a universe that obeys those laws? This, I argue, is not a question to which science is ever going to provide a snappy and convincing answer. The right response is “that’s just the way things are.” It’s up to us as a species to cultivate the intellectual maturity to accept that some questions don’t have the kinds of answers that are designed to make us feel satisfied.

by Sean Carroll at February 08, 2018 05:19 PM

February 07, 2018

Axel Maas - Looking Inside the Standard Model

How large is an elementary particle?
Recently, in the context of a master thesis, our group has begun to determine the size of the W boson. The natural questions about this project are: Why do that? Do we not know it already? And do elementary particles have a size at all?

It is best to answer these questions in reverse order.

So, do elementary particles have a size at all? Well, elementary particles are called elementary as they are the most basic constituents. In our theories today, they start out as pointlike. Only particles made from other particles, so-called bound states like a nucleus or a hadron, have a size. And now comes the but.

First of all, we do not yet know whether our elementary particles are really elementary. They may also be bound states of even more elementary particles. But in experiments we can only determine upper bounds to the size. Making better experiments will reduce this upper bound. Eventually, we may see that a particle previously thought of as point-like has a size. This has happened quite frequently over time. It always opened up a new level of elementary particle theories. Therefore measuring the size is important. But for us, as theoreticians, this type of question is only important if we have an idea about what could be the more elementary particles. And while some of our research is going into this direction, this project is not.

The other issue is that quantum effects give all elementary particles an 'apparent' size. This comes about through the way we measure the size of a particle: we shoot some other particle at it and measure how strongly it is deflected. A truly pointlike particle has a very characteristic deflection profile. But quantum effects allow additional particles to be created and destroyed in the vicinity of any particle. In particular, they allow for the existence of another particle of the same type, at least briefly. We cannot distinguish whether we hit the original particle or one of these. Since they are not at the same place as the original particle, their average distance looks like a size. This gives even a pointlike particle an apparent size, which we can measure. In this sense even an elementary particle has a size.

So, how can we then distinguish this apparent size from the actual size of a bound state? We can do this by calculation: we determine the apparent size due to the quantum fluctuations and compare it to the measurement. Deviations indicate an actual size, because on a real bound state we can scatter off any part of its structure, not only its core. This difference looks pictorially like this:


So, do we know the size already? Well, as said, we can only determine upper limits. Searching for them is difficult and often goes via detours. One such detour uses so-called anomalous couplings: measuring how they depend on energy provides indirect information on the size. There is an active program underway at CERN to do this experimentally. The results so far say that the size of the W is below 0.0000000000000001 meters (that is, 10⁻¹⁶ m). This seems tiny, but in the world of particle physics this is not that strong a limit.
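To get a feel for what such a limit means, here is a small toy illustration (my own sketch, not the group's actual analysis): at small momentum transfer Q, scattering off a composite object of root-mean-square radius r deviates from the point-like expectation roughly as F(Q²) ≈ 1 - Q²⟨r²⟩/6. The dipole shape and the radius value below are assumptions, chosen only to show how small the deviation would be for a W right at the quoted bound.

# Toy sketch: deviation of a form factor from the point-like value F = 1,
# assuming a simple dipole shape and an rms radius of 1e-16 m (the quoted bound).

HBARC_M_GEV = 1.97327e-16  # hbar*c in metres*GeV, to convert metres to 1/GeV

def dipole_form_factor(Q2_GeV2, rms_radius_m):
    # Assumed dipole model F(Q^2) = (1 + Q^2 <r^2> / 12)^-2, which reproduces
    # the low-Q^2 expansion F ~ 1 - Q^2 <r^2> / 6; a point-like particle has F = 1.
    r2 = (rms_radius_m / HBARC_M_GEV) ** 2  # <r^2> in GeV^-2
    return (1.0 + Q2_GeV2 * r2 / 12.0) ** -2

for Q2 in (0.1, 1.0, 10.0):  # momentum transfer squared in GeV^2
    print(f"Q^2 = {Q2:5.1f} GeV^2 : F = {dipole_form_factor(Q2, 1e-16):.4f} (point-like: 1.0000)")

At the current bound the deviation from 1 is at the per-cent level even for Q² of a few GeV², which is one way of seeing why the limit is "not that strong" by particle-physics standards.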

And now the interesting question: Why do we do this? As written, we do not want to make the W a bound state of something new. But one of our main research topics is driven by an interesting theoretical structure. If the standard model is taken seriously, the particle which we observe in an experiment and call the W is actually not the W of the underlying theory. Rather, it is a bound state, which is very, very similar to the elementary particle, but actually built from the elementary particles. The difference has been so small that identifying one with the other was a very good approximation up to now. But with better and better experiments this may change. Thus, we need to test this.

Because the thing we measure is then a bound state, it should have a (probably tiny) size. This would be a hallmark of this theoretical structure, and a sign that we have understood it. If the size is such that it could actually be measured at CERN, then this would be an important test of our theoretical understanding of the standard model.

However, this is not a simple quantity to calculate. Bound states are intrinsically complicated, so we use simulations for this purpose. In fact, we take the same detour as the experiments and will determine an anomalous coupling, from which we then infer the size indirectly. In addition, the need to perform efficient simulations forces us to simplify the problem substantially. Hence, we will not get the perfect number, but we may get the order of magnitude, or perhaps be within a factor of two or so. And this is all we currently need in order to say whether a measurement is possible, or whether it will have to wait for the next generation of experiments, and thus whether we will know within a few years or within a few decades whether we have understood the theory.

by Axel Maas (noreply@blogger.com) at February 07, 2018 11:18 AM

February 05, 2018

Matt Strassler - Of Particular Significance

In Memory of Joe Polchinski, the Brane Master

This week, the community of high-energy physicists — of those of us fascinated by particles, fields, strings, black holes, and the universe at large — is mourning the loss of one of the great theoretical physicists of our time, Joe Polchinski. It pains me deeply to write these words.

Everyone who knew him personally will miss his special qualities — his boyish grin, his slightly wicked sense of humor, his charming way of stopping mid-sentence to think deeply, his athleticism and friendly competitiveness. Everyone who knew his research will feel the absence of his particular form of genius, his exceptional insight, his unique combination of abilities, which I’ll try to sketch for you below. Those of us who were lucky enough to know him both personally and scientifically — well, we lose twice.


Polchinski — Joe, to all his colleagues — had one of those brains that works magic, and works magically. Scientific minds are as individual as personalities. Each physicist has a unique combination of talents and skills (and weaknesses); in modern lingo, each of us has a superpower or two. Rarely do you find two scientists who have the same ones.

Joe had several superpowers, and they were really strong. He had a tremendous knack for looking at old problems and seeing them in a new light, often overturning conventional wisdom or restating that wisdom in a new, clearer way. And he had prodigious technical ability, which allowed him to follow difficult calculations all the way to the end, on paths that would have deterred most of us.

One of the greatest privileges of my life was to work with Joe, not once but four times. I think I can best tell you a little about him, and about some of his greatest achievements, through the lens of that unforgettable experience.

[To my colleagues: this post was obviously written in trying circumstances, and it is certainly possible that my memory of distant events is foggy and in error.  I welcome any corrections that you might wish to suggest.]

Our papers between 1999 and 2006 were a sequence of sorts, aimed at understanding more fully the profound connection between quantum field theory — the language of particle physics — and string theory — best-known today as a candidate for a quantum theory of gravity. In each of those papers, as in many thousands of others written after 1995, Joe’s most influential contribution to physics played a central role. This was the discovery of objects known as “D-branes”, which he found in the context of string theory. (The term is a generalization of the word `membrane’.)

I can already hear the polemical haters of string theory screaming at me. ‘A discovery in string theory,’ some will shout, pounding the table, ‘an untested and untestable theory that’s not even wrong, should not be called a discovery in physics.’ Pay them no mind; they’re not even close, as you’ll see by the end of my remarks.

The Great D-scovery

In 1989, Joe, working with two young scientists, Jin Dai and Rob Leigh, was exploring some details of string theory, and carrying out a little mathematical exercise. Normally, in string theory, strings are little lines or loops that are free to move around anywhere they like, much like particles moving around in this room. But in some cases, particles aren’t in fact free to move around; you could, for instance, study particles that are trapped on the surface of a liquid, or trapped in a very thin whisker of metal. With strings, there can be a new type of trapping that particles can’t have — you could perhaps trap one end, or both ends, of the string within a surface, while allowing the middle of the string to move freely. The place where a string’s end may be trapped — whether a point, a line, a surface, or something more exotic in higher dimensions — is what we now call a “D-brane”.  [The `D’ arises for uninteresting technical reasons.]

Joe and his co-workers hit the jackpot, but they didn’t realize it yet. What they discovered, in retrospect, was that D-branes are an automatic feature of string theory. They’re not optional; you can’t choose to study string theories that don’t have them. And they aren’t just surfaces or lines that sit still. They’re physical objects that can roam the world. They have mass and create gravitational effects. They can move around and scatter off each other. They’re just as real, and just as important, as the strings themselves!


Fig. 1: D branes (in green) are physical objects on which a fundamental string (in red) can terminate.

It was as though Joe and his collaborators started off trying to understand why the chicken crossed the road, and ended up discovering the existence of bicycles, cars, trucks, buses, and jet aircraft.  It was that unexpected, and that rich.

And yet, nobody, not even Joe and his colleagues, quite realized what they’d done. Rob Leigh, Joe’s co-author, had the office next to mine for a couple of years, and we wrote five papers together between 1993 and 1995. Yet I think Rob mentioned his work on D-branes to me just once or twice, in passing, and never explained it to me in detail. Their paper had less than twenty citations as 1995 began.

In 1995 the understanding of string theory took a huge leap forward. That was the moment when it was realized that all five known types of string theory are different sides of the same die — that there’s really only one string theory.  A flood of papers appeared in which certain black holes, and generalizations of black holes — black strings, black surfaces, and the like — played a central role. The relations among these were fascinating, but often confusing.

And then, on October 5, 1995, a paper appeared that changed the whole discussion, forever. It was Joe, explaining D-branes to those of us who’d barely heard of his earlier work, and showing that many of these black holes, black strings and black surfaces were actually D-branes in disguise. His paper made everything clearer, simpler, and easier to calculate; it was an immediate hit. By the beginning of 1996 it had 50 citations; twelve months later, the citation count was approaching 300.

So what? Great for string theorists, but without any connection to experiment and the real world.  What good is it to the rest of us? Patience. I’m just getting to that.

What’s it Got to Do With Nature?

Our current understanding of the make-up and workings of the universe is in terms of particles. Material objects are made from atoms, themselves made from electrons orbiting a nucleus; and the nucleus is made from neutrons and protons. We learned in the 1970s that protons and neutrons are themselves made from particles called quarks and antiquarks and gluons — specifically, from a “sea” of gluons and a few quark/anti-quark pairs, within which sit three additional quarks with no anti-quark partner… often called the `valence quarks’.  We call protons and neutrons, and all other particles with three valence quarks, `baryons’.   (Note that there are no particles with just one valence quark, or two, or four — all you get is baryons, with three.)

In the 1950s and 1960s, physicists discovered short-lived particles much like protons and neutrons, with a similar sea, but which  contain one valence quark and one valence anti-quark. Particles of this type are referred to as “mesons”.  I’ve sketched a typical meson and a typical baryon in Figure 2.  (The simplest meson is called a “pion”; it’s the most common particle produced in the proton-proton collisions at the Large Hadron Collider.)

 


Fig. 2: Baryons (such as protons and neutrons) and mesons each contain a sea of gluons and quark-antiquark pairs; baryons have three unpaired “valence” quarks, while mesons have a valence quark and a valence anti-quark.  (What determines whether a quark is valence or sea involves subtle quantum effects, not discussed here.)

But the quark/gluon picture of mesons and baryons, back in the late 1960s, was just an idea, and it was in competition with a proposal that mesons are little strings. These are not, I hasten to add, the “theory of everything” strings that you learn about in Brian Greene’s books, which are a billion billion times smaller than a proton. In a “theory of everything” string theory, often all the types of particles of nature, including electrons, photons and Higgs bosons, are tiny tiny strings. What I’m talking about is a “theory of mesons” string theory, a much less ambitious idea, in which only the mesons are strings.  They’re much larger: just about as long as a proton is wide. That’s small by human standards, but immense compared to theory-of-everything strings.

Why did people think mesons were strings? Because there was experimental evidence for it! (Here’s another example.)  And that evidence didn’t go away after quarks were discovered. Instead, theoretical physicists gradually understood why quarks and gluons might produce mesons that behave a bit like strings. If you spin a meson fast enough (and this can happen by accident in experiments), its valence quark and anti-quark may separate, and the sea of objects between them forms what is called a “flux tube.” See Figure 3. [In certain superconductors, somewhat similar flux tubes can trap magnetic fields.] It’s kind of a thick string rather than a thin one, but still, it shares enough properties with a string in string theory that it can produce experimental results that are similar to string theory’s predictions.


Fig. 3: One reason mesons behave like strings in experiment is that a spinning meson acts like a thick string, with the valence quark and anti-quark at the two ends.
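To make this a bit more quantitative, here is a standard back-of-the-envelope sketch (an idealization, not part of the experimental argument): model the flux tube as a rigid relativistic string of constant tension T and length L whose two ends move at essentially the speed of light. Its energy and angular momentum are then

E = 2 \int_0^{L/2} \frac{T \, dr}{\sqrt{1 - (2r/L)^2}} = \frac{\pi T L}{2},
\qquad
J = 2 \int_0^{L/2} \frac{T \, (2r/L) \, r \, dr}{\sqrt{1 - (2r/L)^2}} = \frac{\pi T L^2}{8},

so that J = E^2 / (2 \pi T): angular momentum grows linearly with mass squared. That linear "Regge trajectory" pattern is exactly what is observed for rapidly spinning mesons, and it is one of the main pieces of the experimental evidence alluded to above.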

And so, from the mid-1970s onward, people were confident that quantum field theories like the one that describes quarks and gluons can create objects with stringy behavior. A number of physicists — including some of the most famous and respected ones — made a bolder, more ambitious claim: that quantum field theory and string theory are profoundly related, in some fundamental way. But they weren’t able to be precise about it; they had strong evidence, but it wasn’t ever entirely clear or convincing.

In particular, there was an important unresolved puzzle. If mesons are strings, then what are baryons? What are protons and neutrons, with their three valence quarks? What do they look like if you spin them quickly? The sketches people drew looked something like Figure 3. A baryon would perhaps become three joined flux tubes (with one possibly much longer than the other two), each with its own valence quark at the end.  In a stringy cartoon, that baryon would be three strings, each with a free end, with the strings attached to some sort of junction. This junction of three strings was called a “baryon vertex.”  If mesons are little strings, the fundamental objects in a string theory, what is the baryon vertex from the string theory point of view?!  Where is it hiding — what is it made of — in the mathematics of string theory?


Fig. 4: A fast-spinning baryon looks vaguely like the letter Y — three valence quarks connected by flux tubes to a “baryon vertex”.  A cartoon of how this would appear from a stringy viewpoint, analogous to Fig. 3, leads to a mystery: what, in string theory, is this vertex?!

[Experts: Notice that the vertex has nothing to do with the quarks. It’s a property of the sea — specifically, of the gluons. Thus, in a world with only gluons — a world whose strings naively form loops without ends — it must still be possible, with sufficient energy, to create a vertex-antivertex pair. Thus field theory predicts that these vertices must exist in closed string theories, though they are linearly confined.]


The baryon puzzle: what is a baryon from the string theory viewpoint?

No one knew. But isn’t it interesting that the most prominent feature of this vertex is that it is a location where a string’s end can be trapped?

Everything changed in the period 1997-2000. Following insights from many other physicists, and using D-branes as the essential tool, Juan Maldacena finally made the connection between quantum field theory and string theory precise. He was able to relate strings with gravity and extra dimensions, which you can read about in Brian Greene’s books, with the physics of particles in just three spatial dimensions, similar to those of the real world, with only non-gravitational forces.  It was soon clear that the most ambitious and radical thinking of the ’70s was correct — that almost every quantum field theory, with its particles and forces, can alternatively be viewed as a string theory. It’s a bit analogous to the way that a painting can be described in English or in Japanese — fields/particles and strings/gravity are, in this context, two very different languages for talking about exactly the same thing.

The saga of the baryon vertex took a turn in May 1998, when Ed Witten showed how a similar vertex appears in Maldacena’s examples. [Note added: I had forgotten that two days after Witten’s paper, David Gross and Hirosi Ooguri submitted a beautiful, wide-ranging paper, whose section on baryons contains many of the same ideas.] Not surprisingly, this vertex was a D-brane — specifically a D-particle, an object on which the strings extending from freely-moving quarks could end. It wasn’t yet quite satisfactory, because the gluons and quarks in Maldacena’s examples roam free and don’t form mesons or baryons. Correspondingly the baryon vertex isn’t really a physical object; if you make one, it quickly diffuses away into nothing. Nevertheless, Witten’s paper made it obvious what was going on. To the extent real-world mesons can be viewed as strings, real-world protons and neutrons can be viewed as strings attached to a D-brane.


The baryon puzzle, resolved.  A baryon is made from three strings and a point-like D-brane. [Note there is yet another viewpoint in which a baryon is something known as a skyrmion, a soliton made from meson fields — but that is an issue for another day.]

It didn’t take long for more realistic examples, with actual baryons, to be found by theorists. I don’t remember who found one first, but I do know that one of the earliest examples showed up in my first paper with Joe, in the year 2000.

 

Working with Joe

That project arose during my September 1999 visit to the KITP (Kavli Institute for Theoretical Physics) in Santa Barbara, where Joe was a faculty member. Some time before that I happened to have studied a field theory (called N=1*) that differed from Maldacena’s examples only slightly, but in which meson-like objects do form. One of the first talks I heard when I arrived at KITP was by Rob Myers, about a weird property of D-branes that he’d discovered. During that talk I made a connection between Myers’ observation and a feature of the N=1* field theory, and I had one of those “aha” moments that physicists live for. I suddenly knew what the string theory that describes the N=1*  field theory must look like.

But for me, the answer was bad news. To work out the details was clearly going to require a very difficult set of calculations, using aspects of string theory about which I knew almost nothing [non-holomorphic curved branes in high-dimensional curved geometry.] The best I could hope to do, if I worked alone, would be to write a conceptual paper with lots of pictures, and far more conjectures than demonstrable facts.

But I was at KITP.  Joe and I had had a good personal rapport for some years, and I knew that we found similar questions exciting. And Joe was the brane-master; he knew everything about D-branes. So I decided my best hope was to persuade Joe to join me. I engaged in a bit of persistent cajoling. Very fortunately for me, it paid off.

I went back to the east coast, and Joe and I went to work. Every week or two Joe would email some research notes with some preliminary calculations in string theory. They had such a high level of technical sophistication, and so few pedagogical details, that I felt like a child; I could barely understand anything he was doing. We made slow progress. Joe did an important warm-up calculation, but I found it really hard to follow. If the warm-up string theory calculation was so complex, had we any hope of solving the full problem?  Even Joe was a little concerned.

And then one day, I received a message that resounded with a triumphant cackle — a sort of “we got ’em!” that anyone who knew Joe will recognize. Through a spectacular trick, he’d figured out how to use his warm-up example to make the full problem easy! Instead of months of work ahead of us, we were essentially done.

From then on, it was great fun! Almost every week had the same pattern. I’d be thinking about a quantum field theory phenomenon that I knew about, one that should be visible from the string viewpoint — such as the baryon vertex. I knew enough about D-branes to develop a heuristic argument about how it should show up. I’d call Joe and tell him about it, and maybe send him a sketch. A few days later, a set of notes would arrive by email, containing a complete calculation verifying the phenomenon. Each calculation was unique, a little gem, involving a distinctive investigation of exotically-shaped D-branes sitting in a curved space. It was breathtaking to witness the speed with which Joe worked, the breadth and depth of his mathematical talent, and his unmatched understanding of these branes.

[Experts: It’s not instantly obvious that the N=1* theory has physical baryons, but it does; you have to choose the right vacuum, where the theory is partially Higgsed and partially confining. Then to infer, from Witten’s work, what the baryon vertex is, you have to understand brane crossings (which I knew about from Hanany-Witten days): Witten’s D5-brane baryon vertex operator creates a  physical baryon vertex in the form of a D3-brane 3-ball, whose boundary is an NS 5-brane 2-sphere located at a point in the usual three dimensions. And finally, a physical baryon is a vertex with n strings that are connected to nearby D5-brane 2-spheres. See chapter VI, sections B, C, and E, of our paper from 2000.]

Throughout our years of collaboration, it was always that way when we needed to go head-first into the equations; Joe inevitably left me in the dust, shaking my head in disbelief. That’s partly my weakness… I’m pretty average (for a physicist) when it comes to calculation. But a lot of it was Joe being so incredibly good at it.

Fortunately for me, the collaboration was still enjoyable, because I was almost always able to keep pace with Joe on the conceptual issues, sometimes running ahead of him. Among my favorite memories as a scientist are moments when I taught Joe something he didn’t know; he’d be silent for a few seconds, nodding rapidly, with an intent look — his eyes narrow and his mouth slightly open — as he absorbed the point.  “Uh-huh… uh-huh…”, he’d say.

But another side of Joe came out in our second paper. As we stood chatting in the KITP hallway, before we’d even decided exactly which question we were going to work on, Joe suddenly guessed the answer! And I couldn’t get him to explain which problem he’d solved, much less the solution, for several days!! It was quite disorienting.

This was another classic feature of Joe. Often he knew he’d found the answer to a puzzle (and he was almost always right), but he couldn’t say anything comprehensible about it until he’d had a few days to think and to turn his ideas into equations. During our collaboration, this happened several times. (I never said “Use your words, Joe…”, but perhaps I should have.) Somehow his mind was working in places that language doesn’t go, in ways that none of us outside his brain will ever understand. In him, there was something of an oracle.

Looking Toward The Horizon

Our interests gradually diverged after 2006; I focused on the Large Hadron Collider [also known as the Large D-brane Collider], while Joe, after some other explorations, ended up thinking about black hole horizons and the information paradox. But I enjoyed his work from afar, especially when, in 2012, Joe and three colleagues (Ahmed Almheiri, Don Marolf, and James Sully) blew apart the idea of black hole complementarity, widely hoped to be the solution to the paradox. [I explained this subject here, and also mentioned a talk Joe gave about it here.]  The wreckage is still smoldering, and the paradox remains.

Then Joe fell ill, and we began to lose him, at far too young an age.  One of his last gifts to us was his memoirs, which taught each of us something about him that we didn’t know.  Finally, on Friday last, he crossed the horizon of no return.  If there’s no firewall there, he knows it now.

What, we may already wonder, will Joe’s scientific legacy be, decades from now?  It’s difficult to foresee how a theorist’s work will be viewed a century hence; science changes in unexpected ways, and what seems unimportant now may become central in future… as was the path for D-branes themselves in the course of the 1990s.  For those of us working today, D-branes in string theory are clearly Joe’s most important discovery — though his contributions to our understanding of black holes, cosmic strings, and aspects of field theory aren’t soon, if ever, to be forgotten.  But who knows? By the year 2100, string theory may be the accepted theory of quantum gravity, or it may just be a little-known tool for the study of quantum fields.

Yet even if the latter were to be string theory’s fate, I still suspect it will be D-branes that Joe is remembered for. Because — as I’ve tried to make clear — they’re real.  Really real.  There’s one in every proton, one in every neutron. Our bodies contain them by the billion billion billions. For that insight, that elemental contribution to human knowledge, our descendants can blame Joseph Polchinski.

Thanks for everything, Joe.  We’ll miss you terribly.  You so often taught us new ways to look at the world — and even at ourselves.


 

by Matt Strassler at February 05, 2018 03:59 PM

January 29, 2018

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at January 29, 2018 11:49 AM

January 25, 2018

Alexey Petrov - Symmetry factor

Rapid-response (non-linear) teaching: report

Some of you might remember my previous post about non-linear teaching, where I described a new teaching strategy that I came up with and was about to implement in teaching my undergraduate Classical Mechanics I class. Here I want to report on the outcomes of this experiment and share some of my impressions on teaching.

Course description

Our Classical Mechanics class is a gateway class for our physics majors. It is the first class they take after they are done with general physics lectures. So the students are already familiar with the (simpler version of the) material they are going to be taught. The goal of this class is to start molding physicists out of physics students. It is a rather small class (max allowed enrollment is 20 students; I had 22 in my class), which makes professor-student interaction rather easy.

Rapid-response (non-linear) teaching: generalities

To motivate the method that I proposed, I looked at some studies in experimental psychology, in particular in memory and learning studies. What I was curious about is how much is currently known about the process of learning and what suggestions I can take from the psychologists who know something about the way our brain works in retaining the knowledge we receive.

As it turns out, there are some studies on this subject (I have references, if you are interested). The earliest ones go back to the 1880s, when German psychologist Hermann Ebbinghaus hypothesized how our brain retains information over time. The “forgetting curve” that he introduced gives an approximate representation of information retention as a function of time. His studies have been replicated with similar conclusions in recent experiments.

The upshot of these studies is that loss of learned information is pretty much exponential; as can be seen from the figure on the left, in about a day we only retain about 40% of what we learned.
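As a rough numerical sketch of such a curve (the purely exponential form and the time constant are simplifying assumptions, tuned only to reproduce the roughly-40%-after-one-day figure above):

import math

S = 1.0 / math.log(1 / 0.4)  # "stability" of about 1.09 days, so retention after 1 day is ~40%

def retention(t_days, stability=S):
    # assumed exponential forgetting curve R(t) = exp(-t / S)
    return math.exp(-t_days / stability)

for t in (0.25, 1, 2, 7):
    print(f"after {t:5.2f} day(s): ~{100 * retention(t):4.1f}% of the material retained")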

Psychologists also learned that one of the ways to overcome the loss of information is to (meaningfully) retrieve it: this is how learning  happens. Retrieval is critical for robust, durable, and long-term learning. It appears that every time we retrieve learned information, it becomes more accessible in the future. It is, however, important how we retrieve that stored information: simple re-reading of notes or looking through the examples will not be as effective as re-working the lecture material. It is also important how often we retrieve the stored info.

So, here is what I decided to change in the way I teach my class in light of the above-mentioned information (no pun intended).

Rapid-response (non-linear) teaching: details

To counter the single-day information loss, I changed the way homework is assigned: instead of assigning homework sets with 3-4-5 problems per week, I introduced two types of homework assignments: short homeworks and projects.

Short homework assignments are single-problem assignments given after each class that must be done by the next class. They are designed such that a student needs to re-derive material that was discussed previously in class (with a small new twist added). For example, if the block-on-an-incline problem was discussed in class, the short assignment asks the student to redo the problem with a different choice of coordinate axes. This way, instead of doing an assignment at the last minute at the end of the week, the students are forced to work out what they just learned in class every day (meaningful retrieval)!

The second type, project homework assignments, is designed to develop an understanding of how topics in a given chapter relate to each other. There are as many project assignments as there are chapters. Students get two weeks to complete them.

In the end, the students get to solve approximately the same number of problems over the course of the semester.

For a professor, the introduction of short homework assignments changes the way class material is presented. Depending on how students performed on the previous short homework, I adjusted the material (both speed and volume) that we discussed in class. I also designed examples for the upcoming sections in such a way that I could repeat parts of a topic that had posed difficulties in comprehension. Overall, instead of the usual "linear" progression of the course, we moved along something akin to helical motion, returning to and spending more time on topics that students found more difficult (hence "rapid-response" or "non-linear" teaching).

Other things were easy to introduce: for instance, using the Socratic method when working through examples. The lecture itself was an open discussion between the professor and the students.

Outcomes

So, I implemented this method in teaching the Classical Mechanics I class in the Fall 2017 semester. It was not an easy exercise, mostly because it was the first time I was teaching this class and I had no grader help. I would say the results confirmed my expectations: the introduction of short homework assignments helps students perform better on the exams. Now, my statistics are still limited: I only had 20 students in my class. Also, among the students there were several who decided to either largely ignore the short homework assignments or do them irregularly; they were given zero points for each missed short assignment. All students generally did well on their project assignments, yet there appears to be some correlation (see graph above) between the total number of points acquired on short homework assignments and exam performance (measured by the total score on the Final and two midterms). This makes me think that the short assignments were beneficial for students. I plan to teach this course again next year, which will increase my statistics.
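For what it is worth, the comparison behind that graph boils down to correlating two per-student totals. A minimal sketch with made-up numbers (not the actual class data) looks like this:

import numpy as np

# hypothetical per-student totals, for illustration only
homework_points = np.array([38, 12, 45, 30, 0, 41, 27, 35, 19, 44])
exam_total      = np.array([82, 55, 90, 74, 48, 85, 70, 78, 60, 88])

r = np.corrcoef(homework_points, exam_total)[0, 1]
print(f"Pearson correlation between short-homework points and exam total: r = {r:.2f}")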

I was quite surprised that my students generally liked this way of teaching. In fact, they were disappointed that I decided not to apply this method for the Mechanics II class that I am teaching this semester. They also found that problems assigned in projects were considerably harder than the problems from the short assignments (this is how it was supposed to be).

For me, this was not an easy semester. I had to develop my set of lectures — so big thanks go to my colleagues Joern Putschke and Rob Harr who made their notes available. I spent a lot of time preparing this course, which, I think, affected my research outcome last semester. Yet, most difficulties were mainly Wayne State-specific: Wayne State does not provide TAs for small classes, so I had not only to design all homework assignments, but also to grade them (on top of developing the lectures from the ground up). During the semester it was important to grade the short assignments the same day I received them in order to re-tune the lectures, and this took a lot of my time. I would say TAs would certainly help to run this course — so I'll be applying for some internal WSU educational grants to continue developing this method. I plan to employ it again next year to teach Classical Mechanics.


by apetrov at January 25, 2018 08:18 PM